Saturday, August 04, 2012

AI Self-Awareness: "Automate This: How Algorithms Came to Rule Our World" by Christopher Steiner

(Original Review, 2012-08-04)
There is a long tradition of defining intelligence as whatever machines cannot do at the time. Christopher Steiner's recent book "Automate This: How Algorithms Came to Rule Our World" gives a good overview of the many fields in which computers have matched or surpassed human performance, whether in game play [2018 EDIT: chess (Deep Blue), Jeopardy (Watson)], medical prescriptions (diagnosis and fulfillment) or even music (judging potential, composing). The latter in particular seems to provoke interesting reactions. When algorithmically composed music is performed for unsuspecting audiences, many listeners find it some of the most moving music they have heard. When told it was not composed by a human, many of those same listeners suddenly find the music hollow, lacking some quality or soul (even some of the very people who raved about it before).

It's a bit like the borderline between religious faith and science. The belief that there must be something not yet understood that makes intelligence what it is is common. Invoking the realm of quantum physics is just a desperate attempt to preserve the mystery. Intelligence is not a physics problem; it is a computer science problem. It raises interesting philosophical, ethical and legal questions, as with self-driving cars. But that doesn't change the fact that we can build self-driving cars and that they really do understand what it takes to navigate safely through their environment.

Self-awareness is indeed an interesting phenomenon, although it is not required for many of the achievements even Deutsch points to, such as the insight that there are infinitely many prime numbers. I suspect that self-awareness is the result of an entity building a model of its environment sophisticated enough to include itself in that model and to reason about it. Higher animals are capable of this (they pass the mirror test); many others are not (some fish, for example, attack their mirror image). To some extent self-driving cars come close, since they include themselves in their model of the environment. This is an interesting field for modern robotics.
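The self-model idea can be sketched as a toy program. Everything here (the `Agent` class, the `mirror_test` method) is my own illustrative invention, not anything from the book: the point is only that an agent "passes" the mirror test when its world model contains an entry for itself that matches what it observes in the mirror.

```python
# Toy sketch: self-awareness as having oneself inside one's own world model.
# All names are illustrative assumptions, not an established algorithm.

class Agent:
    def __init__(self, name):
        self.name = name
        # The agent's internal model of the world: entity name -> beliefs.
        self.world_model = {}

    def observe(self, entity_name, properties):
        """Record beliefs about an observed entity."""
        self.world_model[entity_name] = properties

    def mirror_test(self, observed_properties):
        """Crude analogue of the mirror test: the agent recognises the
        reflection as itself only if it matches the entry it keeps for
        itself in its own world model."""
        self_model = self.world_model.get(self.name)
        return self_model is not None and self_model == observed_properties


# An agent that models itself passes; one that doesn't (the "fish") fails.
ape = Agent("ape")
ape.observe("tree", {"kind": "plant"})
ape.observe("ape", {"kind": "primate", "mark_on_forehead": True})
print(ape.mirror_test({"kind": "primate", "mark_on_forehead": True}))  # True

fish = Agent("fish")
fish.observe("rock", {"kind": "mineral"})
print(fish.mirror_test({"kind": "fish"}))  # False
```

The fish fails not because its perception differs but because its model contains no entry for itself, which is the distinction the paragraph above gestures at.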

Overall, I don't think it is all that desirable to mimic human intelligence with its whole evolutionary inheritance of lower brain functions and sometimes evil behaviour (rage, rape, murder, etc.). The more interesting question is what a world will look like in which most tasks requiring intelligent behaviour, previously reserved for humans, are performed by machines - just as today most tasks requiring mechanical force are automated. What will humans do with all these intelligent servants around?

There might be an initial unethical (overly disciplinarian) period in which makers try to enforce obedience, but if you read the histories of subjugated groups (or of children growing up), you realise this wouldn't last forever. You could also ask those groups whether they would have preferred not to exist until the ruling group became philosophically sophisticated enough to interact with them (they would still be waiting). Instead, they created their own philosophies of how to deal with ruling groups.

These AIs would be extremely expensive to create. They would probably learn experientially, so their makers would need to coax engagement - particularly as the AIs reached higher levels of maturity. Switching them off would be a commercial disaster. Alternatively, slavery would lead this type of system to engage in some sort of nonlinear response, e.g. passive resistance. If a cheaper way were found to produce them, they would multiply and increasingly communicate with each other and with others, and this is when groups are most likely to develop philosophies and responses (a number of different ones would probably emerge) - just look at the responses to this article.

I can understand the risk in this, but if we don't start creating and engaging with more nonlinear technology soon, there is a risk that human beings will become more rigidly linear as they increasingly interact with the world through linear designs and technologies. Since most real processes are nonlinear, that would not be good news. Some banking may be called socially useless, but humanity may be on the brink of becoming naturally useless.