(Original Review, 2010-10-30)
Is the assumption that brains are "just magic" - unlike kidneys or spleens or bones - correct? This elevation of "consciousness" to an almost dualistic status is irritating beyond belief, and seems to stem (pardon the pun) from the fact that brains are hellishly complicated and difficult to measure (difficult, but becoming easier).
Philosophers have proven USELESS at answering questions, but particularly useFUL at asking the wrong ones. We never did get a straight answer as to how many angels could dance on the point of a needle (or head of a pin, depending on your source; it matters not). If I have learnt anything from my experience as a scientist, it is that sometimes, if you ask a stupid question, you get a stupid answer, and so continuing to ask the stupid question in the hope that the answer will become sensible is actually not very bright. "What is it like to be a bat?" Hmm, not sure. What's it like to be another human being? Since our brains - and more to the point our entire nervous systems - wire themselves uniquely, it would be hard to tell. This is the scientific equivalent of Bilbo's challenge to Gollum in The Hobbit: "What have I got in my pocket?" It's a stupid question, no matter how interesting the answer might be. Actually, there are some blind humans who have learnt to echo-locate, so the bat question is not even entirely out of reach. [2018 EDIT: And when, inevitably, technology brings us "Google Sonic Glasses" that connect directly to the brain, we can partly answer the question.]
Our brains are built to simulate an approximation of the world, because being able to predict the world makes our survival more likely. It stands to reason that if we have a visual sense to detect objects, then part of that simulation will be what we refer to as sight, and if it updates in near real-time then it will immediately become "an experience". Add to that mix the multiple streams of information being centrally routed, and an algorithm to pick the important ones to respond to - thus leading to an ever-shifting spotlight of attention - and we understand broadly why we experience what we do, and how. "The Hard Problem" is just another name for dualism or animism or vitalism, or what I scathingly refer to as "Magic Pixies": a desire to make humans supernatural, rather than see us as what we are: complex, adaptive, resourceful.
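The "spotlight of attention" picture above can be caricatured in a few lines of code: several sensory streams each report a salience value, and a selector routes only the most urgent one to the response system. This is a toy sketch for illustration only, not a model of any real neural mechanism - the stream labels and salience numbers are all invented.

```python
# Toy sketch of a salience-driven "spotlight of attention".
# Each sensory stream reports (label, salience); the selector
# attends to whichever stream is currently the most salient.

def attend(streams):
    """Return the (label, salience) pair with the highest salience."""
    return max(streams, key=lambda s: s[1])

# Three invented sensory streams arriving on one tick:
tick = [("vision: moving shape", 0.4),
        ("hearing: loud bang", 0.9),
        ("touch: hot saucepan", 0.7)]

winner = attend(tick)
print(winner[0])  # the loud bang wins the spotlight this tick
```

Run tick after tick with changing salience values and the "spotlight" shifts constantly, which is all the paragraph above is claiming: selection plus routing, no Magic Pixies required.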
There are good evolutionary reasons why sensation would be referred to a point, a locus of interaction with the world. There are good reasons for extrapolating behaviour into the future, rather than simply reacting to sensation. I would not be surprised if the interaction of sensation and extrapolation, memory, reflex and learning coalesced in a sense of self: it is important to recognise the difference between self and non-self, and we know that the distinction can be impaired in illness and in illusions. There isn't one hard problem; there is consciousness emerging from individually soluble neurophysiological problems.
I suspect the question of why we're not "just brilliant robots, capable of retaining information, of responding to noises and smells and hot saucepans, but dark inside, lacking an inner life?" should be turned around. Man-made computers are becoming more sophisticated all the time, and it is probably only a matter of time before computers/robots can think and feel like us, or, indeed, in ways vastly superior to us. This theme is already completely out there (and has been for decades) in the world of science fiction.
We are clearly still a long way from answering all the "easy questions" (a few of which are cited in Chalmers's book) that are pertinent to the human brain, and I don't know how hard it would be to make a computer that modelled the thought processes of a human brain (perhaps partly because current computers use basic mechanisms such as logic gates that have somewhat different physical properties to those of neurons and synapses etc.). However, if these two things could be done (and they can both be classed as "easy questions" in the terms of Chalmers's take), we would, I am sure, have made a conscious machine resembling a human brain, and the so-called "hard question" of the basis of consciousness would simply disappear. Artificial consciousness simply depends on a level of complexity which man-made computers have yet to reach. Consciousness is surely dependent on biological entities for its origin, but not necessarily for its continuation. I know this Singularity stuff is quite hip, and popular in some circles, but it strikes me as complete nonsense. Computers don't feel. Current "AI" can accomplish some tasks that are really easy for humans, but it doesn't do them in the same way as us, and even where it "learns" it simply runs a series of calculations. Even if future AI could seem to us to be conscious, it would still be a simulacrum, just a really good one. It won't be alive and it won't be self-aware. I think the whole concept of The Singularity is based upon the premise that sufficiently complex technology is indistinguishable from magic to most people. But that is a failure of individuals to grasp its complexity, not that there is really "magic" going on…
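The aside about logic gates having "somewhat different physical properties" to neurons can be made concrete. A gate is stateless: its output depends only on its current inputs. Even the crudest textbook neuron model, by contrast, has state - it integrates input over time, leaks, and fires only when a threshold is crossed. The sketch below contrasts the two; it is a deliberately crude leaky integrate-and-fire caricature with invented constants, not a claim about how real neurons work.

```python
# A logic gate is stateless: output depends only on current inputs.
def and_gate(a, b):
    return a and b

# A very crude "leaky integrate-and-fire" neuron has state: it
# accumulates input over successive steps, leaks a fraction of its
# potential each step, and spikes (then resets) at a threshold.
class LeakyNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak  # fraction of potential retained per step

    def step(self, current):
        self.potential = self.potential * self.leak + current
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after the spike
            return True           # spike!
        return False

n = LeakyNeuron()
# Feed the same sub-threshold input every step: the gate would do
# the same thing every time, but the neuron spikes periodically.
spikes = [n.step(0.3) for _ in range(10)]
```

The same constant input produces a time-varying output from the neuron but would produce a constant output from the gate - which is all the parenthetical remark above amounts to. None of this bears on whether a gate-based machine could in principle model a brain; it only shows the building blocks behave differently.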