"There were infinite lights, the luminous walls and ceilings that seemed to drip cool, even phosphorescence; the flashing advertisements screaming for attention; the harsh, steady gleam of the 'lightworms' that directed:
THIS WAY TO JERSEY SECTIONS, FOLLOW ARROWS TO EAST RIVER SHUTTLE, UPPER LEVEL FOR ALL WAYS TO LONG ISLAND SECTIONS.
Most of all, there was the noise that was inseparable from life. The sound of millions talking, laughing, coughing, calling, humming, breathing."
From "The Caves of Steel" by Isaac Asimov
Set 2,000 years in the future, "The Caves of Steel" shows us contrasting pictures of Earth and the Outer Worlds - colonized planets throughout the Galaxy. Although the inhabitants of the Outer Worlds trace their origins to Earth, they are separated from it by much more than mere distance, now calling themselves Spacers and ruling the decaying mother planet as benevolent despots. Recreating human speech on the page is a problem every novelist faces, and in his earlier novels Asimov had mastered the translation of speech into its written equivalent; credible robotic speech is a much less common challenge, and in "The Caves of Steel" he developed a form of dialogue for Daneel that is completely believable. Daneel's speech, while possessing the rather formal lilt one might expect from a machine, also has a gentle, tempered quality that allows him to pass for human - though I was always conscious of a slight mechanical flavour as well.
No Zeroth Law yet here...it'd have allowed some interesting variations. In "Robots and Empire", Asimov's robots do indeed find a cunning way around the Three Laws: they invent a Zeroth Law, which states that "no robot can injure humanity or, through inaction, allow humanity to come to harm". This doesn't directly contradict the First Law, so their brains will accept it, but it has the interesting effect, in moral-philosophical terms, of turning them from Kantians into utilitarians. So rather than being guided by an absolute "thou shalt not kill" imperative, they become able to kill or harm humans if and only if they have calculated it's for the greater good. Rather than becoming brutal overlords because of this (the other laws still apply), they end up quietly guiding the development of humanity from the shadows, taking on a role not a billion miles from Banks's AIs. As I say, it'd been a billion years since I last read Asimov, but I had a hell of a blast re-reading this first volume in the Robot series.
I always thought Asimov's setup with the Three Laws of Robotics had a bit of a problem when it came to defining 'injure'. Is psychological damage also injury? Tell me lies, tell me sweet little lies - but don't tell me the truth if my feelings are going to be hurt. The ignorance and avoidance of truth cause a lot of harm in this world, and Asimov's laws clearly wouldn't cope with that. You would need to resolve the inherent conflict in the First Law, and it strikes me that's where you have to make decisions about relative good (e.g., five lives are better than one). But then you have to weigh other factors (are children 'better' than old people?), which becomes subjective. And that's in a simple situation where the "knowns" are all on the table, never mind the unknown consequences.
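That conflict can be made concrete with a toy sketch. Nothing here is Asimov's positronic maths - the actions, harm scores, and both rule readings are invented purely to show how a Kantian First Law deadlocks exactly where a utilitarian Zeroth Law picks a lesser evil:

```python
# Hypothetical illustration only: harm scores and rule readings are invented.

OPTIONS = [
    {"name": "divert", "harms": [1]},        # harm one person
    {"name": "do nothing", "harms": [1] * 5},  # inaction harms five
]

def first_law_permits(action):
    """Kantian reading: any harm to any human forbids the action outright."""
    return all(h == 0 for h in action["harms"])

def zeroth_law_permits(action):
    """Utilitarian reading: permitted iff total harm is the minimum available."""
    return sum(action["harms"]) <= min(sum(a["harms"]) for a in OPTIONS)

# Under the First Law alone, neither option is permissible -
# the robot's 'heap of motionless metal' moment:
assert not any(first_law_permits(a) for a in OPTIONS)

# A Zeroth-Law robot instead chooses the lesser total harm:
chosen = [a["name"] for a in OPTIONS if zeroth_law_permits(a)]
print(chosen)  # ['divert']
```

Of course, the sketch just relocates the problem into those harm numbers - which is precisely the subjectivity objection above.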
How can we give robots morals? What is our best guide to morality in practical affairs? Cicero's "De Officiis", surely. Throw in his "Academica", "De Finibus" and "De Natura Deorum", and the robots might have a better sense of what it is to be human, and what it means to be a good person, absent life after death. These are ideas that have stuck fast in the history of European literature and philosophy, and I reckon Cicero's practical style of philosophy is a better guide to acting morally than any work of fiction. But the whole point of AI, surely, is to create an intelligence which surpasses human capabilities. What could ethics, applied or otherwise, possibly mean at that level of cognition...? AI is meant to make inroads into the 'paradoxes' of philosophy - paradoxes which we 'resolve' in practical affairs with the virtue of prudence, or practical wisdom. Asimov's robot collapses into a heap of motionless metal when confronted with such paradoxes, but it seems to me that AI might, at some point, be capable of dealing with them. The big question is how...? Would we be willing to cede moral judgments to a non-human intelligence if it could not adequately convey its 'prudence' to us in our own language?
Obviously, we enter into the realm of speculation here. But I think it behooves us to speculate...
Bottom-line: one of Asimov's best novels. I'd be content with politicians having some morals too, actually. It's not the robots we have to worry about...I'd also add that rather than teach robots to read literature so that they can become more human, we should teach literature students to read texts as featuring not 'ethical dilemmas' but concurrency or race hazard problems, so that they can become less robotic when they in turn become pedagogues...It's important, though, that those Sex Robots coming off Japanese production lines are kept well away from the feminist stuff, I would have thought. I suppose Fighter Robots might be programmed with only war stories; obviously stuff about muskets and cannon balls would need to be excluded from the reading lists as well. What would happen if Daneel started reading Enid Blyton? I think it'd just encourage him to wander about all day trying to solve mysteries, being beastly to travellers, and having high tea and picnics with lashings of strawberry jam (which probably wouldn't be good for him).
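The 'race hazard' quip is worth unpacking for any pedagogues in the audience: read a tragedy's plot as two agents doing unsynchronised read-modify-write on shared state, and the catastrophe is a lost update. A deterministic toy sketch - the "letters" state and both agents are entirely my invention, not anything in Asimov:

```python
# Invented example: a plot as a lost-update race hazard.
shared = {"letters_delivered": 0}

def read(state):
    return state["letters_delivered"]

def write(state, value):
    state["letters_delivered"] = value

# Two messengers each try to record one delivery, but both read
# the count before either writes back - the classic interleaving:
a = read(shared)      # agent A reads 0
b = read(shared)      # agent B also reads 0
write(shared, a + 1)  # A writes 1
write(shared, b + 1)  # B overwrites with 1 - A's delivery is lost

print(shared["letters_delivered"])  # 1, not the expected 2
```

Half the misdelivered letters in literature have exactly this shape, which is rather the point of the joke.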