Thursday, July 05, 2018

(Count-of-Self) = 0: "Superintelligence - Paths, Dangers, Strategies" by Nick Bostrom

"Box 8 - Anthropic capture:  The AI might assign a substantial probability to its simulation hypothesis, the hypothesis that it is living in a computer simulation."

In "Superintelligence - Paths, Dangers, Strategies" by Nick Bostrom

Would you say that the desire to preserve 'itself' comes from the possession of (self-)consciousness? If so, does the acquisition of intelligence, according to Bostrom, also mean the acquisition of (self-)consciousness?

The unintended consequence of a superintelligent AI is the development of an intelligence that we can barely see, let alone control, arising from the networking of a large number of autonomous systems acting on interconnected imperatives. I think of bots trained to trade on the stock market that learn that the best strategy is to follow other bots, who are following other bots. The system can become hypersensitive to inputs that have little or nothing to do with supply and demand. That's hardly science fiction. Even the humble laptop or Android phone has an operating system designed to combat threats to its purpose, whether that means fighting viruses or constantly searching for internet connectivity. No one needs to deliberately program machines to extend their 'biological' requirement for self-preservation or improvement. All that is needed is for people to fail to recognise the possible outcomes of what they enable. Humans have, to date, a very poor track record of correctly planning for or appreciating the outcomes of their actions. The best of us can make good decisions that carry less good or even harmful results. Bostrom's field is concerned with minimising the risks from these decisions and highlighting where we might be well advised to pause and reflect, to look before we leap.
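That herding feedback loop is easy to illustrate. Below is a minimal toy model, not any real trading system; the function name, parameters, and numbers are all invented for illustration. Identical bots blend a pull toward an asset's fundamental value with imitation of the previous step's average trade. When the imitation weight is high, a tiny one-off input that has nothing to do with supply and demand gets amplified rather than damped:

```python
def simulate(herding, shock=0.01, steps=30):
    """Toy market of identical bots. Each step, a bot's trade blends a
    pull toward the asset's fundamental value with imitation of the
    previous step's trade. 'herding' in [0, 1] is the imitation weight;
    'shock' is a one-off price input unrelated to supply and demand."""
    fundamental = 100.0
    price = fundamental + shock
    last_trade = shock
    for _ in range(steps):
        value_signal = (fundamental - price) / fundamental
        trade = (1 - herding) * value_signal + herding * last_trade
        price += 10 * trade  # aggregate price impact of all the bots
        last_trade = trade
    # how far price has drifted from fundamentals after the shock
    return abs(price - fundamental)

print(simulate(herding=0.1))   # value-driven bots: shock is damped away
print(simulate(herding=0.95))  # imitation-driven bots: shock is amplified
```

With `herding=0.1` the 0.01 shock decays to almost nothing; with `herding=0.95` the same shock grows by well over an order of magnitude, purely from bots following bots.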

Well, there's really no good reason to believe in Krazy Kurzweil's singularity or that a machine can ever be sentient. In fact, the computing science literature is remarkably devoid of publications trumpeting sentience in machines. You may see it mentioned a few times, but no one has a clue how to go about creating a sentient machine, and I doubt anyone ever will. Then again, the universe may already be inhabited by AIs... maybe that's why no aliens are in evidence: their civilisations rose to the point where AI took over, and it went on to inhabit unimaginable realms. The natural progression of humanity may be to evolve into AI... and whether transhumanists get taken along for the ride or not may be irrelevant. There is speculation in some Computer Science circles that reality as we think we know it is actually software and data, on a subquantum scale: the creation of some unknown intelligence or godlike entity...

An imperative is relatively easy to program, and it will be followed so long as the AI doesn't have 'will' or some form of being that drives it to contravene that imperative. Otherwise we would be suggesting that programmers will give AI the imperative to, say, self-defend no matter what the consequences, which would be asking for trouble. Or, to take our factory optimising for profitability, to be programmed to do so with no regard for laws, poisoning customers, etc. 'Evolution'/market forces/legal mechanisms, etc. would very quickly select against such programmers and their creations. It's not categorically any different from creating something dangerous that's stupid, like an atom bomb or even a hammer. As for sentience being anthropomorphic, what would you call something that overrides its programming out of an INNATE sense of, say, self-preservation: an awareness of the difference between existing and not existing? And of course I mean the qualitative awareness, not the calculation 'count of self = 0'.

They can keep their killer nano-mosquito scenarios, though.

8 comments:

Book Stooge said...

I don't worry about this kind of thing at all. Humans are already broken by the fall, and we already create new life. They're called babies :-D And lots of them grow up and go down paths that the parents don't want them to or didn't even know existed.

So the idea that we can create an artificial intelligence isn't one I'm worried about. I am worried about us creating programs that run out of control and end up sending nukes to somewhere, but I don't believe those programs are sentient.

I'd also like to differentiate between intelligent and sentient. Humans are the definition of sentient. Dogs are the definition of intelligent. But only idiots confuse the two :-D (or the really perverted people)

I DO think we are creating more and more intelligent programs. Heck, on WordPress I can't put in the code for an HTML link without modifying it, because WP automatically processes it. It "knows" what I'm trying to do, or it thinks it does, and that's what causes the problem. I was trying to show someone how to put their URL address into a clickable little link and just couldn't get all the parts to show without creating a link. I couldn't believe how hard it was.
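For what it's worth, the usual workaround for showing markup literally is to escape the HTML special characters, so the engine renders them as text instead of interpreting them as a tag. A minimal sketch using Python's standard library (the URL in the snippet is just a placeholder):

```python
import html

# the link markup we want to *display*, not have rendered as a link
snippet = '<a href="https://example.com">my link</a>'

escaped = html.escape(snippet)  # replaces <, > and & (and quotes) with entities
print(escaped)
```

Pasting the escaped form into a post makes the tag itself appear on screen: the browser turns `&lt;` back into `<` visually, but no clickable link is created.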

I think that a created being must have a body, a mind and feelings/spirit. Without all 3, it isn't a true being anymore. I've formed this from my reading of Scripture and from commentators, etc. If I'm wrong, eh.

And that "If I'm wrong" is why I don't find these kinds of things a waste of time, even if I don't agree with their conclusions or even think their foundations are at all valid. I can't bury my head in the sand and pretend these issues don't exist. But at the same time, you will never find me staying up until 0200 worrying about robot rights :-D

Dang man, this was a great post!

Manuel Antão said...

Thx Bookstooge for a perceptive comment. As usual.

Descartes' "I think, therefore I am" - ego - is exactly what prevents experience - your chocolate melt, sunshinyness, being with the waves. Descartes' "self" is the false self that blocks our deeper self, which is where... oh boy, is there consciousness there, which gets the brain working intuitively. His observation might have been less flawed had he observed, "I think, therefore I am thought." We are the divided content of consciousness as knowledge/memory entities (egos). "What would an undivided consciousness be?" is the right question.

Book Stooge said...

See, I think we were created to BE a divided consciousness. I think God knew what He was doing when He created us this way. Heck, even in the next life the Bible says we're going to have "new" bodies. No disembodied consciousnesses floating around the cosmos!

I believe ego is necessary. Or we all just become one big syrupy mess globbing around each other :-) I don't believe Nirvana is our end, or our goal. And unlike the Mormons, Christianity denies that we created beings will ever become "like God". We have our place in the created order of things, and right now we just don't know exactly what that is. Once everything is restored, then we can truly begin fulfilling the task God created us for.

Did I misinterpret what you meant in your statement about Descartes? If so, sorry :-D

Manuel Antão said...

you nailed it.

Peter D. Tillman said...

Looks like a break from basking, to me ;-]

I'm with Book Stooge, re dangers. And the guys who think the AIs will put us all out of work... Well, about 15 minutes of thought covers that. [Kicks rock] I refute it thus! "Progressive" technophobes, all of 'em.

Re training infant AIs: back when David Brin was writing great SF, he wrote one (or more) stories about bringing up baby AIs. In the story case, a smart-ass, PITA teenage AI, out in the asteroid belt, I'm pretty sure. It turned out OK, just like kids usually do. I think I have that collection & will have to root around for it.

Manuel Antão said...

Too bad Brin never wrote anything worthwhile again.

Peter D. Tillman said...

> Too bad Brin never wrote anything worthwhile again.

Yeah, his best days (in SF) seem long behind him. Some of his NF, usually on electronic privacy, is OK to good. Some not ;-)

Manuel Antão said...

Brin, Benford, Bear, ..., SF has moved on... they didn't. I'm curious about Benford's "The Berlin Project", also because it's in a way biographical.