Monday, August 13, 2018

GOFAI vs AML: "Common Sense, the Turing Test, and the Quest for Real AI" by Hector J. Levesque




“It is not true that we have only one life to live; if we can read, we can live as many more lives and as many kinds of lives as we wish.”

S. I. Hayakawa, quoted by Hector J. Levesque in "Common Sense, the Turing Test, and the Quest for Real AI"



The problem here is the frequent ambiguity of the English language, caused by its excessively simplistic grammar, made so by the collision between the Germanic and Romance languages that produced the English of today, essentially a creole construction. It would not arise in a language that is less mixed and more precise, e.g., German (my favourite language for rigorous thoughts and statements). Yet it should be easy enough to fix, by making the parser look up idiomatic expressions and test them against the context of the conversation. The devices of gender and declension, present in German, allow for quite precise associations.

How would the parser work in German? Imagine I want to use the following sentence: "I want to get a case for my camera; it should be strong." (Die Kamera, die Kameratasche; the same definite article for both nouns.) If I change the sentence to "I want to get a case for my camera; it should be protected," then the "it" in English refers to the camera instead of the case; can gender and declension help us in German? Well, for starters, I could associate "case" and "strong" by using the accusative for "case" in German, and the dative for "camera". So here, again, the declension gets me out of trouble. What about "Ich möchte eine Tasche für meine Kamera kaufen; sie soll stark sein."? The pronoun "sie" is the subject of the second clause, so how can declension help us at all? The solution would be to say "Ich möchte eine starke Tasche für meine Kamera kaufen." In this case, indeed, the declension doesn't help, because both nouns, Tasche and Kamera, are in the same case, the accusative. But you could drop "für" and use the dative for Kamera instead, "meiner Kamera", although this would not sound natural in present-day German. But this is the meaning of the dative: the "für" is implicit in the case. It would work with a human object, e.g., meine Frau. On the other hand, if we were to stick to "Tasche", my wife might want something more glamorous, like a Hermès Birkin handbag. Another solution would be not to use a pronoun at all: the case must be strong.

But that wasn't the point of Levesque's examples, which was that conversational English is hard for a computer to understand, and it is. People don't want to have to think about whether the computer will find their sentences ambiguous; we want computers to understand us as well as another human does.
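
To make this idea concrete, here is a toy sketch in Python (the dictionaries and function are my own, hypothetical, made up for illustration) of a resolver that keeps only the candidate antecedents whose grammatical gender matches the pronoun's, and gives up when, as with Tasche and Kamera, the genders coincide:

    # Toy pronoun resolver: gender agreement alone, nothing else.
    GENDER = {"Tasche": "f", "Kamera": "f", "Etui": "n"}   # noun genders
    PRONOUN_GENDER = {"sie": "f", "er": "m", "es": "n"}    # pronoun genders

    def resolve(pronoun, candidates):
        matches = [n for n in candidates
                   if GENDER[n] == PRONOUN_GENDER[pronoun]]
        return matches[0] if len(matches) == 1 else None   # None: still ambiguous

    print(resolve("es", ["Etui", "Kamera"]))     # -> 'Etui' (gender decides)
    print(resolve("sie", ["Tasche", "Kamera"]))  # -> None (both feminine)

A real parser would of course fold in case, idiom lookup, and context, as suggested above; the point is only that gender agreement prunes the candidate list before any world knowledge is needed.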

One issue that I have with the working definition of intelligence implied by using a Winograd schema to determine the level of artificial intelligence a system has attained is that we have a particular answer to the question posed that we consider correct. Take, for instance, the second example question posed above: "João comforted Manuel because he was so upset. Who was upset?" Naturally, the implied answer to this question is Manuel. We assume that if João is doing the comforting then clearly he is not the person who is upset, because if he were, Manuel would be doing the comforting. I based my take on this question on the most probable of the possible situations; it would be unlikely for João to comfort Manuel if João himself were the one who was upset, unless he derived some comfort from comforting someone else who, furthermore, didn't need it. That interpretation is not wrong, merely statistically improbable, and we (as humans) would probably say that it was wrong. However, I could see an artificially intelligent system arriving at both conclusions. So, according to the Winograd schema, to be intelligent you need a working knowledge of common social and physical situations, as well as a database of the outcomes that have resulted from them. The system then needs to be able to determine which possibility is most likely to occur based on the information it has. But if you use this as a test, we could find that an AI system consistently predicts the answer to the question and still is not intelligent; what's more, it could misunderstand the context of the situation the question is simulating. In addition, you may also find that a group of humans given this test would get some of the questions "wrong" as well. So then, what does that say about those humans?
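
For readers who haven't seen one laid out, here is a minimal sketch in Python (the data structure is my own invention; the sentence is Levesque's well-known trophy/suitcase example) of what makes these questions a matched pair: a single "special" word whose replacement flips the correct answer:

    from dataclasses import dataclass

    @dataclass
    class WinogradSchema:
        template: str      # sentence with a {w} slot for the special word
        candidates: tuple  # the two possible antecedents of the pronoun
        answers: dict      # special word -> correct antecedent

    trophy = WinogradSchema(
        template="The trophy doesn't fit in the brown suitcase "
                 "because it's too {w}.",
        candidates=("the trophy", "the suitcase"),
        answers={"big": "the trophy", "small": "the suitcase"},
    )

    for word, referent in trophy.answers.items():
        print(trophy.template.format(w=word), "->", referent)

The swap is the point of the design: both variants are equally natural sentences, so, at least by intention, the resolver cannot simply lean on which word pair occurs more often in a corpus.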

What if, instead, we gave an AI system a set of seemingly random objects and no instructions on what to do with them? If those "seemingly" random objects assembled into something recognizable, and the AI system could, without instruction, create that recognizable object, then I think one could state that the system was intelligent. For example, suppose you gave a robot with AI a set of Legos that assembled into the shape of a box. If the AI system could, without instruction, assemble that box, that would show some signs of intelligence. Alternatively, when a child is given a set of Legos, with or without instructions, they will often make things other than what is shown in the instructions. Thus an even more intelligent system could not only create the aforementioned box but might also exercise creativity and create something recognizable, with the parts provided, that was never intended to be created. In that case, I think the AI system would need to recognize the objects it creates and possibly describe what it made and why it made it.

Actually, all this is about depth. If you go deep enough, the computer will get to the end of its rule base, rather fast. One does not need to invoke Gödel and incompleteness! 10**9 rules can be traversed in just 9 questions, or far fewer, and humans know exactly what questions to ask. It's human magic. 10**12 rules, an impossibly large rule base to maintain, can be traversed in 12 questions or fewer, 10 branches at a time. If you invoke Gödel, then a human can go to infinite depth, leaving the computer far, far behind. Human intelligence is truly infinite, and computer intelligence is finite. That is an idea from Aristotle, and incompleteness is the formal statement of it.
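
The arithmetic behind those figures is just repeated narrowing; a minimal sketch in Python (names mine):

    def questions_needed(n_rules, branching=10):
        # How many questions, each narrowing the rule base `branching`
        # ways, does it take to single out one of n_rules rules?
        depth, reach = 0, 1
        while reach < n_rules:
            reach *= branching   # each question multiplies the reach tenfold
            depth += 1
        return depth

    print(questions_needed(10**9))    # -> 9
    print(questions_needed(10**12))   # -> 12

In other words, the depth grows only as the logarithm of the rule count, which is why a human asking the right questions exhausts any finite rule base so quickly.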

One of the problems with Jeopardy! is that I got bored with it years ago. It seemed like a contest that could be played by a computer (I thought so years ago), so not very interesting. What is interesting is this: a human might learn rules, perhaps 10**9 rules for navigating the world, but gets very bored with invoking the rules over and over, robotically. Once a human has truly learned a rule, he or she never wants to actually apply it. Boring. Humans want to transience (a spell-checker error that I will leave in to make the point) the rules, break out of the box, and do something different and creative. Always. So... let the computer perform those 10**9 rules so humans can go out much farther. Is the ability to perform a task a measure of intelligence? Let's suppose that an AI device can be trained to perform a task, like facial recognition, better than a human.

That performance is likely. A car can roll faster than a human can walk; most machines can outperform a human at what they are designed for. That is why the machines exist. But we don't call a fast car intelligent, even if it can drive itself around a race course faster than an untrained human, which it can. Intelligence is the ability to drive the car, write an advertisement, hire a new DJ for your nightclub, then make a sculpture out of ice, all within minutes. Show me a computer that has one chance in a million at that. And the human has not even warmed up. Next he or she will invent a new blender recipe, create a programming language, and find the connection between Aristotle and Kurt Gödel, all within a few hours. Where is the computer in all this? It's way back, by a factor so great we cannot even estimate it. Is the factor 10**(-6)? 10**(-9)? 10**(-12)? 10**(-15)? More important, is that factor getting smaller and smaller, the computer falling farther behind, as the human intellect is turned loose, in part by computers?

From (I think) Terry Winograd. Can you parse this? "Fruit flies like a banana." :-) Obviously, there are multiple interpretations of this sentence. What is the intended inference? Aside from programming in the options, can an AI determine what they are? I think that Winograd schemas may be a viable alternative to the Turing test, if they are able to verify that the system under test can use all the emerging rules: that is, if it can move from basic representations to gradually more abstract (and more flexible) ones, if it can find the regularities present in them, and if it can extract from these the rules that can be used to generate forecasts and to plan actions and behaviors.
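
A minimal sketch of that ambiguity, assuming the NLTK library is installed (the toy grammar is my own): a chart parser returns one tree per reading of the same five words:

    import nltk

    # "flies" can be a noun or a verb, and "like" a verb or a preposition,
    # which is exactly the ambiguity the joke trades on.
    grammar = nltk.CFG.fromstring("""
        S -> NP VP
        NP -> N | N N | Det N
        VP -> V NP | V PP
        PP -> P NP
        Det -> 'a'
        N -> 'fruit' | 'flies' | 'banana'
        V -> 'like' | 'flies'
        P -> 'like'
    """)

    parser = nltk.ChartParser(grammar)
    for tree in parser.parse("fruit flies like a banana".split()):
        print(tree)   # two parses: the insects' taste vs. the fruit's flight

An AI can enumerate the parses easily enough; deciding which one the speaker intended is the part that needs the world knowledge Levesque is after.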

Then what do we think? Can we believe in the definition of AI proposed by Levesque: "AI is the study of how to make computers behave the way they do in movies"?

Levesque's book is more concerned with the scientific aspects of AI than with its technology. That means we won't find any bright ideas on how to REALLY implement the Winograd schemas.

NB: I recommend also reading Turing's original paper before reading Levesque's book. It'll give you an idea of how Winograd schemas can improve the way we use the Turing test.

PS. GOFAI = Good Old-Fashioned AI; AML = Adaptive Machine Learning.

Friday, August 10, 2018

Sock Theory vs. String Theory: "When Einstein Walked with Gödel - Excursions to the Edge of Thought" by Jim Holt



My contribution to Holt's excursions to the edge of thought, in the form of an article of my own:

An unauthorised and short version of physics.


How did scientists first deduce that the universe had hidden dimensions, dimensions that are curled up so tight we can't see them? Until recently SOCK THEORY was the ruling paradigm. It was thought that Theodor Kaluza and Oskar Klein deduced the existence of at least one additional dimension from the well-known tendency of socks to disappear and then reappear in unlikely places. How else to explain the mysterious behaviour of hosiery? Latterly a new paradigm, STRING THEORY, has superseded sock theory. Leave a length of string, or anything long, thin and flexible, lying undisturbed for even a day and you will find it has somehow got itself tied into knots. This can only be explained if we assume at least one additional dimension. String theory also gave birth to QUANTUM FIELD THEORY. Richard Feynman found that if you stored several discrete pieces of string in a cupboard for an hour or so they would become inextricably entangled. Feynman realised that, given half a chance, everything would get ENTANGLED with everything else. String theory also gave rise to superstring theory, which in turn morphed into the theory of branes. If you've read this far, you're probably a P-BRANE.

Quantum theory is flawed and quantum proponents are in denial. String theory is in crisis (it has recently been described as dangerous nonsense). The Copenhagen interpretation is under attack (by recent experiments, even though this is not allowed by Copenhagen). Neutrinos don't actually exist (did your neutrino lose its flavour on the bedpost overnight?).

LOOP QUANTUM GRAVITY RULES! Determinism rules OK!


NB (not part of the article above): The references to one of Gödel's incompleteness theorems in Holt's first article suggest a slight misunderstanding of the meaning of Gödel's work. What Gödel showed was not that the axioms of mathematics must be taken on faith (this insight is much older and relatively harmless) but something more subtle. Gödel showed that in any reasonably powerful system of mathematics there must be perfectly legal statements which cannot be proven within the framework of that system, but will require additional axioms to plug the gap. And this more powerful system will in turn necessarily include legal statements that cannot be demonstrated to be true or false without resort to still more new axioms, and so on... In other words, no system of mathematics along the lines of the Principia of Whitehead and Russell can ever be self-contained. Yet another way of saying this is that the mathematical backbone of thought is a convention or a construct, not a pure, freestanding Platonic ideal. This was a startling insight, because prior to this discovery it had been assumed by all who were equipped to assume such things (e.g., Hilbert, Russell, etc.) that a proof of the completeness of mathematics would be positive, not negative. Mathematicians are Platonists in their souls; it is profoundly disturbing to find out that the universe is not Platonic. Turing's and Church's related insight (the discovery of well-formed problems which no computer program can solve) was even more unsettling and of far greater practical significance. (The strategy of reducing a new problem to the halting problem, and thereby demonstrating it to be unsolvable, is routine even for undergraduates, and applies to a universe of problems that come up frequently in practical applications, which Gödel's incompleteness does not.) Gödel's theorem is simply the formal-logic manifestation of the same drubbing that Einstein, Planck, Heisenberg, Turing, Darwin, Freud, Wittgenstein, Lyell, et al. gave in other fields to our formerly rather poetical understanding of the nature of knowing.
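
For anyone who hasn't seen that undergraduate staple, here is a minimal sketch in Python (the function names are mine, hypothetical) of the diagonal argument at its core:

    # Suppose a total decider halts(program, data) existed, returning True
    # exactly when program(data) halts. It can only be a stub here,
    # because Turing showed no such function can be written:
    def halts(program, data):
        raise NotImplementedError("no total halting decider exists")

    # Build a program that does the opposite of whatever the decider
    # predicts about it:
    def paradox(program):
        if halts(program, program):   # decider claims it halts...
            while True:               # ...so loop forever
                pass
        return                        # decider claims it loops, so halt

    # Consider paradox(paradox): if halts() answered True, paradox loops;
    # if it answered False, paradox halts. Either answer is wrong, so no
    # total halts() can exist.

Reducing a new problem to this one, so that solving it would let you build halts(), is the routine unsolvability strategy mentioned above.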

Gödel's incompleteness proof showed that the axioms, formulated in the artificial language of Peano arithmetic (the five Peano axioms, that is), could not be reduced to logic. They required supplementing with other branches of mathematics, such as set theory. Effectively, the requirements of completeness and consistency in any such logical system could not both be met, hence the need to supplement logic with other constructs.

Wednesday, August 08, 2018

Pro- or Anti-Wittgensteinian SF: “Rogue Protocol” by Martha Wells



“I signaled Miki I would be withdrawing for one minute. I needed to have an emotion in private.”

In “Rogue Protocol” by Martha Wells


Is the SecUnit Pro- or Anti-Wittgenstein?

Wittgenstein is often cited by believers in the possibility of more or less "sentient" AI. He is cited because he seems to re-cast our understanding of what we mean when we talk about our own sentience, and by making ourselves seem more machine-like we can make machines seem more human-like. But I think this is a misapplication of Wittgenstein's thought. Wittgenstein objected to the "picture picture", i.e., to the way that we represent ourselves as having representations inside our heads, the way that we picture ourselves as viewers of an endless cinematic reel of internal pictures of the external world. According to Wittgenstein we don't need these internal representations, as our "view" of the world is located in the world, which is actively present to our senses when we interact with it. Nor do we need representations inside our heads of the "rules" which govern these interactions. When we "follow a rule" we don't necessarily have a representation in our minds of what that rule "means"; rather, the "meaning" of the rule is demonstrated in its practice. We can say that we have "understood" a rule when our performance accords with it, and yet we may not be able to give an algorithmic description of what we have successfully performed.

Modern machine learning is more Wittgensteinian in this sense than older rule-based attempts to create AI (think of Prolog). We have developed pattern-recognition systems which respond to statistical features of data, without requiring explicit descriptions of the data features, and without sets of programmatic instructions telling the machine how to group or analyse or otherwise process component features into identifiable categories. These machines are not merely rule-following; at least, any "rules" which they do follow were not pre-given programmatically, nor are they easily inspectable, even by the system's programmers.
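
A minimal sketch of that contrast, assuming scikit-learn and NumPy are installed (the two-cluster data is synthetic, invented for illustration): the classifier below learns to separate the clusters from examples alone, and the "rule" it acquires ends up as raw weight matrices rather than anything a programmer wrote or could easily read:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Two synthetic clusters; note there is no explicit description
    # of either class anywhere in the program.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(X, y)
    print(clf.predict([[0, 0], [4, 4]]))   # -> [0 1]

    # The learned "rule" lives here, as weights, not as a statement:
    print([w.shape for w in clf.coefs_])   # -> [(2, 8), (8, 1)]

Contrast this with a Prolog program, where every rule the system follows was written down explicitly by someone.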

So far, so good, Martha: Pro-Wittgenstein.

But nonetheless there is good reason to believe that, though they are distributed, implicit, and emergent, what these machines develop are indeed representations of statistical properties and classificatory schemata gleaned from iterative presentations of data. The machines could even be given the ability to inspect or "represent" these representations; and it is this level of meta-representation which is often said to be necessary (and sufficient?) for self-awareness. Meta-representations are used in human and machine cognition, for example in "chunking" and classifying operations. Sentience does not necessarily follow from the capacity to represent one's representations. What does follow from meta-representation is a "picture picture", precisely what Wittgenstein warned us to be wary of when "inspecting" our own perceptual/conceptual processes.

Do you remember what Wittgenstein called "seeing-as"? We can see the duck-rabbit as a duck or a rabbit, and context can make us more likely to perceive one or the other (the rabbit aspect apparently jumps out at more viewers around Easter time). Now, in all likelihood a machine could be trained to use contextual cues for seeing the duck or the rabbit. But this is simply contextualisation by data addition: the machine is trained to take proximate data points into account when it makes its decision about what it is "seeing". It is not "true" seeing-as, as there is nothing in the machine which sees the seeing. Even meta-representations in a machine are just hierarchically structured representations. They are the "picture picture" without the picturer.

That's what the Murderbot embodies in a fictionalised way. I imagine that the microwave doesn't know that three minutes have passed, but actually neither do we. What we get via Wells is that we feel the Murderbot knows. We have the sense of knowing that it/she really knows. I have read arguments saying "oh, but our sense of ourselves as knowing subjects is illusory"; in what sense (excuse the homophone) is this the case? Our sense of self is said to be an illusion, but still we have it. Can we not equally say that the Murderbot's self just is its sense of itself/herself? Since the Murderbot undeniably experiences a sense of self, what does it mean to say that that sense is itself illusory? How can an experience of an experience be illusory? This time we've got Miki to give the Murderbot some further contextualisation of self, which is quite different from what happened in the last two installments (good fiction works extremely well by using opposites). The Murderbot's experiences of the external world can be illusory because its/her senses can be deceived so that they don't correspond to the "objective" reality of what they appear to represent to it/her. Yet experience is already inherently subjective. How can we compare it to itself and say that the comparison is false, or even falsifiable? How can the Murderbot's experience of the Murderbot's experience be anything other than the Murderbot's experience?

You see, it's through the Murderbot's snarky internal monologue that we can experience its/her experience, and it's just how things seem to it/her, no more and no less. It makes no sense to say that it/she was mistaken when it seemed to it/her that this is how things seemed to it/her, or that in fact they seemed some other way, only it seemed to it/her that they seemed how they seemed... Isn't it just marvellous that we can get this kind of thing out of an SF story?


NB: “It” is the personal pronoun the Murderbot uses to refer to itself/herself. In my mind, the Murderbot is a she. Not sure why. It just feels like that.


Sunday, August 05, 2018

Stale Spin-off: “The Grey Bastards” by Jonathan French



“Jackal ignored him, throwing his arms wide in a mock flummox. ‘The name escapes us. Anyway, he’s some inbred, overstuffed sack of shit that weds his cousins, fucks his sisters, and has small boys attach leeches to his tiny, tiny prick.’”

In “The Grey Bastards” by Jonathan French



Didn't Warcraft come about because Blizzard were originally developing a Warhammer game, then lost the license before publishing? Yes. Do we need another Warcraft book derivative with half-orcs and shit like that?

Claims:

Many spin-offs or Plan Bs have become much bigger than the original Plan A:

- The PlayStation became much bigger than the SNES, for which Sony were developing a CD drive before they had a bust-up and Sony decided to do their own console

- Warcraft/WoW became many, many times bigger than Warhammer

- "Assassin's Creed" was initially created as a Prince of Persia game but converted into something that became ten times as big as its origin franchise.

- "Star Wars" was what George Lucas created when he couldn't secure the rights to "Flash Gordon". Guess which IP is bigger now?

- Hasbro/Takara created "Transformers" as a response to the Bandai "Robo Machines/GoBots" toy range, and were accused of ripping them off. Thirty years later, Robo Machines are pretty much forgotten, and Transformers movies make billions.

- "The Hunger Games" lifts very obviously from "Battle Royale", but THG is now a worldwide IP titan whereas BR is now thought of as a well-regarded cinematic footnote.

- "Mac and Me" became far more famous and successful than E.T. - The Extra-Terrestrial, and is now one of the most highly-regarded movies of the 80s.



One of the above-mentioned claims is false. Care to guess which one...?

Coda:


Roses are red
Violets are blue
This book is crap
And the sequels will be too.

NB: This world seems to know no CGI-battle (the book equivalent, I mean) saturation point. It'd be cool if this book were just about some orc refugee rambling around the landscape, eking out a living on the edges of the conflict zone, maybe falling in love with a human refugee, and the whole thing going down with zero scenes of violence, some orc-philosophy shit, and inter-species love on top of everything else. It's hard these days not to thirst for violence-, foul-language- and conflict-free books; they're such a rarity.

Saturday, August 04, 2018

Best of my blog: 12 years, 801 posts, and 177K hits later




11 years are a goner... Now it's time for the best of my blog: 12 years, 801 posts, and 177K hits later [in one year I wrote 171 posts (801 - 630 = 171), and this blog you're reading had around 40K views (177K - 137K = 40K)].

As I stated in 2017, blogging is the one thing in life I did before it went mainstream in 2004 [the blog you're reading has existed since 04.08.2006; before that I had two other blogs: the first, from 2004, in Portuguese, no longer exists (Geocities Blog); the second, in German, still exists (GooglePages Blog)]. I can't say that about anything else at all. In fact, I'm usually behind the times, but blogging — blogging is something I loved the minute I found out about it. I've been a content creator all my life. I can't remember a time when I didn't want to create something. Blogging as a sustained exercise demands, I suspect, a great deal more time and focus than many people can now allow it. I usually say there are three types of people, blogging-wise: there are people with blogs who like to write about blogging, people with blogs who like to read those articles, and then there are the other 99% of the population who don't give a shit about blogging... Am I successful? Is it important? For me, "successful" bloggers are not those who just post their stuff on the internet and then wait for the crowds to come flocking. Their blogs are just part of a conversation, but most of that conversation is happening away from their blog. Of course, some people steal your content and broadcast it as their own. That's the nature of the beast. Those who cannot create, steal... And this isn't confined to other bloggers. But there's always tomorrow.

There are still quite a few successful blogs out there, if not successful in creating a massive following then at least in generating original content that is more meaningful and truthful than the aggregated drivel of the mainstream media. The traditional definition of the term "blog" still has that whiff of naive amateurism attached to it. It may be that we just don't consider the more professional blogs as "blogs" per se. I know I don't; I'm too absorbed by the ones I follow to even consider whether they should be defined as such. You know who you are.

How self-absorbed are some people who have a blog? "Seriously?", I always say. Somehow the stereotype of the blogger who notes the shitty minutiae of his or her daily life has become the public idea of a blog, whereas most real blogs are a series of reports and/or opinion pieces on a set topic. That is, not self-absorbed at all, if you are also into that topic. In my case it's book reviewing and other related stuff (programming, plays, movies, TV series, Shakespeare, etc.). A friend of mine, a retired history teacher, writes mainly about history but has also produced brilliant essays on Tolkien, in addition to interesting posts on George Orwell, Jonathan Swift, Shakespeare, Henry James, Jane Austen and many more. In my own case, I started a blog because I thought it would encourage me to do more writing, which it has. I see my blog as an online magazine that anyone is free to read or ignore. Take your pick.

Some people do indeed write solely for the satisfaction that comes from creative self-expression. 

Here are some of my favourite entries:

  1. For the Love of the Game: "Lions vs. All Blacks - 2017 Series - Third Game"
  2. The Linux Server Encyclopaedia: "Anonymous" by Roland Emmerich
  3. GOATnaldo - It was men against God: "Portugal vs. Spain in Russian World Cup"
  4. Ronaldo's WTF Moment: Overhead Goal - Juventus vs. Real Madrid
  5. Advanced Python Class: "PacMan" (extra-project)
  6. Homemade and Fermented Hot-Sauce: Habaneros and Bird's Eye Chiles - Part 2
  7. Bayes' Theorem: "Música da Sra Bach/Mrs Bach's Music" by Alex McCall, and Irini Vachlioti
  8. Android App: Thinking in Code: "Antao's Tretis Ported from Python"
  9. Advanced Python Class: "Argghh, Just One More Go Mum: "Exploring Tetris On My Own"
  10. Android App: "Spinner Portuguese Style" by MySelfie
  11. Beckettian SF: "The Man in the High Castle" by Philip K. Dick
  12. Minor SF Movie: "Blade Runner 2049" by Denis Villeneuve, Ridley Scott
  13. Space 1999 Reboot: "Interstellar" by Christopher Nolan
  14. Mushroom Ghosts: "Star Trek: Discovery" by Bryan Fuller, Alex Kurtzman
  15. Tickboxing Screen SF: "Altered Carbon" by Laeta Kalogridis
  16. The Endless Loops of Space Opera: "The Last Jedi" by Rian Johnson
  17. Stephen Hawking, 1942 - 2018
  18. Don't Panic! Arthur Dent Will Travel: "Falcon Heavy" by SpaceX, Elon Musk

The Math:

Total Hits (+40K hits compared with 2017):




Number of Posts (+171 posts compared with 2017):




Hits all over the world:

2017 - 1124 places, as of 17.04.2017:



2018 - 1692 places, as of 17.04.2018 (+568 places compared with 2017):




My Library's Book Publication Dates:

(comprises the 12 years)


My Library's Total Book Ratings:


(comprises the 12 years)

My Library's Number of Books Read:


(comprises the 12 years)

My Library's Book Languages:


(comprises the 12 years)

My Library's Number of Pages Distribution:


(comprises the 12 years)

My Library's "How High is my Book Stack"?


(comprises the 12 years)


NB: The Eiffel Tower is not far off...982 (my number) < 1093 (Eiffel Tower)

My Library's Totals and Averages:


(comprises the 12 years)

The Dewey Decimal System Applied to my Library:


(comprises the 12 years)


The Dewey Decimal System Applied to my Literature Stack:


(comprises the 12 years)

Author Gender:



Lists:



Number of Followers on Booklikes (+235 more followers):



Top Page Hits:


(comprises the 12 years)

The number of hits on my Shakespeare pages is still quite a surprise... Too bad I didn't create those pages back in 2006... The Theatre page, surprisingly, also gets very high hits. When we least expect them, the surprises show up...