Saturday, August 18, 2018

Empty Philosophy: “O torcicologologista, Excelência” by Gonçalo M. Tavares



“– Gosto muito de bater na cabeça das pessoas com uma certa força.
– Gosta?
– Sim, agrada-me. Dá-me prazer. Uma pessoa vai a passar e eu chamo-a: ó, desculpe, Vossa Excelência?!
– E ela – a Excelência – vai?
– Sim. Quem não gosta de ser chamado à distância por Vossa Excelência? Apanho sempre, primeiro, as pessoas pela vaidade... é a melhor forma.
– E quando a pessoa-Excelência chega ao pé de Vossa Excelência, o que acontece?
– Ela aproxima-se e pergunta-me: o que pretende? E eu, com toda a educação e não querendo esconder nada, digo: gostava de bater com certa força na cabeça de Vossa Excelência. É isto que eu digo, apenas. Nem mais uma palavra.


In “O torcicologologista, Excelência” by Gonçalo M. Tavares (In English: "His Excellency, The Circumlucologist")


Let’s try to translate this quote, in which the two characters go by the same name, “Excellency” (as does the Passerby):

"I like to hit people's heads with a certain force.”
“Do you like it?”
“Yes, I like it. It gives me pleasure. A person will go by and I will call him or her: ‘Oh, sorry, Your Excellency?!’”
“And is she or he, the Excellency, going?”
“Yes. Who does not like being called from a distance by Your Excellency? I always get people first through their vanity ... it's the best way.”
“And when the person-Excellency comes near Excellency, what happens?”
She or he approaches and asks me: “what do you want?”, and I, polite, and not wanting to hide anything, say: ‘I would like to strike your head with a certain force. This is just what I say. Not another word.”

While it is not so popular to say so in our western enlightened circles, Socrates was also considered a mystic - talking to the daemon in his head in order to raise arguments against the relativism of the sophists. I wonder how Tavares would take the methodology Socrates used to silence his relativist opponents… Isn't it ironic that Socrates' rational ethics actually depends on a mystical inner voice telling him the truth? The mind boggles, but it doesn't matter, since it's a phony debate. The political dimension is of course very relevant, particularly considering that Socrates was not entirely original in his mystical discourse; like Tavares, a certain Zarathustra, a Persian philosopher, was obsessed with the struggle between truth and lie well before Socrates. And given that Zoroaster/Zarathustra was Persian, we could easily conclude that Socrates' philosophy, inspired by Zoroastrian mysticism, was never going to be very popular in the political spheres. What about Gonçalo M. Tavares' dialogues? Are they empty philosophizing, as some claim?

Let’s philosophise for a bit and see where it leads us (I’m the other Tavares…).

Observation, as a process, is problematic in itself. This is my claim, and it's not a scientific one, so I don't need to test it. Even if I did test it, as Gonçalo M. Tavares does, the results would need to be observed. It's a "Gedanken", not a classical concept. I didn't ask why Sokal did what he did, and I'm not interested. I read it anyway, many years ago, when it was "hot stuff". Hawking's theories, such as that of the Flexiverse, are far more interesting than anything Sokal, Bricmont et al. have been offering so far in the realm of physics anyway. The same goes for the pushers of the "postmodernist" and "anti-postmodernist" industries in academia. Toilet paper at best.

Reading dialogues like the Gorgias, it's relatively easy to see that Plato is reacting to a widespread attitude which valued honour and glory above the virtues that enabled people to 'dwell together in unity'. That outlook, together with a growing and unsettling moral relativism, leads to strife between cities and within them. Plato is of course also aware that political strife and moral relativism reinforce each other: in the "Republic" he discusses the effect of persuasive definition as used by those who want to justify political violence, and, according to Plato, it turns common-sense morality upside down.

These are the main concerns of both Socrates and Plato, and they are what drives them to try to find stability for the words we use when we talk about justice and the like. This is why they ask 'what is justice?', 'what is virtue?', 'what is goodness?' and so on. Again, there is a difference between what common-sense morality is as a set of principles, practices and perhaps psychological mechanisms, and what ordinary people say about those principles and mechanisms. Ordinary thought may be corrupted in some interesting sense and involve distortions that do not do justice to what it depicts. This, it seems to me, is Tavares' view. A novel based on something like this sort of claim about our ordinary grasp of common-sense morality is perfectly compatible with trying to vindicate the underlying principles, but the book can of course easily part company with the common-sense picture of what is going on at the meta level, the normative level and the moral-psychology level; our ordinary, common-sense grasp of these components of common-sense morality can be poor or non-existent, and this sets the stage for the enlightened reader to enter Tavares' story and sort things out.

Tavares is very much aware that there are different classes of people, each with a different understanding of what counts as 'common sense' in morality, and each with different principal virtues: Philosophers, Community workers, Soldiers, Capitalists, Workers, Politicians, Craftsmen, Poets, Scientists, Lawyers, Criminals. Each of these people, in their day-to-day working life, is making decisions and taking actions, and many (if not quite all) of these decisions and actions have a moral dimension. These people are, on the whole, keen to do the right thing, and they each develop different versions of 'common sense' morality - sometimes very different ones. So soldiers should value courage. Politicians should value fairness. Workers should value self-restraint. Philosophers should value wisdom. Readers should smell the bullshit a mile away. And each should understand their place in the whole system.

Anyway, there is much to say about 'know thyself', but the most significant point is that it is taken as a pillar of most, if not all, variations of mysticism...

Thursday, August 16, 2018

Gravity Curves Space-time. That’s It: "On Gravity: A Brief Tour of a Weighty Subject" by Anthony Zee



“That space and time are replaced by spacetime immediately tells us how a field, be it electromagnetic or gravitational, varies in time once we know how it varies in space.”

In “On Gravity: A Brief Tour of a Weighty Subject” by Anthony Zee


“Einstein says the space-time is curved and that objects take the path of least distance in getting from one point to another in space-time. The curvature of space-time tells the apple, the stone, and the cannonball to follow the same path from the top of the tower to the ground.”

In “On Gravity: A Brief Tour of a Weighty Subject” by Anthony Zee


“Gravity curves space-time. That’s it.”

In “On Gravity: A Brief Tour of a Weighty Subject” by Anthony Zee



On August 17, 2017, the collision of two neutron stars was detected (Zee references this in his book).

Let me hypothesize: consider a particle on the surface of one of the neutron stars in a pair, about 10 km from the centre. It's being pulled downwards by an enormous gravitational force - about a hundred billion times stronger than gravity at the Earth's surface (if I calculated it right). But if the particle is going really, really fast (for example, close to the speed of light) it can still escape the star and not get pulled back in. That's fundamentally no different from achieving escape velocity from the Earth or the Sun; it's just much bigger numbers. Now imagine that star instantly collapsed into a black hole. The particle 10 km from the centre would continue being pulled in the same direction by the same amount of mass, with exactly the same gravitational force as before. It could therefore escape just as easily as it could escape from the neutron star. As far as that particle is concerned, there's nothing magical about the black hole; it's just a big nearby mass. With two neutron stars colliding, I think their combined mass will produce a black hole with an event horizon radius of maybe 7 km, which is even smaller than the original stars. By definition, anything closer than 7 km to the centre would be unable to escape even if it travelled at the speed of light. But anything further away can still escape by going fast enough. The collision process will presumably accelerate some particles to huge speeds, so that they are far enough out and moving fast enough to escape even after the black hole has formed.
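
A quick back-of-the-envelope check of those numbers (a minimal Python sketch; the 1.4-solar-mass, 10 km star and the ~2.7-solar-mass remnant are my own assumptions, not figures from Zee's book):

```python
# Rough numbers for the neutron-star scenario above.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

M_star = 1.4 * M_sun # assumed mass of one neutron star
R_star = 10e3        # assumed radius: 10 km, in metres

g_surface = G * M_star / R_star**2
print(f"surface gravity ≈ {g_surface / 9.81:.1e} times Earth's")                # ~2e11

v_escape = (2 * G * M_star / R_star) ** 0.5
print(f"escape velocity ≈ {v_escape / c:.2f} c")                                # ~0.64 c

M_remnant = 2.7 * M_sun
r_horizon = 2 * G * M_remnant / c**2
print(f"event-horizon radius of the merged object ≈ {r_horizon / 1e3:.1f} km")  # ~8 km
```

With these inputs the surface gravity comes out around 2 × 10^11 times Earth's and the horizon around 8 km - the same ballpark as the "hundred billion" and "maybe 7 km" guesses above.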

Many stars form in pairs, triplets, etc., and live like that throughout their lives. This is even more true for stars more massive than our Sun (the really massive stars always seem to form as part of multiple systems). So when they pop their clogs and become neutron stars they can still be bound by gravity in the same system (not absolutely guaranteed, since they probably went bang first, which could provide enough energy to disrupt the system). Even if they turn into neutron stars at different times, there's a pretty good chance they'd be left orbiting each other at the end, and then they will get closer very slowly over time (perhaps a bit quicker if they interact with another star system in passing). They keep radiating gravitational waves all through this, until eventually they get close enough that the rate at which they emit energy becomes significant (i.e., the energy radiated per orbit starts to become a noticeable fraction of the orbital binding energy). As they continue to spit out that energy, the rate at which the orbit tightens increases dramatically, until you get the final merger. (For the record, the Earth also emits gravitational waves as it orbits the Sun. Fortunately for us, it only emits about as much power as a light bulb...)
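
And for the light-bulb aside at the end, the standard quadrupole formula gives the right ballpark (the formula is textbook general relativity; the numerical inputs are my own):

```python
# Gravitational-wave power radiated by a circular binary:
#   P = (32/5) * G^4 * (m1*m2)^2 * (m1+m2) / (c^5 * r^5)
# applied here to the Earth-Sun system.
G = 6.674e-11         # m^3 kg^-1 s^-2
c = 2.998e8           # m/s
m_sun = 1.989e30      # kg
m_earth = 5.972e24    # kg
r = 1.496e11          # Earth-Sun distance, m

P = (32 / 5) * G**4 * (m_sun * m_earth)**2 * (m_sun + m_earth) / (c**5 * r**5)
print(f"GW power radiated by the Earth-Sun system ≈ {P:.0f} W")   # roughly 200 W
```

Roughly 200 watts, i.e. a couple of old-fashioned light bulbs.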

This is all great, and to think one man postulated it: Einstein.

However, I’ve been sharing lots of nonsense regarding the Standard Model, basically saying that it still has massive holes in it. To wit:

1. Quantum physics does not join up with the physics of large objects, and after almost 100 years we still have not come close to making it do so;

2. Black holes are infinite inside, beyond the Event Horizon. But that infinity means we have no theory of what lies "beyond" the Event Horizon, nor of what happens to matter sucked into them; the maths and the theory break down;

3. It’s all fecking lies; it's the leprechauns and rainbows;

4. And most important of all... Standard Model physics still cannot explain why bicycles, when ridden, stay upright.


My answer to the points above:

1. QM works fine with large objects. It works, and is very necessary, when matter becomes compressed, as in a star, a white dwarf or a neutron star; it’s also relevant because quantum tunneling is involved in photosynthesis, for example - just read Jim Al-Khalili's book “Life on the Edge: The Coming of Age of Quantum Biology”, which gives many other examples of how QM is totally relevant to biological systems;

2. The infinity occurs at a single point, the singularity at the centre, not throughout the region inside the event horizon. So nothing is known about what happens at a singularity, and it may never be known;

3. Seriously?

4. As for riding a bike, it involves balance. In the inner ear there are a couple of fluid-filled tubes which the brain uses to maintain balance. Stability also comes from the angular momentum in the wheels. Try riding a bike slowly.

The amount of hard work and diligence put in so that you can catch something as remote, esoteric and rare as this while it is actually happening (well, as soon as the light reaches us, etc.) is almost as impressive as the event itself, if not more so. We were banging rocks together to make sparks not so long ago, and now this... Think about it: how do people manage to find and record this? The ingenuity and labour involved just floors me. Impressive. While people are still arguing that the Earth is flat and that NASA uses fish-eye lenses, in the background quietly awesome events of nature are being observed and studied, in the hope of cleansing a few more doors of perception.

Most people don’t get what gravity is: “gravity” is the bending of space itself. So the light doesn't bend; it’s simply travelling in a straight line through bent space. If you throw a ball forwards in the air, the arc is not gravity pulling on the ball; the arc is the ball travelling through space bent by gravity. So the ball actually moves in a straight line; it is space that is bent by gravity, and that bending is what we observe as the curve. People also say the speed of light is fixed. Strictly, what is fixed is the maximum speed at which a massless particle can move through a vacuum - that is the speed limit, and light simply can’t go faster than it.

String theory predicts the existence of gravitons. The problem with detecting these bosons (nothing to do with the Higgs) is their very low probability of interacting with any detector (i.e., their "cross-section" is very small).

The force of gravity at the scale of atoms is weaker than the other three forces by unfathomable orders of magnitude (something like a thousand trillion trillion times weaker than the strong nuclear force that binds nuclei together!), and it could never be probed in a particle-beam experiment such as the LHC. In fact, at the quantum level we would not even know it existed, were it not that at the scale of planets and stars its effects become apparent. There is the possibility that the reason it is so weak is that we only feel a small part of the force, the rest being present in another dimension. String theory predicts this.
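
To get a feel for that weakness, here is the classic comparison of gravity with the electromagnetic force between two protons (a minimal sketch; the strong-force comparison comes out even more lopsided):

```python
# Ratio of gravitational to electrostatic force between two protons.
# The separation cancels out, since both forces fall off as 1/r^2.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
k = 8.988e9         # Coulomb constant, N m^2 C^-2
m_p = 1.6726e-27    # proton mass, kg
e = 1.602e-19       # elementary charge, C

ratio = (G * m_p**2) / (k * e**2)
print(f"gravity / electromagnetism for two protons ≈ {ratio:.1e}")   # ~8e-37
```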

People wondered: what is out there, beyond the horizon? Is it a cliff? Do we topple off the Earth when we reach it? Is there another world we know nothing about? Some began to dream, and set out to find out what lay there.

I think the sense of purpose is the only thing keeping us from decadence. If we see ourselves integrated into a bigger picture, we feel a sense of duty. And now that I think about it, maybe the state our world is in and our disinterest in meaning and purpose are coupled? Back to the physics, which is a consequence of Einstein’s field equations: ‘Space-time tells matter how to move; matter tells space-time how to curve.’ Any two things in orbit around each other will radiate energy away in the form of gravitational waves (it takes energy to squeeze and stretch the fabric of space). Ordinarily the amount is so utterly feeble as to be undetectable. It’s a different matter when two black holes are about to merge, however: two tiny objects, each with many times more mass than the Sun, spiral around each other thousands of times a second during their final death throes. That’s quite a blur, and it rips space and time to shreds in the vicinity. Once space-time has imploded, all that is left on the outside is the bending and rippling of space. Nature is a good accountant and converts energy into different forms all the time. The energy used to bend space is deducted from the final mass of the merged black hole.
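
For reference, the field equations behind that slogan, in their standard form (leaving out the cosmological-constant term): curvature of space-time on the left, matter and energy on the right.

```latex
G_{\mu\nu} \;\equiv\; R_{\mu\nu} - \tfrac{1}{2}\,R\,g_{\mu\nu} \;=\; \frac{8\pi G}{c^{4}}\,T_{\mu\nu}
```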

In the case of (almost?) any physical black hole, the answer to 'how much energy lies outside the horizon?' is surprisingly simple: all of it. The reason for this is reasonably easy to describe, if not to understand. Almost all black holes (and from now on I'm just going to say 'all') originate from some kind of collapse: a large star runs out of fuel and collapses and you get a black hole. Except you don't, quite. There are two ways of thinking about why. One is to say that, as stuff falls inwards, then, as seen from far away (by us, say), it gets more and more red-shifted and thus moves more and more slowly; and it never actually quite crosses the event horizon, because that's the point where the red-shift becomes infinite and it just stops. Another (which I find easier) is to remember that the event horizon is always in the future for anyone outside it. And that means anyone: there are no observers outside the horizon for whom it is in the past, and thus no observers outside the horizon ever observe anything crossing it, and this includes the initial collapse. These statements are actually equivalent, of course. So a 'black hole' formed from a collapse event is not, in fact, quite a black hole, because nothing has ever crossed -- and nothing ever will cross -- the horizon (and in fact there is no horizon) from the perspective of an observer far from the event. Everything is 'frozen' just as the thing forms. That sounds like I'm saying that black holes don't exist, and I kind of am saying that. However, it turns out that this makes no practical difference: it's easy to show (it's basically Newton's shell theorem) that the gravitational field of one of these collapsed objects is identical to a black hole's, and it's almost as easy to show that these objects are really black (no information reaches you from in-falling matter) and so on: they are in every detectable respect the same as actual black holes, which is why it is safe to treat them as such.

There is one important way (or a way which may be important) that they are different: because the collapse is essentially frozen, there is no singularity yet from the perspective of an outside observer (i.e., us). But these almost-Black-Holes do have Hawking radiation, and so they (very, very slowly, and in practice not at all until the universe has cooled a very long way from where it is now) lose mass as thermal radiation. So eventually, over ludicrously long time-scales, they will evaporate. And this means that there never will be a singularity!
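
"Ludicrously long" can be made concrete with the standard Hawking evaporation estimate, t ≈ 5120 π G² M³ / (ħ c⁴); here it is for a one-solar-mass black hole (my choice of example mass):

```python
# Hawking evaporation time for a black hole of mass M:
#   t ≈ 5120 * pi * G^2 * M^3 / (hbar * c^4)
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
hbar = 1.0546e-34    # reduced Planck constant, J s
M = 1.989e30         # one solar mass, kg

t_seconds = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)
print(f"evaporation time ≈ {t_seconds / 3.156e7:.1e} years")   # ~2e67 years
```

That is about 10^67 years; for comparison, the universe is currently only about 10^10 years old.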

Bottom line: what Zee’s book gives us is a middle approach to the concept of gravity. As an introductory text it’s invaluable. His explanation of the principle of least action is also masterful. Zee is absolutely right that it's interesting (no doubt) and useful (again, no doubt). All modern physics is based on this principle, from Relativity to Quantum Mechanics to String Theory. Although this doesn't necessarily "prove" it's correct, it shows how it is actually used (likewise, we never really “proved” that Newton's Laws are equivalent to Hamilton's Principle). I studied and passed my classical mechanics exams and can apply the Lagrangian and Hamiltonian formulations to simple problems like the harmonic oscillator, planetary motion, etc. Most writers make the situation too difficult at a very early stage of their explanation by introducing the subtle concepts of virtual work, d'Alembert’s principle, generalized coordinates, etc., making it very difficult to follow. Zee is a great teacher, and he gave me a feeling for how the analytical formulation treats mechanical systems from a deeper level of reasoning. Susskind's explanation of the principle of least action is also pretty good, but Zee’s is better. I hate it when theoretical physicists start using the so-called hand-waving approach. Zee avoids this trap magnificently.
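
To make the least-action idea concrete, here is the shortest worked example I know, for the harmonic oscillator mentioned above: write down the Lagrangian, demand that the action S = ∫ L dt be stationary, and the Euler-Lagrange equation hands you back Newton's equation of motion.

```latex
L = \tfrac{1}{2} m \dot{x}^{2} - \tfrac{1}{2} k x^{2},
\qquad
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0
\;\;\Longrightarrow\;\;
m\ddot{x} + kx = 0 .
```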

NB: How do they know which bit of sky a gravitational wave came from, you at the back are asking? By having multiple detectors separated by thousands of miles and at different angles to one another: the separation of the detectors is such that the arrival time of the gravitational wave at each detector is slightly but measurably different (of the order of milliseconds). The orientation of the arms of the detectors, and the differing amounts by which each arm is "squeezed", give an idea of the direction of travel of the wave. The gravitational waves were there for millions of years, for as long as the stars spiralled around each other; it's just that in the final moments before their collision the waves got strong enough to detect. The direction the gravitational waves came from can be determined roughly from the slight differences in the timing of their arrival at the three or four gravitational-wave observatories we currently have around the globe. I think astronomers started looking in that direction optically only after the collision (and the gamma-ray burst) had already happened, so all they could see was the afterglow. But that afterglow is the important part of the light, the part that contains all the chemical information! The physics of detecting the gravitational waves is fairly standard stuff: the interferometer is a hundred years old in principle. What makes LIGO so sensitive is the accuracy and detail of its engineering, and the engineered systems that eliminate noise. The theory underpinning the waves themselves isn’t new either; it has been studied extensively for decades. So once the confirmation of their existence was achieved, pretty much everything else was in place. This latest event has confirmed that their speed of propagation corresponds to the theory as well.
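
The triangulation arithmetic is simple enough to sketch (the detector separation and the measured delay below are illustrative numbers, not the real event's):

```python
# A gravitational wave travelling at c reaches two detectors a baseline d apart
# with a delay dt = (d / c) * cos(theta), where theta is the angle between the
# baseline and the direction to the source.
import math

c = 2.998e8            # m/s
d = 3.0e6              # assumed ~3000 km baseline between two detectors, m

max_delay = d / c
print(f"maximum possible delay ≈ {max_delay * 1e3:.1f} ms")        # ~10 ms

measured_delay = 6.9e-3   # a hypothetical measured delay of 6.9 ms
theta = math.degrees(math.acos(measured_delay / max_delay))
print(f"source lies on a cone at ≈ {theta:.0f}° to the baseline")  # ~46°
```

Two detectors give you a ring on the sky; a third, at a different orientation, narrows it to a patch, which is why the small worldwide network matters.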

Monday, August 13, 2018

GOFAI vs AML: "Common Sense, the Turing Test, and the Quest for Real AI" by Hector J. Levesque




“It is not true that we have only one life to live; if we can read, we can live as many more lives and as many kinds of lives as we wish.”

S. I. Hayakawa, quoted by Hector J. Levesque In "Common Sense, the Turing Test, and the Quest for Real AI"



The problem here is the frequent ambiguity of the English language, caused by its excessively simplistic grammar - made so by the collision between Germanic and Romance that produced the English of today, essentially a creole construction. It would not arise in a language that is less mixed and more precise, e.g., German (my favourite language for rigorous thoughts and statements). Yet it should be easy enough to fix, by making the parser look up idiomatic expressions and test them against the context of the conversation. The devices of gender and declension, present in German, allow for quite precise associations. How would the parser work in German? Imagine I want to use the following sentence: "I want to get a case for my camera; it should be strong." (Die Kamera, die Kameratasche: the same definite article for both nouns.) If I change the sentence to "I want to get a case for my camera; it should be protected.", then the "it" in English refers to the camera instead of the case; can gender and declension help us in German? Well, for starters, I could associate "case" and "strong" by using the accusative for "case" in German, and I'd use the dative for "camera". So here the declension gets me out of trouble. What about "Ich möchte eine Tasche für meine Kamera kaufen; sie soll stark sein."? The pronoun "sie" is the subject of the second clause, so how can declension help us at all? The solution would be to say "Ich möchte eine starke Tasche für meine Kamera kaufen." In this case, indeed, the declension doesn't help, because both nouns, Tasche and Kamera, are in the same case, the accusative. But you could drop "für" and use the dative for Kamera instead, "meiner Kamera", although this would not sound natural in present-day German. Still, that is the meaning of the dative: the "für" is implicit in the case. It would work with a human object, e.g., meine Frau. On the other hand, if we were to stick to "Tasche", my wife might want something more glamorous, like a Hermès Birkin handbag. Another solution would be to avoid the pronoun altogether: the case must be strong. But that wasn't the point of Levesque's examples, which was that conversational English is hard for a computer to understand - and it is. People don't want to have to think about whether the computer will find their sentences ambiguous; we want computers to understand us as well as another human does.
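
As a toy illustration of the agreement idea (entirely hypothetical - this is not how Levesque, or any real parser, does it): gender marking can rule out antecedents, but when both candidates agree, as Tasche and Kamera do, it decides nothing, which is exactly where the paragraph above ends up.

```python
# Toy pronoun-antecedent filter based on grammatical gender agreement.
from dataclasses import dataclass

@dataclass
class Noun:
    lemma: str
    gender: str   # "f", "m" or "n"

def compatible_antecedents(pronoun_gender, candidates):
    """Keep only the candidate nouns whose gender agrees with the pronoun."""
    return [n.lemma for n in candidates if n.gender == pronoun_gender]

# "Ich möchte eine Tasche für meine Kamera kaufen; sie soll stark sein."
# Tasche and Kamera are both feminine, so agreement alone leaves the ambiguity.
print(compatible_antecedents("f", [Noun("Tasche", "f"), Noun("Kamera", "f")]))
# -> ['Tasche', 'Kamera']

# With a masculine alternative for the bag (e.g. "der Kamerabeutel", my own
# example), "sie" could only refer to the camera.
print(compatible_antecedents("f", [Noun("Kamerabeutel", "m"), Noun("Kamera", "f")]))
# -> ['Kamera']
```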

One issue that I have with the working definition of intelligence implied by the use of a Winograd schema to determine the level of artificial intelligence a system has attained is that we have a particular answer to the question posed which we consider correct. Take, for instance, the second example question posed above: "João comforted Manuel because he was so upset. Who was upset?" Naturally the implied answer to this question is Manuel. We assume that if João is doing the comforting then clearly he is not the person who is upset, because otherwise Manuel would be doing the comforting. I based my take on this question on the most probable of the possible situations: it would be unlikely for João to comfort Manuel if João himself were the one who was upset, unless he derived some comfort from comforting someone else who, furthermore, didn't need it. That interpretation is not wrong, merely statistically improbable, and we (as humans) would probably say it was wrong. However, I could see an artificially intelligent system arriving at both conclusions. So, according to the Winograd schema, to be intelligent you need a working knowledge of common social and physical situations, as well as a database of the outcomes that resulted. Then the system needs to be able to determine which possibility is most likely to occur based on the information it has. But if you use this as a test, we could find that the AI system can consistently predict the answer to the question and still not be intelligent - and, what's more, could misunderstand the context of the situation the question is simulating. In addition, you may also find that if you gave this test to a group of humans, some of them would get some of the questions "wrong" as well. So then what does that say about those humans?
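
Here is roughly what a Winograd schema looks like as data, together with the kind of naive statistical resolver the paragraph worries about (the association scores are invented for illustration; a real system would estimate something like them from a corpus):

```python
# A Winograd schema as a small data structure, plus a purely statistical resolver.
schema = {
    "sentence": "João comforted Manuel because he was so upset.",
    "pronoun": "he",
    "candidates": ["João", "Manuel"],
    "question": "Who was upset?",
    "answer": "Manuel",            # the intended, common-sense answer
}

# Invented association strengths between each candidate's role and "upset":
# the comforter is rarely the upset one, the comforted party usually is.
roles = {"João": "comforter", "Manuel": "comforted"}
association = {("comforter", "upset"): 0.15, ("comforted", "upset"): 0.85}

def naive_resolve(schema):
    """Pick the candidate whose role co-occurs most strongly with the predicate."""
    return max(schema["candidates"], key=lambda c: association[(roles[c], "upset")])

print(naive_resolve(schema))   # 'Manuel' -- the right answer, but for purely
                               # statistical reasons, which is exactly the worry
```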

What if, instead, we gave an AI system a set of seemingly random objects and no instructions on what to do with them? If those "seemingly" random objects assembled into something recognizable, and the AI system could, without instruction, create that recognizable object, then I think one could state that the system was intelligent. For example, suppose you gave a robot with AI a set of Legos that assembled into the shape of a box. If the AI system could, without instruction, assemble that box, that would show some signs of intelligence. Alternatively, just as a child given a set of Legos, with or without instructions, will often make things other than what is shown in the instructions, an even more intelligent system could not only create the aforementioned box but might also exercise creativity and create something recognizable with the parts provided that was never intended to be created. In that case I think the AI system would need to recognize the objects it creates, and possibly describe what it made and why it made it.

Actually, all this is about depth. If you go to enough depth, the computer will get to the end of its rule base rather fast. One does not need to invoke Gödel and incompleteness! 10**9 rules can be traversed in just 9 questions, or far fewer, and humans know exactly what questions to ask. It's human magic. 10**12 rules, an impossibly large rule base to maintain, can be traversed in 12 questions or fewer, 10 ways at a time. If you do invoke Gödel, then a human can go to infinite depth, leaving the computer far, far behind. Human intelligence is truly infinite, and computer intelligence is finite. That is an idea from Aristotle, and Incompleteness is the formal statement of it.
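
The arithmetic behind "10**9 rules in 9 questions", as a minimal sketch (the assumption doing all the work is that every question has 10 possible answers):

```python
# Each question with b possible answers splits the remaining rule base into b
# parts, so discriminating among n rules takes the smallest q with b**q >= n.
def questions_needed(n_rules, branching):
    q, reach = 0, 1
    while reach < n_rules:
        reach *= branching
        q += 1
    return q

print(questions_needed(10**9, 10))    # 9   ten-way questions
print(questions_needed(10**12, 10))   # 12
print(questions_needed(10**9, 2))     # 30  mere yes/no questions -- still tiny
```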

One of the problems with Jeopardy! is that I got bored with it years ago. It seemed like a contest that could be played by a computer (I thought so years ago), and therefore not very interesting. What is interesting is this: a human might learn rules, perhaps 10**9 rules for navigating the world, but gets very bored with invoking those rules over and over, robotically. Once a human has truly learned a rule, he or she never wants to actually apply it. Boring. Humans want to transience (a spell-checker error that I will leave in to make the point) the rules, break out of the box, and do something different and creative. Always. So... let the computer perform those 10**9 rules so that humans can go out much farther. Is the ability to perform a task a measure of intelligence? Let's suppose that an AI device can be trained to perform a task, like facial recognition, better than a human.

That performance is likely. A car can roll faster than a human can walk; most machines can outperform a human at what they are designed for. That is why the machines exist. But we don't call a fast car intelligent, even if it can drive itself around a race course faster than an untrained human - which it can. Intelligence is the ability to drive the car, write an advertisement, hire a new DJ for your nightclub, then make a sculpture out of ice, all within minutes. Show me a computer that has even one chance in a million of doing that. And the human has not even warmed up. Next he or she will invent a new blender recipe, create a programming language, and find the connection between Aristotle and Kurt Gödel, all within a few hours. Where is the computer in all this? It's way back, by a factor so great we cannot even estimate it. Is the factor 10**(-6)? 10**(-9)? 10**(-12)? 10**(-15)? More important, is that factor getting smaller and smaller - the computer falling farther behind - as the human intellect is turned loose, in part by computers?

From (I think) Terry Winograd. Can you parse this? "Fruit flies like a banana." :-) Obviously, there are multiple interpretations of this sentence. What is the intended inference? Aside from having the options programmed in, can an AI determine what they are? I think the Winograd schemas may be a viable alternative to the Turing test if they are able to verify that the system under test can use all the emerging rules: if it can move from the basic representation to gradually more abstract (and more flexible) ones, if it is able to find the regularities present in them, and if it is able to extract from these the rules that can be used to generate forecasts and to plan actions and behaviours.
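
The two readings, spelled out as hand-written bracketings, just to show what an ambiguity-aware parser has to choose between:

```python
# The classic ambiguity of "Fruit flies like a banana", as two bracketed parses.
readings = [
    "[NP Fruit flies] [VP like [NP a banana]]",   # insects that are fond of a banana
    "[NP Fruit] [VP flies [PP like a banana]]",   # fruit travels the way a banana does
]
for r in readings:
    print(r)
```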

Then what do we think? Can we believe in the definition of AI proposed by Levesque: “AI is the study of how to make computers behave the way they do in movies”?

Levesque's book is more concerned with the science aspects of AI than with its technology. That means we won't find any bright ideas on how to REALLY implement the Winograd schemas.

NB: I recommend also reading Turing's original paper before reading Levesque's book. It’ll give you an idea of how the Winograd schemas can improve the way we use the Turing test.

PS. GOFAI = Good Old Fashioned AI; AML = Adaptive Machine Learning.

Friday, August 10, 2018

Sock Theory vs. String Theory: "When Einstein Walked with Gödel - Excursions to the Edge of Thought" by Jim Holt



My contribution to Holt’s Edge of Thought, in the form of an article too:

An unauthorised and short version of physics.


How did scientists first deduce that the universe had hidden dimensions, dimensions curled up so tight we can't see them? Until recently SOCK THEORY was the ruling paradigm. It was thought that Theodor Kaluza and Oscar Klein deduced the existence of at least one additional dimension from the well-known tendency of socks to disappear and then reappear in unlikely places. How else to explain the mysterious behaviour of hosiery? Latterly a new paradigm, STRING THEORY, has superseded sock theory. Leave a length of string, or anything long, thin and flexible, lying undisturbed for even a day and you will find it has somehow got itself tied into knots. This can only be explained if we assume at least one additional dimension. String theory also gave birth to QUANTUM FIELD THEORY. Richard Feynman found that if you stored several discrete pieces of string in a cupboard for an hour or so they would become inextricably entangled. Feynman realised that, given half a chance, everything would get ENTANGLED with everything else. String theory also gave rise to superstring theory, which in turn morphed into the theory of branes. If you've read this far you're probably a P-BRANE.

Quantum theory is flawed and quantum proponents are in denial. String theory is in crisis (it has recently been described as dangerous nonsense). The Copenhagen interpretation is under attack (by recent experiments - even though this is not allowed by Copenhagen). Neutrinos don't actually exist (did your neutrino lose its flavour on the bedpost overnight?).

LOOP QUANTUM GRAVITY RULES! Determinism rules OK!


NB (not part of the article above): The references to one of Gödel's incompleteness theorems in Holt’s first article suggest a slight misunderstanding of the meaning of Gödel's work. What Gödel showed was not that the axioms of mathematics must be taken on faith (that insight is much older and relatively harmless) but something more subtle. Gödel showed that in any reasonably powerful mathematics there must be perfectly legal statements which cannot be proven within the framework of that system, and which require additional axioms to plug the gap. And this more powerful system will in turn necessarily include legal statements that cannot be demonstrated to be true or false without resort to still more new axioms, and so on... In other words, no system of mathematics along the lines of the Principia of Whitehead and Russell can ever be self-contained. Yet another way of saying this is that the mathematical backbone of thought is a convention or a construct, not a pure, freestanding Platonic ideal. This was a startling insight, because prior to this discovery it had been assumed by all who were equipped to assume such things (e.g., Hilbert, Russell, etc.) that a proof of the completeness of mathematics would be positive, not negative. Mathematicians are Platonists in their souls - it's profoundly disturbing to find out that the universe is not Platonic. Turing's and Church's related insight (the discovery of well-formed problems which no computer program can solve) was even more unsettling and of far greater practical significance. (The strategy of reducing a new problem to the halting problem, and thereby demonstrating it to be unsolvable, is routine even for undergraduates, and applies to a universe of problems that come up frequently in practical applications, which Gödel-style incompleteness does not.) Gödel's theorem is simply the formal-logic manifestation of the same drubbing that Einstein, Planck, Heisenberg, Turing, Darwin, Freud, Wittgenstein, Lyell, et al. gave, in other fields, to our formerly rather poetical understanding of the nature of knowing.

Gödel's incompleteness proof shows that the axioms formulated in the artificial language of Peano arithmetic (the five Peano axioms, that is) cannot be reduced to logic alone; they require supplementing with other branches of mathematics, such as set theory. Effectively, the requirements of completeness and consistency cannot both be met within such a logical system - hence the need to supplement logic with other constructs.

Wednesday, August 08, 2018

Pro- or Anti-Wittgensteinian SF: “Rogue Protocol” by Martha Wells



“I signaled Miki I would be withdrawing for one minute. I needed to have an emotion in private.”

In “Rogue Protocol” by Martha Wells


Is the SecUnit Pro- or Anti-Wittgenstein?

Wittgenstein is often cited by believers in the possibility of more or less "sentient" AI. He is cited because he seems to recast our understanding of what we mean when we talk about our own sentience, and by making ourselves seem more machine-like we can make machines seem more human-like. But I think this is a misapplication of Wittgenstein's thought. Wittgenstein objected to the "picture picture", i.e. to the way that we represent ourselves as having representations inside our heads, the way that we picture ourselves as viewers of an endless cinematic reel of internal pictures of the external world. According to Wittgenstein we don't need these internal representations, as our "view" of the world is located in the world, which is actively present to our senses when we interact with it. Nor do we need representations inside our heads of the "rules" which govern these interactions. When we "follow a rule" we don't necessarily have a representation in our minds of what that rule "means"; rather, the "meaning" of the rule is demonstrated in its practice. We can say that we have "understood" a rule when our performance accords with it, and yet we may not be able to give an algorithmic description of what we have successfully performed. Modern machine learning is more Wittgensteinian in this sense than older rule-based attempts to create AI (think of Prolog, for example). We have developed pattern-recognition systems which respond to statistical features of data, without requiring explicit descriptions of those features, and without sets of programmatic instructions telling the machine how to group or analyse or otherwise process component features into identifiable categories. These machines are not merely rule-following; at least, any "rules" which they do follow were not pre-given programmatically, nor are they easily inspectable even by the system's programmers.

So far, so good Martha: Pro-Wittgenstein.

But nonetheless there is good reason to believe that, though they are distributed, implicit, and emergent, what these machines develop are indeed representations of statistical properties and classificatory schemata gleaned from iterative presentations of data. The machines could even be given the ability to inspect or "represent" these representations; and it is this level of meta-representation which is often said to be necessary (and sufficient?) for self-awareness. Meta-representations are used in human and machine cognition, for example in "chunking" and classifying operations. Sentience does not necessarily follow from the capacity to represent one's representations. What does follow from meta-representation is a "picture picture", precisely what Wittgenstein warned us to be wary of when "inspecting" our own perceptual/conceptual processes.

Do you remember what Wittgenstein called "seeing-as"? We can see the duck-rabbit as a duck or a rabbit, and context can make us more likely to perceive one or the other (e.g. the rabbit aspect apparently jumps out at more viewers around Easter time). Now, in all likelihood a machine could be trained to use contextual cues for seeing the duck or the rabbit. But this is simply contextualisation by data addition. The machine is trained to take proximate data points into account when it makes its decision about what it is "seeing". It is not "true" seeing-as, because there is nothing in the machine which sees the seeing. Even meta-representations in a machine are just hierarchically structured representations. They are the "picture picture" without the picturer. That’s what the Murderbot embodies in a fictionalised way. I imagine that the microwave doesn't know that three minutes have passed - but actually neither do we. What we get via Wells is that we feel the Murderbot knows. We have the sense of knowing that it/she really knows. I have read arguments saying "oh, but our sense of ourselves as knowing subjects is illusory" - in what sense (excuse the homophone) is this the case? Our sense of self is said to be an illusion, but still we have it. Can we not equally say that the Murderbot’s self just is its sense of itself/herself? Since the Murderbot undeniably experiences a sense of self, what does it mean to say that that sense is itself illusory? How can an experience of an experience be illusory? This time we’ve got Miki to give the Murderbot some further contextualisation of self, which is quite different from what happened in the last two installments (good fiction works extremely well by using opposites). The Murderbot’s experiences of the external world can be illusory, because its/her senses can be deceived so that they don't correspond to the "objective" reality they appear to represent to it/her. Yet experience is already inherently subjective. How can we compare it to itself and say that the comparison is false, or even falsifiable? How can the Murderbot’s experience of the Murderbot’s experience be anything other than the Murderbot’s experience?

You see, it’s through the Murderbot’s snarky internal monologue that we can experience its/her experience, and it’s just how things seem to it/her, no more and no less. It makes no sense to say that it/she was mistaken when it seemed to it/her that this is how things seemed to it/her, or that in fact they seemed some other way, only to it/her it seemed that they seemed how they seemed... Isn't it just marvellous that we can get this kind of thing out of an SF story?


NB: “It” is the personal pronoun the Murderbot uses to refer to itself/herself. In my mind, the Murderbot is a she. Not sure why. It just feels like that.