“I signaled Miki I would be withdrawing for one minute. I needed to have an emotion in private.”
In “Rogue Protocol” by Martha Wells
Is the SecUnit Pro- or Anti-Wittgenstein?
Wittgenstein is often cited by believers in the possibility of more or less "sentient" AI. He is cited because he seems to re-cast our understanding of what we mean when we talk about our own sentience, and by making ourselves seem more machine-like we can make machines seem more human-like. But I think this is a misapplication of Wittgenstein's thought.

Wittgenstein objected to the "picture picture", i.e. to the way that we represent ourselves as having representations inside our heads, the way that we picture ourselves as viewers of an endless cinematic reel of internal pictures of the external world. According to Wittgenstein we don't need these internal representations, as our "view" of the world is located in the world, which is actively present to our senses when we interact with it. Nor do we need representations inside our heads of the "rules" which govern these interactions. When we "follow a rule" we don't necessarily have a representation in our minds of what that rule "means"; rather, the "meaning" of the rule is demonstrated in its practice. We can say that we have "understood" a rule when our performance accords with it, and yet we may not be able to give an algorithmic description of what we have successfully performed.

Modern machine learning is more Wittgensteinian in this sense than older rule-based attempts to create AI (think of Prolog, for example). We have developed pattern recognition systems which respond to statistical features of data, without requiring explicit descriptions of those features, and without sets of programmatic instructions telling the machine how to group or analyse or otherwise process component features into identifiable categories. These machines are not merely rule-following; at least, any "rules" which they do follow were not pre-given programmatically, nor are they easily inspectable even by the system's programmers.
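The contrast can even be sketched in a few lines of code. This is a toy illustration of my own devising (the data, the function names, and the little perceptron are all invented for the purpose): in the old style the programmer writes the rule down explicitly; in the learned style the "rule" exists only as weights tuned against examples, demonstrated in the classifier's practice rather than stated anywhere in the program.

```python
# Toy contrast: an explicit, inspectable rule versus "rules" that
# emerge from data. Everything here is invented for illustration.

def rule_based(x, y):
    # The old approach (Prolog-style): the programmer states the rule.
    return "above" if y > x else "below"

def train_perceptron(samples, epochs=20, lr=0.1):
    # The learned approach: weights are adjusted to fit labelled examples.
    # Nothing in the final (w1, w2, b) *states* the rule "y > x";
    # the rule is only exhibited in how the classifier behaves.
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x, y), label in samples:
            pred = 1 if w1 * x + w2 * y + b > 0 else 0
            err = label - pred
            w1 += lr * err * x
            w2 += lr * err * y
            b += lr * err
    return w1, w2, b

# Labelled examples of the pattern "point lies above the diagonal".
data = [((x, y), 1 if y > x else 0)
        for x in range(5) for y in range(5) if x != y]
w1, w2, b = train_perceptron(data)

def learned(x, y):
    return "above" if w1 * x + w2 * y + b > 0 else "below"
```

Both classifiers end up making the same judgements on this data, but only the first contains anything you could point to and call "the rule".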
So far, so good, Martha: Pro-Wittgenstein.
But nonetheless there is good reason to believe that, though they are distributed, implicit, and emergent, what these machines develop are indeed representations: representations of statistical properties and classificatory schemata gleaned from iterative presentations of data. The machines could even be given the ability to inspect or "represent" these representations; and it is this level of meta-representation which is often said to be necessary (and sufficient?) for self-awareness. Meta-representations are used in human and machine cognition, for example in "chunking" and classifying operations. But sentience does not necessarily follow from the capacity to represent one's representations. What does follow from meta-representation is a "picture picture", precisely what Wittgenstein warned us to be wary of when "inspecting" our own perceptual/conceptual processes.
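To make the point concrete, here is a deliberately trivial sketch (all names, readings, and thresholds are invented): a first-level function represents raw readings as labels, and a second-level function "chunks" those labels into a summary label. The meta-representation is just one more layer of the same hierarchy; at no point does anything in it inspect the inspecting.

```python
# Toy illustration: a "meta-representation" is just another
# representation, one level up. Everything here is invented.

def represent(reading):
    # First level: a raw reading is represented as a category label.
    return "hot" if reading > 30 else "cold"

def meta_represent(labels):
    # Second level: the sequence of first-level labels is itself
    # represented ("chunked") as a summary label. A picture of
    # pictures, but still no picturer anywhere in the hierarchy.
    hot_count = labels.count("hot")
    return "mostly-hot" if hot_count > len(labels) / 2 else "mostly-cold"

readings = [31, 35, 12, 33, 40]
labels = [represent(r) for r in readings]
summary = meta_represent(labels)  # a representation of representations
```

You could stack a third level on top ("days that were mostly-hot this week"), and a fourth, and so on; each level is still just structure, not a seer.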
Do you remember what Wittgenstein called "seeing-as"? We can see the duck-rabbit as a duck or a rabbit, and context can make us more likely to perceive one or the other (the rabbit aspect apparently jumps out at more viewers around Easter time). Now in all likelihood a machine could be trained to use contextual cues to see the duck or the rabbit. But this is simply contextualisation by data addition: the machine is trained to take proximate data points into account when it makes its decision about what it is "seeing". It is not "true" seeing-as, as there is nothing in the machine which sees the seeing. Even meta-representations in a machine are just hierarchically structured representations. They are the "picture picture" without the picturer.

That's what the Murderbot embodies in a fictionalised way. I imagine that the microwave doesn't know that three minutes have passed - but actually neither do we. What we get via Wells is that we feel the Murderbot knows. We have the sense of knowing that it/she really knows. I have read arguments saying "oh, but our sense of ourselves as knowing subjects is illusory" - in what sense (excuse the homophone) is this the case? Our sense of self is said to be an illusion, but still we have it. Can we not equally say that the Murderbot's self just is its sense of itself/herself? Since the Murderbot undeniably experiences a sense of self, what does it mean to say that that sense is itself illusory? How can an experience of an experience be illusory?

This time we've got Miki to give the Murderbot some further contextualisation of self, quite different from what happened in the last two instalments (good fiction works extremely well by using opposites). The Murderbot's experiences of the external world can be illusory, because its/her senses can be deceived so that they don't correspond to the "objective" reality they appear to represent.
Yet, experience is already inherently subjective. How can we compare it to itself and say that the comparison is false or even falsifiable? How can the Murderbot’s experience of the Murderbot’s experience be anything other than the Murderbot’s experience?
You see, it's through the Murderbot's snarky internal monologue that we experience its/her experience, and it's just how things seem to it/her, no more and no less. It makes no sense to say that it/she was mistaken when it seemed to it/her that this is how things seemed to it/her, or that in fact they seemed some other way, only to it/her it seemed that they seemed how they seemed... Isn't it marvellous that we can get this kind of thing out of an SF story?
NB: “It” is the personal pronoun the Murderbot uses to refer to itself/herself. In my mind, the Murderbot is a she. Not sure why. It just feels like that.