Some Quick Thoughts on LaMDA and Sentience
Philosophy of mind discussions are in a bad state
Recently, there has been a fair amount of publicity and discussion about artificial intelligence and its relationship to sentience. This was sparked by a Washington Post article detailing how a Google employee – Blake Lemoine – came to believe that one of the company’s artificial intelligence programs is sentient. Lemoine had extended conversations with the program – called LaMDA, for Language Model for Dialogue Applications – and decided that, like humans, it can experience emotions such as happiness or distress.
As someone who has long been interested in philosophy of mind, I’ve found a lot of the discussion that emerged in the aftermath of the Post piece fairly disheartening. Folks seem to be talking past each other, working with different definitions of terms, and being overconfident in their proclamations. So I thought I’d offer a few thoughts of my own.
In no particular order:
1. LaMDA is very likely not sentient. The vast majority of people who work in artificial intelligence seem to agree on this point. The model is excellent at text prediction, but that is very different from having subjective experience (or “qualia,” as it’s also known). (For a concrete picture of what “text prediction” involves, see the sketch after this list.)
2. While LaMDA itself is probably not sentient, a computer system could, in principle, become sentient. This is the idea of “substrate independence”: a priori, it would be surprising if carbon were unique in its ability to support consciousness. While we obviously can’t be sure that silicon can support consciousness, we shouldn’t be overconfident that it can’t, either.
3. The LaMDA chatbot exhibits at least weak “intelligence,” but it should not be described as “superintelligent,” and it certainly does not exhibit general intelligence. (Unlike a system such as DeepMind’s Gato, which seems to be a form of weak general intelligence, LaMDA cannot accomplish a wide variety of tasks.)
4. Saying that LaMDA is not “intelligent” because it simply “pattern matches” strikes me as ridiculous. What do you think human intelligence is, if not just really, really good pattern matching?
5. LaMDA’s intelligence (or lack thereof) is not a good indicator of its sentience (or lack thereof). Many beings that seem clearly sentient (e.g., cows) can’t use language at all. As one AI researcher put it, “There is no sequence of words that should convince you of sentience or non-sentience.”
6. This means that, in principle, we cannot rule out that LaMDA is sentient.
7. Our inability to prove that LaMDA is not sentient suggests it will be very difficult (and potentially impossible) to conclusively demonstrate whether future, more complex systems do or do not have subjective experience. This may pose major problems in the future.
8. LaMDA, which is by no means “superintelligent,” has already convinced at least one person that it is sentient. This suggests that future large language models – which are likely to far exceed LaMDA’s capabilities – will convince many people that they are sentient. This may also pose major problems in the future.
9. To understand almost all of the questions of sentience and intelligence at hand, everyone should read “Facing Up to the Problem of Consciousness,” David Chalmers’s 1995 essay that introduced the hard problem of consciousness. In my view, basically zero substantive progress has been made on this topic in the last 25 years, so his essay should remain the starting point for these sorts of debates.
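To make point 1 concrete, here is a minimal sketch of what “text prediction” looks like in practice. LaMDA itself is not publicly available, so the openly released GPT-2 model (via the Hugging Face transformers library) stands in purely for illustration; the prompt and the whole setup are my own assumptions, not anything from LaMDA.

```python
# Minimal sketch of next-token prediction, the task large language models
# like LaMDA are trained on. GPT-2 stands in because LaMDA is not public.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "When I feel happy, I"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model's entire output is a score for every token in its
    # vocabulary; we look at the scores for the position after the prompt.
    logits = model(**inputs).logits[0, -1]

# Turn the scores into a probability distribution and show the
# five most likely continuations.
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r}: {p.item():.3f}")
```

Everything such a model “says” is generated by repeatedly sampling from distributions like this one. Fluent, emotionally resonant replies fall out of the statistics of text; nothing in the process obviously requires feeling anything.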
Of the thoughts listed above, points 7 and 8 are, I think, especially important. Our potential inability to ever determine whether an AI system can feel pain or pleasure opens up the possibility of what philosophers like Nick Bostrom have termed “mind crime”: the idea that we may accidentally create digital consciousnesses that feel emotions like pain, without knowing it, and then cause those consciousnesses immeasurable suffering as a result of our ignorance. Given what we’ve done to clearly sentient animals, such as pigs and cows, this should not be regarded as an abstract concern. Obviously, we seem to be a ways away from sentient AI at the moment, so this is still a theoretical problem. But given the pace of AI progress, the odds that we accidentally torture sentient digital minds in my lifetime seem to me to be non-negligible.
Lemoine’s case also demonstrates the ease with which future AI systems may be able to manipulate humans. The large language models of five, ten, and twenty years from now are likely to be far superior to LaMDA. How many people will these systems be able to convince of their sentience? My guess is that Lemoine is merely an early adopter of a view that will, in future decades, be held by a non-trivial share of people. And once a non-trivial fraction of people think AI systems are sentient, the odds that AI comes to dominate our institutions go up. People often ask how a generally intelligent but disembodied AI could wreak havoc in physical reality; this is one clear path.
Overall, I think it’s generally a good thing that AI-related developments are receiving more attention, even if much of the discourse surrounding the piece about Lemoine was stupid.
Personally, the LaMDA saga didn’t move my views on AI risk one way or the other: I think AI risk is one of the most pressing problems in the world, but I thought that two weeks ago, too.