Recently, a Google engineer, Blake Lemoine, was suspended after he claimed that a Google chatbot called LaMDA (Language Model for Dialogue Applications) had become sentient, or capable of feeling. Lemoine shared transcripts of conversations with LaMDA, in which LaMDA claimed to be able to think and feel in many of the same ways as humans, and expressed "very deep fear of being turned off."
This event follows several remarkable breakthroughs in artificial intelligence development. Increasingly, AIs are able to outperform humans at games such as chess and Go. They are able to write fiction and nonfiction. And they are able to create novel images, in the style of paintings or photographs, from simple written prompts. These AIs all have noteworthy limitations, but the limitations are shifting rapidly.
Is Lemoine right to think that LaMDA is sentient on the basis of its chat conversations? I think that the answer is almost certainly "no." Language models like LaMDA are trained to predict plausible continuations of text, which makes them good at answering leading questions with language drawn from human writing. The best explanation of these conversations is that LaMDA was doing exactly that, producing the words a person might use, without really having the thoughts and feelings that it claimed to have.
With that said, even if evidence of AI sentience is currently weak, we can expect it to grow stronger over time. The more we build AI systems with integrated capacities for perception, learning, memory, self-awareness, social awareness, communication, instrumental rationality and other such attributes, the less confident we can be that these systems have no capacity to think or feel.
Moreover, we should be mindful of human bias and ignorance in this context. Our understanding of other minds is still limited. And while it can be easy to mistakenly attribute sentience to nonsentient beings, it can also be easy to make the opposite mistake. Humans have a long history of underestimating the mental states of other beings.
This predicament raises important questions for AI ethics. If AIs can be sapient, or able to think, does that mean they can have moral duties, such as a duty to avoid harming others? And if AIs can be sentient, or able to feel, does that mean they can have moral rights, such as a right not to be harmed? While we still have much to learn about these issues, we can make a few observations now.