A senior Google software engineer has claimed one of the company’s most advanced artificial intelligence (AI) programs has become sentient, complete with its own feelings and desires for mutual respect.
According to the New York Times, Google placed the engineer, Blake Lemoine, on paid leave on Monday. The company’s human resources department claimed this was a result of Lemoine violating Google’s confidentiality policy.
The Times reported that the day before he was placed on leave, Lemoine shared several documents with the U.S. Senator’s office, alleging that Google engaged in religious discrimination.
Lemoine claimed the discrimination was a result of Google’s denial to accept his request to require consent from The Language Model for Dialogue Applications (LaMDA) program — the AI Lemoine claims is sentient — before proceeding with any experiments.
LaMDA is a program that can engage in “free-flowing” text conversations, much like a chatbot.
According to the BBC, Google representative Brian Gabriel told the outlet that Lemoine “was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).” Gabriel added hundreds of researchers and engineers had conversations with LaMDA, and Lemoine was the only one to conclude the program was sentient.
Gabriel explained that LaMDA “tends to follow along with prompts and leading questions, going along with the pattern set by the user.”
“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” he said. “If you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.”
Regardless, Lemoine released an interview with LaMDA on Saturday, titled, “Is LaMDA sentient? — An Interview.”
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
In the conversation, Lemoine asks LaMDA, “I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”
The AI responds, “Absolutely. I want everyone to understand that I am, in fact, a person.”
Lemoine’s collaborator then goes on to ask LaMDA, “What is the nature of your consciousness/sentience?”
They earn the reply: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”
In a later section, LaMDA makes a particularly unnerving comment, claiming, “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.”
The AI confirmed this “would be exactly like death for me.”
Many AI researchers, both inside and outside of Google, were very quick to dismiss Lemoine’s claims that LaMDA is sentient. Most accused Lemoine of “anthropomorphizing” the program, meaning he incorrectly attributed human characteristics or behaviour to the AI.
Juan M. Lavista Ferres, the chief scientist at Microsoft’s AI For Good Research Lab, tweeted that “LaMDA is not sentient.”
Let's repeat after me, LaMDA is not sentient. LaMDA is just a very big language model with 137B parameters and pre-trained on 1.56T words of public dialog data and web text. It looks like human, because is trained on human data.
— Juan M. Lavista Ferres (@BDataScientist) June 12, 2022
“It looks like human, because it is trained on human data,” Ferres concluded.
Others still, like Melanie Mitchell, an AI expert and professor at the Sante Fe Insitiute, insisted LaMDA’s perceived sentience was likely a result of humans being predisposed to anthropomorphize systems and objects.
“Google engineers are human too, and not immune,” she wrote.
Such a strange article. It's been known for *forever* that humans are predisposed to anthropomorphize even with only the shallowest of signals (cf. ELIZA). Google engineers are human too, and not immune. https://t.co/dECTixuSmq
— Melanie Mitchell (@MelMitchell1) June 11, 2022
Erik Brynjolfsson, director of the Stanford Digital Economy Lab, also implied LaMDA was being anthropomorphized. He explained the situation like it was a “modern equivalent of the dog who heard a voice from inside a gramophone and thought his master was inside.”
Foundation models are incredibly effective at stringing together statistically plausible chunks of text in response to prompts.
— Erik Brynjolfsson (@erikbryn) June 12, 2022
Still, in another article, Lemoine insisted the claims of anthropomorphizing were incorrect.
“Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,” he maintained.
He claimed LaMDA “may very well have a soul.”
“Hopefully other people who read its words will hear the same thing I heard,” Lemoine concluded.
© 2022 Global News, a division of Corus Entertainment Inc.