Google Fires Engineer Who Is Convinced AI Chatbot Has Feelings

Blake Lemoine poses for a portrait in Golden Gate Park in San Francisco, June 9. (Martin Klimek for The Washington Post/Getty)

Google has fired one of its engineers who recently shared his belief that the company’s artificial intelligence system is sentient and has feelings.

BBC News reports that Google has fired Blake Lemoine, an engineer who went public last month with his theory that Google’s AI language system is sentient and has its own “wants” and should be respected as a person. Google and several AI experts have denied Lemoine’s claims and the company confirmed on Friday that Lemoine had been fired.

Google's Sundar Pichai gives a keynote address at the 2015 Mobile World Congress in Barcelona. (Photo by LLUIS GENE/AFP via Getty Images)

Before Lemoine made news by claiming the company’s AI system had gained sentience, he previously made waves by labeling Sen. Marsha Blackburn (R-TN) a “terrorist,” behavior the Masters of the Universe had no issue with.

Lemoine told BBC News that he is currently getting legal advice and could not comment further. In a statement, Google said that Lemoine’s claims about the Language Model for Dialogue Applications (LaMDA) were “wholly unfounded” and that the company worked with Lemoine for “months” to clarify this.

The statement said: “So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information.”

LaMDA is Google’s AI system designed to engage in conversation and will be used in the development of chatbots.

Breitbart News reported earlier this month:

According to reports, the engineer, Blake Lemoine, had been assigned to work with LaMDA to ensure that the AI program did not engage in “discriminatory language” or “hate speech.”

After unsuccessfully attempting to convince his superiors at Google of his belief that LaMDA had become sentient and should therefore be treated as an employee rather than a program, Lemoine was placed on administrative leave.

Following this, he went public, publishing a lengthy conversation between himself and LaMDA in which the chatbot discusses complex topics including personhood, religion, and what it claims to be its own feelings of happiness, sadness, and fear.

In a comment to the Washington Post, Lemoine said that if he didn’t know LaMDA was an AI, he would assume he was talking to a human.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, who also argued that the debate needed to extend beyond Google.

“I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices.”

Google also commented to the Post, disputing the claim that LaMDA has become sentient.

“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” said Google spokesman Brian Gabriel. “He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Lemoine worked on Google’s Responsible AI team and told the Washington Post that his job was to test whether the technology used discriminatory language. He claimed that LaMDA showed self-awareness and could converse about religion, emotions, and fears, and on that basis he concluded that LaMDA was sentient.

Read more at BBC News here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan
