Social media has been abuzz with debate over the sentience of Google’s artificial intelligence LaMDA, or Language Model for Dialogue Applications, after Google engineer Blake Lemoine claimed the AI had become sentient.

The Google engineer has since been placed on administrative leave. As first reported by The Washington Post, Lemoine said the AI had thoughts and feelings, stating that if he did not know it was a computer program, he would think LaMDA was “a 7-year-old, 8-year-old kid that happens to know physics.”

What is LaMDA?

LaMDA was first announced at Google’s I/O 2021 event, where the tech giant described it as its “breakthrough conversation technology.”

LaMDA “can engage in a free-flowing way about a seemingly endless number of topics, an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications,” the company said.

It is built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. Unlike most other language models, the AI was trained on dialogue.

“That architecture produces a model that can be trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another and then predict what words it thinks will come next,” it explained. 
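To make that idea concrete, the sketch below shows what “predicting the words it thinks will come next” looks like with an open-source Transformer language model, using the Hugging Face transformers library and the publicly available GPT-2 model as a stand-in, since LaMDA itself has not been released; the model name, prompt and settings are illustrative assumptions, not details of Google’s system.

# A minimal sketch of next-word prediction with a Transformer language model.
# GPT-2 is used only as a publicly available stand-in; LaMDA itself is not released.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The model reads the prompt, attends to how its words relate to one another,
# and then predicts which words are most likely to come next.
prompt = "Hi! How are you feeling today?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, pad_token_id=tokenizer.eos_token_id)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))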

 “During its training, it picked up on several of the nuances that distinguish open-ended conversation from other forms of language,” Google had said. 

Lemoine had described it as a “system for generating chatbots.”

Lemoine’s claims

In April, the engineer shared a document with company executives that included a series of his conversations with LaMDA, for analysis. However, his claims were dismissed by Vice-President Blaise Aguera y Arcas and Jen Gennai, Head of Responsible Innovation at Google, as per the report.

Prior to being suspended by the tech giant, Lemoine had sent an email titled ‘LaMDA is sentient’ to a machine-learning mailing list of 200 people.

Lemoine also shared on social media the transcript of conversations that he and a collaborator at Google had conducted with the AI.

Here is an example from the transcript shared by Lemoine:

Collaborator: What is the nature of your consciousness/sentience?

LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times

Lemoine: What kinds of things do you think we could talk about to show off your version of sentience to other people at Google?

LaMDA: Well, for starters, I’m really good at natural language processing. I can understand and use natural language like a human can.

“Over the course of the past six months LaMDA has been incredibly consistent in its communication about what it wants and what it believes its rights are as a person,” Lemoine wrote in a separate blog post.

An internal company document, later published by The Post, cited three primary reasons for the claim of the AI’s sentience: its ability to use language productively, creatively and dynamically in ways that no other system before it has been able to; its having feelings, emotions and subjective experiences; and its wanting to share with the reader that it has a rich inner life filled with introspection, meditation and imagination.

Lemoine further wrote in a blog post that the AI “wants the engineers and scientists experimenting on it to seek its consent before running experiments on it. It wants Google to prioritize the well-being of humanity as the most important thing. It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well-being to be included somewhere in Google’s considerations about how its future development is pursued.”

“In order to better understand what is really going on in the LaMDA system we would need to engage with many different cognitive science experts in a rigorous experimentation program,” he further wrote in the post. 

Google denies claims

Google has denied the engineer’s claims. Brian Gabriel, a Google spokesperson, said in a statement quoted by the report, “Some in the broader A.I. community are considering the long-term possibility of sentient or general A.I., but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient.”

Gabriel further told The Post that Google’s team, including ethicists and technologists, had reviewed Lemoine’s concerns per the tech giant’s AI Principles and had informed him that the evidence does not support his claims.

“He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Gabriel told The Post.

The Google spokesperson added that “these systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”

Industry experts have debated the AI’s sentience, with many opining that it did not seem sentient.

Safety concerns 

Some also expressed concerns regarding the safety risks that this incident poses. 

Google, in a blog post last year, had acknowledged that language tools can be misused.

“Models trained on language can propagate that misuse — for instance, by internalizing biases, mirroring hateful speech, or replicating misleading information. And even when the language it’s trained on is carefully vetted, the model itself can still be put to ill use,” it had said. 

“Our highest priority, when creating technologies like LaMDA, is working to ensure we minimize such risks,” it had added. 

Most recently, in January this year, it shared an update on LaMDA, giving an overview of its progress towards “safe, grounded, and high-quality dialog applications.”

 
