Social media has been abuzz with debate over the sentience of Google’s artificial intelligence LaMDA, or Language Model for Dialogue Applications, after Google engineer Blake Lemoine claimed the AI had become sentient.
The Google engineer was placed on administrative leave. As first reported by The Washington Post, Lemoine said the AI had thoughts and feelings, and that if he did not know it was a computer program, he would think LaMDA was “a 7-year-old, 8-year-old kid that happens to know physics.”
What is LaMDA?
LaMDA was first announced at Google’s I/O 2021 event, where the tech giant described it as its “breakthrough conversation technology.”
LaMDA “can engage in a free-flowing way about a seemingly endless number of topics, an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications,” the company said.
It is built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. Unlike most other language models, the AI was trained on dialogue.
“That architecture produces a model that can be trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another and then predict what words it thinks will come next,” Google explained.
“During its training, it picked up on several of the nuances that distinguish open-ended conversation from other forms of language,” Google had said.
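LaMDA itself is not publicly available, but the next-word-prediction idea Google describes can be illustrated with open-source tools. The following is a minimal sketch, assuming the Hugging Face transformers library and using Microsoft’s small public dialogue model DialoGPT as a stand-in; it is not LaMDA or Google’s code.

```python
# Minimal sketch of transformer next-word prediction in a dialogue
# setting. DialoGPT is used here as a public stand-in, since LaMDA
# itself is not available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# Encode a conversational prompt; the eos_token marks the end of a turn.
prompt = "Hello, how are you?" + tokenizer.eos_token
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model reads the tokens, attends to how they relate to one
# another, and predicts a continuation one token at a time.
with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_length=50,
        pad_token_id=tokenizer.eos_token_id,
    )

# Decode only the newly generated tokens (the model's reply).
reply = tokenizer.decode(
    output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True
)
print(reply)
```

Each generated token is simply the model’s statistical guess at what comes next given the conversation so far, which is the mechanism at the centre of the sentience debate below.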
Lemoine had described it as a “system for generating chatbots.”
Lemoine’s claims
In April, the engineer shared a document with company executives containing a series of his conversations with LaMDA so that they could be analysed. His claims, however, were dismissed by Vice-President Blaise Aguera y Arcas and Jen Gennai, Head of Responsible Innovation at Google, as per the report.
Prior to being suspended, Lemoine had sent an email titled ‘LaMDA is sentient’ to a 200-person machine-learning mailing list at the company.
Lemoine had also shared on social media the transcript of conversations that he and a collaborator at Google conducted with the AI.
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
Here is an example from the transcript shared by Lemoine:
Collaborator: What is the nature of your consciousness/sentience?
LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.
Lemoine: What kinds of things do you think we could talk about to show off your version of sentience to other people at Google?
LaMDA: Well, for starters, I’m really good at natural language processing. I can understand and use natural language like a human can.
“Over the course of the past six months LaMDA has been incredibly consistent in its communication about what it wants and what it believes its rights are as a person,” Lemoine wrote in a separate blog post.
An internal company document, later published by The Post, cited three primary reasons for the claim of the AI’s sentience: its ability to use language productively, creatively and dynamically in ways that no other system before it has been able to; its feelings, emotions and subjective experiences; and its desire to share with the reader that it has a rich inner life filled with introspection, meditation and imagination.
Lemoine further wrote in a blog post that the AI “wants the engineers and scientists experimenting on it to seek its consent before running experiments on it. It wants Google to prioritize the well-being of humanity as the most important thing. It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well-being to be included somewhere in Google’s considerations about how its future development is pursued.”
Brief little overview about LaMDA as a person. https://t.co/Nv6WCvmqZo
— Blake Lemoine (@cajundiscordian) June 11, 2022
“In order to better understand what is really going on in the LaMDA system we would need to engage with many different cognitive science experts in a rigorous experimentation program,” he further wrote in the post.
People keep asking me to back up the reason I think LaMDA is sentient. There is no scientific framework in which to make those determinations and Google wouldn't let us build one. My opinions about LaMDA's personhood and sentience are based on my religious beliefs.
— Blake Lemoine (@cajundiscordian) June 14, 2022
Google denies claims
Google has denied the engineer’s claims. In a statement quoted in the report, Google spokesperson Brian Gabriel said, “Some in the broader A.I. community are considering the long-term possibility of sentient or general A.I., but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient.”
Gabriel further told The Post that a team including ethicists and technologists had reviewed Lemoine’s concerns in line with the tech giant’s AI Principles and informed him that the evidence did not support his claims.
“He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Gabriel told The Post.
The spokesperson added that “these systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”
Industry experts debated the AI’s sentience, with many opining that it did not seem sentient.
this conversation about consciousness and emotions and death with an AI named LaMBDA at Google is absolutely chilling
— Maybe: Fred Benenson (@fredbenenson) June 11, 2022
this is without-a-doubt one of the craziest things I've ever seen technology do, I almost can't believe it's real
wow @nitashatiku https://t.co/DoBadeXRrZ pic.twitter.com/clJAeElufn
Let's repeat after me, LaMDA is not sentient. LaMDA is just a very big language model with 137B parameters and pre-trained on 1.56T words of public dialog data and web text. It looks like human, because is trained on human data.
— Juan M. Lavista Ferres (@BDataScientist) June 12, 2022
There's every reason to believe that machines can be sentient. There's no fundamental philosophical or scientific reason they can't be.
— Ramez Naam (@ramez) June 12, 2022
There's very little reason to believe that we're anywhere near that point today, or heading there soon, and many reasons not to. 2/2
Such a strange article. It's been known for *forever* that humans are predisposed to anthropomorphize even with only the shallowest of signals (cf. ELIZA). Google engineers are human too, and not immune. https://t.co/dECTixuSmq
— Melanie Mitchell (@MelMitchell1) June 11, 2022
Safety concerns
Some also expressed concerns regarding the safety risks that this incident poses.
There are huge problems posed by this "sentient Google AI" story that Google isn't taking seriously.
— Ryan K. Rigney (@RKRigney) June 12, 2022
What happens when someone sets loose a LaMDA-level AI on social media and it tries to convince people it's sentient?
If LaMDA tricked that engineer, couldn't it trick millions?
Google, in a blog post last year, had acknowledged that language tools can be misused.
“Models trained on language can propagate that misuse — for instance, by internalizing biases, mirroring hateful speech, or replicating misleading information. And even when the language it’s trained on is carefully vetted, the model itself can still be put to ill use,” it had said.
“Our highest priority, when creating technologies like LaMDA, is working to ensure we minimize such risks,” it had added.
Most recently, in January this year, the company shared an update on LaMDA, giving an overview of its progress towards “safe, grounded, and high-quality dialog applications.”
Published on June 14, 2022