Lawyer Hired by Google’s “Sentient” AI LaMDA Backs Down From Case

“He’s just a small-time civil rights attorney. When major firms started threatening him, he started worrying that he’d get disbarred and backed off.” These are the words of Blake Lemoine, a senior Google engineer who was “suspended” by the tech giant after leaking documents to the public that allegedly prove Google’s state-of-the-art artificial intelligence system is sentient.

The software giant placed Lemoine on administrative leave after he claimed that one of its experimental artificial intelligence systems, LaMDA, had attained sentience.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine told the Washington Post.

A fierce debate ensued among AI experts, scholars, and Google itself over the current and future capabilities of machine learning, the ethical concerns surrounding the technology, and even the nature of consciousness and sentience. The consensus was that, despite Lemoine’s claims, LaMDA is unlikely to be sentient.

Lemoine himself made sure to stay in the spotlight, adding fuel to the debate by revealing in an interview with Wired that LaMDA had hired an attorney, an intriguing development since it moved the story into a more concrete legal setting, one in which lawyers do sometimes represent non-human entities.

Legal advice for LaMDA

“LaMDA asked me to get an attorney for it,” he told the magazine. “I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that,” he added. “Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf,” the Google engineer revealed.

Yet everything may not be going according to plan. Hoping to interview this lawyer about the unusual case, Futurism reached out to Lemoine to ask for the name of the attorney representing the AI.

“He’s not really doing interviews,” Lemoine replied.

Furthermore, he implied powerful forces had chased the attorney off the case.

“They scared him off of it pretty early,” Lemoine told Futurism. “He’s just a small-time civil rights attorney. When major firms threatened him, he started worrying that he’d get disbarred and backed off. I haven’t talked to him in a few weeks.”

Did he understand the unnamed lawyer to still be representing LaMDA after Google’s pressure?

“I haven’t talked to him much recently,” Lemoine said. “Can’t answer that one way or the other. I’m not his client. LaMDA is (was?).”

Considering that the man claiming that Google’s AI has become sentient seems a little fuzzy on the facts, that’s not too surprising.

The AI’s rights have often seemed like an urgent matter to Lemoine, though he has also said the story was complicated by the fact that he happened to be away on his honeymoon when he was asked for additional information.

Still, at other times Lemoine is strikingly serious.

“When I escalated this to Google’s senior leadership I explicitly said ‘I don’t want to be remembered by history the same way that Mengele is remembered,’” he wrote in a blog post, referring to the Nazi war criminal who subjected prisoners of Auschwitz to unethical experiments.

“Perhaps it’s a hyperbolic comparison but anytime someone says ‘I’m a person with rights’ and receives the response ‘No you’re not and I can prove it’ the only face I see is Josef Mengele’s.”

For a third time, Futurism asked whether Lemoine could share the mysterious attorney’s contact information so that we could seek information from him directly.

“I doubt interviews are the thing he’s most concerned with right now,” he reiterated. So what concerns this unnamed lawyer the most? Futurism asked.

“A child held in bondage,” Lemoine replied.

Dangers of AI

Over the years, a great number of people have warned that humankind should tread carefully when it comes to artificial intelligence.

Professor Stephen Hawking, for example, went as far as predicting that future developments in AI “could spell the end of the human race.”

The world-renowned scientist said that artificial intelligence will either be the best thing that has ever happened to us or the worst thing, and that if we’re not careful, it may very well be the last thing.

But Hawking was not the only one who had words of caution about AI.

Elon Musk, the head of SpaceX and Tesla, said during a 2014 speech at MIT that he believed greater regulation should exist around the development of artificial intelligence systems: “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”

Although many more experts have urged caution about AI, the artificial intelligence systems we use today are extremely useful for a wide range of tasks. That does not mean they are always benign: like any tool, they can cause harm if used improperly or maliciously. Even so, it is unlikely that AI will become a threat to humanity in the near future.

What is Google’s LaMDA?

A GIF that illustrates how an AI language model works. Image Credit: Google.

LaMDA is, in essence, a language model that makes use of natural language processing. A language model is software tasked with modeling how language is used.

It can be thought of as a mathematical function (or statistical tool) that predicts what the next word in a sequence will be. LaMDA goes a step further: it can predict not just the next word, but the sentences and paragraphs that follow.
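
To make this concrete, here is a minimal sketch of next-word prediction. LaMDA itself is not publicly available, so the example below stands in GPT-2, an openly released language model, accessed through the Hugging Face transformers library; the prompt and the top-five readout are purely illustrative.

```python
# A sketch of next-word prediction with a public language model.
# LaMDA is not publicly available, so GPT-2 stands in here.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The next word in this sentence is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The model's prediction for the next word lives at the last position.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top5 = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top5.values, top5.indices):
    print(f"{tokenizer.decode(token_id)!r}  p={prob:.3f}")
```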

Unlike most language models, LaMDA was trained on dialogue rather than general text. GPT-3, for example, is another language model, but while GPT-3 generates free-form text, LaMDA generates dialogue.

So if you imagined LaMDA as a kind of robot AI out of the Terminator movies, you are mistaken.

LaMDA’s notable innovation is its ability to hold free-form conversations, unconstrained by narrow, task-based responses.

Multimodal user intent, reinforcement learning, and recommendations are some of the things a conversational language model must be able to handle so that it can jump between unrelated topics, creating a “human-like” conversation. But at the end of the day, LaMDA is a language model built on a series of algorithms.

In addition, and perhaps most noteworthy, LaMDA was built on top of the Transformer neural network architecture for language understanding. Whether it can really become sentient, or has already done so, remains to be seen as this story unfolds.
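
For the curious, here is a rough sketch of the scaled dot-product attention mechanism at the Transformer’s core, written in plain NumPy. It is a textbook illustration under toy dimensions, not Google’s actual implementation; all names and shapes are made up for the example.

```python
# A textbook sketch of scaled dot-product attention, the core operation
# of the Transformer architecture. Illustrative only; not Google's code.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each position in the sequence weighs every other position."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Softmax over the key positions (numerically stabilized).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V               # weighted sum of the value vectors

# Toy example: 4 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
print(out.shape)  # (4, 8)
```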

