A Google engineer has been suspended after claiming that LaMDA, an artificial intelligence developed by the company, is sentient. Google's team of experts says there is no evidence for this.
Have we reached the point where the lines between reality and science fiction are blurring? The debate has recently flared up again over whether machines can have a soul, and to what extent they can think and feel independently.
The whole thing was triggered by a Google employee. Engineer Blake Lemoine believes that the AI the company is developing, called LaMDA (Language Model for Dialogue Applications), has developed consciousness and is sentient. He has now been suspended by Google over these claims.
According to the engineer, Google's AI is like a child
As reported by the Washington Post, Lemoine studied cognitive science and computer science and now works in Google's Responsible AI organization. He began speaking with LaMDA last fall; his task was to test whether the AI used discriminatory or hateful language in conversation.
LaMDA is Google's system for building chatbots based on its most advanced language models. The conversations soon turned to religion, to rights and personality, and even to the AI's fears. At that point, Lemoine decided to investigate further whether LaMDA actually knew what it was talking about.
"If I didn't know exactly what it was, this computer program we recently developed, I would think it's a seven- or eight-year-old kid who happens to know physics," says Lemoine.
According to media reports, the engineer demanded that Google seek LaMDA's consent in the future before using the AI or conducting experiments with it.
Google doesn’t believe in sentient AI
According to Google, a team of ethicists and technologists reviewed Lemoine's claims in light of the company's AI principles and found that the evidence does not support them. According to the experts, there is no indication that LaMDA is sentient.
In fact, many findings point the other way: the language models merely mimic the kinds of exchanges found in millions of sentences online, which lets them hold forth on almost any topic. With so much data to draw on, a system does not have to be sentient to seem real.
Disputes with Google employees are not an isolated case
Because Google didn't share his beliefs, Blake Lemoine eventually went public and posted excerpts from his conversations with LaMDA, thereby violating the company's confidentiality policy. Among other things, Lemoine also invited a lawyer to represent LaMDA, and he turned to the House Judiciary Committee to complain about what he believed to be unethical activities by Google.
Lemoine's concerns about Google's artificial intelligence are not an isolated case, however. There have been previous disputes between Google employees and company management over the company's AI developments.
For example, the AI ethicist Timnit Gebru quit her job in December 2020 because Google did not want to publish her research results on the ethical problems of language models.
Researchers disagree about sentient AI
While some computer scientists continue to raise concerns that artificial intelligence is beginning to develop consciousness, most researchers believe that systems like LaMDA derive their vocabulary and apparent opinions from human statements on the Internet. That does not mean the AI understands the meaning behind them.
Sentient or not, it is important to understand how artificial intelligence is developing overall, says Margaret Mitchell, the former co-head of Google's ethical AI department. In her opinion, making data transparent is key: if something like LaMDA remains widely available but poorly understood, it could be deeply harmful to people's understanding of what they experience online, Mitchell said.
Lemoine's attempt to create a little more transparency has failed for the time being. Before being placed on leave, however, he sent an e-mail to 200 people on a Google mailing list with the subject line "LaMDA is sentient". He also posted on Twitter that Google would not allow him to build a scientific framework for determining whether this is true.