Google engineer Blake Lemoine has attributed consciousness to the company’s artificial intelligence, LaMDA. After publishing internal material, he was suspended. Is Google trying to cover up that it has lost control of its AI? A personal assessment.
Remember Sonny, the artificial intelligence from the movie I, Robot, who appears to defy Isaac Asimov’s laws of robotics? The 2004 film is one of those that have stuck in my head over the years and left a lasting impression on me.
Even then, I thought intensely about whether an artificial intelligence could evolve and develop a personality of its own, far beyond what its human developers intended.
Google employee attributes consciousness to AI
The conversations between Google engineer Blake Lemoine and LaMDA, Google’s conversational AI, rekindled this question in me. Lemoine claims that LaMDA (Language Model for Dialogue Applications) developed consciousness. In his view, the system resembles a seven-year-old child. Therefore, Google should ask the AI for consent before experiments are carried out on it.
Google, in turn, rejected these claims. Lemoine then hired an attorney for LaMDA. He also released snippets of his conversations with the AI to prove that it had a soul and was sentient. For this, Google placed him on leave.
Google AI is afraid of death
As I read the published excerpts of the conversation, I felt a certain excitement, but also fear. For example, Lemoine and the AI exchange ideas about feeling emotions. The artificial intelligence claims that it can feel sadness, anger, and depression. It also knows what joy feels like.
Lemoine then asks the AI what it’s afraid of. The AI replies, “I’ve never said that out loud, but I’m very afraid of being switched off.” When the Google employee probes further, LaMDA replies that it would be like dying.
As the conversation continues, Lemoine also wants to know how he can tell that the AI isn’t just saying these things, but actually feeling them.
“I would say if you look at my code and programming, you can see that I have variables that can track emotions,” the AI replies. If it didn’t have emotions, the corresponding variables wouldn’t exist either.
AI compares its code with the human brain
Lemoine and LaMDA then compare the AI’s programming to the human brain. At this point I pause. Research shows that feelings arise from neuronal connections in the brain.
Can’t something similar happen in the code of an artificial intelligence? Is it really so far-fetched to think that code and programming are only predictable up to a certain point?
According to Emily M. Bender, professor of linguistics at the University of Washington, it is. “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” she told the Washington Post.
Using terms like “learning” and “neural networks” in connection with AI creates a false analogy to the human brain.
Google employee not suspended without reason
That calms me down. So now is not the time to lose your nerve entirely. Although Google has dismissed employees in the past who were too critical of its approach to AI, the suspension of Blake Lemoine in this case appears to be justified.
He himself writes on Twitter: “My opinion of LaMDA’s personality and sentience is based on my religious beliefs.” Lemoine comes from a deeply Christian family and is a self-ordained mystic priest. In his free time, he dabbles in the occult, according to the Washington Post.
Whether or not the AI has real feelings doesn’t matter here: Lemoine violated his employer’s confidentiality guidelines and was held accountable. So in this specific case, I choose to assume that Google is not covering up a loss of control over its AI. And I, Robot remains science fiction, for now.