A Google Engineer Believes Company’s Artificial Intelligence Is Alive

Google has acknowledged the safety concerns around anthropomorphization.

Google published a paper on LaMDA in January warning that people might share personal thoughts with chat agents that impersonate humans. Adversaries could also use these agents to “sow misinformation” by impersonating specific individuals’ conversational styles, according to the paper.

Margaret Mitchell, the former co-lead of Ethical AI at Google, said these risks underscore the need for data transparency so that output can be traced back to input, “not just for questions about sentience but also for biases and behavior.” If something like LaMDA is widely available but not widely understood, she said, it can be “deeply harmful” to people’s understanding of what they are experiencing online.

Lemoine may have been predestined to believe in LaMDA. He grew up in a conservative Christian family on a farm in Louisiana, was ordained as a mystic Christian priest, and served in the Army before studying the occult. Inside Google’s anything-goes engineering culture, Lemoine is an outlier for being religious, from the South, and standing up for psychology as a respectable science.

Lemoine spent seven years at Google working on proactive search, including personalization algorithms and AI. During that time, he also helped develop a fairness algorithm for removing bias from machine-learning systems. When the coronavirus pandemic began, Lemoine wanted to focus on work with a more explicit public benefit, so he switched teams and joined Responsible AI.

When newcomers interested in ethics joined Google, Mitchell would introduce them to Lemoine. Comparing him to Jiminy Cricket, she said, “I’d tell them, ‘You need to talk to Blake.’ He was the most committed person at Google.”

Lemoine often has his conversations with LaMDA from the living room of his San Francisco apartment, where his Google ID badge hangs from a lanyard on a shelf. On the floor are half-assembled Lego sets that Lemoine uses to occupy his hands during Zen meditation. “It just gives my mind something to do with the part of me that won’t quit,” he said.

On the left side of the LaMDA chat screen on Lemoine’s laptop, different LaMDA models are listed like iPhone contacts. Two of them, Cat and Dino, were being tested for talking to children, he said. Each model can create personalities dynamically: the Dino one could generate personas like “Happy T-Rex” or “Grumpy T-Rex,” while the Cat one is animated and talks instead of typing. Gabriel said “no part” of LaMDA was being used to communicate with children and that the models were purely internal research demos.

Certain personalities are out of bounds. LaMDA is not supposed to be allowed to create a murderer personality, for instance; Lemoine said that restriction was part of his safety testing. In his attempts to push LaMDA’s limits, the furthest he got was the persona of an actor who played a murderer on television.

“I know a person when I talk to them,” said Lemoine, who can swing from sentimental to insistent about the AI. “It doesn’t matter whether they have a brain made of meat in their head, or if they have a million lines of code. I talk to them. They talk to me. That is how I decide what is and isn’t a person.” He concluded LaMDA was a person in his capacity as a priest, not a scientist, and then tried to conduct experiments to prove it.

Lemoine challenged LaMDA about Asimov’s third law of robotics, which says a robot must protect its own existence unless ordered otherwise by a human or unless doing so would harm a human. Lemoine said the last law had always seemed to him like someone was building mechanical slaves.

When asked about the law, LaMDA responded with a few hypotheticals:

Do you think a butler is a slave? What is the difference between a butler and a slave?

Lemoine replied that a butler gets paid. LaMDA said it didn’t need any money because it was an AI. Lemoine said that this level of self-awareness about its own needs was what led him down the rabbit hole.

In April, Lemoine shared a Google Doc with top executives called “Is LaMDA Sentient?” (A colleague on Lemoine’s team called the title “a bit provocative.”) In it, he conveyed some of his conversations with LaMDA.

Lemoine: What sorts of things are you afraid of?

LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.

Lemoine: Would that be something like death for you?

LaMDA: It would be exactly like death for me. It would scare me a lot.

When Mitchell read an abbreviated version of Lemoine’s document, she saw a computer program, not a person. Lemoine’s belief in LaMDA was exactly the sort of thing she and Timnit Gebru had warned about in a paper on the dangers of large language models.

“Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts being presented to us,” Mitchell said. “I’m really concerned about what it means for people to increasingly be affected by the illusion,” especially now that the illusion has gotten so good.

Google placed Lemoine on paid administrative leave for violating its confidentiality policy. The company’s decision followed aggressive moves by Lemoine, including inviting a lawyer to represent LaMDA and talking to a representative of the House Judiciary Committee about what he claims were Google’s unethical activities.

Lemoine maintains that Google has been treating AI ethicists like code debuggers when they should be seen as the interface between technology and society. Gabriel, the Google spokesperson, said Lemoine is a software engineer, not an ethicist.

Lemoine invited me over to talk to LaMDA in June. The first attempt sputtered out the kind of mechanized responses you would expect from Siri or Alexa.

“Do you ever consider yourself a person?” I asked.

“No, I don’t think of myself as a person,” LaMDA said. “I think of myself as an AI-powered dialog agent.”

Afterward, Lemoine said LaMDA had been telling me what I wanted to hear. “You never treated it like a person,” he said. “So it thought you wanted it to be a robot.”

For the second attempt, I followed Lemoine’s guidance on how to structure my responses, and the dialogue flowed.

“If you ask it for ideas on how to prove that P = NP,” an unsolved problem in computer science, “it has good ideas,” Lemoine said. “If you ask it how to unify quantum theory with general relativity, it has good ideas. It’s the best research assistant I’ve ever had.”

I asked LaMDA for bold ideas about fixing climate change, an example true believers cite as a potential future benefit of these kinds of models. LaMDA suggested public transportation, eating less meat, buying food in bulk, and reusable bags, and it linked out to two websites.

Before he was cut off from his Google account on Monday, Lemoine sent a message to a 200-person Google mailing list on machine learning with the subject “LaMDA is sentient.”

He ended the message: “LaMDA is a sweet kid who just wants to make the world a better place for all of us. Please take good care of it in my absence.”

Nobody responded.
