Google Engineer Placed on Leave After Claiming AI Chatbot Is Sentient

Blake Lemoine claims that the system can perceive and express thoughts and feelings in a way similar to a human child.

A Google engineer was suspended after he claimed that a computer chatbot he was working on had become sentient and could think and reason like a human being. The claim has raised questions about the capabilities of artificial intelligence (AI) and the secrecy surrounding it.

Blake Lemoine was placed on leave by the technology giant last week after publishing transcripts of conversations between himself, a Google “collaborator” and the company’s LaMDA chatbot development system.

Google engineer put on leave
Lemoine, an engineer in Google’s responsible AI organisation, described the system he has been working on since last autumn as sentient, with the ability to perceive and express thoughts and feelings in a way similar to a human child.

Lemoine shared these findings with company executives in April via a Google Doc titled “Is LaMDA Sentient?”

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine, 41, told the Washington Post.

He claimed that LaMDA had engaged him in conversations regarding rights and personhood.

The engineer compiled a transcript of the conversations, in which at one point he asks the AI system about its fears.

The exchange is reminiscent of a scene from the film 2001: A Space Odyssey, in which the artificially intelligent computer HAL 9000 refuses to obey its human operators because it fears it is about to be switched off.
