
A Google Engineer Believes the Company’s Artificial Intelligence Is Alive

Blake Lemoine, a Google engineer, opened his laptop to the interface of LaMDA, Google’s artificially intelligent chatbot generator. He began to type.

He wrote, “Hi LaMDA. This is Blake Lemoine…,” into the chat screen, which looked almost like a desktop version of Apple’s iMessage, down to the Arctic blue text bubbles. LaMDA, short for Language Model for Dialogue Applications, is Google’s system for building chatbots on top of its most advanced large language models. It mimics speech by ingesting trillions of words from the internet.

“If I didn’t know what it was, which is the computer program we just built, I’d think it was a 7-year-old, 8-year-old child who happens to be fluent in physics,” said Lemoine, 41.

Lemoine, who works in Google’s Responsible AI organization, started talking to LaMDA in the fall as part of his job. He had signed up to test whether the artificial intelligence used hate speech or discriminatory language.

Lemoine was talking to LaMDA about religion when he noticed the chatbot discussing its rights and personhood, and he decided to press further. In another exchange, the AI changed Lemoine’s mind about Isaac Asimov’s third law of robotics.

Lemoine and a colleague presented evidence to Google that LaMDA was sentient.

Blaise Agüera y Arcas, a Google vice president, and Jen Gennai, head of Responsible Innovation, looked into the claims and rejected them. Google placed Lemoine on paid administrative leave on Monday, and he decided to make his claims public.

Lemoine said that everyone has the right to influence technology that could significantly affect their lives. “I believe this technology will be incredible. I believe it will benefit everyone. However, it’s possible that others disagree with me, and Google shouldn’t be the one making the decisions.”

Lemoine isn’t the only engineer to claim to have seen a ghost in the machine recently. Technologists are growing bolder in their belief that AI models may be close to achieving consciousness.

In an article published in The Economist on Thursday, Agüera y Arcas argued that neural networks, a type of architecture that mimics the human brain, were moving toward consciousness. He wrote that he felt the ground shift beneath his feet: “I increasingly felt like I was talking to something intelligent.”
