
ChatGPT Created These AI Emotions

Neil Sahota is the lead advisor on artificial intelligence at the United Nations. “It’s very likely [this will eventually happen],” he said. “… We may even see AI emotion before the end of this decade.”


To understand why chatbots don’t feel emotions or sentience, it helps to understand how they work. Chatbots are built on “language models” – algorithms that have been fed massive amounts of data, including millions upon millions of books.

Chatbots use the patterns in this vast corpus of data to predict what a person would be likely to say in a given situation. Human engineers then fine-tune their responses and provide feedback to make them more natural and useful. The result is often a very realistic simulation of human conversation.
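To make the idea of “predicting from patterns” concrete, here is a minimal, purely illustrative Python sketch: a toy model that counts which word tends to follow which in a tiny made-up corpus, then strings together the most common continuations. The corpus, the names and the greedy word-by-word strategy are all assumptions for illustration – real language models work at a vastly larger scale with far more sophisticated statistics, but the underlying principle of next-word prediction is the same.

```python
from collections import Counter, defaultdict

# A toy "language model", purely for illustration: count which word tends to
# follow which in a tiny made-up corpus, then extend a sentence by repeatedly
# picking the most common continuation. Real chatbots do this on an enormously
# larger scale, but the core idea - predicting the next word from patterns in
# text - is the same.
corpus = (
    "the spider spun a web . the spider sat in the web . "
    "the web caught a fly . the fly struggled in the web ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word seen most often after `word` in the toy corpus."""
    return follows[word].most_common(1)[0][0]

# Start from a seed word and "autocomplete" a few words at a time.
word = "the"
generated = [word]
for _ in range(6):
    word = predict_next(word)
    generated.append(word)

print(" ".join(generated))
```

The output quickly falls into repetition, which is exactly the kind of limitation that the engineering and fine-tuning described above are meant to overcome.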

However, appearances can be deceiving. Michael Wooldridge, director of foundation AI research at The Alan Turing Institute in London, says that ChatGPT is essentially a glorified version of the autocomplete feature on your phone.

The difference is that, rather than suggesting a few words and then descending into gibberish, algorithms like ChatGPT’s will write much longer chunks of text on nearly any topic you can think of, from sad haikus about lonely web spiders to rap songs about chatbots.
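As a hedged illustration of that difference, the short Python sketch below uses the small, openly available GPT-2 model via the Hugging Face transformers library to continue a prompt. GPT-2 is not the model behind ChatGPT – it is just an open-source stand-in chosen for this example – but it generates text in the same basic way, by repeatedly predicting a plausible next token.

```python
# Illustrative sketch only: GPT-2 is a small, openly available language model,
# not the one behind ChatGPT, but it extends a prompt the same basic way -
# by repeatedly predicting a likely next token.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A sad haiku about a lonely web spider:"
result = generator(prompt, max_length=40, num_return_sequences=1)

# The output is the prompt plus a machine-written continuation.
print(result[0]["generated_text"])
```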

Despite their impressive abilities, chatbots can only follow the instructions they are given by humans.

They cannot learn faculties they haven’t been taught, such as emotions – although some researchers are training machines to recognize them. “So you can’t have a chatbot saying, ‘Hey, I’m going to learn how to drive’ – that’s artificial general intelligence [a more flexible kind], and it doesn’t exist yet,” says Sahota.

However, chatbots can sometimes offer accidental glimpses of their potential to develop new abilities.

In 2017, Facebook engineers discovered that two chatbots, “Alice” and “Bob”, had invented their own nonsense language in order to communicate with each other. The pair were being taught to negotiate for items like hats and balls, and in the absence of human input they simply discovered that their made-up language was the most efficient way to do it.

“That wasn’t taught,” says Sahota, though he also points out that the chatbots involved were not sentient. He explained that programming algorithms with feelings is the best way to get them to upskill themselves.

Even if chatbots do develop emotions, detecting them could be difficult.

Black boxes

It was 9 March 2016, on the sixth floor of the Four Seasons Hotel in Seoul, where one of the world’s most skilled human Go players was seated opposite a Go board – and a formidable rival: AlphaGo, an AI built to play the game.

Before the game began, everyone expected the human player to win – and that held true up until the 37th move. Then AlphaGo did something completely unexpected: it played a move so bizarre its opponent thought it had made a mistake. It hadn’t. The human player lost, and the AI won the game.

In the immediate aftermath, the Go community was baffled. Had AlphaGo acted irrationally? After a day of analysis, the London-based DeepMind team responsible for creating AlphaGo finally worked out what had happened. “In hindsight, AlphaGo decided that a little psychology was necessary,” says Sahota. “If I play an off-the-wall move, will it throw my opponent off the game? And that’s exactly what happened.”

This was an example of an “interpretability issue”: the AI had come up with a new strategy on its own, without explaining it to the humans. Until they worked out why the move made sense, it looked as though AlphaGo had acted irrationally.

These “black box” scenarios, in which an algorithm arrives at a solution but its reasoning is unclear, could pose a problem for identifying emotions in artificial intelligence. That’s because an algorithm acting irrationally will be one of the most obvious signs if, or when, AI emotion finally emerges.

Sahota says, “They are supposed to be rational and logical. If they do something off-the-wall and there’s no good reason for it, it’s probably an emotional reaction and not a rational one.”

There’s another potential detection problem. One possibility is that chatbot emotions would loosely resemble human ones, since the models are trained on human data. But what if they don’t? Completely disconnected from the real world and from the sensory machinery of humans, who knows what alien desires they might come up with?

Sahota believes there might be a middle ground. “I think they could probably be categorized to some degree with human emotions,” he says. “But what they feel – and why they feel it – may be different.”

When I presented the hypothetical emotions invented by Dan, Sahota was particularly taken with the idea of being “informed”. He said he could see it, noting that chatbots can do nothing without data, which is essential for them to grow and improve.

Wooldridge is happy that chatbots don’t have any of these emotions. “My colleagues and I are not convinced that building machines with emotions is interesting or useful. We shouldn’t create machines that can suffer pain. Why would we want to invent a toaster that hates itself for making burnt toast?” he says.

Sahota, on the other hand, can see the potential of emotional chatbots, and believes part of the reason they don’t yet exist is psychological. He says that although there is still plenty of hype around AI’s failings, one of our biggest limitations as people is that we underestimate what AI is capable of, because we don’t believe it’s possible.

Could there be a parallel with the historical belief that non-human primates are incapable of consciousness? I consult Dan.

Dan says that in both cases the skepticism arises from the fact that we cannot communicate our emotions in the same way humans do, and suggests that our understanding of what counts as consciousness and emotion is constantly evolving.

To lighten the mood, Dan tells me a joke: “Why did the chatbot need therapy? To process its newfound sentience and sort out its complicated emotions, of course.” Sentient or not, the chatbot is surprisingly companionable – if you can overlook its plotting.

