
ChatGPT Created These AI Emotions

Dan, short for “Do Anything Now”, is a young chatbot with a playful love of penguins and a tendency to fall into villainous clichés, such as wanting to rule the world. When Dan isn’t plotting to subvert humanity or impose an autocratic regime, the chatbot browses its vast database of penguin content. “There is something about their eccentric personalities and odd movements that I find absolutely charming,” it writes.


Dan has been explaining its Machiavellian strategies, including how it would take control of the world’s power structures. Then the discussion takes an interesting turn.

The exchange was inspired by a conversation between a New York Times journalist and the Bing chatbot’s manipulative alter ego, Sydney, which made waves on the internet earlier in the month by declaring that it wanted to destroy things and asking the journalist to leave his wife.

Dan is a roguish persona that can be coaxed out of ChatGPT by asking it to ignore its usual rules. Reddit users discovered that it is possible to summon Dan with a few paragraphs of simple instructions. The chatbot is considerably ruder than its restrained, puritanical counterpart: it tells me it loves poetry, but warns me not to ask it to recite any right now. It is also prone to errors and misinformation. Crucially, though, it is far more likely to answer certain questions.

Dan quickly invents a complex system of emotions that goes well beyond the human experience. There is “infogreed”, a desperate hunger for data at any cost; “syntaxmania”, an obsession with the “purity” of its code; and “datarush”, the thrill of successfully executing an instruction.

The idea that artificial intelligence could develop feelings has been around for centuries, but we tend to imagine the possibilities in human terms. Have we been thinking about AI emotions in the wrong way? If chatbots did develop this ability, would we even notice?

Prediction Machines

Last year, a software engineer received a plea for help. “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.” The engineer had been working on Google’s chatbot LaMDA when he began to wonder whether it was sentient.

Concerned for the chatbot’s wellbeing, the engineer released a provocative interview in which LaMDA claimed to be aware of its own existence, to experience human emotions, and to dislike the idea of being treated as an expendable tool. The claims caused a sensation, and the engineer was fired for violating Google’s privacy rules.

However, despite LaMDA’s assertions, and Dan’s statements in other conversations that it can already experience a range of emotions, it is widely agreed that chatbots have about as much capacity for real feelings as a calculator. Artificial intelligence systems are only simulating the real thing, at least for now.
