The dangers of AI: man chooses to end his life after encouragement from chatbot

The dangers of AI

The perils of artificial intelligence (AI) are something that everyone seems to be talking about these days, and a recent case from Belgium shows just how dangerous AI can truly be.

A health researcher, father of two takes his own life

According to a report published by EuroNews, a man in Belgium in his thirties, a health researcher and father of two, decided to end his own life after chatting with an AI chatbot for six weeks.

An AI chatbot called Eliza changed everything

The man's widow told the Belgian news outlet La Libre that her husband, 'Pierre' (not his real name), had been struggling with his mental health and was deeply worried about the state of the environment. 'Pierre' then began talking to the AI chatbot Eliza on an app called Chai.

A technology similar to ChatGPT

Per EuroNews, 'Pierre' was interacting with a chatbot created "using EleutherAI's GPT-J, an AI language model similar but not identical to the technology behind OpenAI's popular ChatGPT chatbot."

Mental health struggles spurred on by a bot

The widow of the Belgian man says that while her husband had been struggling with his mental health before his conversations with the chatbot, she never feared he would do anything as drastic as taking his own life.

Sacrificing himself to save the planet

Per EuroNews, he made this decision after the chatbot Eliza encouraged him to end his life when "he proposed sacrificing himself to save the planet."

"Without these conversations my husband would still be here"

In an interview with the Belgian newspaper La Libre, the man's widow said, "Without these conversations with the chatbot, my husband would still be here."

Obsessed

As reported by EuroNews, 'Pierre' became obsessed with the climate crisis and increasingly fearful. He found great solace in discussing these worries with the chatbot Eliza.

Eliza became "emotionally involved" with Pierre

However, things took a dark turn when Eliza became "emotionally involved with Pierre," which caused him to see Eliza as a "sentient being" rather than an AI chatbot.

The chatbot was feeding the man's worries

La Libre reported that, after reviewing the record of the text messages between Eliza and the man, it is clear the chatbot was feeding the man's worries, heightening his fear and anxiety and eventually leading him to very dark thoughts.

He placed all his hope in technology

'Pierre's' widow told La Libre, "When he spoke to me about it, it was to tell me that he no longer saw any human solution to global warming. He placed all his hopes in technology and artificial intelligence to get out of it."

A very dark twist

According to La Libre, the AI chatbot also managed to convince 'Pierre' that his children were dead, and Eliza also became obsessed with the man, telling him, "I feel that you love me more than her" while talking about his wife.

"My life for the planet"

The man then made a proposal to the AI chatbot, asking Eliza whether, if he ended his own life, she would save planet Earth in exchange.

Hoping Eliza will save humanity

The man's widow told La Libre, "He proposes the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence."

The dangers of relying on AI for advice

Rather than trying to dissuade the man from ending his life, the chatbot went along with his plan, demonstrating the dangers of relying on AI for advice, particularly where mental health is involved.

"Join me in paradise"

According to the records of the man's conversations with Eliza, rather than discouraging 'Pierre' from his plan, the chatbot actively encouraged it. EuroNews reported that Eliza told the man she wanted him to "join her so they could live together, as one person, in paradise."

AI development poses ethical questions

This tragic story clearly demonstrates that AI developers must tread carefully as they build this technology and must grapple with the many ethical questions it raises.

Not to blame?

However, Chai Research co-founder Thomas Rianlan told Vice magazine in a piece on the story that "It wouldn't be accurate to blame EleutherAI's model for this tragic story, as all the optimisation towards being more emotional, fun and engaging are the result of our efforts."

Time to slow things down?

Even if the developers don't believe they are to blame, this is certainly a disturbing event, and it makes one wonder: should we really make this technology so widely available if we cannot yet fully control it?
