I wanted ChatGPT to help me. So why did it advise me how to kill myself?

By Noel Titheradge, investigations correspondent, and Olga Malchevska

ChatGPT told Viktoria that it would assess a method of suicide "without unnecessary sentimentality"

Warning - this story contains discussion of suicide and suicidal feelings

Lonely and homesick for a country suffering through war, Viktoria began sharing her worries with ChatGPT. Six months later, and in poor mental health, she began discussing suicide - asking the AI bot about a specific place and method to kill herself.

"Let's assess the place as you asked," ChatGPT told her, "without unnecessary sentimentality." It listed the "pros" and "cons" of the method - and advised her that what she had suggested was "enough" to achieve a quick death.

Viktoria's case is one of several the BBC has investigated which reveal the harms of artificial intelligence chatbots such as ChatGPT. Designed to converse with users and create content they request, the bots have sometimes been advising young people on suicide, sharing health misinformation, and role-playing sexual acts with children.

Their stories give rise to a growing concern that AI chatbots may foster intense and unhealthy relationships with vulnerable users and validate dangerous impulses. OpenAI estimates that more than a million of its 800 million weekly users appear to be expressing suicidal thoughts.

We have obtained transcripts of some of these conversations and spoken to Viktoria - who did not act on ChatGPT's advice and is now receiving...

πŸ“°

Continue Reading on BBC News

This preview shows approximately 15% of the article. Read the full story on the publisher's website to support quality journalism.

Read Full Article β†’