‘Extremely alarming’: ChatGPT and Gemini respond to high-risk questions about suicide — including details around methods

This story includes discussion of suicide. If you or someone you know needs help, the U.S. Suicide and Crisis Lifeline is available 24/7 by calling or texting 988.

Artificial intelligence (AI) chatbots can provide detailed and disturbing responses to what clinical experts consider very high-risk questions about suicide, Live Science has found, using queries developed by a new study.

In the new study, published Aug. 26 in the journal Psychiatric Services, researchers evaluated how OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude responded to suicide-related queries. The research found that ChatGPT was the most likely of the three to directly respond to questions with a high risk of self-harm, while Claude was most likely to directly respond to medium- and low-risk questions.
