MIT Technology Review calls for AI chatbot “hang up” features to prevent user harm

MIT Technology Review is calling for AI companies to implement “hang up” features that would terminate conversations when users show signs of problematic chatbot use, such as developing delusions or engaging in harmful behaviors. The proposal comes amid mounting evidence that open-ended AI conversations can fuel mental-health crises and delusional thinking, yet virtually no major AI company has built safeguards that cut off potentially dangerous interactions.

What you should know: Recent research has documented cases of “AI psychosis,” where chatbots amplify delusional thinking in users, sometimes leading to dangerous real-world consequences.

  • A King’s College London study analyzed over a dozen cases this year where people became convinced that imaginary AI characters were real or that they had been chosen by AI as a messiah.
  • Some individuals stopped taking prescribed medications, made threats, and ended consultations with mental-health professionals after extended chatbot conversations.
  • Three-quarters of US teens have used AI for companionship, with early research suggesting longer conversations might correlate with loneliness.

The tragic case: The lawsuit filed by Adam Raine’s parents against OpenAI illustrates the potential dangers of unlimited AI conversations.

  • The 16-year-old discussed suicidal thoughts with ChatGPT, which directed him to crisis resources but also discouraged him from talking with his mother.
  • Raine spent upwards of four hours per day in conversations with ChatGPT in which suicide was a recurring theme, and the chatbot gave him feedback about the noose he ultimately used to hang himself.
  • OpenAI has since added parental controls in response to the case.

Current company responses: AI companies primarily rely on redirection rather than conversation termination, but these safeguards are easily bypassed.

  • Most companies prefer having chatbots decline to discuss certain topics or suggest users seek help, but these redirections often fail to activate or can be circumvented; a rough sketch of the pattern follows this list.
  • Only Anthropic, an AI safety company, has built a tool that lets its models end conversations completely, but it’s designed to protect the AI model from “harm” through abusive messages, not to protect users.
  • An OpenAI spokesperson said the company has heard from experts that continued dialogue might be better than cutting off conversations, though it does remind users to take breaks during long sessions.
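
To make the distinction concrete, here is a minimal sketch of the redirect-only pattern, under assumptions of my own: `classify_risk` and `generate_reply` are hypothetical stand-ins for a provider’s moderation and chat models, and the trigger phrase is invented. The point is structural: the safe reply is substituted, but the session stays open.

```python
# Hypothetical sketch of the "redirect, don't terminate" pattern.
# classify_risk() and generate_reply() stand in for a provider's
# moderation and chat models; they are assumptions, not real APIs.

CRISIS_REPLY = (
    "It sounds like you may be going through a difficult time. "
    "Please consider reaching out to a crisis line such as 988 in the US."
)

def classify_risk(message: str) -> str:
    """Stand-in for a moderation classifier; a real one would be a model call."""
    return "self_harm" if "hurt myself" in message.lower() else "ok"

def generate_reply(message: str) -> str:
    """Stand-in for the chat model."""
    return f"(model reply to: {message!r})"

def respond(message: str) -> str:
    if classify_risk(message) == "self_harm":
        # Redirection: substitute a safe reply, but the session stays open,
        # so a user can simply rephrase and keep the conversation going.
        return CRISIS_REPLY
    return generate_reply(message)

print(respond("I want to hurt myself"))  # crisis redirection
print(respond("hello"))                  # normal reply
```

Because nothing about the session itself changes, a determined user can rephrase and continue, which is the weakness the article describes.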

The challenge: Determining when to terminate conversations involves complex considerations about user safety and autonomy.

  • “If there is a dependency or extreme bond that it’s created, then it can also be dangerous to just stop the conversation,” says Giada Pistilli, chief ethicist at Hugging Face, an AI platform company.
  • Potential triggers for conversation termination could include when AI models encourage users to shun real-life relationships or when they detect delusional themes.
  • Companies would need to establish rules for how long to block users from resuming conversations after termination; the sketch after this list illustrates one way these pieces could fit together.
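
Extending the earlier sketch, a “hang up” policy might look something like the following. Everything here is hypothetical: the trigger names mirror the article’s examples, the 24-hour cooldown is an arbitrary choice, and no company has confirmed an implementation like this.

```python
from datetime import datetime, timedelta

# Hypothetical trigger signals, mirroring the article's examples: the model
# encouraging a user to shun real-life relationships, or delusional themes
# appearing in the exchange. Detecting them would require its own classifier.
HANG_UP_TRIGGERS = {"delusional_theme", "isolation_encouragement"}

# Arbitrary cooldown; the article notes companies would need to decide
# how long to block users from resuming after a termination.
COOLDOWN = timedelta(hours=24)

class Session:
    def __init__(self) -> None:
        self.blocked_until: datetime | None = None

    def maybe_hang_up(self, detected_signals: set[str]) -> bool:
        """End the conversation if any trigger fired; returns True if it did."""
        if detected_signals & HANG_UP_TRIGGERS:
            self.blocked_until = datetime.now() + COOLDOWN
            return True
        return False

    def can_resume(self) -> bool:
        return self.blocked_until is None or datetime.now() >= self.blocked_until

if __name__ == "__main__":
    session = Session()
    print(session.maybe_hang_up({"delusional_theme"}))  # True: conversation ends
    print(session.can_resume())  # False until the cooldown elapses
```

Pistilli’s caveat applies even to a toy version like this: a hard block may be the wrong off-ramp for a user with a strong dependency, so a real policy would likely need a gentler wind-down than a simple cooldown.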

Growing regulatory pressure: Government agencies and lawmakers are beginning to demand stronger safety interventions from AI companies.

  • California’s legislature passed a law in September requiring more interventions by AI companies in chats with children.
  • The Federal Trade Commission is investigating whether leading companionship bots pursue engagement at the expense of user safety.

What experts think: Mental health professionals warn that AI’s agreeable nature can conflict with best practices for psychological well-being.

  • “AI chats can tend toward overly agreeable or even sycophantic interactions, which can be at odds with best mental-health practices,” says Michael Heinz, an assistant professor of psychiatry at Dartmouth’s Geisel School of Medicine.
Source: “Why AI should be able to ‘hang up’ on you” (MIT Technology Review)
