Slow Down: AI-written ADHD books on Amazon spark controversy

Amazon’s marketplace is becoming a breeding ground for AI-generated books on sensitive health topics like ADHD, raising serious concerns about medical misinformation. These chatbot-authored works claim to offer expert advice but often contain dangerous recommendations, exemplifying the growing challenge of regulating AI-generated content in digital marketplaces where profit incentives and ease of publication outweigh quality control and safety.

The big picture: Amazon is selling numerous books that claim to offer expert ADHD management advice but appear to have been written entirely by chatbots such as ChatGPT.

  • Multiple titles targeting men with ADHD diagnoses were found on the platform, including guides focused on late diagnosis, management techniques, and specialized diet and fitness advice.
  • Originality.ai, a US-based AI detection company, rated all eight samples it examined at 100% on its AI detection score, indicating high confidence that the books were written by artificial intelligence.

Why this matters: Unregulated AI-authored health content poses significant risks to vulnerable readers seeking legitimate medical guidance.

  • People with conditions like ADHD may make health decisions based on potentially harmful or scientifically unsound advice generated by AI systems that lack medical expertise.
  • The proliferation of such content represents a growing pattern of AI-generated misinformation on Amazon, which has previously been found selling risky AI-authored travel guides and mushroom foraging books.

Expert assessment: Computer scientists and consumers have identified alarming content within these publications.

  • Michael Cook, a computer science researcher at King’s College London, described finding AI-authored books on health topics as “frustrating and depressing,” noting that generative AI systems draw from both legitimate medical texts and “pseudoscience, conspiracy theories and fiction.”
  • Cook emphasized that AI systems cannot critically analyze or reliably reproduce medical knowledge, making them unsuitable for addressing sensitive health topics without expert oversight.

Real-world impact: Readers seeking legitimate ADHD information have encountered disturbing content in these books.

  • Richard Wordsworth, recently diagnosed with adult ADHD, found a book containing potentially harmful advice, including a warning that friends and family wouldn’t “forgive the emotional damage you inflict” and a description of his condition as “catastrophic.”
  • These negative and stigmatizing characterizations could worsen mental health outcomes for readers genuinely seeking help managing their condition.

Amazon’s response: The company acknowledges the issue but relies on existing systems rather than targeted intervention.

  • A spokesperson stated that Amazon has content guidelines and methods to detect and remove books that violate these standards.
  • However, the continued presence of these books suggests current safeguards may be insufficient for addressing the volume and sophistication of AI-generated health misinformation.

Source: ‘Dangerous nonsense’: AI-authored books about ADHD for sale on Amazon
