72% of kids now use AI chatbots as companions, prompting safety legislation

Tennessee Senator Marsha Blackburn is pushing federal legislation to protect children from potentially harmful AI interactions as new research reveals 72% of kids now use chatbots as companions. The Kids Online Safety Act would require AI platforms to exercise a “duty of care” when minors are involved, establishing clearer limits on what artificial intelligence systems can show or say to young users.

What you should know: Children are increasingly turning to AI chatbots for friendship and support, creating an unprecedented child safety challenge that’s outpacing parental awareness.

  • Nearly three-quarters of children now use AI chatbots as companions, according to Oliver Roberts, a law professor at Washington University School of Law.
  • “This is moving very fast,” Roberts said. “Children using AI chatbots as companions is now an epidemic.”
  • Parents often don’t realize how extensively their children are interacting with AI systems, from homework help apps to chatbots that mimic classmates.

The legislative response: Federal and state lawmakers are scrambling to create safeguards for AI interactions involving minors.

  • Senator Blackburn helped reintroduce the Kids Online Safety Act this year, which would establish “duty of care” requirements for AI platforms when children are involved.
  • The legislation would “put safeguards in place, prevent sexually explicit material and also alert parents when they are engaging in such conversations,” Roberts explained.
  • In October, 44 state attorneys general, including Tennessee’s, sent letters to major tech companies like Apple, Google and Microsoft urging them to address harmful AI content directed at children.

State-level action: California has already moved forward with new protections while other states consider similar measures.

  • Governor Gavin Newsom signed legislation requiring AI platforms to disclose when users are interacting with artificial intelligence.
  • The California law restricts AI systems from sharing sexually explicit or suicide-related content with minors and mandates annual reports to state authorities.
  • “The question now is whether Tennessee will follow a similar path or if the federal government will step in with a one-size-fits-all policy,” Roberts said.

Why this matters: The rapid adoption of AI companions by children has created a regulatory gap that lawmakers are rushing to fill before potential harms become widespread.

  • Traditional online safety measures weren’t designed for AI interactions that can feel deeply personal and human-like to children.
  • The federal legislation could set national standards for how AI systems interact with minors across all platforms and applications.
