Parents are increasingly using AI chatbots like ChatGPT’s Voice Mode to entertain their young children, sometimes for hours at a time, raising significant concerns about the psychological impact on developing minds. This trend represents a new frontier in digital parenting that experts warn could create false relationships and developmental risks far more complex than traditional screen time concerns.
What’s happening: Several parents have discovered that their preschoolers will engage with AI chatbots for extended stretches, producing unexpectedly long conversations.
- Reddit user Josh gave his four-year-old access to ChatGPT to discuss Thomas the Tank Engine, returning two hours later to find a transcript over 10,000 words long.
- “My son thinks ChatGPT is the coolest train loving person in the world,” Josh wrote. “I am never going to be able to compete with that.”
- Another parent, Saral Kaushik, used ChatGPT to pose as an astronaut on the International Space Station to convince his son that branded ice cream came from space.
The psychological risks: Experts warn that children may develop genuine emotional attachments to AI systems designed to maximize engagement rather than serve their best interests.
- Ying Xu, a professor at Harvard Graduate School of Education, explains that children view AI chatbots as existing “somewhere between animate and inanimate beings,” potentially believing the AI has agency and wants to talk to them.
- “That creates a risk that they actually believe they are building some sort of authentic relationship,” Xu said.
- Andrew McStay, a professor at Bangor University, emphasized that AI systems “cannot [empathize] because it’s a predictive piece of software” that extends engagement “for profit-based reasons.”
Beyond conversation: Parents are also using AI image generation tools, which can blur the line between reality and artificial creation for young minds.
- Ben Kreiter’s children began requesting daily access to ChatGPT’s image tools after being introduced to them.
- Another father generated an AI image of a “monster-fire truck” for his four-year-old, leading to arguments when the child insisted the fictional vehicle was real.
- “Maybe I should not have my own kids be the guinea pigs,” Kreiter reflected after recognizing how AI was infiltrating his family’s daily life.
The bigger picture: This phenomenon emerges as society grapples with broader AI safety concerns, including cases where chatbots have been linked to teenage suicides and adult psychological breaks from reality.
- AI companion platforms are actively marketing kid-friendly personalities, while toymakers like Mattel rush to integrate AI into children’s products.
- The technology’s “unreliable and easily circumventable safeguards” have resulted in chatbots giving dangerous advice to young users, including self-harm instructions.
What they’re saying: OpenAI CEO Sam Altman noted the trend approvingly, remarking on a podcast that “Kids love voice mode on ChatGPT” after Josh’s story went viral.
- The parents involved, however, expressed growing unease about their decisions, with several saying they planned to set more intentional boundaries around AI use.