Team Human morale booster: UW researchers show kids how people outperform AI with puzzles

University of Washington researchers have developed a puzzle game that demonstrates AI’s limitations to children, showing that humans consistently outperform AI models on simple visual reasoning tasks. The game addresses a critical gap in AI literacy education, helping kids understand that artificial intelligence isn’t the all-knowing technology many perceive it to be.

Why this matters: Children often view AI as “magical” and infallible, especially those who can’t yet fact-check AI responses due to limited reading skills or subject knowledge.

  • The visual puzzle format allows non-readers to directly experience AI failures, fostering critical thinking about technology limitations.
  • “When it comes to AI technologies in general, there is a huge sense of trust that kids have with these devices,” said UW researcher Aayushi Dangol.

How it works: The game asks children to solve visual puzzles, then prompts them to instruct the AI to complete the same task using text hints; the basic loop is sketched in code after the list below.

  • AI consistently fails even when given specific directions and hints from users.
  • The puzzles require abstraction and reasoning skills that humans possess but current AI models lack.
  • Multiple AI models are included to show that different systems struggle with the same challenges.
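Here is a minimal sketch of that loop in Python. It is not the UW team's actual code: the border-drawing rule, the example grid values, and the `human_solution` and `build_prompt` names are all invented for illustration. It only mirrors the two steps described above, a person applying the rule by hand and a child's text hint being packaged with the grid into a prompt that could be handed to any chat model.

```python
# Illustrative sketch only; not the UW researchers' implementation.
# Grids follow the ARC convention: 2D lists of ints 0-9, one int per colored cell.
from typing import List

Grid = List[List[int]]

# Hypothetical example task: "draw a border of 1s around the grid".
train_input: Grid = [[0, 0], [0, 0]]
train_output: Grid = [
    [1, 1, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
]

def human_solution(grid: Grid) -> Grid:
    """What a child does by hand: wrap the grid in a border of 1s."""
    width = len(grid[0]) + 2
    return [[1] * width] + [[1] + row + [1] for row in grid] + [[1] * width]

def build_prompt(hint: str, grid: Grid) -> str:
    """Combine the child's text hint and the test grid into a prompt for a chat model."""
    rows = "\n".join(" ".join(str(cell) for cell in row) for row in grid)
    return (
        "Solve this grid puzzle. Hint from the player: "
        f"{hint}\nInput grid:\n{rows}\nReply with the output grid only."
    )

if __name__ == "__main__":
    assert human_solution(train_input) == train_output  # the human rule reproduces the example
    print(build_prompt("add a border around it", [[0, 0, 0], [0, 2, 0], [0, 0, 0]]))
```

In the actual game the same prompt is sent to several different models, which is what lets kids watch multiple systems miss the same pattern even after a helpful hint.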

The big picture: The game is based on François Chollet’s 2019 Abstraction and Reasoning Corpus (ARC), a set of grid puzzles designed to be easy for humans but difficult for machines; a toy example of the task format appears after the list below.

  • It serves as a benchmark for measuring AI capabilities and has been adapted by UW researchers for educational use.
  • The tool is available online for anyone to try.
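For readers who want to poke at the benchmark itself, the public ARC repository (github.com/fchollet/ARC) stores each task as a small JSON file of "train" and "test" input/output grid pairs, with cells as integers 0-9 standing for colors. The snippet below loads a task in roughly that shape; the puzzle content here is made up for illustration.

```python
import json

# Approximate shape of a task file in the public ARC repository;
# the specific grids below are invented, not a real ARC task.
task_json = """
{
  "train": [
    {"input": [[0, 0], [0, 0]],
     "output": [[1, 1, 1, 1], [1, 0, 0, 1], [1, 0, 0, 1], [1, 1, 1, 1]]}
  ],
  "test": [
    {"input": [[3, 3], [3, 3]],
     "output": [[1, 1, 1, 1], [1, 3, 3, 1], [1, 3, 3, 1], [1, 1, 1, 1]]}
  ]
}
"""

task = json.loads(task_json)
for pair in task["train"]:
    print("input :", pair["input"])
    print("output:", pair["output"])
```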

What the kids are saying: Students testing the game at UW’s KidsTeam camp expressed both frustration and enlightenment about AI’s limitations.

  • “I don’t think (AI) knows what a border means,” one child observed while playing.
  • Ten-year-old Zoe Blumenthal reflected: “AI is supposed to be this magical computer mind that can do anything, and instead it is this.”
  • The experience led her to appreciate human creativity: “Creativity is something the mind makes up. It doesn’t have to be told information (like AI).”

What researchers think: The game creates valuable learning opportunities by highlighting the gap between AI promises and performance.

  • “Clearly, there is this discrepancy between what (AI) is saying and what (AI) is producing,” Dangol noted.
  • Jason Yip, who runs UW’s Information School design lab, emphasized the importance of building confidence: “I think it is actually important for people to know even at a young age that the machines aren’t smarter than you, they just do different things and different tasks in this way.”
