Study finds AI tutors need better student training to improve learning

Johns Hopkins researchers have conducted a pilot study examining how AI chatbots function as classroom co-tutors, testing the technology with 22 middle and high school students in an online medical diagnosis course. The study found no significant differences in learning outcomes between students with and without chatbot access, while revealing that many students used the AI tool in unexpected ways—seeking direct information rather than engaging in the Socratic-style coaching it was designed to provide.

What you should know: The research team embedded a chatbot called “Dr. Smith” into one of two otherwise identical virtual classrooms to act as an AI co-tutor for students working through medical case studies.

  • Students were encouraged to use the chatbot for inquiry-based feedback as they worked through patient diagnoses and anatomy concepts.
  • The AI was programmed to ask Socratic-style questions, such as prompting students to examine blood smears more closely or consider platelet counts when diagnosing conditions like leukemia.
  • Results showed that successful AI integration in classrooms requires careful design, structured teacher guidance, and clear student expectations.

How students actually used it: Many participants interacted with the chatbot differently than researchers intended, highlighting gaps in AI literacy among students.

  • Some students engaged in the intended back-and-forth discussions about patient assessments and diagnostic reasoning.
  • Others simply asked for background information, such as “What are the symptoms of polycythemia?”
  • Occasionally, students attempted to get direct answers to assignments, though the chatbot was designed to redirect these requests back to course materials.

Why this matters: The study reveals that AI’s educational impact depends heavily on how students and teachers are trained to use these tools effectively.

  • “Students also need training on effective ways to interact with large language model chatbots if we hope to improve student learning,” said Kathryn Thompson, director of research at the Johns Hopkins Center for Talented Youth and the study’s lead author.
  • The findings suggest that AI literacy education—teaching students what large language models are, their advantages and disadvantages, and how to critically evaluate responses—will be essential for meaningful educational impact.

What they’re saying: Researchers emphasized that the chatbot was meant to supplement, not replace, human instruction.

  • “We wanted to use it as a potential way to supplement a teacher in the classroom, so it would be another opportunity for the students to learn,” Thompson explained.
  • “Really the whole idea was, why don’t we do the simplest thing, which is take a chatbot, give it some guardrails for safety, offer it to students, and see how they choose to use it?” said Daniel Khashabi, assistant professor in Johns Hopkins’ Department of Computer Science.

Key insights for educators: The research suggests that both teachers and students need more structured training to maximize AI’s educational benefits.

  • Additional instructor training may be crucial for guiding students’ effective chatbot use, with structured, hands-on professional development potentially more effective than self-paced AI training.
  • The study recommends showing students useful interaction methods, teaching them to critically evaluate AI responses, and helping them understand how chatbots can enhance reflective learning.
