Brown University has launched a new AI research institute focused on developing therapy-safe artificial intelligence assistants capable of “trustworthy, sensitive, and context-aware interactions” with humans in mental health settings. Brown is one of five universities awarded grants totaling $100 million from the National Science Foundation, in partnership with Intel and Capital One, as part of efforts to boost US AI competitiveness and align with the White House’s AI Action Plan.
Why this matters: Current AI therapy tools have gained popularity due to their accessibility and low cost, but Stanford University research has warned that existing large language models contain biases and failures that could have “dangerous consequences” for vulnerable users.
What they’re building: Brown’s approach differs fundamentally from existing AI therapy systems by grounding development in cognitive science and neuroscience rather than in traditional language models alone.
What they’re saying: “Any AI system that interacts with people, especially those who may be in states of distress or other vulnerable situations, needs a strong understanding of the human it’s interacting with, along with a deep causal understanding of the world and how the system’s own behavior affects that world,” said Ellie Pavlick, an associate professor of computer science at Brown who is leading the project.
The bigger picture: This initiative represents a shift toward more scientifically grounded AI development for sensitive applications, addressing growing concerns about the safety of current AI therapy tools while preserving their accessibility benefits.