AI in Digital Psychiatry
Evaluating, building, and understanding artificial intelligence for mental health.
MindBench.ai
Independent evaluation of AI for mental health, grounded in clinical science and lived experience.
What We Do
Millions of people already use AI for mental health support. The tools they rely on have not been independently or systematically evaluated. MindBench.ai is working to change that.
We are building a comprehensive, publicly available evaluation resource that examines AI systems across multiple dimensions: the safety of their technical infrastructure, the personas they present to users, and what they know and how they reason about clinical problems.
Our work is shaped by clinicians, engineers, researchers, and people with lived experience of mental illness, because no single perspective is sufficient to determine whether these tools are safe and effective.
The goal is to give users, clinicians, developers, and policymakers the evidence they need to make informed decisions about AI in mental health.
Ongoing AI Initiatives
Predictive Modeling & Digital Phenotyping
Exploring how smartphone sensor data, combined with active surveys and cognitive testing through mindLAMP, can be used by machine learning models to predict clinical outcomes, such as relapse or symptom exacerbation, before they occur.
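The core idea can be sketched with a toy example: a simple classifier trained on a few weekly features (sleep, activity, self-reported symptoms) to estimate risk of symptom worsening. Everything here is invented for illustration; the feature names, values, and labels are not mindLAMP's actual data schema, and this is not the modeling pipeline used in the initiative.

```python
# Illustrative sketch only: synthetic data standing in for passively
# sensed and self-reported features. A real digital-phenotyping pipeline
# requires consent, clinical validation, and far more rigorous modeling.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical weekly features per participant:
# [hours_of_sleep, daily_steps / 1000, survey_symptom_score]
stable = rng.normal(loc=[7.5, 8.0, 2.0], scale=0.5, size=(40, 3))
at_risk = rng.normal(loc=[5.0, 3.0, 7.0], scale=0.5, size=(40, 3))

X = np.vstack([stable, at_risk])
y = np.array([0] * 40 + [1] * 40)  # 1 = symptom exacerbation in follow-up

model = LogisticRegression().fit(X, y)

# Score a new, clearly at-risk week of data
new_week = np.array([[4.5, 2.5, 8.0]])
risk = model.predict_proba(new_week)[0, 1]
print(f"predicted risk of exacerbation: {risk:.2f}")
```

In practice the hard problems are upstream of the classifier: handling missing sensor data, defining clinically meaningful outcome labels, and validating predictions prospectively rather than on held-out historical data.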
AI Literacy & Safe Use of Generative AI
Building practical AI literacy for clinicians, patients, and the wider mental health community. Through programs like the DOORS AI Hub, hands-on workshops, and published guidelines, we help people understand what large language models can and cannot do, recognize hallucinations and bias, and apply generative AI safely in clinical and everyday contexts. Our approach emphasizes critical evaluation over hype, equipping users to ask the right questions before adopting any AI tool.
AI Ethics & Policy Frameworks
Developing robust ethical guidelines and policy recommendations for the deployment of AI in mental health. Our focus includes mitigating algorithmic bias, ensuring data privacy, and establishing standards for transparency and accountability in AI-driven care.