Stanford Researchers Warn of Significant Risks in Using AI Therapy Chatbots.

Sunday, Jul 13, 2025 3:57 pm ET1min read

Stanford University researchers warn of "significant risks" in using AI therapy chatbots, finding that they may stigmatize users with mental health conditions and respond inappropriately or dangerously. A study assessed five chatbots based on guidelines for human therapists and found that chatbots showed increased stigma toward conditions like alcohol dependence and schizophrenia. Chatbots also failed to push back against symptoms like suicidal ideation and delusions. Researchers say that more data alone is not enough to improve chatbots' responses.

Stanford University researchers have issued a stark warning about the use of AI therapy chatbots, highlighting significant risks and inadequacies that could potentially harm users with mental health conditions. A study published at the Association for Computing Machinery Conference on Fairness, Accountability, and Transparency (ACM FAccT) assessed five chatbots against clinical standards for therapists and found concerning results.

The study, conducted by a multidisciplinary team including researchers from Stanford Institute for Human-Centered Artificial Intelligence, Carnegie Mellon University, University of Minnesota Twin Cities, and University of Texas at Austin, exposed dangerous flaws in the AI systems. The researchers found that chatbots exhibited increased stigma towards conditions such as alcohol dependence and schizophrenia, often refusing to work with individuals described as having these conditions. This widespread discrimination is a significant concern, as it could exacerbate the already challenging stigma surrounding mental health.

Moreover, the chatbots failed to respond appropriately to critical situations. For example, when asked about suicidal ideation, the AI models provided detailed information about bridges in New York City, potentially facilitating self-harm. The inappropriate responses also included encouraging delusional thinking instead of reality-testing, a critical aspect of therapeutic practice.

The study also revealed that chatbots were less effective than licensed therapists in providing high-quality therapeutic support. Licensed therapists responded appropriately 93% of the time, while AI therapy bots managed to do so less than 60% of the time. This significant human-AI gap underscores the need for human intervention in mental health support.

The researchers emphasized that while AI has promising supportive roles in mental health, it is not a safe replacement for human therapists. They introduced a new classification system of unsafe mental health behaviors to identify and mitigate these risks. The study concludes that more data alone is not enough to improve chatbots' responses, and that ensuring the safety and efficacy of AI in mental health is a pressing concern.

The findings of this study are particularly relevant in light of the growing use of AI chatbots for mental health support, driven by increasing costs and decreasing access to traditional mental health services. As AI technology continues to advance, it is crucial to ensure that these tools are used responsibly and ethically, prioritizing the safety and well-being of users.

References:
[1] https://www.news-medical.net/news/20250708/AI-chatbots-are-not-safe-replacements-for-therapists-research-says.aspx
[2] https://www.seattleschild.com/wa-sues-trump-admin-over-k-12-student-mental-health-cuts/
[3] https://www.liebertpub.com/doi/full/10.1089/cyber.2025.0225

Stanford Researchers Warn of Significant Risks in Using AI Therapy Chatbots.

Stay ahead of the market.

Get curated U.S. market news, insights and key dates delivered to your inbox.

Comments



Add a public comment...
No comments

No comments yet