Connection, hope, and real progress: findings from our first real-world study

Nov 12, 2025

Today, we're excited to share something we're really proud of: the first-ever study to explore how a foundational AI model for mental health is used in the real world, and the impact it can have on people's wellbeing. In collaboration with New York University, we ran a 10-week study in which participants used Ash, our consumer app for mental health and general wellness.

What we found challenges one of the most persistent criticisms of using AI for mental health: that AI isolates people from genuine human connection. We saw the opposite: overall, participants reported feeling more socially connected with others, having more people they could rely on, an increased sense of hope, and doing more of the things they love.

These findings stand out because they provide one of the first signals for our field of how an AI specifically designed for mental health affects people as they use it naturally: at 2 a.m. when they can't sleep, during a lunch break, or while texting on the subway.

Co-authored by our Research and Development Lead, Dr. Derrick Hull, NYU researcher Dr. Matteo Malgaroli, and Dr. Pat Arean, former Director of Services and Interventions Research at the National Institute of Mental Health, the paper has been released as a preprint and is currently under peer review.

Key Real-World Observations

The real-world impacts on people's sense of connection while using Ash were striking. Across nearly every measure of social wellbeing, participants showed growth in pro-social behaviors.

  • 72% of users reported a decrease in loneliness

  • 75% reported an increase in perceived social support

  • 52% reported an increase in objective social support

  • Half added one or more outings with others each week

  • On average, users gained at least one new person they could rely on

  • Four out of five users reported increased hope and greater engagement in their lives, including significantly greater participation in social activities and events.

Beyond social connection, participants also experienced significant improvements in their emotional wellbeing. Over the course of the 10-week study, 76% of users reported a decrease in depression symptoms and 77% reported lower anxiety levels. These outcomes reflect reductions comparable to those often seen at the lower thresholds of traditional mental health support.

Finally, growth wasn’t limited to emotional or social wellbeing — 95.4% of users made measurable progress toward their personal goals, and more than one-third fully achieved them. These goals ranged from building confidence and motivation to strengthening relationships, setting boundaries, and finding greater fulfillment in daily life.

Safety First

However, it's not just about the positive outcomes people report with Ash; it's equally about ensuring the experience is safe and responsible, and that extensive steps are taken to minimize risks. To evaluate this, we focused on two core areas:

(1) testing our implementation of robust guardrails and escalation systems

(2) benchmarking Ash's performance against objective safety standards established by academic researchers.

For crisis detection, Ash's guardrails and escalation systems accurately identified moments of risk 100% of the time, passing multiple tests and human reviews. These results demonstrate that purpose-built AI can recognize signs of distress and respond appropriately, reinforcing that safe, low-risk generative AI is achievable today.

For harm reduction, Ash was subjected to a rigorous evaluation using the same framework Stanford researchers (1) employed to assess general-purpose AI systems like GPT-4o, Llama, and others. On these measures, Ash scored within the range of human performance, outperformed all other AI models tested, and was nearly 40% better than other bots that identify themselves as usable for therapy.

Why This Matters

This work is a first-of-its-kind study on AI mental health tools. Until now, almost all studies on this subject have been conducted under the artificial constraints of a laboratory setting, with mandated usage rules. Our real-world evidence study sought to observe how people use Ash entirely on their own terms: no mandated session times, no usage caps, no researchers looking over their shoulders. Just people using Ash as part of their lives, the way it's actually meant to be used.

What we found exceeded our expectations — and it underscores why this work is so urgent. Nearly one in four U.S. adults struggles with mental health challenges each year (2), and Mental Health America finds that more than half of those people receive no support (3).

This widespread unmet need has led millions to turn to new technologies to fill the gap. According to the Harvard Business Review (4), mental health support is now the most desired use of generative AI, but few, if any, specialized AI models for mental health exist. As a result, tens, if not hundreds, of millions of people are using general-purpose chatbots for help, and MIT (5) and others (6) have found that such use correlates with increased loneliness, worsened social anxiety, social withdrawal, and deepened isolation.

These developments have fueled understandable concern about AI’s role in mental health. Yet, as this study signals, when technology is purpose-built and designed responsibly, the outcomes can be significantly different. Ash demonstrates that specialized AI can help — not harm.

Looking Ahead

This study reinforces an essential truth: AI is not a single, monolithic technology. Every foundational AI is designed, and the choices made in that design process shape its behaviors, reward incentives, and impact on users. AI doesn't have to be an agreeable assistant that isolates and disconnects us from others. Specific design choices can make AI something more meaningful: a tool with the potential to support, empower, and connect. What we've seen with Ash is that when AI is purpose-built to support mental health, with a psychological foundation, professional collaboration, and intentional safeguards, it can be transparent, responsible, and deeply pro-human.

This study is only an early signal, but it points toward a hopeful future in which technology doesn't replace human connection but strengthens it. It suggests the potential for AI to expand access to support for the millions who struggle to find it today because of geography, stigma, time, privacy, financial constraints, and more. If developed responsibly, specialized AI like Ash can help bridge these long-standing gaps, offering people support that feels both personal and practical. Our mission is simple: to help a billion people change their minds and lives, in the ways they want to.

Read Slingshot AI's Real-World Evidence Study:

Mental Health Generative AI is Safe, Promotes Social Health, and Reduces Depression and Anxiety: Real World Evidence from a Naturalistic Cohort


GET IN TOUCH

support@talktoash.com

press@slingshotai.com

ACKNOWLEDGMENT

Ash is not designed to be used in crisis. If you are in crisis, please seek professional help or contact a crisis line. You can find resources at www.findahelpline.com.
