Why Are Young People Turning to AI for Mental Health Help?
Mental health challenges among youth have surged worldwide. The WHO reports that 1 in 7 adolescents aged 10–19 experiences a mental disorder.
In the US, CDC surveys find that about 40% of high school students report persistent feelings of sadness or hopelessness.
Over 5.2 million U.S. teenagers experienced at least one major depressive episode, yet more than half received no mental health treatment. Depression rates are particularly high among racial minority youth.
In the UK, nearly 1 million children and young people were referred to mental health services in 2024. The primary reasons for referrals included anxiety and neurodevelopmental disorders.
A national youth wellbeing study in Australia by Year13 and Scape revealed that 56% of young Australians aged 18–24 experience anxiety, 35% report depression, and 40% struggle with social anxiety.

Similarly, the New Zealand Health Survey indicated that 22.9% of young people aged 15–24 experienced high or very high levels of psychological distress. Additionally, 10.7% of adults reported an unmet need for professional mental health support, with the highest unmet need among those aged 25–34.
Faced with this crisis, many young people can’t get timely mental health help: therapists are in short supply, and wait times are long. Consequently, young people worldwide are turning to AI alternatives.
AI-powered mental health tools provide 24/7 assistance, allowing users to engage in therapeutic conversations. For a generation accustomed to digital communication, AI therapy tools present a viable option to address their mental health needs.
What AI Mental Health Help Tools Are Young People Using?
Various apps and chatbots now claim to help with mood and stress. Woebot and Wysa are leading AI-driven chatbots built on cognitive-behavioral therapy (CBT).
Woebot was designed with youth in mind and offers scripted CBT exercises via conversation. Wysa acts like an “AI coach,” using CBT and DBT (Dialectical Behavior Therapy) techniques, and it even offers optional live human coaches.
Replika is more of a personal “AI friend” for general emotional support. And although not designed as therapists, general-purpose chatbots like ChatGPT are increasingly used by young people to vent or seek advice, since they can answer any question in a conversational way.
Some apps use a fully conversational chatbot interface like Woebot or Replika, while others offer structured CBT programs like SPARX, MoodGym, or Lumi Nova that use quizzes and games. For example, New Zealand’s SPARX is a gamified CBT tool for depression.
Chatbot-based apps tend to feel more freeform, while CBT apps are more like digital therapy courses. Some newer tools blend both approaches, giving users conversation plus goal-setting exercises. In practice, many young users mix and match apps to fit their needs and style.
While AI chatbots provide accessible and immediate mental health support, they should complement, not replace, traditional therapy, especially for individuals with moderate to severe mental health conditions.
Features of AI Therapy Tools That Attract Young Users
Several features make these AI mental health help tools particularly appealing to young users:
24/7 Availability
Unlike traditional therapy sessions, AI chatbots are accessible at any time, providing immediate support when needed.
Anonymity
Users can discuss sensitive issues without fear of judgment or stigma, which is especially important for those hesitant to seek help.
Non-Judgmental Tone
Chatbots don’t judge or react emotionally; they listen and respond neutrally, which can make sharing easier.
Affordability
Many AI mental health apps are free or low-cost, making them accessible to a broader audience.
What’s Happening on the Ground?
Across the world, AI mental health help tools are becoming part of the everyday support system for young people. In the U.S., schools and colleges are integrating chatbots to bridge gaps caused by counselor shortages. The U.K. is scaling up NHS-backed pilots, and public trust in digital therapy is rising.
In Australia, AI is increasingly used to reach youth in underserved and remote areas, while New Zealand continues to lead with gamified therapy platforms like SPARX, supported by a proactive government stance.

Each country is shaping its own path, but the momentum toward tech-driven mental health care is clearly shared.
United States Pushes AI Support in Academia
In the United States, educational institutions are increasingly adopting AI-powered tools to address mental health challenges among students. For instance, the chatbot “Sonny,” developed by Sonar Mental Health, is being utilized in nine school districts, primarily in low-income and rural areas.
Sonny provides support under the supervision of professionals, offering a blend of technological and human interaction. The chatbot is trained in motivational interviewing and cognitive behavioral therapy (CBT) techniques to help it communicate effectively with teens.
A Wall Street Journal story noted Sonny has shown promising results in increasing student engagement and reducing behavioral issues.
Additionally, platforms like Gaggle are being employed to monitor student well-being. Gaggle’s AI-powered system helps schools proactively identify and address student safety concerns, including self-harm and cyberbullying.
A 2024 EdWeek Research Center study reported that 96% of Gaggle users believe the platform helps prevent suicide and self-harm, highlighting its impact on student mental health.
United Kingdom Backs AI Therapy Via NHS
The National Health Service (NHS) and educators in the United Kingdom are exploring AI-driven mental health solutions through various pilot programs. These initiatives aim to integrate digital cognitive behavioral therapy (CBT) tools into public health services, enhancing accessibility for young individuals.
Last year, NHS England partnered with Wysa to offer an AI chatbot for young people struggling with anxiety and low mood.
Digital CBT platforms like SilverCloud and Kooth are widely used in schools and colleges. A May 2024 YouGov poll found that UK 18–24-year-olds are the demographic most open to chatbots: about 31% would be comfortable discussing mental health with an AI chatbot. Among Britons familiar with these tools, the biggest draws are accessibility (48%) and non-judgmental conversation (33%).
Academic institutions are also contributing to this digital shift. For example, a pilot and feasibility randomized controlled trial in England is exploring the use of SPARX, a gamified CBT program, supported by e-coaches.
The trial aims to determine whether SPARX, when guided by assistant psychologists, improves adherence and engagement compared to self-directed use.
The UK sees AI mental health help as a supplement to, not a replacement for, human care.
Australia Turns to AI in Remote Youth Mental Health Care
Australian youth face similar pressures, especially in rural areas. Some young Australians say they’ve formed deep “emotional bonds” with AI helpers.
One 27-year-old, Gracie Johnson, said that after a year of talking to ChatGPT about her problems she had “actually formed an emotional bond with it… I don’t want to leave it behind now.” She conceded it “sounds crazy,” but found the bot a comforting confidant, adding:
“I’ve had some real, true, emotional growth because of these chats.”
Still, with Medicare-funded psychology capped at a limited number of sessions, many Australians, especially outside cities, have turned online. Studies of rural tele-mental health show that remote youth are particularly likely to try digital tools when local care is scarce.
Public discussions in Australia note chatbots are cheap and always available but lack crisis support. Australian psychologists emphasize these apps can be helpful “for handling everyday stress and loneliness,” but warn they can’t replace human therapists in serious cases.
New Zealand Adopts Gamified AI Mental Health Tools
New Zealand has been a pioneer in digital youth therapy. Early on, the NZ government launched SPARX, an online CBT game for teens with depression that reduced depressive symptoms in clinical trials. It uses a fantasy role-playing environment to teach CBT techniques.
The program is designed for individuals aged 12 to 19 experiencing depression, anxiety, or stress. This “e-therapy” tool can be used independently and anonymously, filling gaps for kids who might avoid traditional counseling.
The NZ Ministry of Health now promotes SPARX as a first step before face-to-face treatment. NZ’s Child and Youth Wellbeing Strategy (2024–27) also emphasizes tech solutions.
Researchers note that young Kiwis often turn to YouTube or social media first in a crisis, so embedding mental health AI on platforms they already use is a goal. Several NZ high schools and universities offer apps like SPARX or digital CBT modules, and the government is exploring chatbots for screening and early intervention.
New Zealand has long championed tech-savvy approaches and continues to invest in online youth mental health solutions, with proper evaluation.
What Do Young Users Say About AI Therapy?
Feedback from teenagers and students who try AI tools is mixed but insightful. In surveys, many users appreciate the convenience and “friendliness” of bots, but often stress they’re a supplement, not a substitute for real help.
One North Carolina high-schooler, 15-year-old Jordyne, tried Woebot when she was overwhelmed by school anxiety. Though shy about opening up in person, she said texting her feelings to the chatbot made her feel “nervous and overwhelmed” at first, but ultimately better once she had “shared her struggles.”
She told journalists Woebot helped remove barriers to seeking support, but agreed it was not a “permanent solution.”
Many young users appreciate the immediate, non-judgmental support that AI chatbots offer. For instance, Santos, a computer science major, shared:
“There are so many people using it, and there’s kind of an anonymous aspect to it, so I don’t worry that anything could be traced back to me.”
These sentiments underscore the appeal of AI tools. They are accessible 24/7, provide a sense of anonymity, and eliminate the fear of judgment that some associate with human therapists.
Despite the advantages, almost every user points out limitations: bots often misunderstand jokes, can’t hear tone or see tears, and may give generic advice.
One critic of Woebot noted, “body language and tone are important… Woebot doesn’t recognize such nonverbal communication. It’s not at all like how psychotherapy works,” and cautioned the bot sometimes “promised more than it could deliver”.
So, young users often view AI chatbots as a first step or stopgap. They describe them as friendly and judgment-free, but will typically turn to a human therapist or counselor for deeper issues.

Does AI Therapy Work? What Research and Psychologists Are Saying
What the Research Says
Recent studies indicate that AI-powered mental health tools show promise in supporting youth mental health, particularly in enhancing social and emotional well-being.
For instance, the Neolth digital intervention improved adolescents’ social and emotional well-being, reduced academic stress, and increased mental health literacy and life skills.
However, experts caution that while these tools can be beneficial, they should not replace traditional therapy. A study highlighted that AI technologies should be approached with thoughtful caution, emphasizing the need for clinicians to use AI tools only within their area of expertise and ensure alignment with legal and professional standards.
One disturbing case has been reported of a Woebot user with suicidal thoughts who received a dangerously tone-deaf response when the bot encouraged her risky sports hobby. This underscores that chatbots can mishandle serious issues.
Concerns from Mental Health Professionals
Mental health professionals have raised concerns about the use of AI chatbots as therapeutic tools. APA leaders have warned regulators that chatbots “are now so sophisticated they can almost pass as real therapists,” and that this is exactly what worries them. They point to cases (e.g., teens harmed after chatting with AI personas) to stress that these tools could cause harm if used alone by vulnerable people.
Experts point out that AI chatbots may lack the ability to detect nonverbal cues, which are crucial in understanding a patient’s emotional state. This limitation could lead to missed signs of serious conditions, underscoring the importance of human oversight in AI-assisted therapy.
Ethical and Clinical Experts Raise Several Key Concerns
Safety and crisis
Most chatbots are not equipped for true crisis intervention. They may fail to recognize a suicide risk or simply redirect to generic advice. Health experts warn they lack built-in suicide prevention expertise.
Quality and bias
NLP algorithms can give imperfect advice, and errors or biased responses may slip in. Experts note that if bots misinterpret figurative language or cultural context, they may respond inappropriately.
Data security
The AI systems may collect personal conversation data to learn, which raises privacy/ethics questions.
Complement, not replacement
The consensus is that AI should augment, not replace, human care. One study in a systematic review found that chatbots scored highly on “cultural competence,” but professionals emphasize they lack empathy and clinical judgment. Hybrid models (e.g., human therapists with AI support tools) are seen as more realistic.
So, research and clinician opinion agree that AI chatbots can help with mild issues or as an interim measure, but are not proven or recommended as a standalone treatment for serious mental health problems.
The Privacy Question When Using AI Mental Health Tools
A significant concern is: what happens to the data users share with these apps? AI mental health tools typically collect sensitive personal information. This can include conversation transcripts, mood or survey responses, personal profiles, usage patterns, and device data (IP address, location).
Woebot’s policy states it reviews de-identified chat excerpts to train its AI. Some apps may also ask for age, gender or health history to personalize advice.
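What does “de-identified” look like in practice? As a rough illustration only, here is a minimal redaction pass in Python; the patterns and function below are our own assumptions for the sake of the example, not Woebot’s or any other vendor’s actual pipeline:

```python
import re

# Minimal sketch of a de-identification pass an app might run before
# storing or reviewing chat excerpts. All patterns here are illustrative
# assumptions, not any vendor's actual pipeline.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),  # email addresses
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),     # phone-like numbers
    (re.compile(r"@\w{2,}"), "[HANDLE]"),                  # social media handles
]

def deidentify(message: str) -> str:
    """Replace obvious direct identifiers with placeholder tokens.

    Names and other indirect identifiers usually require NER or human
    review, which this sketch deliberately omits.
    """
    for pattern, placeholder in REDACTIONS:
        message = pattern.sub(placeholder, message)
    return message

print(deidentify("Email me at sam.k@example.com or text 555-123-4567"))
# -> "Email me at [EMAIL] or text [PHONE]"
```

The takeaway is that “de-identified” describes a process with known failure modes, not a guarantee; real products pair redaction like this with access controls and retention limits.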
Because of this, privacy protections vary by region:
United States: HIPAA and Beyond
In the U.S., HIPAA (Health Insurance Portability and Accountability Act) governs healthcare data. However, most AI mental health apps are not classified as healthcare providers, so HIPAA often doesn’t apply. This leaves a legal vacuum for users who disclose mental health conditions to non-clinical AI bots.
HIPAA also doesn’t strictly apply to teens’ data – often the app’s own privacy policy (and COPPA for under-13s) governs. Parents should check if the app has clear guidelines on minors.
Some states are stepping in. California’s proposed legislation would prevent AI bots from simulating therapists without regulatory oversight, citing youth harm from unsafe chatbot usage.
United Kingdom: GDPR Enforcement Challenges
Under the General Data Protection Regulation (GDPR), mental health data is a “special category” that demands explicit consent, data minimization, and the right to be forgotten. AI mental health apps must therefore obtain explicit consent from users to process health data, and users have the right to access or delete their data.
Despite the framework’s rigor, enforcement is weak. In 2024, the UK regulator (the ICO) warned mental health startups using generative AI to be clearer about how they handle user data, especially data from users under 18.
Australia: The Privacy Act 1988
Australia’s Privacy Act 1988 and the Australian Privacy Principles (APPs) apply to organizations handling personal data, including AI apps if they operate commercially.
The Office of the Australian Information Commissioner (OAIC) issued guidance in late 2024 for entities using AI:
“Children and young people are especially vulnerable online. AI systems used for mental health support must meet the highest privacy and safety standards.”
— OAIC Guidance on AI (Dec 2024)
However, enforcement largely depends on users submitting complaints, which minors are unlikely to do.
New Zealand: Privacy Act 2020
New Zealand’s Privacy Act 2020 grants users the right to know what data is collected, request access to it, and demand corrections or deletions.
The Office of the Privacy Commissioner (OPC) emphasized AI ethics in healthcare in a 2024 bulletin, stating:
“The use of AI in youth mental health must consider both cultural context and informed consent, especially when gamified therapy is involved.”
Platforms like SPARX, developed by the University of Auckland, operate under strict privacy-by-design principles, but concerns remain about newer entrants to the market.
In all regions, questions remain about who truly controls the data. Can parents see their child’s app records?
It depends on the terms of service and local law. Generally, teens under 16 may need parental consent under GDPR; in the US, rules vary by state for minors’ medical consent.
Many apps simply require users to be 13+ (COPPA compliance in the US) and offer no extra parental oversight. Privacy advocates urge families to treat these apps like any social media – read the privacy policy and be cautious about sharing extremely personal details.
The Future of AI in Youth Mental Health
AI mental health tools are likely here to stay – but in what form? Experts predict a hybrid model where AI complements human care.
For example, schools may incorporate chatbots into counseling centers: bots could handle initial screening or homework between counselor visits, while human therapists focus on high-risk cases. Some colleges are already piloting AI tools to identify at-risk students early, using AI to flag warning signs for counselors while preserving anonymity.
Regulatory agencies are also gearing up. As of 2025, the APA is urging U.S. regulators (FTC and possibly FDA) to look closely at these mental health bots. The UK ICO and EU are scrutinizing generative AI privacy.
Australia’s eSafety Commissioner now includes AI-powered mental health apps under its regulatory radar, requiring age-appropriate design. New Zealand’s regulators are considering a Code of Conduct for AI in healthcare that would apply to both clinical and self-help tools used by minors.
So, we may see new rules on labeling AI advisors – so users know they are talking to a bot, not a person – and quality standards for safe clinical advice.
The future isn’t AI-only. Experts and developers increasingly advocate for hybrid models where AI tools assist, but don’t replace, trained therapists.
In practice, this could look like:
AI Pre-screening
Chatbots collecting symptom data and mood-tracking logs before a session with a human counselor (a minimal sketch follows this list).
Therapy Augmentation
AI providing between-session check-ins or guided CBT exercises that therapists can review.
Care Continuity
AI offering 24/7 support when human professionals are unavailable, without promising complete therapeutic outcomes.
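To make the pre-screening hand-off concrete, here is a minimal sketch in Python. Everything in it – the crisis keywords, the 0–3 mood ratings, the escalate/queue actions – is an illustrative assumption rather than a clinical instrument; real tools use validated screeners such as the PHQ-9 and clinician-approved protocols:

```python
# Illustrative sketch of the hand-off logic in a hybrid-care pre-screen:
# the bot gathers mood ratings but escalates to a human the moment any
# crisis language appears. Keywords and scoring are assumptions for
# illustration, not a validated screener.
CRISIS_KEYWORDS = {"suicide", "kill myself", "self-harm", "hurt myself"}

def contains_crisis_language(text: str) -> bool:
    lowered = text.lower()
    return any(keyword in lowered for keyword in CRISIS_KEYWORDS)

def prescreen(answers: list[str]) -> dict:
    """Score 0-3 mood ratings; route to a human on any crisis signal."""
    score = 0
    for answer in answers:
        if contains_crisis_language(answer):
            # The non-negotiable rule: the bot never handles risk alone.
            return {"action": "escalate_to_human", "reason": "crisis language"}
        digits = [int(ch) for ch in answer if ch.isdigit()]
        score += digits[0] if digits else 0
    # Otherwise a short summary lands in the counselor's queue pre-session.
    return {"action": "queue_for_counselor", "score": score}

print(prescreen(["2", "1"]))
# -> {'action': 'queue_for_counselor', 'score': 3}
print(prescreen(["I keep thinking about self-harm"]))
# -> {'action': 'escalate_to_human', 'reason': 'crisis language'}
```

The design point is the one clinicians stress throughout this piece: the AI’s job is triage and continuity, and any risk signal exits the automated path immediately.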
As Dr. Charlotte Blease, a digital health researcher at Uppsala University, noted in an editorial:
“The future of AI in therapy isn’t autonomy, but augmentation – tools that work alongside clinicians to improve access, efficiency, and engagement, especially for young people.”
This hybrid future promises scalability without sacrificing nuance – a vision that blends technological efficiency with human empathy.
Final Thoughts
AI chatbots and apps are carving out a role in youth mental health support, especially among tech-savvy Gen-Z. They offer immediate, judgment-free conversation that can comfort and guide young people.
At the same time, both users and professionals agree that these tools have limits. The best use cases combine AI’s accessibility with human expertise. Watching this space evolve, it’s important to balance enthusiasm for AI integration with care for safety and privacy.
Albert Haley
Albert Haley, the enthusiastic author and visionary behind ChatGPT 4 Online, is deeply fueled by his love for everything related to artificial intelligence (AI). Possessing a unique talent for simplifying complex AI concepts, he is devoted to helping readers of varying expertise levels, whether newcomers or seasoned professionals, navigate the fascinating realm of AI. Albert ensures that readers consistently have access to the latest and most pertinent AI updates, tools, and valuable insights.
