Artificial Intelligence seems to have woven itself into just about every corner of our daily lives. What began as a convenient information source has evolved into a nuanced tool that is redefining therapy culture. For students experiencing stress and anxiety, chatbots like ChatGPT or Replika can feel like a lifeline. And honestly, I can see why; they provide consistent guidance and let you engage on your own terms, without the pressures of an appointment or fear of judgment. Unlike traditional therapy, AI never sleeps - it offers 24/7 support, even during those middle-of-the-night crises when no one else is readily available.
For many, it’s simply that it’s easier to be vulnerable with a computer than with another person, regardless of their qualifications. Some research even suggests that AI-generated messages feel more empathetic and carry a more positive sentiment than those written by licensed counselors.
The benefits are undeniable, but I still have my doubts. AI chatbots can never truly understand the complexity of human emotion, no matter how sophisticated their responses may seem. Their guidance is ultimately generated from data patterns that predict what you are most likely to want to hear, rather than from lived experience or professional judgement. As a result, there is always a risk that the guidance will be generic, misleading, or even harmful.
And if something does go wrong, who is accountable? A therapist can be held responsible for harmful advice, but who takes the blame when a chatbot gives guidance that makes things worse? Regulation isn’t keeping pace with the rate of development, leaving students in a grey area where support is offered without the proper safeguards.
We already trust AI with vast amounts of personal information, but should we really hand over something as personal as our mental health? Licensed therapists are bound by confidentiality, but who ensures our protection here? This isn’t like sharing your shopping habits; it’s intimate details shared in moments of trust and vulnerability. Reducing them to just another data point feels, frankly, unethical.
Now, I’m not suggesting we should completely dismiss the use of AI in student wellbeing. There’s no denying that it can provide short-term support and ease some of the strain on overstretched public services. But equally, it would be harmful to encourage a reliance that erodes our real coping mechanisms. Instead, we must find a balance: use AI as a supplement, not a substitute. The bottom line is that AI lacks what makes human support irreplaceable: genuine connection and understanding.
If you are struggling, remember you are not alone. Don’t hesitate to reach out:
Samaritans: 24/7 free support. Call 116 123 or visit samaritans.org.
Nightline: Confidential listening service run by students, for students. Visit nusu.co.uk/support/welfare-centre/5/nightline.
NHS urgent support: If you’re in crisis, dial 111 (or 999 if it’s an emergency).
Newcastle Student Wellbeing Services: Free long-term support and guidance on a wide range of topics. Visit https://www.ncl.ac.uk/wellbeing/.