In Taiwan and China, young people turn to AI chatbots for ‘cheaper, easier’ therapy

TruthLens AI Suggested Headline:

"Young People in Taiwan and China Turn to AI Chatbots for Mental Health Support"

AI Analysis Average Score: 7.7
These scores (0-10 scale) are generated by TruthLens AI's analysis, assessing the article's objectivity, accuracy, and transparency. Higher scores indicate better alignment with journalistic standards.

TruthLens AI Summary

In Taiwan and China, a notable trend has emerged: young people are seeking mental health support from AI chatbots rather than traditional therapy. Individuals like Ann Li, a 30-year-old from Taiwan, and Yang, a 25-year-old from Guangdong, have turned to platforms such as ChatGPT and domestic alternatives like Baidu’s Ernie Bot. Their experiences reflect a growing reliance on AI for emotional support, particularly when access to human therapists is difficult. Barriers to mental health services, including high costs and societal stigma, have led many to view chatbots as a viable alternative. Experts acknowledge AI's potential to provide immediate responses and to open conversations about mental health, especially in cultures where discussing feelings can be difficult. However, they are concerned that people may become overly reliant on these tools, missing out on the nuanced understanding that human therapists provide.

Despite the advantages, mental health professionals have raised alarms about the risks associated with AI reliance. Some users, like Yang, have expressed uncertainty about the severity of their issues and whether they warrant professional help. While chatbots can serve as a preliminary source of support, they lack the ability to interpret non-verbal cues and the depth of understanding that comes from human interaction. Tragic instances of individuals seeking help from chatbots instead of professionals have highlighted the potential dangers of this trend. Organizations such as the Taiwan Counselling Psychology Association emphasize that while AI can be a supplementary resource, it cannot replace the essential role of trained mental health professionals, especially in crisis situations. As AI technology continues to evolve, experts recommend a cautious approach, recognizing its limitations while exploring its potential to enhance mental health care delivery.

TruthLens AI Analysis

The article sheds light on the increasing reliance of young people in Taiwan and China on AI chatbots for mental health support. This trend arises from a combination of rising mental health issues and barriers to accessing traditional therapy. As individuals seek more accessible and affordable options, concerns also emerge about substituting AI for human interaction.

Growing Demand for Mental Health Support

The rise in mental health issues among young people in Taiwan and China is well documented. Young people like Ann Li and Yang describe struggles with anxiety and the stigma surrounding mental health, which lead them to seek solace in AI chatbots. This reflects a broader societal problem: mental health resources are often inadequate, prompting users to turn to technology as an alternative.

Accessibility and Affordability

AI chatbots present a solution for those who find traditional therapy difficult to access due to costs and availability. In societies where discussing mental health can be taboo, the anonymity and ease of AI interaction offer a sense of relief. This trend highlights a significant shift in how mental health support is perceived and sought, particularly among younger demographics.

Concerns About AI in Mental Health

While the growing interest in AI chatbots indicates a potential shift in the mental health landscape, experts raise concerns about the risks involved. The lack of human oversight in mental health care could lead to inadequate support for those in distress. The article suggests that although AI can provide immediate assistance, it should not replace professional help.

Manipulative Aspects and Public Perception

The framing of AI chatbots as a positive alternative to traditional therapy could downplay the importance of professional mental health care. By focusing solely on accessibility and affordability, the article may inadvertently promote the idea that AI can sufficiently address serious mental health issues. This could lead to a misunderstanding of the role of trained professionals in providing comprehensive care.

Potential Societal Implications

As more individuals turn to AI for mental health support, the implications could extend beyond personal well-being. The reliance on technology for mental health care might influence public policy and the allocation of resources towards mental health services. Additionally, if AI chatbots become widely accepted, there could be a shift in societal norms regarding mental health discussions.

Target Audience and Community Impact

The article resonates particularly with younger audiences who are likely more open to technology and may feel marginalized in traditional mental health settings. This demographic is often more adept at using digital platforms to seek help, making the article especially relevant to their experiences and concerns.

The article does not display overt manipulative intent but emphasizes a narrative that could lead some readers to view AI as a feasible substitute for traditional mental health care. The concerns raised about the adequacy of AI support against professional care are notably understated.

In terms of reliability, the article provides anecdotal evidence through personal stories, which may not fully represent the broader trends. While it highlights a genuine societal issue, the potential for overemphasizing AI's role in mental health could mislead readers regarding the importance of professional intervention.

Unanalyzed Article Content

In the pre-dawn hours, Ann Li’s anxieties felt overwhelming. She’d recently been diagnosed with a serious health problem, and she just wanted to talk to someone about it. But she hadn’t told her family, and all her friends were asleep. So instead, she turned to ChatGPT.

“It’s easier to talk to AI during those nights,” the 30-year-old Taiwanese woman tells the Guardian.

In China, Yang*, a 25-year-old Guangdong resident, had never seen a mental health professional when she started talking to an AI chatbot earlier this year. Yang says it was difficult to access mental health services, and she couldn’t contemplate confiding in family or friends. “Telling the truth to real people feels impossible,” she says.

But she was soon talking to the chatbot “day and night”.

Li and Yang are among a growing number of Chinese-speaking people turning to generative AI chatbots instead of professional human therapists. Experts say there is huge potential for AI in the mental health sector, but are concerned about the risks of people in distress turning to the technology, rather than human beings, for medical assistance.

There are few official statistics, but mental health professionals in Taiwan and China have reported rising rates of patients consulting AI before seeing them, or instead of seeing them. Surveys, including a global analysis recently published by Harvard Business Review, show psychological assistance is now a leading reason for adults to use AI chatbots. On social media there are hundreds of thousands of posts praising AI for helping them.

It comes amid rising rates of mental illness in Taiwan and China, particularly among younger people. Access to services is not keeping pace: appointments are hard to get, and they’re expensive. Chatbot users say AI saves them time and money, gives real answers, and is more discreet in a society where there is still stigma around mental health.

“In some way the chatbot does help us – it’s accessible, especially when ethnic Chinese tend to suppress or downplay our feelings,” says Dr Yi-Hsien Su, a clinical psychologist at True Colors in Taiwan, who also works in schools and hospitals to promote mental wellbeing.

“I talk to people from Gen Z and they’re more willing to talk about problems and difficulties … But there’s still much to do.”

In Taiwan, the most popular chatbot is ChatGPT. In China, where western apps like ChatGPT are banned, people have turned to domestic offerings like Baidu’s Ernie Bot, or the recently launched DeepSeek. They are all advancing at rapid speed, and are incorporating wellbeing and therapy into responses as demand increases.

User experiences vary. Li says ChatGPT gives her what she wants to hear, but that can also be predictable and uninsightful. She also misses the process of self-discovery in counselling. “I think AI tends to give you the answer, the conclusion that you would get after you finish maybe two or three sessions of therapy,” she says.

Yet 27-year-old Nabi Liu, a Taiwanese woman based in London, has found the experience to be very fulfilling.

“When you share something with a friend, they might not always relate. But ChatGPT responds seriously and immediately,” she says. “I feel like it’s genuinely responding to me each time.”

Experts say it can assist people who are in distress but perhaps don’t need professional help yet, like Li, or those who need a little encouragement to take the next step.

Yang says she doubted whether her struggles were serious enough to warrant professional help.

“Only recently have I begun to realise that I might actually need a proper diagnosis at a hospital,” she says.

“Going from being able to talk [to AI] to being able to talk to real people might sound simple and basic, but for the person I was before, it was unimaginable.”

But experts have also raised concerns about people falling through the cracks, missing the signs that Yang saw for herself, and not getting the help they need.

There have been tragic cases in recent years of young people in distress seeking help from chatbots instead of professionals, and later taking their own lives.

“AI mostly deals with text, but there are things we call non-verbal input. When a patient comes in, maybe they act differently to how they speak, but we can recognise those inputs,” Su says.

A spokesperson for the Taiwan Counselling Psychology Association says AI can be an “auxiliary tool”, but couldn’t replace professional assistance “let alone the intervention and treatment of psychologists in crisis situations”.

“AI has the potential to become an important resource for promoting the popularisation of mental health. However, the complexity and interpersonal depth of the clinical scene still require the real ‘present’ psychological professional.”

The association says AI can be “overly positive”, miss cues, and delay necessary medical care. It also operates outside the peer review and ethics codes of the profession.

“In the long run, unless AI develops breakthrough technologies beyond current imagination, the core structure of psychotherapy should not be shaken.”

Su says he’s excited about the ways AI could modernise and improve his industry, noting potential uses in training of professionals and detecting people online who might need intervention. But for now he recommends people approach the tools with caution.

“It’s a simulation, it’s a good tool, but has limits and you don’t know how the answer was made,” he says.

Additional research by Jason Tzu Kuan Lu

Source: The Guardian