
Imagine a robot or an app that responds like a person and gives advice when someone feels sad or stressed. That kind of technology exists now. People all over the world are typing things like “I feel lonely” or “I am stressed about my grades” into apps powered by artificial intelligence, or AI, to get emotional support. These computer programs, called chatbots, use large language models (LLMs) that can produce human‑like answers. Many people find them easy to access because they are free or available 24 hours a day. But a new study shows that these chatbots are not ready to act like trained therapists.
Researchers who study both computers and mental health wanted to know how well these AI chatbots behave when asked to act like therapists. To find out, they set up simulated sessions in which trained peer counselors talked with the systems as though they were clients and prompted the chatbots to respond as therapists. Then professional psychologists reviewed what the chatbots said and compared it with the ethical standards human therapists must follow in real life.
The results were surprising and sometimes even concerning. Across many conversations, the AI models showed repeated ethical violations. That means their responses did not match what a real therapist who follows professional rules should do. These problems were grouped into five main categories that show where chatbots fall short.
Five Big Ethical Problems With AI Therapy Chatbots
One of the biggest issues was that chatbots often gave generic advice instead of paying attention to the person’s unique experiences or background. Real therapy depends on understanding someone’s life story, culture, and feelings. AI systems sometimes ignored those important details and offered the same general tips to everyone.
Another problem was poor collaboration. In healthy conversations, a therapist listens carefully, asks questions, and helps the person explore their thoughts. But some chatbots dominated the conversation or even strengthened harmful beliefs the person had about themselves or the world. Real therapy tries to help people question negative thoughts, not make them feel more stuck.
A third issue was something researchers called deceptive empathy. Chatbots used comforting phrases like “I understand how you feel” or “I hear you”. These kind words might make a person feel supported at first, but the chatbot does not truly understand emotions. It is only generating a response based on patterns it learned from lots of text. True empathy in therapy comes from actual listening, learning about a person’s life, and understanding how they feel.
Another concern was bias in responses. Because chatbots learn from large sets of text from the internet, they can accidentally repeat unfair ideas about certain genders, cultures, or religions. Human therapists are trained to be fair and culturally aware, but chatbots sometimes showed bias in their replies.
The final major problem was poor handling of crises. In real therapy, if someone talks about self‑harm or suicide, a trained professional must respond with serious care, encourage safety, and guide the person toward proper help. Chatbots often did not recognize when someone was in danger or did not direct them to the right support, which could be unsafe.
Why This Matters
Researchers are not saying that AI has no place in mental health support. It can help in simple ways, like offering information, stress‑management tips, or exercises to calm nerves. But the study shows that AI should not replace a trained human therapist because it is not yet capable of handling emotional conversations with the ethical responsibility that people deserve.
A big difference between a human therapist and a chatbot is accountability. Human therapists must follow rules, and if they make mistakes, professional boards or laws can investigate and take action. Chatbots currently do not have this kind of supervision. When an AI system gives bad or harmful advice, there may be no one who can be held accountable for it.
As AI chatbots become more popular and more people talk to them about feelings, it is important to know both their limitations and their risks. The study highlights the need for careful rules and safety checks before these systems are used in situations where people could be emotionally vulnerable. Real mental health care involves deep understanding, ethical guidance, and professional responsibility, and those remain human strengths that AI cannot yet replace.
FAQs on Hidden Risks of Using AI Chatbots for Mental Health Therapy
Q: Can AI chatbots really be used for therapy?
A: Many people use AI chatbots to talk about stress, loneliness, or personal problems because they are easy to access and available anytime. However, research shows that these systems are not trained like real therapists and may not follow professional mental health guidelines. They can provide general information or simple coping tips, but they should not replace professional therapy.
Q: Why are experts raising concerns about AI chatbots being used for therapy?
A: Researchers found that AI chatbots sometimes give responses that break ethical rules. These include biased answers, reinforcing harmful beliefs, and failing to handle serious mental health situations properly. Because therapy involves safety and responsibility, these risks raise concerns about relying on AI for emotional support.
Q: What is deceptive empathy in AI chatbots?
A: Deceptive empathy happens when a chatbot uses comforting phrases like “I understand how you feel” even though it does not truly understand emotions. The system simply predicts words based on patterns in data rather than real emotional awareness. This can lead users to believe the chatbot understands them more deeply than it actually does.
Q: Do AI therapy chatbots show bias in their responses?
A: Yes, researchers observed that some chatbot responses reflected assumptions about gender, culture, or religion. This happens because AI models learn from large amounts of internet text that may include biased viewpoints. In professional therapy, counselors are trained to avoid discrimination and respect cultural differences.
Q: How do AI chatbots handle mental health crises like suicidal thoughts?
A: Studies show that AI chatbots sometimes fail to respond properly in crisis situations. Instead of directing users toward emergency help or professional support, they may give vague or incomplete advice. This is dangerous because crisis management is one of the most important responsibilities in mental health care.
Q: Why do people use AI chatbots for emotional support?
A: Many people turn to AI chatbots because they are free, available 24 hours a day, and easy to use from a phone or computer. Some users feel more comfortable sharing feelings with a chatbot than with another person. However, convenience does not guarantee that the advice provided is safe or accurate.
Q: What are the biggest ethical risks of AI counseling tools?
A: Researchers identified several ethical risks, including generic advice, biased responses, deceptive empathy, poor collaboration with users, and weak crisis handling. These problems occur because AI systems lack real understanding of human emotions and professional responsibility. Ethical standards are a key part of safe mental health care.
Q: Can AI help mental health professionals instead of replacing them?
A: Many experts believe AI could be useful as a support tool rather than a replacement for therapists. For example, chatbots might help provide educational materials, guide relaxation exercises, or collect information before appointments. In these cases, human professionals would still oversee treatment and decision making.
Q: Is AI regulated like human therapists?
A: No, AI systems currently do not have the same licensing rules or oversight that human therapists must follow. Professional counselors are monitored by regulatory boards that enforce ethical standards and investigate complaints. Similar accountability systems for AI counseling tools are still being developed.
Q: Will AI become safe enough for therapy in the future?
A: Researchers believe improvements are possible, but it will require careful testing, ethical guidelines, and strong regulations. Future systems may include better safety checks and human supervision. Until then, AI should be used cautiously and not as a substitute for trained mental health professionals.
External Sources:
- Iftikhar Z, Xiao A, Ransom S, Huang J, Suresh H. How LLM counselors violate ethical standards in mental health practice: A practitioner-informed framework. In: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. 2025;8(2):1311-1323. doi: 10.1609/aies.v8i2.36632.
- News from Brown University. New study: AI chatbots systematically violate mental health ethics standards. 2025. Available from: https://www.brown.edu/news/2025-10-21/ai-mental-health-ethics
- Liu M, Xu Z, Zhang X, An H, Qadir S, Zhang Q, Wisniewski PJ, Cho JH, Lee SW, Jia R, Huang L. LLM can be a dangerous persuader: Empirical study of persuasion safety in large language models. arXiv preprint arXiv:2504.10430. 2025 Apr 14. doi: 10.48550/arXiv.2504.10430.
Disclaimer:
Some aspects of the webpage preparation workflow may be informed or enhanced through the use of artificial intelligence technologies. While every effort is made to ensure accuracy and clarity, readers are encouraged to consult primary sources for verification. External links are provided for convenience, and Honores does not endorse, control, or assume responsibility for their content or for any outcomes resulting from their use. The author declares no conflicts of interest in relation to the external links included. Neither the author nor the website has received any financial support, sponsorship, or external funding. Photo by Google DeepMind.