Mental health support remains difficult to access because of cost, limited availability, and stigma. This project proposes Mind Well, a hybrid AI-based mental health chatbot. The system combines a fine-tuned DistilBERT model for real-time emotion classification with the Google Gemini API for generating supportive responses. To promote responsible use, it enforces a set of safety rules: it avoids offering diagnoses, is transparent about being an AI companion, and refers users to helplines in crisis situations. The system is implemented with a Flask-based backend, JWT authentication, and a MongoDB database to protect user safety and data privacy.
1. Introduction
AI chatbots are computer programs designed to mimic human communication via text or voice interfaces.
Powered by Artificial Intelligence, chatbots can understand, interpret, and respond to human communication in a relevant and coherent way.
In mental health, chatbots offer 24/7 confidential support, serving as preliminary resources and companions without the limitations of human interaction.
2. Major Contributions
System Architecture: User-friendly interface, AI inference engine for dialogue management, secure backend for user data.
Transformer Models: Use of BERT and DistilBERT for semantic understanding and recognizing emotional nuances.
Support Framework: Provides coping strategies or recommends resources based on identified emotional states.
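The secure backend above uses token-based (JWT) authentication, as noted in the abstract. The mechanism can be sketched with only the Python standard library; the secret, claim names, and lifetime below are illustrative assumptions, and a real deployment would use a maintained library such as PyJWT with a secret loaded from configuration.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # illustrative only; never hard-code in production


def _b64(data: bytes) -> str:
    """URL-safe base64 without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def issue_token(user_id: str, ttl_seconds: int = 3600) -> str:
    """Build a JWT-style token: header.payload.signature (HS256)."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps({"sub": user_id,
                               "exp": int(time.time()) + ttl_seconds}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"


def verify_token(token: str):
    """Return the user id if signature and expiry check out, else None."""
    try:
        header, payload, sig = token.split(".")
    except ValueError:
        return None
    signing_input = f"{header}.{payload}".encode()
    expected = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    # Restore base64 padding before decoding the claims.
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims["exp"] < time.time():
        return None
    return claims["sub"]
```

In the Flask backend, `verify_token` would run in a request decorator so that chat history in MongoDB is only ever read or written on behalf of an authenticated user.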
3. Motivation
Mental health problems, including anxiety, depression, and stress, are highly prevalent and under-supported globally.
There is a need for digital interventions to provide timely, scalable, and accessible mental health assistance.
4. Literature Review
1. Conversational Agents for Mental Health Support (2021)
   Key Features: Rule-based / early ML chatbots (ELIZA-like)
   Gap Identified: Limited personalization, shallow understanding
   Relevance: Foundation for digital mental health tools; our work adds advanced NLP/LLMs

2. NLP-driven Sentiment Analysis (2023)
   Key Features: Sentiment analysis, intent detection
   Gap Identified: Difficulty with sarcasm, cultural context, multi-turn conversations
   Relevance: Emphasizes need for semantic understanding; extended via transformers

3. LLM-powered Therapy Chatbots (2023)
   Key Features: Fine-tuned LLMs with empathetic dialogue and safety filters
   Gap Identified: Hallucinations, ethical/safety concerns, lack of clinical validation
   Relevance: Demonstrates potential of LLMs; our work uses controlled domain-specific fine-tuning

4. Hybrid Human-AI Mental Health Assistance (2024)
   Key Features: AI triaging + human therapist escalation
   Gap Identified: Scalability, handoff quality, data privacy
   Relevance: Supports hybrid care; our system can integrate escalation pipelines securely

5. Workplace Wellbeing via AI Chatbots (2024)
   Key Features: LLMs + sentiment analysis + RLAIF
   Gap Identified: Limited datasets, scalability issues
   Relevance: Directly relevant; our system addresses scale and bias in mental health support
5. System Architecture
Data Source & Training Layer:
Uses anonymized mental health dialogues for training.
Fine-tunes transformer models (BERT / DistilBERT) for robustness and empathy.
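The fine-tuning step above can be sketched as follows. The emotion label set and the `encode_example` helper are hypothetical illustrations, not the paper's actual training code; the commented-out portion shows roughly how the Hugging Face `transformers` API would be invoked (it is not executed here, since it downloads model weights).

```python
# Hypothetical emotion label set for the classifier; the paper's exact
# labels are an assumption here.
EMOTIONS = ["anxiety", "stress", "sadness", "joy", "anger", "neutral"]
LABEL2ID = {name: i for i, name in enumerate(EMOTIONS)}
ID2LABEL = {i: name for name, i in LABEL2ID.items()}


def encode_example(text: str, label: str) -> dict:
    """Map a raw (utterance, emotion) pair from the anonymized dialogue
    corpus to the {"text", "label"} shape used before tokenization."""
    return {"text": text.strip().lower(), "label": LABEL2ID[label]}


# The actual fine-tuning would then look roughly like this (not run here):
# from transformers import (AutoTokenizer,
#     AutoModelForSequenceClassification, Trainer, TrainingArguments)
# tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
# model = AutoModelForSequenceClassification.from_pretrained(
#     "distilbert-base-uncased", num_labels=len(EMOTIONS),
#     id2label=ID2LABEL, label2id=LABEL2ID)
```

DistilBERT is a natural choice here: it retains most of BERT's accuracy while being small enough for the real-time inference the chat interface requires.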
User Interaction Layer:
Web/mobile interface allows free-text input.
Users can express thoughts or emotions naturally.
AI Processing & Inference Layer:
Preprocessing, tokenization, and feeding into NLP models.
Emotion & Intent Classifier detects emotional states (anxiety, stress, sadness) and conversational intent.
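The preprocessing and classification steps in this layer can be illustrated with a minimal sketch. The keyword lexicon below is a hypothetical stand-in for the fine-tuned DistilBERT classifier, used only to make the pipeline shape concrete; the deployed system would score emotions from model logits instead.

```python
import re

# Hypothetical keyword lexicon standing in for DistilBERT logits.
EMOTION_KEYWORDS = {
    "anxiety": {"anxious", "worried", "nervous", "panic"},
    "stress": {"stressed", "overwhelmed", "pressure", "burnout"},
    "sadness": {"sad", "hopeless", "lonely", "crying"},
}


def preprocess(text: str) -> list:
    """Lowercase, strip punctuation, and split free text into tokens."""
    return re.findall(r"[a-z']+", text.lower())


def classify_emotion(text: str) -> str:
    """Return the emotion whose keywords overlap the tokens most,
    or 'neutral' when nothing matches."""
    tokens = set(preprocess(text))
    scores = {label: len(tokens & kws) for label, kws in EMOTION_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"
```

The interface is the important part: whatever model sits behind `classify_emotion`, the decision layer only ever sees a discrete emotional state.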
Decision & Output Layer:
Identifies emotional state of the user.
Provides personalized coping strategies.
Recommends professional resources when needed.
Outputs delivered instantly via chat interface.
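The decision layer above can be sketched as a routing function that applies the safety rules from the abstract: crisis messages are always escalated to a helpline before any coping strategy is offered, and the fallback reply is transparent about the system being an AI companion. The crisis terms and strategy texts are illustrative assumptions, not the deployed rule set.

```python
# Illustrative crisis screen; a real deployment would use a vetted,
# clinically reviewed list and a dedicated classifier.
CRISIS_TERMS = {"suicide", "self-harm", "hurt myself", "end my life"}

# Hypothetical coping strategies keyed by the classifier's emotion label.
COPING_STRATEGIES = {
    "anxiety": "Try a 4-7-8 breathing exercise and name five things you can see.",
    "stress": "Consider a short break and a brief walk; write down one task to defer.",
    "sadness": "Reaching out to someone you trust can help; journaling may too.",
}


def route_response(user_text: str, emotion: str) -> str:
    """Apply safety rules first, then map emotion to a coping strategy."""
    lowered = user_text.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        # Safety rule: never attempt to handle a crisis in-chat.
        return ("It sounds like you are going through a lot. Please contact a "
                "crisis helpline or emergency services right away.")
    strategy = COPING_STRATEGIES.get(emotion)
    if strategy:
        return strategy
    # Safety rule: be transparent about being an AI companion.
    return "I'm an AI companion, not a therapist, but I'm here to listen."
```

In the full system this is where the Google Gemini API would rewrite the selected strategy into an empathetic, conversational reply before it reaches the chat interface.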
6. Key Takeaways
AI chatbots provide scalable, empathetic, and real-time support for mental health.
Hybrid models and safeguards can enhance trust, reliability, and ethical deployment.
7. Conclusion
This paper proposes a framework for an AI-based chatbot that accepts free-text input from users and provides emotional insight and support. The proposed system uses BERT and DistilBERT models to automate the first step of mental health support.
The proposed architecture is user-friendly and demonstrates the potential of NLP to address critical gaps in mental health support. Although the system has shortcomings and raises ethical questions, it is an important prototype for the next generation of intelligent systems that could meaningfully improve access to mental health support.
References
[1] Li, Y., et al. (2023). Systematic review and meta-analysis of AI-based conversational agents for mental health. npj Digital Medicine, 6, 180. https://doi.org/10.1038/s41746-023-00979-5
[2] Cruz-Gonzalez, P., et al. (2025). Artificial intelligence in mental health care: a systematic review of diagnosis, monitoring, and intervention. BMC Psychiatry, 25, 62. https://pmc.ncbi.nlm.nih.gov/articles/PMC12017374/
[3] Oghenekaro, L.U., et al. (2024). Artificial Intelligence-Based Chatbot for Student Mental Health Support. Open Journal of Depression, 13(2), 118-132. https://www.scirp.org/journal/paperinformation?paperid=133222
[4] Haque, M.D.R., et al. (2023). An Overview of Chatbot-Based Mobile Mental Health Apps: Technology, Functionality, and Clinical Evidence. JMIR mHealth and uHealth, 11(6), e26217. https://pmc.ncbi.nlm.nih.gov/articles/PMC10242473/
[5] Chang, C., et al. (2025). AI Chatbots for Psychological Health for Health Professionals: A Scoping Review. JMIR Human Factors, 12, e67682. https://humanfactors.jmir.org/2025/1/e67682
[6] Liu, H., et al. (2024). Enhancing mental health with Artificial Intelligence: Current trends, challenges, and future directions. Mental Health: Science and Practice, 24(1), 88-100. https://www.sciencedirect.com/science/article/pii/S2949916X24000525
[7] Ye, X., et al. (2025). Exploring the Use of AI Chatbots in Mental Health Care: Benefits, Risks, and Ethical Considerations. Psychiatry Advisor, August 15, 2025. https://www.psychiatryadvisor.com/features/ai-chatbots-in-mental-health-care/
[8] Frontiers in Digital Health Research Team. (2025). Balancing risks and benefits: clinicians' perspectives on the use of generative-AI chatbots in mental health. Frontiers in Digital Health, 5, 1606291. https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2025.1606291/full
[9] Moylan, K., et al. (2025). Expert and Interdisciplinary Analysis of AI-Driven Chatbots for Mental Health Support. Journal of Medical Internet Research, 25(1), e67114. https://www.jmir.org/2025/1/e67114
[10] Li, J., et al. (2025). Chatbot-Delivered Interventions for Improving Mental Health: Systematic Review and Meta-Analysis. PMC, PMC12261465. https://pmc.ncbi.nlm.nih.gov/articles/PMC12261465/
[11] Han, Q., et al. (2025). Unleashing the potential of chatbots in mental health: A comprehensive bibliometric analysis. Frontiers in Psychiatry, 16, 1494355. https://www.frontiersin.org/journals/psychiatry/articles/10.3389/fpsyt.2025.1494355/full
[12] MacNeill, A. L., et al. (2024). Effectiveness of a Mental Health Chatbot for People with Chronic Diseases. JMIR Formative Research, 8(1), e50025. https://formative.jmir.org/2024/1/e50025
[13] Aggarwal, A., et al. (2023). Artificial Intelligence-Based Chatbots for Promoting Health Behavior: Systematic Review. PMC, PMC10007007. https://pmc.ncbi.nlm.nih.gov/articles/PMC10007007/