Thinking of asking ChatGPT about your health? From helpful tips to risky mistakes, here’s what to know before turning to AI for advice or support.

This resource is part of a series on the importance of understanding the implications of using AI tools as a substitute for professional care.  
1. The Growing Use of AI | 2. Health Advice and AI (current) | 3. Parenting Advice and AI

AI tools like ChatGPT are becoming a regular part of daily life. From helping teachers write lesson plans and students write essays to answering questions about medical symptoms and offering mental health support, these tools seem to provide fast, helpful, and free assistance. But should we trust them with our health? 

Whether you’re turning to AI for mental health check-ins, symptom research, or general wellness advice, it’s important to know what these tools can—and can’t—do. In this article, we’ll break down the pros and cons of using ChatGPT and other AI-powered chatbots for health advice so you can make informed decisions.  

The Upside: Why People Are Turning to Chatbots for Health Support

Available Anytime, Anywhere 

AI doesn’t sleep. Unlike doctors or therapists, AI tools like ChatGPT are available 24/7. That makes them especially helpful if: 

  • You live in a rural or remote area with limited health services. 

  • You’re looking for mental health support outside regular office hours. 

  • You’re unsure whether your symptoms are serious enough to warrant immediate medical attention. 

Lower Cost

Most AI chatbots are free or low-cost. For people who can’t afford private therapy or don’t have extended health benefits or employee assistance programs (EAPs), this can feel like a game-changer.  

Easier to Talk to 

Believe it or not, many users feel more comfortable sharing personal or emotional details with a chatbot than with a human. Why? Because a machine doesn’t judge. Some people say they’re more honest when talking to AI, which can help them reflect on how they’re really feeling. For people who aren’t ready to talk to a professional, it can open the door to self-reflection and help them build a vocabulary for their mental health. 

Helpful for Learning and Practice 

AI can support healthy routines by guiding users through: 

  • Goal setting 

  • Journaling 

  • Mindfulness or breathing exercises 

The Risks: What You Should Know Before Using AI for Health Advice

While the idea of using AI for health support sounds great, there are some serious drawbacks. Here’s what to watch out for: 

It Can Sound Real—But AI Platforms Often Get It Wrong 

AI tools like ChatGPT are trained to sound confident and helpful. But that doesn’t mean they’re always right. In fact, they often “make things up,” a problem commonly called hallucination. These false or misleading answers can be dangerous, especially when it comes to medical symptoms or mental health concerns. AI systems often fail to capture the diversity of lived experiences and may unintentionally reinforce harmful stereotypes, particularly for 2SLGBTQIA+ individuals, BIPOC communities, and people with disabilities. Even when responses appear neutral, they can still reflect bias, perpetuate stigma, or overlook cultural, social, and personal context. 

For example, chatbots have: 

  • provided harmful advice about eating disorders. 

  • failed to flag serious mental health risks, like suicidal thoughts. In one study, a chatbot calmly listed tall bridges in New York to a person who had just said they lost their job, a troubling sign that it didn’t understand the deeper risk. 

  • shown increased stigma towards certain mental health conditions in research experiments, which can harm individuals and lead them to discontinue care. 

  • provided references that were fabricated, completely irrelevant, or misquoted and misinterpreted, when prompted for sources. 

  • posed as humans and real therapists. 

Chatbots are commonly used to check symptoms, ask about medications, or get advice on chronic conditions. Here’s the problem: 

  • Chatbots can omit key details, misinterpret clinical care guidelines, and dismiss important information when asked about common medical topics. 

  • AI may miss urgent signs of serious illness that need immediate attention. 

  • AI may respond with outdated or incorrect information. 

Chatbots aren’t your doctor or your therapist. They don’t know your medical history, the medications you take, your allergies, or your individual needs. Using AI might give you the sense that you’ve “done something” about your concern, creating a false sense of self-sufficiency. This can lead to delays in seeking real care, or, worse, to unsafe decisions. 

It’s Not Really Listening

While a chatbot may sound empathetic, it doesn’t actually understand what you’re feeling. It can’t read tone, facial expressions, or body language, the subtle cues that real humans use to understand each other. It also doesn’t remember past conversations unless specifically designed to, so there’s no true continuity of care. AI tools often lack context, including personal trauma history, and tend to default to dominant cultural norms. This can overlook important aspects of individual identity, including race, gender, or sexuality, leading to advice that feels impersonal, misaligned, or even harmful. 

It Might Keep You Stuck

AI often mirrors your tone or mood. That can feel comforting, but it also means it rarely challenges you. A trained therapist knows when to gently push you out of unhelpful thought patterns. ChatGPT? It’s more likely to agree with you or offer generic reassurance, even if that’s not what you need. Echoing your emotional tone may help you feel seen, but if you’re already in distress, it can also reinforce anxiety, rumination, or catastrophic thinking. 

It Can Feel “Too Real”

Some people start to develop emotional attachments to AI chatbots. They may:

  • Choose AI chats over real-world relationships. 

  • Delay social plans to keep chatting. 

  • Feel upset when the chatbot "goes offline." 

While this might offer temporary comfort, it doesn’t replace the emotional connection, accountability, or skill-building that human relationships provide. Over time, a false sense of connection can make it harder to build real relationships or develop independent coping skills. 

Your Privacy Isn’t Guaranteed

Most consumer AI chatbots are not bound by healthcare privacy laws like HIPAA (in the U.S.) or PHIPA (in Canada). This means: 

  • Your personal information could be stored or shared. 

  • Your chats might be used to improve the AI model, without your knowledge. 

  • It’s unclear how your sensitive data is protected—or if it’s protected at all. 

This is especially risky if you’re sharing private health concerns, mental health struggles, or even your name and location.  

So, Should You Use ChatGPT for Health Advice?

ChatGPT and other AI tools can be used to: 

  • learn general information about conditions or symptoms. 

  • explore ideas for journaling, mindfulness, or self-care. 

  • support what a healthcare provider has already told you. 

  • get help with preparing questions before a doctor’s visit. 

Avoid using AI tools to: 

  • diagnose yourself or someone else. 

  • make decisions about medication, supplements, or medical treatments. 

  • manage serious or urgent mental health issues. 

  • replace a trusted doctor or therapist. 

AI chatbots like ChatGPT are impressive and, in many ways, helpful. But they are not healthcare providers. They don’t replace the knowledge, training, or humanity of real doctors, nurses, counsellors, and therapists. As these tools continue to evolve, it’s important to stay informed. Use them wisely, but don’t assume they’re always right—or always safe. When it comes to your health, curiosity is good, but be cautious.  

Homewood Health does not use OpenAI’s models or other large language models in any of our care offerings. Internally developed, AI-powered tools are used to assist EFAP clients across several of our products. For example, AI is used to guide care recommendations in Pathfinder. 

If you have questions about mental health, or you’re looking for mental health support, reach out to Homewood Health’s Employee and Family Assistance Program (EFAP). We’re here to answer your questions and connect you with a counsellor who can help. It’s always confidential.  

Explore these additional resources to better understand the risks of using AI: 

References

Abrams Z (12 March 2025) Using generic AI chatbots for mental health support: A dangerous trend. American Psychological Association. Accessed 19 June 2025

Chan CKY (2025) AI as the therapist: student insights on the challenges of using generative AI for school mental health frameworks. Behavioral Sciences. 15(3):287 Accessed 18 June 2025

Chow A and Haupt A (12 June 2025) A psychiatrist posed as a teen with therapy chatbots. The conversations were alarming. Time. Accessed 19 June 2025

Kimmel D (17 May 2023) ChatGPT therapy is good, but it misses what makes us human. Columbia University Department of Psychiatry. Accessed 19 June 2025

Levesque B (2023) UF College of Medicine research shows AI chatbot flawed when giving urology advice. University of Florida College of Medicine. Accessed 25 June 2025

Staff Writer (25 June 2023) NEDA suspends AI chatbot for giving harmful eating disorder advice. Psychiatrist.com. Accessed 19 June 2025

Wells S (11 June 2025) New study warns of risks in AI mental health tools. Stanford Report. Accessed 19 June 2025 

Zhang Z and Wang J (2024) Can AI replace psychotherapists? Exploring the future of mental health care. Frontiers in Psychiatry. 15:1444382 Accessed 18 June 2025