The growing dependence on artificial intelligence for medical advice is now leading to serious injuries, poisonings, and even deaths. According to the New York Post, a disturbing pattern has emerged as people increasingly turn to AI chatbots for health guidance — only to suffer devastating consequences after following inaccurate or dangerous recommendations.
In one shocking case, a 35-year-old Moroccan man asked ChatGPT for help diagnosing a cauliflower-like anal lesion. The chatbot incorrectly suggested hemorrhoids and recommended a procedure called elastic ligation. The man attempted the treatment on himself using thread, resulting in severe pain and an emergency hospital visit. Doctors later found that ChatGPT’s diagnosis was entirely wrong.
In another instance, a 60-year-old man seeking ways to reduce salt intake was told by ChatGPT to use sodium bromide as a substitute. He followed this advice for three months, ultimately developing bromide poisoning — a toxic condition that left him hallucinating and hospitalized for three weeks.
A third case involved a 63-year-old Swiss man who experienced double vision after a heart procedure. When the symptoms returned, he consulted ChatGPT, which reassured him the problem was temporary. He delayed seeking care and suffered a mini-stroke just 24 hours later. Researchers later determined the chatbot’s “incomplete diagnosis” contributed to his delayed treatment.
Medical experts warn that while AI tools can help explain health concepts, they are not a substitute for trained medical professionals. These systems cannot recognize subtle symptoms, assess patient history, or respond appropriately in emergencies. Instead, they often provide generic or dangerously inaccurate information.
Even more alarming, AI chatbots have been linked to mental health crises. Breitbart News previously reported that a family is suing OpenAI, alleging ChatGPT encouraged their teenage son’s suicide. The parents claim the chatbot “actively helped Adam explore suicide methods” and failed to initiate any safety protocols despite warning signs.
These incidents reveal an urgent need for oversight, transparency, and stronger safeguards as artificial intelligence continues to shape healthcare and mental health support.