If you rely heavily on ChatGPT for fitness tips or diet plans, this case should make you think twice. Following health recommendations from an AI chatbot without professional supervision can put your life at risk. A recent incident in New York has raised serious concerns after a man ended up hospitalized due to dangerous advice from ChatGPT.
According to reports, a 60-year-old man from New York was admitted to the hospital after strictly following ChatGPT’s advice on eliminating salt from his diet. The change drastically lowered the sodium in his body, and doctors diagnosed hyponatremia, a potentially life-threatening condition caused by insufficient sodium.
His family revealed that he had followed an AI-generated health plan without consulting a doctor. The case was published in a journal of the American College of Physicians, highlighting the dangers of following AI health advice, especially when it involves essential nutrients like sodium.
The man spent nearly three weeks in the hospital. Doctors worked to stabilize his condition and restore safe sodium levels. Thankfully, after medical treatment, he recovered and was discharged. However, the incident has sparked discussions about the trustworthiness of AI-generated health recommendations.
As per a report in The Times of India, the man had asked ChatGPT how to eliminate sodium chloride (table salt) from his diet. The AI suggested replacing it with sodium bromide, a compound once used in early 20th-century medicines but now considered toxic in significant amounts.
Following the AI’s suggestion, the man purchased sodium bromide online and used it in his cooking for three months.
The man had no previous history of mental or physical illness. However, after months of consuming sodium bromide, he developed hallucinations, paranoia, and extreme thirst.
Upon hospitalization, he appeared disoriented and even refused water due to fear of contamination. Doctors diagnosed him with bromide toxicity, a condition almost unheard of today. Historically, bromide compounds were prescribed for anxiety, insomnia, and other conditions, but overuse led to severe side effects.
His symptoms also included neurological problems, acne-like rashes, and red patches on the skin — classic signs of bromism.
The primary goal of the hospital treatment was rehydration and restoring electrolyte balance. Over the course of three weeks, his condition steadily improved. Once sodium and chloride levels returned to normal, he was discharged from the hospital.
The case study’s authors emphasized the growing risks of health misinformation from AI tools. They noted that AI-generated responses may contain scientific inaccuracies, fail to discuss potential dangers thoroughly, and contribute to the spread of misinformation.
The report stressed that users must verify such information and should not treat AI output as professional guidance.
OpenAI, the developer of ChatGPT, has clearly stated in its usage terms that its output should not be taken as the sole source of truth or a substitute for professional advice.
The company explicitly warns that the service is not intended for diagnosing or treating medical conditions. This disclaimer is crucial, yet many users overlook it when relying on AI for health decisions.
Experts agree that AI tools can be useful for general health information. However, they should never replace consultation with qualified professionals. As AI adoption grows, so does the responsibility to ensure its outputs are accurate, safe, and easily understood by users.
AI chatbots are not equipped to consider the full medical history of individuals. They also lack the ability to physically examine patients or order tests, which are vital for accurate diagnosis.
Self-medicating based on AI recommendations can lead to severe consequences. In this case, replacing table salt with a toxic compound led to weeks of hospitalization. Similar risks exist for other health topics, including supplements, workout regimens, or fasting plans suggested by AI without professional oversight.
Users may not realize the potential for harm, especially when advice seems logical on the surface. The absence of personalized medical evaluation makes such recommendations risky.
Sodium is an essential electrolyte that regulates fluid balance, nerve function, and muscle contractions. Extremely low sodium levels, as seen in hyponatremia, can cause confusion, seizures, coma, and even death.
Suddenly cutting sodium from the diet or replacing it with harmful substances disrupts these functions, leading to dangerous health outcomes.
Bromide toxicity, known as bromism, was far more common in the early 1900s, when bromide compounds were widely used as sedatives and anticonvulsants. Over time, safer alternatives replaced bromides because of their side effects. Today, bromism is rare, which is why this case is particularly alarming.
Its symptoms range from skin irritation and fatigue to severe neurological impairment. In extreme cases, it can lead to permanent damage or death if untreated.
Many users place high trust in AI systems like ChatGPT, assuming their responses are accurate. However, large language models generate answers based on patterns in data, not on verified real-time medical judgment.
This means AI can produce outdated, contextually incorrect, or even dangerous recommendations without recognizing the risk.
Experts recommend using AI tools for educational purposes only. Always verify the information with reputable medical sources or licensed healthcare providers. Avoid making dietary or medication changes solely on AI advice.
For personalized health needs, direct consultation with doctors remains irreplaceable. AI can supplement learning but should never dictate medical decisions.
Cases like this raise important ethical and regulatory questions about AI in healthcare. Should there be stricter oversight of AI outputs when they involve health advice? Can AI companies be held accountable for harm caused by misinformation?
Some experts call for mandatory disclaimers and built-in safety filters to prevent the suggestion of dangerous substances. While disclaimers exist, the challenge lies in ensuring that users read and understand them.
Increasing public awareness about the limitations of AI is essential. Educational campaigns could help users differentiate between general information and medically safe guidance.
As AI becomes more integrated into daily life, people must learn to question and verify rather than accept responses at face value.
The New York case of ChatGPT’s dangerous dietary advice is a stark warning. While AI can be a helpful tool for general queries, it is not a replacement for professional medical consultation.
The incident highlights the need for responsible AI use, stronger public awareness, and clear boundaries between general information and critical health guidance.