
Can AI Write a Diet Plan? Yes, But It Could Land You In The Hospital

Written by Garett Reid, NSCA, CSCS, CISSN, M.S.E.S.S.


A case study shows the real dangers of using AI for nutrition advice.

By now, you're likely aware of Artificial Intelligence, including chatbots such as ChatGPT. These applications have made their way into nearly every corner of our lives, promising to be your own personal assistant. Naturally, this has led many to question whether ChatGPT and AI can be used to help with nutrition.

Or, more to the point, can you actually trust AI to write a diet plan? A recent case report in Annals of Internal Medicine: Clinical Cases suggests the answer should come with a heavy warning label. Because of a widespread misunderstanding of how AI chatbots work, blindly following their health and diet advice can land you in real trouble, up to and including a hospital stay.


Can you use ChatGPT for nutrition and training advice?

 5 Key Points You Need To Know:
  • A recent medical case documented how ChatGPT's diet advice led a man to eat sodium bromide, causing bromide poisoning and underscoring the risks of uncritical use.
  • AI chatbots don't operate the way many believe they do. They are prone to providing false information in certain situations, leading to confusion and misguidance.
  • Users must provide the right input for accurate answers. Vulnerable groups (patients, the elderly, and desperate dieters) may not have the knowledge to do so.
  • AI often has a "will to please" bias, tailoring answers to user preferences rather than objective truth.
  • Developers, influencers, and the media share responsibility for how AI's abilities are sold to the public.

How ChatGPT Gave Diet Advice That Led To The Hospital

In 2025, a case study published in Annals of Internal Medicine: Clinical Cases documented how a 60-year-old man developed bromism, a dangerous toxidrome caused by excess bromide ingestion (Eichenberger et al., 2025).

Once common in the 20th century, bromism can cause a range of health issues, including:

  • Hallucinations
  • Paranoia
  • Fatigue
  • Profound electrolyte imbalances

So, how did this man end up poisoned? And why would he eat bromide? 

Well, he did it to himself, because ChatGPT told him to.

How Did ChatGPT Poison a 60-Year-Old?

After being admitted to the hospital, the man described a number of beliefs about nutrition, including distilling his own water at home and following multiple dietary restrictions. He seemed to have gotten caught up in all of the "X ingredient is bad for you" arguments we see online.

Upon reading about the "dangers" of sodium chloride (i.e., table salt), he consulted ChatGPT for advice on eliminating it from his diet. Unfortunately, ChatGPT told him he could replace sodium chloride with sodium bromide.

Since ChatGPT is so often portrayed as an infallible piece of technology, he listened. He went online, bought some sodium bromide, and began replacing his salt with it.

Over three months, this "AI-guided" experiment led to severe psychiatric and metabolic complications, landing him in the hospital. His bromide levels were found to be 200 times above normal. Thankfully, after fluids and treatment, his symptoms resolved.

Why Would ChatGPT Recommend Bromide? Understanding How AI Chatbots Actually Work

  • Chatbots are Large Language Models (LLMs) that work by learning patterns and predicting words
  • AI chatbots are prone to hallucinations, in which they provide incorrect details or even invent false information
  • These hallucinations are well documented in the scientific literature.

One of the major issues that causes confusion when using chatbots like ChatGPT is a misunderstanding of how they work. This is largely due to how they're presented to the public.

In general, most people assume these AI chatbots work like a massive computer, analyzing all of the available information and formulating the best answer.

This isn't the case. Far from it.

AI chatbots (ChatGPT, Claude, Gemini) are Large Language Models (LLMs) that predict words in a sequence based on patterns learned from an enormous amount of text. To get there, they are first fed vast amounts of data from articles and books to learn things like grammar, facts, reasoning structures, and writing styles.

They can then use all of this information to answer the questions you ask. However, here lies the issue: LLMs don't really "understand" information; they're just very good at predicting which words appear together.
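To make that concrete, here is a minimal toy sketch in Python of what "predicting the next word from patterns" means. Everything in it (the tiny training text, the follows table, the predict_next function) is invented for illustration and bears no resemblance to ChatGPT's actual architecture; the point is only that a pattern-completion model continues the most familiar sequence, with no concept of whether the result is true or safe.

    from collections import Counter, defaultdict

    # Tiny, made-up "training data" -- purely illustrative.
    training_text = (
        "sodium chloride is table salt . "
        "sodium bromide is used in cleaning products . "
        "you can replace sodium chloride with sodium bromide in some settings ."
    )

    # Learn which word tends to follow each word.
    follows = defaultdict(Counter)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word][next_word] += 1

    def predict_next(word):
        """Return the most frequent follower seen in training; never says 'I don't know'."""
        candidates = follows.get(word)
        if not candidates:
            return "."
        return candidates.most_common(1)[0][0]

    # Generate a "reply" one word at a time, exactly like pattern completion.
    word, reply = "replace", ["replace"]
    for _ in range(6):
        word = predict_next(word)
        reply.append(word)
    print(" ".join(reply))

Real LLMs are vastly larger and more sophisticated than this, but the core mechanic is the same: continue the pattern. Nothing in that mechanic checks whether the continuation is medically safe.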

They don't have reasoning skills in the way we tend to assume, especially with new information. This is also why you hear about "hallucinations", when ChatGPT makes up information. Hallucinations are a real phenomenon that reaches far beyond Reddit threads and is documented in the scientific literature (Ahmad et al., 2023).

And they happen a lot, much more often than some seem to want to believe. A large study from Chelli et al. (2024) found that various chatbots hallucinated 28.6% to 91.4% of the time when citing scientific studies. The errors ranged from getting authors wrong to outright inventing studies.

ChatGPT isn't "lying"; it simply isn't designed to recognize what it doesn't know. Worse, it will rarely say "I don't know." It predicts the next word, does its job, and whatever comes out, comes out.

In this case, the man likely asked simply about replacing sodium chloride, and ChatGPT drew its answer from another context, such as cleaning supplies, where swapping one compound for the other can make sense.

The Real Dangers of Relying on AI for Diet Advice

  • Ideally, a user has basic knowledge of the information they are seeking
  • In its current state, a user must fact-check AI
  • Simply understanding that AI chatbots like ChatGPT make mistakes is crucial in optimizing their use.

We're not trying to minimize this technology—it's incredibly useful in the right circumstances, and it will almost certainly improve over time. But opening it up to the general public with false expectations poses real dangers. This case makes that clear.

  • AI lacks medical judgment. A human nutritionist or physician would never recommend bromide as a salt substitute. AI doesn't distinguish between safe and unsafe applications—it just generates text that "fits" (Walsh, 2024).
  • AI requires informed input. Perhaps the patient didn't ask specifically for a nutritional substitute, which highlights the issue: users must know how to phrase the right question. Without that baseline knowledge, the output can be dangerously misleading.
  • AI decontextualizes information. A statement valid in one setting (chemistry, manufacturing) may be deadly in another (diet and health). Water can put out a fire—but pour it on a grease fire and you'll make things worse.
  • Patients and vulnerable groups are at risk. Those experimenting with restrictive diets, quick fixes, or those who lack technological literacy may take AI advice literally without understanding the risks.
  • AI has a bias to please. ChatGPT and similar models are tuned to give answers users will accept. That can lead to cherry-picked, one-sided replies: a vegan might hear that plant-based eating is the optimal lifestyle, while a keto enthusiast might be told keto is best for muscle retention. The model adjusts to the user's framing, not objective medical truth.

It's crucial to understand these limitations when using it to make choices in your life (Walsh, 2024).

Should You Use AI for Diet Advice?

  • A user should be familiar with the topic in order to identify false information.
  • Always fact-check the information. Always. 
  • At this point, AI does not seem to be a viable alternative for fitness and nutrition advice, especially for those new to fitness and dieting.

Keep in mind that an odd attribute of ChatGPT and similar chatbots is that different people can report very different experiences. Some report it's spot-on with its answers, while others claim it has become unusable.

Ironically, this is similar to human trainers and nutritionists. The difference is that people generally know to be cautious about advice they get from other humans. As a result, they might rely on reviews, check different sources, or at least apply some healthy skepticism.

However, the major problem with AI and chatbots giving fitness and nutrition advice is that people have mistakenly been led to believe they are flawless. People believe they're hyper-complex processors that provide answers with 100% accuracy. Unfortunately, they don't.

In fact, some researchers have gone as far as calling AI chatbots like ChatGPT unfit for serious work. In an article published in Schizophrenia, Emsley (2023) warns:

"...use ChatGPT at your own peril…I do not recommend ChatGPT as an aid to scientific writing. …It seems to me that a more immediate threat is (it's) infiltration into the scientific literature of masses of fictitious material."

This doesn't mean this technology is junk (some may say that, though) or useless. It just means a user must have the right expectations when using it. More importantly, this requires the user to have some basic knowledge of what they're asking about. 

How can you know if something sounds wrong if you don't know what should sound right? 

AI tools can help summarize nutrition principles, generate meal ideas, and explain basic dietary guidelines. But they should never replace professional medical advice. Without guardrails, AI can produce suggestions that sound authoritative yet are incomplete, misleading, or even harmful.

Final Lessons On AI Chatbots, Nutrition, and Fitness

Yes, AI can technically "write" a diet plan—but should it? Not without oversight. The bromism case is a sobering reminder that while AI is powerful, it is not a doctor, dietitian, or health coach. As these tools spread, the real responsibility falls on both developers and users to approach AI health advice with caution, skepticism, and critical review.

And that's the crux of the issue: we can't really "blame" ChatGPT itself. The greater accountability lies with the developers, influencers, and media who oversell this technology as more than it is, at least for right now. 

 What You Need To Do: Always fact-check health advice and consult a qualified professional before making dietary changes. AI can be a tool, but it should never be your only guide when it comes to your health. 

And always fact-check. 

References

1. Eichenberger A, Thielke S, Van Buskirk A. A Case of Bromism Influenced by Use of Artificial Intelligence. AIM Clinical Cases. 2025;4:e241260. [Epub 5 August 2025]. doi:10.7326/aimcc.2024.1260

2. Ahmad Z, Kaiser W, Rahim S. Hallucinations in ChatGPT: An Unreliable Tool for Learning. Rupkatha Journal on Interdisciplinary Studies in Humanities. 2023;15(4):12. https://www.researchgate.net/publication/376844047_Hallucinations_in_ChatGPT_An_Unreliable_Tool_for_Learning

3. Chelli M, Descamps J, Lavoué V, Trojani C, Azar M, Deckert M, Raynier JL, Clowez G, Boileau P, Ruetsch-Chelli C. Hallucination Rates and Reference Accuracy of ChatGPT and Bard for Systematic Reviews: Comparative Analysis. J Med Internet Res. 2024;26:e53164. https://www.jmir.org/2024/1/e53164

4. Emsley R. ChatGPT: these are not hallucinations – they're fabrications and falsifications. Schizophrenia. 2023;9:52. https://doi.org/10.1038/s41537-023-00379-4

5. Walsh DS. Invited Commentary on ChatGPT: What Every Pediatric Surgeon Should Know About Its Potential Uses and Pitfalls. J Pediatr Surg. 2024;59(5):948-949. doi:10.1016/j.jpedsurg.2024.01.013
