Man sought diet advice from ChatGPT and ended up with 'bromide intoxication'
A case report describes an incident in which a man seeking to make a dietary change consulted ChatGPT and later developed "bromism," a rare "toxidrome."

A man consulted ChatGPT prior to changing his diet. Three months later, after consistently sticking with that dietary change, he ended up in the emergency department with concerning new psychiatric symptoms, including paranoia and hallucinations.
It turned out that the 60-year-old had bromism, a syndrome brought about by chronic overexposure to the chemical compound bromide or its close cousin bromine. In this case, the man had been consuming sodium bromide that he had purchased online.
A report of the man's case was published Tuesday (Aug. 5) in the journal Annals of Internal Medicine Clinical Cases.
Live Science contacted OpenAI, the developer of ChatGPT, about this case. A spokesperson directed the reporter to the company's service terms, which state that its services are not intended for use in the diagnosis or treatment of any health condition, and to its terms of use, which state, "You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice." The spokesperson added that OpenAI's safety teams aim to reduce the risks of using the company's services and to train its products to prompt users to seek professional advice.
"A personal experiment"
In the 19th and 20th centuries, bromide was widely used in prescription and over-the-counter (OTC) drugs, including sedatives, anticonvulsants and sleep aids. Over time, though, it became clear that chronic exposure, such as through the abuse of these medicines, caused bromism.
Related: What is brominated vegetable oil, and why did the FDA ban it in food?
This "toxidrome" — a syndrome triggered by an accumulation of toxins — can cause neuropsychiatric symptoms, including psychosis, agitation, mania and delusions, as well as issues with memory, thinking and muscle coordination. Bromide can trigger these symptoms because, with long-term exposure, it builds up in the body and impairs the function of neurons.
In the 1970s and 1980s, U.S. regulators removed several forms of bromide, including sodium bromide, from OTC medicines. Bromism rates fell significantly thereafter, and the condition remains relatively rare today. However, occasional cases still occur, some of them tied to bromide-containing dietary supplements that people purchased online.
Before his hospitalization, the man had been reading about the negative health effects of consuming too much table salt, also called sodium chloride. "He was surprised that he could only find literature related to reducing sodium from one's diet," as opposed to reducing chloride, the report noted. "Inspired by his history of studying nutrition in college, he decided to conduct a personal experiment to eliminate chloride from his diet."
(Note that chloride is important for maintaining healthy blood volume and blood pressure, and health issues can emerge if chloride levels in the blood become too low or too high.)
The patient consulted ChatGPT — either ChatGPT 3.5 or 4.0, based on the timeline of the case. The report authors didn't get access to the patient's conversation log, so the exact wording that the large language model (LLM) generated is unknown. But the man reported that ChatGPT said chloride can be swapped for bromide, so he replaced all the sodium chloride in his diet with sodium bromide. The authors noted that such a substitution makes sense for purposes like cleaning, not for a person's diet.
In an attempt to simulate what might have happened with their patient, the man's doctors tried asking ChatGPT 3.5 what chloride can be replaced with, and they also got a response that included bromide. The LLM did note that "context matters," but it neither provided a specific health warning nor sought more context about why the question was being asked, "as we presume a medical professional would do," the authors wrote.
Recovering from bromism
After three months of consuming sodium bromide instead of table salt, the man reported to the emergency department with concerns that his neighbor was poisoning him. His labs at the time showed a buildup of carbon dioxide in his blood, as well as a rise in alkalinity (the opposite of acidity).
He also appeared to have elevated levels of chloride in his blood but normal sodium levels. Upon further investigation, this turned out to be a case of "pseudohyperchloremia," meaning the lab test for chloride gave a false result because other compounds in the blood — namely, large amounts of bromide — had interfered with the measurement. After consulting the medical literature and Poison Control, the man's doctors determined the most likely diagnosis was bromism.
Related: ChatGPT is truly awful at diagnosing medical conditions
After being admitted for electrolyte monitoring and repletion, the man said he was very thirsty but was paranoid about the water he was offered. After a full day in the hospital, his paranoia intensified and he began experiencing hallucinations. He then tried to escape the hospital, which resulted in an involuntary psychiatric hold, during which he started receiving an antipsychotic.
The man's vitals stabilized after he was given fluids and electrolytes, and as his mental state improved on the antipsychotic, he was able to tell the doctors about his use of ChatGPT. He also described other symptoms that had emerged recently, such as facial acne and small red growths on his skin, which could reflect a hypersensitivity reaction to the bromide. He reported insomnia, fatigue, muscle coordination issues and excessive thirst as well, "further suggesting bromism," his doctors wrote.
He was tapered off the antipsychotic medication over the course of three weeks and then discharged from the hospital. He remained stable at a check-in two weeks later.
"While it is a tool with much potential to provide a bridge between scientists and the nonacademic population, AI also carries the risk for promulgating decontextualized information," the report authors concluded. "It is highly unlikely that a medical expert would have mentioned sodium bromide when faced with a patient looking for a viable substitute for sodium chloride."
They emphasized that, "as the use of AI tools increases, providers will need to consider this when screening for where their patients are consuming health information."
Adding to the concerns raised by the case report, a different group of scientists recently tested six LLMs, including ChatGPT, by having the models interpret clinical notes written by doctors. They found that LLMs are "highly susceptible to adversarial hallucination attacks," meaning they often generate "false clinical details that pose risks when used without safeguards." Engineering fixes can reduce the rate of such errors but do not eliminate them, the researchers found. This highlights another way in which LLMs could introduce risks into medical decision-making.
This article is for informational purposes only and is not meant to offer medical or dietary advice.

Nicoletta Lanese is the health channel editor at Live Science and was previously a news editor and staff writer at the site. She holds a graduate certificate in science communication from UC Santa Cruz and degrees in neuroscience and dance from the University of Florida. Her work has appeared in The Scientist, Science News, the Mercury News, Mongabay and Stanford Medicine Magazine, among other outlets. Based in NYC, she also remains heavily involved in dance and performs in local choreographers' work.