There are plenty of accounts of how AI is helping people diagnose illnesses and improve their health outcomes, but the use of AI in healthcare seems to come with its own downsides as well.
A US-based man had to be hospitalized after allegedly following dietary advice from ChatGPT. The 60-year-old had read about the negative effects of sodium chloride, or table salt, and wanted to eliminate it from his diet. Traditional sources only offered advice on reducing sodium chloride consumption, not eliminating it entirely, so he asked ChatGPT, which suggested he could replace sodium chloride with sodium bromide. He then began a three-month experiment in which he replaced the sodium chloride in his diet with sodium bromide.

Three months later, he showed up at a hospital, convinced that his neighbour was trying to poison him. Doctors noted that he was very thirsty but paranoid about the water he was offered. In his first 24 hours of admission, he showed increasing paranoia along with auditory and visual hallucinations. He even tried to escape, and doctors had to place him on an involuntary psychiatric hold.
Once his condition stabilized, he described his other symptoms to doctors, including facial acne, fatigue, insomnia, subtle ataxia, and polydipsia (excessive thirst). These symptoms, along with other factors, suggested bromism, a toxic syndrome caused by prolonged consumption of bromide. At this point, the man revealed to doctors that he had replaced the sodium chloride in his diet with sodium bromide at ChatGPT’s suggestion.
After treatment, his bromide levels stabilized, and he was discharged. “This case also highlights how the use of artificial intelligence (AI) can potentially contribute to the development of preventable adverse health outcomes. Based on the timeline of this case, it appears that the patient either consulted ChatGPT 3.5 or 4.0 when considering how he might remove chloride from his diet. Unfortunately, we do not have access to his ChatGPT conversation log and we will never be able to know with certainty what exactly the output he received was, since individual responses are unique and build from previous inputs,” the doctors wrote in the case report.
“However, when we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do,” they added.
“While it is a tool with much potential to provide a bridge between scientists and the nonacademic population, AI also carries the risk for promulgating decontextualized information, as it is highly unlikely that a medical expert would have mentioned sodium bromide when faced with a patient looking for a viable substitute for sodium chloride. Thus, as the use of AI tools increases, providers will need to consider this when screening for where their patients are consuming health information,” the report concluded.