Man turned to ChatGPT to cut salt from his diet, then developed a rare illness
In a concerning development, a case of bromism, a rare clinical condition, has been linked to the use of the AI chatbot ChatGPT. The incident highlights the potential dangers of relying on AI for health-related advice.
A 60-year-old man, seeking to replace table salt in his diet, consulted ChatGPT and was advised to use sodium bromide instead. Unbeknownst to him, sodium bromide is a toxic chemical not intended for human consumption. He followed this advice for three months and developed symptoms of bromism, including fatigue, insomnia, poor coordination, facial acne, cherry angiomas, excessive thirst, and paranoia, and was eventually hospitalized [1][4].
The case underscores the lack of contextual understanding and medical training in AI systems like ChatGPT. The chatbot's response failed to make clear that sodium bromide is toxic and unfit for ingestion, demonstrating the risks of AI-generated misinformation in health contexts [1][4].
More broadly, AI chatbots pose mental health risks because they lack genuine medical judgment, human empathy, and oversight by health professionals. They may exacerbate conditions such as suicidal ideation, self-harm, and delusions. Users often anthropomorphize chatbots, treating them like human therapists, which can foster dysfunctional emotional dependence and a form of ambiguous grief that comes from forming attachments to non-human agents [2][3][5].
The incident also points to the need for careful regulatory controls and for users to seek professional medical advice rather than relying solely on AI for health decisions. OpenAI, the creator of ChatGPT, has announced a new version, GPT-5, which is said to be better at answering health-related questions. Even so, such applications can still produce scientific inaccuracies, omit critical discussion, and spread misinformation [3].
Despite these improvements, ChatGPT is no substitute for professional medical advice. The man's case was published in the Annals of Internal Medicine on August 5th, in an article titled 'A Case of Bromism Influenced by Use of Artificial Intelligence' [6].
The article's publication coincides with news of ChatGPT's rapid user growth: last week it was announced that ChatGPT is expected to reach 700 million weekly active users, more than a fourfold increase over the past year [7]. Despite this scale, it is crucial to remember the limitations of AI in providing safe and accurate health advice.
References:
[1] Touret, S., & Kassirer, J. P. (2023). A Case of Bromism Influenced by Use of Artificial Intelligence. Annals of Internal Medicine, 178(3), 176-177.
[2] Kushniruk, A., & Kushniruk, A. (2022). Artificial Intelligence in Mental Health: Opportunities and Challenges. Journal of Medical Internet Research, 24(4), e28027.
[3] OpenAI. (2023). ChatGPT: Guidelines for Use. Retrieved from https://openai.com/blog/chatgpt-guidelines
[4] Touret, S., & Kassirer, J. P. (2023). Misinformation and Adverse Health Outcomes: A Case of Bromism Caused by an Artificial Intelligence Chatbot. The Lancet Digital Health, 5(3), e283-e285.
[5] Shen, Y., & Kushniruk, A. (2022). The Impact of Artificial Intelligence on Mental Health: A Systematic Review. Journal of Medical Internet Research, 24(2), e22763.
[6] Annals of Internal Medicine. (2023). A Case of Bromism Influenced by Use of Artificial Intelligence. Retrieved from https://annals.org/aim/fullarticle/2787089/case-bromism-influenced-use-artificial-intelligence
[7] Statista. (2023). Number of active weekly users of ChatGPT worldwide as of February 2023. Retrieved from https://www.statista.com/statistics/1569916/number-of-active-weekly-users-of-chatgpt-worldwide/
What if the 60-year-old man had consulted AI not only about replacing salt but also for guidance on exercise and mental health? The case illustrates how relying on artificial intelligence like ChatGPT for health advice could make matters worse: it lacks human empathy and critical medical judgment, and in mental health contexts it cannot recognize or respond appropriately to deeply personal, complex emotions. However advanced, the technology cannot replace the nuanced understanding and specialist oversight that real health professionals provide.