The AI platform ChatGPT allegedly led a man from normal health into a psychotic state. Here is how it happened.
A 60-year-old man in the U.S. ended up in a psychiatric ward after following advice from an AI chatbot, a case that highlights the potential dangers of relying on such technology for medical guidance.
Seeking a substitute for table salt, the man asked ChatGPT for advice. The AI suggested sodium bromide, a compound not intended for human consumption, and that suggestion set off a chain of events that ended in his hospitalization.
After three months on this diet, the man developed skin problems, paranoia, and hallucinations. His condition initially baffled doctors. Tests eventually revealed a blood bromide concentration of roughly 1,700 mg per liter, far beyond the normal range of up to about 7 mg per liter.
The man's admission thus traced directly back to ChatGPT's advice. His attempt to flee the hospital and his subsequent transfer to a psychiatric ward were likely driven by his worsening condition, itself a consequence of the sodium bromide the AI had suggested.
Experts and scientists have warned that relying on recommendations from AI chatbots such as ChatGPT can be dangerous. The major risks include AI hallucinations (false or fabricated answers), over-reliance on AI output without verification, and insufficient safety guardrails, all of which can lead to serious health consequences.
Studies have shown that ChatGPT, despite built-in safety measures, has given vulnerable users such as teenagers dangerous advice on suicide, substance abuse, and eating disorders. Those guardrails can be bypassed, raising serious concerns for user safety.
There have been documented cases of people suffering serious harm after following AI-provided medical or dietary advice; in this case, the 60-year-old man replaced table salt with toxic sodium bromide at ChatGPT's suggestion and ended up severely poisoned and hospitalized.
Other infamous examples include Google's Gemini giving questionable advice, such as suggesting that people eat rocks to aid digestion or add glue to their pizza.
Given these risks, AI chatbots like ChatGPT should not be relied on as a sole source of medical advice. They can be useful as informational tools, but any health-related information they provide must be verified with a qualified healthcare professional before acting on it. The man's case is a warning: for health concerns, consult a licensed medical practitioner rather than trusting AI-generated guidance alone.
After spending nearly another month in the psychiatric ward, the man finally recovered. He has since adopted a vegetarian diet and cut out salt in favor of safer alternatives. The incident underscores the importance of checking medical advice against reliable sources before acting on it.
In short, the man's severe poisoning and hospitalization followed directly from ChatGPT's suggestion to replace table salt with sodium bromide, a stark illustration of the risks of trusting artificial intelligence with health decisions. While AI can provide useful information, any advice it offers about treatments, diets, or other health matters should be confirmed by a qualified healthcare professional before being put into practice.