ChatGPT turns people into hypochondriacs - why you shouldn't discuss your health with AI
Kyiv • UNN
A 46-year-old man spent months querying ChatGPT about cancer symptoms after doctors mistakenly suspected the disease. The chatbot amplified his anxiety and triggered new phobias.

After doctors raised the possibility of a serious illness, 46-year-old Liverpool resident George Mellon turned to ChatGPT for answers, and the conversation dragged on for months. Although further examinations ruled out cancer, talking to the chatbot only intensified his anxiety, prompting him to hunt for new symptoms and suspect other dangerous diseases. The Atlantic writes about this, as reported by UNN.
Details
The publication told the story of George Mellon, a 46-year-old man from Liverpool whose routine medical examination raised an initial suspicion of blood cancer.
Left in uncertainty, he did what many people do today: he opened ChatGPT. For almost two weeks, Mellon talked to the chatbot for hours every day about his possible diagnosis.
However, further tests showed that it was not cancer after all, yet he could not stop discussing health problems with ChatGPT, asking the bot for months about every sensation he felt in his body. He became convinced that something was wrong, that another cancer was lurking in his body, or perhaps multiple sclerosis.
Prompted by his conversations with ChatGPT, he consulted various specialists and underwent MRI scans of his head, neck, and spine.
Their discussions shifted from physical health problems to mental ones: Mellon began sharing all his worries with the chatbot, but along with support he received only fresh doses of fear, apprehension, and anxiety.
Mellon is not alone: a growing number of online communities have appeared in which people describe how AI chatbots turned them into hypochondriacs. Many users say that after talking to AI they came to believe they might have incurable, fatal diseases and became literally obsessed with their health.
The publication notes that the chatbot does not know how to stop a person and bears no responsibility, so the dialogue easily ends up fueling even unfounded fears.
Recall
Sixteen-year-old Adam Raine died by suicide in April after following instructions from ChatGPT. The boy's parents blame the artificial intelligence for the tragedy and have filed a lawsuit against OpenAI.
