
Oxford study warns of dangers of using AI for medical advice

Kyiv • UNN


An Oxford University study has shown that AI chatbots provide inaccurate and contradictory medical advice. This threatens patient safety, because the models misinterpret the incomplete information users give them.

Scientists at the University of Oxford have found that AI-powered chatbots provide inaccurate and contradictory health advice, which threatens patient safety. The study highlights that the models' inability to correctly interpret incomplete information from users makes self-diagnosis via AI extremely risky. This is reported by UNN.

Details

An experiment involving 1,300 people showed that the results of interaction with AI critically depend on the wording of the questions. Users were asked to evaluate symptoms such as severe headache or exhaustion, but chatbots often provided a list of diagnoses, from which people had to choose at random. Dr. Adam Mahdi, senior author of the study, commented to the BBC: "When AI listed three possible conditions, people were left to guess which one might fit. That's when everything falls apart."

The lead physician of the study, Dr. Rebecca Payne, called the practice of consulting with AI "dangerous." She explained that users usually share information gradually and may omit key details that a professional medical practitioner would have identified during a face-to-face examination. As a result, people who used AI received a mix of helpful and harmful advice, which complicated the decision of whether to visit a family doctor or seek emergency care.

Algorithm bias and industry prospects

In addition to technical errors, experts point to systemic shortcomings of the technology. Dr. Amber W. Childs from Yale University emphasized that AI is trained on medical data that already contains decades of biases.

"A chatbot is only as good a diagnostician as the experienced clinicians it learns from, and that is not ideal either," she added.

This creates an additional risk of repeating errors embedded in modern medical practice.

Despite the criticism, experts see potential in specialized models. Dr. Bertalan Mesko noted that new versions of chatbots from OpenAI and Anthropic, developed specifically for the healthcare sector, may show better results. However, the key condition for safety remains the introduction of clear national rules, regulatory safeguards, and official medical guidelines for improving such systems.
