Oxford study warns of dangers of using AI for medical advice

Scientists at the University of Oxford have found that AI-powered chatbots provide inaccurate and contradictory health advice, which threatens patient safety. The study highlights that the models' inability to correctly interpret incomplete information from users makes self-diagnosis via AI extremely risky. This is reported by UNN.

Details

An experiment involving 1,300 people showed that the outcome of interacting with AI depends critically on how questions are worded. Participants were asked to evaluate symptoms such as severe headache or exhaustion, but the chatbots often returned a list of possible diagnoses, leaving people to choose among them at random. Dr. Adam Mahdi, senior author of the study, told the BBC: "When AI listed three possible conditions, people were left to guess which one might fit. That's when everything falls apart."

The study's lead physician, Dr. Rebecca Payne, called the practice of consulting AI "dangerous." She explained that users typically share information piecemeal and may omit key details that a medical professional would elicit during a face-to-face examination. As a result, people who used AI received a mix of helpful and harmful advice, which made it harder to decide whether to visit a family doctor or seek emergency care.

Algorithm bias and industry prospects

In addition to technical errors, experts point to systemic shortcomings of the technology. Dr. Amber W. Childs from Yale University emphasized that AI is trained on medical data that already contains decades of biases.

"A chatbot is only as good a diagnostician as the experienced clinicians it learns from, and that is also not ideal," she added.

This creates an additional risk of perpetuating errors already embedded in modern medical practice.

Despite the criticism, experts see potential in specialized models. Dr. Bertalan Mesko noted that new versions of chatbots from OpenAI and Anthropic, developed specifically for the healthcare sector, may show better results. However, the key condition for safety remains the implementation of clear national rules, regulatory barriers, and official medical recommendations for improving such systems.
