Can AI be trusted with health issues: what a new Harvard study revealed

A study by Harvard Medical School has shown that in certain scenarios modern AI models make a primary diagnosis more accurately than practicing doctors. Against the backdrop of medical errors, the difficulty of protecting patient rights, and a crisis of trust, people are increasingly seeking alternative ways to verify medical diagnoses.

A new study by Harvard Medical School and Beth Israel Deaconess Medical Center has shown that in certain scenarios, modern artificial intelligence models outperform real doctors in the accuracy of primary diagnosis. At the same time, patients around the world have long been using AI as a tool to verify a diagnosis made by a doctor. This is especially relevant for people who have faced incorrect diagnoses, ineffective treatment, or the tragedy of losing a loved one due to a medical error. Why people are increasingly checking medical conclusions through artificial intelligence and how this is changing attitudes toward medicine, read in the UNN article.

Just a few years ago, the idea that artificial intelligence could diagnose more accurately than a doctor sounded like the plot of a science fiction movie. However, in 2026, this is already a subject of serious scientific discussion. The journal Science published a study by a team from Harvard Medical School and Beth Israel Deaconess Medical Center, in which scientists compared the capabilities of OpenAI's GPT models with the work of hundreds of doctors at various levels of training.

Researchers analyzed both classic clinical cases and real patient cases in emergency departments. In one experiment, an AI model was compared with two general practitioners based on 76 real cases from the Beth Israel hospital emergency department. The quality of diagnoses was evaluated by other doctors "blindly" – without information about whether the diagnosis was made by a human or artificial intelligence.

At the stage of primary patient triage in the emergency department, the AI model suggested an "accurate or very close to accurate diagnosis" in 67% of cases. For comparison, one of the doctors achieved such a result in 55% of cases, the other in 50%. 

At every diagnostic point of contact, the AI model showed either better results than the doctors or was on the same level as them

- the study states.

The advantage of AI was particularly noticeable at the first stage of decision-making, when there is still little information about the patient and almost no time for reflection: precisely where the human factor, fatigue, or error can cost a life.

At the same time, the authors of the study emphasize that it is not a matter of replacing doctors with artificial intelligence today. The study does not claim that AI is ready to independently make decisions regarding the life and death of patients. On the contrary, the scientists explicitly write about the "urgent need" for large-scale clinical trials in real-world medical conditions.

Here the main question arises – if people increasingly see that technology can make mistakes less often than a doctor, what happens to global trust in the medical system itself?

In fact, the world is gradually entering a new reality where the patient no longer perceives the doctor as the "sole source of truth." People are starting to check prescriptions, upload medical documents to AI services, and seek alternative conclusions and a "second opinion."

This is particularly evident in countries where there are problems with the protection of patient rights, access to medical documentation, and the effectiveness of investigating possible medical errors. As UNN previously wrote, it is on this basis that independent public patient initiatives, such as the StopOdrex movement, began to appear in Ukraine.

The reasons for their appearance are not only individual conflicts surrounding medical institutions. This is a societal reaction to a systemic crisis of trust. After all, when a person is not sure that the state is capable of quickly and effectively protecting their rights, they begin to look for alternative mechanisms of influence and mutual support.

This is exactly how the public initiative StopOdrex functions, collecting stories from patients and families who report negative treatment experiences at the private Odrex clinic.

As a reminder

As UNN previously reported, StopOdrex activists claim that their platforms have become a place where people could for the first time openly talk about incorrect diagnoses, post-operative complications, lack of communication with doctors, and problems with obtaining medical documentation.
