Can AI Boost Accuracy of Doctors’ Diagnoses?
AI can’t yet help doctors improve their ability to diagnose complex conditions, a sobering new study has found.
Doctors had about the same diagnostic accuracy whether or not they were using ChatGPT Plus, according to results published recently in the journal JAMA Network Open.
However, the AI outperformed doctors when allowed to diagnose on its own, the researchers noted.
“Our study shows that AI alone can be an effective and powerful tool for diagnosis,” said researcher Dr. Andrew Parsons, who oversees the teaching of clinical skills to medical students at the University of Virginia School of Medicine.
“We were surprised to find that adding a human physician to the mix actually reduced diagnostic accuracy, though it improved efficiency,” Parsons said. “These results likely mean that we need formal training in how best to use AI.”
For the study, researchers provided 50 doctors with case studies based on real-life patients. These cases included details about medical history, physical exams and lab test results.
The doctors were randomly assigned to two groups — one that diagnosed the patients’ conditions based solely on the info available and standard reference materials, and another that used ChatGPT Plus to help inform their diagnosis.
Doctors using ChatGPT returned an accurate diagnosis about 76% of the time, compared with about 74% for those not aided by AI, results show.
The ChatGPT group did reach their diagnoses slightly faster — about 8.6 minutes, compared with 9.4 minutes for those without AI help, researchers found.
When ChatGPT Plus was given the case studies on its own, it achieved an accuracy of more than 92%.
However, researchers caution that the AI would likely fare worse in real life, where it would have to diagnose patients on the fly rather than evaluate prepared case studies.
More study is needed to assess the ability of AI to diagnose medical problems, particularly in terms of the downstream effects of diagnoses and the treatment decisions that result from them, researchers said.
“As AI becomes more embedded in healthcare, it’s essential to understand how we can leverage these tools to improve patient care and the physician experience,” Parsons said. “This study suggests there is much work to be done in terms of optimizing our partnership with AI in the clinical environment.”
More information
The Cleveland Clinic has more about AI in health care.
SOURCE: University of Virginia, news release, Nov. 13, 2024
Source: HealthDay
Copyright © 2024 HealthDay. All rights reserved.