Study Shows ChatGPT Outperforms Physicians in Giving Medical Advice
A new study co-authored by Assistant Professor of Computer Science Adam Poliak found that a panel of licensed healthcare professionals overwhelmingly preferred responses to medical questions from the artificial intelligence (AI) assistant ChatGPT over the answers actual physicians gave to the same questions on an online forum.
The study, published last week in JAMA Internal Medicine and led by John W. Ayers of the Qualcomm Institute at the University of California San Diego, provides an early glimpse into the role that AI assistants could play in medicine. The panel preferred ChatGPT's responses 79% of the time and rated them as higher quality and more empathetic.
“While our study pitted ChatGPT against physicians, the ultimate solution isn’t throwing your doctor out altogether,” says Poliak in the article announcing the findings. “Instead, a physician harnessing ChatGPT is the answer for better and empathetic care.”
To obtain a large and diverse sample of healthcare questions and physician answers that did not contain identifiable personal information, the team turned to a social media forum where millions of patients publicly post medical questions and doctors respond: Reddit's r/AskDocs.
r/AskDocs is a subreddit with approximately 452,000 members, where users post medical questions and verified healthcare professionals submit answers. While anyone can respond to a question, moderators verify healthcare professionals' credentials, and each response displays the respondent's level of credential. The result is a large and diverse set of patient medical questions and accompanying answers from licensed medical professionals.
While some may wonder if question-answer exchanges on social media are a fair test, team members noted that the exchanges were reflective of their clinical experience.
The team randomly sampled 195 exchanges from r/AskDocs in which a verified physician responded to a public question. The team provided each original question to ChatGPT and asked it to author a response. A panel of three licensed healthcare professionals then assessed each question and the corresponding responses, blinded to whether a response originated from a physician or ChatGPT. They compared responses on information quality and empathy and noted which one they preferred.
The panel of healthcare professional evaluators preferred ChatGPT responses to physician responses 79% of the time.
“ChatGPT messages responded with nuanced and accurate information that often addressed more aspects of the patient’s questions than physician responses,” said Jessica Kelley, M.S.N., a nurse practitioner with San Diego firm Human Longevity and study co-author.
Additionally, ChatGPT responses were rated significantly higher in quality than physician responses: the proportion of responses rated good or very good was 3.6 times higher for ChatGPT (78.5%) than for physicians (22.1%). ChatGPT responses were also more empathetic: the proportion rated empathetic or very empathetic was 9.8 times higher for ChatGPT (45.1%) than for physicians (4.6%).
“I never imagined saying this,” added Aaron Goodman, M.D., an associate clinical professor at UC San Diego School of Medicine and study coauthor, “but ChatGPT is a prescription I’d like to give to my inbox. The tool will transform the way I support my patients.”
Findings from the study have been covered by a number of media outlets, including the Wall Street Journal.
This article uses content originally published by the Qualcomm Institute at the University of California San Diego. Read the complete article on their website.