Cedars-Sinai study reveals distinct roles for AI and physicians in virtual care

Do physicians or artificial intelligence (AI) offer better treatment recommendations for patients examined through a virtual urgent care setting? A new Cedars-Sinai study shows physicians and AI models have distinct strengths.

The late-breaking study presented at the American College of Physicians Internal Medicine Meeting and published simultaneously in the Annals of Internal Medicine compared initial AI treatment recommendations to final recommendations of physicians who had access to the AI recommendations but may or may not have reviewed them.

"We found that initial AI recommendations for common complaints in an urgent care setting were rated higher than final physician recommendations. AI, for example, was especially successful in flagging urinary tract infections potentially caused by antibiotic-resistant bacteria and suggesting a culture be ordered before prescribing medications."

Joshua Pevnick, MD, MSHS, co-director of the Cedars-Sinai Division of Informatics, associate professor of Medicine and co-senior author of the study

However, Pevnick said that while AI was shown to be better at identifying critical red flags, "physicians were better at eliciting a more complete history from patients and adapting their recommendations accordingly."

The retrospective study was conducted using data from Cedars-Sinai Connect, a virtual primary and urgent care program that began in 2023. An extension of Cedars-Sinai's in-person care, Cedars-Sinai Connect aims to expand virtual healthcare for patients in California through a mobile app that allows individuals to quickly and easily access Cedars-Sinai experts for acute, chronic and preventive care.

The study reviewed 461 physician-managed visits with AI recommendations from June 12 through July 14, 2024. Key medical issues addressed during these virtual urgent care visits involved adults with respiratory, urinary, vaginal, vision or dental symptoms.

Patients using the mobile app initiate visits by entering their medical concerns and, for first-time users, providing demographic information. An expert AI model conducts a structured dynamic interview, gathering symptom information and medical history. On average, patients answer 25 questions in five minutes.

An algorithm uses the patient's answers as well as data from the patient's electronic health record to provide initial information about conditions with related symptoms. After presenting patients with possible diagnoses to explain their symptoms, the mobile app allows patients to initiate a video visit with a physician.

The algorithm also suggests diagnoses and treatment recommendations that can be viewed by the Cedars-Sinai Connect treating physician, though at the time of the study, Cedars-Sinai Connect required physicians to scroll down to view them.

"The major uncertainty of this study is whether physicians scrolled down to view the prescribing, ordering, referral or other management suggestions made by AI, and whether they incorporated these recommendations into their clinical decision-making," said Caroline Goldzweig, MD, Cedars-Sinai Medical Network chief medical officer and co-senior author of the study. "The fact that the AI recommendations were often rated as higher quality than physician decisions, however, suggests that AI decision support, when implemented effectively at the point of care, has the potential to improve clinical decision-making for common and acute conditions."

The AI system used for Cedars-Sinai Connect is developed by K Health, which created the technology to reduce the burdens of clinical intake and data entry, allowing doctors to focus more on patient care. K Health and Cedars-Sinai developed Cedars-Sinai Connect through a joint venture and collaborated on the research study. Investigators from Tel Aviv University, including first author Dan Zeltzer, PhD, also participated in the study.

"We put AI to the test in real-world conditions, not contrived scenarios," said Ran Shaul, co-founder and chief product officer of K Health. "In the reality of everyday primary care, there are so many variables and factors: you're dealing with complex human beings, and any given AI has to deal with incomplete data and a very diverse set of patients."

Shaul said the investigators learned that if you train the AI on the treasure trove of de-identified clinical notes and use day-to-day provider care as an always-on reinforcement learning mechanism, "you can reach the level of accuracy you would expect from a human doctor."

Journal reference:

Zeltzer, D., et al. (2025). Comparison of Initial Artificial Intelligence (AI) and Final Physician Recommendations in AI-Assisted Virtual Urgent Care Visits. Annals of Internal Medicine. doi.org/10.7326/annals-24-03283.

