A recent study published in the journal Science suggests that AI tools capable of analyzing vast amounts of medical information can diagnose emergency room patients as accurately as, or even better than, human physicians in certain circumstances, even as the study makes clear that AI will not replace human physicians anytime soon.
CBC News reports that the research, which examined the performance of large language models in emergency room settings, represents a significant development in the ongoing integration of AI into healthcare systems. However, medical professionals emphasize that these technological advances are meant to complement, not replace, human expertise in patient care.
Dr. Adam Rodman, the lead author of the study and a physician at Beth Israel Deaconess Medical Center in Boston, explained that the research utilized a specialized type of artificial intelligence known as a reasoning model. Unlike standard large language models, these systems are designed to explain their thinking process before providing a final answer, mimicking the problem-solving approach of human doctors.
“A reasoning model is different from your standard large language model because it has been instructed to think out loud, to solve problems like humans,” Rodman said. He noted that when examining how these reasoning models make diagnoses, the approach resembles the steps a doctor would take to solve a medical problem.
The study conducted multiple trials using both real patient cases and synthetic scenarios, drawing on unstructured data from emergency department records. Researchers tested OpenAI’s o1-preview model at three critical points of patient interaction: initial triage, doctor examination in the emergency room, and admission to either the medical floor or intensive care unit. Importantly, all testing relied solely on recorded data; the model was never part of actual doctor-patient interactions and did not affect real diagnoses or treatments.
At each stage of care, the artificial intelligence model was asked to identify the most likely diagnosis based on symptom presentation. The results showed that the model could identify either the exact diagnosis or a very close approximation, sometimes exceeding the performance of participating physicians.
“It doesn’t mean that computers can do medicine, but within this narrow task it can solve diagnoses better than humans,” Rodman stated.
Despite the promising results, medical professionals maintain that artificial intelligence cannot replicate the comprehensive assessment provided by trained physicians. Dr. Amol Verma, an internal medicine physician and scientist at Toronto’s St. Michael’s Hospital, called it a false comparison to suggest that AI tools are superior to doctors.

“I don’t know a single doctor who makes all of their decisions based purely on text information,” Verma said, emphasizing that physical examination remains crucial to forming accurate diagnoses.
Khatib illustrated this point with a recent patient case where triage information suggested one diagnosis based on symptoms, but listening with a stethoscope revealed a different condition entirely. She stressed that artificial intelligence cannot perform physical procedures such as intubating patients or applying casts to injured limbs.
The study does face certain limitations and challenges. Rodman acknowledged that more research is needed to understand how humans and machines can effectively collaborate in emergency medical environments, including more robust clinical trials to ensure real-world efficacy and safety.
AI’s impact on every aspect of the American economy, ranging from manufacturing to the doctor’s office, is one of the primary themes of the instant bestseller by Breitbart News Social Media Director Wynton Hall, Code Red: The Left, the Right, China, and the Race to Control AI.
Senator Marsha Blackburn (R-TN), who was named one of TIME’s 100 Most Influential People in AI, praised CODE RED as a “must-read.” She added: “Few understand our conservative fight against Big Tech as Hall does,” making him “uniquely qualified to examine how we can best utilize AI’s enormous potential, while ensuring it does not exploit kids, creators, and conservatives.” Award-winning investigative journalist and Public founder Michael Shellenberger calls CODE RED “illuminating” and “alarming,” describing the book as “an essential conversation-starter for those hoping to subvert Big Tech’s autocratic plans before it’s too late.”
Read more at CBC News here.
Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.