Dr Eric Topol is concerned that the clinical practice of medicine has deteriorated because it has become big business. There is a ten-to-one ratio of administrators to doctors, and Electronic Health Records (EHR) have made a mess of things. Because clinicians must now plug medical data into EHR, they often spend appointments tending to their computer keyboards instead of their patients. Will artificial intelligence (AI) help? The evidence is inconclusive.
A Discover magazine article, "Will AI Make Medicine More Human?", reports the results of a 2019 study in the Journal of General Internal Medicine: doctors ask patients about their concerns only around a third of the time, and when they do ask, they interrupt within 11 seconds two-thirds of the time. According to Topol:
The electronic health record is the single worst abject failure of modern medicine because it was set up for business purposes — for billing — only, without any regard for what would benefit doctors, patients or any other clinicians.
These short, awkward visits can have significant consequences. A 2014 report from the US Department of Veterans Affairs estimated that around 12 million adults are misdiagnosed in the US each year. Topol, a cardiologist and the founder and director of the Scripps Research Translational Institute, has written three books: The Creative Destruction of Medicine, The Patient Will See You Now and, most recently, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. He suggests that AI could improve healthcare by giving doctors more time to connect with patients. I admire Topol's optimism, but that is not what is happening with AI in clinical settings.
Where AI makes a difference in healthcare
Enterprise forays into AI today are predominantly focused on reducing cost, automating tasks and similar applications. The AI software being sold to healthcare is no different: it creates more intuitive interfaces to the EHR and automates some of the routine processes that consume so much of a clinician's time, such as clinical documentation, order entry and sorting through the in-basket. AI will assist in processing routine requests, like prescriptions and billing, and may also prioritize the items that truly require the clinician's attention.
A roundup of 32 AI in healthcare examples compiled by tech hub community Built In outlines the potential benefit of AI:
In 2015, misdiagnosis of illness and medical error accounted for 10% of all U.S. deaths, so improving the diagnostic process is one of the most promising healthcare applications of AI. Incomplete medical histories and large caseloads can lead to deadly human errors. Immune to those variables, AI can predict and diagnose disease faster than most medical professionals. In one study, for example, a deep learning model diagnosed breast cancer more accurately than 11 pathologists.
There is a host of flashy AI applications for diagnosis and treatment recommendation flowing from Silicon Valley, and an equal number addressing cost, readmission and outcomes. Here's a sampling of those mentioned in the Built In article:
- BenevolentAI — uses AI in drug development. As well as working with major pharmaceutical groups to license drugs, it is also partnering with charities to develop easily transportable medicines for rare diseases.
- Buoy Health — uses an AI chatbot to collect symptoms from the patient and recommend care; used by Harvard Medical School, among others.
- Enlitic — streamlines radiology diagnoses using deep learning analysis of a range of medical data including radiology images, blood tests, EKGs, genomics and patient medical history. Ranked 5th smartest artificial intelligence company in the world by MIT.
- Johns Hopkins Hospital — partnered with GE to improve patient operational flow through predictive AI. The project has brought a 60% improvement in admitting patients and a 21% increase in pre-noon patient discharges.
- Qventus — applies AI to operational challenges in healthcare, such as prioritizing patients in emergency rooms according to their symptoms, tracking hospital waiting times or charting the fastest ambulance routes.
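To make the "prioritizing patients according to their symptoms" idea concrete, here is a minimal illustrative sketch of severity-scored triage using a priority queue. The symptom weights and function names are my own invention for illustration; no vendor's actual model works from a hand-coded table like this — real systems learn these scores from clinical data.

```python
import heapq

# Hypothetical severity weights for reported symptoms (illustrative only;
# a real triage model would be trained, not hand-coded).
SYMPTOM_SEVERITY = {
    "chest pain": 10,
    "shortness of breath": 9,
    "high fever": 6,
    "sprained ankle": 2,
    "sore throat": 1,
}

def triage_score(symptoms):
    """Sum severity weights for a patient's reported symptoms."""
    return sum(SYMPTOM_SEVERITY.get(s, 0) for s in symptoms)

def prioritize(patients):
    """Return patient names ordered most-urgent first.

    `patients` is a list of (name, [symptoms]) pairs. The arrival
    index breaks ties so equal scores keep arrival order.
    """
    # heapq is a min-heap, so negate the score for max-first ordering.
    heap = [(-triage_score(symptoms), i, name)
            for i, (name, symptoms) in enumerate(patients)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

queue = prioritize([
    ("Alice", ["sore throat"]),                      # score 1
    ("Bob", ["chest pain", "shortness of breath"]),  # score 19
    ("Carol", ["high fever"]),                       # score 6
])
# queue is ["Bob", "Carol", "Alice"]
```

The point of the sketch is the shape of the problem, not the scores: once an AI model can assign a severity estimate, ordering the waiting room is a solved data-structure problem.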
The downside of AI in healthcare
There is a downside, to which AI app vendors typically pay insufficient attention. It is highlighted in "Will Robots Replace Doctors?", a Brookings article co-authored by the chair of the Department of Medical Ethics and Health Policy at the University of Pennsylvania:
If we are not careful, AI could fail to make healthcare better and instead unintentionally exacerbate many of the worst aspects of our current healthcare system. Using machine and deep learning, AI systems analyze enormous amounts of data to make predictions and recommend interventions. Advances in computing power have enabled the creation and cost-effective analysis of large datasets of payer claims, electronic health record data, medical images, genetic data, laboratory data, prescription data, clinical emails, and patient demographic information to power AI models ...
According to a 2017 report by the National Academy of Medicine on healthcare disparities, non-whites continue to experience worse outcomes for infant mortality, obesity, heart disease, cancer, stroke, HIV/AIDS, and overall mortality. Shockingly, Alaskan Natives suffer from 60% higher infant mortality than whites. And worse, AIDS mortality for African Americans is increasing. Even among whites, there are substantial geographic differences in outcomes and mortality. Biases based on socioeconomic status may be exacerbated by incorporating patient-generated data from expensive sensors, phones, and social media.
The authors warn that AI systems that ingest all that data without verifying its accuracy, and then work from the patterns baked into it, risk repeating, if not exacerbating, these unjust disparities.
A burning question that Eric Topol hasn't addressed: should patients know when an AI model is deciding their care? These software products are described as decision support systems, but are they supporting the decision or making it? In a previous article, I described the issues with 'precision medicine', especially the difficulty a healthcare provider has accumulating enough data for a model to produce accurate results. For reasons of privacy, and because of simply incompatible data structures, combining data from many healthcare organizations is extremely difficult.
AI Doc is not ready for prime time. Suppose, for example, that your doctor has prescribed metformin off-label to treat inflammation. An AI system may incorrectly conclude that you have diabetes, and its cascading recommendations would be wrong from that point on. Medical records, moreover, are notoriously incomplete.
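The metformin failure mode above can be shown in a few lines. This is a deliberately naive, hypothetical rule (not any real product's logic): infer a diagnosis purely from the medication list, with no way to represent off-label use.

```python
# Illustrative only: a naive medication-to-diagnosis rule table.
# Real clinical AI is far more sophisticated, but the same failure
# occurs whenever training data lacks off-label context.
MED_IMPLIES_DX = {
    "metformin": "type 2 diabetes",
    "insulin": "diabetes",
}

def naive_infer_diagnoses(medications):
    """Infer conditions purely from prescriptions (a flawed heuristic)."""
    return {MED_IMPLIES_DX[m] for m in medications if m in MED_IMPLIES_DX}

# A patient taking metformin off-label for inflammation:
inferred = naive_infer_diagnoses(["metformin"])
# inferred == {"type 2 diabetes"} — the patient is wrongly labeled
# diabetic, and every downstream recommendation inherits the error.
```

The rule is "correct" for most patients, which is exactly what makes the error hard to catch: the model has no signal distinguishing on-label from off-label prescribing.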
Against this, I balance the fact that doctors do a poor job of this too. An iatrogenic disorder occurs when the harmful effects of a therapeutic or diagnostic regimen cause pathology independent of the condition for which the regimen was advised; in other words, the patient is harmed by medical care itself. According to a Johns Hopkins study, 251,454 deaths per year stem from medical error, making it the third leading cause of death in the US, just behind heart disease and cancer.
To sum up, here are some key questions that the debate around the use of AI in healthcare must address:
- Should patients be advised that their diagnosis and treatment are recommendations from a computer?
- Will the use of AI software actually change the way doctors practice, for the better?
- The Dean of the Columbia University Medical School once said to me, "There are as many doctors who graduated at the bottom of the class as the top." If AI can improve their performance, how will it be determined?
- Each of the AI applications cited above does one narrow thing. If medicine isn't practiced holistically, how can AI improve it?