This week it is my pleasure to co-write the blog with Fresenius Medical Care North America data scientist, Andy Long. We are writing about artificial intelligence (AI) based on three recent Journal of the American Medical Association (JAMA) articles (1-3). These articles explore what we have gained from introducing AI into healthcare delivery, what is missing or suboptimal today, and what potential value hasn’t been explored or exploited yet.
Early healthcare gains from AI leverage capabilities that machines have and people lack. The human mind can only investigate a few variables at a time, whereas machines can analyze thousands of variables simultaneously. For example, how long would it take you to find the maximum number on a PowerPoint slide filled with thousands of numbers? It would take you a while, but a computer set to the task could identify the highest number in an instant and could even highlight it for you. While this is a toy example that does not use AI, it demonstrates how machines can help us become more efficient and productive. The strength of intelligent computer systems is the ability to quickly discover subtle patterns in thousands of data points. For example, in the healthcare space, AI could be used to quickly scan the entire charts of a group of patients to identify which patients might have negative outcomes (e.g., re-admission within 30 days).
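To make the toy example concrete, here is a minimal Python sketch (the numbers are randomly generated for illustration) of how quickly a machine can scan thousands of values and flag the largest one:

```python
import random

# Simulate the "PowerPoint slide" filled with thousands of numbers.
numbers = [random.randint(0, 1_000_000) for _ in range(5_000)]

# A computer finds the maximum instantly and can point to where it is.
highest = max(numbers)
position = numbers.index(highest)
print(f"Highest value {highest} found at position {position}")
```

The same scan-and-flag idea is what makes machines useful partners when the "numbers" become thousands of clinical variables per patient.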
AI strengths
AI pattern recognition is utilized today in many areas of clinical care, such as image recognition or text analysis. Great success has been seen in the fields of radiology and dermatology, where AI is able to recognize subtle image pattern changes or abnormalities.
While AI is making a big splash in headlines claiming that AI is ‘beating’ doctors at some tasks, studies suggest that the combination of human and AI input may produce the most successful clinical outcomes. For example, a recent article, “AI outperforms human doctors on a US medical exam,” describes superior AI performance compared with a group of doctors on a standardized exam, but the article fails to emphasize (as is evident from the title) that the combination of doctor and AI had the best performance by far. As AI continues to mature, it will be very exciting to see what the combination of human and AI can accomplish.
Human strengths
Patient care involves compassion, personalization, empathy, justice, and all manner of human attributes that are not part of computer logic today. In “Humanizing Artificial Intelligence,” the JAMA authors suggest that non-traditional data may enable AI to support holistic, personalized care:
“If AI can help with a more astute knowledge of the patient and the ‘framily’ (ie, unpaid caregivers, who are friends and family), it would be the kind of advance that could help clinicians become better at delivering more humanistic care.”
The vision is that new non-traditional datasets that embody and quantify social behavior can give clinicians context about each patient. People will receive better clinical care if the medical team recognizes gaps in social support systems. Can data from social media be processed and presented in a way that gives the care team actionable insights? The related article, “Social Determinants of Health in the Digital Age,” suggests that data points such as participation in online communities, number of online friends, and frequency and types of social media posts are quantifiable and may be tied to patient outcomes.
Robust datasets remain a challenge, which is hard to believe given all the time and energy clinicians devote to EHR documentation. EHRs are still a work in progress when it comes to storing comprehensive, meaningful, and individualized data. The origin of many EHRs as billing-capture tools still haunts users today, resulting in heavy use of templates and other kludgy input screens that yield data that doesn’t always tell the whole story of the individual patient. In “Questions for Artificial Intelligence in Health Care,” the authors describe current EHR data as “generally of low dimensionality…recorded in limited, broad categorizations (eg, diabetes) that omit[s] specificity (eg, duration, severity, and pathophysiologic mechanism).” While templated notes and some problem-list data may have low dimensionality, clinicians often spend a lot of time in the history of present illness, progress notes, nursing notes, and discharge summaries documenting individualized and detailed information. This unstructured data is very useful for AI, and further advances in data capture, natural language processing, and creative data sources will continue to strengthen AI support of clinical care.
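As a rough illustration of how natural language processing might turn unstructured notes into model-ready features, here is a minimal sketch using a simple bag-of-words approach with scikit-learn. The note snippets, labels, and prediction target are invented for illustration and are not drawn from any real EHR or from the methods in the JAMA articles:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical snippets of unstructured clinical notes (made up for illustration).
notes = [
    "Patient reports worsening shortness of breath, lives alone, missed two dialysis sessions.",
    "Stable on current regimen, strong family support, no missed treatments.",
    "Recurrent fluid overload, limited transportation to clinic, prior admission last month.",
    "Doing well post-access placement, adherent to diet and medications.",
]
readmitted_within_30_days = [1, 0, 1, 0]  # illustrative labels

# Turn free text into numeric features, then fit a simple model.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(notes)
model = LogisticRegression().fit(X, readmitted_within_30_days)

# Score a new note for re-admission risk.
new_note = ["Missed dialysis twice this week, no caregiver at home."]
risk = model.predict_proba(vectorizer.transform(new_note))[0, 1]
print(f"Estimated 30-day re-admission risk: {risk:.2f}")
```

In practice, far richer text representations and much larger, carefully governed datasets would be needed, but the basic pipeline of text in, numeric features, risk score out stays the same.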
The authors of “Questions for Artificial Intelligence in Health Care” also point out that EHR data is inherently biased toward patients who are sick or are being cared for routinely in the hospital or outpatient clinic. On the other hand, it is important to note that robust data captured from wearables and personal devices may be biased toward healthier individuals. The data underpinning AI development shapes the quality of the AI output.
Putting AI to work
Let’s say we can find big quantities of robust data and put AI to work as a human partner in clinical care. Other than examining x-ray images, how will that work? For example, consider the use case of a nurse or physician working through an endless list of charts and intervening where needed, perhaps with the goal of reducing re-admissions. AI can help clinical staff be more human by automating the boring stuff (e.g., flipping through the chart) and allowing more time for the important stuff (e.g., spending time with the patient). One thing AI is great for is helping to prioritize worklists by flagging patients at high risk for an event (e.g., re-admission).
In a sense, AI can be used as a red-flag detector that indicates when to spend a little extra time on a chart or with a patient. A machine-learning model could be trained on historical records to predict the probability that a patient will be re-admitted within 30 days. This model could then be used to sort patients from highest to lowest risk, and the clinician could start at the top of the list and work down. As a result, the clinician is more effective because the highest-risk patients are addressed first. Using interpretable machine-learning models, the AI could also highlight which areas of the chart are driving the predicted risk of re-admission. This is similar to the toy problem above of finding and highlighting the maximum number, except here the AI is surfacing the highest-risk patients and the reasons behind their risk. By having the AI sort the list and highlight red flags, the clinician can be efficient with chart review, freeing up more time to spend with individual patients and creating the opportunity to reach more patients.
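The sketch below illustrates that worklist idea with the same caveats as before: the chart features, patients, and labels are hypothetical, and a plain logistic regression stands in for whatever interpretable model a team might actually choose. It scores each patient's 30-day re-admission risk, sorts the worklist from highest to lowest, and prints the feature weights that act as red flags:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical chart features (made up for illustration): one row per patient.
feature_names = ["prior_admissions", "missed_treatments", "age_over_75", "lives_alone"]
X_train = np.array([
    [3, 2, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 0],
    [4, 3, 0, 1],
    [0, 1, 0, 0],
    [2, 0, 1, 1],
])
y_train = np.array([1, 0, 0, 1, 0, 1])  # 1 = re-admitted within 30 days

model = LogisticRegression().fit(X_train, y_train)

# Score today's worklist and sort from highest to lowest risk.
worklist = {"Patient A": [2, 1, 1, 1], "Patient B": [0, 0, 0, 0], "Patient C": [3, 2, 0, 1]}
scores = {name: model.predict_proba([row])[0, 1] for name, row in worklist.items()}
for name, risk in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: predicted 30-day re-admission risk = {risk:.2f}")

# An interpretable model also exposes which features drive the prediction ("red flags").
for feature, weight in sorted(zip(feature_names, model.coef_[0]), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{feature}: weight = {weight:+.2f}")
```

A clinician working from the sorted list sees the likeliest re-admissions first, along with the chart areas that most influenced each score.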
Better together
AI is here to stay and there’s no turning back. Our data sources are not yet as good as they need to be, and they will improve, but we already generate more clinical data on every patient than we can manage with human cognition alone. In addition, the volume and variety of relevant data are growing rapidly as we come to understand the impact of social determinants of health on health outcomes. As noted in the Acumen blog post from December 31, 2018, race, ethnicity, socioeconomic status, health insurance, and residential neighborhoods create disparity in the incidence, progression, and treatment of chronic kidney disease (CKD). These social conditions “govern access to resources that influence health and disease.” Such data will need to be part of patient care.
Integrating AI with humans will improve healthcare by providing the right care to the right patient at the right time. Integrating humans with AI will provide nurturing and compassionate care to every patient every time. We need each other to succeed.
Andrew Long is a Data Scientist at Fresenius Medical Care North America (FMCNA). Andrew holds a PhD in biomedical engineering from Johns Hopkins University and a Master’s degree in mechanical engineering from Northwestern University. Andrew joined FMCNA in 2017 after participating in the Insight Health Data Fellows Program. At FMCNA, he is responsible for building predictive models using machine learning to improve the quality of life of every patient who receives dialysis from FMCNA. He is currently creating a model to predict which patients are at the highest risk of imminent hospitalization.
Dugan Maddux, MD, FACP, is the Vice President for CKD Initiatives for FMCNA. Before her foray into the business side of medicine, Dr. Maddux spent 18 years practicing nephrology in Danville, Virginia. During this time, she and her husband, Dr. Frank Maddux, developed a nephrology-focused Electronic Health Record. She and Frank also developed Voice Expeditions, which features the Nephrology Oral History project, a collection of interviews of the early dialysis pioneers.
References:
- Israni ST, Verghese A. Humanizing Artificial Intelligence. JAMA. 2019;321(1):29-30.
- Abnousi F, Rumsfeld JS, Krumholz HM. Social Determinants of Health in the Digital Age. JAMA. 2019;321(3):247-248.
- Maddox TM, Rumsfeld JS, Payne PRO. Questions for Artificial Intelligence in Health Care. JAMA. 2019;321(1):31-32.