Machine learning and other data science techniques are used in many ways in healthcare. From image processing that detects abnormalities in x-rays or MRIs to algorithms that pull from electronic medical records to detect disease risk, presence, or progression, machine-learning techniques can improve patient care. However, as a data scientist in healthcare, I’ve discovered that putting ideas into practice is often the hardest part of getting value out of a data science project.
Data analytics and clinical care don’t naturally go together. Data analysts typically don’t know much about taking care of patients, and clinicians often aren’t familiar with the power of machine-learning algorithms. Here are a few things I’ve learned that help data analytics improve healthcare delivery.
1. Take a holistic view
Machine learning is a branch of artificial intelligence that consists of algorithms that can learn from historical data to make predictions about the future. These algorithms are typically “trained” to predict specific kinds of outcomes (for example, hospitalization or disease-state progression) and look for patterns in the historical data associated with these outcomes. These patterns can be simple linear correlations between a single variable and the outcome of interest, or they can be complex relationships relying on interactions between many different variables. Once these algorithms are trained, they can predict future outcomes based on present data. The accuracy of these predictions depends on the quality of the data, how easy the outcome is to predict, and the skill and care of the analytics professional implementing the algorithm.
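To make the training step concrete, here is a minimal sketch using scikit-learn on a hypothetical set of historical patient records. The file name, feature columns, and outcome label are placeholders I’ve invented for illustration; a real clinical model would involve far more careful feature engineering, validation, and governance.

```python
# A minimal sketch (not a production model): train a classifier on
# hypothetical historical data to predict a hospitalization outcome.
# The file name, feature names, and label below are placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Historical records: one row per patient, with a known outcome column.
data = pd.read_csv("historical_patient_records.csv")
features = ["age", "systolic_bp", "hba1c", "prior_admissions"]  # hypothetical
X = data[features]
y = data["hospitalized_within_90_days"]  # hypothetical outcome label

# Hold out records the model never sees during training to estimate accuracy.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)               # "training": learn patterns from history
risk = model.predict_proba(X_test)[:, 1]  # predicted risk for unseen patients
print("AUC:", roc_auc_score(y_test, risk))
```

Even in a toy example like this, the held-out test set matters: accuracy reported on the same data the model was trained on tells a clinician very little about how it will behave on their patients.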
The success of a machine-learning algorithm depends on a deep understanding of how it will be used and the care process it will fit into. In addition, it’s helpful for data analysts to be connected to the clinicians who will be using the algorithm to make diagnoses and provide treatment. From a data analytics standpoint, predictive model accuracy is important, but it’s equally important that the algorithm supports clinical care.
Finally, we have to consider where a machine-learning algorithm can be used in a clinician’s workflow. This often takes significant back-and-forth to learn the clinicians’ workflow and to ensure clinicians understand what the algorithm can (and can’t) provide. The problem as originally posed might be refined or changed completely depending on which problems the clinician is trying to solve. Algorithms have limitations, and those limitations should be made clear to the clinicians using them.
2. Be transparent
An algorithm suggesting a diagnosis to a clinician without any justification is rarely actionable. A full chart review and/or physical exam may be needed to understand what the algorithm identified. If nothing is found, what does the clinician conclude? Should the algorithm’s assessment be discarded or is it picking up on something the clinician doesn’t see?
If there’s no easy way for the clinician to understand the algorithm logic, then it may be dismissed since it provides no actionable information. For this reason, there needs to be some level of transparency into a machine-learning model’s prediction if it’s going to be used by a clinician.
Machine-learning models are often complex, and it’s difficult to interpret exactly why the algorithm predicts a specific outcome. To help overcome this, tools like LIME or SHAP can be used to highlight the clinical features that have the biggest influence on the algorithm’s prediction for a specific patient. LIME and SHAP estimate how much each variable contributes to an individual prediction, surfacing the ones that push it furthest up or down. For example, one of these tools may show that a particular patient is predicted to be at high risk of hospitalization largely because of their most recent blood pressure reading. When properly presented and explained, predictive models can be powerful tools for directing clinicians to possible health issues.
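As an illustration, a sketch along these lines (assuming the shap package and the hypothetical model and data from the earlier example) lists how much each feature pushes one patient’s prediction up or down:

```python
# A minimal sketch: explain a single patient's prediction with SHAP.
# Assumes `model`, `X_train`, and `X_test` from the earlier (hypothetical) example.
import shap

explainer = shap.Explainer(model, X_train)  # SHAP picks a suitable explainer for the model
explanation = explainer(X_test.iloc[:1])    # explain the first patient in the test set

# Per-feature contributions: positive values push the predicted risk up,
# negative values push it down (for a linear model, in log-odds units).
for feature, contribution in zip(X_test.columns, explanation.values[0]):
    print(f"{feature}: {contribution:+.3f}")
```

In practice, a visual form of the same explanation (for example, SHAP’s waterfall plot) is usually easier to present to a clinician than raw numbers.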
3. Practice clinical judgment
If an algorithm suggests a particular treatment or diagnosis, what happens when it’s wrong? When human clinicians make mistakes, liability is relatively clear-cut. Data scientists usually aren’t trained clinicians, and even if they were, the models they create certainly aren’t. When an algorithm is used, a clinician should have the final say in whether an intervention is warranted. The algorithm may alert a clinician to an issue they didn’t notice and can support clinical judgment, but in most healthcare applications a human should make the final call on treatments.
Machine-learning algorithms tend to enhance clinical judgment, with improved outcomes when humans and algorithms work together. Algorithms draw on all of the available patient data and crunch numbers in a way that humans can’t. However, humans often have access to additional information that an algorithm does not, such as the way a patient looks or acts and other hard-to-quantify facts about their well-being. We get the best outcomes when we apply the strengths of both.
4. Build relationships
One of the biggest barriers to adoption of data science methods is getting buy-in from clinicians. And they’re completely right to be skeptical—they have an incredible level of expertise and familiarity with their patients after all. They also have years of training that teaches them to get to the root of the problem and not to simply trust the results of a “black box” algorithm.
An algorithm that just tells clinicians what they already know is useless at best, but it may also be perceived as condescending and lead to resentment of the analytics team and the predictive modeling process. A good partnership includes the analytics team taking time to talk to clinicians to understand their issues and determine what kind of tool will help them. The analytical tool should support the clinical work and should improve with refinement based on feedback once it is in clinical use.
There is enormous potential for data science to make a vast difference in healthcare. Data analysts and clinicians must work together to solve the right problems with machine-learning algorithms. These algorithms must be implemented properly in the clinical workflow to gain the benefit of combining human and analytical insights. The best algorithms are useless if they aren’t part of a workflow that impacts and improves patient care.
Machine-learning algorithms are likely to become a bigger part of clinical care as the FDA makes it easier to get approval for devices that incorporate these algorithms. In a recent speech, physician and FDA commissioner Scott Gottlieb said, “One of the most promising digital health tools is artificial intelligence, particularly efforts that use machine learning.”
Under Dr. Gottlieb’s leadership, the FDA has approved decision-support software that uses machine-learning algorithms, and they expect many more submissions for devices with artificial intelligence. Hospitals and clinicians are collaborating with analytics companies to make AI devices “smarter”, leveraging clinical information and data to improve algorithm performance. AI is a lot more intelligent when clinicians and data analysts partner.
Thomas Blanchard is currently Data Science Lead at Fresenius Medical Care North America. Tommy’s team uses data science broadly across the organization to create predictive models and advanced analytics support for a diverse set of company needs. Central to his team’s goals is the use of machine learning to improve medical care of the many chronically ill patients Fresenius provides care for.