In January of this
year, Wired magazine published an article about a
collaboration between the Department of Veterans Affairs (VA) and Google parent Alphabet’s DeepMind unit to
create software powered by artificial intelligence that attempts to predict
which patients in the intensive care unit (ICU) are likely to develop acute
kidney injury (AKI). The article stated that more than 50% of adults admitted
to an ICU end up developing AKI, which is life-threatening.
According to the
article, the VA contributed 700,000 medical records
to the project. The goal of the project was to test whether artificial
intelligence could be developed to help doctors better predict which patients
were at risk of developing AKI, so that preventative measures could be taken sooner.
Fast forward to the August issue of the journal Nature and the article “A clinically applicable approach to continuous prediction
of future acute kidney injury”.
It looks like artificial intelligence may in fact be able to help doctors
identify which patients in the ICU are at serious risk of AKI. This study shows
that artificial intelligence can predict kidney failure up to two days before
it occurs. During the research study, the software was able to predict nearly
56% of all serious kidney problems and approximately 90% of those problems
serious enough to require dialysis.
The work is
still in the early stages — there were two false positives for every true
positive — but it certainly advances what’s known about how deep learning may
be helpful in clinical healthcare practice.
According to Dr. Dominic King, DeepMind’s
health lead and coauthor of the research paper, kidney issues are particularly
tricky to identify in advance. Today, doctors and nurses are alerted to acute
kidney injury via a patient’s blood test, but by the time that information
comes through, the organ may already be damaged. That is why he is hopeful about the
long-term value of these types of predictive solutions. The team hopes a
similar model can be developed to identify other major causes of disease and
deterioration, including sepsis, a life-threatening infection.
I live in Palo Alto, within walking distance of Google and of the VA hospital that an article in the Financial Times identified as the planned location for a clinical trial of this algorithm. I also have a particular interest in how to develop effective clinical decisioning systems. At Decision Management Solutions we have built a couple of interesting prototypes and recently written a paper for the Department of Health and Human Services on this topic.
The value of a prediction like this is that
it could help medical professionals better triage patients and get those
who require intervention on a treatment plan right away. Doing so could
potentially save hundreds of thousands of lives each year and lessen the need
for invasive, uncomfortable treatments such as dialysis or a kidney transplant.
The prediction itself cannot do this – it’s just a prediction – but acting on
the prediction can. Making the prediction readily available to medical
professionals MIGHT change their behavior. If they understand the prediction,
if the prediction is clearly explained, if there is nothing about the patient
that triggers their own personal experience, if their first few cases aren’t
false positives… if, if, if.
To take advantage of this kind of prediction, you need to embed it
in a clinical decision support framework. Working with clinicians,
you can develop a model of how they do triage for patients today and how they
select appropriate treatments for a patient. This model will be different in
each clinical setting – the VA is likely to do this differently from your local
hospital network, for instance. The availability of facilities, distance to
them, the organization of specialties, and much more go into this decision. And if
the triage decision requires medical judgment that can’t be automated,
that judgment too can be an input to the system.
With a clear understanding of the decision, you can improve it
using the prediction. The medical professionals can see how they would change
the decision given the prediction, its accuracy and its false positives.
Instead of simply showing them the prediction and hoping they will change their
decision, the system can change its recommendation in alignment with their
approach.
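To make this concrete, here is a minimal sketch of what such decision logic might look like. Everything in it is an illustrative assumption — the risk thresholds, the field names, and the recommendations are invented for this example and come from neither the DeepMind paper nor any real clinical system; in practice the rules would be developed with, and validated by, clinicians.

```python
# Hypothetical sketch: embedding a model's AKI risk score in explicit
# triage decision logic, instead of just displaying the raw number.
# All thresholds, names, and recommendations are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Patient:
    predicted_aki_risk: float            # model output, 0.0 - 1.0 (assumed)
    on_dialysis: bool                    # current treatment status
    clinician_override: Optional[str] = None  # manual judgment, if recorded

def triage_recommendation(p: Patient) -> str:
    """Turn a prediction into an actionable recommendation.

    The decision logic - not the prediction - determines the action,
    and it can encode how each clinical setting actually works.
    """
    # Medical judgment that can't be automated stays an input.
    if p.clinician_override is not None:
        return p.clinician_override
    if p.on_dialysis:
        return "continue current renal care plan"
    # Thresholds would be agreed with clinicians, reflecting the model's
    # known error profile (e.g. two false positives per true positive).
    if p.predicted_aki_risk >= 0.7:
        return "order renal panel and request nephrology consult"
    if p.predicted_aki_risk >= 0.4:
        return "re-test creatinine within 12 hours"
    return "standard monitoring"

print(triage_recommendation(Patient(predicted_aki_risk=0.8, on_dialysis=False)))
```

The point of the sketch is the separation of concerns: the model supplies a score, while the recommendation comes from decision rules that clinicians can inspect, tune, and override — which is what aligns the system with their approach instead of asking them to interpret a bare probability.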
Remember, AI, Machine Learning and predictive models don’t DO
anything – they just make predictions. If you want to save lives (or engage
customers, prevent fraud, manage risk or anything else), then you have to make decisions
with those predictions.
Maybe I should drop by the VA and tell them…