Machine Learning Healthcare Applications – 2018 and Beyond

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.


In the broad sweep of AI’s current worldly ambitions, machine learning healthcare applications seem to have topped the list for funding and press over the last three years.

Since early 2013, IBM’s Watson has been used in the medical field, and after winning an astounding series of games against the world’s best living Go player, Google DeepMind‘s team decided to throw its weight behind the medical opportunities of its technology as well.

Many of the machine learning (ML) industry’s hottest young startups are devoting significant portions of their efforts to healthcare, including Nervana Systems (recently acquired by Intel), Ayasdi (raised $94MM as of 02/16), Sentient.ai (raised $144MM as of 02/16), and Digital Reasoning Systems (raised $36MM as of 02/16), among others.

With all the excitement in the investor and research communities, we at Emerj have found that most machine learning executives have a hard time putting a finger on where machine learning is making its mark on healthcare today. We’ve written this article not as a complete catalogue of possible applications, but to highlight a number of current and future uses of machine learning in the medical field, with relevant links to external sources and related Emerj interviews.

Current Machine Learning Healthcare Applications

The list below is by no means complete, but it provides a useful lay of the land of some of ML’s impact in the healthcare industry.

Diagnosis in Medical Imaging

Computer vision has seen some of the most remarkable breakthroughs to come from machine learning and deep learning, and it’s a particularly active healthcare application for ML. Microsoft’s InnerEye initiative (started in 2010) is presently working on image diagnostic tools, and the team has posted a number of videos explaining its developments, including one on machine learning for image analysis.

Deep learning will probably play an increasingly important role in diagnostic applications as the technology becomes more accessible, and as more data sources (including rich and varied forms of medical imagery) become part of the AI diagnostic process.

However, deep learning applications are known to be limited in their explanatory capacity. In other words, a trained deep learning system cannot explain “how” it arrived at its predictions, even when they’re correct. This kind of “black box” problem is all the more challenging in healthcare, where doctors won’t want to make life-and-death decisions without a firm understanding of how the machine arrived at its recommendation (even if those recommendations have proven to be correct in the past).
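To make the “black box” issue concrete, here is a minimal, hypothetical sketch in PyTorch. The tiny network, the random stand-in “scan,” and the two-class labeling are all invented for illustration (this is not any vendor’s diagnostic system); the closest thing the raw model offers as an “explanation” is a gradient saliency map, a grid of pixel influences rather than a clinical rationale.

```python
# Hypothetical sketch: a toy CNN "diagnoses" a scan, and a gradient saliency
# map is the only built-in hint of "why". Model and data are placeholders.
import torch
import torch.nn as nn

# Toy convolutional classifier: one-channel scan -> two classes (normal / abnormal).
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8),
    nn.Flatten(),
    nn.Linear(8 * 8 * 8, 2),
)
model.eval()

# Stand-in for a 64x64 grayscale medical image (random noise here, not real data).
scan = torch.rand(1, 1, 64, 64, requires_grad=True)

logits = model(scan)
predicted_class = logits.argmax(dim=1).item()

# The only "explanation" the raw model offers: which pixels most affect the score.
logits[0, predicted_class].backward()
saliency = scan.grad.abs().squeeze()  # 64x64 map of per-pixel influence

print("Predicted class:", predicted_class)
print("Most influential pixel (row, col):", divmod(int(saliency.argmax()), 64))
```

Even for a well-trained model, that saliency map points at pixels rather than reasons, which is a large part of why explainability remains a sticking point for clinical adoption.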

For readers who aren’t familiar with deep learning but would like an informed, simplified explanation, I recommend listening to our interview with Google DeepMind’s Nando de Freitas.

Treatment Queries and Suggestions

Diagnosis is a very complicated process, and involves – at least for now – a myriad of factors (everything from the color of the whites of a patient’s eyes to the food they had for breakfast) that machines cannot presently collate and make sense of; however, there’s little doubt that a machine might aid physicians in making the right considerations in diagnosis and treatment, simply by serving as an extension of scientific knowledge.

That’s what Memorial Sloan Kettering (MSK)’s Oncology department is aiming for in its recent partnership with IBM Watson. MSK has reams of data on cancer patients and the treatments used over decades, and the system is able to present and suggest treatment ideas or options to doctors dealing with unique future cancer cases – by pulling from what worked best in the past. This kind of intelligence-augmenting tool, while difficult to sell into the hurly-burly world of hospitals, is already in preliminary use today.
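As a rough, hypothetical illustration of the idea (not MSK’s or Watson’s actual method), “suggest what worked in similar past cases” can be sketched as nearest-neighbor retrieval over historical records. Every feature, case, and treatment name below is invented.

```python
# Hypothetical sketch: suggest treatments by retrieving similar historical cases.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Invented historical cases: [age, tumor_size_mm, marker_level], plus the treatment used.
past_cases = np.array([
    [54, 22.0, 1.3],
    [61, 35.5, 2.1],
    [47, 12.0, 0.8],
    [70, 40.0, 2.7],
])
past_treatments = ["regimen A", "regimen B", "surgery only", "regimen B + radiation"]

# In practice the features would be scaled and far richer; this shows only the retrieval idea.
index = NearestNeighbors(n_neighbors=2).fit(past_cases)

new_patient = np.array([[58, 30.0, 1.9]])
_, neighbor_ids = index.kneighbors(new_patient)
print("Treatments used in the most similar past cases:",
      [past_treatments[i] for i in neighbor_ids[0]])
```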

Scaled Up / Crowdsourced Medical Data Collection

There is a great deal of focus on pooling data from various mobile devices in order to aggregate and make sense of more live health data. Apple’s ResearchKit is aiming to do this for Parkinson’s disease and Asperger’s syndrome by allowing users to access interactive apps (one of which applies machine learning for facial recognition) that assess their conditions over time; use of the apps feeds ongoing progress data into an anonymized pool for future study.

IBM is going to great lengths to acquire all the health data it can get its hands on, from partnering with Medtronic to make sense of diabetes and insulin data in real time, to buying out healthcare analytics company Truven Health for $2.6B.

Despite the tremendous deluge of healthcare data provided by the internet of things, the industry still seems to be experimenting with how to make sense of this information and make real-time changes to treatment. Scientists and patients alike can be optimistic that, as this trend of pooled consumer data continues, researchers will have more ammunition for tackling tough diseases and unique cases.

Drug Discovery

While much of the healthcare industry is a morass of laws and criss-crossing incentives among various stakeholders (hospital CEOs, doctors, nurses, patients, insurance companies, etc.), drug discovery stands out as a relatively straightforward economic value proposition for machine learning healthcare application creators. This application also deals with one relatively clear customer who generally happens to have deep pockets: drug companies.

IBM’s own health efforts have included drug discovery initiatives since their early days. Google has also jumped into the drug discovery fray and joins a host of companies already raising and making money by working on drug discovery with the help of machine learning.

We’ve covered drug discovery and pharma applications in greater depth elsewhere on Emerj. Many of our investor interviews (including our interview titled “Doctors Don’t Want to be Replaced” with Steve Gullans of Excel VM) feature a relatively optimistic outlook on the speed of innovation in drug discovery compared to many other healthcare applications (see our list of “unique obstacles” to medical machine learning in the conclusion of this article).

Robotic Surgery

The da Vinci robot has gotten the bulk of the attention in the robotic surgery space, and some would argue for good reason. The device allows surgeons to manipulate dextrous robotic limbs in order to perform surgeries in finer detail and in tighter spaces (and with less tremor) than would be possible by the human hand alone; videos of the system highlight the da Vinci’s incredible dexterity.

While not all robotic surgery procedures involve machine learning, some systems use computer vision (aided by machine learning) to identify distances or a specific body part (such as identifying hair follicles on the scalp in the case of hair transplantation surgery). In addition, machine learning is in some cases used to steady the motion and movement of robotic limbs when they take directions from human controllers.
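As a toy illustration of the “steadying” idea (and not how the da Vinci or any commercial system actually works), the sketch below smooths a simulated, tremor-laden control signal with an exponential moving average before it would be passed on to a robotic actuator.

```python
# Hypothetical sketch: damping simulated hand tremor with exponential smoothing.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 400)                                  # two seconds of control input
intended = 0.5 * np.sin(2 * np.pi * 0.5 * t)                # the surgeon's intended trajectory
raw_input = intended + 0.15 * rng.standard_normal(t.size)   # plus high-frequency tremor

# Exponential moving average: each output leans mostly on its previous (steadier) value.
alpha = 0.2
smoothed = np.empty_like(raw_input)
smoothed[0] = raw_input[0]
for i in range(1, raw_input.size):
    smoothed[i] = alpha * raw_input[i] + (1 - alpha) * smoothed[i - 1]

def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

print("RMS deviation from intended motion, raw:     ", round(rms(raw_input - intended), 3))
print("RMS deviation from intended motion, smoothed:", round(rms(smoothed - intended), 3))
```

A real system would rely on far more sophisticated filtering and learned motion models, but the principle of trading a little lag for a much steadier signal is the same.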

(Readers with a more pronounced interest in this topic might benefit from our full 2000-word article on robotic surgery.)

Future Applications

Below is a list of applications which are gaining momentum with the help of today’s funding and research focus.

Personalized Medicine

If your child gets their wisdom teeth pulled, it’s likely they’ll be prescribed a few doses of Vicodin. For a urinary tract infection (UTI), it’s likely they’ll get Bactrim. In the hopefully-not-too-distant future, few patients will ever get exactly the same dose of any drug. In fact, if we know enough about the patient’s genetics and history, few patients may even be prescribed the same drug at all.

The promise of personalized medicine is a world in which everyone’s health recommendations and disease treatments are tailored based on their medical history, genetic lineage, past conditions, diet, stress levels, and more.

While eventually this might apply to minor conditions (e.g. giving someone a slightly smaller dose of Bactrim for a UTI, or a completely unique variation of Bactrim formulated to avoid side effects for a person with a specific genetic profile), it is likely to make much of its initial impact in high-stakes situations (e.g. deciding whether or not to go into chemotherapy based on a person’s age, gender, race, genetic makeup, and more). We cover data-related personalized medicine issues in our article titled “Where Healthcare’s Big Data Comes From.”
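For a sense of what the underlying computation might look like, here is a deliberately simple, hypothetical sketch: a regression model maps a few invented patient features to a drug dose. Real personalized dosing would require far richer data, clinical validation, and regulatory approval; none of the numbers below reflect an actual guideline.

```python
# Hypothetical sketch: predicting a per-patient dose from simple features.
# The features, patients, and doses are invented for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

# [age_years, weight_kg, genetic_marker (0/1)] for past patients...
X = np.array([
    [25, 60, 0],
    [40, 82, 1],
    [67, 75, 0],
    [33, 95, 1],
    [58, 70, 1],
    [45, 88, 0],
])
# ...and the dose (mg) recorded as effective for each (invented numbers).
y = np.array([220, 340, 260, 390, 300, 330])

model = LinearRegression().fit(X, y)

new_patient = np.array([[50, 78, 1]])
print("Suggested starting dose (mg): %.0f" % model.predict(new_patient)[0])
```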

Automatic Treatment or Recommendation

In the diabetes video created by Medtronic and IBM (visible here), Medtronic’s own Hooman Hakami states that at some point, Medtronic wants its insulin pumps to work autonomously, monitoring blood-glucose levels and injecting insulin as needed, without disturbing the user’s daily life.

This, of course, is a microcosm of a much larger picture of autonomous treatment. Imagine a machine that could adjust a patient’s dose of painkillers or antibiotics by tracking data about their blood, diet, sleep, and stress. Instead of counting on distractible human beings to remember how many pills to take, a small kitchen-table machine learning “agent” (think Amazon’s Alexa) might dole out the pills, monitor how many you take, and call a doctor if your condition seems dire or you haven’t followed its directions.
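A heavily simplified, hypothetical version of that closed loop might look like the sketch below. The thresholds, correction formula, and alert rules are invented for illustration; this is not Medtronic’s algorithm and certainly not medical advice.

```python
# Hypothetical closed-loop sketch (not a real dosing algorithm; all numbers invented).
def control_step(glucose_mg_dl, target=110, sensitivity=50):
    """Return (insulin_units, alert) for one monitoring cycle."""
    if glucose_mg_dl < 70:
        return 0.0, "ALERT: hypoglycemia risk, notify clinician"
    if glucose_mg_dl > 300:
        return 0.0, "ALERT: severe hyperglycemia, notify clinician"
    if glucose_mg_dl > target:
        # Simple "correction dose": excess glucose divided by a sensitivity factor.
        return round((glucose_mg_dl - target) / sensitivity, 1), None
    return 0.0, None

# Simulated readings arriving from a continuous glucose monitor.
for reading in [95, 145, 210, 65]:
    dose, alert = control_step(reading)
    print(f"glucose={reading} mg/dL -> insulin={dose} units, alert={alert}")
```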

The legal constraints of putting so much power in the “hands” of an algorithm are not trivial, and like any other innovation in healthcare, autonomous treatments of any kind will likely undergo long trials to prove their viability, safety, and superiority to other treatment methods.

Improving Performance (Beyond Amelioration)

Orreco and IBM recently announced a partnership to boost athletic performance, and IBM set up a similar partnership with Under Armour in January 2016. While Western medicine has kept its primary focus on the treatment and amelioration of disease, there is a great need for proactive prevention and intervention, and the first wave of IoT devices (notably the Fitbit) is pushing these applications forward.

One can imagine that disease prevention or athletic performance won’t be the only applications of health-promoting apps. Machine learning may be implemented to track worker performance or stress levels on the job, as well as for seeking positive improvements in at-risk groups (not just relieving symptoms or healing after setbacks).

The ethical concerns around “augmenting” human physical and (especially) mental abilities are intense, and will likely become increasingly pressing over the coming 15 years as enhancement technologies become viable.

Autonomous Robotic Surgery

At present, robots like the da Vinci are mostly an extension of the dexterity and trained ability of a surgeon. In the future, machine learning could be used to combine visual data and motor patterns within devices such as the da Vinci in order to allow machines to master surgeries. Machines have recently developed the ability to model beyond-human expertise in some kinds of visual art and painting.

If a machine can be trained to replicate the legendary creative capacity of Van Gogh or Picasso, we might imagine that, with enough training, such a machine could “drink in” enough hip replacement surgeries to eventually perform the procedure on anyone, better than any living team of doctors. The IEEE has put together an interesting write-up on autonomous surgery that’s worth reading for those interested.

Closing Thoughts on Machine Learning in Healthcare

Diagnosis, treatment, and prevention are all huge problems that are based in part on plentiful data, and their improvement represents incalculable value. This is just the kind of thing that Silicon Valley should pounce on, right? Surely there is opportunity, but there are also unique obstacles in the medical field that aren’t always present in other domains:

  1. Stakeholdership is scattered: When you buy a Toyota Camry, it’s a transaction that satisfies your own needs. You buy it from Toyota, you enjoy its benefits, and you’re responsible for fixing and maintaining it. When a hospital brings on a new machine learning healthcare diagnostic device, who pays for it? Would patients pay a premium to be treated at hospitals with such devices? Would hospitals cover the cost in order to brag about better diagnostic tools and attract more patients? Would insurance cover the cost in some way? Doctors might like such a device if it improved diagnostic accuracy, but some patients may resent or not accept being treated by a machine. Similarly, some patients might rally for more machine learning diagnostic tools, but doctors or nurses fearing for their jobs might rally against their widespread adoption. If such a machine made an error (potentially a fatal one), at what point would we say that this was the responsibility of the machine manufacturer, and at what point would we say it was the fault of the doctors for not using it correctly? This is just the tip of the iceberg of the lattice of stakeholders in the medical domain, and it’s one of many reasons why innovation and change are sometimes difficult in the medical field.
  2. Security is tight: When you buy a meal at Wendy’s or a pair of jeans from GAP, you don’t need to give those companies much more than a wad of cash or a credit card. When you undergo diagnostic tests to determine the best way to treat your skin cancer, much more sensitive information must be collected by a healthcare provider. HIPAA (Health Insurance Portability and Accountability Act, passed by Congress in 1996) laws exist – among other reasons – to enforce Federal standards on any transmission of patient medical information. If you create an app to share pictures of food, you’ll have a lot less Federal red tape to slice through than if you create an app for diagnosing disease symptoms through blood tests. Sharing health data across hospitals, through mobile devices, or in other databases implies many unique challenges with HIPAA compliance.
  3. Medicine is more than math: A doctor is not simply an advanced “decision tree,” taking in data points and pumping out the most likely diagnosis. Doctors are assessing streams of information that machines today are either incapable of assessing or incapable of integrating into a “doctor-replicator” robot. Think about the look on a patient’s face, their gait and walk, what their family members say about their previous behavior (in addition to what they fill out themselves on an intake form), the smell of their breath, their level of nervousness (as expressed by body language and the subconscious), and the list goes on and on. Replacing an entire doctor – at least for general diagnosis and treatment – is unlikely any time soon. Innovators will have to find the chunks of these problems that they can actually solve, without biting off more than they can chew.
  4. A “black box” won’t do: Machine learning and deep learning (unlike stodgier AI approaches like expert systems) are unable to express why they achieved the result that they did. In some cases, this doesn’t matter. For Facebook, it isn’t completely necessary to know exactly why a ML program identified your face as your face in an image. If it successfully tagged you in an image, that is enough of a win. On the other hand, a patient who is being told that he/she must undergo chemotherapy is unlikely to accept the answer, “The machine learning algorithm said so, based on previous case data and your current condition.” This is one more reason that most doctors should not be shaking in their boots about getting replaced by machines in the next decade.

The above challenges are no reason to stop innovating, and I’m sure there are some clinicians who have their fingers crossed that more of the world’s data scientists and computer scientists will home in on improving healthcare and medicine.

At least when it comes to machine learning, it’s likely that useful and widespread applications will develop first in narrow use-cases – for example, a machine learning healthcare application that detects the percentage growth or shrinkage of a tumor over time based on image data from dozens or hundreds of X-ray images from various angles.
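That narrow use case reduces to simple arithmetic once a segmentation model (or a radiologist) has outlined the tumor in each image: compare the segmented areas across time points. The circular masks below are synthetic stand-ins for real segmentations.

```python
# Hypothetical sketch: percent change in tumor area between two segmented scans.
# The binary masks stand in for the output of a real segmentation model.
import numpy as np

def circular_mask(size, radius):
    """Synthetic 'tumor' segmentation: a filled circle in a size x size image."""
    yy, xx = np.ogrid[:size, :size]
    center = size // 2
    return (yy - center) ** 2 + (xx - center) ** 2 <= radius ** 2

baseline_mask = circular_mask(128, radius=20)   # earlier scan
followup_mask = circular_mask(128, radius=23)   # later scan

baseline_area = baseline_mask.sum()             # area in pixels
followup_area = followup_mask.sum()
percent_change = 100.0 * (followup_area - baseline_area) / baseline_area

print(f"Baseline area: {baseline_area} px, follow-up: {followup_area} px")
print(f"Estimated change in tumor area: {percent_change:+.1f}%")
```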

While machine learning might help with “suggestions” in a diagnostic situation, a doctor’s judgement would be needed in order to account for the specific context of the patient. A more narrow computer vision application, on the other hand, could easily beat out any human expert (assuming the model had enough training).

In addition, the Federal “red tape” of HIPAA may make the medical field more of a “Goliath” game as opposed to a “David” one. It seems plausible that some new social network could catch on with teenagers and beat out Snapchat and Facebook by virtue of its virality, marketing, and user interface.

Like Instagram, you might only need a dozen engineers and the right idea at the right time; however, it’s unlikely that a dozen engineers – even if they raised many tens of millions of dollars – would have the requisite industry connections and legal understanding to penetrate the deep layers of stakeholders in order to become a de facto medical standard. That labyrinth might involve more resources, connections, and know-how than any small Silicon Valley startup can muster, and more patience than most VCs can bear. It seems that a company like IBM or Medtronic might have a distinct advantage in medical innovation for just those reasons.

Healthcare-Related Machine Learning Interviews:

At Emerj, we’ve been fortunate enough to interview executives and researchers from some of the world’s most prominent universities and most exciting companies. Here is a sampling of some of our interviews that relate to ML and healthcare:

 

Header image credit: Healthable.org
