Deep Learning Applications in Medical Imaging

Abder-Rahman Ali

Abder-Rahman Ali is a PhD candidate in artificial intelligence at the University of Stirling, UK. He has extensive experience with machine vision applications for medical imaging.


In 1895, the German physicist Wilhelm Röntgen showed his wife Anna an X-ray of her hand. “I have seen my death,” she said. Medical imaging broke paradigms when it first began more than 100 years ago, and the deep learning medical applications that have evolved over the past few years seem poised to once again take us beyond our current reality and open up new possibilities in the field.

As this heatmap of funding deals shows, artificial intelligence (AI) deals in imaging and diagnostics peaked in 2015 and have held steady since. One-third of healthcare AI startups that have raised venture capital since January 2015 are working on imaging and diagnostics, and 80 percent of the sector’s funding deals have taken place since then. For instance, Enlitic, a startup that uses deep learning for medical image diagnosis, raised $10 million in funding from Capitol Health in 2015.

IBM researchers estimate that medical images currently account for at least 90 percent of all medical data, making them the largest data source in the healthcare industry. The volume is overwhelming on a human scale when you consider that radiologists in some hospital emergency rooms are presented with thousands of images daily. New methods are thus required to extract and represent data from those images more efficiently.

Though medical imaging was actually one of the earliest healthcare applications of machine learning, it is only recently that deep learning algorithms able to learn from examples and prior knowledge have been introduced. We haven’t yet arrived at scale, but such technologies are bringing society closer to quicker, more accurate diagnoses via deep learning-based medical imaging.

Current Deep Learning Medical Applications in Imaging

The list below provides a sample of ML/DL applications in medical imaging. Though by no means complete, it gives an indication of the wide-ranging impact ML/DL is having on the medical imaging industry today.

Tumor Detection

Over 5 million cases of skin cancer are diagnosed each year in the United States. It is the most commonly diagnosed cancer in the nation, and its treatment costs the U.S. healthcare system over $8 billion annually.

Melanoma (the deadliest form of skin cancer) is highly curable if diagnosed early and treated properly, with survival rates varying between 15 percent and 65 percent from terminal to earlier stages respectively. Proper treatment can even produce a 5-year survival rate of over 98 percent.

One of the most promising near-term applications of automated image processing is in detecting melanoma, says John Smith, senior manager for intelligent information systems at IBM Research. To detect the tumor, the DL algorithm learns important features related to the disease from a group of medical images and then makes predictions (i.e. detection) based on that learning.
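To make that workflow concrete, here is a minimal, hypothetical sketch in PyTorch of how such a detector could be trained on labeled skin images. The folder layout, backbone, and hyperparameters are illustrative assumptions, not the pipeline of IBM, Enlitic, or any other company mentioned in this article.

```python
# Minimal sketch (PyTorch): train a small CNN to flag melanoma in skin images.
# The folder layout, backbone, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: images/train/benign/*.jpg and images/train/melanoma/*.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("images/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone; replace the final layer with a
# single logit for benign-vs-melanoma.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images).squeeze(1), labels.float())
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```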

Enlitic, the medical imaging company referenced earlier, is considered an early pioneer in using DL for tumor detection, and its algorithms have been used to detect tumors in lung CT scans. Jeremy Howard, CEO of Enlitic, says his company was able to create an algorithm capable of identifying relevant characteristics of lung tumors with a higher accuracy rate than radiologists.

Deep learning algorithms require a lot of data, and the recent influx of data is one of the primary reasons machine and deep learning have been put back on the map over the last half decade. Yet a lack of medical image data in the wider field is one barrier that still needs to be overcome. IBM was aware of this issue when it acquired Merge Healthcare, a company that helps hospitals store and analyze medical images, for $1 billion in 2015. IBM has articulated its plans (see video below) to train Watson on Merge’s collection of 30 billion images in order to help doctors with medical diagnosis.


Tracking Tumor Development

Medical imaging can also be used for non-invasive monitoring of disease burden and effectiveness of medical intervention, allowing clinical trials to be completed with smaller subject populations and thus reducing drug development costs and time. For instance, Capecitabine (also known as Xeloda), a drug used for breast cancer, was approved in 1998 on the basis of tumor shrinkage on CT scans after a trial of only 162 patients.

The DL pipeline first identifies candidate regions with proliferative activity in extracted tissue, often represented as the edges of a tissue abnormality. The algorithm then generates tumor probability heatmaps, which show overlapping tissue patches classified by tumor probability. Such images provide informative data on tumor features such as shape, area, density, and location, facilitating the tracking of tumor changes.
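A minimal sketch of this patch-based heatmap idea, assuming an already-trained binary patch classifier (the `patch_model` callable here is hypothetical) and an RGB tissue image as a NumPy array:

```python
# Sketch: build a tumor-probability heatmap by sliding a patch classifier over
# a tissue image. `patch_model` is a hypothetical, already-trained callable that
# returns P(tumor) for one patch; overlapping predictions are averaged per pixel.
import numpy as np

def tumor_heatmap(image, patch_model, patch=128, stride=64):
    """image: H x W x 3 array. Returns an H x W map of tumor probabilities."""
    h, w, _ = image.shape
    prob_sum = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            p = patch_model(image[y:y + patch, x:x + patch])  # scalar in [0, 1]
            prob_sum[y:y + patch, x:x + patch] += p
            counts[y:y + patch, x:x + patch] += 1
    counts[counts == 0] = 1  # borders the sliding window never reached
    return prob_sum / counts
```

Thresholding such a map gives an estimate of tumor area and shape, and comparing maps from scans taken weeks apart is one way changes in a tumor can be tracked.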

Researchers at the Fraunhofer Institute for Medical Image Computing (MEVIS) unveiled a new tool in 2013 that employs DL to detect changes in tumor images, enabling physicians to determine the course of cancer treatment. “The software can, for example, determine how the volume of a tumor changes over time and supports the detection of new tumors,” said Mark Schenk from Fraunhofer MEVIS. Such an approach also has the potential to enable automated progress monitoring.

Blood Flow Quantification and Visualization

Magnetic resonance imaging (MRI) allows for the non-invasive visualization and quantification of blood flow in human vessels, without the use of contrast agents. When MRI scanners became more widely available in the 1980s, they enabled much more accurate evaluation of the impact of cardiovascular pathologies on local and global changes in cardiac hemodynamics.
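On the quantification side, the arithmetic is straightforward once a velocity-encoded (phase-contrast) MR image and a vessel region have been segmented. The sketch below uses synthetic data and hypothetical names to show how a volumetric flow rate falls out of a velocity map; it is an illustration of the general principle, not any vendor's method.

```python
# Sketch: quantify volumetric blood flow from a velocity-encoded (phase-contrast)
# MR image. `velocity` is an assumed 2D array of through-plane velocities in cm/s
# for one cardiac phase; `vessel_mask` is a boolean region over the vessel lumen.
import numpy as np

def volumetric_flow_ml_per_s(velocity, vessel_mask, pixel_area_cm2):
    """Flow (mL/s) = sum over the vessel cross-section of velocity * pixel area."""
    return float(np.sum(velocity[vessel_mask]) * pixel_area_cm2)

# Synthetic example: a 4 cm^2 lumen moving uniformly at 20 cm/s -> 80 mL/s.
velocity = np.full((64, 64), 20.0)            # cm/s
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True                     # 1024 pixels in the region
print(volumetric_flow_ml_per_s(velocity, mask, pixel_area_cm2=4.0 / 1024))
```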

Arterys, a DL medical imaging technology company, recently partnered with GE Healthcare to combine its quantification and medical imaging technology with GE Healthcare’s magnetic resonance (MR) cardiac solutions. Arterys’ system enables a much more efficient visualization and quantification of blood flow inside the heart, alongside a comprehensive diagnosis of cardiovascular disease. 

Arterys’ DL software techniques have made it possible for cardiac assessments on GE MR systems to occur in a fraction of the time of conventional cardiac MR scans. The video below demonstrates Arterys’ system:


Medical Interpretation

The benefits of a medical imaging test rely on both image quality and interpretation quality, with the latter mainly handled by the radiologist; however, interpretation is prone to error and can be limited, since humans are subject to factors like fatigue and distraction. This is one reason patients sometimes receive different interpretations from different doctors, which can make choosing a plan of action a stressful and tedious process.

Metathesaurus (a large biomedical thesaurus) and RadLex (a unified language of radiology terms) can be used to detect disease-related terms in radiology reports. A DL algorithm is then trained on the associated medical images to detect the presence or absence of those diseases, helping doctors come up with better interpretations.
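A toy sketch of that report-mining step follows, using a tiny hand-picked term list as a stand-in for the Metathesaurus/RadLex vocabularies; real pipelines use far richer term matching and negation handling.

```python
# Toy sketch: mine weak disease labels from free-text radiology reports. The
# tiny term list below stands in for Metathesaurus/RadLex vocabularies, and the
# negation handling is deliberately crude.
import re

DISEASE_TERMS = {
    "cardiomegaly": ["cardiomegaly", "enlarged heart"],
    "pleural_effusion": ["pleural effusion"],
    "pneumonia": ["pneumonia", "consolidation"],
}

# Drop negated clauses such as "no pleural effusion or pneumonia".
NEGATION = re.compile(r"\b(no|without|negative for)\b[^.]*")

def label_report(report_text):
    """Return {disease: 0 or 1} labels mined from one report."""
    positive_text = NEGATION.sub("", report_text.lower())
    return {
        disease: int(any(term in positive_text for term in terms))
        for disease, terms in DISEASE_TERMS.items()
    }

print(label_report("Mild cardiomegaly. No pleural effusion or pneumonia."))
# -> {'cardiomegaly': 1, 'pleural_effusion': 0, 'pneumonia': 0}
```

The mined labels are then attached to the corresponding images, which is what allows an image-level DL classifier to be trained without hand-labeling every scan.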

Lunit, a South Korean startup established in 2013, uses its DL algorithms to analyze and interpret X-ray and CT images. Lunit’s system is able to provide interpretations in 5 seconds and with 95 percent accuracy, an achievement that has attracted investments of $2.3 million through international startup incubation programs in just 3 years.

Another South Korean startup, Vuno, established in 2014, is also helping doctors with medical image interpretation. Vuno uses its ML/DL technology to analyze patient imaging data and compare it against a lexicon of already-processed medical data, letting doctors assess a patient’s condition more quickly and make better decisions. The startup’s co-founders, who met while working at Samsung, realized that their machine learning experience could be applied to a more pressing problem: “Helping doctors and hospitals to combat disease by putting medical data to work.”


Vuno’s system at work

Another application that goes hand-in-hand with medical interpretation is image classification. For example, after spotting a lesion, a doctor has to decide whether it is benign or malignant and classify it as such. On this front, Samsung is applying DL in ultrasound imaging for breast lesion analysis.

“Users can reduce taking unnecessary biopsies and doctors-in-training will likely have more reliable support in accurately detecting malignant and suspicious lesions,” said Professor Han Boo Kyung, a radiologist at Samsung Medical Center. Samsung’s system analyzes a large number of breast exam cases and provides the characteristics of the displayed lesion, also indicating whether the lesion is benign or malignant.

Diabetic Retinopathy

Diabetic retinopathy (DR) is considered the most severe ocular complication of diabetes and is one of the leading and fastest-growing causes of blindness throughout the world, with around 415 million diabetic patients at risk worldwide. Data from the National Health Interview Survey and the US Census Bureau have led to projections that the number of Americans 40 years or older with DR will triple from 5.5 million in 2005 to 16 million in 2050.

As with many debilitating diseases, DR can be treated effectively if detected early. A 2016 study by a group of Google researchers, published in the Journal of the American Medical Association (JAMA), showed that their DL algorithm, trained on a large fundus image dataset, was able to detect DR with more than 90 percent accuracy.

The approach in the study trains a deep neural network (a mathematical function with millions of parameters) to compute diabetic retinopathy severity from the intensities of the pixels (picture elements) in a fundus image, eventually yielding a general function that can compute DR severity for new images.
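As an illustration of the inference step, here is a hedged sketch of grading a new fundus photograph with an already-trained network. The five-grade scale, the 299-pixel input size, and the model interface are assumptions for illustration, not the exact setup reported in the JAMA paper.

```python
# Sketch of the inference step: grade a new fundus photo with an already-trained
# network. The five-grade scale, 299-pixel input, and model interface are
# illustrative assumptions.
import torch
from PIL import Image
from torchvision import transforms

GRADES = ["none", "mild", "moderate", "severe", "proliferative"]

preprocess = transforms.Compose([
    transforms.Resize((299, 299)),  # Inception-style input size
    transforms.ToTensor(),
])

def grade_fundus_image(model, path):
    """Return (grade name, per-grade probabilities) for one fundus photograph."""
    pixels = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(pixels), dim=1).squeeze(0)
    return GRADES[int(probs.argmax())], probs.tolist()
```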

One of the things Google is currently working on with participating hospitals in India is implementing DL-trained models at scale, a contained trial in a grander effort to help doctors worldwide detect DR early enough for effective treatment.


Google’s CEO, Sundar Pichai, talking about DR at the Google I/O 2016 event (at 4:57)

The Future

Search recent Quora and Reddit threads and you’ll find that people seem concerned about the possibility of radiology being disrupted by DL. Yet many experts express optimism about the possibilities for DL-based solutions in the medical imaging field. Dr. Bradley Erickson of the Mayo Clinic in Rochester, Minnesota, believes that most diagnostic imaging in the next 15 to 20 years will be done by computers. But he believes that instead of taking radiologists’ jobs, DL will expand their roles in predicting disease and guiding treatment.

“I’m concerned that some people may dig in their heels and say, ‘I’m just not going to let this happen.’ I would say that noncooperation is also counterproductive, and I hope that there’s a lot of physician engagement in this revolution that’s happening in deep learning so that we implement it in the most optimal way,” Erickson said. Dr. Nick Bryan, an Emeritus Professor of Radiology at Penn Medicine, seems to agree with Erickson, predicting that within 10 years no medical imaging exam will be reviewed by a radiologist until it has been pre-analyzed by a machine.

One of the most revolutionary future applications of DL would be in combatting most types of cancer.

Robert S. Merkel, Oncology and Genomics Global Leader at IBM Watson Health, discusses how IBM Watson will fight cancer

As part of this effort in the ‘war on cancer’, Google DeepMind has partnered with the UK’s National Health Service (NHS) to help doctors treat head and neck cancers more quickly with DL technologies. The research is being conducted in coordination with University College London Hospital.

Closing Thoughts on Deep Learning in Medical Imaging

In 2011, IBM Watson won against two of Jeopardy’s greatest champions. In 2016, AlphaGo, a computer program developed by Google DeepMind to play the board game Go, defeated Lee Se-dol, considered one of the strongest human Go players in the world.

While games function as important labs for testing DL technologies, IBM Watson and Google DeepMind have both carried over such solutions into the healthcare and medical imaging domains. It seems likely that as the technology develops further, many companies and startups will join bigger players in using ML/DL to help solve different medical imaging issues. Big vendors like GE Healthcare and Siemens have already made significant investments, and recent analysis by Blackford shows 20+ startups are also employing machine intelligence in medical imaging solutions.

While the potential benefits are significant, so are the initial efforts and costs, which is reason for big companies, hospitals, and research labs to come together to solve big medical imaging problems. IBM Watson, for instance, is partnering with more than 15 hospitals and companies that use imaging technology in order to learn how cognitive computing can work in the real world, a service Watson Health is expected to launch in 2017.

GE has also announced a 3-year partnership with UC San Francisco to develop a set of algorithms that help its radiologists distinguish between a normal result and one that requires further attention. This effort is in addition to another GE partnership, with Boston Children’s Hospital, to create smart imaging technology for detecting pediatric brain disorders.

There are, and will remain, debates about radiology disruption and what it means for the future roles of medical practitioners; however, the potential benefits of applying deep learning to detecting and combating disease and cancer seem likely to outweigh the foreseeable costs.

