According to Deloitte and the Economist, global annual health spending should reach $8.734 trillion by 2020, and, as mentioned in our previous report on AI for Healthcare in Asia, InkWood Research estimated the size of the artificial intelligence market in the healthcare industry at around $1.21 billion in 2016. As of now, numerous AI vendors claim to help healthcare professionals diagnose patients using machine vision. Other AI vendors claim to offer solutions for increasing adherence to drug therapy programs.
We researched the space to better understand where machine vision comes into play in the healthcare industry and to answer the following questions:
- What types of computer vision applications are currently in use in healthcare?
- What tangible results has computer vision driven in healthcare?
This report covers vendors offering software across three applications:
- Medical Imaging
- Reducing Attrition in Clinical Trials
- Monitoring Surgical Blood Loss
This article intends to provide business leaders in the healthcare space with an idea of what they can currently expect from computer vision in their industry. We hope it allows them to garner insights they can confidently relay to their executive teams and make informed decisions about AI adoption. At the very least, this article intends to reduce the time business leaders in healthcare spend researching AI companies they may or may not want to work with.
It should be noted that none of the companies listed in this report claim to offer diagnostic tools, but their software could help radiologists find abnormalities in patient scan images that could lead to a diagnosis when interpreted by a medical professional.
MaxQ AI is a US and Israel-based company with 23 employees. We covered the company in our report on Machine Learning for Healthcare Operations Software before it changed its name from MedyMatch. The company offers a device and accompanying software that it claims can help physicians identify rare anomalies in brain scans using machine vision.
MaxQ AI claims emergency room physicians can use the company’s software to identify anomalies in patient brain scans. Then, they can quickly suggest treatment options to patients and their families. For example, in the event of a stroke, patients recover faster and experience better outcomes if physicians administer aggressive, targeted treatment to the brain sooner.
A physician or radiologist can upload a patient’s brain scan into MaxQ AI’s software. Developers would have run millions of brain scans through MaxQ AI’s algorithm, training it to determine the 1’s and 0’s that might correlate to healthy regions of the brain and those that might correlate to anomalies. We can infer that the software alerts the physician or visually points out anomalous areas in the patient’s scan; however, we could not find a video demonstrating how the software works.
MaxQ AI does not make available any case studies reporting success with its software. The company is still in the process of obtaining FDA approval.
MaxQ AI claims to have partnered with Samsung, GE, and IBM. According to MaxQ AI’s CEO, Gene Saragnese, these partnerships may allow the company to reach up to three-quarters of the world’s hospitals with their software.
MaxQ AI does not list any prominent clients on their website, but it has raised $9 million in funding and is backed by Genesis Capital. Readers should note Genesis Capital was recently acquired by Goldman Sachs.
Gene Saragnese is CEO at MaxQ AI. He holds a BS in Mechanical Engineering from Rutgers University. Previously, Saragnese worked on numerous imaging projects for aerospace companies. He also served as Vice President of Molecular Imaging and Computed Tomography at GE Healthcare.
Microsoft offers software called InnerEye, which it claims can visually identify and display possible tumors and other anomalies in X-ray images.
Microsoft claims radiologists can upload three-dimensional patient scans into the software. The software could then generate area measurements for various parts of the organ or ligament shown in the scan. Then, InnerEye colors areas it believes contain tumors or other anomalies white. A physician could then pay closer attention to these white areas.
In order to do this, developers would have likely run millions of patient scans of various parts of the body labeled as containing a tumor through InnerEye’s algorithm. This would have trained the algorithm to discern the sequences and patterns of 1’s and 0’s that a human would interpret as a patient scan showing a tumor. InnerEye would then be able to point out tumors from patient scans that a physician uploads.
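As a rough illustration of the highlighting step described above, the sketch below marks suspicious pixels white in a toy 2D "scan" and reports their area. A fixed intensity threshold stands in for the trained model; the function name, the threshold, and the data are illustrative assumptions, not Microsoft's implementation.

```python
import numpy as np

def highlight_anomalies(scan, threshold=0.8):
    """Return a copy of the scan with suspicious pixels set to white (1.0),
    plus the area (pixel count) of the highlighted region.

    In a real system the per-pixel decision would come from a model trained
    on labeled scans; here a fixed intensity threshold stands in for it.
    """
    mask = scan > threshold          # "suspicious" pixels
    highlighted = scan.copy()
    highlighted[mask] = 1.0          # color them white for the radiologist
    return highlighted, int(mask.sum())

# Toy 4x4 "scan" with two bright pixels standing in for a possible tumor
scan = np.array([
    [0.1, 0.2, 0.1, 0.0],
    [0.2, 0.9, 0.95, 0.1],
    [0.1, 0.3, 0.2, 0.1],
    [0.0, 0.1, 0.1, 0.2],
])
highlighted, area = highlight_anomalies(scan)
print(area)  # 2 pixels flagged for closer attention
```

A real segmentation model would output a learned per-voxel probability rather than a raw intensity test, but the downstream step of coloring flagged regions and measuring their area would look much like this.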
Microsoft notes the technology is designed to assist radiologists, not to replace them entirely.
Below is a short 5-minute video demonstrating how InnerEye works:
Microsoft does not list any client hospitals on its InnerEye website; however, InnerEye is FDA-approved. Microsoft claims that InnerEye has been used in numerous clinical studies, including those dealing with brain tumor segmentation.
Antonio Criminisi is Principal Researcher on the InnerEye Project at Microsoft. He holds a PhD in Computer Vision from Oxford University. Criminisi has served as a principal researcher at Microsoft for 14 years.
We recently covered Arterys’ medical imaging software for radiologists in our report Machine Learning for Radiology.
Arterys’ machine vision software was reportedly trained to focus on detecting abnormalities in the heart, although the company claims its software is able to detect abnormalities in the lungs and liver to some degree. Arterys claims its software can create three-dimensional models of a patient’s heart on a radiologist’s computer screen.
Arterys claims hospitals can reduce the time radiologists spend scanning patients. For example, Arterys’ CardioAI, a software product under the ArterysAI umbrella, uses what the company calls 4D Flow. Arterys claims 4D Flow can be installed on a standard MRI machine.
4D Flow reportedly allows radiologists to see a three-dimensional image of a patient’s heart they can manipulate on a computer screen after an MRI scans a patient. The company claims these scans allow radiologists to gain a more genuine understanding of the patient’s heart without requiring time-consuming, invasive surgery.
It’s likely that developers ran millions of patient MR scans labeled as indicative of healthy and dysfunctional hearts through 4D Flow’s algorithm. This would have trained the algorithm to discern the sequences and patterns of 1’s and 0’s that, to the human eye, form the image of a healthy or dysfunctional heart as displayed in a patient MR scan.
A physician could then upload a patient MR scan that is not labeled, and the algorithm behind 4D Flow would, in theory, be able to determine whether or not the heart in the patient scan was healthy or dysfunctional. Anomalies within the patient’s heart are identified on a dashboard.
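The classification step described above can be sketched with a deliberately simple nearest-centroid model over synthetic "flow features." The feature names, values, and classifier are all illustrative assumptions; Arterys has not published how 4D Flow actually models scans.

```python
import numpy as np

# Synthetic, labeled "flow feature" vectors (e.g. ejection fraction, peak
# velocity) standing in for features derived from labeled MR scans.
healthy = np.array([[0.62, 1.1], [0.58, 1.0], [0.65, 1.2]])
dysfunctional = np.array([[0.30, 2.1], [0.35, 1.9], [0.28, 2.3]])

# "Training" here is just computing a centroid per class; a real system
# would fit a far more complex model to a large corpus of scans.
centroids = {
    "healthy": healthy.mean(axis=0),
    "dysfunctional": dysfunctional.mean(axis=0),
}

def classify(features):
    """Assign an unlabeled scan's features to the nearest class centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(features - centroids[c]))

label = classify(np.array([0.33, 2.0]))
print(label)  # "dysfunctional" -> flagged on the dashboard for review
```

The point of the sketch is the workflow, not the model: labeled examples define the classes, and an unlabeled scan is assigned to one of them and surfaced to the radiologist.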
Below is a short 3-minute video demonstrating 4D Flow’s visual scan display:
Arterys’ software was cleared by the FDA, and the company claims it has been validated in seven different peer-reviewed medical journals, including the Journal of Cardiovascular Magnetic Resonance. Arterys was one of the original companies to collaborate on the Siemens Healthineers Digital Ecosystem alongside Dell, SecondOpinion.com, and others in the field.
That said, Arterys does not list any major hospitals as clients on their website. The company has raised $43.7 million and is backed by Emergent Medical Partners and 14 other investors.
John Axerio-Cilies is co-founder and CTO at Arterys. He holds a PhD in Flow Physics and Computational Engineering from Stanford University. All other C-level executives at Arterys hold PhDs from Stanford University, with the exception of its CEO, Fabien Becker, who earned his PhD in Physics from the University of Cambridge. Such a roster bodes well for Arterys and lends credibility to its software.
Reducing Attrition in Clinical Trials
AiCure is a New York-based startup with 52 employees. The company offers software that it claims can help researchers monitor patients’ adherence to their prescribed treatments using machine vision. Specifically, AiCure claims it can help pharmaceutical researchers reduce the number of people who drop out of clinical trials, also known as attrition.
AiCure claims its software uses a phone app to monitor patients as they undergo treatment plans. Users are instructed to ingest drugs in front of the phone’s camera. While the camera is on the patient, the software identifies the patient using facial recognition technology and determines if the patient ingests their prescribed medication.
To do this, developers would have needed to run thousands of hours of footage showing people ingesting medication from various angles and in various lighting conditions through AiCure’s algorithm. This would have trained the algorithm to discern the sequences and patterns of 1’s and 0’s that a human would interpret as a video of someone taking medication.
In addition, once the algorithm has been trained on this initial dataset, AiCure may need to train its algorithm on its client’s medication. What this means is that a client research institute, for example, might send AiCure the pill it plans to use for a clinical trial. AiCure’s data scientists may then hold the pill up to a camera at various angles and lighting conditions.
AiCure’s algorithm would in theory then be able to determine what the client’s pill looks like when clinical trial participants ingest it. As a result, AiCure’s app would be able to detect if trial participants are in fact taking the client’s pill or not. It should be noted that we are inferring this step, but we believe that it’s likely.
When a trial participant does not take the prescribed medication, the researchers conducting the trial may be notified. This, however, is speculation.
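The speculative notification step above amounts to a simple check over dose-confirmation events. The sketch below assumes the vision system logs the dates on which it visually confirmed ingestion; all names and data are hypothetical, since AiCure has not published this workflow.

```python
from datetime import date

# Dose events the vision system might log: participant -> dates on which
# ingestion was visually confirmed. All names and data are hypothetical.
confirmed_doses = {
    "participant_01": {date(2024, 5, 1), date(2024, 5, 2)},
    "participant_02": {date(2024, 5, 1)},
}

def participants_to_notify(confirmed, trial_day):
    """Return participants with no confirmed ingestion on a given trial day,
    so researchers can follow up before the participant drops out."""
    return sorted(p for p, days in confirmed.items() if trial_day not in days)

missed = participants_to_notify(confirmed_doses, date(2024, 5, 2))
print(missed)  # ['participant_02']
```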
Below is a short 2-minute video demonstrating how AiCure’s software works:
AiCure claims its software’s use case for reducing attrition is clinically validated. According to AiCure, in one study, participants “achieved nearly 100% adherence and retention to the study.” However, the study involved only 17 participants. Generally speaking, researchers agree that studies should involve at least 30 participants in order to run statistical analyses with results that might generalize to a population.
Additionally, study participants were apparently provided a smartphone in order to use AiCure’s app for the purposes of the trial. It was unclear whether participants were able to use the smartphone for purposes other than AiCure’s app. The study followed a particularly vulnerable population: substance abusers seeking treatment for hepatitis C, with a median age of 51.
One could speculate on how access to the smartphone for other purposes may have affected participants’ adherence to their drug regimen more than AiCure’s app did. The reward of smartphone access for a vulnerable population may have been too great to truly discern how much of an effect the AI had on participants’ retention in the study. It should be noted, however, that the study was approved by an institutional review board, which lends some credence to the findings.
AiCure published another study in which it claims study participants adhered to their medication 95% of the time after using the company’s app. This study, however, was conducted exclusively by AiCure and Roche Pharma employees, and again, participants were provisioned smartphones. Business leaders should note that the study does not mention an institutional review board.
AiCure does not list any major companies as clients, but it has raised $27.3 million and is backed by Biomatics Capital Partners and Tribeca Venture Partners.
Isaac Galatzer-Levy is VP of Clinical and Computational Neuroscience at AiCure. He holds a PhD in Clinical Psychology from Columbia University and completed postdoctoral work in machine learning at NYU. Previously, he served as Principal Data Scientist and Clinical Product Specialist at Mindstrong.
Gauss offers software called Triton, which the company claims can help physicians monitor surgical blood loss using computer vision on an iPad.
Gauss claims physicians can hold up a used surgical sponge to an iPad running Triton. Then, Triton measures the surgical patient’s current blood loss and blood loss rate. It is likely that developers ran millions of images through Gauss’ algorithm showing surgical sponges in various states of bloodiness. This would have trained the algorithm to discern the sequences and patterns of 1’s and 0’s that, to the human eye, form the image of a sponge soaked in varying amounts of blood.
This would then allow a surgeon to hold up a bloody sponge to an iPad running Triton, and Triton would, in theory, determine how much blood is on the sponge. This information could then be used to determine how much blood the surgical patient lost prior to or during the surgery. However, it is unclear how Gauss’ algorithm determines a patient’s level of blood loss and the rate at which they are losing blood. The estimated blood loss is displayed on the mounted device for the physician to see.
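Since Gauss has not published how Triton maps pixels to blood volume, the sketch below shows one plausible, heavily simplified approach: estimate the fraction of "bloody" pixels in a sponge image and scale it by a per-sponge calibration constant. The pixel heuristic and the calibration value are illustrative assumptions.

```python
import numpy as np

# Calibration constant: estimated mL of blood per fully saturated sponge.
# This value and the pixel heuristic are illustrative assumptions; Gauss
# has not published how Triton maps pixels to blood volume.
ML_PER_SATURATED_SPONGE = 50.0

def estimate_blood_loss(sponge_rgb):
    """Estimate mL of blood on a sponge from the fraction of red-dominant
    pixels in an (H, W, 3) RGB image."""
    r, g, b = sponge_rgb[..., 0], sponge_rgb[..., 1], sponge_rgb[..., 2]
    bloody = (r > 120) & (r > g * 1.5) & (r > b * 1.5)  # crude "blood" test
    fraction = bloody.mean()
    return fraction * ML_PER_SATURATED_SPONGE

# Toy 2x2 image: two red pixels, two white pixels -> half saturated
sponge = np.array([
    [[200, 30, 30], [200, 40, 40]],
    [[250, 250, 250], [240, 240, 240]],
], dtype=np.uint8)
print(estimate_blood_loss(sponge))  # 25.0 mL
```

A production system would also need to correct for sponge size, lighting, and non-blood fluids, and summing estimates across sponges over time would yield the blood loss rate shown on the device.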
Below is a short 1-minute video demonstrating how Triton works:
Gauss does not list any marquee clients on its website, but Triton is FDA-approved, and the company has raised $51.5 million in venture funding. It is backed by Polaris Partners and Softbank Ventures Korea.
Gauss cites an independent study published in the American Journal of Perinatology. The study involved 2,781 participants, and it compared Triton to surgeons on their ability to determine how much blood C-section patients lost during the surgery. According to the study, Triton identified significant hemorrhages in C-section patients more frequently than the surgeons’ visual estimations. As a result, surgeons used less blood product for patients whose C-sections involved Triton than for those whose did not. Additionally, C-section patients whose surgeries involved Triton experienced shorter hospital stays.
Siddarth Satish is CEO and co-founder at Gauss. He holds an MS in Bioengineering from UC Berkeley and completed a fellowship in surgical simulation at Stanford.
Takeaways for Business Leaders in Healthcare
At this time, the most viable use case for computer vision in healthcare seems to be in radiology. AI-based radiology solutions are supported by C-level executives with PhDs in computer science or machine learning. This is one of the key signs that we look for when determining if a company is legitimate in claiming it offers an AI solution. AI solutions for radiology generally involve aiding radiologists in diagnosing diseases and conditions from X-ray, MR, or CT scans.
Of all the companies covered in this report, Gauss has secured the most venture funding. That said, the company’s CEO is the least credentialed in terms of academic experience with AI. Its use case is unique among AI vendors offering computer vision solutions for healthcare, which may concern business leaders who do not want to be a company’s “guinea pig,” so to speak. However, the company stands on robust clinical research, which lends its software significant credence.
It may be a red flag that none of the companies discussed in this report list marquee clients on their websites. Generally, we’ve found that business leaders want to know which companies like theirs have implemented an AI solution with positive results. These companies’ lack of case studies and client lists may rightfully make business leaders in healthcare wary of working with them.
Although clinical research is arguably much better than even the best vendor-written case study, it can be just as flawed or biased. Although it seems as if all of the companies discussed in this report legitimately offer AI, business leaders should always be skeptical of vendors’ claims about their use of it.
Header Image Credit: AVRSpot