Unlike other industrial sectors, many healthcare systems and services accept the roadblocks in their technology adoption processes. Thoroughly vetting systems is far more critical for most forms of patient care than having the latest and greatest technology.
In a recent episode of Emerj’s AI in Business podcast, Duke University Health Systems Medical Director Dan Buckland revealed a surprising insight, putting a finer point on these healthcare trends: Only about 10-15% of the challenges in implementing AI involve the technology itself, in his view, while 80-85% relate to how end users adapt to and utilize these technologies.
Further underscoring these challenges, guidelines from the National Institutes of Health discuss several implementation challenges related to AI in healthcare, categorized into six key areas: ethical barriers, technological barriers, liability and regulatory barriers, workforce barriers, social barriers, and above all, patient safety barriers. Each category presents its own set of complexities that healthcare organizations must navigate.
The article also emphasizes that understanding these barriers is crucial for healthcare leaders to facilitate the successful incorporation of AI technologies into clinical practice for improved patient outcomes.
Emerj Senior Editor Matthew DeMello recently spoke with Dr. Dan Elton, Staff Scientist at the National Human Genome Research Institute under the National Institutes of Health, on the ‘AI in Business’ podcast to discuss the challenges and potential of AI adoption in healthcare. Throughout their discussion, Dr. Elton’s views touch on a wide range of impediments to how healthcare systems adopt technology, focusing on FDA regulations, the limited commercial success of AI products, and the shift toward more comprehensive multimodal models for better diagnostic capabilities.
At the time Emerj recorded this podcast with Dan, he was working at Mass General Brigham, but he has recently started a new position at the National Institutes of Health. The National Institutes of Health is the primary agency of the U.S. government responsible for conducting and supporting biomedical research to improve public health and develop treatments for diseases.
In the following analysis of their conversation, we examine two key insights for healthcare and tech leaders:
- Targeted AI solutions find success despite slow adoption: AI adoption in healthcare remains limited, but focused applications like automated billing and scribe task automation are successfully reducing costs and streamlining workflows, paving the way for broader integration in the future.
- Narrow AI models need stronger value propositions for hospitals: Most FDA-approved AI solutions focus on diagnosing single diseases, which offers limited value for hospitals and does not justify the cost and effort required for implementation.
Listen to the full episode below:
Guest: Dr. Dan Elton, Staff Scientist, National Institutes of Health
Expertise: Artificial Intelligence, Deep Learning, Computational Physics
Brief Recognition: Dr. Dan Elton is currently a Staff Scientist at the National Human Genome Research Institute under the National Institutes of Health. Previously, he worked at Mass General Brigham, where he oversaw the deployment and testing of AI systems in the radiology clinic. He earned his Doctorate in Physics from Stony Brook University in 2016.
Targeted AI Solutions Find Success Despite Slow Adoption
Dan opens the conversation by highlighting that, contrary to the perception that AI is widely used in medicine, its actual adoption is quite limited, primarily because the healthcare sector is slow to integrate new technologies.
While AI is making some inroads in areas like radiology, its overall usage remains minimal in Dr. Elton’s view. Many doctors are eager to leverage AI to alleviate their heavy workloads and streamline processes. However, the current reality shows that meaningful implementation still lags far behind that eagerness in the medical field.
He then draws on his extensive experience in radiology to note that radiologists are overworked, typically spending only 10 to 15 minutes on average per study, which limits their ability to analyze the substantial amount of data in medical images.
He points out that the increasing complexity and volume of images due to advancements in scanning technology further exacerbate the challenge of keeping up with their workload. Additionally, the transition to electronic health records (EHR) has added to their burden, as it requires significant data entry, leading to the emergence of medical scribes to help manage this work. However, even with scribes, the workload remains considerable for radiologists.
Dan further discusses two notable applications of AI in the medical field:
- Automated billing process
- Automation of medical scribe tasks
For the automated billing process:
- Dan mentions that CodaMetrix has developed a solution that automatically determines the best billing codes based on electronic health records (EHR) and doctors’ notes.
- He notes the use case has been deployed successfully at Mass General Brigham, resulting in significant cost savings and a reduction in the number of insurance denials. The technology is characterized as “old school AI” and utilizes specialized algorithms.
For the automation of medical scribe tasks:
- There is potential for automating the tasks performed by medical scribes using advanced large language models, such as ChatGPT. Microsoft’s acquisition of Nuance Communications, a leader in medical transcription, aims to integrate large language models with text-to-speech and speech-to-text technologies.
- Deploying LLMs in these areas could automate transcription and EHR form filling, thereby streamlining workflows and alleviating administrative burdens. Most notably, the application may not require FDA approval since it doesn’t involve direct patient diagnosis or high-risk situations.
Narrow AI Models Lack Strong Value Proposition for Hospitals
Dan goes on to mention that around 1,000 AI software products have received FDA approval, with most approvals occurring in the last few years. However, he points out that despite this approval, many of these products have yet to achieve commercial success.
He implies that the benefits or value propositions of these AI solutions are not yet compelling enough to drive widespread adoption in the market, leading to a lack of commercial viability.
“The value proposition for the hospitals is not very big. Because they only do a single task, some of these may diagnose a single disease, it may not warrant a hospital paying for it and then installing it. So what’s going on is that there are just a few AI systems, which I’ve seen, that have found widespread use, mostly for triage — basically taking a quick look at a medical scan and determining if there’s something that needs urgent attention. For instance, a brain hemorrhage. A lot of hospitals are deploying AI for that purpose, but less for traditional diagnoses.”
–Dr. Dan Elton, Staff Scientist at the National Institutes of Health
Looking to the future, Dr. Elton believes that large multimodal models will be more effective because they can diagnose multiple conditions and act as a backup or second reader for medical images. He envisions a shift away from numerous single-purpose models toward more comprehensive multimodal systems, which could enhance diagnostic capabilities and streamline processes in healthcare.
He also tells Emerj that his perspective on AI in radiology has significantly changed over the past year. He initially believed AI for radiology was overhyped, citing an article he wrote in 2022.
However, the release of ChatGPT and the GPT-4 language model with vision capabilities made him, and many others, reconsider. Now, there’s growing interest in taking foundation models like GPT-4 and fine-tuning them on hospital-specific data, such as imaging and doctors’ notes — data that hospitals are realizing is extremely valuable.
Towards the end of the podcast, Dan explains an interesting aspect of FDA regulation when it comes to AI usage in hospitals. He mentions that if a hospital develops its own AI system and uses it solely in-house, it does not require FDA approval because it falls under “local practice,” which the FDA cannot regulate.
The FDA only steps in if the hospital wants to sell the AI solution externally. Such regulatory practices create a loophole allowing hospitals to use advanced AI models like GPT-4 without needing FDA approval, provided it’s for internal use only.
However, Dan notes that hospitals must take full responsibility for due diligence and ensure the AI is functioning correctly, especially if it’s being used for diagnostic purposes or in areas involving some level of risk. He emphasizes that hospitals need skilled data scientists and possibly new departments to validate and monitor the AI’s performance. This setup is crucial to leverage AI in a high-stakes environment like healthcare safely.
“There is one thing I could point out – radiology is relatively the easiest place right now where you can deploy AI because everything has been digitized. We’re just talking about using images and text, and basically taking that existing data and feeding it into an AI. In other areas, there’s going to be some work that probably needs to be done, a lot more work. For instance, pathology departments may not even have digital images; they may just be using old-school light microscopes.
The same goes for doctors’ clinics or physical examination rooms — they may not be equipped with the necessary technology to integrate AI. So, there’s still a lot of work that needs to be done, and it will likely take longer than many may expect.”
–Dr. Dan Elton, Staff Scientist at the National Institutes of Health