Episode Summary: Recently, we were called upon by the World Bank to do a good deal of research on the potential of applying artificial intelligence to health data in the developing world. Diagnostics was a very big focus of the information that we presented. Diagnostics appears to be an area of great promise with regard to AI, and that’s what we’re focusing on in this episode of the podcast.
This week, we speak with Yufeng Deng, Chief Scientist of Infervision, a company that focuses on computer vision for medical diagnostics. We speak with Deng about the expanding capability of machine vision, including what kind of data one needs to collect and what is now possible with the technology.
In addition, Deng speaks about how Infervision found a business problem to solve using AI, and in doing so he provides transferable lessons for business leaders in a variety of industries.
Subscribe to our AI in Industry Podcast with your favorite podcast service:
Expertise: machine learning; programming languages and tools including C++, LaTeX, MATLAB, and Python
Brief Recognition: Deng earned his PhD in Biomedical Engineering from Duke University in 2017.
(03:30) What are the established use cases [for machine vision] where the value is really clear in terms of diagnostics in the chest region at present?
Yufeng Deng: I think lung nodule detection, or early lung cancer detection, was really a good starting point for us. When Infervision started about four years ago, we went to talk to a lot of radiologists, asking them what the most time-consuming task they do is. What is the biggest challenge, or the biggest area where you think AI could help you improve your efficiency? Not surprisingly, most of the answers came back as lung nodule detection. This is because a radiologist looking at a set of chest CTs, which could be 300 to 400 slices, typically spends 10 to 15 minutes on each set, and half of this time is used to look for early lung cancer or lung nodules.
Lung nodules refer to these tiny spots, typically less than three centimeters, round or irregular in shape, that could be solid or part-solid, and that could develop into lung cancer later on. So we call these lung nodules, which are the manifestation of early lung cancer.
Once we were good at lung cancer screening, or lung nodule detection, we obviously started thinking about what other things a radiologist looks at when he or she reads a chest CT scan. The immediate next thing we thought about was bone fractures, which are typically detected through chest CTs. Bone fractures can be caused by a lot of things: a traffic accident, or an accident at work. In a lot of these cases, you have to get a chest CT to detect the bone fractures and then use the report to go to your insurance and get appropriate reimbursement.
That makes bone fracture detection on chest CTs an important piece. On top of that, we’re also branching into chronic lung diseases, such as emphysema, as well as cardiac function-related diseases, such as coronary artery calcification. All of these things are typically read by a radiologist when he or she reads a chest CT series. We hope to expand our AI capabilities to cover the different types of findings the doctor is already going to look at, so that we can help them further.
(10:00) How does one go about collecting the information when the people who have that information often aren’t able to sit down and do a data labeling job for eight or 10 hours a day?
YD: I guess that’s also one of the advantages of being in China. We actually have a labeling team, where we recruit radiologists who come to our office to label for us on their off-clinical days. In addition, we have come up with a four-step labeling protocol to ensure our labeling is accurate.
Quality control is, I would say, the most important piece for a good AI model. So we have this four-step quality control process. In the first two steps, each image is labeled by at least two radiologists, separately and independently. In the third step, a judge, who is usually a more experienced radiologist, looks at the previous two annotations and makes the final decision if the two labelings don’t agree with each other. On top of that, we have a fourth step, which is a daily random check process.
We have a quality control specialist who pulls up random images from each day just to check the labeling on them. If he or she finds something wrong, we trace back to the radiologist who did the labeling and the judge on that image, and hopefully we improve the process if that image didn’t pass.
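Deng’s four-step protocol amounts to a consensus-plus-audit pipeline: two independent labels, adjudication only on disagreement, and a daily random spot check. As a rough sketch only, assuming labels can be compared for equality, and with all function and data names invented for illustration (this is not Infervision’s actual tooling):

```python
import random

def adjudicate(label_a, label_b, judge):
    """Steps 1-2: two radiologists label the same image independently.
    Step 3: a more experienced judge decides only when they disagree."""
    if label_a == label_b:
        return label_a
    return judge(label_a, label_b)

def daily_random_check(labels_by_day, sample_size, checker, seed=None):
    """Step 4: a QC specialist re-reads a random sample of each day's
    images and flags any whose labeling fails the check, so the team
    can trace back to the original labelers and the judge."""
    rng = random.Random(seed)
    flagged = []
    for day, records in labels_by_day.items():
        for record in rng.sample(records, min(sample_size, len(records))):
            if not checker(record):
                flagged.append((day, record))
    return flagged
```

The design mirrors the interview: agreement between the two independent labels short-circuits the judge, and the audit step samples rather than re-reads everything, keeping senior radiologists’ time focused on disagreements and spot checks.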
(13:30) When you look at the hospitals that are really taking what you folks are doing in medical imaging and diagnostics, working it into a workflow successfully, and actually improving their processes as a result, what do they have in common?
YD: I think the first point is workflow integration, because having a good AI model is not enough. In the medical world, you have to integrate your solution seamlessly into the workflow of the doctors and physicians, so that they can really use your tool to improve their efficiency, rather than your tool being an extra step for them, which could really decrease their efficiency. We have worked with a lot of hospitals and collaborators, and the first thing we talk to them about is how we integrate our tool into their medical system.
How do we integrate our software and hardware into their IT workflow? A big part of it is how we display our results. We could either use our own viewer to display all the results along with the images, or we could work with their PACS systems or their reporting systems through APIs, so that they can use their existing software to display the AI results. That way, they don’t have to open up new windows, which is one less burden on them.
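The two display options Deng describes, a standalone vendor viewer versus pushing results into the hospital’s existing PACS or reporting software via an API, boil down to a routing decision made per site. The sketch below is purely illustrative; the `Finding` structure, destination names, and `route_results` function are invented assumptions, not Infervision’s actual interface:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Finding:
    """A hypothetical AI detection result for one image series."""
    series_id: str
    finding_type: str   # e.g. "lung_nodule", "rib_fracture"
    confidence: float

def route_results(findings, use_existing_pacs):
    """Package findings either as a JSON payload for the hospital's
    PACS/reporting API, so radiologists stay inside the software they
    already use, or for the vendor's own standalone viewer."""
    payload = [asdict(f) for f in findings]
    if use_existing_pacs:
        # Serialize for the hospital's reporting system (endpoint hypothetical).
        return {"destination": "hospital_reporting_api",
                "body": json.dumps(payload)}
    # Otherwise display in the standalone viewer alongside the images.
    return {"destination": "vendor_viewer", "body": payload}
```

The point of the branch is the one Deng makes: integrating with the hospital’s existing display software removes an extra window from the radiologist’s workflow, at the cost of a per-site API integration.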
One interesting question that we actually spend a lot of time on with our customers is: how do we save our AI results? Do we save everything into their PACS, so that they have to explain to their referring physicians or their clinical departments why the AI made one decision and the human made a different one? Or do we not save anything, and let the humans make the final judgment? These are all questions we have to consider when we work with these hospitals. A lot of times, we provide options for the radiologists to save what they want, so that we don’t save additional information that raises questions they would have to explain to the patient or to their referring physicians.
(18:30) Is the training done [for these machine vision models or do you need to] pull their data into your system in any way?
YD: For the majority of our collaborators or users, we do not pull their data into our training. We have established in-depth collaborations with a few of our collaborators from early on, about four years ago, when we started this process. We have partnered with several major hospitals in China that are our strategic partners, where we take data from them and design products together with them. These types of relationships take time to foster. If we just go to a new site and we don’t know each other that well, they are probably not comfortable with us using their data, and we’re not comfortable taking data from them either.
Header Image Credit: Medical News Today