Is Deep Learning in Medical Imaging Coming to Real Life?

JUL 15, 2019

By Jayeeta Ghosh, Trace3 – Senior Data Scientist

Deep Learning is sweeping across many industries, with applications in all realms of life. Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on artificial neural networks, where learning can be supervised, semi-supervised, or unsupervised. Healthcare is certainly not shying away from Deep Learning technology, even though the industry has a reputation for being reluctant to adopt new technology across the board. Healthcare leaders maintain the highest expectations when it comes to applying technology to support the advanced care and services they provide when dealing with precious human life. With the advancement of sophisticated devices generating a wealth of data, award-winning algorithms, and accelerated compute power, this industry stands to see the greatest effect in serving humans in the not-too-distant future.

What’s happening with Deep Learning in Imaging

The Deep Learning revolution in image processing started with the use of the Convolutional Neural Network (CNN)1 at the 2012 ImageNet2 competition by Alex Krizhevsky, part of Professor Hinton's team. Krizhevsky and his team implemented a large CNN architecture, known as AlexNet3, on multiple graphics processing units (GPUs) to dramatically reduce the error in recognizing images; with five convolutional layers, it was larger than the state-of-the-art networks of the time. Following the competition, a multitude of CNN architectures were published in the literature: VGG4, ResNet5, and GoogLeNet6, to name a few. In 2014, Facebook7 claimed to achieve almost human-level accuracy (97.25%) in recognizing human faces. And the story continued8.
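The core building block of every architecture named above is the convolutional layer: a small learned filter slid across the image, producing a strong response wherever the underlying pattern appears. A minimal NumPy sketch of that operation (the filter here is a hand-picked edge detector rather than a learned one, purely for illustration):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image
    and take a weighted sum of the overlapped pixels at each position."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Tiny image whose right half is bright; a vertical-edge kernel
# responds most strongly at the dark-to-bright boundary.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[-1, 1],
                        [-1, 1]], dtype=float)
response = conv2d(image, edge_kernel)
print(response)  # peak responses line up with the 0 -> 1 edge
```

In a real CNN, hundreds of such kernels per layer are learned from data rather than hand-crafted, and deep learning frameworks replace the Python loops with GPU-optimized kernels, which is exactly why the GPU implementation of AlexNet mattered.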

It’s not just image recognition. Object detection, pose identification, and finding the positions of multiple objects within an image have also become very effective. Immediate application in security9 is imminent, leveraging Deep Learning image processing. Semantic segmentation (labeling every pixel of an image) and bounding-box detection in images and videos are key attributes of self-driving car research. Image processing using Deep Learning is also spreading to visual inspection and quality control in the manufacturing industry through workflow automation10.

Where are we now in the medical imaging space with Deep Learning

Medical imaging was one of the first fields to take advantage of GPUs: first for pure visualization, then for computational calculations, and currently for Deep Learning applications. Have you seen 3D ultrasound images of a baby? Those have been powered by GPUs since 2009. Over 70% of academic research in the medical imaging field is based on Deep Learning11. Image recognition, classification, and segmentation serve early disease detection, diagnosis, and ongoing treatment in the medical field. Deep Learning12 is being used on both 2D and 3D images from MRI, PET-MRI, X-ray, and ultrasound, with applications in neurological imaging, brain segmentation, stroke imaging, and cancers of the breast, lung, prostate, and kidney, among other uses.

In November 2017, Google AI researchers announced the publication13 of their study of a Deep Learning model for early detection of diabetic retinopathy, an eye disease that proliferates with diabetes and leads to blindness if untreated, and concluded that algorithms and physicians working together create the most accurate diagnosis for patients. Massachusetts General Hospital, one of the nation’s major clinical research institutes, reported14 in its May 2018 Radiology Newsletter how it is helping move the medical imaging field toward broader adoption of AI. Specifically, they discussed using diagnostic imaging for spinal back pain, CT scans for cancer patients, detection of diseased vessels in images of children’s eyes, and AI-assisted production of high-quality images. A recent NPR story reported15 on how a computer can be trained to read mammograms at the same level as a doctor, and a Stanford article16 discusses this technology specifically. Collaborations between technology enablers like IBM17, Google18, and Nvidia19 and universities and research organizations like Stanford, GE Health, and Massachusetts General Hospital are underway to aid broader AI adoption in the medical imaging field.

To give a specific example, the image to the right shows four axial MRI views of a brain tumor segmentation prediction made with Nvidia’s Clara AI transfer learning toolkit20. A sample brain MRI dataset was used to train a Deep Learning model that predicts the extent of the tumor. ITK-SNAP21, a visualization application for 3D medical imaging, was used to overlay the segmentation on the MRI images. The images show three classes: the whole tumor (WT) class includes all visible labels (a combination of the green, yellow, and blue labels), the tumor core (TC) class is the combination of blue and yellow, and the enhancing tumor (ET) class, the hyperactive part of the tumor, is shown in yellow. The predicted segmentation matches the actual tumor in the brain. This technology gives developers the tools to build, manage, and deploy Deep Learning-based imaging workflows in next-generation clinical settings. Applied in conjunction with expert radiologists, it can reduce manual labor and the number of false alarms and misdiagnoses. These tools are meant to be complementary, making radiologists’ and doctors’ lives easier.
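The nesting of the three classes described above (ET inside TC inside WT) is simple to express in code. A minimal NumPy sketch, assuming BraTS-style integer labels (a common convention where 1 = necrotic/non-enhancing core, 2 = edema, 4 = enhancing tumor; the tiny label array below is made up for illustration):

```python
import numpy as np

# Hypothetical slice of a segmentation label map, BraTS-style:
# 0 = background, 1 = necrotic/non-enhancing core, 2 = edema, 4 = enhancing.
labels = np.array([[0, 2, 2],
                   [2, 1, 4],
                   [0, 4, 0]])

# The three nested evaluation classes derived from the raw labels:
whole_tumor    = np.isin(labels, [1, 2, 4])  # WT: every tumor voxel
tumor_core     = np.isin(labels, [1, 4])     # TC: core + enhancing, no edema
enhancing_core = (labels == 4)               # ET: enhancing voxels only

print(whole_tumor.sum(), tumor_core.sum(), enhancing_core.sum())  # 6 3 2
```

Segmentation quality is then typically scored per class (e.g. with a Dice overlap coefficient between predicted and ground-truth masks), which is why the toolkit reports WT, TC, and ET separately.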

What’s holding us back

Even though Deep Learning has great potential in the medical imaging industry, direct adoption in clinical care is slower than expected. This isn’t a great surprise, as it’s mostly driven by HIPAA regulatory restrictions, fear of failure and patient safety risks, and legacy infrastructure. In addition, I see this technology as not yet mature enough to earn clinicians’ confidence, and I believe it’s often thought of as a black box. To overcome these challenges, a partnership is needed between healthcare companies and institutions across the globe, spanning multilingual data sources, secure data-processing technology, and interdisciplinary collaboration among clinicians and researchers. Deep Learning technology will not replace doctors in the foreseeable future, but it will improve the care and efficiency of the treatment doctors provide to patients.

One caveat of Deep Learning algorithms is that they require a tremendous amount of data from different imaging sources. Without enough data, the algorithms will perform poorly22 on unseen data, which reiterates the necessity of collaboration across the globe, albeit after overcoming all the regulatory issues. It is also likely that, in today’s so-called “fake” world, ill-intentioned people will create fake images and further erode the credibility of this technology. As with any new technology in the healthcare industry (for example, the introduction of ultrasound imaging to help doctors manage patient pregnancy), adoption is challenged by questions such as: is it safe, is it accurate, and is it ethical? We find ourselves in the same situation with AI adoption in medical imaging, working through a lot of myth-versus-fact checking to make sure the technology is sound and not error prone.
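Until broad data sharing materializes, one widely used partial workaround for small datasets (not discussed above, but standard practice in the field) is data augmentation: generating label-preserving variants of each training image. A minimal sketch with NumPy; whether a given transform is anatomically valid depends on the modality and anatomy, so the specific transforms here are illustrative assumptions, not a recommendation:

```python
import numpy as np

def augment(image, rng):
    """Return simple label-preserving variants of one 2D image:
    the original, a horizontal flip, a 90-degree rotation, and a
    copy with a small amount of additive Gaussian noise."""
    return [
        image,
        np.fliplr(image),                              # mirror left-right
        np.rot90(image),                               # rotate 90 degrees
        image + rng.normal(0.0, 0.01, image.shape),    # mild intensity noise
    ]

rng = np.random.default_rng(0)
scan = rng.random((64, 64))   # stand-in for one 2D image slice
batch = augment(scan, rng)
print(len(batch))             # 4 training samples from a single image
```

Production pipelines use richer, modality-aware transforms (elastic deformations, intensity shifts), but the principle is the same: stretch a scarce dataset further while the regulatory groundwork for real data sharing is laid.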

Long story short, Deep Learning in medical imaging is coming. It may not arrive today, but today we are preparing to embrace this technology tomorrow.






