Deep Learning is Outperforming Humans in Diagnostics
The deep learning computers in a diagnostic imaging lab routinely outperform their human counterparts in diagnosing heart failure, detecting various cancers and predicting how aggressive those cancers will be.
But scientists dismiss any notion that such machines might someday replace pathologists and radiologists.
“There’s initially always going to be some wincing and anxiety among pathologists and radiologists over this idea — that our computational imaging technology can outperform us or even take our jobs,” says Madabhushi, an F. Alex Nason Professor II of biomedical engineering at the Case School of Engineering.
Since 2016, he and his team have received over $9.5 million from the National Cancer Institute to develop computational tools for analyzing digital pathology images of breast, lung, and head and neck cancers to identify which patients with these diseases could be spared aggressive radiotherapy or chemotherapy.
“It’s not so much that we were able to ‘beat’ the pathologist or the radiologist, but rather that the machine was able to add value to what they can offer,” he continues. “There is a desperate need for better decision-support tools that allow them to serve patients, especially in places where there are very few pathologists or radiologists.”
“By providing them with decision support, we can help them become more efficient. For instance, the tools could help reduce the amount of time spent on cases with no obvious disease or obviously benign conditions and instead help them focus on the more confounding cases.”
1. The computational imaging system in Madabhushi’s lab predicted with 97% accuracy which of 105 patients were already showing evidence of impending heart failure. By comparison, two pathologists were correct 74% and 73% of the time, respectively. The results were recently published in the journal PLOS ONE.
2. Madabhushi and co-investigators show that while human radiologists may flag up to half of all nodules that show up in a CAT scan as “suspicious” or “indeterminate,” about 98% of those nodules actually turn out to be benign. In a recent study published in the Journal of Medical Imaging, Madabhushi and his group showed that their computational imaging technique was 5 to 8 percent more accurate than two human experts at distinguishing benign from malignant lung nodules on CAT scans.
3. In an international study of prostate cancer scans in the U.S., Finland and Australia, the computational imaging algorithms outperformed their human counterparts in two ways, detailed in a study recently published in the Journal of Magnetic Resonance Imaging. In more than 70% of cases where radiologists missed the presence of clinically significant prostate cancer on a magnetic resonance imaging (MRI) scan, the machine algorithm caught it. In half of the cases where radiologists mistakenly identified the presence of clinically significant prostate cancer on an MRI scan, the machine was able to correctly identify that no clinically significant disease was present.
“This is all very exciting data for us, but now we need more validation and to demonstrate these results on larger cohorts,” Madabhushi said. “But we really believe this is more evidence of what computational imaging of pathology and radiology images can do for cardiovascular and cancer research and practical use among pathologists and radiologists.”
So, what exactly are these computers doing that humans can’t — and that creates such a wide margin in diagnostic success?
The short answer is the same one that explains virtually every computing advantage of the last half-century: the machines work at far greater speed and volume.
The precise difference here is that the diagnostic imaging computers at the CCIPD can read, log, compare and contrast literally hundreds of slides of tissue samples in the amount of time a pathologist might spend on a single slide.
Then, they rapidly and completely catalogue characteristics like texture, shape and structure of glands, nuclei and surrounding tissue to determine the aggressiveness and risk associated with certain diseases.
This is where the ‘deep learning’ comes in: from all of that data, the systems build algorithms that can look beyond what the human eye can see when comparing and contrasting those multitudes of images. Finally, the researchers are working toward predicting everything from how aggressive a disease is going to be to whether a scanned nodule is likely to turn cancerous at all.
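The pipeline described above — extract quantitative features such as texture and shape from image patches, then classify them — can be sketched in miniature. This is purely illustrative and not the CCIPD software: the features (mean intensity, variance as a crude texture proxy, dark-pixel fraction as a crude nuclear-density proxy), the nearest-centroid classifier, and the synthetic "benign"/"suspicious" data are all assumptions chosen for brevity; real systems use far richer features and learned deep networks.

```python
import numpy as np

def extract_features(patch):
    """Toy feature vector for a grayscale patch (values in [0, 1]):
    mean intensity, variance (a crude texture proxy), and the
    fraction of dark pixels (a crude nuclear-density proxy)."""
    patch = np.asarray(patch, dtype=float)
    return np.array([patch.mean(), patch.var(), (patch < 0.5).mean()])

def train_centroids(patches, labels):
    """Nearest-centroid 'classifier': one average feature vector per class."""
    feats = np.array([extract_features(p) for p in patches])
    labels = np.array(labels)
    return {c: feats[labels == c].mean(axis=0) for c in set(labels)}

def predict(patch, centroids):
    """Assign the class whose centroid is closest in feature space."""
    f = extract_features(patch)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

# Synthetic data: 'benign' patches are bright and uniform,
# 'suspicious' patches are darker and noisier.
rng = np.random.default_rng(0)
benign = [rng.normal(0.8, 0.05, (16, 16)) for _ in range(20)]
suspicious = [rng.normal(0.4, 0.25, (16, 16)) for _ in range(20)]
centroids = train_centroids(benign + suspicious,
                            ["benign"] * 20 + ["suspicious"] * 20)

print(predict(rng.normal(0.4, 0.25, (16, 16)), centroids))
```

The design point the sketch makes is scale: because feature extraction and classification are just arithmetic, the same loop runs over hundreds of slides in the time a pathologist spends on one.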
In the end, all of this new information should help pathologists and radiologists with their interpretations of slides and scans and, more critically, help clinicians make better-informed treatment recommendations.
Madabhushi said that can help a single pathologist work more efficiently by more accurately triaging patients by true need for care, or it can provide hope to an entire nation.
“I always use the example of Botswana, where they have a population of 2 million people — and only one pathologist that we are aware of,” he says. “From that one example alone, you can see that this technology can help that one pathologist be more efficient and help many more people.”