Computer engineers and radiologists at Duke University have developed an artificial intelligence (AI) platform that can analyze potentially cancerous lesions in mammography scans to help determine if a patient should receive an invasive biopsy. Unlike other AI platforms, however, this algorithm is interpretable, meaning it shows physicians exactly how it came to its conclusions.

The researchers trained their interpretable AI algorithm for breast lesions (IAIA-BL) to locate and evaluate lesions just as an actual radiologist would be trained, rather than allowing it to freely develop its own procedures. This, they believe, gives the new platform several advantages over its “black box” counterparts and could make it a useful training tool for teaching students how to read mammography images. It could also help physicians in sparsely populated regions around the world who do not regularly read mammography scans make better healthcare decisions.

“If a computer is going to help make important medical decisions, physicians need to trust that the AI is basing its conclusions on something that makes sense,” said Joseph Lo, PhD, professor of radiology at Duke. “We need algorithms that not only work, but explain themselves and show examples of what they’re basing their conclusions on. That way, whether a physician agrees with the outcome or not, the AI is helping to make better decisions.”

Lo and colleagues report on the AI platform in Nature Machine Intelligence, in a paper titled, “A case-based interpretable deep learning model for classification of mass lesions in digital mammography.”

Engineering AI that reads medical images is a huge industry. Thousands of independent algorithms already exist, and the FDA has approved more than 100 for clinical use. As the authors noted, “Artificial intelligence is revolutionizing radiology.” However, they cautioned, whether reading MRI, CT, or mammogram scans, very few algorithms use validation datasets with more than 1,000 images or contain demographic information. “ … there are few publicly available mammography datasets, so many models are trained on relatively few cases and the community lacks datasets to externally validate these models,” they further noted. This dearth of information, coupled with the recent failures of several notable examples, has led many physicians to question the use of AI in critical medical decisions.

In one instance, an AI model failed even when researchers trained it with images taken from different facilities using different equipment. Rather than focusing exclusively on the lesions of interest, the AI learned to use subtle differences introduced by the equipment itself to recognize images coming from the cancer ward and to assign those lesions a higher probability of being cancerous. As one would expect, the AI did not transfer well to other hospitals using different equipment. But because nobody knew what the algorithm was looking at when making decisions, nobody knew it was destined to fail in real-world applications.

As the authors pointed out, “Despite the hope of computer-aided radiology for mammography,” current methods are linked with “serious concerns,” including confounding. Confounding occurs when the predictive model is using incorrect information or reasoning to make a decision, even if the decision is correct, the team added. “In previous studies, researchers created models that seemed to perform well on their test sets, yet on further inspection, based their decisions on confounding information (for example, type of equipment) rather than medical information.”

Interpretability in machine learning models is important in what the authors noted are “high-stakes decisions,” such as whether to order a biopsy based on a mammographic scan. “Mammography poses important challenges that are not present in other computer vision tasks: datasets are small, confounding information is present and it can be difficult even for a radiologist to decide between watchful waiting and biopsy based on a mammogram alone,” they wrote.

“Our idea was to instead build a system to say that this specific part of a potential cancerous lesion looks a lot like this other one that I’ve seen before,” said Alina Barnett, a computer science PhD candidate at Duke and first author of the newly reported study. “Without these explicit details, medical practitioners will lose time and faith in the system if there’s no way to understand why it sometimes makes mistakes.”

Cynthia Rudin, PhD, professor of electrical and computer engineering and computer science at Duke, compares the new AI platform’s process to that of a real-estate appraiser. In the black box models that dominate the field, an appraiser would provide a price for a home without any explanation at all. In a model that includes what is known as a “saliency map,” the appraiser might point out that a home’s roof and backyard were key factors in its pricing decision, but it would not provide any details beyond that.

“Our method would say that you have a unique copper roof and a backyard pool that are similar to these other houses in your neighborhood, which made their prices increase by this amount,” Rudin said. “This is what transparency in medical imaging AI could look like and what those in the medical field should be demanding for any radiology challenge.”

As the authors further commented, “To ensure clinical acceptance, an AI tool will need to provide its reasoning process to its human radiologist collaborators to be a useful aide in these difficult and high-stakes decision-making processes.” The reasoning process of any model would ideally be similar to that of an actual radiologist, who will look at particular aspects of the image that are known to be important, based on the physiology of how lesions develop within breast tissue.

Most AI platforms for spotting pre-cancerous lesions in mammography scans don’t reveal any of their decision-making process (top). When they do, it is often through a saliency map (middle) that only tells doctors where the algorithm is looking. The new AI platform (bottom) not only tells doctors where it’s looking, but also which past cases it’s using to draw its conclusions. [Alina Barnett, Duke University]

The researchers trained the new AI with 1,136 images taken from 484 patients at Duke University Health System. They first taught the AI to find the suspicious lesions in question and ignore all of the healthy tissue and other irrelevant data. Then they hired radiologists to carefully label the images to teach the AI to focus on the edges of the lesions, where the potential tumors meet healthy surrounding tissue, and compare those edges to edges in images with known cancerous and benign outcomes. Radiating lines or fuzzy edges, known medically as mass margins, are the best predictor of cancerous breast tumors and the first thing that radiologists look for. This is because cancerous cells replicate and expand so fast that not all of a developing tumor’s edges are easy to see in mammograms.
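
To make that training idea concrete, the sketch below (in PyTorch) shows one way radiologist-drawn margin annotations could be used during training: the model is penalized whenever its attention falls outside the annotated mass-margin region. This is an illustrative assumption, not the authors’ published code, and the names (fine_annotation_penalty, activation_map, margin_mask) are hypothetical.

```python
import torch

def fine_annotation_penalty(activation_map: torch.Tensor,
                            margin_mask: torch.Tensor) -> torch.Tensor:
    """Hypothetical penalty term: discourage model attention that falls
    outside the radiologist-annotated mass-margin region.

    activation_map: (H, W) non-negative attention/similarity map from the model
    margin_mask:    (H, W) binary mask, 1 where the radiologist marked the margin
    """
    outside = activation_map * (1.0 - margin_mask)      # attention on irrelevant tissue
    return outside.sum() / (activation_map.sum() + 1e-8)

# Toy usage: a 4x4 attention map and a mask covering the top-left corner.
attention = torch.rand(4, 4)
mask = torch.zeros(4, 4)
mask[:2, :2] = 1.0
penalty = fine_annotation_penalty(attention, mask)
print(float(penalty))   # fraction of attention spent off the annotated margin
```

In practice, a term like this would be added to the usual classification loss, so the model is rewarded both for getting the answer right and for basing that answer on the medically meaningful part of the image.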

Reporting in their paper, the investigators stated, “In addition to predicting whether a lesion is malignant or benign, our work aims to follow the reasoning processes of radiologists in detecting clinically relevant semantic features of each image, such as the characteristics of the mass margins. The framework includes a novel interpretable neural network algorithm that uses case-based reasoning for mammography.”
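
The article describes the case-based reasoning component only at a high level. As a rough illustration, assuming a prototype-comparison design in the spirit of “this part looks like that past case,” the sketch below compares feature patches from a new mammogram against stored prototype vectors taken from previously seen cases and combines the similarities into a malignancy score. Every name, dimension, and weight here is hypothetical rather than the published architecture.

```python
import torch
import torch.nn.functional as F

# Hypothetical setup: each prototype is a feature vector extracted from a
# mass-margin patch of a past case with a known label (e.g., a spiculated margin).
prototypes = torch.randn(5, 128)     # 5 stored prototype vectors, 128-d features
class_weights = torch.randn(5)       # learned contribution of each prototype
                                     # to the malignancy score

def case_based_score(patch_features: torch.Tensor):
    """Compare each feature patch of a new image to the stored prototypes.

    patch_features: (num_patches, 128) features from the new mammogram
    Returns a malignancy logit and, for each prototype, the index of the most
    similar patch -- i.e., which part of the image 'looks like' that past case.
    """
    sims = F.cosine_similarity(patch_features.unsqueeze(1),   # (P, 1, 128)
                               prototypes.unsqueeze(0),       # (1, 5, 128)
                               dim=-1)                        # -> (P, 5)
    best_sim, best_patch = sims.max(dim=0)            # best-matching patch per prototype
    malignancy_logit = (best_sim * class_weights).sum()
    return malignancy_logit, best_patch

feats = torch.randn(49, 128)          # e.g., a 7x7 grid of patch features
logit, evidence = case_based_score(feats)
print(logit.item(), evidence.tolist())   # score plus which patches served as evidence
```

The indices returned alongside the score are what make this style of reasoning inspectable: they point to which parts of the new image matched which remembered cases, which is the kind of evidence a radiologist can check directly.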

“This is a unique way to train an AI how to look at medical imagery,” Barnett said. “Other AIs are not trying to imitate radiologists; they’re coming up with their own methods for answering the question that are often not helpful or, in some cases, depend on flawed reasoning processes.”

After training was complete, the researchers put the AI to the test. While it did not outperform human radiologists, it did just as well as other black box computer models. Importantly, when the new AI is wrong, people working with it will be able to recognize that it is wrong and why it made the mistake. “Our models are decision aids—rather than decision-makers—and aim for better overall human–machine collaboration,” the authors noted. “Thus, unlike existing black-box systems that aim to replace a doctor, we aim to create an IAIA-BL whose explicit reasoning can be understood and verified by a medical practitioner … Our novel deep learning architecture enables IAIA-BL to provide an explanation that shows the underlying decision-making process for each case.”

Moving forward, the team is working to add other physical characteristics for the AI to consider when making its decisions, such as a lesion’s shape, which is a second feature radiologists learn to look at. Rudin and Lo also recently received a Duke MEDx High-Risk High-Impact Award to continue developing the algorithm and conduct a radiologist reader study to see if it helps clinical performance and/or confidence.

“There was a lot of excitement when researchers first started applying AI to medical images, that maybe the computer will be able to see something or figure something out that people couldn’t,” said co-author Fides Schwartz, PhD, research fellow at Duke Radiology. “In some rare instances that might be the case, but it’s probably not the case in a majority of scenarios. So we are better off making sure we as humans understand what information the computer has used to base its decisions on.”

As the team concluded in their report, “Future work with this model might include reader studies in which we measure any improvements in accuracy and radiologists report their trust in our system. Given the increased benefit of other AI assistance to less-experienced readers, it might be valuable to compare the benefit of this system with both sub-specialists and community radiologists who might be called on to do this work only occasionally.”
