Saliency Heat Maps Aim to Make AI for Medical Imaging More Understandable; How Effective Is the Technique?

Artificial intelligence (AI) models that interpret medical images promise to improve clinicians' ability to make precise and timely diagnoses while also lessening workload by letting busy doctors concentrate on critical cases and delegate rote tasks to machine learning.

However, as indicated in a News-Medical.net report, AI models that offer no transparency into how and why a diagnosis is reached can be problematic. This opaque reasoning, also called "black box" AI, can reduce clinician trust in the dependability of the tool and therefore discourage its use.

At the same time, this lack of transparency could mislead clinicians into over-trusting the tool's interpretation.

In medical imaging, one approach to developing more understandable AI models and demystifying AI decision-making has been "saliency assessment," a method that uses heat maps to show whether a tool is appropriately focusing on the relevant parts of a given image or homing in on nonessential ones.

AI for Medical Imaging
(Photo: GEORGES GOBET/AFP via Getty Images) A member of the medical staff uses medical equipment in a hybrid interventional operating room at the vascular surgery service of the Nantes CHU hospital, northern France.

How Heat Maps Work

Heat maps highlight the parts of an image that influence the artificial intelligence model's interpretation.

This could help human doctors see whether the AI model is focusing on the same sites as they do or is mistakenly concentrating on irrelevant spots in an image.
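
To make the idea concrete, here is a minimal sketch of one common flavor of the technique, plain input-gradient saliency, in PyTorch. The model, input tensor, and class selection below are illustrative placeholders, not the methods used in the study; production systems typically apply more sophisticated approaches, such as Grad-CAM, to a classifier trained on chest X-rays.

```python
import torch
import torchvision.models as models

# Placeholder network; a real system would load a classifier trained on chest X-rays.
model = models.resnet18(weights=None)
model.eval()

# Placeholder input: one 3-channel 224x224 "image" (random noise, for illustration only).
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass, then backpropagate the score of the top predicted class.
scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# The per-pixel gradient magnitude, collapsed over color channels, is a crude
# heat map: large values mark pixels that most influenced the prediction.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
print(saliency.shape)
```

In practice, the resulting map would be normalized and overlaid on the X-ray as a semi-transparent color layer so a radiologist can compare the highlighted regions with the true site of the pathology.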

However, new research published in Nature Machine Intelligence reveals that for all their promise, saliency heat maps may not yet be ready for prime time.

The analysis, led by investigator Pranav Rajpurkar from Harvard Medical School, Matthew Lungren from Stanford University, and Adriel Saporta from New York University, assessed the validity of seven widely used saliency methods, measuring how dependably and precisely they could localize pathologies associated with 10 conditions typically diagnosed on X-rays, such as pleural effusion, lung lesions, edema, and enlarged heart structures.
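
The study's actual evaluation code and metrics are in the team's open repository; purely as an illustration of the kind of comparison involved, the sketch below scores a saliency map against an expert-drawn mask using intersection-over-union (IoU). The function name, threshold, and toy arrays here are assumptions for this example, not the study's implementation.

```python
import numpy as np

def saliency_iou(saliency, expert_mask, threshold=0.5):
    """Crude localization score: binarize a normalized saliency map and
    compute its intersection-over-union with an expert-drawn mask."""
    pred = saliency >= threshold          # pixels the model "points at"
    inter = np.logical_and(pred, expert_mask).sum()
    union = np.logical_or(pred, expert_mask).sum()
    return inter / union if union > 0 else 0.0

# Toy example: the model and the expert overlap on 2 of the marked pixels.
heat = np.array([[0.9, 0.8, 0.1, 0.0],
                 [0.2, 0.7, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 0.0]])
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = mask[0, 1] = mask[1, 0] = True
print(saliency_iou(heat, mask))  # prints 0.5: 2 overlapping pixels / 4 in the union
```

A score near 1.0 would mean the heat map points at essentially the same region an expert would mark, while a score near 0.0 means the model is "looking" somewhere else entirely.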

Heat Maps' Performance in Image Assessment

To gauge performance, the study investigators compared the tools' output against human expert judgment.

According to a related Nanowerk report, the final analysis found that tools using saliency-based heat maps consistently underperformed human radiologists both in image assessment and in their ability to spot pathological lesions.

The work represents the first comparative analysis of saliency maps and human expert performance across multiple X-ray pathologies.

The research offers a granular understanding of whether and how particular pathological characteristics in an image might affect the performance of an AI tool.

Refinement Needed Before Adoption in Clinical AI Models

Saliency maps are already employed as a quality-assurance feature in clinical settings that use AI for computer-aided detection tasks, such as reading chest X-rays.

Nonetheless, in light of the new findings, the study investigators said that this feature should be applied with caution and a healthy dose of skepticism.

Moreover, the researchers cautioned that because of the fundamental limitations identified in the study, saliency-based heat maps must be further refined before widespread adoption in clinical AI models.

The team's full codebase, data, and analysis are open and available to anyone who wants to study this essential aspect of clinical machine learning in medical imaging applications.

Related information about heat maps is shown in the video below from Dr. Asuf's Mol. Biology YouTube channel:

Check out more news and information on Artificial Intelligence in Science Times.
