Explaining Chest X-ray Pathologies in Natural Language

Published in Medical Image Computing and Computer Assisted Intervention (MICCAI), 2022

Recommended citation: https://link.springer.com/chapter/10.1007/978-3-031-16443-9_67


Most deep learning algorithms lack explanations for their predictions, which limits their deployment in clinical practice. Approaches to improve explainability, especially in medical imaging, have often been shown to convey limited information, be overly reassuring, or lack robustness. In this work, we introduce the task of generating natural language explanations (NLEs) to justify predictions made on medical images. NLEs are human-friendly and comprehensive and enable the training of intrinsically explainable models. To this end, we introduce MIMIC-NLE, the first large-scale medical imaging dataset with NLEs. It contains over 38,000 NLEs, which explain the presence of various thoracic pathologies and chest X-ray findings. We propose a general approach to solve the task and evaluate several architectures on this dataset, including via clinician assessment.

Download paper here


@InProceedings{Kayser_2022_MICCAI,
  author    = {Kayser, Maxime and Emde, Cornelius and Camburu, Oana-Maria and Parsons, Guy and Papiez, Bartlomiej and Lukasiewicz, Thomas},
  title     = {Explaining Chest X-ray Pathologies in Natural Language},
  booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention (MICCAI)},
  month     = {September},
  year      = {2022},
  pages     = {}
}