Show, tell and summarise: learning to generate and summarise radiology findings from medical images

Sonit Singh*, Sarvnaz Karimi, Kevin Ho-Shon, Len Hamey

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Radiology plays a vital role in health care by viewing the human body for diagnosis, monitoring, and treatment of medical problems. In radiology practice, radiologists routinely examine medical images such as chest X-rays and describe their findings in the form of radiology reports. However, this task of reading medical images and summarising their insights is time consuming, tedious, and error-prone, and often represents a bottleneck in the clinical diagnosis process. A computer-aided diagnosis system that can automatically generate radiology reports from medical images can be of great significance in reducing workload, reducing diagnostic errors, speeding up clinical workflow, and helping to alleviate any shortage of radiologists. Existing research in radiology report generation focuses on generating the concatenation of the findings and impression sections, and ignores important differences between normal and abnormal radiology reports. Normal and abnormal reports differ in text style, and it is difficult for a single model both to learn these styles and to learn the transition from findings to impression. To alleviate these challenges, we propose a Show, Tell and Summarise model that first generates findings from chest X-rays and then summarises them to produce the impression section. The proposed work generates the findings and impression sections separately, overcoming the limitation of previous research. We also use separate models for generating normal and abnormal radiology reports, which provides a truer picture of the model's performance. Experimental results on the publicly available IU-CXR dataset show the effectiveness of our proposed model. Finally, we highlight limitations in radiology report generation research and present recommendations for future work.
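The two-stage pipeline the abstract describes — classify the report style, generate findings from the image, then summarise the findings into an impression — can be sketched as follows. This is a minimal illustrative sketch only: all function names and stub behaviours are assumptions, not the authors' implementation, which would use trained CNN encoders and sequence decoders.

```python
# Hypothetical sketch of a two-stage generate-then-summarise pipeline,
# as outlined in the abstract. Stubs stand in for trained models.

def classify_abnormality(image_features):
    # Stub: a real system would use a trained image classifier
    # to route the input to the normal or abnormal report model.
    return "abnormal" if max(image_features) > 0.5 else "normal"

def generate_findings(image_features, style):
    # Stub decoder: a real system would use an image-captioning model
    # (CNN encoder + RNN/Transformer decoder) trained per report style.
    return f"{style} findings generated from image features"

def summarise_findings(findings):
    # Stub summariser: a real system would use a sequence-to-sequence
    # summarisation model mapping findings text to an impression.
    return f"impression summarised from: {findings}"

def generate_report(image_features):
    # Stage 1: generate the findings section from the image;
    # Stage 2: summarise the findings into the impression section.
    style = classify_abnormality(image_features)
    findings = generate_findings(image_features, style)
    impression = summarise_findings(findings)
    return {"findings": findings, "impression": impression}

report = generate_report([0.1, 0.7, 0.3])
```

The key design point, per the abstract, is that findings and impression are produced by separate stages rather than as one concatenated output, and that normal and abnormal inputs are routed to separate generators.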

Original language: English
Number of pages: 25
Journal: Neural Computing and Applications
Early online date: 5 Apr 2021
DOIs
Publication status: E-pub ahead of print - 5 Apr 2021

Bibliographical note

Publisher Copyright:
© 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.

Keywords

  • Artificial intelligence
  • Chest X-rays
  • Computer vision
  • Computer-aided report generation
  • Deep learning
  • Medical imaging
  • Natural language processing
  • Radiology report generation
