Few-shot fine-tuning SOTA summarization models for medical dialogues

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

9 Citations (Scopus)
108 Downloads (Pure)

Abstract

Abstractive summarization of medical dialogues presents a challenge for standard training approaches given the paucity of suitable datasets. We explore the performance of state-of-the-art models with zero-shot and few-shot learning strategies and measure the impact of pretraining with general-domain and dialogue-specific text on summarization performance.
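
The abstract describes few-shot fine-tuning of pretrained summarization models on small sets of dialogue/summary pairs. The following is a minimal sketch of that setup using Hugging Face Transformers; the checkpoint, dataset fields, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: few-shot fine-tuning of a pretrained summarizer (e.g. BART) on a
# handful of medical dialogue/summary pairs. All specifics below (checkpoint,
# field names, hyperparameters) are assumptions for illustration only.
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments,
                          DataCollatorForSeq2Seq)
from datasets import Dataset

checkpoint = "facebook/bart-large-cnn"  # assumed general-domain summarizer
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# A few-shot training set: a small number of (dialogue, summary) pairs.
few_shot_examples = [
    {"dialogue": "Doctor: How long have you had the cough? Patient: About two weeks.",
     "summary": "Patient reports a two-week history of cough."},
    # ... a handful more examples ...
]
train_ds = Dataset.from_list(few_shot_examples)

def preprocess(batch):
    # Tokenize dialogues as inputs and summaries as labels.
    model_inputs = tokenizer(batch["dialogue"], truncation=True, max_length=512)
    labels = tokenizer(text_target=batch["summary"], truncation=True, max_length=128)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

train_ds = train_ds.map(preprocess, batched=True,
                        remove_columns=["dialogue", "summary"])

args = Seq2SeqTrainingArguments(
    output_dir="few_shot_medical_summarizer",
    per_device_train_batch_size=2,
    num_train_epochs=10,      # more epochs compensate for very little data
    learning_rate=3e-5,
    logging_steps=1,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

In a zero-shot setting the fine-tuning step would simply be skipped and the pretrained checkpoint used directly for generation; the comparison of such strategies is the subject of the paper.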
Original language: English
Title of host publication: The 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Subtitle of host publication: Proceedings of the Student Research Workshop
Place of Publication: Stroudsburg, PA, USA
Publisher: Association for Computational Linguistics (ACL)
Pages: 254-266
Number of pages: 13
ISBN (Electronic): 9781955917735
Publication status: Published - Jul 2022
Event: 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Seattle, United States
Duration: 10 Jul 2022 - 15 Jul 2022

Workshop

Workshop: 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Country/Territory: United States
City: Seattle
Period: 10/07/22 - 15/07/22

Bibliographical note

Copyright the Publisher 2022. Version archived for private and non-commercial use with the permission of the author/s and according to publisher conditions. For further rights please contact the publisher.
