Abstract
Abstractive summarization of medical dialogues presents a challenge for standard training approaches, given the paucity of suitable datasets. We explore the performance of state-of-the-art models under zero-shot and few-shot learning strategies and measure the impact of pretraining on general-domain and dialogue-specific text on summarization performance.
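As a rough illustration of the zero-shot setting mentioned in the abstract, the sketch below applies a general-domain pretrained summarizer directly to a short dialogue with no task-specific fine-tuning. The model name (facebook/bart-large-cnn) and the toy dialogue are assumptions for illustration only, not the paper's configuration.

```python
# Minimal zero-shot sketch (illustrative only; not the paper's exact setup).
# Assumptions: model choice and the toy dialogue below are hypothetical.
from transformers import pipeline

# A general-domain pretrained abstractive summarizer, used as-is (zero-shot).
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

dialogue = (
    "Doctor: What brings you in today? "
    "Patient: I've had a persistent cough and a mild fever for three days. "
    "Doctor: Any shortness of breath? "
    "Patient: No, just fatigue."
)

# No fine-tuning on medical dialogue data; the pretrained model is applied directly.
summary = summarizer(dialogue, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```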
| Original language | English |
| --- | --- |
| Title of host publication | The 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies |
| Subtitle of host publication | Proceedings of the Student Research Workshop |
| Place of Publication | Stroudsburg, PA, USA |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 254-266 |
| Number of pages | 13 |
| ISBN (Electronic) | 9781955917735 |
| Publication status | Published - Jul 2022 |
| Event | 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Seattle, United States. Duration: 10 Jul 2022 → 15 Jul 2022 |
Workshop
| Workshop | 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies |
| --- | --- |
| Country/Territory | United States |
| City | Seattle |
| Period | 10/07/22 → 15/07/22 |