Abstract
In this work, we describe our experiments and participating systems in the BioASQ Task 9b Phase B challenge on biomedical question answering. We focused on generating ideal answers and investigated multi-task fine-tuning and gradual unfreezing of transformer-based language models. For factoid questions, our ALBERT-based systems ranked first in test batch 1 and fourth in test batch 2. Our DistilBERT systems outperformed the ALBERT variants in test batches 4 and 5 despite having 81% fewer parameters. However, we observed that gradual unfreezing had no significant impact on model accuracy compared to standard fine-tuning.
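For readers unfamiliar with the gradual unfreezing technique evaluated above, the sketch below shows one common way to implement it with the Hugging Face Transformers library. The checkpoint, layer grouping, and per-epoch schedule are illustrative assumptions, not the configuration used in the paper.

```python
# A minimal sketch of gradual unfreezing, assuming a DistilBERT QA model
# from Hugging Face Transformers. The checkpoint and schedule below are
# illustrative, not the paper's actual configuration.
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")

# Start with the entire encoder frozen; only the QA head stays trainable.
for param in model.distilbert.parameters():
    param.requires_grad = False

# DistilBERT stacks 6 transformer layers in a ModuleList.
encoder_layers = model.distilbert.transformer.layer

def unfreeze_top(n: int) -> None:
    """Make the top `n` encoder layers trainable again."""
    for layer in encoder_layers[len(encoder_layers) - n:]:
        for param in layer.parameters():
            param.requires_grad = True

# Illustrative schedule: thaw one more layer before each training epoch,
# so the encoder is fully unfrozen by the final epoch.
for epoch in range(len(encoder_layers)):
    unfreeze_top(epoch + 1)
    # ... run one fine-tuning epoch here ...
```

By contrast, standard fine-tuning leaves `requires_grad=True` for all parameters from the first step, which is the baseline the abstract's comparison refers to.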
| Original language | English |
| --- | --- |
| Title of host publication | CLEF 2021 Working Notes |
| Subtitle of host publication | Proceedings of the Working Notes of CLEF 2021 - Conference and Labs of the Evaluation Forum |
| Editors | Guglielmo Faggioli, Nicola Ferro, Alexis Joly, Maria Maistro, Florina Piroi |
| Place of Publication | Aachen, Germany |
| Publisher | CEUR |
| Pages | 247-257 |
| Number of pages | 11 |
| Publication status | Published - 2021 |
| Event | 2021 Working Notes of CLEF - Conference and Labs of the Evaluation Forum, CLEF-WN 2021 - Virtual, Bucharest, Romania |
| Duration | 21 Sept 2021 → 24 Sept 2021 |
Publication series
| Name | CEUR Workshop Proceedings |
| --- | --- |
| Publisher | CEUR |
| Volume | 2936 |
| ISSN (Electronic) | 1613-0073 |
Conference
| Conference | 2021 Working Notes of CLEF - Conference and Labs of the Evaluation Forum, CLEF-WN 2021 |
| --- | --- |
| Country/Territory | Romania |
| City | Virtual, Bucharest |
| Period | 21/09/21 → 24/09/21 |
Bibliographical note
Publisher Copyright: © 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
Keywords
- ALBERT
- BioASQ9b
- DistilBERT
- Question answering
- Transfer learning