In this work, we describe our experiments and participating systems in Phase B of the BioASQ Task 9b biomedical question answering challenge. We focused on generating the ideal answers and investigated multi-task fine-tuning and gradual unfreezing techniques on transformer-based language models. For factoid questions, our ALBERT-based systems ranked first in test batch 1 and fourth in test batch 2. Our DistilBERT systems outperformed the ALBERT variants in test batches 4 and 5 despite having 81% fewer parameters. However, we observed that gradual unfreezing had no significant impact on model accuracy compared to standard fine-tuning.
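The gradual-unfreezing schedule mentioned above can be sketched as follows. This is a minimal illustration using a toy PyTorch module, not the paper's actual ALBERT/DistilBERT setup: the `TinyEncoder` class, layer count, and per-epoch schedule are all illustrative assumptions. The idea is to freeze the pre-trained encoder layers at the start of fine-tuning and unfreeze them one group at a time, top-down, while the task head remains trainable throughout.

```python
import torch
from torch import nn

# Hypothetical stand-in for a transformer encoder: a stack of layers
# whose parameters are unfrozen one group at a time, top-down.
class TinyEncoder(nn.Module):
    def __init__(self, num_layers=4, dim=8):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_layers))
        self.head = nn.Linear(dim, 2)  # task-specific head, always trainable

    def forward(self, x):
        for layer in self.layers:
            x = torch.relu(layer(x))
        return self.head(x)

def freeze_all_layers(model):
    # Freeze every encoder layer; the head stays trainable.
    for layer in model.layers:
        for p in layer.parameters():
            p.requires_grad = False

def unfreeze_top(model, k):
    # Unfreeze the top-most k encoder layers (gradual unfreezing
    # proceeds from the layers closest to the output downward).
    for layer in model.layers[-k:]:
        for p in layer.parameters():
            p.requires_grad = True

model = TinyEncoder()
freeze_all_layers(model)

# Each "epoch", unfreeze one more layer from the top, then train as usual.
for epoch in range(1, 4):
    unfreeze_top(model, epoch)
    # ... run one epoch of fine-tuning here ...
```

After three epochs the top three of the four encoder layers are trainable again, while the bottom layer remains frozen; standard fine-tuning, by contrast, updates all parameters from the first step.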
|Title of host publication||CLEF 2021 Working Notes|
|Subtitle of host publication||Proceedings of the Working Notes of CLEF 2021 - Conference and Labs of the Evaluation Forum|
|Editors||Guglielmo Faggioli, Nicola Ferro, Alexis Joly, Maria Maistro, Florina Piroi|
|Place of Publication||Aachen, Germany|
|Number of pages||11|
|Publication status||Published - 2021|
|Event||2021 Working Notes of CLEF - Conference and Labs of the Evaluation Forum, CLEF-WN 2021 - Virtual, Bucharest, Romania|
Duration: 21 Sep 2021 → 24 Sep 2021
|Name||CEUR Workshop Proceedings|
|Conference||2021 Working Notes of CLEF - Conference and Labs of the Evaluation Forum, CLEF-WN 2021|
|Period||21/09/21 → 24/09/21|
|Bibliographical note||Publisher Copyright:
© 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
|Keywords|
- Question answering
- Transfer learning