Transformer-based language models for factoid question answering at BioASQ9b

Urvashi Khanna*, Diego Mollá

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

2 Citations (Scopus)
379 Downloads (Pure)

Abstract

In this work, we describe our experiments and participating systems in the BioASQ Task 9b Phase B challenge of biomedical question answering. We have focused on finding the ideal answers and investigated multi-task fine-tuning and gradual unfreezing techniques on transformer-based language models. For factoid questions, our ALBERT-based systems ranked first in test batch 1 and fourth in test batch 2. Our DistilBERT systems outperformed the ALBERT variants in test batches 4 and 5 despite having 81% fewer parameters than ALBERT. However, we observed that gradual unfreezing had no significant impact on the model's accuracy compared to standard fine-tuning.
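The gradual unfreezing mentioned in the abstract releases a model's layers for training progressively, starting from the task head and working down the encoder stack, rather than fine-tuning all weights at once. The following is a minimal illustrative sketch of that scheduling idea only, not the authors' implementation; the layer names are hypothetical, and a real setup would toggle `requires_grad` on each layer's parameters.

```python
# Illustrative sketch of a gradual-unfreezing schedule (not the authors' code).
# The transformer is modeled as an ordered list of layer names; in practice a
# framework would enable/disable gradients on each layer's parameters.

def unfreezing_schedule(layers, epochs):
    """Return, per epoch, the list of layers whose weights are trainable.

    Starts with only the top (task) layer unfrozen and releases one more
    layer each epoch, from the top of the stack downwards.
    """
    schedule = []
    for epoch in range(epochs):
        n_unfrozen = min(epoch + 1, len(layers))
        schedule.append(layers[-n_unfrozen:])  # top-down unfreezing
    return schedule

layers = ["embeddings", "encoder_0", "encoder_1", "encoder_2", "qa_head"]
for epoch, trainable in enumerate(unfreezing_schedule(layers, 4)):
    print(f"epoch {epoch}: trainable = {trainable}")
```

At epoch 0 only the hypothetical `qa_head` trains; by epoch 3 all encoder blocks are unfrozen while the embeddings remain frozen. The abstract reports that this schedule gave no significant accuracy gain over standard fine-tuning in their experiments.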

Original language: English
Title of host publication: CLEF 2021 Working Notes
Subtitle of host publication: Proceedings of the Working Notes of CLEF 2021 - Conference and Labs of the Evaluation Forum
Editors: Guglielmo Faggioli, Nicola Ferro, Alexis Joly, Maria Maistro, Florina Piroi
Place of Publication: Aachen, Germany
Publisher: CEUR
Pages: 247-257
Number of pages: 11
Publication status: Published - 2021
Event: 2021 Working Notes of CLEF - Conference and Labs of the Evaluation Forum, CLEF-WN 2021 - Virtual, Bucharest, Romania
Duration: 21 Sept 2021 - 24 Sept 2021

Publication series

Name: CEUR Workshop Proceedings
Publisher: CEUR
Volume: 2936
ISSN (Electronic): 1613-0073

Conference

Conference: 2021 Working Notes of CLEF - Conference and Labs of the Evaluation Forum, CLEF-WN 2021
Country/Territory: Romania
City: Virtual, Bucharest
Period: 21/09/21 - 24/09/21

Bibliographical note

Publisher Copyright:
© 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Keywords

  • ALBERT
  • BioASQ9b
  • DistilBERT
  • Question answering
  • Transfer learning
