Answer extraction towards better evaluations of NLP systems

Rolf Schwitter, Diego Mollá, Rachel Fournier, Michael Hess

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution

Abstract

We argue that reading comprehension tests are not particularly suited for the evaluation of NLP systems. Reading comprehension tests are specifically designed to evaluate human reading skills, and these require vast amounts of world knowledge and common-sense reasoning capabilities. Experience has shown that this kind of full-fledged question answering (QA) over texts from a wide range of domains is so difficult for machines as to be far beyond the present state of the art of NLP. To advance the field, we propose a much more modest evaluation set-up, viz. Answer Extraction (AE) over texts from highly restricted domains. AE aims at retrieving those sentences from documents that contain the explicit answer to a user query. AE is less ambitious than full-fledged QA but has a number of important advantages over it. It relies mainly on linguistic knowledge and needs only a very limited amount of world knowledge and a few inference rules. However, it requires the solution of a number of key linguistic problems. This makes AE a suitable task for advancing NLP techniques in a measurable way. Finally, there is a real demand for working AE systems in technical domains. We outline what evaluation procedures for AE systems over real-world domains might look like and discuss their feasibility.
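As a rough illustration of the task the abstract describes (this sketch is ours, not part of the paper): answer extraction can be approximated, in its simplest form, by scoring each document sentence by its word overlap with the query and returning the highest-scoring sentences. The Python sketch below shows this hypothetical baseline; names such as extract_answers are illustrative, and the AE systems discussed in the paper rely on much richer linguistic analysis than plain lexical matching.

import re

def split_sentences(text):
    # Very rough sentence splitter on end-of-sentence punctuation.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def word_set(text):
    # Lowercased word tokens as a set.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def extract_answers(query, document, top_n=3):
    # Rank sentences by word overlap with the query; return the best candidates.
    query_words = word_set(query)
    scored = []
    for sentence in split_sentences(document):
        overlap = len(query_words & word_set(sentence))
        if overlap:
            scored.append((overlap, sentence))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [sentence for _, sentence in scored[:top_n]]

if __name__ == "__main__":
    doc = ("The cp command copies files. To remove a file, use the rm command. "
           "The mv command renames or moves files.")
    # Prints the sentence containing the explicit answer to the query.
    print(extract_answers("How do I remove a file?", doc))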

Original language: English
Title of host publication: ANLP/NAACL 2000 Workshop
Subtitle of host publication: Reading Comprehension Tests as Evaluation for Computer-Based Language Understanding Systems: proceedings
Place of publication: Stroudsburg, PA
Publisher: The Association for Computational Linguistics
Pages: 20-27
Number of pages: 8
Publication status: Published - 2000
Externally published: Yes
Event: ANLP/NAACL Workshop: Reading Comprehension Tests as Evaluation for Computer-Based Language Understanding Systems - Seattle, United States
Duration: 4 May 2000 - 4 May 2000

Workshop

Workshop: ANLP/NAACL Workshop
Country: United States
City: Seattle
Period: 4/05/00 - 4/05/00


  • Cite this

    Schwitter, R., Mollá, D., Fournier, R., & Hess, M. (2000). Answer extraction towards better evaluations of NLP systems. In ANLP/NAACL 2000 Workshop: Reading Comprehension Tests as Evaluation for Computer-Based Language Understanding Systems: proceedings (pp. 20-27). Stroudsburg, PA: The Association for Computational Linguistics.