The GREC challenge: Overview and evaluation results

Anja Belz*, Eric Kow, Jette Viethen, Albert Gatt

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

11 Citations (Scopus)

Abstract

The GREC Task at REG'08 required participating systems to select coreference chains to the main subject of short encyclopaedic texts collected from Wikipedia. Three teams submitted a total of six systems, and we additionally created four baseline systems. Systems were tested automatically using a range of existing intrinsic metrics. We also evaluated systems extrinsically by applying coreference resolution tools to the outputs and measuring the success of the tools. In addition, systems were tested in a reading/comprehension experiment involving human subjects. This report describes the GREC Task and the evaluation methods, gives brief descriptions of the participating systems, and presents the evaluation results.

Original language: English
Title of host publication: INLG 2008 - 5th International Natural Language Generation Conference, Proceedings of the Conference
Place of publication: Washington DC
Publisher: Association for Computing Machinery (ACM)
Pages: 183-191
Number of pages: 9
Publication status: Published - 2008
Event: 5th International Natural Language Generation Conference, INLG 2008 - Salt Fork, OH, United States
Duration: 12 Jun 2008 - 14 Jun 2008

Other

Other: 5th International Natural Language Generation Conference, INLG 2008
Country: United States
City: Salt Fork, OH
Period: 12/06/08 - 14/06/08

