Abstract
While both spoken and written language processing stand to benefit from parsing, the standard Parseval metrics (Black et al., 1991) and their canonical implementation (Sekine and Collins, 1997) are only useful for text. The Parseval metrics are undefined when the words input to the parser do not match the words in the gold standard parse tree exactly, and word errors are unavoidable with automatic speech recognition (ASR) systems. To fill this gap, we have developed a publicly available tool for scoring parses that implements a variety of metrics which can handle mismatches in words and segmentations, including: alignment-based bracket evaluation, alignment-based dependency evaluation, and a dependency evaluation that does not require alignment. We describe the different metrics, how to use the tool, and the outcome of an extensive set of experiments on the sensitivity of the metrics.
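As a rough illustration of the alignment-based bracket evaluation described in the abstract, the sketch below (not the authors' tool) aligns ASR hypothesis words to reference words, projects labelled bracket spans through that alignment, and scores precision/recall/F1. All names (`align_words`, `project_brackets`, `bracket_prf`) and the LCS-style alignment are illustrative assumptions, not the paper's actual implementation.

```python
from difflib import SequenceMatcher

def align_words(hyp_words, ref_words):
    """Map hypothesis word positions to reference positions via an
    LCS-style alignment; unmatched hypothesis positions are omitted."""
    mapping = {}
    sm = SequenceMatcher(a=hyp_words, b=ref_words, autojunk=False)
    for block in sm.get_matching_blocks():
        for k in range(block.size):
            mapping[block.a + k] = block.b + k
    return mapping

def project_brackets(brackets, mapping):
    """Project (label, start, end) spans over hypothesis word indices
    onto reference indices; drop spans whose endpoints are unaligned."""
    projected = set()
    for label, start, end in brackets:
        if start in mapping and (end - 1) in mapping:
            projected.add((label, mapping[start], mapping[end - 1] + 1))
    return projected

def bracket_prf(hyp_brackets, ref_brackets, hyp_words, ref_words):
    """Labelled bracket precision/recall/F1 after word alignment."""
    mapping = align_words(hyp_words, ref_words)
    hyp = project_brackets(hyp_brackets, mapping)
    ref = set(ref_brackets)
    matched = len(hyp & ref)
    p = matched / len(hyp) if hyp else 0.0
    r = matched / len(ref) if ref else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Hypothetical example: the hypothesis contains an inserted word ("uh"),
# yet the projected brackets still match the reference exactly.
ref_words = ["the", "dog", "barked"]
hyp_words = ["the", "uh", "dog", "barked"]
ref_brackets = {("NP", 0, 2), ("S", 0, 3)}
hyp_brackets = {("NP", 0, 3), ("S", 0, 4)}
print(bracket_prf(hyp_brackets, ref_brackets, hyp_words, ref_words))
# -> (1.0, 1.0, 1.0)
```

The point of the sketch is only that word insertions or deletions need not make bracket scoring undefined once spans are mapped through a word alignment; the metrics in the paper handle this (and segmentation mismatches) in a more principled way.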
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the Language Resources and Evaluation Conference (LREC) |
| Editors | Nicoletta Calzolari |
| Place of Publication | Paris |
| Publisher | ELRA |
| Pages | 333-338 |
| Number of pages | 6 |
| ISBN (Print) | 9782951740822 |
| Publication status | Published - 1 Jan 2006 |
| Externally published | Yes |
| Event | 5th International Conference on Language Resources and Evaluation, LREC 2006 - Genoa, Italy; 22 May 2006 → 28 May 2006 |
Conference
| Conference | 5th International Conference on Language Resources and Evaluation, LREC 2006 |
| --- | --- |
| Country/Territory | Italy |
| City | Genoa |
| Period | 22/05/06 → 28/05/06 |