SParseval: Evaluation metrics for parsing speech

Brian Roark, Mary Harper, Eugene Charniak, Bonnie Dorr, Mark Johnson, Jeremy G. Kahn, Yang Liu, Mari Ostendorf, John Hale, Anna Krasnyanskaya, Matthew Lease, Izhak Shafran, Matthew Snover, Robin Stewart, Lisa Yung

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

26 Citations (Scopus)


While both spoken and written language processing stand to benefit from parsing, the standard Parseval metrics (Black et al., 1991) and their canonical implementation (Sekine and Collins, 1997) are only useful for text. The Parseval metrics are undefined when the words input to the parser do not match the words in the gold standard parse tree exactly, and word errors are unavoidable with automatic speech recognition (ASR) systems. To fill this gap, we have developed a publicly available tool for scoring parses that implements a variety of metrics which can handle mismatches in words and segmentations, including: alignment-based bracket evaluation, alignment-based dependency evaluation, and a dependency evaluation that does not require alignment. We describe the different metrics, how to use the tool, and the outcome of an extensive set of experiments on the sensitivity of the metrics.
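As background for the abstract above, the standard Parseval metrics score a candidate parse by matching labeled bracket spans against the gold tree, which only works when the two yields are word-for-word identical — the limitation SParseval addresses for ASR output. The sketch below is not the SParseval tool; it is a minimal illustration of plain Parseval bracket precision/recall/F1 over toy nested-list trees (the tree encoding and function names are this example's own).

```python
from collections import Counter

# Hedged sketch of standard Parseval bracket scoring (not SParseval).
# Trees are nested lists like ['S', ['NP', 'the', 'dog'], ['VP', 'barks']];
# leaves are plain word strings.

def brackets(tree):
    """Collect labeled (label, start, end) spans from a nested-list tree."""
    spans = []
    def walk(node, start):
        if isinstance(node, str):          # terminal word: advance one position
            return start + 1
        label, children = node[0], node[1:]
        end = start
        for child in children:
            end = walk(child, end)
        spans.append((label, start, end))  # bracket covering [start, end)
        return end
    walk(tree, 0)
    return spans

def parseval(gold, test):
    """Labeled bracket precision, recall, and F1.

    Assumes the two trees share exactly the same word yield -- with ASR
    word errors this assumption fails, which is the gap SParseval fills.
    """
    g, t = Counter(brackets(gold)), Counter(brackets(test))
    matched = sum((g & t).values())        # multiset intersection of brackets
    p = matched / sum(t.values())
    r = matched / sum(g.values())
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```

For example, scoring `['S', ['NP', 'the'], ['VP', 'dog', 'barks']]` against gold `['S', ['NP', 'the', 'dog'], ['VP', 'barks']]` matches only the top-level S bracket, giving precision and recall of 1/3 each. SParseval's alignment-based variants generalize this matching to cases where the words themselves differ.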

Original language: English
Title of host publication: Proceedings of the Language Resources and Evaluation Conference (LREC)
Editors: Nicoletta Calzolari
Place of publication: Paris
Number of pages: 6
ISBN (Print): 9782951740822
Publication status: Published - 1 Jan 2006
Externally published: Yes
Event: 5th International Conference on Language Resources and Evaluation, LREC 2006 - Genoa, Italy
Duration: 22 May 2006 - 28 May 2006
