Abstract
Computer-based aids for writing assistance have been around since at least the early 1980s, focussing primarily on aspects such as spelling, grammar and style. The potential audience for such tools is very large indeed, and this is a clear case where we might expect to see language processing applications having a significant real-world impact. However, existing comparative evaluations of applications in this space are often no more than impressionistic and anecdotal reviews of commercial offerings as found in software magazines, making it hard to determine which approaches are superior. More rigorous evaluation in the scholarly literature has been held back in particular by the absence of shared datasets of texts marked up with errors, and the lack of an agreed evaluation framework. Significant collections of publicly available data are now appearing; this paper describes a complementary evaluation framework, which has been piloted in the Helping Our Own shared task. The approach, which uses stand-off annotations for representing edits to text, can be used in a wide variety of text-correction tasks and easily accommodates different error tagsets.
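To make the idea of stand-off edit annotations concrete, the following is a minimal illustrative sketch. The class, field names, and the error tag shown are assumptions for illustration only; they do not reproduce the actual HOO annotation schema, but show the general principle of recording a correction as character offsets plus candidate replacements, leaving the source text untouched.

```python
from dataclasses import dataclass

@dataclass
class Edit:
    """One stand-off correction: character offsets into the original text,
    an error-type tag, and one or more suggested replacements.
    (Illustrative structure only, not the HOO schema.)"""
    start: int              # offset of first character to replace
    end: int                # offset one past the last character to replace
    tag: str                # error category label (tagset-dependent)
    corrections: list[str]  # alternative replacement strings

def apply_edits(text: str, edits: list[Edit]) -> str:
    """Apply the first suggested correction of each edit.
    Edits are applied right-to-left so earlier offsets stay valid."""
    for e in sorted(edits, key=lambda e: e.start, reverse=True):
        text = text[:e.start] + e.corrections[0] + text[e.end:]
    return text

# Example: a single preposition correction expressed as a stand-off edit.
source = "I am interested on this topic."
edits = [Edit(start=16, end=18, tag="RT", corrections=["in"])]
print(apply_edits(source, edits))  # -> "I am interested in this topic."
```

Because each edit is anchored by offsets rather than embedded in the text, the same representation works for many text-correction tasks, and a different error tagset can be substituted simply by changing the tag labels, which is the flexibility the abstract highlights.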
Original language | English |
---|---|
Title of host publication | Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12) |
Editors | Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Mehmet Uğur Doğan, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis |
Publisher | European Language Resources Association (ELRA) |
Pages | 3015-3018 |
Number of pages | 4 |
ISBN (Print) | 9782951740877 |
Publication status | Published - 2012 |
Event | International Conference on Language Resources and Evaluation (8th : 2012) - Istanbul, Turkey, 23 May 2012 → 25 May 2012 |
Conference
Conference | International Conference on Language Resources and Evaluation (8th : 2012) |
---|---|
City | Istanbul, Turkey |
Period | 23/05/12 → 25/05/12 |
Keywords
- evaluation frameworks
- error correction
- replicability