TY - JOUR
T1 - The design and application of rubrics to assess signed language interpreting performance
AU - Wang, Jihong
AU - Napier, Jemina
AU - Goswell, Della
AU - Carmichael, Andy
PY - 2015
Y1 - 2015
N2 - This article explores the development and application of rubrics to assess an experimental corpus of Auslan (Australian Sign Language)/English simultaneous interpreting performances in both language directions. Two rubrics were used, each comprising four main assessment criteria (accuracy, target text features, delivery features and processing skills). Three external assessors (two interpreter educators and one interpreting practitioner) independently rated the interpreting performances. Results reveal marked variability between the raters: inter-rater reliability between the two interpreter educators was higher than that between each interpreter educator and the interpreting practitioner. Results also show that inter-rater reliability for Auslan-to-English simultaneous interpreting performance was higher than for English-to-Auslan simultaneous interpreting performance. This finding suggests greater challenges in evaluating interpreting performance from a spoken language into a signed language than vice versa. The raters' testing and assessment experience, their scoring techniques and the rating process itself may account for the differences in their scores. Further, results suggest that assessment of interpreting performance inevitably involves some degree of uncertainty and subjective judgment.
KW - raters
KW - assessment rubrics
KW - scoring process and techniques
KW - inter-rater reliability
KW - signed language interpreting
UR - http://www.scopus.com/inward/record.url?scp=84937033483&partnerID=8YFLogxK
U2 - 10.1080/1750399X.2015.1009261
DO - 10.1080/1750399X.2015.1009261
M3 - Article
AN - SCOPUS:84937033483
SN - 1750-399X
VL - 9
SP - 83
EP - 103
JO - The Interpreter and Translator Trainer
JF - The Interpreter and Translator Trainer
IS - 1
ER -