Abstract
Over the past decade, interpreter certification performance testing has gained momentum. Certification tests often involve high stakes, since they can play an important role in regulating access to professional practice and serve to provide a measure of professional competence for end users. The decision to award certification is based on inferences from candidates' test scores about their knowledge, skills and abilities, as well as their interpreting performance in a given target domain. To justify the appropriateness of score-based inferences and actions, test developers need to provide evidence that the test is valid and reliable through a process of test validation. However, there is little evidence that test qualities are systematically evaluated in interpreter certification testing. In an attempt to address this problem, this paper proposes a theoretical argument-based validation framework for interpreter certification performance tests so as to guide testers in carrying out systematic validation research. Before presenting the framework, validity theory is reviewed, and an examination of the argument-based approach to validation is provided. A validity argument for interpreter tests is then proposed, with hypothesized validity evidence. Examples of evidence are drawn from relevant empirical work, where available. Gaps in the available evidence are highlighted and suggestions for research are made.
| Original language | English |
| --- | --- |
| Pages (from-to) | 231-258 |
| Number of pages | 28 |
| Journal | Interpreting |
| Volume | 18 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 2016 |
Keywords
- Argument-based approach
- Interpreter certification
- Performance testing
- Validation
- Validity
- Validity argument