Test validation in interpreter certification performance testing: An argument-based approach

Chao Han, Helen Slatyer

Research output: Contribution to journal › Article › Research › peer-review

Abstract

Over the past decade, interpreter certification performance testing has gained momentum. Certification tests often involve high stakes, since they can play an important role in regulating access to professional practice and serve to provide a measure of professional competence for end users. The decision to award certification is based on inferences from candidates' test scores about their knowledge, skills and abilities, as well as their interpreting performance in a given target domain. To justify the appropriateness of score-based inferences and actions, test developers need to provide evidence that the test is valid and reliable through a process of test validation. However, there is little evidence that test qualities are systematically evaluated in interpreter certification testing. In an attempt to address this problem, this paper proposes a theoretical argument-based validation framework for interpreter certification performance tests so as to guide testers in carrying out systematic validation research. Before presenting the framework, validity theory is reviewed, and an examination of the argument-based approach to validation is provided. A validity argument for interpreter tests is then proposed, with hypothesized validity evidence. Examples of evidence are drawn from relevant empirical work, where available. Gaps in the available evidence are highlighted and suggestions for research are made.

Language: English
Pages: 231-258
Number of pages: 28
Journal: Interpreting
Volume: 18
Issue number: 2
DOIs: 10.1075/intp.18.2.04han
Publication status: Published - 2016

Keywords

  • Argument-based approach
  • Interpreter certification
  • Performance testing
  • Validation
  • Validity
  • Validity argument

Cite this

@article{696441cadf964714b8b1747d3fc550c2,
title = "Test validation in interpreter certification performance testing: An argument-based approach",
abstract = "Over the past decade, interpreter certification performance testing has gained momentum. Certification tests often involve high stakes, since they can play an important role in regulating access to professional practice and serve to provide a measure of professional competence for end users. The decision to award certification is based on inferences from candidates' test scores about their knowledge, skills and abilities, as well as their interpreting performance in a given target domain. To justify the appropriateness of score-based inferences and actions, test developers need to provide evidence that the test is valid and reliable through a process of test validation. However, there is little evidence that test qualities are systematically evaluated in interpreter certification testing. In an attempt to address this problem, this paper proposes a theoretical argument-based validation framework for interpreter certification performance tests so as to guide testers in carrying out systematic validation research. Before presenting the framework, validity theory is reviewed, and an examination of the argument-based approach to validation is provided. A validity argument for interpreter tests is then proposed, with hypothesized validity evidence. Examples of evidence are drawn from relevant empirical work, where available. Gaps in the available evidence are highlighted and suggestions for research are made.",
keywords = "Argument-based approach, Interpreter certification, Performance testing, Validation, Validity, Validity argument",
author = "Chao Han and Helen Slatyer",
year = "2016",
doi = "10.1075/intp.18.2.04han",
language = "English",
volume = "18",
pages = "231--258",
journal = "Interpreting",
issn = "1384-6647",
publisher = "John Benjamins Publishing",
number = "2",

}

Test validation in interpreter certification performance testing: An argument-based approach. / Han, Chao; Slatyer, Helen.

In: Interpreting, Vol. 18, No. 2, 2016, p. 231-258.

TY - JOUR
T1 - Test validation in interpreter certification performance testing: An argument-based approach
T2 - Interpreting
AU - Han, Chao
AU - Slatyer, Helen
PY - 2016
Y1 - 2016
N2 - Over the past decade, interpreter certification performance testing has gained momentum. Certification tests often involve high stakes, since they can play an important role in regulating access to professional practice and serve to provide a measure of professional competence for end users. The decision to award certification is based on inferences from candidates' test scores about their knowledge, skills and abilities, as well as their interpreting performance in a given target domain. To justify the appropriateness of score-based inferences and actions, test developers need to provide evidence that the test is valid and reliable through a process of test validation. However, there is little evidence that test qualities are systematically evaluated in interpreter certification testing. In an attempt to address this problem, this paper proposes a theoretical argument-based validation framework for interpreter certification performance tests so as to guide testers in carrying out systematic validation research. Before presenting the framework, validity theory is reviewed, and an examination of the argument-based approach to validation is provided. A validity argument for interpreter tests is then proposed, with hypothesized validity evidence. Examples of evidence are drawn from relevant empirical work, where available. Gaps in the available evidence are highlighted and suggestions for research are made.
AB - Over the past decade, interpreter certification performance testing has gained momentum. Certification tests often involve high stakes, since they can play an important role in regulating access to professional practice and serve to provide a measure of professional competence for end users. The decision to award certification is based on inferences from candidates' test scores about their knowledge, skills and abilities, as well as their interpreting performance in a given target domain. To justify the appropriateness of score-based inferences and actions, test developers need to provide evidence that the test is valid and reliable through a process of test validation. However, there is little evidence that test qualities are systematically evaluated in interpreter certification testing. In an attempt to address this problem, this paper proposes a theoretical argument-based validation framework for interpreter certification performance tests so as to guide testers in carrying out systematic validation research. Before presenting the framework, validity theory is reviewed, and an examination of the argument-based approach to validation is provided. A validity argument for interpreter tests is then proposed, with hypothesized validity evidence. Examples of evidence are drawn from relevant empirical work, where available. Gaps in the available evidence are highlighted and suggestions for research are made.
KW - Argument-based approach
KW - Interpreter certification
KW - Performance testing
KW - Validation
KW - Validity
KW - Validity argument
UR - http://www.scopus.com/inward/record.url?scp=84992725841&partnerID=8YFLogxK
U2 - 10.1075/intp.18.2.04han
DO - 10.1075/intp.18.2.04han
M3 - Article
VL - 18
SP - 231
EP - 258
JO - Interpreting
JF - Interpreting
SN - 1384-6647
IS - 2
ER -