TY - JOUR
T1 - Validation of a locally created and rated writing test used for placement in a higher education EFL program
AU - Johnson, Robert C.
AU - Riazi, A. Mehdi
PY - 2017/4
Y1 - 2017/4
N2 - This paper reports a study conducted to validate a locally created and rated writing test. The test was used to inform a higher education institution's decisions regarding placement of entering students into appropriate preparatory English program courses. An amalgam of two influential models – Kane's (1992, 1994) interpretive model and Bachman's (2005) and Bachman and Palmer's (2010) assessment use argument – was used to build a validation framework. A mixed methods approach incorporating a diverse array of quantitative and qualitative data from various stakeholders, including examinees, students, instructors, staff, and administrators, guided the collection and analysis of evidence informing the validation. Results raised serious doubts about the writing test, not only in terms of interpreted score meaning but also in terms of the impact of its use on various stakeholders and on teaching and learning. The study reinforces the importance of comprehensive validation efforts, particularly by test users, for all instruments informing decisions about test-takers, including writing tests and other types of direct performance assessment. Results informed a number of suggested changes regarding the rubric and rater training, among other areas, thus demonstrating the potential of validation studies as ‘road maps’ for immediate opportunities to improve both testing and the decisions based on test results.
KW - language testing
KW - placement testing
KW - test validity
KW - argument-based validity
UR - http://www.scopus.com/inward/record.url?scp=85000925924&partnerID=8YFLogxK
U2 - 10.1016/j.asw.2016.09.002
DO - 10.1016/j.asw.2016.09.002
M3 - Article
AN - SCOPUS:85000925924
SN - 1075-2935
VL - 32
SP - 85
EP - 104
JO - Assessing Writing
JF - Assessing Writing
ER -