Validation of a locally created and rated writing test used for placement in a higher education EFL program

Robert C. Johnson, A. Mehdi Riazi

Research output: Contribution to journal › Article › Research › peer-review

Abstract

This paper reports a study conducted to validate a locally created and rated writing test. The test was used to inform a higher education institution's decisions regarding placement of entering students into appropriate preparatory English program courses. An amalgam of two influential models – Kane's (1992, 1994) interpretive model and Bachman's (2005) and Bachman and Palmer's (2010) assessment use argument – was used to build a validation framework. A mixed methods approach incorporating a diverse array of quantitative and qualitative data from various stakeholders, including examinees, students, instructors, staff, and administrators, guided the collection and analysis of evidence informing the validation. Results established serious doubts about the writing test, not only in terms of interpreted score meaning but also in terms of the impact of its use on various stakeholders and on teaching and learning. The study reinforces the importance of comprehensive validation efforts, particularly by test users, for all instruments informing decisions about test-takers, including writing tests and other types of direct performance assessment. Results informed a number of suggested changes, including revisions to the rubric and to rater training, demonstrating the potential of validation studies as ‘road maps’ to immediate opportunities for improving both testing and the decisions based on it.

Language: English
Pages: 85-104
Number of pages: 20
Journal: Assessing Writing
Volume: 32
DOI: 10.1016/j.asw.2016.09.002
Publication status: Published - Apr 2017

Keywords

  • language testing
  • placement testing
  • test validity
  • argument-based validity

Cite this

@article{a7ae1d2be37b4a66930bfb713c496630,
  title = "Validation of a locally created and rated writing test used for placement in a higher education EFL program",
  author = "Johnson, {Robert C.} and Riazi, {A. Mehdi}",
  keywords = "language testing, placement testing, test validity, argument-based validity",
  year = "2017",
  month = apr,
  doi = "10.1016/j.asw.2016.09.002",
  language = "English",
  volume = "32",
  pages = "85--104",
  journal = "Assessing Writing",
  issn = "1075-2935",
  publisher = "Elsevier",
}

Validation of a locally created and rated writing test used for placement in a higher education EFL program. / Johnson, Robert C.; Riazi, A. Mehdi.

In: Assessing Writing, Vol. 32, 04.2017, p. 85-104.
