Readability of texts: Human evaluation versus computer index

Pooneh Heydari, A. Mehdi Riazi

Research output: Contribution to journal › Article › Research › peer-review

Abstract

This paper reports a study that explored whether EFL expert readers' evaluations of English text difficulty differ from computer-based evaluations. Forty-three participants, including university EFL instructors and graduate students, read 10 English passages and completed a Likert-type scale on their perception of the different components of text difficulty. The same 10 texts were then processed in Microsoft Word, and the Flesch Readability index of each text was calculated. Comparisons were made to see whether the readers' evaluations matched the computed indices. Results revealed significant differences between participants' evaluations of text difficulty and the Flesch Readability indices of the texts. Findings also indicated no significant difference between the EFL instructors' and the graduate students' evaluations of text difficulty. The findings imply that while readability formulas are valuable measures of text difficulty, they should be used cautiously. Further research seems necessary to check the validity of readability formulas and the findings of the present study.
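The Flesch Reading Ease index referenced above is a fixed formula over two ratios: average words per sentence and average syllables per word. A minimal sketch of the computation is shown below; note that the syllable counter is a crude vowel-group heuristic for illustration only, not the dictionary-based counting that tools such as Microsoft Word use, so scores on real texts will differ somewhat.

```python
import re

def count_syllables(word):
    # Crude heuristic: one syllable per run of consecutive vowels
    # (including y), with a floor of one syllable per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease:
    206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
    Higher scores indicate easier text (roughly 0-100 for typical prose)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

print(flesch_reading_ease("The cat sat on the mat."))
```

Because the formula rewards short sentences and short words, a monosyllabic sentence like the one above scores far higher than academic prose, which is exactly the kind of surface-level judgment the study compares against human readers' perceptions.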

Language: English
Pages: 177-190
Number of pages: 14
Journal: Mediterranean Journal of Social Sciences
Volume: 3
Issue number: 1
DOIs: 10.5901/mjss.2012.03.01.177
Publication status: Published - Jan 2012


Cite this

@article{c01b17b3ec5041b4b4f07486d9e4b5fd,
title = "Readability of texts: Human evaluation versus computer index",
abstract = "This paper reports a study which aimed at exploring if there is any difference between the evaluation of EFL expert readers and computer-based evaluation of English text difficulty. 43 participants including university EFL instructors and graduate students read 10 different English passages and completed a Likert-type scale on their perception of the different components of text difficulty. On the other hand, the same 10 English texts were fed into Word Program and Flesch Readability index of the texts were calculated. Then comparisons were made to see if readers' evaluation of texts were the same or different from the calculated ones. Results of the study revealed significant differences between participants' evaluation of text difficulty and the Flesch Readability index of the texts. Findings also indicated that there was no significant difference between EFL instructors and graduate students' evaluation of the text difficulty. The findings of the study imply that while readability formulas are valuable measures for evaluating level of text difficulty, they should be used cautiously. Further research seems necessary to check the validity of the readability formulas and the findings of the present study.",
author = "Pooneh Heydari and {Mehdi Riazi}, A.",
year = "2012",
month = jan,
doi = "10.5901/mjss.2012.03.01.177",
language = "English",
volume = "3",
pages = "177--190",
journal = "Mediterranean Journal of Social Sciences",
issn = "2039-9340",
publisher = "MCSER-Mediterranean Center of Social and Educational Research",
number = "1",

}

Readability of texts: Human evaluation versus computer index. / Heydari, Pooneh; Mehdi Riazi, A.

In: Mediterranean Journal of Social Sciences, Vol. 3, No. 1, 01.2012, p. 177-190.

Research output: Contribution to journal › Article › Research › peer-review

TY - JOUR

T1 - Readability of texts: Human evaluation versus computer index

T2 - Mediterranean Journal of Social Sciences

AU - Heydari, Pooneh

AU - Mehdi Riazi, A.

PY - 2012/1

Y1 - 2012/1

N2 - This paper reports a study which aimed at exploring if there is any difference between the evaluation of EFL expert readers and computer-based evaluation of English text difficulty. 43 participants including university EFL instructors and graduate students read 10 different English passages and completed a Likert-type scale on their perception of the different components of text difficulty. On the other hand, the same 10 English texts were fed into Word Program and Flesch Readability index of the texts were calculated. Then comparisons were made to see if readers' evaluation of texts were the same or different from the calculated ones. Results of the study revealed significant differences between participants' evaluation of text difficulty and the Flesch Readability index of the texts. Findings also indicated that there was no significant difference between EFL instructors and graduate students' evaluation of the text difficulty. The findings of the study imply that while readability formulas are valuable measures for evaluating level of text difficulty, they should be used cautiously. Further research seems necessary to check the validity of the readability formulas and the findings of the present study.

AB - This paper reports a study which aimed at exploring if there is any difference between the evaluation of EFL expert readers and computer-based evaluation of English text difficulty. 43 participants including university EFL instructors and graduate students read 10 different English passages and completed a Likert-type scale on their perception of the different components of text difficulty. On the other hand, the same 10 English texts were fed into Word Program and Flesch Readability index of the texts were calculated. Then comparisons were made to see if readers' evaluation of texts were the same or different from the calculated ones. Results of the study revealed significant differences between participants' evaluation of text difficulty and the Flesch Readability index of the texts. Findings also indicated that there was no significant difference between EFL instructors and graduate students' evaluation of the text difficulty. The findings of the study imply that while readability formulas are valuable measures for evaluating level of text difficulty, they should be used cautiously. Further research seems necessary to check the validity of the readability formulas and the findings of the present study.

UR - http://www.scopus.com/inward/record.url?scp=84892513659&partnerID=8YFLogxK

U2 - 10.5901/mjss.2012.03.01.177

DO - 10.5901/mjss.2012.03.01.177

M3 - Article

VL - 3

SP - 177

EP - 190

JO - Mediterranean Journal of Social Sciences

JF - Mediterranean Journal of Social Sciences

SN - 2039-9340

IS - 1

ER -