Standardized testing is ubiquitous in educational assessment, but questions have been raised about the extent to which test scores accurately reflect students' genuine knowledge and skills. To investigate this issue more rigorously, the current study employed a within-subject experimental design to examine item format effects on primary school students' standardized assessment results in literacy, reading comprehension, and numeracy. Eighty-nine Grade 3 students (ages 8-9 years) completed tests that varied only in item format: multiple-choice; open-response; error detection and correction; explain; and, for numeracy questions, low-literacy. Analyses contrasted students' performance across these conditions, along with item difficulty and ability discrimination estimates derived from item response theory. Findings revealed that difficulty increased and accuracy decreased from multiple-choice to open-response to error-correction and explain questions. However, the most difficult item formats tended to yield the greatest discrimination across student ability levels. Contrary to previous findings, low-literacy numeracy questions did not improve student performance or reduce item difficulty. Overall, findings indicated the impact of differing assessment methods on standardized test performance and highlighted the need for careful consideration not only of the content of assessments but also of their item formats.