Abstract
The emergence of generative AI tools has prompted educators to reconsider traditional approaches to language assessment design. In response, this presentation focuses on innovative assessment strategies implemented to enhance students' AI literacy and critical thinking skills by incorporating metalanguage criteria into rubrics and assessment tasks. Metalanguage plays a crucial role in the language learning process: it comes into play when learners are asked to justify why something is, or is not, correct. Aligned with the notion that assessment tasks should foster critical thinking and provide meaningful feedback (Assessment Reform for the Age of Artificial Intelligence, 2023), Modern Greek language assessment rubrics were revised to explicitly include metalanguage criteria (Harun et al., 2017).
Through re-designed online language assessment tasks, students were challenged to articulate and justify their grammatical and syntactical choices using their metalanguage knowledge, evaluate AI-generated content (Kohnke, Moorhouse, & Zou, 2023), and identify ChatGPT’s verbose, redundant, and incorrect answers (Bowman, 2022). The presenter shares insights from testing these exercises with ChatGPT, revealing the limitations of AI-generated explanations.
Critical thinking and evaluation skills are far more important now than they were before ChatGPT. We can easily imagine a time when AI tools produce the ‘first draft’ of many things, and the ‘added value’ of humans lies in evaluating the output and improving on it. This presentation shares practical examples of redesigned rubrics and language assessment tasks that aim to empower students to engage with metalanguage and critically evaluate AI-generated content.
| Original language | English |
|---|---|
| Pages | 28-29 |
| Number of pages | 2 |
| Publication status | Published - 27 Nov 2024 |
| Event | LCNAU Biennial Colloquium (8th: 2024): Trans/Formation: research and education in languages and cultures, University of Sydney, Sydney, Australia. Duration: 27 Nov 2024 → 29 Nov 2024. https://www.sydney.edu.au/arts/news-and-events/events/lcnau-2024-colloquium.html |
Conference
| Conference | LCNAU Biennial Colloquium (8th : 2024) |
|---|---|
| Country/Territory | Australia |
| City | Sydney |
| Period | 27/11/24 → 29/11/24 |
| Internet address | https://www.sydney.edu.au/arts/news-and-events/events/lcnau-2024-colloquium.html |
Adapting metalanguage assessment practices to foster critical thinking in language learning in the AI era
Related activities

ARTS AFTER DARK | Empowering education through AI: Revolution or evolution?
Koromvokis, P. (Participant)
30 Jul 2025. Activity: Participating in or organising an event › Participating in a conference, workshop or event series

I ask students ‘why?’ (more often) in the era of ChatGPT (blog article on TECHE)
Koromvokis, P. (Other)
5 Mar 2024. Activity: Other