Abstract
Every teacher of logic knows that the ease with which a student can translate a natural language sentence into formal logic depends, amongst other things, on just how that natural language sentence is phrased. This paper reports findings from a pilot study of a large-scale corpus in the area of formal logic education, in which we used a very large dataset to provide empirical evidence for specific characteristics of natural language problem statements that frequently lead students to make mistakes. We developed a rich taxonomy of the types of errors that students make, and implemented tools for automatically classifying student errors into these categories. In this paper, we focus on three specific phenomena that were prevalent in our data: students were found (a) to have particular difficulty distinguishing the conditional from the biconditional, (b) to be sensitive to word-order effects during translation, and (c) to be sensitive to factors associated with the naming of constants. We conclude by considering the implications of this kind of large-scale empirical study for improving an automated assessment system specifically, and for logic teaching more generally.
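To make phenomenon (a) concrete, the sketch below shows the standard textbook translations of three superficially similar English phrasings; the sentences and symbols are illustrative examples chosen here, not items drawn from the paper's corpus:

```latex
% Illustrative only -- standard translations, not examples from the study's data.
% Requires amsmath for \text{}.
%   "P if Q"              : Q is sufficient for P
%   "P only if Q"         : Q is necessary for P
%   "P if and only if Q"  : the biconditional
\[
\begin{array}{lll}
  \text{``}P \text{ if } Q\text{''}             & : & Q \rightarrow P \\
  \text{``}P \text{ only if } Q\text{''}        & : & P \rightarrow Q \\
  \text{``}P \text{ if and only if } Q\text{''} & : & P \leftrightarrow Q
\end{array}
\]
```

A student who writes $P \rightarrow Q$ for "$P$ if $Q$" has reversed antecedent and consequent, while one who writes $P \leftrightarrow Q$ has strengthened the conditional into a biconditional, the kind of confusion the abstract reports as prevalent.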
| Original language | English |
| --- | --- |
| Title of host publication | CogSci 2008 |
| Subtitle of host publication | Proceedings of the 30th Annual Meeting of the Cognitive Science Society |
| Place of Publication | Austin |
| Publisher | Cognitive Science Society |
| Pages | 505-510 |
| Number of pages | 6 |
| ISBN (Print) | 9780976831846 |
| Publication status | Published - 2008 |
| Event | Annual Conference of the Cognitive Science Society (30th: 2008), Washington, DC. Duration: 23 Jul 2008 → 26 Jul 2008 |
Conference
| Conference | Annual Conference of the Cognitive Science Society (30th: 2008) |
| --- | --- |
| City | Washington, DC |
| Period | 23/07/08 → 26/07/08 |
Keywords
- errors
- slips
- Proof & Logic
- misconceptions
- natural language
- e-learning
- human reasoning
- automated assessment
- educational data mining
- first-order logic
- propositional logic