The practice of Evidence-Based Medicine requires practitioners to extract evidence from the published medical literature and to grade that evidence by quality. With the goal of automating this time-consuming grading process, we assess the effect of several factors on the evidence grade: the publication types of the individual articles, publication years, journal information, and article titles. We model evidence grading as a supervised classification problem and show, using several machine learning algorithms, that publication types alone as features yield an accuracy close to 70%. We also show that the other factors have no notable effect on the evidence grades.
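The publication-type-only classifier described above can be sketched as a simple supervised model over MEDLINE-style publication types. The records and A/B/C grade labels below are invented for illustration; the paper's actual dataset, grading scheme, and learning algorithms are not reproduced here, and a count-based naive Bayes is used only as one plausible stand-in.

```python
import math
from collections import Counter, defaultdict

# Hypothetical toy data: each article is represented by its set of
# publication types; the label is an evidence grade (A/B/C).
# These examples are invented for illustration only.
train = [
    ({"Randomized Controlled Trial", "Multicenter Study"}, "A"),
    ({"Randomized Controlled Trial"}, "A"),
    ({"Meta-Analysis", "Review"}, "A"),
    ({"Clinical Trial"}, "B"),
    ({"Comparative Study", "Clinical Trial"}, "B"),
    ({"Case Reports"}, "C"),
    ({"Review"}, "C"),
]

def train_nb(data):
    """Fit an add-one-smoothed naive Bayes over publication-type features."""
    grade_counts = Counter()
    feat_counts = defaultdict(Counter)
    vocab = set()
    for feats, grade in data:
        grade_counts[grade] += 1
        for f in feats:
            feat_counts[grade][f] += 1
            vocab.add(f)
    return grade_counts, feat_counts, vocab

def predict(model, feats):
    """Return the grade with the highest smoothed log-probability."""
    grade_counts, feat_counts, vocab = model
    total = sum(grade_counts.values())
    best, best_score = None, float("-inf")
    for grade, gc in grade_counts.items():
        score = math.log(gc / total)  # class prior
        denom = sum(feat_counts[grade].values()) + len(vocab)
        for f in feats:
            score += math.log((feat_counts[grade][f] + 1) / denom)
        if score > best_score:
            best, best_score = grade, score
    return best

model = train_nb(train)
print(predict(model, {"Randomized Controlled Trial"}))  # → A
print(predict(model, {"Case Reports"}))                 # → C
```

On this toy data the model recovers the intuition behind the paper's result: strong study designs (randomized trials, meta-analyses) map to higher grades, while case reports and narrative reviews map to lower ones.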
Number of pages: 8
Journal: CEUR Workshop Proceedings
Publication status: Published - 2010