Abstract
Ensemble techniques are powerful approaches that combine several weak learners into a stronger one. As a meta-learning framework, ensemble techniques can easily be applied to many machine learning methods. In this paper we propose a neural network extended with an ensemble loss function for text classification. The weight of each weak loss function is tuned during the training phase through the gradient-based optimization of the neural network. The approach is evaluated on several text classification datasets. We also evaluate its performance in settings with varying degrees of label noise. Experimental results show improved accuracy and strong resilience to label noise in comparison with other methods.
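The core idea — a weighted combination of several weak loss functions whose weights are themselves learned by gradient descent alongside the model — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear classifier, the choice of cross-entropy and multi-class hinge as the weak losses, and the softmax parameterisation of the ensemble weights are all assumptions for the sake of a runnable example.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for a text-classification dataset:
# 64 documents as 20-dim feature vectors, 3 classes.
X = torch.randn(64, 20)
y = torch.randint(0, 3, (64,))

# A linear classifier stands in for the paper's neural network.
W = torch.zeros(20, 3, requires_grad=True)
b = torch.zeros(3, requires_grad=True)

# Learnable logits for the ensemble weights; a softmax keeps the
# weights positive and summing to one (one plausible parameterisation).
alpha = torch.zeros(2, requires_grad=True)

# Two example weak losses: cross-entropy and multi-class hinge.
weak_losses = [F.cross_entropy, F.multi_margin_loss]

# The ensemble weights are updated by the same optimizer as the
# model parameters, i.e. tuned within the training phase.
opt = torch.optim.SGD([W, b, alpha], lr=0.5)
for _ in range(200):
    opt.zero_grad()
    logits = X @ W + b
    w = torch.softmax(alpha, dim=0)
    loss = sum(wi * fn(logits, y) for wi, fn in zip(w, weak_losses))
    loss.backward()
    opt.step()

final_weights = torch.softmax(alpha, dim=0).detach()
train_acc = (logits.argmax(dim=1) == y).float().mean().item()
```

After training, `final_weights` reflects how much credit the optimizer assigned to each weak loss on this data; because the weights sit on the simplex, the ensemble loss stays a convex combination of its components throughout training.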
| Original language | English |
|---|---|
| Title of host publication | Australasian Language Technology Association Workshop 2017 |
| Subtitle of host publication | Proceedings of the Workshop |
| Editors | Jojo Sze-Meng Wong, Gholamreza Haffari |
| Place of Publication | Stroudsburg, PA |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 98-102 |
| Number of pages | 5 |
| Publication status | Published - 2017 |
| Event | Australasian Language Technology Association Workshop 2017, Brisbane, Australia; 6 Dec 2017 → 8 Dec 2017 |
Conference

| Conference | Australasian Language Technology Association Workshop 2017 |
|---|---|
| Country/Territory | Australia |
| City | Brisbane |
| Period | 6/12/17 → 8/12/17 |