On extending neural networks with loss ensembles for text classification

Hamideh Hajiabadi, Diego Molla, Reza Monsefi

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review


Abstract

Ensemble techniques are powerful approaches that combine several weak learners to build a stronger one. As a meta-learning framework, ensemble techniques can easily be applied to many machine learning methods. In this paper, we propose a neural network extended with an ensemble loss function for text classification. The weight of each weak loss function is tuned during the training phase through the network's gradient-based optimization. The approach is evaluated on several text classification datasets, and its performance is further assessed under varying degrees of label noise. Experimental results show improved accuracy and strong resilience to label noise in comparison with other methods.
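The core idea in the abstract — a weighted combination of weak loss functions whose weights are learned by gradient descent alongside the model — can be sketched as follows. This is not the authors' exact formulation; it is a minimal numpy illustration that assumes the weak losses are standard regression losses and parameterizes the ensemble weights with a softmax so they stay positive and sum to one. Gradients are taken numerically for brevity.

```python
import numpy as np

# Three illustrative "weak" loss functions (assumed, not from the paper).
def squared_loss(y, t):
    return (y - t) ** 2

def absolute_loss(y, t):
    return abs(y - t)

def huber_loss(y, t, delta=1.0):
    r = abs(y - t)
    return 0.5 * r * r if r <= delta else delta * (r - 0.5 * delta)

LOSSES = [squared_loss, absolute_loss, huber_loss]

def ensemble_loss(y, t, alpha):
    """Convex combination of weak losses; alpha are unnormalized weights."""
    w = np.exp(alpha) / np.exp(alpha).sum()  # softmax -> positive, sums to 1
    return sum(wi * L(y, t) for wi, L in zip(w, LOSSES)), w

def numeric_grad(f, x, eps=1e-5):
    """Central-difference gradient of scalar f at vector x."""
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        d = np.zeros_like(x, dtype=float)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

# Toy training loop: a one-parameter model y = a*x fit to a single point,
# with the model parameter and the loss-ensemble weights updated jointly,
# mirroring the idea of tuning loss weights within the training phase.
a = 0.0                        # model parameter
alpha = np.zeros(len(LOSSES))  # ensemble weights start uniform
x, t = 2.0, 3.0                # training point: want a*x close to t
lr = 0.05
for _ in range(200):
    loss_wrt_a = lambda p: ensemble_loss(p[0] * x, t, alpha)[0]
    loss_wrt_alpha = lambda al: ensemble_loss(a * x, t, al)[0]
    a -= lr * numeric_grad(loss_wrt_a, np.array([a]))[0]
    alpha -= lr * numeric_grad(loss_wrt_alpha, alpha)

final_loss, weights = ensemble_loss(a * x, t, alpha)
```

In a real neural network the same combined loss would simply be backpropagated, so the ensemble weights receive gradients through the same optimizer step as the network parameters.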
Original language: English
Title of host publication: Australasian Language Technology Association Workshop 2017
Subtitle of host publication: Proceedings of the Workshop
Editors: Jojo Sze-Meng Wong, Gholamreza Haffari
Place of Publication: Stroudsburg, PA
Publisher: Association for Computational Linguistics (ACL)
Pages: 98-102
Number of pages: 5
Publication status: Published - 2017
Event: Australasian Language Technology Association Workshop 2017 - Brisbane, Australia
Duration: 6 Dec 2017 - 8 Dec 2017

Conference

Conference: Australasian Language Technology Association Workshop 2017
Country/Territory: Australia
City: Brisbane
Period: 6/12/17 - 8/12/17
