Abstract
Relation extraction models based on deep learning have been attracting a lot of attention recently, yet little research has been carried out on reducing their need for labeled training data. In this work, we propose an unsupervised pre-training method based on the sequence-to-sequence model for deep relation extraction models. The pre-trained models need only half as much training data, or even less, to achieve performance equivalent to the same models without pre-training.
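The abstract gives no implementation details, so the following is only a minimal sketch of the general recipe it describes: pre-train a sequence-to-sequence model on unlabeled sentences, then reuse its encoder inside a relation classifier that is fine-tuned on labeled data. The autoencoding objective, layer sizes, and all names below are assumptions for illustration, not the authors' code.

```python
# Minimal sketch (assumptions, not the paper's implementation):
# 1) pre-train a seq2seq autoencoder on unlabeled sentences,
# 2) reuse the pre-trained encoder in a relation classifier.
import torch
import torch.nn as nn

VOCAB, EMB, HID, N_REL = 1000, 64, 128, 5   # hypothetical sizes

class Seq2SeqAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.LSTM(EMB, HID, batch_first=True)
        self.decoder = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, tokens):
        emb = self.embed(tokens)
        _, state = self.encoder(emb)            # encode the sentence
        dec_out, _ = self.decoder(emb, state)   # decode (reconstruct) it
        return self.out(dec_out)                # per-token vocabulary logits

class RelationClassifier(nn.Module):
    def __init__(self, pretrained_embed, pretrained_encoder):
        super().__init__()
        self.embed = pretrained_embed           # warm-start from pre-training
        self.encoder = pretrained_encoder
        self.cls = nn.Linear(HID, N_REL)

    def forward(self, tokens):
        emb = self.embed(tokens)
        _, (h, _) = self.encoder(emb)
        return self.cls(h[-1])                  # relation logits from final state

# Unsupervised pre-training step: reconstruct unlabeled sentences.
ae = Seq2SeqAutoencoder()
opt = torch.optim.Adam(ae.parameters())
unlabeled = torch.randint(0, VOCAB, (32, 20))   # dummy unlabeled batch
logits = ae(unlabeled)
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB), unlabeled.reshape(-1))
loss.backward()
opt.step()

# Supervised fine-tuning step: train the classifier on (fewer) labeled examples.
clf = RelationClassifier(ae.embed, ae.encoder)
labels = torch.randint(0, N_REL, (32,))         # dummy relation labels
clf_loss = nn.functional.cross_entropy(clf(unlabeled), labels)
```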
Original language | English |
---|---|
Title of host publication | Proceedings of the Australasian Language Technology Association Workshop 2016 |
Editors | Trevor Cohn |
Place of Publication | Melbourne |
Publisher | Australasian Language Technology Association |
Pages | 54-64 |
Number of pages | 11 |
Publication status | Published - 2016 |
Event | 2016 Australasian Language Technology Association Workshop, ALTA 2016 - Caulfield, Australia. Duration: 5 Dec 2016 → 7 Dec 2016 |
Conference
Conference | 2016 Australasian Language Technology Association Workshop, ALTA 2016 |
---|---|
Country/Territory | Australia |
City | Caulfield |
Period | 5/12/16 → 7/12/16 |