Unsupervised pre-training with sequence reconstruction loss for deep relation extraction models

Zhuang Li, Lizhen Qu, Qiongkai Xu, Mark Johnson

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

10 Citations (Scopus)

Abstract

Relation extraction models based on deep learning have attracted a lot of attention recently, yet little research has been carried out on reducing their need for labeled training data. In this work, we propose an unsupervised pre-training method based on the sequence-to-sequence model for deep relation extraction models. The pre-trained models need only half, or even less, of the training data to achieve performance equivalent to that of the same models without pre-training.
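The abstract describes pre-training with a sequence reconstruction loss before fine-tuning on relation extraction. Below is a minimal illustrative sketch of that general idea, not the paper's actual architecture: a sequence-to-sequence autoencoder is trained to reconstruct its input, and its encoder is then reused by a relation classifier. All sizes, names (e.g. `RelationClassifier`), and hyperparameters are hypothetical.

```python
# Illustrative sketch only: a seq2seq autoencoder pre-trained with a
# reconstruction loss, whose encoder is reused for relation classification.
# Layer sizes and the downstream head are hypothetical assumptions.
import torch
import torch.nn as nn

VOCAB_SIZE, EMB_DIM, HID_DIM, NUM_RELATIONS = 10_000, 128, 256, 19

class Seq2SeqAutoencoder(nn.Module):
    """Encode a token sequence and decode (reconstruct) the same sequence."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        self.encoder = nn.LSTM(EMB_DIM, HID_DIM, batch_first=True)
        self.decoder = nn.LSTM(EMB_DIM, HID_DIM, batch_first=True)
        self.out = nn.Linear(HID_DIM, VOCAB_SIZE)

    def forward(self, tokens):
        emb = self.embed(tokens)                 # (B, T, E)
        _, (h, c) = self.encoder(emb)            # final encoder state
        dec_out, _ = self.decoder(emb, (h, c))   # teacher-forced decoding
        return self.out(dec_out)                 # (B, T, V) logits

def pretrain_step(model, tokens, optimizer):
    """One unsupervised step: reconstruct the input sequence from itself."""
    logits = model(tokens)
    # Reconstruction loss: predict every input token from the decoder output.
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB_SIZE), tokens.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

class RelationClassifier(nn.Module):
    """Hypothetical downstream model that reuses the pre-trained encoder."""
    def __init__(self, pretrained: Seq2SeqAutoencoder):
        super().__init__()
        self.embed = pretrained.embed        # initialised from pre-training
        self.encoder = pretrained.encoder    # initialised from pre-training
        self.head = nn.Linear(HID_DIM, NUM_RELATIONS)

    def forward(self, tokens):
        emb = self.embed(tokens)
        _, (h, _) = self.encoder(emb)
        return self.head(h[-1])              # relation logits per sentence

if __name__ == "__main__":
    autoencoder = Seq2SeqAutoencoder()
    opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
    batch = torch.randint(0, VOCAB_SIZE, (4, 12))   # toy unlabeled batch
    print("reconstruction loss:", pretrain_step(autoencoder, batch, opt))
    clf = RelationClassifier(autoencoder)           # fine-tune on labeled data
    print("relation logits:", clf(batch).shape)
```

In this sketch, pre-training needs only unlabeled sentences; the labeled relation data is used afterwards to fine-tune the classifier that inherits the pre-trained embedding and encoder weights, which is the mechanism the abstract credits for the reduced labeled-data requirement.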

Original language: English
Title of host publication: Proceedings of the Australasian Language Technology Association Workshop 2016
Editors: Trevor Cohn
Place of Publication: Melbourne
Publisher: Australasian Language Technology Association
Pages: 54-64
Number of pages: 11
Publication status: Published - 2016
Event: 2016 Australasian Language Technology Association Workshop, ALTA 2016 - Caulfield, Australia
Duration: 5 Dec 2016 - 7 Dec 2016

Conference

Conference: 2016 Australasian Language Technology Association Workshop, ALTA 2016
Country/Territory: Australia
City: Caulfield
Period: 5/12/16 - 7/12/16
