A Mixture Model for Learning Multi-Sense Word Embeddings

Dai Quoc Nguyen, Dat Quoc Nguyen, Ashutosh Modi, Stefan Thater, Manfred Pinkal

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution

4 Citations (Scopus)

Abstract

Word embeddings are now a standard technique for inducing meaning representations for words. To obtain good representations, it is important to take into account the different senses of a word. In this paper, we propose a mixture model for learning multi-sense word embeddings. Our model generalizes previous work in that it allows inducing different weights for the different senses of a word. Experimental results show that our model outperforms previous models on standard evaluation tasks.
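To make the mixture idea concrete, below is a minimal sketch of computing a word's representation as a weighted mixture of per-word sense vectors, with the sense weights induced from the surrounding context. The variable names, the softmax weighting function, and the averaged-context scoring are illustrative assumptions for this sketch, not the paper's exact parameterization.

```python
# Minimal sketch (not the authors' implementation): each word has K sense
# vectors, and its representation in a given context is a weighted mixture
# of those senses, with weights computed from the context.
import numpy as np

rng = np.random.default_rng(0)

V, K, D = 1000, 3, 50  # vocabulary size, senses per word, embedding dim

sense_vecs = rng.normal(scale=0.1, size=(V, K, D))  # per-sense word embeddings
context_vecs = rng.normal(scale=0.1, size=(V, D))   # context embeddings

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mixture_embedding(word_id, context_ids):
    """Mixture-of-senses representation of `word_id` given its context.

    Sense weights are a softmax over the similarity between each sense
    vector and the averaged context embedding -- one simple choice of
    weighting function, assumed here for illustration.
    """
    ctx = context_vecs[context_ids].mean(axis=0)   # (D,) averaged context
    scores = sense_vecs[word_id] @ ctx             # (K,) per-sense scores
    weights = softmax(scores)                      # (K,) sense mixture weights
    return weights @ sense_vecs[word_id], weights  # (D,) mixture embedding

# Example: the representation of word 7 in a 4-word context window.
emb, w = mixture_embedding(7, [12, 99, 400, 2])
print("sense weights:", np.round(w, 3))  # induced per-sense weights
print("embedding shape:", emb.shape)     # (50,)
```

Under this setup, a sense that fits the context receives a larger weight, so the same word gets different representations in different contexts; the weights themselves would be shaped by training rather than fixed by hand.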
Original language: English
Title of host publication: *SEM 2017
Subtitle of host publication: Proceedings of the 6th Joint Conference on Lexical and Computational Semantics
Publisher: Association for Computational Linguistics (ACL)
Pages: 121-127
Number of pages: 7
ISBN (Electronic): 9781945626531
ISBN (Print): 9781945626531
DOIs: 10.18653/v1/S17-1015
Publication status: Published - 2017
Event: 6th Joint Conference on Lexical and Computational Semantics - Vancouver
Duration: 3 Aug 2017 - 4 Aug 2017

Conference

Conference: 6th Joint Conference on Lexical and Computational Semantics
Abbreviated title: *SEM 2017
City: Vancouver
Period: 3/08/17 - 4/08/17

Cite this

Nguyen, D. Q., Nguyen, D. Q., Modi, A., Thater, S., & Pinkal, M. (2017). A mixture model for learning multi-sense word embeddings. In *SEM 2017: Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (pp. 121-127). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/S17-1015