Contextual dependencies in unsupervised word segmentation

Sharon Goldwater*, Thomas L. Griffiths, Mark Johnson

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

151 Citations (Scopus)

Abstract

Developing better methods for segmenting continuous text into words is important for improving the processing of Asian languages, and may shed light on how humans learn to segment speech. We propose two new Bayesian word segmentation methods that assume unigram and bigram models of word dependencies, respectively. The bigram model greatly outperforms the unigram model (and previous probabilistic models), demonstrating the importance of such dependencies for word segmentation. We also show that previous probabilistic models rely crucially on suboptimal search procedures.
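The paper's actual models place Dirichlet-process priors over the lexicon and are fit by Gibbs sampling, which does not reduce to a few lines of code. As a rough, self-contained illustration of the unigram assumption the abstract refers to, the sketch below segments an unsegmented string by dynamic programming under a fixed unigram lexicon. The lexicon, its probabilities, and the function name are invented for this example; they are not from the paper.

```python
# A minimal sketch, NOT the paper's model: Goldwater et al. use
# Dirichlet-process priors with Gibbs sampling. This toy only shows
# the unigram independence assumption: every word is scored in
# isolation, regardless of the words around it.

import math

# Hypothetical unigram lexicon P(word); all values are made up.
UNIGRAM = {
    "you": 0.08, "want": 0.05, "to": 0.10, "see": 0.04,
    "the": 0.12, "book": 0.03, "yu": 0.001, "wantto": 0.0005,
}

def segment_unigram(s, lexicon, max_word_len=6):
    """Return the highest-probability segmentation of s, assuming each
    word is drawn independently from the lexicon (Viterbi search over
    word-boundary positions)."""
    n = len(s)
    # best[i] = (log prob of best segmentation of s[:i], backpointer)
    best = [(-math.inf, None)] * (n + 1)
    best[0] = (0.0, None)
    for i in range(1, n + 1):
        for j in range(max(0, i - max_word_len), i):
            word = s[j:i]
            if word in lexicon and best[j][0] > -math.inf:
                score = best[j][0] + math.log(lexicon[word])
                if score > best[i][0]:
                    best[i] = (score, j)
    if best[n][0] == -math.inf:
        return None  # no segmentation covers the whole string
    # Recover the segmentation by following backpointers.
    words, i = [], n
    while i > 0:
        j = best[i][1]
        words.append(s[j:i])
        i = j
    return list(reversed(words))

print(segment_unigram("youwanttoseethebook", UNIGRAM))
# -> ['you', 'want', 'to', 'see', 'the', 'book']
```

The bigram model the abstract credits with the large improvement would instead score each word conditioned on its predecessor, P(w_i | w_{i-1}) rather than P(w_i), so that context can disambiguate competing segmentations; that contextual dependence is exactly what the unigram sketch above cannot capture.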

Original language: English
Title of host publication: Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics
Place of publication: Stroudsburg, PA
Publisher: Association for Computational Linguistics (ACL)
Pages: 673-680
Number of pages: 8
Volume: 1
ISBN (Print): 1932432655, 9781932432657
DOIs
Publication status: Published - Jul 2006
Externally published: Yes
Event: 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, COLING/ACL 2006 - Sydney, Australia
Duration: 17 Jul 2006 - 21 Jul 2006

