A particle filter algorithm for Bayesian word segmentation

Benjamin Borschinger, Mark Johnson

Research output: Contribution to journal › Conference paper › peer-review



Bayesian models are usually learned using batch algorithms that must iterate multiple times over the full dataset. This is both computationally expensive and, from a cognitive point of view, highly implausible. We present a novel online algorithm for the word segmentation models of Goldwater et al. (2009) which is, to our knowledge, the first published Particle Filter for this kind of model. In contrast to other proposed algorithms, it also comes with a theoretical guarantee of optimality as the number of particles goes to infinity. While this is, of course, a theoretical point, a first experimental evaluation of our algorithm shows that, as predicted, its performance improves with the use of more particles, and that it performs competitively with other online learners proposed in Pearl et al. (2011).
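To make the approach concrete: a particle filter maintains a population of weighted hypotheses that is updated one observation at a time, which is what makes it an online alternative to batch learners. Below is a minimal, illustrative sketch of the generic sequential importance resampling (bootstrap filter) loop, applied to a toy problem of estimating a coin's bias online. This is our own hedged illustration under simplifying assumptions; it is not the paper's segmentation-specific algorithm, and all function names (`particle_filter`, `init`, `transition`, `likelihood`) are hypothetical.

```python
import random

def particle_filter(observations, num_particles, init, transition, likelihood):
    """Generic bootstrap particle filter (sequential importance resampling).

    A toy sketch, not the paper's algorithm: each particle is one hypothesis,
    propagated and reweighted as observations arrive one at a time.
    """
    particles = [init() for _ in range(num_particles)]
    for obs in observations:
        # Propagate each particle through the transition/proposal step.
        particles = [transition(p) for p in particles]
        # Reweight each particle by how well it explains the new observation.
        weights = [likelihood(p, obs) for p in particles]
        total = sum(weights)
        if total == 0.0:
            # Degenerate case: fall back to uniform weights.
            weights = [1.0] * num_particles
            total = float(num_particles)
        probs = [w / total for w in weights]
        # Multinomial resampling: high-weight hypotheses are duplicated,
        # low-weight ones tend to die out.
        particles = random.choices(particles, weights=probs, k=num_particles)
    return particles

# Toy usage: infer a coin's heads-probability from a stream of flips.
random.seed(0)
flips = [1, 1, 0, 1, 1, 1, 0, 1]  # 6 heads, 2 tails
final = particle_filter(
    flips,
    num_particles=500,
    init=lambda: random.random(),        # hypothesis: bias uniform in [0, 1]
    transition=lambda p: p,              # static parameter, no dynamics
    likelihood=lambda p, o: p if o else 1.0 - p,
)
est = sum(final) / len(final)            # posterior mean estimate of the bias
```

With more particles the population approximates the posterior more closely, which is the intuition behind the optimality-in-the-limit guarantee the abstract mentions.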
Original language: English
Pages (from-to): 10-18
Number of pages: 9
Journal: Proceedings of the Australasian Language Technology Association Workshop 2011
Publication status: Published - 2011
Event: Australasian Language Technology Workshop (9th : 2011) - Canberra
Duration: 1 Dec 2011 to 2 Dec 2011

Bibliographical note

Version archived for private and non-commercial use with the permission of the author/s and according to publisher conditions. For further rights please contact the publisher.
