Abstract
Documents come naturally with structure: a section contains paragraphs, which themselves contain sentences; a blog page contains a sequence of comments and links to related blogs. Structure, of course, implies something about shared topics. In this paper we take the simplest form of structure, a document consisting of multiple segments, as the basis for a new form of topic model. To make this computationally feasible, and to allow the form of collapsed Gibbs sampling that has worked well to date with topic models, we use the marginalized posterior of a two-parameter Poisson-Dirichlet process (or Pitman-Yor process) to handle the hierarchical modelling. Experiments using either paragraphs or sentences as segments show that the method significantly outperforms standard topic models applied to either whole documents or individual segments, as well as previous segmented models, as measured by held-out perplexity.
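The paper's segmented model itself is not reproduced here, but the two-parameter Poisson-Dirichlet (Pitman-Yor) process it marginalizes over can be illustrated by its standard Chinese restaurant construction. The sketch below is illustrative only: the function name and parameter names are assumptions, not taken from the paper, and it shows the generic process rather than the paper's hierarchical topic model.

```python
import random

def pitman_yor_crp(n_customers, discount, concentration, rng=None):
    """Sample a random partition from the two-parameter Poisson-Dirichlet
    (Pitman-Yor) process via its Chinese restaurant representation.

    Customer i joins an existing table k with probability proportional to
    (count_k - discount), or opens a new table with probability
    proportional to (concentration + discount * num_tables).
    Setting discount = 0 recovers the Dirichlet process.
    """
    assert 0.0 <= discount < 1.0 and concentration > -discount
    rng = rng or random.Random()
    counts = []  # number of customers seated at each table
    for _ in range(n_customers):
        # Unnormalized seating probabilities; the last entry is a new table.
        weights = [c - discount for c in counts]
        weights.append(concentration + discount * len(counts))
        r = rng.uniform(0.0, sum(weights))
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r < acc:
                break
        if k == len(counts):
            counts.append(1)  # open a new table
        else:
            counts[k] += 1
    return counts

# Example: partition 1000 customers; larger discounts yield heavier tails.
counts = pitman_yor_crp(1000, discount=0.5, concentration=1.0,
                        rng=random.Random(0))
```

The power-law behaviour controlled by the discount parameter is one reason the Pitman-Yor process fits the word and topic distributions found in text better than a plain Dirichlet process.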
Original language | English
---|---
Pages (from-to) | 5-19
Number of pages | 15
Journal | Machine Learning
Volume | 81
Issue number | 1
DOIs |
Publication status | Published - Oct 2010
Keywords
- Document structure
- Latent Dirichlet allocation
- Segmented topic model
- Two-parameter Poisson-Dirichlet process