Differential topic models

Changyou Chen, Wray Buntine, Nan Ding, Lexing Xie, Lan Du

Research output: Contribution to journal › Article › peer-review

16 Citations (Scopus)

Abstract

In applications we may want to compare different document collections: they could share common content yet also have aspects unique to particular collections. This task has been called comparative text mining or cross-collection modeling. We present a differential topic model for this application that models both topic differences and similarities, using hierarchical Bayesian nonparametric models. We found it important to properly model power-law phenomena in topic-word distributions, and thus used the full Pitman-Yor process rather than just a Dirichlet process. Furthermore, we propose the transformed Pitman-Yor process (TPYP) to incorporate prior knowledge, such as vocabulary variation across collections, into the model. To deal with the non-conjugacy between the model prior and likelihood in the TPYP, we propose an efficient sampling algorithm using a data augmentation technique based on the multinomial theorem. Experimental results show that the model discovers interesting aspects of different collections. We also show that the proposed MCMC-based algorithm achieves dramatically reduced test perplexity compared to some existing topic models. Finally, we show that our model outperforms the state of the art for document classification/ideology prediction on a number of text collections.
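The abstract's motivation for preferring the full Pitman-Yor process over a Dirichlet process is its power-law behavior. A minimal illustrative sketch (not the paper's TPYP model or sampler): the Pitman-Yor Chinese restaurant process with discount `d > 0` opens new clusters at a rate that grows as a power of the number of draws, whereas the Dirichlet process (`d = 0`) opens them only logarithmically. The function and parameter names below are our own for illustration.

```python
import random

def pitman_yor_crp(n, discount, concentration, seed=0):
    """Simulate n draws from a Pitman-Yor Chinese restaurant process.

    Returns the list of table (cluster) sizes. discount=0 recovers the
    Dirichlet process; 0 < discount < 1 yields heavier, power-law tails.
    """
    rng = random.Random(seed)
    tables = []  # tables[k] = number of customers seated at table k
    for i in range(n):  # i customers are already seated
        t = len(tables)
        # Probability of opening a new table: (theta + d*t) / (theta + i)
        p_new = (concentration + discount * t) / (concentration + i)
        if t == 0 or rng.random() < p_new:
            tables.append(1)
        else:
            # Join existing table k with probability proportional to (n_k - d);
            # total mass over existing tables is i - d*t.
            r = rng.random() * (i - discount * t)
            acc = 0.0
            for k, nk in enumerate(tables):
                acc += nk - discount
                if r < acc:
                    tables[k] += 1
                    break

    return tables

dp = pitman_yor_crp(5000, discount=0.0, concentration=10.0)
pyp = pitman_yor_crp(5000, discount=0.7, concentration=10.0)
print(len(dp), len(pyp))  # the PYP run typically creates far more tables
```

With a vocabulary-sized number of draws, the discounted process produces many small clusters alongside a few large ones, mirroring the power-law topic-word statistics the paper argues a plain Dirichlet process cannot capture.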

Original language: English
Article number: 6777293
Pages (from-to): 230-242
Number of pages: 13
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 37
Issue number: 2
DOIs
Publication status: Published - 1 Feb 2015
