Discrepancy bounds for uniformly ergodic Markov chain quasi-Monte Carlo

Josef Dick, Daniel Rudolf, Houying Zhu

Research output: Contribution to journal › Article › peer-review



Markov chains can be used to generate samples whose distribution approximates a given target distribution. The quality of such samples can be measured by the discrepancy between the empirical distribution of the samples and the target distribution. We prove upper bounds on this discrepancy under the assumption that the Markov chain is uniformly ergodic and the driver sequence is deterministic rather than a sequence of independent U(0,1) random variables. In particular, we show the existence of driver sequences for which the discrepancy of the Markov chain from the target distribution with respect to certain test sets converges at (almost) the usual Monte Carlo rate of n^{-1/2}.
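The setting can be illustrated with a small sketch: an independence Metropolis chain is driven by a deterministic sequence in place of i.i.d. U(0,1) inputs, and the discrepancy of the resulting samples is measured against the target over test sets of the form [0, t] (a Kolmogorov-Smirnov-type statistic). The Beta(2,2) target, the uniform proposal, and the Halton driver sequence below are illustrative choices, not the construction from the paper, which establishes the existence of good driver sequences rather than naming a specific one.

```python
def van_der_corput(n, base):
    """Radical-inverse (van der Corput) point of index n in the given base."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        q += (n % base) * bk
        n //= base
        bk /= base
    return q

def beta22_pdf(x):
    # Density of the Beta(2,2) target on [0, 1].
    return 6.0 * x * (1.0 - x)

def beta22_cdf(x):
    # CDF of Beta(2,2), used to evaluate the discrepancy.
    return x * x * (3.0 - 2.0 * x)

def independence_metropolis(driver, x0=0.5):
    """Run an independence Metropolis chain with uniform proposals.

    `driver` yields pairs (u_prop, u_acc) in [0,1)^2: the first coordinate
    is used as the proposal, the second for the accept/reject decision.
    """
    xs, x = [], x0
    for u_prop, u_acc in driver:
        y = u_prop  # uniform proposal on (0, 1)
        alpha = min(1.0, beta22_pdf(y) / beta22_pdf(x))
        if u_acc < alpha:
            x = y
        xs.append(x)
    return xs

def ks_discrepancy(xs):
    """sup over t of |F_n(t) - F(t)|, the discrepancy w.r.t. intervals [0, t]."""
    xs = sorted(xs)
    n = len(xs)
    return max(
        max(abs((i + 1) / n - beta22_cdf(x)), abs(i / n - beta22_cdf(x)))
        for i, x in enumerate(xs)
    )

n = 4096
# Deterministic driver: 2-dimensional Halton points (bases 2 and 3).
halton = [(van_der_corput(i + 1, 2), van_der_corput(i + 1, 3)) for i in range(n)]
samples = independence_metropolis(halton)
print(ks_discrepancy(samples))
```

For comparison, replacing `halton` with pairs of pseudo-random uniforms recovers the standard Monte Carlo setup, whose discrepancy decays at rate n^{-1/2}.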

Original language: English
Pages (from-to): 3178-3205
Number of pages: 28
Journal: Annals of Applied Probability
Issue number: 5
Publication status: Published - Oct 2016
Externally published: Yes


  • Markov chain Monte Carlo
  • uniformly ergodic Markov chain
  • discrepancy theory
  • probabilistic method


