Fitting finite mixture models using iterative Monte Carlo classification

Jing Xu*, Jun Ma

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review



    Parameters of a finite mixture model are often estimated by the expectation–maximization (EM) algorithm, which maximizes the observed-data log-likelihood function. This paper proposes an alternative approach to fitting finite mixture models. Our method, called iterative Monte Carlo classification (IMCC), is also an iterative fitting procedure. Within each iteration, it first estimates the membership probabilities for each data point, namely the conditional probability that the data point belongs to a particular mixing component given its observed value. It then classifies each data point into a component distribution by Monte Carlo sampling from the estimated conditional probabilities, and finally updates the parameters of each component distribution using the classified data. Simulation studies were conducted to compare IMCC with other algorithms for fitting mixtures of normal densities and mixtures of t densities.
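    The three per-iteration steps described in the abstract (estimate membership probabilities, classify each point by Monte Carlo sampling, update component parameters from the classified data) can be sketched for a univariate two-component normal mixture as follows. This is a minimal illustrative sketch: the initialisation, iteration count, and all function names are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu, sigma):
    # density of N(mu, sigma^2) evaluated elementwise at x
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def imcc_normal_mixture(x, n_components=2, n_iter=200):
    """Illustrative IMCC sketch for a univariate normal mixture
    (assumed details; not the authors' reference code)."""
    n = len(x)
    # crude initialisation: equal weights, quantile-spread means
    pi = np.full(n_components, 1.0 / n_components)
    mu = np.quantile(x, np.linspace(0.25, 0.75, n_components))
    sigma = np.full(n_components, x.std())
    for _ in range(n_iter):
        # step 1: membership probabilities tau[i, k] = P(z_i = k | x_i)
        dens = np.stack([pi[k] * normal_pdf(x, mu[k], sigma[k])
                         for k in range(n_components)], axis=1)
        tau = dens / dens.sum(axis=1, keepdims=True)
        # step 2: Monte Carlo classification -- draw one label per point
        u = rng.random(n)
        z = (u[:, None] > np.cumsum(tau, axis=1)).sum(axis=1)
        # step 3: update each component's parameters from its classified data
        for k in range(n_components):
            xk = x[z == k]
            if len(xk) < 2:        # guard against an emptied component
                continue
            pi[k] = len(xk) / n
            mu[k] = xk.mean()
            sigma[k] = xk.std()
        pi /= pi.sum()
    return pi, mu, sigma
```

    Because step 2 samples hard labels rather than carrying soft weights (as EM does), the parameter estimates fluctuate from iteration to iteration; in practice one would average estimates over later iterations or monitor convergence rather than fix the iteration count as above.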

    Original language: English
    Pages (from-to): 6684-6693
    Number of pages: 10
    Journal: Communications in Statistics - Theory and Methods
    Issue number: 13
    Publication status: Published - 3 Jul 2017


    • complete data log-likelihood function
    • EM algorithm
    • finite mixture models
    • IMCC algorithm
    • membership probability


