MLiT: mixtures of Gaussians under linear transformations

Ahmed Fawzi Otoom, Hatice Gunes, Oscar Perez Concha, Massimo Piccardi

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

The curse of dimensionality hinders the effectiveness of density estimation in high-dimensional spaces. Many techniques have been proposed in the past to discover embedded, locally linear manifolds of lower dimensionality, including the mixture of principal component analyzers, the mixture of probabilistic principal component analyzers and the mixture of factor analyzers. In this paper, we propose a novel mixture model for reducing dimensionality based on a linear transformation that is constrained neither to be orthogonal nor to be aligned with the principal directions. For experimental validation, we have used the proposed model for classification of five "hard" data sets and compared its accuracy with that of other popular classifiers. The proposed method outperformed the mixture of probabilistic principal component analyzers on four of the five data sets, with improvements ranging from 0.5 to 3.2%. Moreover, on all data sets, its accuracy exceeded that of the Gaussian mixture model, with improvements ranging from 0.2 to 3.4%.
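
As a rough illustration of the idea only (not the paper's MLiT estimator, which learns the transformation jointly with the mixture via regularized maximum likelihood), the sketch below fits a Gaussian mixture in a space obtained through a generic, non-orthogonal linear projection. The data, the projection matrix W, the dimensions and the component count are all placeholder assumptions.

# Illustrative sketch: density estimation with a Gaussian mixture after a
# generic linear dimensionality reduction. Unlike PCA, the map W below is
# neither orthogonal nor aligned with principal directions; here it is drawn
# at random purely for illustration, whereas MLiT estimates it from data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy high-dimensional data: 500 samples in D = 50 dimensions (placeholder).
X = rng.normal(size=(500, 50))

# A generic linear transformation to d = 5 dimensions.
D, d = X.shape[1], 5
W = rng.normal(size=(D, d))

# Fit a mixture of Gaussians in the reduced space.
Z = X @ W
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(Z)

# Per-sample log-likelihood under the reduced-dimensional mixture.
print(gmm.score_samples(Z)[:5])

In the actual method, the transformation would be estimated by maximizing a regularized likelihood rather than fixed in advance; the sketch only shows where the reduced-dimensional mixture sits in the pipeline.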

Original language: English
Pages (from-to): 193-205
Number of pages: 13
Journal: Pattern Analysis and Applications
Volume: 14
Issue number: 2
DOIs
Publication status: Published - May 2011
Externally published: Yes

Keywords

  • Dimensionality reduction
  • Linear transformations
  • Mixture models
  • Object classification
  • Regularized maximum-likelihood
