TY - CHAP
T1 - Mixtures of Normalized Linear Projections
AU - Otoom, Ahmed Fawzi
AU - Perez Concha, Oscar
AU - Gunes, Hatice
AU - Piccardi, Massimo
PY - 2009
Y1 - 2009
N2 - High dimensional spaces pose a challenge to any classification task. In fact, these spaces contain much redundancy and it becomes crucial to reduce the dimensionality of the data to improve analysis, density modeling, and classification. In this paper, we present a method for dimensionality reduction in mixture models and its use in classification. For each component of the mixture, the data are projected by a linear transformation onto a lower-dimensional space. Subsequently, the projection matrices and the densities in such compressed spaces are learned by means of an Expectation Maximization (EM) algorithm. However, two main issues arise as a result of implementing this approach, namely: 1) the scale of the densities can be different across the mixture components and 2) a singularity problem may occur. We suggest solutions to these problems and validate the proposed method on three image data sets from the UCI Machine Learning Repository. The classification performance is compared with that of a mixture of probabilistic principal component analysers (MPPCA). Across the three data sets, our accuracy always compares favourably, with improvements ranging from 2.5% to 35.4%.
UR - http://www.scopus.com/inward/record.url?scp=70549111713&partnerID=8YFLogxK
UR - http://purl.org/au-research/grants/arc/LP150100487
DO - 10.1007/978-3-642-04697-1_7
M3 - Chapter
AN - SCOPUS:70549111713
SN - 3642046967
SN - 9783642046964
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 66
EP - 76
BT - Advanced Concepts for Intelligent Vision Systems
A2 - Blanc-Talon, Jacques
A2 - Philips, Wilfried
A2 - Popescu, Dan
A2 - Scheunders, Paul
PB - Springer
CY - Berlin
T2 - 11th International Conference on Advanced Concepts for Intelligent Vision Systems, ACIVS 2009
Y2 - 28 September 2009 through 2 October 2009
ER -