Abstract
Principal Component Analysis (PCA) is a practical and standard statistical tool in modern data analysis that has found application in areas such as face recognition, image compression and neuroscience. It has been called one of the most valuable results of applied linear algebra. PCA is a straightforward, non-parametric method for extracting relevant information from confusing data sets. It provides a roadmap for reducing a complex data set to a lower dimension in order to reveal the hidden, simplified structures that often underlie it. This paper addresses the methodological analysis of the Principal Component Analysis (PCA) method. PCA is a statistical approach for reducing the number of variables, and it is widely used in face recognition. In PCA, every image in the training set is represented as a linear combination of weighted eigenvectors called eigenfaces. These eigenvectors are obtained from the covariance matrix of the training image set. The weights are found after selecting a set of the most relevant eigenfaces. Recognition is performed by projecting a test image onto the subspace spanned by the eigenfaces; classification is then done by finding the minimum Euclidean distance. In this paper we present a comprehensive discussion of PCA and also simulate it on some data sets using MATLAB.
Original language | English |
---|---|
Pages (from-to) | 32-38 |
Number of pages | 7 |
Journal | International Journal of Computational Engineering & Management |
Volume | 16 |
Issue number | 2 |
Publication status | Published - 2013 |
Externally published | Yes |
Keywords
- Principal component
- Covariance matrix
- Eigenvalue
- Eigenvector
- PCA
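The eigenface pipeline described in the abstract (center the training images, take eigenvectors of their covariance matrix, project images onto the top eigenfaces, classify a test image by minimum Euclidean distance in weight space) can be sketched as follows. The paper's own experiments use MATLAB; this is a minimal NumPy sketch with a hypothetical tiny synthetic "training set", not the authors' implementation.

```python
import numpy as np

# Hypothetical tiny training set: 6 "images" of 4x4 pixels, flattened to vectors.
rng = np.random.default_rng(0)
train = rng.normal(size=(6, 16))

# Center the data: subtract the mean image from every training image.
mean_face = train.mean(axis=0)
centered = train - mean_face

# The principal components ("eigenfaces") are the eigenvectors of the
# covariance matrix of the centered data; the SVD of the centered data
# yields the same vectors in a numerically stable way.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 3                        # number of most relevant eigenfaces to keep
eigenfaces = vt[:k]          # shape (k, 16)

# Each training image is represented by its weights: the coefficients of
# its projection onto the eigenface subspace.
weights = centered @ eigenfaces.T    # shape (6, k)

# Recognition: project a test image onto the same subspace and classify by
# minimum Euclidean distance to the stored weight vectors. Here the test
# image is a slightly noisy copy of training image 2.
test = train[2] + rng.normal(scale=0.01, size=16)
w_test = (test - mean_face) @ eigenfaces.T
match = int(np.argmin(np.linalg.norm(weights - w_test, axis=1)))
print(match)
```

Since the test image is a near-copy of training image 2, the nearest weight vector is that image's, so `match` recovers index 2.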