PCA based Face Recognition
Three well-known appearance-based subspace face recognition algorithms are used to test the impact of compression: Principal Component Analysis, PCA (Turk and Pentland, 1991), Linear Discriminant Analysis, LDA (Belhumeur et al., 1996), and Independent Component Analysis, ICA (Bartlett et al., 2002). It is important to mention that we use ICA Architecture 2 from (Bartlett et al., 2002), because ICA Architecture 1 has been shown to be suboptimal for the face recognition task (Delac et al., 2005; Draghi et al., 2006). PCA is used as a dimensionality reduction preprocessing step for both LDA and ICA.

To train the PCA algorithm we use a subset of classes that have exactly three images per class. We found 225 such classes (different people), so our training set contains 3 × 225 = 675 images (M = 675, C = 225). The impact of this overlap ratio on algorithm performance needs to be explored further and will be part of our future work. According to PCA theory, M - 1 = 674 meaningful eigenvectors are obtained. Following the FERET recommendation, we keep the top 40% of those eigenvectors, resulting in a 270-dimensional PCA subspace W (40% of 674 ≈ 270). These 270 eigenvectors are calculated to retain 97.85% of the energy. This subspace is used for recognition as the PCA face space and also serves as the input to ICA and LDA (with PCA acting as the dimensionality reduction preprocessing step).

ICA yields a 270-dimensional subspace, while LDA produces only a 224-dimensional space, because LDA can theoretically produce at most C - 1 basis vectors. This remains close to the dimensionality of the PCA and ICA spaces, keeping the comparison as fair as possible.
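As a rough illustration of the PCA training step described above, here is a minimal sketch (not the uploaded code itself): it centers the training images, uses the small M × M eigendecomposition trick from Turk and Pentland, keeps the top 40% of the M - 1 meaningful eigenvectors, and reports the retained energy. The array shapes and the helper names train_pca_subspace / project are assumptions for illustration; ICA and LDA would then be trained on the projected coefficients.

# Minimal sketch of the PCA (eigenfaces) training step described above.
# Assumes `train_images` is an (M, d) array of flattened, aligned face images
# (here M = 675 images from C = 225 classes); names and shapes are illustrative.
import numpy as np

def train_pca_subspace(train_images, keep_fraction=0.40):
    """Build a PCA face subspace keeping the top `keep_fraction` of eigenvectors."""
    M = train_images.shape[0]
    mean_face = train_images.mean(axis=0)
    A = train_images - mean_face                      # centered data, shape (M, d)

    # Work with the small M x M matrix (Turk & Pentland trick) instead of d x d.
    L = A @ A.T                                       # (M, M)
    eigvals, eigvecs_small = np.linalg.eigh(L)
    order = np.argsort(eigvals)[::-1][:M - 1]         # M - 1 meaningful eigenvectors
    eigvals = eigvals[order]
    eigvecs = A.T @ eigvecs_small[:, order]           # map back to image space, (d, M-1)
    eigvecs /= np.linalg.norm(eigvecs, axis=0)        # normalize eigenfaces

    # Keep the top 40% of the meaningful eigenvectors (e.g., 270 of 674).
    k = int(round(keep_fraction * (M - 1)))
    W = eigvecs[:, :k]                                # PCA subspace, shape (d, k)
    # Retained energy relative to the M - 1 meaningful eigenvalues.
    energy = eigvals[:k].sum() / eigvals.sum()
    return mean_face, W, energy

def project(images, mean_face, W):
    """Project face images into the PCA subspace (used directly, or as ICA/LDA input)."""
    return (images - mean_face) @ W

With M = 675 images this keeps round(0.40 × 674) = 270 eigenvectors, matching the 270-dimensional PCA subspace above; LDA applied to these projections can yield at most C - 1 = 224 basis vectors.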
santu619 · 2016-08-23