I. Introduction

PCA (Principal Component Analysis) is a dimensionality reduction method frequently used in image processing. A common scenario is image retrieval: given a query image, we need to find similar images in a database of tens of thousands, millions, or even more images.

The usual approach is to extract the corresponding features of the images in the library, such as color, texture, SIFT, SURF, VLAD and other features, save them, and build an index over them. We then extract the same features from the image to be queried, compare them with the features stored in the database, and return the nearest images.
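To make that lookup step concrete, here is a minimal brute-force sketch in MATLAB (the variable names and random data are purely illustrative and not from the original post; a real system would use an approximate index instead of scanning every entry):

% Hypothetical database of stored descriptors, one row per image
dbFeatures = rand(10000, 128);      % 10,000 stored 128-D feature vectors
queryFeat  = rand(1, 128);          % descriptor extracted from the query image

% Euclidean distance from the query to every stored descriptor
diffs = dbFeatures - repmat(queryFeat, size(dbFeatures, 1), 1);
dists = sqrt(sum(diffs.^2, 2));

% Index of the closest (most similar) image in the database
[minDist, bestIdx] = min(dists);
fprintf('Nearest image in the database: %d (distance %.4f)\n', bestIdx, minDist);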

To improve query accuracy, we usually extract relatively complex features such as SIFT and SURF. An image has many such feature points, and each feature point is described by a 128-dimensional vector. Suppose an image has 300 such feature points; with a million images in the database, storing and indexing all of these vectors takes a large amount of memory and a lot of time. If we apply PCA to each vector and reduce it to 64 dimensions, we save roughly half the storage.
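A quick back-of-the-envelope check of that claim, assuming single-precision storage: 10^6 images x 300 descriptors x 128 dimensions x 4 bytes is roughly 154 GB, while 64 dimensions needs about 77 GB. The sketch below is my own illustration on synthetic data (not the script in section II); it shows the basic idea of reducing 128-dimensional descriptors to 64 dimensions through an eigendecomposition of the covariance matrix:

% Minimal PCA sketch on synthetic data: reduce 128-D descriptors to 64-D
descriptors = rand(300, 128);                 % 300 descriptors of one image, 128-D each

mu       = mean(descriptors, 1);              % mean descriptor
centered = descriptors - repmat(mu, size(descriptors, 1), 1);

[E, D]     = eig(cov(centered));              % eigendecomposition of the covariance matrix
[~, order] = sort(diag(D), 'descend');        % sort eigenvalues from largest to smallest
basis      = E(:, order(1:64));               % keep the 64 strongest directions

reduced = centered * basis;                   % 300 x 64 compressed descriptors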

II. Source code

% The basic principle of the K-L transform is to remove the correlation between the
% components of the sample-set data (vectors). From a set of N sample vectors, the
% covariance matrix is computed and diagonalized to obtain its eigenvectors, which
% form an orthogonal basis of the N-dimensional space. The eigenvectors with the
% largest eigenvalues are chosen as the columns of the transformation matrix, and the
% original samples are linearly transformed (projected) onto this eigenspace; the
% resulting vectors are the feature vectors. The K-L transform can be used for sample
% compression and feature extraction. In practice it requires the sample set to be
% distributed fairly compactly; if the distribution is Gaussian, the energy of the
% samples can be concentrated along a few eigenvector directions.
clear; clc;
% Two linearly separable classes
x1 = [-5 -5; -5 4; 4 -5; -5 6; -6 -5];
x2 = [5 5; 5 6; 6 5; 5 4; 4 5];
x  = [x1; x2];
% One-dimensional feature extraction using the PCA transform
X  = x1 + x2;
m0 = sum(sum(X));
% Take p(w1) = p(w2) = 0.5
y1=x1';
y2=x2';
w1=y1;
z1=y1*y1';
z2=y2*y2';
r = 1/10*(z1+z2);
% p = poly(r);           % poly generates the characteristic-polynomial coefficient vector
% root = roots(p)        % eigenvalues of r
% sort(root());
[E,D] = eig(r);
% eigenvalues = flipud(sort(diag(D)));
[eigD,IX] = sort(diag(D),'descend');
eigE(:,1:length(IX)) = E(:,IX);
disp('Eigenvalues of the covariance matrix:'); disp(eigD);
disp('Eigenvector (column) matrix corresponding to the eigenvalues of the covariance matrix:'); disp(eigE);
% Covariance matrix of the sample set X
CovX = cov(X);
disp('Covariance matrix of the sample set:'); disp(CovX);
% Eigenvalues (D) and eigenvector matrix (E) of the covariance matrix CovX
[E,D] = eig(CovX);
[eigD,IX] = sort(diag(D),'descend');
eigE(:,1:length(IX)) = E(:,IX);
disp('Eigenvalues of the covariance matrix:'); disp(eigD);
disp('Eigenvector (column) matrix corresponding to the eigenvalues of the covariance matrix:'); disp(eigE);
% PCA: project the centered samples onto the eigenvector basis
Y = (x - repmat(mean(X,1),10,1)) * eigE;
disp('Sample set x:'); disp(x);
disp('PCA result of the sample set X:');
disp(Y);

Srange = minmax(x(:,1)');   % range of the first feature component
Smean  = mean(x);           % sample set center point
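As a small usage note (my addition, not part of the original script): once eigE has been computed, a new 2-D sample can be projected into the same eigenspace using the same centering, for example:

% Project a new sample with the eigenvectors computed above (illustrative only)
xNew = [3 4];
yNew = (xNew - mean(X,1)) * eigE;
disp('Projection of the new sample:'); disp(yNew);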

III. Operation results

IV. Remarks

Version: MATLAB 2014a. For the complete code or a custom implementation, contact 1564658423.