Introduction to singular values

The singular value is an important concept for matrices, and it is generally obtained through singular value decomposition. Singular value decomposition (SVD) is an important matrix factorization method in linear algebra and matrix theory, and it plays a major role in statistics and signal processing.

Before we get to singular values, let’s look at the concept of eigenvalues.

Similar matrix

In linear algebra, similar matrices are matrices related by a change of basis. Let A and B be n-order matrices. If there is an n-order invertible matrix P such that $P^{-1}AP = B$, the matrix A is said to be similar to B, denoted A ~ B.

Diagonal matrix

A diagonal matrix is a matrix in which all elements off the main diagonal are 0. Such a matrix is often written as $\mathrm{diag}(a_1, a_2, \dots, a_n)$. Diagonal matrices can be considered the simplest kind of matrix. It is worth mentioning that the diagonal elements may be 0 or any other value; a diagonal matrix whose diagonal elements are all equal is called a scalar matrix, and a diagonal matrix with all ones on the diagonal is called the identity matrix. The sum, difference, scalar multiple, and product of diagonal matrices of the same order are still diagonal matrices.
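As a quick check, here is a minimal numpy sketch (the matrices are made-up examples, not from the article) verifying that these operations keep the result diagonal:

```python
import numpy as np

# Two diagonal matrices of the same order (example values).
D1 = np.diag([1.0, 2.0, 3.0])   # diag(a1, a2, a3)
D2 = np.diag([4.0, 5.0, 6.0])

print(D1 + D2)   # sum: diag(5, 7, 9)
print(D1 - D2)   # difference: diag(-3, -3, -3)
print(2 * D1)    # scalar multiple: diag(2, 4, 6)
print(D1 @ D2)   # product: diag(4, 10, 18)

# The identity matrix: a diagonal matrix with all ones on the diagonal.
I = np.eye(3)
print(np.allclose(D1 @ I, D1))   # True
```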

Diagonalizable matrices

Diagonalizable matrices are an important class of matrices in linear algebra and matrix theory. A square matrix A is said to be diagonalizable if it is similar to a diagonal matrix, that is, if there exists an invertible matrix P such that $P^{-1}AP$ is diagonal.

The eigenvalue

Let A be an n-order matrix. If there exists a constant λ and an n-dimensional non-zero vector x such that $Ax = \lambda x$, then λ is said to be an eigenvalue of the matrix A, and x an eigenvector of A belonging to the eigenvalue λ.

For a real symmetric matrix, eigenvectors belonging to distinct eigenvalues form a set of orthogonal vectors; in general they are only guaranteed to be linearly independent.

That is, applying the linear transformation A to one of its eigenvectors only stretches or shrinks the vector without changing its direction.

A linear transformation can usually be completely described by its eigenvalues and eigenvectors. An eigenspace is the set of all eigenvectors that share the same eigenvalue.
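As a concrete illustration (a minimal numpy sketch with a made-up matrix, not from the original text), we can check that each eigenpair satisfies $Ax = \lambda x$:

```python
import numpy as np

# A small symmetric matrix (example values) and its eigenpairs.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)  # columns are the eigenvectors

for i, lam in enumerate(eigenvalues):
    x = eigenvectors[:, i]
    # A only stretches the eigenvector; it does not change its direction.
    print(np.allclose(A @ x, lam * x))   # True
```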

Eigendecomposition

Eigendecomposition, also known as spectral decomposition, is a method of decomposing a matrix into a product of matrices built from its eigenvalues and eigenvectors. Note that only diagonalizable matrices admit an eigendecomposition.

Let A be an N×N square matrix with N linearly independent eigenvectors $q_i$ ($i = 1, \dots, N$). Then A can be decomposed as:

$$A = Q \Lambda Q^{-1}$$

where Q is the N×N square matrix whose i-th column is the i-th eigenvector of A. Writing the eigenvectors of A as $\left[x_1, x_2, \dots, x_m\right]$, each $x_i$ is an n-dimensional non-zero vector.

$\Lambda$ is a diagonal matrix whose diagonal elements are the corresponding eigenvalues, i.e. $\Lambda_{ii} = \lambda_i$:

$$\Lambda = \left[\begin{matrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_m \end{matrix}\right]$$

Note here that only diagonalizable matrices can be eigendecomposed. For example, $\left[\begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix}\right]$ cannot be diagonalized, so it has no eigendecomposition.
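Here is a minimal numpy sketch (the symmetric matrix is an assumed example) that reconstructs a diagonalizable matrix from its eigendecomposition:

```python
import numpy as np

# A diagonalizable (symmetric) example matrix.
A = np.array([[4.0, 1.0],
              [1.0, 4.0]])

eigvals, Q = np.linalg.eig(A)     # columns of Q are eigenvectors of A
Lam = np.diag(eigvals)            # Lambda: eigenvalues on the diagonal

A_rebuilt = Q @ Lam @ np.linalg.inv(Q)
print(np.allclose(A, A_rebuilt))  # True: A = Q Lambda Q^{-1}

# By contrast, [[1, 1], [0, 1]] has only one independent eigenvector,
# so no invertible Q of eigenvectors exists and it cannot be decomposed this way.
```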

Since $A = Q \Lambda Q^{-1}$, A is decomposed into the product of three matrices, that is, three successive mappings.

If we now have a vector x, we can conclude the following:


$$Ax = Q \Lambda Q^{-1} x$$

When A is real symmetric, Q is an orthogonal matrix, and the inverse of an orthogonal matrix equals its transpose, so $Q^{-1} = Q^T$. The transformation of x by $Q^T$ is an orthogonal transformation: it represents x in a new coordinate system, namely the one formed by all the orthogonal eigenvectors of A. For example, x can be expressed in terms of all the eigenvectors of A:


$$x = a_1 x_1 + a_2 x_2 + \dots + a_m x_m$$

So the first transformation maps x to $\left[a_1, a_2, \dots, a_m\right]^T$:


$$Q \Lambda Q^{-1} x = Q \Lambda \left[\begin{matrix} x_1^T \\ x_2^T \\ \vdots \\ x_m^T \end{matrix}\right] (a_1 x_1 + a_2 x_2 + a_3 x_3 + \dots + a_m x_m) = Q \Lambda \left[\begin{matrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{matrix}\right]$$

Then, in the new coordinate representation, the middle diagonal matrix acts on the new coordinates, which stretches or compresses the vector in each direction:


$$Q \Lambda \left[\begin{matrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{matrix}\right] = Q \left[\begin{matrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_m \end{matrix}\right] \left[\begin{matrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{matrix}\right] = Q \left[\begin{matrix} \lambda_1 a_1 \\ \lambda_2 a_2 \\ \vdots \\ \lambda_m a_m \end{matrix}\right]$$

If A is not full rank, some diagonal elements of the diagonal matrix are 0. This causes a loss of dimension: the mapped vector falls into a lower-dimensional subspace of the m-dimensional space.

The final transformation applies Q to the stretched or compressed vector. Since Q and $Q^{-1}$ are inverse matrices, the transformation by Q is the inverse of the transformation by $Q^{-1}$, mapping the result back to the original coordinate system.
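Putting the three mappings together, here is a minimal numpy sketch (assuming A is real symmetric, so that Q is orthogonal and $Q^{-1} = Q^T$):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 4.0]])     # symmetric example matrix
x = np.array([1.0, 2.0])

eigvals, Q = np.linalg.eig(A)  # symmetric A => Q is orthogonal
Lam = np.diag(eigvals)

a = Q.T @ x          # 1) change coordinates: x in the eigenvector basis
stretched = Lam @ a  # 2) stretch/compress each coordinate by lambda_i
y = Q @ stretched    # 3) map the result back to the original coordinates

print(np.allclose(y, A @ x))   # True: the three steps reproduce Ax
```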

Geometric meaning of eigenvalues

A matrix times a column vector is the same thing as a linear combination of the column vectors of the matrix. A row vector times a matrix is the same thing as a linear combination of the row vectors of the matrix.

So when you multiply a vector times a matrix, you’re essentially transforming this vector geometrically.
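A minimal numpy sketch (with made-up numbers) of both facts:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([5.0, 6.0])

# A @ x is a linear combination of the columns of A ...
cols = x[0] * A[:, 0] + x[1] * A[:, 1]
print(np.allclose(A @ x, cols))      # True

# ... and x @ A is a linear combination of the rows of A.
rows = x[0] * A[0, :] + x[1] * A[1, :]
print(np.allclose(x @ A, rows))      # True
```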

As stated previously, $\Lambda$ is a diagonal matrix whose diagonal elements are the corresponding eigenvalues, i.e. $\Lambda_{ii} = \lambda_i$:

$$\Lambda = \left[\begin{matrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_m \end{matrix}\right]$$

These eigenvalues represent how strongly the linear transformation stretches the vector in each of these directions.

Singular value

If A is an m×n matrix and q = min(m, n), the arithmetic square roots of the q non-negative eigenvalues of $A^*A$ (that is, $A^TA$ for a real matrix) are called the singular values of A.
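A minimal numpy sketch (with an assumed example matrix) that checks this definition against numpy's own SVD routine:

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])   # m = 2, n = 3, so q = min(m, n) = 2

# Eigenvalues of A^T A are non-negative; their square roots are the singular values.
eigvals = np.linalg.eigvalsh(A.T @ A)                      # ascending order
sv_from_eig = np.sqrt(np.maximum(eigvals, 0.0))[::-1][:2]  # q largest, descending

sv_direct = np.linalg.svd(A, compute_uv=False)             # numpy's singular values
print(np.allclose(sv_from_eig, sv_direct))                 # True
```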

Singular value decomposition (SVD)

Eigendecomposition makes it easy to extract the features of a matrix, but it requires the matrix to be square. For a non-square matrix, we have to use singular value decomposition instead. Let's look at the definition of singular value decomposition:


$$A = U \Sigma V^T$$

where A is the m×n matrix to be decomposed, U is an m×m square matrix, $\Sigma$ is an m×n matrix whose off-diagonal elements are all 0, and $V^T$ is the transpose of V, an n×n square matrix.

Like the eigenvalues, the singular values are arranged from largest to smallest in the $\Sigma$ matrix, and they decrease very quickly: in many cases, the sum of the first 10% or even 1% of the singular values accounts for more than 99% of the total. In other words, we can approximate the matrix using only the r largest singular values, where r is much smaller than m and n, and thereby compress the matrix.

By singular value decomposition, we can approximately represent the original matrix with a much smaller amount of data.
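As a sketch of this compression idea (illustrative random data, not from the article), a rank-r truncation of the SVD stores far fewer numbers than the original matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
# A 100x80 matrix of rank 5, so its singular values beyond the 5th are zero.
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

r = 5                                        # keep only the r largest singular values
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

print(np.allclose(A, A_r))                   # True: the truncation recovers A
# Storage: 100*80 = 8000 values originally vs r*(100 + 80 + 1) = 905 after truncation.
print(A.size, r * (100 + 80 + 1))
```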

This article is available at www.flydean.com

The most accessible explanations, the most practical content, the most concise tutorials: plenty of tips you didn't know are waiting for you to discover!

Welcome to follow my WeChat official account, "those things about programs": we understand technology, and we understand you even better!