Linear algebra
NumPy provides the matrix type for creating matrix objects. Their addition, subtraction, multiplication and division are performed as matrix operations by default, so the usage is very similar to MATLAB. However, because NumPy has both ndarray and matrix objects, users can easily confuse the two. This goes against Python's "explicit is better than implicit" principle, so using matrix in programs is not officially recommended. Here we will stick with ndarray.
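As a minimal sketch of the difference: for an ndarray the * operator is an element-wise product, while for a matrix object it is the matrix product.
import numpy as np
a = np.array([[1, 2], [3, 4]])
m = np.matrix([[1, 2], [3, 4]])
print(a * a)  # element-wise product for ndarray
# [[ 1  4]
#  [ 9 16]]
print(m * m)  # matrix product for matrix
# [[ 7 10]
#  [15 22]]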
Matrix and vector product
Defining a matrix, matrix addition, element-wise multiplication and matrix transpose work exactly the same as for two-dimensional arrays and will not be explained again, but the matrix product has its own representation.
numpy.dot(a, b[, out])
Compute the product of two matrices, or their inner product in the case of a one-dimensional array.
[example 1]
import numpy as np
x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 3, 4, 5, 6])
z = np.dot(x, y)
print(z) # 70
x = np.array([[1, 2, 3], [3, 4, 5], [6, 7, 8]])
print(x)
# [[1 2 3]
#  [3 4 5]
#  [6 7 8]]
y = np.array([[5, 4, 2], [1, 7, 9], [0, 4, 5]])
print(y)
# [[5 4 2]
#  [1 7 9]
#  [0 4 5]]
z = np.dot(x, y)
print(z)
# [[7 30 35]
# [19 60 67]
# [37 105 115]]
z = np.dot(y, x)
print(z)
# [[29 40 51]
# [76 93 110]
# [42 51 60]]
Note: dimensions in linear algebra are different from the dimensions of an array. For example, the n-dimensional row vector of linear algebra is a one-dimensional array of length n in NumPy, while the n-dimensional column vector of linear algebra is a two-dimensional array of shape (n, 1) in NumPy.
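A minimal sketch of this distinction:
import numpy as np
row = np.array([1, 2, 3])        # 1-D array: the "row vector" of linear algebra
col = np.array([[1], [2], [3]])  # shape (3, 1): the "column vector" of linear algebra
print(row.shape)  # (3,)
print(col.shape)  # (3, 1)
print(row.reshape(-1, 1).shape)  # (3, 1), a 1-D array reshaped into a column vector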
Eigenvalues and eigenvectors of matrices
numpy.linalg.eig(a)
Compute the eigenvalues and eigenvectors of a square matrix.
numpy.linalg.eigvals(a)
Compute the eigenvalues of the square matrix.
Find the eigenvalues and eigenvectors of a square matrix.
import numpy as np
# Create a diagonal matrix
x = np.diag((1, 2, 3))
print(x)
# [[1 0 0]
#  [0 2 0]
#  [0 0 3]]
print(np.linalg.eigvals(x))
# [1. 2. 3.]
a, b = np.linalg.eig(x)
# The eigenvalues are stored in a and the eigenvectors are stored in b
print(a)
# [1. 2. 3.]
print(b)
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]]
# Check whether the eigenvalues and eigenvectors are correct
for i in range(3):
    if np.allclose(a[i] * b[:, i], np.dot(x, b[:, i])):
        print('Right')
    else:
        print('Error')
# Right
# Right
# Right
Check whether the symmetric matrix is positive definite (whether the eigenvalues are all positive).
import numpy as np
A = np.arange(16).reshape(4, 4)
print(A)
# [[0 1 2 3]
# [4 5 6 7]
# [8 9 10 11]
# [12 13 14 15]]
A = A + A.T # Convert square matrix to symmetric matrix
print(A)
# [[0 5 10 15]
# [5 10 15 20]
# [10 15 20 25]
# [15 20 25 30]]
B = np.linalg.eigvals(A)  # find the eigenvalues of A
print(B)
# [ 6.74165739e+01 -7.41657387e+00  1.82694656e-15 -1.72637110e-15]
# Use np.all to check whether all eigenvalues are greater than 0; clearly the symmetric matrix A is not positive definite
if np.all(B > 0):
    print('Yes')
else:
    print('No')
# No
Matrix decomposition
Singular value decomposition
For the principle of singular value decomposition, see: Singular Value Decomposition (SVD) and its Applications.
u, s, v = numpy.linalg.svd(a, full_matrices=True, compute_uv=True, hermitian=False)
Singular value decomposition.
- a is an (M, N) matrix to decompose.
- full_matrices can be False or True; the default is True, in which case u has shape (M, M) and v has shape (N, N). Otherwise u has shape (M, K) and v has shape (K, N), where K = min(M, N).
- compute_uv can be False or True; the default is True, meaning u, s and v are all computed. If False, only s is computed.
- There are three return values in total: u, s and v. u has shape (M, M), s holds the singular values (NumPy returns them as a one-dimensional array; the corresponding diagonal matrix has shape (M, N)), v has shape (N, N), and a = u*s*v.
- s contains the singular values of a. Viewed as a diagonal matrix, every element off the diagonal is 0, and the diagonal elements are sorted from largest to smallest. Of the n singular values in s, the trailing ones are usually close to 0, so generally only the larger r singular values are kept.
Note: the v returned by NumPy (vh in the examples below) is the transpose of the v in the commonly written singular value decomposition a = u*s*v'.
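Before the examples, a minimal sketch of how full_matrices and compute_uv affect the returned values (reusing the matrix from example 1 below):
import numpy as np
A = np.array([[4, 11, 14], [8, 7, -2]])  # M = 2, N = 3, K = min(M, N) = 2
u, s, vh = np.linalg.svd(A, full_matrices=True)
print(u.shape, s.shape, vh.shape)  # (2, 2) (2,) (3, 3)
u, s, vh = np.linalg.svd(A, full_matrices=False)
print(u.shape, s.shape, vh.shape)  # (2, 2) (2,) (2, 3)
s = np.linalg.svd(A, compute_uv=False)  # only the singular values are computed
print(s)  # [18.97366596  9.48683298]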
[example 1]
import numpy as np
A = np.array([[4, 11, 14], [8, 7, -2]])
print(A)
# [[ 4 11 14]
#  [ 8  7 -2]]
u, s, vh = np.linalg.svd(A, full_matrices=False)
print(u.shape) # (2, 2)
print(u)
# [[-0.9486833  -0.31622777]
#  [-0.31622777  0.9486833 ]]
print(s.shape)  # (2,)
print(np.diag(s))
# [[18.97366596 0.]
# [0. 9.48683298]]
print(vh.shape) # (2, 3)
print(vh)
# [[-0.33333333 -0.66666667 -0.66666667]
#  [ 0.66666667  0.33333333 -0.66666667]]
a = np.dot(u, np.diag(s))
a = np.dot(a, vh)
print(a)
# [[ 4. 11. 14.]
#  [ 8.  7. -2.]]
[example 2]
import numpy as np
A = np.array([[1, 1], [1, -2], [2, 1]])
print(A)
# [[ 1  1]
#  [ 1 -2]
#  [ 2  1]]
u, s, vh = np.linalg.svd(A, full_matrices=False)
print(u.shape) # (3, 2)
print(u)
# [[-5.34522484e-01 -1.11022302e-16]
#  [ 2.67261242e-01  9.48683298e-01]
#  [-8.01783726e-01  3.16227766e-01]]
print(s.shape)  # (2,)
print(np.diag(s))
# [[2.64575131 0.]
# [0. 2.23606798]]
print(vh.shape) # (2, 2)
print(vh)
# [[-0.70710678 -0.70710678]
#  [ 0.70710678 -0.70710678]]
a = np.dot(u, np.diag(s))
a = np.dot(a, vh)
print(a)
# [[ 1.  1.]
#  [ 1. -2.]
#  [ 2.  1.]]
QR decomposition
q,r = numpy.linalg.qr(a, mode='reduced')
Compute the QR decomposition of matrix a.
- a is an (M, N) matrix to factorize.
- mode = 'reduced': returns q, an (M, N) matrix with orthonormal columns, and r, an (N, N) upper triangular matrix (reduced QR, the default).
- mode = 'complete': returns q, an (M, M) orthogonal matrix, and r, an (M, N) upper triangular matrix (full QR decomposition).
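Before the examples, a minimal sketch comparing the two modes on the (3, 2) matrix used in examples 2 and 3 below; the first N columns of the complete q coincide with the reduced q:
import numpy as np
A = np.array([[1, 1], [1, -2], [2, 1]])
q1, r1 = np.linalg.qr(A)                   # mode='reduced' is the default
q2, r2 = np.linalg.qr(A, mode='complete')
print(q1.shape, r1.shape)  # (3, 2) (2, 2)
print(q2.shape, r2.shape)  # (3, 3) (3, 2)
print(np.allclose(q2[:, :2], q1))  # True, the first two columns agree
print(np.allclose(r2[:2, :], r1))  # True, the extra row of r2 is all zeros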
[example 1]
import numpy as np
A = np.array([[2, -2, 3], [1, 1, 1], [1, 3, -1]])
print(A)
# [[ 2 -2  3]
#  [ 1  1  1]
#  [ 1  3 -1]]
q, r = np.linalg.qr(A)
print(q.shape) # (3, 3)
print(q)
# [[-0.81649658  0.53452248  0.21821789]
#  [-0.40824829 -0.26726124 -0.87287156]
#  [-0.40824829 -0.80178373  0.43643578]]
print(r.shape) # (3, 3)
print(r)
# [[-2.44948974  0.         -2.44948974]
#  [ 0.         -3.74165739  2.13808994]
#  [ 0.          0.         -0.65465367]]
print(np.dot(q, r))
# [[ 2. -2.  3.]
#  [ 1.  1.  1.]
#  [ 1.  3. -1.]]
a = np.allclose(np.dot(q.T, q), np.eye(3))
print(a) # True
[example 2]
import numpy as np
A = np.array([[1, 1], [1, -2], [2, 1]])
print(A)
# [[ 1  1]
#  [ 1 -2]
#  [ 2  1]]
q, r = np.linalg.qr(A, mode='complete')
print(q.shape) # (3, 3)
print(q)
# [[-0.40824829  0.34503278 -0.84515425]
#  [-0.40824829 -0.89708523 -0.16903085]
#  [-0.81649658  0.27602622  0.50709255]]
print(r.shape) # (3, 2)
print(r)
# [[-2.44948974 -0.40824829]
#  [ 0.          2.41522946]
#  [ 0.          0.        ]]
print(np.dot(q, r))
# [[ 1.  1.]
#  [ 1. -2.]
#  [ 2.  1.]]
a = np.allclose(np.dot(q, q.T), np.eye(3))
print(a) # True
[example 3]
import numpy as np
A = np.array([[1, 1], [1, -2], [2, 1]])
print(A)
# [[ 1  1]
#  [ 1 -2]
#  [ 2  1]]
q, r = np.linalg.qr(A)
print(q.shape) # (3, 2)
print(q)
# [[-0.40824829  0.34503278]
#  [-0.40824829 -0.89708523]
#  [-0.81649658  0.27602622]]
print(r.shape) # (2, 2)
print(r)
# [[-2.44948974 -0.40824829]
#  [ 0.          2.41522946]]
print(np.dot(q, r))
# [[ 1.  1.]
#  [ 1. -2.]
#  [ 2.  1.]]
a = np.allclose(np.dot(q.T, q), np.eye(2))
print(a)  # True (the columns of q are orthonormal)
Cholesky decomposition
L = numpy.linalg.cholesky(a)
Returns the Cholesky decomposition a = L*L.T of the positive definite matrix a, where L is a lower triangular matrix.
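As a side note, a minimal sketch (reusing the non-positive-definite symmetric matrix from the eigenvalue section above) of what happens when the argument is not positive definite: np.linalg.cholesky raises a LinAlgError.
import numpy as np
A = np.arange(16).reshape(4, 4)
A = A + A.T  # symmetric, but one eigenvalue is negative, so not positive definite
try:
    L = np.linalg.cholesky(A)
except np.linalg.LinAlgError as err:
    print(err)  # Matrix is not positive definite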
[example 1]
import numpy as np
A = np.array([[1, 1, 1, 1], [1, 3, 3, 3],
              [1, 3, 5, 5], [1, 3, 5, 7]])
print(A)
# [[1 1 1 1]
# [1 3 3 3]
# [1 3 5 5]
# [1 3 5 7]]
print(np.linalg.eigvals(A))
# [13.13707118 1.6199144 0.51978306 0.72323135]
L = np.linalg.cholesky(A)
print(L)
# [[1.         0.         0.         0.        ]
#  [1.         1.41421356 0.         0.        ]
#  [1.         1.41421356 1.41421356 0.        ]
#  [1.         1.41421356 1.41421356 1.41421356]]
print(np.dot(L, L.T))
# [[1. 1. 1. 1.]
#  [1. 3. 3. 3.]
#  [1. 3. 5. 5.]
#  [1. 3. 5. 7.]]
Norms and other quantities
The norm of a matrix
numpy.linalg.norm(x, ord=None, axis=None, keepdims=False)
Compute the norm of a vector or matrix.
Different norms are computed depending on the ord parameter:
Find the norm of a vector.
import numpy as np
x = np.array([1, 2, 3, 4])
print(np.linalg.norm(x, ord=1))
# 10.0
print(np.sum(np.abs(x)))
# 10
print(np.linalg.norm(x, ord=2))
# 5.477225575051661
print(np.sum(np.abs(x) ** 2) ** 0.5)
# 5.477225575051661
print(np.linalg.norm(x, ord=-np.inf))
# 1.0
print(np.min(np.abs(x)))
# 1
print(np.linalg.norm(x, ord=np.inf))
# 4.0
print(np.max(np.abs(x)))
# 4
Find the norm of the matrix
import numpy as np
A = np.array([[1, 2, 3, 4], [2, 3, 5, 8],
              [1, 3, 5, 7], [3, 4, 7, 11]])
print(A)
# [[1 2 3 4]
# [2 3 5 8]
# [1 3 5 7]
# [3 4 7 11]]
print(np.linalg.norm(A, ord=1)) # 30.0
print(np.max(np.sum(A, axis=0))) # 30
print(np.linalg.norm(A, ord=2))
# 20.24345358700576
print(np.max(np.linalg.svd(A, compute_uv=False)))
# 20.24345358700576
print(np.linalg.norm(A, ord=np.inf)) # 25.0
print(np.max(np.sum(A, axis=1))) # 25
print(np.linalg.norm(A, ord='fro'))
# 20.273134932713294
print(np.sqrt(np.trace(np.dot(A.T, A))))
# 20.273134932713294
The determinant of a square matrix
numpy.linalg.det(a)
Compute the determinant.
Calculate the determinant.
import numpy as np
x = np.array([[1, 2], [3, 4]])
print(x)
# [[1 2]
#  [3 4]]
print(np.linalg.det(x))
# -2.0000000000000004
The rank of a matrix
numpy.linalg.matrix_rank(M, tol=None, hermitian=False)
Returns the rank of the matrix.
Calculate the rank of a matrix.
import numpy as np
I = np.eye(3)  # create an identity matrix
print(I)
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]]
r = np.linalg.matrix_rank(I)
print(r) # 3
I[1, 1] = 0  # set this element to 0
print(I)
# [[1. 0. 0.]
#  [0. 0. 0.]
#  [0. 0. 1.]]
r = np.linalg.matrix_rank(I) # The rank becomes 2
print(r) # 2
Matrix trace
numpy.trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None)
The trace of a square matrix is the sum of the principal diagonal elements.
Calculate the trace of a square matrix.
import numpy as np
x = np.array([[1, 2, 3], [3, 4, 5], [6, 7, 8]])
print(x)
# [[1 2 3]
#  [3 4 5]
#  [6 7 8]]
y = np.array([[5, 4, 2], [1, 7, 9], [0, 4, 5]])
print(y)
# [[5 4 2]
#  [1 7 9]
#  [0 4 5]]
print(np.trace(x))  # the trace of x is equal to the trace of x.T
# 13
print(np.trace(np.transpose(x)))
# 13
print(np.trace(x + y))  # the trace of the sum is equal to the sum of the traces
# 30
print(np.trace(x) + np.trace(y))
# 30
Solve equations and inverse matrices
Inverse matrix
Let A be an n-th order matrix over a number field. If there exists another n-th order matrix B over the same number field such that AB = BA = E (where E is the identity matrix), then B is called the inverse of A, and A is called an invertible matrix.
numpy.linalg.inv(a)
Compute the inverse of matrix a (the necessary and sufficient condition for a to be invertible is det(a) != 0, i.e. a has full rank).
Calculate the inverse of a matrix.
import numpy as np
A = np.array([[1, -2, 1], [0, 2, -1], [1, 1, -2]])
print(A)
# [[ 1 -2  1]
#  [ 0  2 -1]
#  [ 1  1 -2]]
# If the determinant of A is not zero, the inverse matrix exists
A_det = np.linalg.det(A)
print(A_det)
# -2.9999999999999996
A_inverse = np.linalg.inv(A)  # take the inverse of A
print(A_inverse)
# [[ 1.00000000e+00  1.00000000e+00 -1.11022302e-16]
#  [ 3.33333333e-01  1.00000000e+00 -3.33333333e-01]
#  [ 6.66666667e-01  1.00000000e+00 -6.66666667e-01]]
x = np.allclose(np.dot(A, A_inverse), np.eye(3))
print(x) # True
x = np.allclose(np.dot(A_inverse, A), np.eye(3))
print(x) # True
A_companion = A_inverse * A_det  # find the adjugate (companion) matrix of A
print(A_companion)
# [[-3.00000000e+00 -3.00000000e+00  3.33066907e-16]
#  [-1.00000000e+00 -3.00000000e+00  1.00000000e+00]
#  [-2.00000000e+00 -3.00000000e+00  2.00000000e+00]]
Solving linear equations
numpy.linalg.solve(a, b)
Solve linear equations or matrix equations.
Solve linear matrix equations
# x + 2y + z = 7
# 2x - y + 3z = 7
# 3x + y + 2z = 18
import numpy as np
A = np.array([[1, 2, 1], [2, -1, 3], [3, 1, 2]])
b = np.array([7, 7, 18])
x = np.linalg.solve(A, b)
print(x)  # [ 7.  1. -2.]
x = np.linalg.inv(A).dot(b)
print(x)  # [ 7.  1. -2.]
y = np.allclose(np.dot(A, x), b)
print(y) # True
Reference
- www.cnblogs.com/moon1992/p/…
- www.cnblogs.com/moon1992/p/…
Source: DataWhale community group learning