Preface
Hello, friend! ଘ(੭, ᵕ)੭
Nickname: Haihong
Profile: programmer | C++ player | student
Because of the C language I got acquainted with programming, then transferred to a computer science major, and had the honor of winning some national and provincial awards; my postgraduate place has been confirmed. Currently learning C++/Linux/Python.
Learning philosophy: solid foundation + more notes + more code + more thinking + learn English!
I am still a beginner in machine learning; this article is only my own study note, written to build a knowledge system, to review, and to understand the why behind things.
Articles in this series
Matrix Theory for Machine Learning (1): Sets and Mappings
Matrix Theory for Machine Learning (2): Definitions and Properties of Linear Spaces
Matrix Theory for Machine Learning (3): Bases and Coordinates of Linear Spaces
Matrix Theory for Machine Learning (4): Basis Transformations and Coordinate Transformations
Matrix Theory for Machine Learning (5): Linear Subspaces
Matrix Theory for Machine Learning (6): Intersection and Sum of Subspaces
Matrix Theory for Machine Learning (7): Euclidean Spaces
Matrix Theory for Machine Learning (8): Orthonormal Bases and the Gram-Schmidt Process
Matrix Theory for Machine Learning (9): Orthogonal Complements and the Projection Theorem
Matrix Theory for Machine Learning (10): Definition of Linear Transformations
Matrix Theory for Machine Learning (11): Matrix Representation of Linear Transformations
Matrix Theory for Machine Learning (12): Approximation Theory
Matrix Theory for Machine Learning (13): Hamilton-Cayley Theorem and Minimal Polynomials
Matrix Theory for Machine Learning (14): Vector Norms and Their Properties
Matrix Theory for Machine Learning (15): Matrix Norms
5.1 Limits of Vectors and Matrices
5.1.1 Vector sequence limits
Definition 5.1
Let $\{\boldsymbol\chi^{(k)}\}$ be a vector sequence in the given $n$-dimensional vector space $C^n$, where
$$\boldsymbol\chi^{(k)}=\left(\xi_1^{(k)},\xi_2^{(k)},\dots,\xi_n^{(k)}\right),\quad k=1,2,\dots$$
If every component $\xi_i^{(k)}$ has a limit $\xi_i$ as $k\rightarrow\infty$, i.e.
$$\lim_{k\rightarrow\infty}\xi_i^{(k)}=\xi_i,\quad i=1,2,\dots,n$$
then, writing $\boldsymbol\chi=(\xi_1,\xi_2,\dots,\xi_n)$,
the vector sequence $\{\boldsymbol\chi^{(k)}\}$ is said to have the limit $\boldsymbol\chi$, or $\{\boldsymbol\chi^{(k)}\}$ is said to converge to $\boldsymbol\chi$, denoted
$$\lim_{k\rightarrow\infty}\boldsymbol\chi^{(k)}=\boldsymbol\chi\quad\text{or}\quad\boldsymbol\chi^{(k)}\rightarrow\boldsymbol\chi\ (k\rightarrow\infty)$$
For short, $\{\boldsymbol\chi^{(k)}\}$ is convergent.
The vector sequence $\{\boldsymbol\chi^{(k)}\}$ contains infinitely many terms. A simple way to see it: for the terms $\boldsymbol\chi^{(1)},\boldsymbol\chi^{(2)},\boldsymbol\chi^{(3)},\dots,\boldsymbol\chi^{(k)},\dots$, as $k$ increases, $\boldsymbol\chi^{(k)}$ gradually approaches the vector $\boldsymbol\chi$.
By the definition of the limit of a vector sequence, the vector sequence $\{\boldsymbol\chi^{(k)}\}$ converges to $\boldsymbol\chi$ if and only if

- $\{\boldsymbol\chi^{(k)}-\boldsymbol\chi\}$ converges to the zero vector

or, equivalently,

- for any vector norm $||\cdot||$, the number sequence $\{||\boldsymbol\chi^{(k)}-\boldsymbol\chi||\}$ converges to zero
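To make this concrete, here is a minimal numerical sketch in numpy; the particular sequence $\boldsymbol\chi^{(k)}=(1+1/k,\ 2-1/k^2)$ and all names below are my own illustration, not from the textbook. It shows that componentwise convergence and convergence of $||\boldsymbol\chi^{(k)}-\boldsymbol\chi||$ to zero go hand in hand, regardless of which norm is used.

```python
import numpy as np

# Illustrative (made-up) vector sequence chi_k = (1 + 1/k, 2 - 1/k^2),
# which converges componentwise to chi = (1, 2).
chi = np.array([1.0, 2.0])

def chi_k(k):
    """k-th term of the vector sequence."""
    return np.array([1.0 + 1.0 / k, 2.0 - 1.0 / k**2])

for k in [1, 10, 100, 10000]:
    diff = chi_k(k) - chi
    print(f"k={k:6d}  chi_k={chi_k(k)}  "
          f"||diff||_1={np.linalg.norm(diff, 1):.2e}  "
          f"||diff||_2={np.linalg.norm(diff, 2):.2e}  "
          f"||diff||_inf={np.linalg.norm(diff, np.inf):.2e}")
# Every component tends to the corresponding component of chi,
# and ||chi_k - chi|| tends to 0 for the 1-, 2-, and infinity-norms alike.
```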
5.1.2 Square matrix sequence limit
Definition 5.2
Given a square matrix sequence $\{A_k\}$ in $C^{n\times n}$, where
$$A_k=\left(a_{ij}^{(k)}\right)_{n\times n},\quad k=1,2,\dots$$
If, as $k\rightarrow\infty$, each of the $n^2$ complex number sequences $\{a_{ij}^{(k)}\}$ converges to $a_{ij}$, i.e.
$$\lim_{k\rightarrow\infty}a_{ij}^{(k)}=a_{ij},\quad i,j=1,2,\dots,n$$
then the matrix $A=\left(a_{ij}\right)_{n\times n}$ is called the limit of $\{A_k\}$ as $k\rightarrow\infty$, denoted
$$\lim_{k\rightarrow\infty}A_k=A\quad\text{or}\quad A_k\rightarrow A\ (k\rightarrow\infty)$$
and the sequence $\{A_k\}$ is said to converge to $A$.
If $\{A_k\}$ does not converge, the square matrix sequence is said to be divergent.
As long as even one element $a_{ij}^{(k)}$ of $\{A_k\}$ fails to converge as $k\rightarrow\infty$, the sequence $\{A_k\}$ does not converge; only when all elements $a_{ij}^{(k)}$ converge as $k\rightarrow\infty$ does $\{A_k\}$ converge.
Theorem 5.1.1
The matrix sequence $\{A_k\}$ in $C^{n\times n}$ converges to the square matrix $A$ if and only if, for any square matrix norm $||\cdot||$, the number sequence $\{||A_k-A||\}$ converges to zero.
$\{A_k\}$ is a sequence of matrices, namely $A_1,A_2,A_3,\dots,A_k,\dots$, while $\{||A_k||\}$ is an ordinary number sequence, namely $||A_1||,||A_2||,\dots,||A_k||,\dots$; since each $||A_k||$ is just a single number, $\{||A_k||\}$ is an ordinary scalar sequence, and the same is true of $\{||A_k-A||\}$ in the theorem.
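Below is a small numpy sketch of Theorem 5.1.1; the sequence $A_k=A+\frac{1}{k}B$ and the matrices used are made up for illustration only. It checks that $||A_k-A||$ goes to zero under several different matrix norms.

```python
import numpy as np

# Illustrative (made-up) matrix sequence A_k = A + B/k, which converges to A.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
B = np.array([[1.0, -1.0],
              [2.0,  0.5]])

def A_k(k):
    """k-th term of the matrix sequence."""
    return A + B / k

for k in [1, 10, 100, 10000]:
    D = A_k(k) - A
    print(f"k={k:6d}  "
          f"||A_k-A||_1={np.linalg.norm(D, 1):.2e}  "
          f"||A_k-A||_F={np.linalg.norm(D, 'fro'):.2e}  "
          f"||A_k-A||_inf={np.linalg.norm(D, np.inf):.2e}")
# All of these matrix norms of A_k - A tend to zero, consistent with
# Theorem 5.1.1: convergence does not depend on which norm is chosen.
```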
Properties of convergent square matrix sequences in $C^{n\times n}$

(1) If $\lim_{k\rightarrow\infty}A_k=A$, then for any square matrix norm $||\cdot||$ on $C^{n\times n}$, $||A_k||$ is bounded.

(2) If $\lim_{k\rightarrow\infty}A_k=A$, $\lim_{k\rightarrow\infty}B_k=B$, and $\lim_{k\rightarrow\infty}a_k=a$, $\lim_{k\rightarrow\infty}b_k=b$, where $\{a_k\},\{b_k\}$ are number sequences, then
$$\lim_{k\rightarrow\infty}(a_kA_k+b_kB_k)=aA+bB$$

(3) If $\lim_{k\rightarrow\infty}A_k=A$ and $\lim_{k\rightarrow\infty}B_k=B$, then
$$\lim_{k\rightarrow\infty}A_kB_k=AB$$

(4) If $\lim_{k\rightarrow\infty}A_k=A$, and $A_k^{-1}$ and $A^{-1}$ exist, then
$$\lim_{k\rightarrow\infty}A_k^{-1}=A^{-1}$$

A quick numerical check of properties (2)–(4) follows below.
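This is a small numpy check of properties (2)–(4); the sequences $A_k=A+\frac{1}{k}P$, $B_k=B+\frac{1}{k}Q$ and the scalar sequences are made up for illustration, and a large index $k$ stands in for the limit.

```python
import numpy as np

# Illustrative (made-up) sequences: A_k = A + P/k and B_k = B + Q/k
# converge to A and B respectively.
A = np.array([[2.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [3.0, 2.0]])
P = np.array([[1.0, 1.0], [1.0, 1.0]])
Q = np.array([[0.5, -1.0], [2.0, 0.0]])

def A_k(k): return A + P / k
def B_k(k): return B + Q / k

k = 10**6                                   # large index, standing in for k -> infinity
a_k, b_k = 3.0 + 1.0 / k, -2.0 + 1.0 / k    # scalar sequences with limits a = 3, b = -2

# (2) limit of a linear combination: a_k*A_k + b_k*B_k -> a*A + b*B
print(np.allclose(a_k * A_k(k) + b_k * B_k(k), 3.0 * A - 2.0 * B, atol=1e-4))
# (3) limit of a product: A_k @ B_k -> A @ B
print(np.allclose(A_k(k) @ B_k(k), A @ B, atol=1e-4))
# (4) limit of the inverse (A_k and A are invertible here): A_k^{-1} -> A^{-1}
print(np.allclose(np.linalg.inv(A_k(k)), np.linalg.inv(A), atol=1e-4))
```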
Positive integer powers of a Jordan block

Let
$$J=\begin{pmatrix}\lambda&1&&\\&\lambda&\ddots&\\&&\ddots&1\\&&&\lambda\end{pmatrix}$$
be a Jordan block of order $r$, and write
$$J=\lambda E+N,\quad N=\begin{pmatrix}0&1&&\\&0&\ddots&\\&&\ddots&1\\&&&0\end{pmatrix}$$
where $E$ is the identity matrix of order $r$. Since $\lambda E$ and $N$ commute, the binomial theorem gives
$$J^k=(\lambda E+N)^k=\sum_{i=0}^{k}\binom{k}{i}\lambda^{k-i}N^i$$
Because $N^i$ has 1s on the $i$-th superdiagonal and zeros elsewhere, and $N^i=O$ for $i\geq r$, there is
$$J^k=\begin{pmatrix}\lambda^k&\binom{k}{1}\lambda^{k-1}&\cdots&\binom{k}{r-1}\lambda^{k-r+1}\\&\lambda^k&\ddots&\vdots\\&&\ddots&\binom{k}{1}\lambda^{k-1}\\&&&\lambda^k\end{pmatrix}$$
Similarly, for a numerical variable $x$, from
$$\frac{1}{i!}\frac{d^i}{dx^i}x^k=\binom{k}{i}x^{k-i}$$
it follows that the entry on the $i$-th superdiagonal of $J^k$ equals $\frac{1}{i!}\left(x^k\right)^{(i)}\Big|_{x=\lambda}$.
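As a sanity check, the following numpy snippet (with illustrative values for $\lambda$, the block order $r$, and the power $k$ chosen by me) compares a directly computed Jordan block power with the binomial formula above.

```python
import numpy as np
from math import comb

# Illustrative 3x3 Jordan block with eigenvalue lam.
lam, r, k = 0.5, 3, 7
N = np.diag(np.ones(r - 1), 1)          # nilpotent part: 1s on the superdiagonal
J = lam * np.eye(r) + N                 # J = lam*E + N

# Direct power
Jk_direct = np.linalg.matrix_power(J, k)

# Power via the binomial formula J^k = sum_i C(k, i) * lam^(k-i) * N^i
# (terms with i >= r vanish because N^i = O)
Jk_formula = sum(comb(k, i) * lam ** (k - i) * np.linalg.matrix_power(N, i)
                 for i in range(min(k, r - 1) + 1))

print(np.allclose(Jk_direct, Jk_formula))   # True
print(Jk_direct)
```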
Theorem 5.1.2
For $A\in C^{n\times n}$, the powers $E,A,A^2,\dots,A^k,\dots$ form the matrix sequence $\{A^k\}$. The sequence $\{A^k\}$ converges to the zero matrix if and only if the moduli of all eigenvalues of $A$ are less than 1.
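A brief numpy illustration of Theorem 5.1.2, with two made-up matrices: $B$ has all eigenvalue moduli below 1, $C$ does not.

```python
import numpy as np

def spectral_radius(M):
    """Largest modulus among the eigenvalues of M."""
    return max(abs(np.linalg.eigvals(M)))

# Illustrative matrices: rho(B) < 1, rho(C) > 1.
B = np.array([[0.4, 0.3], [0.1, 0.5]])
C = np.array([[1.1, 0.0], [0.2, 0.7]])

for name, M in [("B", B), ("C", C)]:
    Mk = np.linalg.matrix_power(M, 100)
    print(f"{name}: rho = {spectral_radius(M):.3f},  "
          f"||M^100||_F = {np.linalg.norm(Mk, 'fro'):.3e}")
# The powers of B shrink toward the zero matrix (rho(B) < 1),
# while the powers of C blow up (rho(C) > 1), matching Theorem 5.1.2.
```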
Proposition
If $||A||_a$ is a square matrix norm compatible with the vector norm $||\boldsymbol\chi||_a$, then $\rho(A)\leq||A||_a$, where $\rho(A)$ is the spectral radius of $A$, i.e. the largest modulus of its eigenvalues.
Theorem 5.1.3
$A^k\rightarrow O\ (k\rightarrow\infty)$ if and only if there exists at least one square matrix norm $||\cdot||$ such that $||A||<1$.
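The following numpy check ties the proposition and Theorem 5.1.3 together, reusing the illustrative matrix from the Theorem 5.1.2 sketch: every listed matrix norm is an upper bound for the spectral radius, and since at least one of them is below 1, the powers of the matrix must tend to the zero matrix.

```python
import numpy as np

# Illustrative matrix (same as B above) with spectral radius < 1.
A = np.array([[0.4, 0.3], [0.1, 0.5]])
rho = max(abs(np.linalg.eigvals(A)))

norms = {
    "1-norm":    np.linalg.norm(A, 1),
    "2-norm":    np.linalg.norm(A, 2),
    "inf-norm":  np.linalg.norm(A, np.inf),
    "Frobenius": np.linalg.norm(A, 'fro'),
}
print(f"rho(A) = {rho:.3f}")
for name, val in norms.items():
    print(f"||A|| ({name}) = {val:.3f}, >= rho(A): {val >= rho}")
# Each of these norms bounds rho(A) from above (the proposition), and since
# some norm of A is already < 1, Theorem 5.1.3 guarantees A^k -> O.
```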
Conclusion
Notes:
- Reference: the textbook Matrix Theory
- The explanations combine the book's concepts with some of my own understanding and thinking

This essay is just a study note, recording a process from 0 to 1.
I hope it is of a little help to you; if there are mistakes, corrections are very welcome.