1. Time complexity
(1) Time frequency The time an algorithm takes to execute cannot be calculated theoretically; it can only be known by running tests on a computer. But we neither can nor need to test every algorithm on a machine; we only need to know which algorithm takes more time and which takes less. The time an algorithm takes is proportional to the number of statements executed in it: the algorithm that executes more statements takes more time. The number of statement executions in an algorithm is called its statement frequency or time frequency, denoted T(n).
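To make the idea of counting statement executions concrete, here is a minimal sketch (a hypothetical example written for illustration, not taken from the original article). The statement inside the loop runs exactly n times, so the time frequency of this method grows linearly with n.

```java
// Hypothetical example: counting how many times each statement runs for input size n.
public class TimeFrequencyDemo {
    // Sums 1..n; the statement inside the loop executes exactly n times,
    // so its time frequency is T(n) = n plus a few constant-cost statements.
    static long sum(int n) {
        long total = 0;          // executes 1 time
        for (int i = 1; i <= n; i++) {
            total += i;          // executes n times
        }
        return total;            // executes 1 time
    }

    public static void main(String[] args) {
        System.out.println(sum(100)); // 5050
    }
}
```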
(2) Time complexity In the time frequency just mentioned, n is called the size of the problem. As n changes, the time frequency T(n) changes with it. But sometimes we want to know how it changes, and for this we introduce the concept of time complexity. In general, the number of times the basic operations in an algorithm are repeated is a function of the problem size n, written T(n). If there is some auxiliary function f(n) such that, as n approaches infinity, the limit of T(n)/f(n) is a constant not equal to zero, then f(n) is said to be of the same order of magnitude as T(n). This is written T(n) = O(f(n)), and O(f(n)) is called the asymptotic time complexity of the algorithm, or time complexity for short.
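As a worked illustration of this definition (the numbers are chosen for illustration only, they do not come from the original article), take T(n) = 3n² + 2n + 1 and the auxiliary function f(n) = n²:

```latex
% Worked example with illustrative values:
\[
  \lim_{n \to \infty} \frac{T(n)}{f(n)}
  = \lim_{n \to \infty} \frac{3n^2 + 2n + 1}{n^2}
  = \lim_{n \to \infty} \left( 3 + \frac{2}{n} + \frac{1}{n^2} \right)
  = 3 \neq 0.
\]
```

Since the limit is the nonzero constant 3, T(n) and n² are of the same order of magnitude, and therefore T(n) = O(n²).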
In addition, the Landau notation used in the formula above was actually introduced by the German number theorist Paul Bachmann in his 1892 book Analytic Number Theory and popularized by another German number theorist, Edmund Landau. The purpose of the Landau notation is to describe the behavior of complicated functions in terms of simple ones, giving an upper or lower (exact) bound. In algorithm complexity analysis, only the big O symbol is used; the little o symbol and the Θ symbol of the Landau system are not as common. The O was originally a capital Greek omicron but is now written as the capital Latin letter O; the little o is likewise the lowercase Latin letter o, while Θ remains the uppercase Greek letter theta.
T(n) = O(f(n)) means that there is a constant C such that T(n) ≤ C·f(n) as n approaches infinity. In simple terms, as n approaches infinity, T(n) is at most about the same size as f(n); in other words, C·f(n) is an upper bound on T(n). This does not pin down f(n) uniquely, but we usually take the simplest possible function. For example, O(2n² + n + 1) = O(3n² + n + 3) = O(7n² + n) = O(n²). Notice that the constant C is hidden inside the big O, so f(n) carries no coefficient. If you think of T(n) as a tree, then O(f(n)) is its trunk: you care only about the trunk and leave out all the other details.
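To see the hidden constant C explicitly (an illustrative bound with values chosen for simplicity, not from the original article), take T(n) = 2n² + n + 1. For every n ≥ 1,

```latex
% Illustrative bound showing the constant C absorbed by big O:
\[
  2n^2 + n + 1 \;\le\; 2n^2 + n^2 + n^2 \;=\; 4n^2,
\]
```

so T(n) ≤ C·n² with C = 4, i.e. T(n) = O(n²); the coefficient 2 and the lower-order terms are absorbed into C, which is why they never appear in f(n).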
In different algorithms, if the number of statements executed is a constant, the time complexity is O(1). Moreover, algorithms with different time frequencies can have the same time complexity: for example, T(n) = n² + 3n + 4 and T(n) = 4n² + 2n + 1 have different frequencies, but both have time complexity O(n²). In order of increasing magnitude, the common time complexities are: constant order O(1), logarithmic order O(log₂n), linear order O(n), linear-logarithmic order O(nlog₂n), square order O(n²), cubic order O(n³), ..., k-th power order O(nᵏ), and exponential order O(2ⁿ). As the problem size n increases, these time complexities grow and the execution efficiency of the algorithm decreases.
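The following sketches show what a few of these common orders typically look like in code (hypothetical Java examples written for illustration; the class and method names are my own, not from the original article):

```java
// Hypothetical examples of common time-complexity orders.
public class ComplexityExamples {
    // O(1): constant order -- a fixed number of statements, independent of n.
    static int first(int[] a) {
        return a[0];
    }

    // O(log2 n): logarithmic order -- the search range is halved on every iteration.
    static int binarySearch(int[] sorted, int key) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (sorted[mid] == key) return mid;
            if (sorted[mid] < key) lo = mid + 1; else hi = mid - 1;
        }
        return -1;
    }

    // O(n): linear order -- the loop body runs n times.
    static long sum(int[] a) {
        long s = 0;
        for (int x : a) s += x;
        return s;
    }

    // O(n^2): square order -- two nested loops over n elements.
    static int countDuplicatePairs(int[] a) {
        int count = 0;
        for (int i = 0; i < a.length; i++)
            for (int j = i + 1; j < a.length; j++)
                if (a[i] == a[j]) count++;
        return count;
    }
}
```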
As the comparison of growth rates shows, we should use a polynomial-order O(nᵏ) algorithm whenever possible rather than an exponential-order one.
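A quick way to see why (a small illustrative program, not from the original article): print n², n³ and 2ⁿ for a few values of n and compare how fast they grow.

```java
// Illustrative comparison of polynomial versus exponential growth.
public class GrowthComparison {
    public static void main(String[] args) {
        for (int n : new int[]{10, 20, 30}) {
            System.out.printf("n=%d  n^2=%d  n^3=%d  2^n=%d%n",
                    n, (long) n * n, (long) n * n * n, 1L << n);
        }
        // At n = 30, n^3 is only 27,000 while 2^n is already 1,073,741,824,
        // which is why exponential algorithms become impractical so quickly.
    }
}
```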