Why complexity
Good data structures and algorithms can greatly reduce a program's execution time and storage space, but how do we measure that? The simplest, most direct way to judge the efficiency of a piece of code is to run it on a machine and time it. However, this approach has serious limitations:
- Results depend on the test environment: the same code can produce very different timings on different systems and processors.
- Results depend on the data itself and its size: different inputs, and inputs of different sizes, can dramatically change the measurements.
Therefore, we need a rough way to estimate an algorithm's efficiency that does not depend on the external environment. That is why complexity analysis was introduced.
How to express complexity
How do we express the complexity of an algorithm? Concretely, by the time the code takes to execute and the memory it consumes while executing. Consider:
```javascript
function fn(n) {
  let sum = 0;          // 1 unit-time
  let i = 0;            // 1 unit-time
  for (; i <= n; i++) { // n unit-times
    sum += i;           // n unit-times
  }
  return sum;
}
```
From the CPU's point of view, every line of code does the same kind of work: read data, operate on it, write data. The exact number of CPU operations and their duration vary from line to line, but for a rough estimate we can assume each line takes the same amount of time, which we call unit-time. The execution time of the code above is then T(n) = (2n + 2) × unit-time.
More generally, we can write T(n) = O(f(n)), where:
- n: the size of the data
- f(n): the total number of times the code executes
- O: indicates that the execution time T(n) is proportional to f(n)
When n is large, say n = 1,000,000 or more, the constants and lower-order terms in f(n) barely matter, so for the code above T(n) = O(2n + 2) can simply be written as T(n) = O(n).
This is big O notation for time complexity. Big O does not represent the actual execution time of the code; it describes the trend of how execution time grows as the data size increases. For this reason it is also called asymptotic time complexity, or simply time complexity.
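To see why the constants and lower-order terms drop out, here is a quick sketch of my own (the helper name `exactSteps` is not from the original):

```javascript
// Exact unit-time count for fn(n) above: 2 assignments + n loop steps + n additions.
// The original roughly counts this as (2n + 2) unit-times.
function exactSteps(n) {
  return 2 * n + 2;
}

console.log(exactSteps(10));      // 22
console.log(exactSteps(1000000)); // 2000002
// As n grows, exactSteps(n) / n approaches the constant 2, which big O ignores:
// the growth trend is linear, so T(n) = O(n).
console.log(exactSteps(1000000) / 1000000); // ≈ 2
```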
Time complexity
As n approaches infinity, T(n) is dominated by the highest-order term of n; constants, lower-order terms, and coefficients matter less and less. So when we analyze the time complexity of a piece of code, we only need to focus on the part that executes the most times.
- Case 1
```javascript
function fun1(n) {
  let sum = 0;
  for (let i = 0; i <= n; i++) {
    sum += i;
  }
  return sum;
}
```
The time complexity here is determined by the loop on line 3, so it is O(n).
- Case 2
```javascript
function fun2(n) {
  let sum = 0;
  for (let i = 0; i <= n; i++) { // loop 1
    sum += i;
  }
  for (let i = 0; i <= n; i++) { // loop 2
    for (let j = 0; j <= n; j++) {
      sum += i * j;
    }
  }
  return sum;
}
```
Loop 1 takes O(n) time and loop 2 takes O(n²) time. Since we keep only the highest-order term, the overall time complexity of fun2 is O(n²).
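One way to check this empirically is to count how many times each loop body runs (the counter names below are my own, not from the original):

```javascript
// Count the iterations of each loop in fun2-style code.
function countLoops(n) {
  let loop1 = 0, loop2 = 0;
  for (let i = 0; i <= n; i++) loop1++;   // loop 1: n + 1 iterations
  for (let i = 0; i <= n; i++) {
    for (let j = 0; j <= n; j++) loop2++; // loop 2: (n + 1)^2 iterations
  }
  return { loop1, loop2 };
}

const { loop1, loop2 } = countLoops(1000);
console.log(loop1); // 1001    — grows like n
console.log(loop2); // 1002001 — grows like n^2 and dominates the total
```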
- Case 3
```javascript
function fun3(n) {
  let sum = 0;
  for (let i = 0; i <= n; i++) {
    sum += fun1(i); // fun1 from Case 1 is O(n)
  }
  return sum;
}
```
Here the loop runs n times, and each iteration calls fun1, which is itself O(n), so the time complexity is O(n × n) = O(n²).
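Counting the total work done inside the helper calls confirms this multiplication rule; the sketch below is my own and simply inlines fun1's loop (the name `countInner` is hypothetical):

```javascript
// fun3 calls an O(n)-style helper once per outer iteration.
// Total inner-loop iterations: sum over i of (i + 1) ≈ n^2 / 2, i.e. O(n^2).
function countInner(n) {
  let count = 0;
  for (let i = 0; i <= n; i++) {
    for (let j = 0; j <= i; j++) count++; // the work done by fun1(i)
  }
  return count;
}

console.log(countInner(100)); // 5151 ≈ 100^2 / 2 — quadratic growth
```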
- Case 4
```javascript
let i = 1;
while (i < n) {
  i *= 2;
}
```
In this loop, i is multiplied by 2 on each iteration until i ≥ n, so i takes the values 2⁰, 2¹, 2², 2³, …. If the loop executes x times in total, then 2ˣ = n, i.e. x = log₂n, so the time complexity is O(log₂n).
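A small sketch of my own that counts the iterations of this doubling loop makes the logarithm concrete (the name `countDoublings` is hypothetical):

```javascript
// Count iterations of: i starts at 1 and doubles until i >= n.
function countDoublings(n) {
  let i = 1, count = 0;
  while (i < n) {
    i *= 2;
    count++;
  }
  return count; // equals ceil(log2(n)) for n >= 1
}

console.log(countDoublings(1024));    // 10, since 2^10 = 1024
console.log(countDoublings(1000000)); // 20, since 2^20 = 1048576 >= 1000000
```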
Conversely, when code contains no loops or recursive calls, even thousands of lines of it run in O(1) time.
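For example, a minimal O(1) sketch of my own (the function name is hypothetical):

```javascript
// No loops or recursion: the same fixed number of operations runs
// no matter how large the input is, so the time complexity is O(1).
function firstPlusLast(arr) {
  if (arr.length === 0) return 0;
  return arr[0] + arr[arr.length - 1];
}

console.log(firstPlusLast([1, 2, 3]));       // 4
console.log(firstPlusLast([5, 5, 5, 5, 5])); // 10
```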
Space complexity
Time complexity describes how an algorithm's execution time grows with the data size; space complexity describes how its storage requirements grow with the data size.
```javascript
function fun(n) {
  let a = [];
  for (let i = 0; i < n; i++) {
    a.push(i);
  }
  return a;
}
```
This code uses O(1) space for the variable i plus O(n) space for the array a, so its space complexity is O(1 + n) = O(n).
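For contrast, here is a sketch of my own with O(1) space: it computes a result over n items while keeping only a fixed number of variables (the name `sumConstantSpace` is hypothetical):

```javascript
// Only two variables (sum, i) are kept regardless of n,
// so the space complexity is O(1), even though the time is O(n).
function sumConstantSpace(n) {
  let sum = 0;
  for (let i = 0; i < n; i++) {
    sum += i;
  }
  return sum;
}

console.log(sumConstantSpace(5)); // 10 (0 + 1 + 2 + 3 + 4)
```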