Preface
Data structures and algorithms are the basis of solving all programming problems. A data structure is a way for computers to organize data so that resources can be used efficiently. Understanding various data structures and algorithms adds fun, consolidates knowledge, and improves programming and problem-solving ability. Algorithm complexity can be divided into time complexity and space complexity.
If you are worried this will be boring, you can jump ahead to section 3 below, "Reduce the time complexity of applications", for an example that adds interest.
Big O notation for complexity
How do you measure the efficiency of an algorithm? Usually by the resources it uses, such as CPU (time) usage, memory usage, disk usage, and network usage. When we talk about big O notation, we are generally thinking about CPU (time) usage. Let's try to understand the rules of big O notation through some examples of time complexity.
Time complexity
The complexity can be calculated by taking the number of operations the algorithm performs, removing the low-order terms, and then removing all the constant coefficients.
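For example, here is a minimal sketch of applying this rule (the function and its operation counts are hypothetical, not from the original article):

```js
// A hypothetical function whose loop body performs 2n^2 + 3n + 1 operations in total
function countOps(n) {
  let ops = 0;                  // 1 constant setup operation
  for (var i = 0; i < n; i++) {
    ops += 3;                   // 3n operations across the outer loop
    for (var j = 0; j < n; j++) {
      ops += 2;                 // 2n^2 operations across the nested loop
    }
  }
  return ops;                   // total: 2n^2 + 3n + 1
}
// Remove the low-order terms (3n and 1), then the coefficient (2): O(n^2)
```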
1. Several time complexities
- Constant order O(1)
// O(1)
var n = 100;
var a = 10;
console.log(a);
console.log(n);
// executes four times

// or
for (var i = 0; i < 10000; i++) {
  console.log(i);
}
// executes 10,000 times
Even with tens of thousands of lines like this, the time complexity is O(1), because the running time does not grow with any variable n.
- Linear order O(n)
// O(n)
function a(n) {
  for (var i = 1; i <= n; i++) {
    console.log(i)
  }
}

// O(n^2)
function b(n) {
  for (var i = 1; i <= n; i++) {
    for (var j = 1; j <= n; j++) {
      console.log(j)
    }
  }
}

// O(2n) == O(n)
function c(n) {
  for (var i = 1; i <= n; i++) {
    console.log(i)
  }
  for (var j = 1; j <= n; j++) {
    console.log(j)
  }
}
All three of these pieces of code vary with n. Function a executes a number of times that is linear in n, so its time complexity is O(n). Function b is a nested loop in which the output statement executes 10,000 times when n is 100, so its time complexity is O(n^2); bubble sort, for example, is O(n^2). Function c's two loops are not nested but sequential, so it should be O(2n); since we remove constant coefficients, its time complexity is still O(n).
- Square order O(n^2)
See function b above.
- Cubic order O(n^3)
If function b adds another nested loop, it becomes cubic order O(n^3), and so on; see the sketch below.
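A minimal sketch (my addition, following the same pattern as function b) of what the third nested loop would look like:

```js
// O(n^3): three nested loops, so the innermost statement runs n * n * n times
function cubic(n) {
  for (var i = 1; i <= n; i++) {
    for (var j = 1; j <= n; j++) {
      for (var k = 1; k <= n; k++) {
        console.log(k)
      }
    }
  }
}
```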
- Logarithmic order O(logn)
// O(logn)
function d(n) {
  var i = 1;
  while (i < n) {
    i = i * 2;
  }
}

// O(nlogn)
function e(n) {
  for (var m = 1; m < n; m++) {
    d(n)
  }
}
If function d's loop runs x times, then 2^x = n, so x = log₂n, and the time complexity of this code is O(logn). Function e is linear-logarithmic order O(nlogn): it runs the logarithmic-order code of function d n times, so its time complexity is n × O(logn) = O(nlogn).
- Linear log order O(nlogn)
See function e above.
- Exponential order O(2^n)
The running time of the algorithm doubles with each increment of the input, for example the recursive calculation of the Fibonacci sequence.
The Fibonacci sequence starts at 0, and its first item is 0:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233...
You can see the pattern: the value at position n equals the sum of the previous two elements, that is, arr[n] = arr[n-1] + arr[n-2]. It is implemented recursively as follows:
function fibonacci(n) {
  if (n === 1 || n === 0) return n;
  return fibonacci(n-1) + fibonacci(n-2);
}
If you are interested in the Fibonacci sequence, you can search for more about it; this is only a brief introduction. To see why the recursion above is exponential, see the sketch below.
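A minimal sketch (the `calls` counter is my addition, not part of the original code) that counts how many times the recursive function is invoked:

```js
let calls = 0;
function fib(n) {
  calls++;                      // count every invocation
  if (n === 1 || n === 0) return n;
  return fib(n-1) + fib(n-2);   // each call branches into two more calls
}
fib(20);
console.log(calls); // 21891 calls; the count roughly doubles each time n grows by 1
```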
- O(n!) and O(n^n)
When I started studying, I had no further examples for these two orders, so there is time for follow-up updates. A commonly cited O(n!) illustration is sketched below.
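A minimal sketch (my addition, not from the original article): generating every permutation of an n-element array produces n! results, so the work grows factorially:

```js
// O(n!): recursively builds all permutations of arr
function permutations(arr) {
  if (arr.length <= 1) return [arr];
  const result = [];
  for (let i = 0; i < arr.length; i++) {
    // all permutations of the remaining elements, with arr[i] placed first
    const rest = arr.slice(0, i).concat(arr.slice(i + 1));
    for (const p of permutations(rest)) {
      result.push([arr[i]].concat(p));
    }
  }
  return result;
}
permutations([1, 2, 3]).length; // 6, i.e. 3!
```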
2. Time complexity comparison
Common time complexities, in increasing order, are: O(1) < O(logn) < O(n) < O(nlogn) < O(n^2) < O(n^3) < O(2^n) < O(n!)
Attached here is the Big O quick check table provided in the book; also see the linked complexity quick reference table.
3. Reduce the time complexity of applications
Take an example 🌰: please calculate the sum of 1 to 100. The simplest and crudest way is to add up from 1: 1 + 2 + 3 + … + 100.
// Use the for loop
function f(n) {
  let sum = 0
  for (var i = 0; i <= n; i++) {
    sum += i
  }
  return sum
}
f(100); // 5050

// Use the summation formula n * (n + 1) / 2
function g(n) {
  let sum = n * (n+1) / 2
  return sum
}
g(100); // 5050
The number of executions of function f grows linearly with n, while function g only needs to execute one line of code, so its time complexity is constant order O(1). From the comparison above you can see that if n were 10,000 or 100,000, function f's loop count would grow accordingly, while function g's code stays just as simple and its time complexity stays lower. In practical applications we should also keep thinking about ways to reduce time complexity and keep improving.
Space complexity
The analysis of space complexity is similar to that of time complexity, but simpler. There are two main principles (a sketch follows the list):
- If your code allocates an array, the length of that array is basically your space complexity. For example, if you allocate a one-dimensional array of length n, your space complexity is O(n); if you allocate a two-dimensional array of size n^2, your space complexity is basically O(n^2).
- If there is recursion, the deepest recursion depth is your space complexity. If the recursion also allocates an array, the space complexity is the maximum of the two.
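A minimal sketch (my addition, with hypothetical function names) illustrating both principles:

```js
// Principle 1 -- O(n) space: the allocated array grows with n
function makeArray(n) {
  const arr = new Array(n).fill(0);
  return arr;
}

// Principle 2 -- O(n) space: the call stack reaches depth n before unwinding
function depth(n) {
  if (n === 0) return 0;
  return 1 + depth(n - 1);
}
```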
Finally
I am just a beginner, so corrections from the experts are very welcome; I am not asking for likes, only for supervision and correction.
References
Fibonacci: understanding both time complexity and space complexity