Common complexity classes, listed in increasing order of magnitude and written in big O notation as T(n) = O(f(n)):

- Constant order O(1)
- Logarithmic order O(log n)
- Linear order O(n)
- Linearithmic order O(n log n)
- Square order O(n^2), cubic order O(n^3), ..., O(n^k)
- Exponential order O(2^n)
- Factorial order O(n!)
These can be roughly divided into polynomial and non-polynomial magnitudes. There are only two non-polynomial orders: O(2^n) and O(n!). When an algorithm's time complexity is non-polynomial, its execution time increases sharply as the data size n grows, so the time needed to solve the problem grows without bound. Non-polynomial time complexity therefore marks a very inefficient algorithm.
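To make "increases sharply" concrete, here is a minimal sketch (my example, not from the original text) of a classic exponential-time algorithm, the naive recursive Fibonacci in C:

#include <stdio.h>

// Each call spawns two more, so the call tree holds up to 2^n nodes:
// non-polynomial time. Increasing n by 1 roughly doubles the work.
long long fib(int n) {
    if (n < 2) return n;
    return fib(n - 1) + fib(n - 2);
}

int main(void) {
    printf("%lld\n", fib(40));  // already takes noticeable time around n = 40
    return 0;
}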
Logarithmic time complexity is the most difficult to analyze: O(log n) and O(n log n). Merge sort and quicksort, for example, both have O(n log n) time complexity. Consider the following code:
i = 1;
while (i <= n) {
    i = i * 2;  // i doubles every iteration: 1, 2, 4, 8, ... until i > n
}
The loop stops once i exceeds n. After x iterations i = 2^x, so the loop runs until 2^x = n, that is, x = log_2 n. The time complexity of this code is therefore O(log_2 n). Whether the base is 2, 3, or 10, we can write every logarithmic time complexity as O(log n).
Why can the base be ignored? Because log_3 n = log_3 2 * log_2 n, we have O(log_3 n) = O(C * log_2 n), where C = log_3 2 is a constant. By the rule stated earlier that coefficients can be ignored in big O notation, O(C * f(n)) = O(f(n)), so O(log_2 n) equals O(log_3 n). Hence, for logarithmic-order time complexity, we ignore the "base" of the logarithm and write it uniformly as O(log n).
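As a quick sanity check of the "ignore the base" rule, the same loop with a factor of 3 (my variation, not in the original) runs about log_3 n times, which is still O(log n):

// Counts the iterations of the base-3 version of the loop above.
int log3_steps(int n) {
    int steps = 0;
    for (int i = 1; i <= n; i = i * 3)  // i takes values 1, 3, 9, 27, ...
        steps++;
    return steps;  // about log_3 n, i.e. O(log n) after dropping the constant
}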
Best case, worst case, average case, amortized time complexity
The full name for average time complexity should be weighted average time complexity or expected time complexity.
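In symbols (notation mine, not the original's): if the i-th case occurs with probability p_i and costs t_i(n), the expected time complexity is the weighted average

\[
E[T(n)] = \sum_i p_i \, t_i(n), \qquad \sum_i p_i = 1
\]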
Amortized time complexity has a corresponding analysis method: amortized analysis.
Average time complexity is only used in certain special cases, and amortized time complexity applies to even more special, more limited scenarios.
When a sequence of consecutive operations is performed on a data structure, the time complexity is very low in most cases, with only a few high-complexity cases, and the operations follow one another in a coherent temporal pattern. In that situation we can analyze the whole group of operations together and see whether the cost of the few high-complexity operations can be amortized over the many low-complexity ones. Moreover, wherever amortized analysis can be applied, the amortized time complexity generally equals the best-case time complexity.
Amortized time complexity is essentially a special kind of average time complexity.
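A minimal sketch of the textbook case, in C: a dynamic array whose append doubles the capacity when full (the struct and function names are my own, not from the text). Out of n appends, n - 1 cost O(1) and one costs O(n), so spread over all operations each append is amortized O(1), matching the best case:

#include <stdlib.h>

typedef struct {
    int *data;
    int size;      // elements currently stored
    int capacity;  // allocated slots
} Vec;

void vec_push(Vec *v, int x) {
    if (v->size == v->capacity) {
        // Rare expensive step: O(n) copy into a buffer twice as large.
        v->capacity = v->capacity ? v->capacity * 2 : 1;
        v->data = realloc(v->data, v->capacity * sizeof(int));  // error handling omitted
    }
    v->data[v->size++] = x;  // common cheap step: O(1)
}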
Arrays
Linked lists are suited to insertion and deletion, both with O(1) time complexity. Arrays support random access: reaching an element by subscript is O(1). The precise claim is that random access by subscript is O(1), not that arrays are suited to lookup with O(1) time complexity.
Deleting array elements: you can first mark elements as deleted and then remove them all at once later, in the spirit of the mark-sweep garbage collection algorithm.
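A rough sketch of this lazy-deletion idea in C (the array names and helper functions are mine):

#include <stdbool.h>

#define CAP 100

int  data[CAP];
bool deleted[CAP];  // mark phase: deleting just flips a flag, O(1)
int  count = 0;

void mark_delete(int k) {
    deleted[k] = true;  // nothing is moved yet
}

// Clean phase: one O(n) compaction pass removes every marked slot at once,
// instead of paying an O(n) shift for each individual deletion.
void compact(void) {
    int w = 0;
    for (int r = 0; r < count; r++)
        if (!deleted[r])
            data[w++] = data[r];  // keep only unmarked elements
    count = w;
    for (int i = 0; i < count; i++)
        deleted[i] = false;  // survivors are all live again
}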
Looking at the memory model of array storage, the most accurate definition of "subscript" is "offset": a[0] sits at offset 0, and a[k] sits at offset k * type_size, so its address is:
a[k]_address = base_address + k * type_size
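The formula is easy to verify directly in C; a small sketch:

#include <stdio.h>

int main(void) {
    int a[5];
    char *base = (char *)a;  // base_address
    for (int k = 0; k < 5; k++)
        // &a[k] is exactly base_address + k * type_size
        printf("&a[%d] = %p, base + %zu = %p\n",
               k, (void *)&a[k], k * sizeof(int),
               (void *)(base + k * sizeof(int)));
    return 0;
}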
However, if array subscripts started from 1, the memory address of element a[k] would be calculated as:
a[k]_address = base_address + (k - 1) * type_size
Comparing the two formulas, it is not difficult to see that numbering from 1 adds a subtraction to every random access, i.e., one extra subtraction instruction for the CPU. For this reason, along with historical ones, the C language designers chose to start array subscripts at 0.
Memory addressing for two-dimensional arrays:
For an m * n array, the address of a[i][j] (i < m, j < n) is:
address = base_address + (i * n + j) * type_size
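This, too, can be checked directly; a small sketch for a 3 * 4 array in C:

#include <stdio.h>

int main(void) {
    int a[3][4];  // m = 3, n = 4, stored row by row
    int i = 2, j = 3;
    char *base = (char *)a;  // base_address
    // address = base_address + (i * n + j) * type_size
    printf("formula: %p, actual: %p\n",
           (void *)(base + (i * 4 + j) * sizeof(int)),
           (void *)&a[i][j]);
    return 0;
}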