
Notes on time and space complexity analysis

What is complexity analysis?

Data structures and algorithms address how to store and process data more cheaply and more quickly, so we need a way to measure an algorithm's execution efficiency and resource consumption. That method is complexity analysis.

Big O complexity notation

The execution efficiency of an algorithm is, roughly speaking, the execution time of its code. But how do you "eyeball" the execution time of a piece of code without actually running it? Here is a very simple piece of code that sums 1, 2, 3, …, n. Let me walk you through estimating its execution time.

 int cal(int n) {
   int sum = 0;
   int i = 1;
   for (; i <= n; ++i) {
     sum = sum + i;
   }
   return sum;
 }

Assume that every line of code takes the same time to execute, one unit_time. On that assumption, how long does this code take in total? The two initialization statements (int sum = 0; and int i = 1;) each take 1 unit_time, and the loop header and the loop body each run n times, taking 2n * unit_time, so the total execution time of this code is (2n + 2) * unit_time. As you can see, the execution time T(n) of a piece of code is proportional to the total number of times its lines are executed, which we can write as T(n) = O(f(n)), where f(n) is the total number of line executions.

This is the Big O time complexity notation. Big O time complexity does not represent the actual execution time of the code; it represents the trend of the execution time as the data size grows. It is therefore also called asymptotic time complexity, or time complexity for short.

We usually follow these principles when analyzing time complexity:

1. Focus only on the piece of code that is executed the most times

 int cal(int n) {
   int sum = 0;
   int i = 1;
   for (; i <= n; ++i) {
     sum = sum + i;
   }
   return sum;
 }

The first two lines of the function body take constant time and are independent of the size of n, so they have no effect on the complexity. The loop header and the loop body are the code executed the most times, so that is where we focus. As we said earlier, these two lines are executed n times each, so the total time complexity is O(n).

2. Rule of addition: The total complexity is equal to the complexity of the code with the highest order of magnitude
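
For example, here is a minimal sketch (hypothetical code with a made-up name calSum, not from the original) in which an O(n) loop is followed by an O(n^2) nested loop; by the rule of addition, the total complexity is O(n) + O(n^2) = O(n^2):

 int calSum(int n) {
   int sum1 = 0;
   // first part: a single loop, executed n times -> O(n)
   for (int i = 1; i <= n; ++i) {
     sum1 = sum1 + i;
   }

   int sum2 = 0;
   // second part: nested loops, executed n * n times -> O(n^2)
   for (int i = 1; i <= n; ++i) {
     for (int j = 1; j <= n; ++j) {
       sum2 = sum2 + i * j;
     }
   }

   // total: O(n) + O(n^2) = O(n^2), the highest-order piece wins
   return sum1 + sum2;
 }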

3. Multiplication rule: The complexity of nested code is equal to the product of the complexities of the outer and inner code
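
As an illustration (again a hypothetical sketch, with made-up names calNested and f), calNested calls f inside a loop; since f(i) itself contains a loop, the two complexities multiply:

 int calNested(int n) {
   int ret = 0;
   for (int i = 1; i <= n; ++i) {  // outer loop: n iterations -> O(n)
     ret = ret + f(i);             // each call to f(i) loops i times -> O(n)
   }
   return ret;                     // O(n) * O(n) = O(n^2) overall
 }

 int f(int n) {
   int sum = 0;
   for (int i = 1; i <= n; ++i) {
     sum = sum + i;
   }
   return sum;
 }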

Therefore, if the analysis of an algorithm gives a time complexity of T(n) = O(2n+2) or T(n) = O(2n^2 + 2n + 3), the low-order terms, constants and coefficients in the formula can be ignored, that is, T(n) = O(n) or T(n) = O(n^2).

1. O(1)

Generally, as long as an algorithm contains no loop or recursive statements, its time complexity is O(1), even if it has thousands of lines of code.
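
For instance, a minimal snippet (hypothetical, not from the original): no matter how large n is, the following always executes a fixed number of statements, so its time complexity is O(1):

 int i = 8;
 int j = 6;
 int sum = i + j;  // three statements regardless of input size -> O(1)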

2. O(logn), O(nlogn)

 i = 1;
 while (i <= n) {
   i = i * 2;
 }

As you can see from the code, the value of the variable i starts at 1 and is multiplied by 2 on every pass through the loop; when it becomes greater than n, the loop ends. Remember the geometric sequences we learned in high school? The values that i takes form a geometric sequence. If I were to list them one by one, it would look something like this: 2^0, 2^1, 2^2, …, 2^x. The loop stops once 2^x > n, so the number of iterations x is log2n, and the time complexity of this code is O(log2n).

Now, if I change the code a little bit, what is the time complexity of this code?

 i = 1;
 while (i <= n) {
   i = i * 3;
 }

The time complexity of this code is O(log3n). In fact, log3n = log3 2 * log2 n, and since coefficients can be ignored in Big O notation, all logarithmic complexities can be written uniformly as O(logn). O(nlogn), in turn, is just an O(logn) piece of code executed n times; merge sort and quicksort, for example, have time complexity O(nlogn).
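
As a minimal sketch of the O(nlogn) pattern (hypothetical code with a made-up name calNLogN, not from the original), the outer loop below runs n times and the inner loop runs about log2n times:

 int calNLogN(int n) {
   int sum = 0;
   for (int j = 1; j <= n; ++j) {  // outer loop runs n times
     int i = 1;
     while (i <= n) {              // inner loop runs about log2n times
       sum = sum + i;
       i = i * 2;
     }
   }
   return sum;                     // n * O(logn) = O(nlogn)
 }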

Space complexity analysis

The full name of space complexity is asymptotic space complexity, and it represents the growth relationship between an algorithm's storage space and the data size.

The storage space an algorithm occupies in computer memory includes the space occupied by the algorithm itself, the space occupied by the algorithm's input and output data, and the space temporarily occupied by the algorithm while it runs.

The temporary storage an algorithm occupies while running varies from algorithm to algorithm. Some algorithms need only a small number of temporary working units, and that number does not change with the size of the problem; we call such algorithms "in-place", and they are storage-efficient. Other algorithms need a number of temporary working units that depends on the problem size n and grows as n grows; when n is large, they occupy many storage units. Quicksort and merge sort are examples of this kind.
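
To make the contrast concrete, here is a hedged sketch (hypothetical functions sumInPlace and copySequence, not from the original): the first uses a fixed number of temporary variables no matter how large n is, i.e. O(1) extra space, while the second allocates a temporary array whose size grows with n, i.e. O(n) extra space:

 // in-place: a fixed number of temporary variables, regardless of n -> O(1) space
 int sumInPlace(int n) {
   int sum = 0;
   for (int i = 1; i <= n; ++i) {
     sum = sum + i;
   }
   return sum;
 }

 // not in place: the temporary array grows with the problem size -> O(n) space
 int[] copySequence(int n) {
   int[] a = new int[n];
   for (int i = 0; i < n; ++i) {
     a[i] = i + 1;
   }
   return a;
 }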

The space complexity of an algorithm considers only the storage allocated for local variables while it runs, which includes the space allocated for the variables in the parameter list and the space allocated for the local variables defined in the function body. If an algorithm is recursive, its space complexity is the size of the stack space used for the recursion, which equals the temporary storage allocated for a single call times the number of calls (that is, the number of recursive calls plus 1, the 1 being the initial non-recursive call).

The space complexity of an algorithm is also given as an order of magnitude. For example, when an algorithm's space usage is a constant, that is, it does not change with the size n of the data being processed, it can be written as O(1); when it is proportional to the base-2 logarithm of n, it can be written as O(log2n); when it is linearly proportional to n, it can be written as O(n).

If a parameter is an array, only enough space to store one pointer to the address passed by the caller needs to be allocated, that is, one machine word. Likewise, if a parameter is passed by reference, only one address-sized slot is allocated to store the address of the corresponding argument variable, so that the system can automatically dereference the argument.
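
For instance, here is a minimal sketch (hypothetical function sumRecursive, not from the original): it makes n recursive calls plus the initial call, and each stack frame holds a constant amount of data, so its space complexity is O(n):

 int sumRecursive(int n) {
   if (n == 0) {          // recursion depth is n, plus the initial call
     return 0;
   }
   // each of the n + 1 stack frames stores a constant amount of data,
   // so the stack space used is O(n)
   return n + sumRecursive(n - 1);
 }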

Common space complexities are O(1), O(n) and O(n^2); logarithmic complexities such as O(logn) and O(nlogn) do not usually come up. Moreover, space complexity analysis is much simpler than time complexity analysis.
