This is the second day of my participation in the August More Text Challenge

In the last article, we judged an algorithm's performance by timing it on the same set of inputs. That approach still requires writing and running test code, which is cumbersome, so instead we use big O notation to estimate an algorithm's performance.

Recursive method for calculating Fibonacci numbers

The previous article computed the nth term of the Fibonacci sequence in two ways. Now let's look at how those two algorithms are expressed in big O notation, and finally at how big O notation describes the complexity of an algorithm.

def fib(n):
    if n<=1:
        return n
    else:
        return fib(n-1)+fib(n-2)

The recursive method that computes the nth term of the Fibonacci sequence calls itself many times, so let's look at an example.

If n = 3, fib(3) computes fib(2) + fib(1); fib(2) in turn computes fib(1) + fib(0); and fib(1) and fib(0) return directly.

So when n=3, fib is executed five times:

graph TD
A["fib(3)"]-->B["fib(2)"]
A-->C["fib(1)"]
B-->D["fib(1)"]
B-->E["fib(0)"]

The relationship between n and the number of function calls can be expressed as T(3) = 5

When n = 4, fib(4) computes fib(3) + fib(2). fib(3) in turn computes fib(2) + fib(1), so across the whole call tree there are two fib(2) calls, and each fib(2) computes fib(1) + fib(0).

So when n=4, the whole function call will execute as follows:

graph TD
A["fib(4)"]-->B["fib(3)"]
B-->F["fib(2)"]
B-->G["fib(1)"]
F-->H["fib(1)"]
F-->I["fib(0)"]
A-->C["fib(2)"]
C-->D["fib(1)"]
C-->E["fib(0)"]

So fib() is called 1 + 2 + 4 + 2 = 9 times. The relationship between n and the number of function calls can be expressed as T(4) = 9.

Let's draw the call tree when n = 5:

graph TD
A["fib(5)"]-->B["fib(4)"]
A-->C["fib(3)"]
B-->D["fib(3)"]
B-->E["fib(2)"]
D-->F["fib(2)"]
D-->G["fib(1)"]
F-->H["fib(1)"]
F-->I["fib(0)"]
E-->J["fib(1)"]
E-->K["fib(0)"]
C-->L["fib(2)"]
C-->M["fib(1)"]
L-->N["fib(1)"]
L-->O["fib(0)"]

Counting the calls to fib() gives 1 + 2 + 4 + 6 + 2 = 15 times.

This is close to a full binary tree: 2^0 + 2^1 + 2^2 + 2^3 = 2^4 - 1 = 15.

So how do we write this in general terms?

3 -> 5

4 -> 9

5 -> 15

The count roughly doubles each time n increases by 1, so approximately a_n = 2^(n-1) - 1 (for n = 5, 2^4 - 1 = 15; the exact count satisfies T(n) = T(n-1) + T(n-2) + 1, which also grows exponentially).

In big O notation: O(2^n)

Why can we say that? We'll explain shortly.

The time complexity of ordinary loops

Let's take a look at how much work the ordinary loop method does.

def fib(n):
    if n <= 1:
        return n
    n1 = 0
    n2 = 1
    for i in range(n - 1):
        res = n1 + n2
        n1 = n2  # update n1 and n2 as the loop advances
        n2 = res
    return res

The loop body runs n - 1 times, so the relationship between n and the amount of work is roughly a_n = n.

In big O notation, O(N).
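As a sanity check, here is a sketch of the loop version extended with an iteration counter; the counter and the tuple return value are illustrative additions, not part of the original function.

```python
def fib_loop(n):
    """Iterative Fibonacci; also returns how many times the loop body ran."""
    if n <= 1:
        return n, 0
    n1, n2 = 0, 1
    iterations = 0
    for _ in range(n - 1):
        res = n1 + n2
        n1, n2 = n2, res  # update n1 and n2 as the loop advances
        iterations += 1
    return res, iterations

print(fib_loop(10))  # (55, 9): fib(10) = 55 after n - 1 = 9 iterations
```

The iteration count grows linearly with n, in contrast to the exponential call count of the recursive version.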

Big O notation

Because big O notation is a rough estimate, constants, coefficients, and lower-order terms are ignored.

For example, in the recursive method a_n = 2^(n-1) - 1, the 1 is a constant, and relative to the power of 2 it is a lower-order term; dropping it and the coefficient leaves O(2^n).

Big O notation is an estimation method that describes the worst case: as the input grows large, the constant, lower-order, and coefficient parts become negligible.
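A quick numeric sketch of why those parts are negligible: divide a_n = 2^(n-1) - 1 by the dominant term 2^n and watch the ratio settle at the constant 1/2, which big O also discards.

```python
for n in (5, 10, 20):
    exact = 2 ** (n - 1) - 1  # the full expression
    dominant = 2 ** n         # what O(2^n) keeps
    print(n, exact / dominant)  # ratio approaches 0.5 as n grows
```

The -1 vanishes and only the constant factor 1/2 remains, which is exactly the kind of detail the notation ignores.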

Conclusion

Common time complexity notation, from fast to slow:

  • O(1)

  • O(log n)

  • O(n)

  • O(n log n)

  • O(2^n)

  • O(n!)

Corresponding intuitive growth curves:

As the input grows, the time consumed grows as well. Among the five curves other than O(1), O(log n) grows the slowest.
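The fast-to-slow ordering above can be checked numerically by evaluating each growth function at a sample input; n = 20 here is an arbitrary choice.

```python
import math

n = 20
growth = [
    ("O(1)", 1),
    ("O(log n)", math.log2(n)),
    ("O(n)", n),
    ("O(n log n)", n * math.log2(n)),
    ("O(2^n)", 2 ** n),
    ("O(n!)", math.factorial(n)),
]
for name, value in growth:
    print(f"{name:12} {value:.0f}")

# the values come out in ascending order, matching the list above
values = [v for _, v in growth]
assert values == sorted(values)
```

Even at a modest n = 20, O(2^n) and O(n!) are already over a million and a quintillion respectively, while O(log n) is still under 5.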