In nature, there is a sequence of numbers that we see all the time. It begins:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233…

If described in mathematical language, the recursion looks like this:


$$
\begin{cases}
F_0 = 0 \\
F_1 = 1 \\
F_n = F_{n-1}+F_{n-2} & (n\geq2)
\end{cases}
$$

In other words, apart from the 0th and 1st terms, each number is the sum of the two numbers immediately before it.

Represented geometrically, you may have seen it like this:

These are the Fibonacci numbers, and the sequence they form is called the Fibonacci sequence.

The name derives from Leonardo Fibonacci, a mathematician born in Pisa, Italy in 1170.

(An aside: not to be confused with Leonardo di ser Piero da Vinci, born in 1452.)

Computing and applying Fibonacci numbers sits at a popular intersection of modern computer science, classical art, and new-media art and design, in branches including (but not limited to) graphics, performance optimization, avant-garde art, and film and television production.

The most famous example is the golden-spiral composition guide. So this article goes back to the foundations of algorithms and mathematics to understand how the Fibonacci sequence actually works.

① Fibonacci_1(n): binary recursion from the definition

Based on the definition above, fib() can be described in pseudo-code as follows (assuming n is non-negative):


$$
fib(n)=
\begin{cases}
n & (n\leq1)\\
fib(n-1)+fib(n-2) & (n\geq2)
\end{cases}
$$
// C++
int fib1(int n){
    // n < 2: the value is n itself; otherwise sum the two preceding terms recursively
    return (2 > n) ? n : fib1(n - 1) + fib1(n - 2);
}

Obviously, this algorithm is easy to understand and can be coded directly according to the definition.
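For example, fib1(10) returns 55, matching the sequence above.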

But the disadvantage is just as obvious: in this binary recursive form, the computation is extremely inefficient.

Suppose that the time required to compute ***fib(n)*** is ***T(n)***. Then, following this algorithm, it takes ***T(n-1)*** to compute ***fib(n-1)*** and ***T(n-2)*** to compute ***fib(n-2)***, plus one final unit of time to add them together.

Therefore, the recurrence for ***T(n)*** is:


$$
T(n)=
\begin{cases}
1 & (n\leq1)\\
T(n-1)+T(n-2)+1 & (n\geq2)
\end{cases}
$$

If you look closely, you’ll see that ***T(n)*** is very similar to ***fib(n)***.

So let S(n) = [T(n)+1]/2; then:


$$
\begin{aligned}
S(n)&= [T(n)+1]/2 \\
&=
\begin{cases}
(1+1)/2 \\
[T(n-1)+T(n-2)+1+1]/2
\end{cases} \\
&=
\begin{cases}
(1+1)/2 \\
\{[2*S(n-1)-1] + [2*S(n-2)-1] +1 +1\}/2
\end{cases} \\
&=
\begin{cases}
(1+1)/2 \\
[2*S(n-1) + 2*S(n-2)]/2
\end{cases} \\
&=
\begin{cases}
1 & (n\leq1)\\
S(n-1)+S(n-2) & (n\geq2)
\end{cases}
\end{aligned}
$$

It can be seen that ***S(n)*** is identical in form to ***fib(n)***, apart from its starting terms.

And:


$$
\begin{cases}
S(0) = (T(0)+1)/2 = 1 = fib(1) \\
S(1) = (T(1)+1)/2 = 1 = fib(2)
\end{cases}
$$

In other words, S(n) = fib(n+1).
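As a quick check of the next term: T(2) = T(1) + T(0) + 1 = 3, so S(2) = (3 + 1)/2 = 2 = fib(3), exactly as S(n) = fib(n+1) predicts.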

And then:


$$
\begin{aligned}
& S(n)=fib(n+1)=(\phi^{n+1}-\hat{\phi}^{n+1})/\sqrt{5},\ \phi = (1+\sqrt{5})/2,\ \hat{\phi}=(1-\sqrt{5})/2 \\
& T(n) = 2*S(n)-1 = 2*fib(n+1)-1=O(\phi^{n+1})\approx O(2^n)
\end{aligned}
$$

In other words, the time complexity of the binary recursion reaches a terrifying exponential magnitude, which is far from acceptable (more on how φ is calculated later).

If exponential complexity is a qualitative analysis of this algorithm, let’s do a quantitative example.

A typical PC runs at roughly 1 GHz, which means it can do about 10^9 floating-point operations per second, i.e. a billion.

When we compute the 100th Fibonacci number this way, it takes about 2^100 operations, which means roughly 2^100/10^9 seconds. How long is that?

So roughly, 2 to the 10th, which is 1024, is about 10 to the third, which is 1,000.

So:


$$
\begin{aligned}
& 2^{100} = (2^{10})^{10} \approx (10^3)^{10} \\
& 2^{100}/10^9 \approx (10^3)^{10}/10^9 = 10^{21}
\end{aligned}
$$

So the question is, what is 10^21 seconds?


$$
\begin{cases}
1\ day = 24hr * 60min * 60sec = 86400\ sec \approx 25*4000 = 10^5\ sec \\
100\ year \approx 100*365\ day \approx 3*10^4\ day \approx 3*10^9\ sec
\end{cases}
$$

We can calculate, roughly, that three centuries is about 10^10 seconds.

Then, 10^21 seconds is equal to 10^21/10^10 = 10^11 “three centuries”.

That’s 300 billion centuries.

That’s roughly 30 trillion years.

The observable universe is believed to be about 14 billion years old. It’s no exaggeration to say that if we handed this computation to a computer, straight from the definition, our universe could be reborn thousands of times before it finished.

And all this time, just to solve the 100th Fibonacci number.

So this binary recursion, recomputing layer upon layer, is clearly unacceptable.

② Fibonacci_2(n): linear recursive calculation

From the algorithm of binary recursion, it’s not hard to see the problem.

Each time a new value is computed, the recursion falls back to the previous terms, all the way down to the very first ones, and then climbs back up from the start to the current term, over and over again.

In this process, the same terms are recomputed countless times, which is why the time complexity is so high and computing resources are so badly wasted.
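To see the blowup concretely, here is a small sketch (the counter and the function name are ours, added purely for illustration) that counts how many times the recursive function is entered; the count matches the T(n) = 2*fib(n+1) - 1 derived above:

// C++
#include <iostream>

long long calls = 0;    // illustrative global counter

int fibCounted(int n){
    ++calls;            // every invocation is counted
    return (2 > n) ? n : fibCounted(n - 1) + fibCounted(n - 2);
}

int main(){
    std::cout << fibCounted(30) << "\n";   // 832040
    std::cout << calls << "\n";            // 2692537 = 2*fib(31) - 1 calls
    return 0;
}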

Along these lines, what if we could give the program a memory that saves the terms already calculated and pulls them out when needed, instead of going back to the beginning and starting all over again? Wouldn’t that save a lot of rework?

Opening up storage space to assist the computation like this is a very typical “trade space for time” idea.

In this algorithm, a separate array of length n is allocated, and whenever the program finishes computing a term, it stores that term’s result in the array.

So whenever a previously computed term is needed, it can simply be looked up. To compute the nth term, the program only needs one pass from the starting terms, accumulating values along the way. This is also much closer to how we humans would compute the sequence.
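A minimal sketch of that memo-array idea might look like the following (the names fibMemo and fibWithMemo are ours, purely for illustration):

// C++
#include <vector>

// Top-down recursion that remembers every term it has already computed.
int fibMemo(int n, std::vector<int>& memo){
    if (memo[n] >= 0) return memo[n];       // already known: just look it up
    memo[n] = (n < 2) ? n : fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
    return memo[n];
}

int fibWithMemo(int n){
    std::vector<int> memo(n + 1, -1);       // -1 marks "not yet computed"
    return fibMemo(n, memo);
}

The implementation below reaches the same O(n) behavior by a slightly different route: it recurses once toward the base case and hands the previous term back up through a reference parameter.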

// C++
int fib2(int n, int& prev){
    if (0 == n){
        prev = 1;       // treat fib(-1) as 1, so that fib(0) + prev = fib(1)
        return 0;       // fib(0)
    }
    else {
        int prevPrev;
        prev = fib2(n - 1, prevPrev);   // prev = fib(n-1), prevPrev = fib(n-2)
        return prevPrev + prev;         // fib(n) = fib(n-1) + fib(n-2)
    }
}

The time complexity of this algorithm is O(n), and because the auxiliary space grows with the position of the term in the sequence (its distance from the start), the space complexity is also O(n).

If we compute the 100th Fibonacci number this way, it theoretically takes about 100/10^9 seconds, far less than a second. The improvement over the first, binary recursive algorithm is enormous.

③ Fibonacci_3(n): iterative calculation

However, on closer inspection, the linear recursive algorithm also has its problems.

For a computer, “space” is a finite resource. Small inputs are fine, but as the amount of data grows, space consumption becomes a real problem.

Moreover, of the array space allocated above, only the last two numbers are ever really used: the current Fibonacci value and the term just before it (the term before that can be recovered by subtracting the previous term from the current value).

Earlier results that have already been used are never accessed by the program again, so keeping them around serves no real purpose.

If the program could automatically discard those older results, keeping the whole auxiliary space within a fixed size that simply slides forward as the computation advances, always holding only the latest two terms, the waste would disappear. Space consumption would no longer be tied to the problem size. In other words, the space cost becomes constant, which means the space complexity reaches the ideal O(1).

And we’re going to optimize along these lines.

We don’t need a separate process watching the unused results in real time and deleting them. Another way to think about it: we can achieve the same effect by repeatedly writing new results over old data within a fixed amount of space.

In fact, we only need two variables.

One stores the current item and one stores the previous item.

// C++
int fib3(int n){
    int f = 1, g = 0;                       // g = fib(0), f = fib(-1) = 1
    while (0 < n--) { g += f; f = g - f; }  // each step: g -> fib(k+1), f -> fib(k)
    return g;
}

In the program, g holds the current result and f holds the previous term.
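For example, tracing fib3(5): the pair (f, g) starts at (1, 0) and after each loop iteration becomes (0, 1), (1, 1), (1, 2), (2, 3), (3, 5); the function returns g = 5, which is indeed the 5th Fibonacci number.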

There is no expensive array space overhead and no complex resource management.

Just two variables, like two dragons and a pearl spiraling upward, climbing from 0 to the current item.

The time complexity of the algorithm is still O(n), while the space complexity is O(1), the best possible; such an algorithm is also called an “in-place algorithm”.

As you can see, the program is very simple and more than efficient enough. A few short lines condense a great deal of human ingenuity; one has to marvel at the charm of algorithms.

④ Fibonacci_4(n): solving the general-term formula

But!

It’s not over!

The human race will never stop pursuing the ultimate wisdom.

With the space complexity already optimized to the extreme O(1), the remaining metric for evaluating the algorithm is time complexity.

So far, computing any Fibonacci number still takes O(n) time.

And what we are going to do now is optimize it even further, all the way down to a perfect O(1).

You might object that, since fib(n) depends on the (n-1)th and (n-2)th terms, there seems to be no way to escape walking from the starting terms all the way up to the nth term.

Yes. So if you want to push the optimization to its limit,

Well, welcome to the world of mathematics.

Notice that we originally defined the Fibonacci numbers in their natural form, as an intuitive recursive formula.

Intuitive descriptions are easy to understand, but inevitably primitive. So what we will do is start from that recursive description and derive a general-term formula for the Fibonacci number of any index: one variable, a handful of coefficients and operations, no need to walk through every intermediate term, and a single equation gives the answer.

To make it easier to understand, let’s use elementary math that high school students can learn:

Known so far:


$$
\begin{aligned}
& a_1 = 1 \\
& a_2 = 1 \\
& a_n = a_{n-1} + a_{n-2}
\end{aligned}
$$

Set:


$$
a_n + \alpha*a_{n-1} = \beta*(a_{n-1}+\alpha*a_{n-2})
$$

That is:


$$
a_n = (\beta-\alpha)*a_{n-1}+\alpha*\beta*a_{n-2}
$$

Comparing this with the original recurrence:


$$
\begin{cases}
a_n = a_{n-1} + a_{n-2} \\
a_n = (\beta-\alpha)*a_{n-1}+\alpha*\beta*a_{n-2}
\end{cases}
$$

We can find:


$$
\begin{cases}
\beta - \alpha = 1 \\
\alpha*\beta = 1
\end{cases}
\;\Rightarrow\;
\begin{cases}
\beta = 1 + \alpha \\
\alpha*(1+\alpha)=1
\end{cases}
\;\Rightarrow\;
\alpha^2+\alpha-1=0
$$

$$
\begin{aligned}
\alpha_{1,2} &= \frac{-b\pm\sqrt{b^2-4ac}}{2a} \\
&= \frac{-1\pm\sqrt{1+4}}{2} \\
&= \frac{-1\pm\sqrt{5}}{2}
\end{aligned}
$$

We take the positive root here; substituting it back into the equations above gives:


$$
\begin{cases}
\alpha=\frac{\sqrt{5}-1}{2} \\
\beta=\frac{\sqrt{5}+1}{2}
\end{cases}
$$

Because the sequence T_n = a_n + α*a_{n-1} is geometric:


$$
\begin{aligned}
& T_n = a_{n}+\alpha*a_{n-1} \\
& T_{n-1} = a_{n-1}+\alpha*a_{n-2} \\
& T_{n}=\beta*T_{n-1}
\end{aligned}
$$

So:


$$
\begin{aligned}
a_{n+1}+\alpha*a_{n}&=(a_2+\alpha*a_1)*\beta^{n-1} \\
&= (1+\alpha)*\beta^{n-1} \\
&= \beta^n
\end{aligned}
$$

Dividing both sides of the equation by β^(n+1), we obtain:


$$
\begin{aligned}
& \frac{a_{n+1}+\alpha*a_{n}}{\beta^{n+1}}=\frac{\beta^{n}}{\beta^{n+1}} \\
& \frac{a_{n+1}}{\beta^{n+1}}+\frac{\alpha}{\beta}*\frac{a_n}{\beta^n}=\frac{1}{\beta}
\end{aligned}
$$

Let:


$$
b_n=\frac{a_n}{\beta^n}
$$

Substituting this into the formula above:


$$
b_{n+1}+\frac{\alpha}{\beta}b_n=\frac{1}{\beta}
$$

Set:


$$
b_{n+1}+\lambda = -\frac{\alpha}{\beta}(b_n+\lambda)
$$

which expands to:


$$
b_{n+1}+\frac{\alpha}{\beta}b_n=-\frac{\alpha}{\beta}\lambda-\lambda
$$

Comparing with the earlier equation, this means:


$$
\frac{1}{\beta} =-\frac{\alpha}{\beta}\lambda-\lambda
$$

Solving for λ:


$$
\lambda = -\frac{1}{\alpha+\beta}
$$

So:


$$
\begin{cases}
b_n+\lambda=\left(-\frac{\alpha}{\beta}\right)^{n-1}(b_1+\lambda) \\
b_1 = \frac{a_1}{\beta} = \frac{1}{\beta} \\
b_n = \frac{a_n}{\beta^n} \\
\alpha = \frac{\sqrt{5}-1}{2} \\
\beta = \frac{\sqrt{5}+1}{2}
\end{cases}
$$

Substitute in to obtain:


$$
\begin{aligned}
& \frac{a_n}{\beta^n}+\lambda = \left(-\frac{\alpha}{\beta}\right)^{n-1}*\left(\frac{1}{\beta}+\lambda\right) \\
& a_n-\frac{1}{\sqrt{5}}*\beta^{n}=(-\alpha)^{n-1}*\beta*\left(\frac{1}{\beta}-\frac{1}{\sqrt{5}}\right)
\end{aligned}
$$

Simplifying further:


$$
\begin{aligned}
a_n &= (-\alpha)^{n-1}*\left(1-\frac{\beta}{\sqrt{5}}\right)+\frac{1}{\sqrt{5}}*\beta^n \\
&= \left(\frac{-(\sqrt{5}-1)}{2}\right)^{n-1}*\frac{1}{2}*\left(\frac{2\sqrt{5}}{\sqrt{5}}-\frac{\sqrt{5}+1}{\sqrt{5}}\right)+\frac{1}{\sqrt{5}}*\left(\frac{\sqrt{5}+1}{2}\right)^n \\
&= \frac{1}{\sqrt{5}}*\left[\left(\frac{1+\sqrt{5}}{2}\right)^n-\left(\frac{1-\sqrt{5}}{2}\right)^n\right]
\end{aligned}
$$

Finally, the general-term formula of the Fibonacci sequence is:


$$
fib(n)=(\phi^{n}-\hat{\phi}^{n})/\sqrt{5},\quad \phi = (1+\sqrt{5})/2,\quad \hat{\phi}=(1-\sqrt{5})/2
$$
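As a quick sanity check, take n = 10: φ^10 ≈ 122.992, while the conjugate term is only about 0.008, so (122.992 - 0.008)/√5 ≈ 55, which is exactly fib(10); the irrational parts cancel and an integer remains.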

This is why the time complexity mentioned in solution 1 is exponential.

Of course, there are many ways to derive the general-term formula; we won't repeat them here, and the details are easy to look up.

With the general formula, the program is easy to write:

// C++
#include <cmath>   // sqrt, pow, round

int fib4(int n)
{
    double var1 = (1 + sqrt(5.0)) / 2;   // phi
    double var2 = (1 - sqrt(5.0)) / 2;   // phi-hat
    double var3 = sqrt(5.0) / 5;         // 1/sqrt(5)
    // round to the nearest integer to absorb floating-point error
    return (int)round(var3 * (pow(var1, n) - pow(var2, n)));
}

Assuming the C++ pow function runs in O(1), the time complexity is naturally O(1), and the space complexity is O(1) as well.

Even if the exponentiation has to be done with fast (binary) exponentiation instead, the time complexity is only O(log n).
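As a hedged sketch of that O(log n) idea with exact integer arithmetic, here is the classic “fast doubling” formulation, equivalent to raising the 2x2 Fibonacci matrix to a power by repeated squaring (the function names are ours, and it assumes the result fits in 64 bits, i.e. n <= 92):

// C++
#include <cstdint>
#include <utility>

// Fast doubling: fib(2k)   = fib(k) * (2*fib(k+1) - fib(k))
//                fib(2k+1) = fib(k)^2 + fib(k+1)^2
// Returns the pair {fib(n), fib(n+1)}.
std::pair<uint64_t, uint64_t> fibPair(uint64_t n){
    if (n == 0) return {0, 1};
    std::pair<uint64_t, uint64_t> half = fibPair(n / 2);
    uint64_t a = half.first, b = half.second;   // a = fib(k), b = fib(k+1), k = n/2
    uint64_t c = a * (2 * b - a);               // fib(2k)
    uint64_t d = a * a + b * b;                 // fib(2k+1)
    if (n % 2 == 0) return {c, d};              // n = 2k
    return {d, c + d};                          // n = 2k+1
}

uint64_t fib5(uint64_t n){ return fibPair(n).first; }

Each call halves n, so only O(log n) levels of recursion are needed, and every arithmetic operation stays exact.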

⑤ The Golden Ratio

The golden ratio, which we often hear about, is a proportion.

Defined as:


$$
\frac{a+b}{a}=\frac{a}{b}=\phi \quad (a>b>0)
$$

This ratio is 1.618 to three decimal places.
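This value follows directly from the definition: writing φ = a/b, the equation (a+b)/a = a/b becomes

$$
1+\frac{1}{\phi}=\phi \;\Rightarrow\; \phi^2-\phi-1=0 \;\Rightarrow\; \phi=\frac{1+\sqrt{5}}{2}\approx1.618\ldots
$$

taking the positive root of the quadratic.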

Does this look familiar?

The German astronomer Kepler discovered that, for the ratios of adjacent terms of the Fibonacci sequence:


$$
\frac{1}{1}, \frac{1}{2}, \frac{2}{3}, \frac{3}{5}, \frac{5}{8}, \frac{8}{13}, \frac{13}{21}, \frac{21}{34}, \dots
$$

As n goes to infinity, the ratio of adjacent terms converges to the golden ratio:


$$
\lim_{n\rightarrow\infty}\frac{fib_{n+1}}{fib_{n}} = \frac{1}{2}(1+\sqrt5) = \phi \approx 1.618\ldots
$$
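A quick numerical sketch of this convergence (the small program is ours, added only for illustration):

// C++
#include <cstdio>

int main(){
    double prev = 1, curr = 1;                  // fib(1), fib(2)
    for (int n = 2; n <= 20; ++n){
        double next = prev + curr;              // fib(n+1)
        std::printf("fib(%d)/fib(%d) = %.6f\n", n + 1, n, next / curr);
        prev = curr;
        curr = next;
    }
    return 0;
}

The printed ratios settle quickly around 1.618034.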

The golden section is so widely used that one of the most famous illustrations is the Mona Lisa below:

The Fibonacci sequence was first observed in nature: in the arrangement and number of petals, the packing of seeds in a sunflower head, the spiral of a nautilus shell, and even the spiral arms of the Milky Way. It is all around us.

The Fibonacci sequence, as a magical representation of nature, abstractly describes this universal and mysterious phenomenon.

The reason why people generally think the Golden Section is beautiful is mostly because it abstracts the reality of nature.

Of course it’s much more than that,

Examples include the division ratios of UI interfaces, the aesthetics of page layout and logo design, and the sense of order in architecture.

There is even a Fibonacci search algorithm in computer science.

We won't list more application scenarios here.

This article aims to give you a deeper understanding of the mathematical knowledge and computational methods behind the Fibonacci sequence. Once we understand the principle from the bottom up, the seemingly tangled surface naturally falls into place, giving us a more solid theoretical footing for further innovation.

[This article is original; you are welcome to share it, but unauthorized reposting is prohibited.]