“This is the 12th day of my participation in the Gwen Challenge in November. Check out the details: The Last Gwen Challenge in 2021.”

Where there is life, there is learning

digression

It’s Friday; tomorrow is a day off, and you can treat yourself after working hard all week. Hey, this pizza looks good. OK, let’s have a pizza…

Another dull day… Life is so simple

It’s warming up today, by the way. Nice

Without further ado, let’s get started

Space complexity and time complexity

1. Time complexity

(1) Time frequency

The time an algorithm takes to execute cannot be calculated theoretically; it can only be measured by running the algorithm on a computer. But we cannot, and do not need to, test every algorithm on a machine: we only need to know which algorithm takes more time and which takes less. The time an algorithm takes is proportional to the number of statements it executes, so the algorithm that executes more statements takes more time. The number of statements executed in an algorithm is called its statement frequency or time frequency, denoted T(n).

(2) Time complexity

In the time frequency above, n is the size of the problem. As n changes, the time frequency T(n) changes with it. But sometimes we want to know how it changes, and for that we introduce the concept of time complexity. In general, the number of times the basic operation in an algorithm is repeated is a function of the problem size n, written T(n). If there is some auxiliary function f(n) such that the limit of T(n)/f(n) as n approaches infinity is a nonzero constant, then f(n) is said to be of the same order of magnitude as T(n). We write T(n) = O(f(n)), and call O(f(n)) the asymptotic time complexity of the algorithm, or time complexity for short.
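The idea of counting statement executions can be made concrete with a small sketch. The function names below (sumLinear, sumPairs) are illustrative only; each function counts how many times its basic operation runs, which is exactly the time frequency T(n):

```go
package main

import "fmt"

// sumLinear sums 1..n and counts its basic operation:
// the loop body runs n times, so T(n) = n, i.e. O(n).
func sumLinear(n int) (sum, ops int) {
	for i := 1; i <= n; i++ {
		sum += i
		ops++
	}
	return
}

// sumPairs visits every pair (i, j), so the inner statement
// runs n*n times: T(n) = n^2, i.e. O(n^2).
func sumPairs(n int) (ops int) {
	for i := 1; i <= n; i++ {
		for j := 1; j <= n; j++ {
			ops++
		}
	}
	return
}

func main() {
	_, ops1 := sumLinear(1000)
	ops2 := sumPairs(1000)
	fmt.Println(ops1) // 1000 — grows like n
	fmt.Println(ops2) // 1000000 — grows like n^2
}
```

Doubling n doubles the first count but quadruples the second, which is the "how it changes" that time complexity captures.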

2. The space complexity of the algorithm

Space complexity is a measure of the amount of storage space an algorithm temporarily occupies while it runs. The storage an algorithm occupies in computer memory includes the space taken by the algorithm itself, the space taken by its input and output data, and the space it temporarily occupies during execution.
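To make the temporary-storage part concrete, here is a small illustrative sketch (the function names are my own) contrasting O(n) and O(1) auxiliary space:

```go
package main

import "fmt"

// reverseCopy returns a reversed copy: it allocates an extra
// slice of length n, so its auxiliary space is O(n).
func reverseCopy(s []int) []int {
	out := make([]int, len(s))
	for i, v := range s {
		out[len(s)-1-i] = v
	}
	return out
}

// reverseInPlace swaps elements inside the input slice and only
// needs a couple of index variables: auxiliary space is O(1).
func reverseInPlace(s []int) {
	for i, j := 0, len(s)-1; i < j; i, j = i+1, j-1 {
		s[i], s[j] = s[j], s[i]
	}
}

func main() {
	a := []int{1, 2, 3, 4}
	fmt.Println(reverseCopy(a)) // [4 3 2 1]
	reverseInPlace(a)
	fmt.Println(a) // [4 3 2 1]
}
```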

For example

Fibonacci numbers

The Fibonacci sequence, also known as the golden-ratio sequence, was introduced by the mathematician Leonardo Fibonacci, who used rabbit breeding as an example: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233… Starting from the 3rd term, each term equals the sum of the two preceding terms. Let’s compute the first 40 terms and see how long the code takes.

package main
import (
    "fmt"
    "time"
)
func main() {
    result := 0
    start := time.Now()
    for i := 1; i <= 40; i++ {
        result = fibonacci(i)
        fmt.Printf("Number %d in the series: %d\n", i, result)
    }
    end := time.Now()
    delta := end.Sub(start)
    fmt.Printf("Program execution time: %s\n", delta)
}
func fibonacci(n int) (res int) {
    if n <= 2 {
        res = 1
    } else {
        res = fibonacci(n-1) + fibonacci(n-2)
    }
    return
}

The full output is too long, so here is an excerpt:

Number 1 in the series: 1

Number 2 in the series: 1

Number 3 in the series: 2

Number 4 in the series: 3

…

Number 39 in the series: 63245986

Number 40 in the series: 102334155

Program execution time: 2.2848865s

Roughly so; depending on your computer’s performance, the execution time will vary.

Use memory caching to improve performance

Most ways to improve performance, i.e. reduce time complexity, trade space for time. A common one is a memory cache (memoization): when the nth number is calculated, its value is saved in an array at index n. Subsequent calculations first look the value up in the array; if it is found, it is taken directly from the array, and if not, it is computed.

package main
import (
    "fmt"
    "time"
)
const LIM = 41
var fibs [LIM]uint64
func main() {
    var result uint64 = 0
    start := time.Now()
    for i := 1; i < LIM; i++ {
        result = fibonacci(i)
        fmt.Printf("Number %d in the series: %d\n", i, result)
    }
    end := time.Now()
    delta := end.Sub(start)
    fmt.Printf("Program execution time: %s\n", delta)
}
func fibonacci(n int) (res uint64) {
    // If fibs[n] has already been computed, return the cached value
    if fibs[n] != 0 {
        res = fibs[n]
        return
    }
    if n <= 2 {
        res = 1
    } else {
        res = fibonacci(n-1) + fibonacci(n-2)
    }
    fibs[n] = res
    return
}

Again only part of the output is shown; the results are as follows:

Number 1 in the series: 1

Number 2 in the series: 1

Number 3 in the series: 2

Number 4 in the series: 3

…

Number 39 in the series: 63245986

Number 40 in the series: 102334155

Program execution time: 0.0149603s

As the results show, with the memory cache it takes only 0.0149603 seconds to reach the 40th number of the sequence. Compared with the execution time without the cache, the advantage of the memory cache is quite obvious.

You thought it was over?

This example uses recursion, and recursion can be very memory-hungry. Quick question: how would you implement the Fibonacci sequence without recursion?
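One possible answer, as a sketch: iterate from the bottom up, keeping only the last two values. This runs in O(n) time with O(1) extra space and no recursion at all (the function name fibIter is my own):

```go
package main

import "fmt"

// fibIter computes the nth Fibonacci number iteratively: it keeps
// only the last two values instead of recursing, so it uses O(n)
// time and O(1) extra space.
func fibIter(n int) uint64 {
	if n <= 0 {
		return 0
	}
	if n <= 2 {
		return 1
	}
	var a, b uint64 = 1, 1
	for i := 3; i <= n; i++ {
		a, b = b, a+b // slide the window forward one term
	}
	return b
}

func main() {
	fmt.Println(fibIter(40)) // 102334155, matching the output above
}
```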

After reading an INI file, how can I change a value by its key?

Write it via the File.WriteAt method

If you spot any mistakes after reading, write them down below!