Foreword

The early batch of autumn recruitment at the big tech companies has opened, sounding the horn for the graduate hiring season, which means we are about to join a brutal competition. Written tests and interviews are the standard route to a job, and for most people the algorithm round is the hardest part; it can only be conquered through systematic study and steady practice on problems.

To help more people understand data structures and algorithms, we are launching a new blog series, “Understand Algorithms”. I hope it helps you get through the painful days of preparation. My own depth in algorithm research is limited, so please take what is useful and focus on the solid parts.

What is an algorithm

An algorithm is a set of methods used to manipulate data and solve programming problems.

There are two ways to measure the relative merits of different algorithms:

  • Post-hoc measurement: use statistics, monitoring, and computer timers to compare the running times of different algorithms and judge their efficiency. This works, but it has serious limitations: it requires finished programs and depends on the hardware and test data.

  • A priori analysis and estimation: estimate an algorithm's efficiency mathematically before any code is written.

For example, we know that in the Fibonacci sequence, every term from the third onward is the sum of the two terms before it: 0, 1, 1, 2, 3, 5, 8…

Analysis: observing this rule, we can see that an implementation must handle two cases separately: the first two terms, and the computation of every term from the third onward.

The simplest way is to implement the Fibonacci algorithm using recursion:

function fib(num){
  if(num <= 1) return num;
  return fib(num - 1) + fib(num - 2);
}

Of course, it can also be implemented using a loop method:

function fib(num){
  if(num <= 1) return num;
  let num1 = 0, num2 = 1;
  for(let i = 0; i < num - 1; i++){
    // Each addition is the sum of the previous two terms
    let sum = num1 + num2;
    // num2 becomes num1 for the next addition
    num1 = num2;
    // The sum becomes the next num2
    num2 = sum;
    // Note: the two assignments above are not interchangeable
  }
  return num2;
}
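As a quick sanity check (a harness of my own; the names fibRec and fibLoop are mine, used to avoid a clash), the two versions agree on their results, and counting calls shows why the loop is preferable: the naive recursion revisits the same subproblems over and over.

```javascript
// Count how many calls the naive recursion makes.
let calls = 0;
function fibRec(num){
  calls++;
  if(num <= 1) return num;
  return fibRec(num - 1) + fibRec(num - 2);
}

// The loop version from above, renamed to avoid a clash.
function fibLoop(num){
  if(num <= 1) return num;
  let num1 = 0, num2 = 1;
  for(let i = 0; i < num - 1; i++){
    const sum = num1 + num2;
    num1 = num2;
    num2 = sum;
  }
  return num2;
}

console.log(fibRec(10), fibLoop(10)); // both print 55
console.log(calls); // 177 calls just to compute fib(10) recursively
```

The call count grows roughly as fast as the Fibonacci numbers themselves, which is why the recursive version becomes unusable long before the loop does.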

In fact, the running time of a program written in a high-level programming language depends on the following factors:

  • Strategies and methods adopted by the algorithm (the quality of the algorithm)
  • Compiled code quality (software performance)
  • The input size of the problem (how much input)
  • The speed at which the machine executes instructions (hardware performance)

In general, the running time of the program depends mainly on the quality of the algorithm and the size of the problem input.

Time complexity and space complexity

We all know the story of Gauss. When Gauss was at school, his teacher asked the class to compute the sum of the natural numbers from 1 to 100, that is, 1 + 2 + … + 100 = ? Many of his classmates ground through the additions from start to finish, but Gauss was not busy writing; he was thinking about the problem.

Gauss observed that the sum of the first and last terms equals the sum of the second and second-to-last terms, both 101, and that every other such pair follows the same pattern. There are 50 pairs, so the answer is 101 × 50 = 5050. He was the first in the class to produce it.

The moral: sharpening the axe does not delay the cutting of firewood; the right method makes the work easier.

So let us carefully analyze the strengths and weaknesses of the Gaussian algorithm versus the conventional one.

General algorithm:

function commonFunc(n){
  let sum = 0; // executes once
  for(let i = 1; i <= n; i++){ // condition check executes n+1 times
    sum += i; // executes n times
  }
  return sum; // executes once
}

Gaussian algorithm:

function gaussFunc(n){
  let sum = 0; // executes once
  sum = (1 + n) * n / 2; // executes once
  return sum; // executes once
}

As shown above, the conventional algorithm executes 2n + 3 statements, while the Gaussian algorithm executes 3. Since the opening and closing statements execute the same number of times in both, the real difference lies in the middle: n executions versus 1. The Gaussian algorithm is obviously far better than the conventional one.
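The 2n + 3 versus 3 count can also be checked empirically with instrumented sketches of my own (the counting convention is an assumption: one tick per executed statement, with the loop condition ticking once per evaluation):

```javascript
// Instrumented conventional algorithm: returns the number of
// executed statements rather than the sum itself.
function commonCount(n){
  let ops = 0;
  ops++;                              // let sum = 0
  for(let i = 1; i <= n; i++){
    ops++;                            // condition check, true: n times
    ops++;                            // sum += i: n times
  }
  ops++;                              // final failing condition check
  ops++;                              // return sum
  return ops;                         // total: 2n + 3
}

// Instrumented Gaussian algorithm.
function gaussCount(n){
  let ops = 0;
  ops++;                              // let sum = 0
  ops++;                              // sum = (1 + n) * n / 2
  ops++;                              // return sum
  return ops;                         // total: 3, independent of n
}

console.log(commonCount(100)); // 203, i.e. 2 * 100 + 3
console.log(gaussCount(100));  // 3
```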

Algorithm time complexity

Let’s look at how the time complexity of an algorithm is defined in Big Talk Data Structures.

Algorithm time complexity: in algorithm analysis, the total number of statement executions T(n) is a function of the problem size n. We analyze how T(n) changes with n and determine its order of magnitude. The time complexity of the algorithm, which is its time measure, is denoted T(n) = O(f(n)). It means that as the problem size n increases, the growth rate of the algorithm's execution time is the same as that of f(n); this is called the asymptotic time complexity of the algorithm, or time complexity for short.

Time complexity is a measure used to estimate a program's running time. Suppose the problem size of the algorithm is n; as n increases, the growth rate of the algorithm's execution time matches the growth rate of f(n), and we write the time complexity as O(f(n)).

T(n)=O(f(n))

  • T(n) represents the time the code takes to execute
  • n indicates the data size
  • f(n) represents the total number of times each line of code executes
  • O means the code execution time T(n) is proportional to f(n)

We see that O() appears in the description above to express the time complexity of the algorithm; this is known as big O notation. Big O denotes an upper bound: the upper bound on the algorithm's worst-case running time, that is, the time it takes to run in the worst case.

The method for deriving the big O order:

  • Replace all additive constants in the running time with the constant 1
  • In the resulting execution-count function, keep only the highest-order term
  • If the highest-order term exists and its coefficient is not 1, remove the coefficient
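As a worked example of my own, apply the three rules to T(n) = 3n^2 + 2n + 1; the code then checks numerically that T(n) / n^2 stays near a constant, which is exactly what O(n^2) asserts.

```javascript
// Deriving the big O order of T(n) = 3n^2 + 2n + 1:
//   1. replace additive constants with 1:  3n^2 + 2n + 1
//   2. keep only the highest-order term:   3n^2
//   3. drop its coefficient:               n^2  =>  O(n^2)
const T = n => 3 * n * n + 2 * n + 1;

// The ratio T(n) / n^2 approaches the dropped coefficient 3 as n grows.
for (const n of [10, 100, 1000]) {
  console.log(n, T(n) / (n * n)); // 3.21, then 3.0201, then 3.002001
}
```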

Next, let's practice calculating an algorithm's time complexity ourselves:

function testFunc(n){
  for(let i = 1; i < n; i += i){ // executes log2(n) + 1 times
    for(let j = 0; j < n; j++){ // executes n + 1 times per outer pass
      console.log("yichuan"); // executes about log2(n) * n times
    }
  }
}

Look at the first for loop: i += i means i = i + i = 2i, so i doubles on each pass (note that i must start at 1; starting from 0 it would never grow). After k passes i = 2^k, and the loop exits once 2^k ≥ n, so the body runs about log2(n) times, with one extra condition check to break out. In the second for loop, the condition is checked n + 1 times per outer pass before exiting, so the print statement runs about log2(n) × n times. Using big O notation and ignoring additive constants and coefficients, this gives O(n log n).

Remember: when calculating an algorithm's time complexity, additive constants, the coefficient of the highest-order term, and the base of any logarithmic term can all be ignored.
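To check the O(n log n) analysis, here is an instrumented variant of my own that counts how many times the inner statement would run (the counter stands in for console.log):

```javascript
// Count inner-statement executions of the nested loop above.
function countPrints(n){
  let prints = 0;
  for(let i = 1; i < n; i += i){   // i = 1, 2, 4, ... : about log2(n) passes
    for(let j = 0; j < n; j++){
      prints++;                    // stands in for console.log("yichuan")
    }
  }
  return prints;
}

console.log(countPrints(16)); // 64 = log2(16) * 16
console.log(countPrints(32)); // 160 = log2(32) * 32
```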

Common big O orders are:

Execution count function | Big O order | Informal name
10010                    | O(1)        | Constant order
3n+2                     | O(n)        | Linear order
3n^2+2n+1                | O(n^2)      | Quadratic order
3log2(n)+1               | O(log n)    | Logarithmic order
4n+3nlog2(n)+1           | O(n log n)  | nlogn order
4n^3+3n^2+2n+1           | O(n^3)      | Cubic order
2^n                      | O(2^n)      | Exponential order

Comparing the time consumed by the common complexity orders, from least to most as n grows:

O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n)
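Evaluating each order at a few concrete sizes (a sketch of my own) makes the gap between the curves obvious even without a chart:

```javascript
// Evaluate each common order at n = 16 and n = 32.
const orders = {
  "O(1)":       n => 1,
  "O(log n)":   n => Math.log2(n),
  "O(n)":       n => n,
  "O(n log n)": n => n * Math.log2(n),
  "O(n^2)":     n => n * n,
  "O(2^n)":     n => 2 ** n,
};
for (const [name, f] of Object.entries(orders)) {
  console.log(name, f(16), f(32));
}
// Doubling n barely moves O(log n), quadruples O(n^2),
// and squares the value of O(2^n).
```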

Algorithm space complexity

In fact, coding often comes down to trading space for time. By analogy with time complexity, space complexity describes the growth relationship between the storage an algorithm requires and the size of its data.

Algorithm space complexity: measured by calculating the storage space the algorithm requires, with the formula S(n) = O(f(n)).

  • n indicates the size of the problem
  • f(n) represents a function of n describing the storage space the statements occupy
function spaceFunc(n){
  const arr = []; // line 2 of the code
  arr.length = n; // line 3
  for(let i = 0; i < n; i++){
    arr[i] = i * i;
  }

  for(let j = n - 1; j >= 0; --j){
    console.log(arr[j]);
  }
}

On line 2, we create space in memory for the variable arr and assign it an empty array. On line 3, we set the array's length to n; the unassigned slots read as undefined until the first loop fills them. The rest of the code uses only a constant amount of extra space, so the space complexity of the whole function is O(n).
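For contrast, a variant of my own brings the space down to O(1): instead of storing all n squares in an array first, compute each square on the fly while printing in the same descending order.

```javascript
// O(1) extra space: no array, only the loop counter is kept in memory.
function spaceFuncO1(n){
  for(let i = n - 1; i >= 0; --i){
    console.log(i * i);
  }
}

spaceFuncO1(4); // prints 9, 4, 1, 0
```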

Worst case and average case

The worst-case running time is the longest the program can possibly take; nothing can be worse than it. Unless stated otherwise, the running time we refer to is the worst-case running time.

The average-case running time is the time the program is expected to take on average. In practice it is hard to obtain through analysis alone and usually requires experimental data and estimation.

As we know, when searching for a value among n numbers, the best case is finding it at the first position, giving O(1) time, and the worst case is finding it at the nth position, giving O(n). On average, (n + 1) / 2 elements are examined, which after dropping the coefficient is still O(n).
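The search example above can be sketched as a linear search with a comparison counter (names and harness are my own):

```javascript
// Linear search that also reports how many comparisons it made.
function linearSearch(arr, target){
  let comparisons = 0;
  for(let i = 0; i < arr.length; i++){
    comparisons++;
    if(arr[i] === target) return { index: i, comparisons };
  }
  return { index: -1, comparisons };
}

const data = [7, 3, 9, 1, 5];
console.log(linearSearch(data, 7)); // best case: found at index 0, 1 comparison
console.log(linearSearch(data, 5)); // worst case: found at index 4, n = 5 comparisons
```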

Summary

In this article, I introduced what an algorithm is, why we analyze algorithms, how to measure an algorithm's time and space complexity, and the worst and average cases of an algorithm.

References

“Big Talk Data Structures”

“The Beauty of JavaScript Data Structures and Algorithms: Time and Space Complexity”

“An Insight into Time and Space Complexity”

Final words

Thank you for reading. I will continue to share more articles with you. This article draws on many books and articles; if there are errors or omissions, I hope you will point them out.

For more of the latest articles, please follow the author's Juejin (Nuggets) account and official account.