
Preface

In this chapter, we will introduce two other commonly used techniques: dynamic programming and greedy algorithms. Dynamic programming is often described as recursion turned on its head, and greedy algorithms are a good fit for many optimization problems. Let us study these two approaches in detail.

Dynamic programming

Why is dynamic programming sometimes considered the opposite of recursion? Recursion starts at the top and breaks the problem down, solving the whole problem by solving all the smaller subproblems it produces. A dynamic programming solution works from the bottom up: it solves all the small subproblems first and then combines their results into a solution to the big problem.

Recursion is a simple but often inefficient way to solve problems. Many language implementations, including JavaScript's, do not translate recursive code into efficient machine code; the resulting programs are concise, but their execution is slow. This is not to say that using recursion is a bad thing. Rather, imperative and object-oriented languages simply do not implement recursion especially well, because recursion is not a central feature of their programming model.

The Fibonacci series

The Fibonacci sequence is defined as the following sequence:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ...

You can see that when n is greater than or equal to 2, a(n) equals a(n-1) plus a(n-2). This sequence has a long history: the Italian mathematician Fibonacci used it in 1202 to describe the growth of a rabbit population under ideal conditions.

It is not hard to see that this sequence can be represented by a simple recursive function.

function fibo (n) {
    if (n <= 0)  return 0;
    if (n === 1) return 1;
    return fibo(n - 1) + fibo(n - 2);
}

This approach is extremely expensive: already around n = 40 or 50, the program becomes very slow or appears unresponsive. Using dynamic programming, which starts from the simplest subproblems it can solve, makes a big difference. Here we first use an array to store the result of each subproblem, so it can be reused later.

function fibo (n) {
    if (n <= 0) return 0;
    if (n <= 1) return 1;
    var arr = [0, 1];
    for (var i = 2; i <= n; i++) {
        arr[i] = arr[i - 1] + arr[i - 2];
    }
    return arr[n];
}

Careful readers will notice that the array here can be removed and replaced with local variables, which saves a lot of memory.

function fibo (n) {
    if (n <= 0) return 0;
    if (n <= 1) return 1;
    var res, a = 0, b = 1;
    for (var i = 2; i <= n; i++) {
        res = a + b;
        a = b;
        b = res;
    }
    return res;
}

Is there any way to be more concise? The answer is yes: we can save one more variable.

function fibo (n) {
    if (n <= 0) return 0;
    if (n <= 1) return 1;
    var a = 0, b = 1;
    for (var i = 2; i <= n; i++) {
        b = a + b;
        a = b - a;
    }
    return b;
}
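As a quick sanity check, the constant-space version can be compared against the terms of the sequence listed earlier (the function is repeated here so the snippet runs on its own):

```javascript
function fibo (n) {
    if (n <= 0) return 0;
    if (n <= 1) return 1;
    var a = 0, b = 1;
    for (var i = 2; i <= n; i++) {
        // b becomes the new term; a recovers the previous b
        b = a + b;
        a = b - a;
    }
    return b;
}

console.log(fibo(10)); // 55
console.log(fibo(20)); // 6765
```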

Find the longest common substring

Another problem suited to dynamic programming is finding the longest common substring of two strings. For example, in the words "raven" and "havoc," the longest common substring is "av." Finding the longest common substring is often used in genetics, where DNA molecules are described by strings built from the first letter of each nucleotide's base.

We can use brute force to solve this problem, but it’s clumsy.

function maxSubString (str1, str2) {
    if (!str1 || !str2) return '';
    var len1 = str1.length,
        len2 = str2.length;
    var maxSubStr = '';
    for (var i = 0; i < len1; i++) {
        for (var j = 0; j < len2; j++) {
            var tempStr = '',
                k = 0;
            while ((i + k < len1) && (j + k < len2) && (str1[i + k] === str2[j + k])) {
                tempStr += str1[i + k];
                k++;
            }
            if (tempStr.length > maxSubStr.length) {
                maxSubStr = tempStr;
            }
        }
    }
    return maxSubStr;
}

We will not expand on the dynamic programming algorithm for the longest common substring here. Interested readers can follow these links:

  • Find the longest common substring of two strings
  • Dynamic programming: find the longest common substring/longest common subsequence
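For readers who want a taste before following the links, here is a minimal dynamic-programming sketch (the table name `dp` and the overall shape are my own, not taken from the linked articles). The idea: `dp[i][j]` holds the length of the common substring ending at `str1[i - 1]` and `str2[j - 1]`.

```javascript
function maxSubStringDP (str1, str2) {
    if (!str1 || !str2) return '';
    var maxLen = 0, end = 0;
    var dp = [];
    for (var i = 0; i <= str1.length; i++) {
        dp[i] = [];
        for (var j = 0; j <= str2.length; j++) {
            if (i === 0 || j === 0 || str1[i - 1] !== str2[j - 1]) {
                // No common substring ends here
                dp[i][j] = 0;
            } else {
                // Extend the common substring ending one character earlier
                dp[i][j] = dp[i - 1][j - 1] + 1;
                if (dp[i][j] > maxLen) {
                    maxLen = dp[i][j];
                    end = i; // position just past the match in str1
                }
            }
        }
    }
    return str1.slice(end - maxLen, end);
}

console.log(maxSubStringDP('raven', 'havoc')); // 'av'
```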

Knapsack problem

The knapsack problem is a classic problem in algorithm research. Imagine that you are a safe cracker who has just opened a safe full of rare treasures, but you must carry them out in a small knapsack. The items in the safe differ in size and value, and you want the load in your knapsack to be as valuable as possible.

Of course, brute force can solve this problem, but dynamic programming is more effective. The key idea behind solving the knapsack problem with dynamic programming is to compute, item by item, the maximum value obtainable for each remaining capacity until the knapsack is full.

If the safe in our example holds 5 items, with sizes 3, 4, 7, 8, 9 and values 4, 5, 10, 11, 13 respectively, and the knapsack's capacity is 16, then the right solution is to pick the third and fifth items: their total size is 16 and their total value is 23.

Table 1: The 0-1 knapsack problem

Item     A    B    C    D    E
Value    4    5    10   11   13
Size     3    4    7    8    9

First, let’s look at recursively solving this problem:

function knapsack (capacity, objectArr, order) {
    if (order < 0 || capacity <= 0) {
        return 0;
    }
    if (objectArr[order].size > capacity) {
        return knapsack(capacity, objectArr, order - 1);
    }
    return Math.max(objectArr[order].value + knapsack(capacity - objectArr[order].size, objectArr, order - 1),
                    knapsack(capacity, objectArr, order - 1));
}

console.log(knapsack(16, [{value: 4, size: 3},
    {value: 5, size: 4},
    {value: 10, size: 7},
    {value: 11, size: 8},
    {value: 13, size: 9}], 4)); // 23

To improve the program's efficiency, we can turn the recursive implementation into dynamic programming. There is a technical term for this problem: the 0-1 knapsack problem. Its DP solution has long troubled beginners; most people learn it once and forget it once, so this time let's try 💪 to make it stick.

Note that the key to understanding the 0-1 knapsack problem is to understand the meaning of "0-1": each item is either taken whole (1) or left behind (0).

The basic idea

The substructure of the 0-1 knapsack problem: to decide on the i-th item, we compare the optimal solution of the subproblem that includes the i-th item with the optimal solution of the subproblem that excludes it, and take the better of the two.

Let f[i][w] denote the maximum value obtainable by putting the first i items into a knapsack of capacity w. The state transition equation is then:

f[i][w] = max{ f[i-1][w], f[i-1][w - w[i]] + v[i] }

where w[i] denotes the size (weight) of the i-th item and v[i] its value.

function knapsack (capacity, objectArr) {
    var n = objectArr.length;
    var f = [];
    for (var i = 0; i <= n; i++) {
        f[i] = [];
        for (var w = 0; w <= capacity; w++) {
            if (i === 0 || w === 0) {
                f[i][w] = 0;
            } else if (objectArr[i - 1].size <= w) {
                var size = objectArr[i - 1].size,
                    value = objectArr[i - 1].value;
                f[i][w] = Math.max(f[i - 1][w - size] + value, f[i - 1][w]);
            } else {
                f[i][w] = f[i - 1][w];
            }
        }
    }
    return f[n][capacity];
}
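Running this on the items from Table 1 reproduces the answer found earlier (the function is repeated so the snippet runs on its own):

```javascript
function knapsack (capacity, objectArr) {
    var n = objectArr.length;
    var f = [];
    for (var i = 0; i <= n; i++) {
        f[i] = [];
        for (var w = 0; w <= capacity; w++) {
            if (i === 0 || w === 0) {
                f[i][w] = 0; // no items or no capacity: value 0
            } else if (objectArr[i - 1].size <= w) {
                var size = objectArr[i - 1].size,
                    value = objectArr[i - 1].value;
                // Take item i, or skip it -- whichever is worth more
                f[i][w] = Math.max(f[i - 1][w - size] + value, f[i - 1][w]);
            } else {
                f[i][w] = f[i - 1][w]; // item i does not fit
            }
        }
    }
    return f[n][capacity];
}

console.log(knapsack(16, [
    {value: 4, size: 3},
    {value: 5, size: 4},
    {value: 10, size: 7},
    {value: 11, size: 8},
    {value: 13, size: 9}
])); // 23, same as the recursive version
```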

The above method is O(nm) in both time and space, where n is the number of items and m is the knapsack capacity. There is no room for optimization in time complexity, but the space complexity can be reduced to O(m). First we need to rewrite the state transition equation. Note that with a one-dimensional array, w must be traversed from high to low for each item; otherwise f[w - w[i]] would already reflect item i, and the item could be taken more than once.

f[w] = max{ f[w], f[w - w[i]] + v[i] }

Take a look at the code example:

function knapsack (capacity, objectArr) {
    var n = objectArr.length;
    var f = [];
    for (var w = 0; w <= capacity; w++) {
        f[w] = 0;
    }
    for (var i = 0; i < n; i++) {
        var size = objectArr[i].size,
            value = objectArr[i].value;
        // Traverse capacity from high to low so that f[w - size] still
        // holds the value from the previous row, keeping each item 0-1
        for (var w = capacity; w >= size; w--) {
            f[w] = Math.max(f[w], f[w - size] + value);
        }
    }
    return f[capacity];
}
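The one-dimensional version returns the same answer on the Table 1 items. The snippet below is a self-contained version that iterates capacity from high to low, which is exactly what keeps each item 0-1 (used at most once):

```javascript
function knapsack1D (capacity, objectArr) {
    var f = [];
    for (var w = 0; w <= capacity; w++) {
        f[w] = 0;
    }
    for (var i = 0; i < objectArr.length; i++) {
        var size = objectArr[i].size,
            value = objectArr[i].value;
        // Descending order: f[w - size] has not yet seen item i
        for (var w = capacity; w >= size; w--) {
            f[w] = Math.max(f[w], f[w - size] + value);
        }
    }
    return f[capacity];
}

console.log(knapsack1D(16, [
    {value: 4, size: 3},
    {value: 5, size: 4},
    {value: 10, size: 7},
    {value: 11, size: 8},
    {value: 13, size: 9}
])); // 23
```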

Greedy algorithm

The dynamic programming algorithms studied above optimize solutions that would otherwise be found by suboptimal schemes, often recursive ones. For many problems, however, dynamic programming is overkill, and a simpler algorithm is often enough.

A greedy algorithm is a comparatively simple one. It always makes the choice that looks best at the moment, regardless of whether that choice affects future choices. Using a greedy algorithm usually signals that the implementer hopes the series of locally "best" choices will lead to a globally "best" result. If it does, the algorithm yields an optimal solution; otherwise, a suboptimal one. For many problems, though, finding the true optimum is more trouble than it is worth, so a greedy algorithm is good enough.

Knapsack problem

If the items put into the knapsack are continuous in nature, that is, they need not be counted discretely, such as cloth or gold dust, then a greedy algorithm can solve the knapsack problem. With continuous items, an item's worth is determined simply by its value divided by its size (its value density). The optimal strategy is then to load the item with the highest value density until it runs out or the knapsack is full, then the item with the next highest value density, and so on. We call this variant the partial (fractional) knapsack problem.

Table 2: The partial knapsack problem

Item     A    B    C    D    E
Value    50   140  60   60   80
Size     5    20   10   12   20
Ratio    10   7    6    5    4

The reason we can't solve the discrete-item problem with a greedy algorithm is that we can't put "half a TV" into the knapsack. In other words, greedy algorithms can't solve the 0-1 knapsack problem, because in the 0-1 problem each item must be taken whole or not at all.
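To make this concrete, here is a small made-up instance (the numbers are mine, chosen for illustration) where choosing greedily by value/size ratio on a 0-1 problem is suboptimal. The fractional solver below has no such trouble, because it can always top off the knapsack with a fraction of the next item.

```javascript
// A greedy 0-1 sketch: items are taken whole or not at all
function greedy01 (capacity, objectArr) {
    var sorted = objectArr.slice().sort(function (a, b) {
        return b.value / b.size - a.value / a.size;
    });
    var selected = 0, total = 0;
    for (var i = 0; i < sorted.length; i++) {
        if (sorted[i].size <= capacity - selected) {
            total += sorted[i].value;
            selected += sorted[i].size;
        }
    }
    return total;
}

// Capacity 10. Greedy grabs the size-6 item first (ratio 1.5) and then
// nothing else fits, yielding 9; the optimal 0-1 choice is the two
// size-5 items, worth 14.
console.log(greedy01(10, [
    {value: 9, size: 6},
    {value: 7, size: 5},
    {value: 7, size: 5}
])); // 9, but the optimum is 14
```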

function knapsack (capacity, objectArr) {
    // First sort by value density, high -> low
    objectArr.sort(function (a, b) {
        return b.value / b.size - a.value / a.size;
    });
    // Count items
    var n = objectArr.length;
    // Track the size selected so far and the accumulated value
    var selected = 0,
        maxValue = 0;
    for (var i = 0; i < n && selected < capacity; i++) {
        var size = objectArr[i].size,
            value = objectArr[i].value;
        if (size <= capacity - selected) {
            maxValue += value;
            selected += size;
        } else {
            // Take a fraction of the item to fill the remaining space
            maxValue += value * ((capacity - selected) / size);
            selected = capacity;
        }
    }
    return maxValue;
}
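As a quick check against Table 2, here is the function applied with an assumed capacity of 30 (Table 2 gives no capacity; this one is chosen for illustration; the function is repeated so the snippet runs on its own). Greedy takes all of A and B, then half of C:

```javascript
function knapsack (capacity, objectArr) {
    // Sort by value density, high -> low
    objectArr.sort(function (a, b) {
        return b.value / b.size - a.value / a.size;
    });
    var selected = 0, maxValue = 0;
    for (var i = 0; i < objectArr.length && selected < capacity; i++) {
        var size = objectArr[i].size,
            value = objectArr[i].value;
        if (size <= capacity - selected) {
            maxValue += value;           // whole item fits
            selected += size;
        } else {
            // Fill the remaining space with a fraction of this item
            maxValue += value * ((capacity - selected) / size);
            selected = capacity;
        }
    }
    return maxValue;
}

console.log(knapsack(30, [
    {value: 50, size: 5},    // A, ratio 10
    {value: 140, size: 20},  // B, ratio 7
    {value: 60, size: 10},   // C, ratio 6
    {value: 60, size: 12},   // D, ratio 5
    {value: 80, size: 20}    // E, ratio 4
])); // 50 + 140 + 60 * (5 / 10) = 220
```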

References

  • 0-1 knapsack problem, greedy algorithm, dynamic programming
  • Knapsack problem understanding of dynamic programming and greedy algorithms