Dynamic programming (DP) is a method for solving a complex problem by decomposing it into relatively simple subproblems.
From an interview perspective, dynamic programming is an algorithm topic you can hardly escape in any serious interview. As someone once put it:
So why is dynamic programming so important in interviews?
The main reason is that dynamic programming suits interviews very well, because dynamic programming problems cannot be solved by rote memorization.
Many job seekers used to pass interviews exactly that way: problems like inverting a binary tree, reversing a linked list, or writing quicksort, merge sort, and bubble sort from memory could be muddled through. Such interviews ostensibly tested algorithmic ability and thinking, but in practice they tested attitude, selecting the candidates willing to spend time memorizing problems and screening out those who could not be bothered.
But as the Internet industry cooled and the talent supply overheated further, more and more people memorized problems, and the bar for interviews rose. What is needed now is a problem type that truly tests algorithmic thinking, is changeable, and is easy to design, and dynamic programming meets this requirement perfectly.
For example, of the 1261 algorithm problems on LeetCode, nearly 200 are dynamic programming problems. Dynamic programming alone accounts for about 1/6 of the total, which shows how popular it is.
More importantly, dynamic programming problems are mostly of medium and high difficulty:
So, since we know this is bound to come up in algorithm interviews, we can never be too well prepared; this article will do its best to make dynamic programming clear.
Start with money
We learned earlier that a greedy algorithm can solve the "making change" problem, but that is only partially true, because the denominations given there were 1, 5, and 25. In real life, the fifth series of RMB has denominations of 100, 50, 20, 10, 5, and 1, and RMB change can indeed be made with a greedy algorithm.
So is there a situation where the greedy algorithm fails? For example, suppose the central bank of an algorithm planet issues strange coins with denominations of 1, 5, and 11 yuan; the greedy algorithm breaks down when we need to make up 15 yuan.
Let's work it out. Following the greedy strategy, we first take the largest denomination, 11, and then make up the remaining 4 with four 1-yuan strange coins, so five strange coins in total are needed to reach 15 yuan.
In fact, simple arithmetic shows that three 5-yuan coins are enough: 5 + 5 + 5 = 15.
Here lies the problem: faced with these special denominations, the weakness of the greedy algorithm shows. Being short-sighted and lacking the big picture, it takes the largest denomination 11 first, which completely blocks the better options; filling the remaining 4 is very expensive, since we are forced to take four coins of denomination 1.
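To make the failure concrete, here is a small sketch of the greedy strategy (the function name and shape are ours, purely for illustration):

```javascript
// Greedy change-making: always take the largest coin that still fits.
function greedyChange(amount, denominations) {
  // Consider denominations from largest to smallest
  const coins = [...denominations].sort((a, b) => b - a)
  const picked = []
  for (const coin of coins) {
    while (amount >= coin) {
      picked.push(coin)
      amount -= coin
    }
  }
  return picked
}

// With the strange 1/5/11 coins, greedy grabs the 11 first and needs five coins...
console.log(greedyChange(15, [1, 5, 11])) // [ 11, 1, 1, 1, 1 ]
// ...even though three 5-yuan coins would have been enough: 5 + 5 + 5 = 15
```

With friendlier denominations such as 1, 5, and 25, the same function does find the optimum, which is exactly why greedy seemed to work before.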
Improved computing strategy
So now that greedy algorithms are no longer suitable for this scenario, how can we change the calculation strategy?
When we encounter this kind of problem in an interview and have no idea, we should remember the universal fallback: brute force.
Let's analyze the problem above. It is really asking: "given a set of coin denominations, what is the minimum number of coins needed to make up n?"
We want to make up this n, and as long as n is not 0, there is always a last coin that completes it. For instance, if we make up 15 with {11, 1, 1, 1}, the last coin is 1.
If we use {5, 5, 5} to make 15, the last coin is 5.
- If the last coin is 11, we are left with 4, and the question becomes: what is the minimum number of coins needed to make up n - 11? When that remainder is 4, we can only take four 1-yuan coins
- If the last coin is 5, the question becomes: how many coins do we need to make up n - 5 with the current denominations?
Did you notice that our question keeps decomposing into "what is the minimum number of coins needed to make up n with the current denominations"? Let the function f(n) denote "the minimum number of coins needed to make up n".
"The original big problem gradually decomposes into similar but smaller subproblems": this is the optimal substructure. We can build the optimal solution of the whole problem step by step, bottom up, from the optimal solutions of the subproblems.
Now assume in turn that a coin of denomination 11, 5, or 1 is the last coin:
- The last coin has a face value of 11: min = f(4) + 1
- The last coin has a denomination of 5: min = f(10) + 1
- The last coin has denomination 1: min = f(14) + 1
Do you see it now? The minimum number of coins min is determined by the smallest of the three values f(4), f(10), and f(14); after all, the "+1" is common to them.
If the total amount is n (here n = 15), then f(4) = f(n-11), f(10) = f(n-5), and f(14) = f(n-1), and we arrive at the following formula:
```
f(n) = min{ f(n-1), f(n-5), f(n-11) } + 1
```
Now expand f(n-1) in the formula above: what is the minimum number of coins needed to make it up? It is the same formula again:
```
f(n-1) = min{ f(n-1-1), f(n-1-5), f(n-1-11) } + 1
```
And so on…
Feeling déjà vu? Isn't this just recursion?
Yes, we can solve the minimum-change problem recursively as follows:
```javascript
function f(n) {
  if (n === 0) return 0
  let min = Infinity
  if (n >= 1) {
    min = Math.min(f(n - 1) + 1, min)
  }
  if (n >= 5) {
    min = Math.min(f(n - 5) + 1, min)
  }
  if (n >= 11) {
    min = Math.min(f(n - 11) + 1, min)
  }
  return min
}

console.log(f(15)) // 3
```
- When n = 0, return 0 directly to keep the function robust
- We first set the minimum number of coins `min` to Infinity, which is convenient for the later `Math.min` comparisons
- When the last coin is 1, we recurse with `min = Math.min(f(n - 1) + 1, min)` to find the minimum for that case
- When the last coin is 5, we recurse with `min = Math.min(f(n - 5) + 1, min)` to find the minimum for that case
- When the last coin is 11, we recurse with `min = Math.min(f(n - 11) + 1, min)` to find the minimum for that case
The downside of recursion
We seem to have solved the problem, but don't relax yet. Let's test: when n = 70, how long does it take to work out how many coins are needed?
The answer is 8, but the running time looks like this:
What if n = 270? Even with the help of an 8th-generation i7 processor and Node.js 12.x, it took me a long time to get the result:
And when n = 27000, we successfully blow the stack:
So why does execution take so long? The root cause is the inefficiency of the recursive algorithm, as shown in the figure below:
To compute f(70), we must evaluate the three cases in which the last coin has denomination 1, 5, or 11; each of those subproblems again splits into three more cases, and so on. With a branching factor of 3, the amount of work grows exponentially, which is extremely inefficient.
Let’s look at the picture again:
For example, f(64), f(58), and f(54) each appear twice, and these duplicates are only the tip of the iceberg; many more repeated calculations simply cannot fit into the figure.
Clearly we recompute a great many values for nothing and waste computing power; the timing tests above give some idea of just how much is wasted.
For another simple example, suppose we want to calculate the sum 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1.
We count them one by one until we get 8. Now we append another "+ 1" to the expression; what is the sum?
This time you blurt out "9" without counting at all.
Why is the second answer so fast? Because we already remember the previous result, 8, so we only need to compute 8 + 1, which spares us from repeating the earlier counting.
What does our recursion do instead? It is like counting "1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1" from scratch to arrive at 9, which is very time-consuming.
Assume there are m coin denominations and we want the minimum number of coins that make up n. The recursion above has a staggering O(mⁿ) time complexity, and exponential time complexity is about the worst there is.
We have found the problem: massive repeated computation drives the time complexity sky-high, and we must find a way to eliminate it.
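To put a number on the waste, here is a sketch that instruments the naive recursion with a call counter (the counter variable is ours, purely for illustration):

```javascript
let calls = 0

function f(n) {
  calls++ // count every invocation, including all the repeated subproblems
  if (n === 0) return 0
  let min = Infinity
  if (n >= 1) min = Math.min(f(n - 1) + 1, min)
  if (n >= 5) min = Math.min(f(n - 5) + 1, min)
  if (n >= 11) min = Math.min(f(n - 11) + 1, min)
  return min
}

// Only 16 distinct subproblems exist for n = 15, yet far more calls are made
console.log(f(15), calls) // 3 154

// Increase n just a little and the call count explodes
calls = 0
console.log(f(25), calls)
```

Every unit added to n multiplies the call count by roughly a constant factor, which is exactly what exponential growth means.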
Memoization and recursion
Now that we know there is a large amount of redundant computation, we can set up a memo and record every computed answer in it. Whenever we need an answer, we first look it up in the memo; if it is there, we return it directly, which avoids the repeated computation. This is a classic space-for-time trade-off: we spend extra memory on the memo in exchange for far more efficient computation.
With the idea in place, the code is simple: we just create a cache as the memo and, inside the function, check whether the result already exists; if it does, we return it.
JavaScript code
```javascript
function f(n) {
  function makeChange(amount) {
    if (amount <= 0) return 0
    // Check whether the result already exists in the memo
    if (cache[amount]) return cache[amount]
    let min = Infinity
    if (amount >= 1) {
      min = Math.min(makeChange(amount - 1) + 1, min)
    }
    if (amount >= 5) {
      min = Math.min(makeChange(amount - 5) + 1, min)
    }
    if (amount >= 11) {
      min = Math.min(makeChange(amount - 11) + 1, min)
    }
    return (cache[amount] = min)
  }

  // Memo
  const cache = []
  return makeChange(n)
}

console.log(f(70)) // 8
```
Java code
```java
import java.util.HashMap;

public class Coins {
    private static HashMap<Integer, Integer> cache = new HashMap<>();

    public static void main(String[] args) {
        int a = coinChange(70);
        System.out.println(a);
    }

    public static int coinChange(int amount) {
        return makeChange(amount);
    }

    public static int makeChange(int amount) {
        if (amount <= 0) return 0;
        // Check whether the result already exists in the memo
        if (cache.get(amount) != null) return cache.get(amount);
        int min = Integer.MAX_VALUE;
        if (amount >= 1) {
            min = Math.min(makeChange(amount - 1) + 1, min);
        }
        if (amount >= 5) {
            min = Math.min(makeChange(amount - 5) + 1, min);
        }
        if (amount >= 11) {
            min = Math.min(makeChange(amount - 11) + 1, min);
        }
        cache.put(amount, min);
        return min;
    }
}
```
Now the execution time is merely:
In fact, using a memo to eliminate recursion's repeated computation is called "memoized search".
In essence, this serves the same purpose as the "pruning" of backtracking: it removes all the repeated nodes in the figure above and keeps only one copy of each. (Of course, the figure cannot show every node.) Once all duplicates are removed, only a linear chain of nodes remains:
This memoized recursion runs in O(n) time, which is already not far from dynamic programming.
So isn’t that enough? Why do dynamic programming?
Remember the other big problem with recursion we mentioned above?
Stack overflow!
This is the result of computing f(27000) with our memoized recursion:
The call-stack depth of a programming language is limited. Even after pruning, the stack still overflows once n reaches five digits, which makes recursion unusable for large-scale computation.
This is determined by the shape of the recursion. Our recursion computes "top down": it decomposes f(70) into f(69), then f(68), and so on down to f(1). That idea is not fully applicable here; we need a "bottom-up" way to solve the problem.
"Bottom up" means computing f(1), f(2), and so on up to f(69) and f(70), solving the large-scale problem by recurrence from the small-scale ones. Dynamic programming usually replaces recursion with iteration.
The top-down approach is also very common in another algorithmic paradigm, the divide-and-conquer algorithm.
In addition, the other drawback of recursion + memo is that it leaves no room for space optimization, because in the worst case the recursion depth is n.
That is, the system's recursion stack alone costs O(n) space just because of the recursion. After switching to iteration we will not need nearly that much storage, as we will see shortly.
The state transition equation
Remember what each node looked like when we used the memo cache above? If we lay the "memo" out as a table, we get what is called a DP table:
Note: in the figure above, f[n] denotes the minimum number of coins needed to make up n, and the number in each box is the value of that function.
Why don't we look for a pattern in the figure?
Observe f[1]: f[1] = min(f[0], f[-5], f[-11]) + 1.
Since negative entries such as f[-5] do not exist, we treat them as positive infinity, so f[1] = min(0, Infinity, Infinity) + 1 = 1.
Now look at f[5]: f[5] = min(f[4], f[0], f[-6]) + 1; with f[4] = 4, f[0] = 0, and f[-6] = Infinity, we get f[5] = 1.
Do you see it? Any node can be derived from the nodes before it, without any repeated computation, and the governing equation is:
```
f[n] = min(f[n-1], f[n-5], f[n-11]) + 1
```
Remember when we said dynamic programming leaves more room for optimization? Recursion + memo requires O(n) space because of the recursion depth, but iterative dynamic programming can get by with constant-level space.
Look at the figure below: to solve f(70) we only need three earlier solutions, namely f(69), f(65), and f(59), plus the recurrence. The values f(0), f(1), … up to f(58) are no longer useful at that point, so we can stop storing them and reclaim the extra space; that is where the room for optimization lies.
The equation above is the state transition equation, and the key to solving a dynamic programming problem is finding its state transition equation.
Of course, once you have derived the state transition equation you can almost solve the problem, yet many people still get it wrong. Why? Because you also have to consider the boundaries.
The boundary issue
We already gave part of the solution to the boundary problem in the section above. For this change problem there are the following boundary cases.
When n is negative in f[n]: our solution is to treat f[n] as positive infinity whenever n is negative. Normally arrays do not have negative indices, so f[n] with negative n simply does not exist; and since we want the minimum amount of change, treating these non-existent values as positive infinity both excludes them and keeps the state transition equation as easy as possible to implement.
When n is 0 in f[n]: the case n = 0 is the initial condition of the state transition equation, the special case the equation itself cannot handle. Without this initial condition the equation would give f[0] = min(f[-1], f[-5], f[-11]) + 1, whose minimum is again positive infinity. Such a special case cannot be derived, so we must set the initial condition manually.
With the boundary problem solved, we get the complete state transition equation:
```
f[0] = 0                                  (n = 0)
f[n] = min(f[n-1], f[n-5], f[n-11]) + 1   (n > 0)
```
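Putting the complete equation to work, here is a bottom-up sketch. Because the recurrence only ever looks back 11 steps, a ring buffer of 12 slots is enough, which realizes the space optimization mentioned earlier; and since there is no recursion at all, even f(27000) no longer blows the stack. The windowing trick is our own illustration, not required by the algorithm:

```javascript
function f(n) {
  const SIZE = 12 // the recurrence reads back at most 11 positions
  const window = new Array(SIZE).fill(Infinity)
  window[0] = 0 // initial condition: f(0) = 0
  for (let i = 1; i <= n; i++) {
    let min = Infinity
    // Guard against negative indices: those cases count as positive infinity
    if (i >= 1) min = Math.min(min, window[(i - 1) % SIZE])
    if (i >= 5) min = Math.min(min, window[(i - 5) % SIZE])
    if (i >= 11) min = Math.min(min, window[(i - 11) % SIZE])
    window[i % SIZE] = min + 1 // overwrite the slot that is no longer needed
  }
  return window[n % SIZE]
}

console.log(f(15)) // 3
console.log(f(70)) // 8
console.log(f(27000)) // finishes instantly with no stack overflow
```

Slot i % 12 always holds the most recent value that index can represent, so by the time it is overwritten, nothing will read it again.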
Complete analysis of the change problem
Let's return to the change problem, this time in its general form: given coins of different denominations and a total amount, write a function to compute the fewest coins needed to make up that amount. If the amount cannot be made up by any combination of the coins, return -1.
Such as:
```
Input: coins = [1, 2, 5], amount = 11
Output: 3 (11 = 5 + 5 + 1)
```
This change problem is the generalization of the one we have been working on. Before, the denominations were fixed at 1, 5, and 11; this time they are not fixed but given as an array coins containing the available denominations.
With our previous experience, this kind of problem can no longer stump us. Let's lay out the steps again.
Determine the optimal substructure: the solution of the original problem is composed of optimal solutions of subproblems. Let f(n) be the minimum number of coins needed to make up the total value n; then f(n) = min{f(n-cᵢ)} + 1, where cᵢ ranges over the coin denominations.
Handle the boundary problem: the same as before. When n is negative the value is positive infinity, and when n = 0 the value is 0.
This yields the state transition equation:
```
f[0] = 0                     (n = 0)
f[n] = min{ f[n-cᵢ] } + 1    (n > 0)
```
Based on the above derivation, we get the following code:
JavaScript code
```javascript
const coinChange = (coins, amount) => {
  // Initialize the DP table with Infinity; Infinity means the amount cannot be made up with the coins
  const dp = new Array(amount + 1).fill(Infinity)
  // Set the initial condition: dp[0] = 0
  dp[0] = 0
  for (let i = 1; i <= amount; i++) {
    for (const coin of coins) {
      // Take the minimum according to the state transition equation
      if (coin <= i) {
        dp[i] = Math.min(dp[i], dp[i - coin] + 1)
      }
    }
  }
  // If dp[amount] is still Infinity there is no solution, so return -1; otherwise return the optimum
  return dp[amount] === Infinity ? -1 : dp[amount]
}
```
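As a quick sanity check of both return branches; the function is repeated here so the snippet runs on its own:

```javascript
const coinChange = (coins, amount) => {
  const dp = new Array(amount + 1).fill(Infinity)
  dp[0] = 0
  for (let i = 1; i <= amount; i++) {
    for (const coin of coins) {
      if (coin <= i) dp[i] = Math.min(dp[i], dp[i - coin] + 1)
    }
  }
  return dp[amount] === Infinity ? -1 : dp[amount]
}

console.log(coinChange([1, 2, 5], 11)) // 3, since 11 = 5 + 5 + 1
console.log(coinChange([1, 5, 11], 15)) // 3, the strange-coin case from earlier
console.log(coinChange([2], 3)) // -1, an odd amount cannot be made with only 2-yuan coins
```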
Java code
```java
import java.util.Arrays;

class Solution {
    public int coinChange(int[] coins, int amount) {
        // Initialize the DP table with amount + 1, a value larger than any possible
        // answer, meaning the amount cannot (yet) be made up with the coins
        int[] dp = new int[amount + 1];
        Arrays.fill(dp, amount + 1);
        // Set the initial condition: dp[0] = 0
        dp[0] = 0;
        for (int coin : coins) {
            for (int i = coin; i <= amount; i++) {
                // Take the minimum according to the state transition equation
                dp[i] = Math.min(dp[i], dp[i - coin] + 1);
            }
        }
        // If dp[amount] is still amount + 1 there is no solution, so return -1
        return dp[amount] == amount + 1 ? -1 : dp[amount];
    }
}
```
Summary
Let’s summarize the learning process:
- Starting from the greedy algorithm on the change problem, we found that greedy cannot find the optimal solution in every situation
- We decided to approach the problem differently, and we found the key point: the optimal substructure
- With the help of those two findings, we solved the minimum-change problem recursively
- However, after complexity analysis and practical tests, we found plain recursion to be extremely inefficient, and we had to find a way around it
- Memoized recursion solved the time-complexity problem, but the top-down approach could not escape the stack overflow, and we needed a new "bottom-up" approach
- By applying the state transition equation iteratively, we solved the problem efficiently
In essence, dynamic programming is a brute-force solution that has been optimized again and again: it eliminates a huge number of overlapping subproblems. Every dynamic programming problem we discuss later can likewise be refined step by step from a brute-force solution into dynamic programming.
In this article we have seen where dynamic programming comes from. Later on, when a problem leaves us without ideas, we can replay this derivation in our heads; once the technique is familiar, though, we will not walk the whole road again but apply dynamic programming directly.
You may ask: with so many interview questions, which ones should be solved with dynamic programming? How do we judge?
The most accurate way is to look at the problem itself: can it be decomposed into subproblems, and can the solution of the original problem be derived from the solutions of those subproblems?
That method is accurate, but it takes accumulated experience. There is also a rougher but simpler rule: if the problem meets one of the following conditions, it is very likely a dynamic programming problem:
- Find a maximum or a minimum
- Determine whether a solution is feasible
- Count the number of possible solutions