Preface

  • Dynamic programming solves a problem by decomposing it into a series of simpler subproblems. It is typically applicable to problems that have overlapping subproblems and optimal substructure.
  • LeetCode has many dynamic programming problems, and they demand both logic and technique. This time we will learn dynamic programming through a very classic one: the stair-jumping (climbing stairs) problem.

The analysis process

A frog can jump up one step or two steps at a time. How many distinct ways can it jump up a staircase of 10 steps?

  • First of all, the process is not hard to work through step by step:

    • When there is one step, there is only one way to jump: f(1) = 1.
    • When there are two steps, you can either jump one step twice or jump two steps at once, so there are two ways: f(2) = 2.
    • When there are three steps, you either jump to step 1 and then take two steps, or jump to step 2 and then take one step, which is equivalent to f(1) + f(2).
    • When there are four steps, you either jump to step 2 and then take two steps, or jump to step 3 and then take one step, which is equivalent to f(2) + f(3).
    • ...
    • When there are ten steps, you either jump to step 8 and then take two steps, or jump to step 9 and then take one step, which is equivalent to f(8) + f(9).
  • The above process is the exhaustive analysis step of dynamic programming.

  • We can also see that when there are only one or two steps, the number of ways is fixed (one and two respectively); these are the boundary conditions of the dynamic programming.

  • Whether there are 3 steps or 10, the number of ways can be calculated as f(x) = f(x-1) + f(x-2). Finding this pattern determines the optimal substructure.

  • So f(x) = f(x-1) + f(x-2) is the state transition equation of this dynamic programming problem. Working it out by hand gives the values sketched below.
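
Applying the recurrence upward from the boundary values f(1) = 1 and f(2) = 2 gives:

f(3) = 1 + 2 = 3
f(4) = 2 + 3 = 5
f(5) = 3 + 5 = 8
f(6) = 5 + 8 = 13
f(7) = 8 + 13 = 21
f(8) = 13 + 21 = 34
f(9) = 21 + 34 = 55
f(10) = 34 + 55 = 89

So there are 89 ways to jump up 10 steps; every implementation below should arrive at this same value.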

Code implementation

Brute-force recursion implementation:

  • Besides dynamic programming, this problem can also be solved with plain recursion, whose time complexity is O(2^n); it can then be optimized down to O(n).
/**
 * dynamic programming: brute-force recursion
 *
 * @author ZRH
 * @date 2021/4/22
 */
public class Test1 {

    public static void main(String[] args) {
        Integer num = 10;
        System.out.println("test1 : " + test1(num)); // prints: test1 : 89
    }

    /**
     * brute-force recursive solution
     */
    private static Integer test1(Integer y) {
        if (1 == y) {
            return 1;
        }
        if (2 == y) {
            return 2;
        }
        return test1(y - 1) + test1(y - 2);
    }
}
  • The above method computes top-down: each call expands into another f(x) = f(x-1) + f(x-2), roughly doubling the work at every level, so the time complexity is O(2^n).
  • Looking at this process, the recursion repeats a lot of identical calculations; if the calculated results are saved, that repeated work can be avoided (see the expansion below).
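
For example, expanding just the top of the call tree for 10 steps already shows the duplication:

f(10) = f(9) + f(8)
f(9)  = f(8) + f(7)
f(8)  = f(7) + f(6)

f(8) is computed once for f(10) and again inside f(9); f(7) appears inside both f(9) and f(8); and so on all the way down, the same subproblems keep reappearing.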

Optimized recursive implementation (memoization):

import java.util.HashMap;
import java.util.Map;

/**
 * dynamic programming: recursion with memoization
 *
 * @author ZRH
 * @date 2021/4/22
 */
public class Test1 {

    public static void main(String[] args) {
        Integer num = 10;
        System.out.println("test2 : " + test2(num)); // prints: test2 : 89
    }

    /**
     * optimized recursion: cache each computed result
     */
    private final static Map<Integer, Integer> map = new HashMap<>();

    private static Integer test2(Integer y) {
        if (1 == y) {
            return 1;
        }
        if (2 == y) {
            return 2;
        }
        if (map.containsKey(y)) {
            return map.get(y);
        }
        // recurse into test2 (not test1) so the cached results are actually reused
        int value = test2(y - 1) + test2(y - 2);
        map.put(y, value);
        return value;
    }
}
  • The code above uses a map to cache the result of each calculation, so every f(k) is computed only once; this eliminates the repeated work and brings the time complexity down to O(n).

Dynamic programming code implementation:

/**
 * dynamic programming: bottom-up iteration
 *
 * @author ZRH
 * @date 2021/4/22
 */
public class Test1 {

    public static void main(String[] args) {
        Integer num = 10;
        System.out.println("test3 : " + test3(num)); // prints: test3 : 89
    }

    /**
     * bottom-up dynamic programming
     */
    private static Integer test3(Integer y) {
        // only one or two steps: return the result directly (the boundary values)
        if (1 == y) {
            return 1;
        }
        if (2 == y) {
            return 2;
        }
        // a and b hold the results of the two steps before the current step
        int a = 1, b = 2, result = 0;
        for (int i = 3; i <= y; i++) {
            result = a + b; // state transition: f(i) = f(i-1) + f(i-2)
            a = b;
            b = result;
        }
        return result;
    }
}
  • This computes from one and two steps up to ten steps, a bottom-up calculation. The time complexity is O(n), and since only the previous two results are kept, the extra space is O(1).

Finally

  • The general approach to solving a dynamic programming problem:
    • Exhaustive analysis
    • Determine the boundary conditions
    • Determine the optimal substructure
    • Write the state transition equation
    • Code implementation
  • For the above process, dynamic programming has a general reference framework (a concrete sketch for the stair problem follows it):
dp[0][0][...] = boundary value

for (state1 : all values of state1) {
    for (state2 : all values of state2) {
        for (...) {
            // state transition equation
            dp[state1][state2][...] = optimal value derived from previous states
        }
    }
}
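
As a concrete illustration of this framework, the stair problem above can also be written with an explicit dp array, where dp[i] is the number of ways to reach step i. This is only a minimal sketch (the class and method names Test1Dp and jumpWays are chosen here purely for illustration); it trades the O(1) extra space of the earlier version for a more literal match with the framework.

/**
 * the stair problem written against the framework above:
 * dp[i] = number of ways to reach step i
 */
public class Test1Dp {

    public static void main(String[] args) {
        System.out.println("dp : " + jumpWays(10)); // prints: dp : 89
    }

    private static int jumpWays(int y) {
        // boundary values: f(1) = 1, f(2) = 2
        if (y <= 2) {
            return y;
        }
        int[] dp = new int[y + 1];
        dp[1] = 1;
        dp[2] = 2;
        // a single state dimension: the current step i
        for (int i = 3; i <= y; i++) {
            // state transition equation: dp[i] = dp[i-1] + dp[i-2]
            dp[i] = dp[i - 1] + dp[i - 2];
        }
        return dp[y];
    }
}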
  • If anything here is wrong, please point it out and I will correct it once confirmed.
  • Learn with an open mind and make progress together.