Identifying the problem is, in fact, already half of solving it. Since the brute-force recursion is slow because of repeated computation, we can create a "memo": each time we finish computing the answer to a subproblem, instead of returning right away, we first record it in the memo and then return; and every time we encounter a subproblem, we look it up in the memo first. If we find it has already been solved, we simply take the answer out and use it rather than spending time recomputing it. Usually an array serves as the memo, but a hash table (dictionary) works just as well; the idea is the same.

```cpp
#include <vector>
using std::vector;

int helper(vector<int>& memo, int n) {
    // base case
    if (n == 1 || n == 2) return 1;
    // Already computed? Just read the answer from the memo
    if (memo[n] != 0) return memo[n];
    memo[n] = helper(memo, n - 1) + helper(memo, n - 2);
    return memo[n];
}

int fib(int N) {
    if (N < 1) return 0;
    // The memo is initialized to all zeros
    vector<int> memo(N + 1, 0);
    return helper(memo, N);
}
```

Now, draw the recursion tree again, and you will see exactly what the "memo" does.

In fact, the recursive algorithm with a "memo" transforms a recursion tree full of redundancy into a recursion graph without redundancy by "pruning", which greatly reduces the number of subproblems (i.e. nodes in the recursion graph).

What is the time complexity of this recursive algorithm? The number of subproblems times the time it takes to solve one subproblem.

The number of subproblems is the total number of nodes in the graph. Since this algorithm has no redundant computation, the subproblems are f(1), f(2), f(3) … f(20); their count is proportional to the input size n = 20, so the number of subproblems is O(n).

The time to solve one subproblem, as before, involves no loops, so it is O(1). Therefore, the time complexity of this algorithm is O(n). Compared with the brute-force solution, this is a dramatic improvement. At this point, the recursive solution with a memo is as efficient as the iterative dynamic-programming solution.

In practice, this approach is essentially the same as iterative dynamic programming, except that it is called "top-down" while dynamic programming proper is called "bottom-up". What does "top-down" mean? Notice that the recursion tree (or graph) we just drew extends from top to bottom: it starts from a large original problem such as f(20), scales down until it hits bottom at f(1) and f(2), and then returns the answers layer by layer. That is "top-down".

What does "bottom-up" mean? The opposite: we start directly from the bottom, the simplest, smallest problems f(1) and f(2), and work our way up until we reach the answer we want, f(20). That is the idea of dynamic programming, and it is also why dynamic programming usually dispenses with recursion and is done by iteration.
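The bottom-up idea can be sketched as a short iterative version (a sketch under the same assumptions as the memoized code above; here a `dp` table plays the role of the memo, filled from the base cases upward):

```cpp
#include <vector>
using std::vector;

int fib(int N) {
    if (N < 1) return 0;
    if (N == 1 || N == 2) return 1;
    vector<int> dp(N + 1, 0);
    // base cases sit at the bottom of the table
    dp[1] = dp[2] = 1;
    // build upward from the smallest subproblems to f(N)
    for (int i = 3; i <= N; i++)
        dp[i] = dp[i - 1] + dp[i - 2];
    return dp[N];
}
```

No recursion, no call stack: each entry is computed exactly once from the two entries below it, which is the iterative mirror image of the top-down memoized version.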
