### UNDER CONSTRUCTION

Dynamic programming is an algorithm design paradigm that provides effective and elegant solutions to a wide class of problems. The basic idea is to recursively divide a complex problem into a number of simpler subproblems; store the solutions to each of these subproblems; and, ultimately, use the stored answers to solve the original problem. By caching solutions to subproblems, dynamic programming can sometimes avoid exponential waste.

Dynamic programming can be applied to optimization problems that exhibit two properties:

• Optimal substructure: the optimal solution to a problem can be computed from optimal solutions to smaller subproblems.
• Overlapping subproblems: recursively breaking the problem into subproblems, subproblems of subproblems, and so on, results in some subproblems being repeated.
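As a concrete illustration (the Fibonacci numbers are a standard example, not taken from a specific problem above), both properties appear in the recursion fib(n) = fib(n-1) + fib(n-2): each value is built from smaller values (optimal substructure), and a naive recursion recomputes the same values exponentially often (overlapping subproblems). A minimal sketch:

```java
public class FibNaive {
    // Naive recursion: fib(n - 1) and fib(n - 2) each recompute
    // fib(n - 2), fib(n - 3), ... so the same subproblems are
    // solved again and again -- exponentially many calls in total.
    static long fib(int n) {
        if (n < 2) {
            return n;
        }
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // 55
    }
}
```

Already around n = 45 this version becomes painfully slow, which is exactly the "exponential waste" that caching avoids.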

There are two implementation strategies for dynamic programming: top-down (memoization) and bottom-up (tabulation).

Memoization. Implements dynamic programming as a recursive procedure. To solve a subproblem, we simply call the recursive procedure on it, with one crucial optimization: whenever we finish a recursive call, we cache its result, and whenever we begin a recursive call, we return the cached result if there is one.
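A memoized version of the Fibonacci sketch above (the cache data structure is one reasonable choice, not the only one) follows this recipe exactly: check the cache on entry, store the result before returning.

```java
import java.util.HashMap;
import java.util.Map;

public class FibMemo {
    private static final Map<Integer, Long> cache = new HashMap<>();

    static long fib(int n) {
        if (n < 2) {
            return n;
        }
        Long cached = cache.get(n);   // on entry: return the cached result if there is one
        if (cached != null) {
            return cached;
        }
        long result = fib(n - 1) + fib(n - 2);
        cache.put(n, result);         // on exit: cache the result
        return result;
    }

    public static void main(String[] args) {
        // Each subproblem is now solved once, so large n is cheap.
        System.out.println(fib(50)); // 12586269025
    }
}
```

Each distinct subproblem is solved at most once, so the running time drops from exponential to linear in n.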

Bottom-up dynamic programming. Implements dynamic programming by identifying all the subproblems of a given problem and the dependencies among them. The subproblems are then solved in dependency order (i.e., starting with a problem that has no subproblems and working up to the original problem). Bottom-up dynamic programming is typically implemented using loops rather than recursion.
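For Fibonacci, the dependency order is simply increasing n, so the bottom-up version is a single loop over a table (a sketch; storing the full table is the simplest choice, though two variables would suffice here):

```java
public class FibBottomUp {
    // Subproblems fib(0..n) solved in dependency order with a loop;
    // table[i] depends only on the two entries already filled in.
    static long fib(int n) {
        if (n < 2) {
            return n;
        }
        long[] table = new long[n + 1];
        table[0] = 0;
        table[1] = 1;
        for (int i = 2; i <= n; i++) {
            table[i] = table[i - 1] + table[i - 2];
        }
        return table[n];
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // 12586269025
    }
}
```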

Backtracing. It is often convenient to solve optimization problems in two steps. The first step computes the optimal value of the solution using dynamic programming. The second step, backtracing, computes the optimal solution by using a top-down procedure. Backtracing saves the cost of computing and storing the optimal solution to every subproblem.
Top-down computation of the optimal solution typically involves selecting among a set of alternative subproblems for each problem. Since we have already cached the optimal value for each subproblem, we can use those values to select among the alternatives rather than searching through all of them.
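The two-step scheme can be sketched on minimum-coin change (the problem and coin values here are illustrative, not from the text): step one fills a bottom-up table of optimal values; step two backtraces by picking, at each amount, any coin whose subproblem's cached value is exactly one less.

```java
import java.util.ArrayList;
import java.util.List;

public class CoinChange {
    // Step 1: bottom-up table of optimal values.
    // best[a] = fewest coins summing to a (Integer.MAX_VALUE if unreachable).
    static int[] minCoins(int[] coins, int amount) {
        int[] best = new int[amount + 1];
        for (int a = 1; a <= amount; a++) {
            best[a] = Integer.MAX_VALUE;
            for (int c : coins) {
                if (c <= a && best[a - c] != Integer.MAX_VALUE) {
                    best[a] = Math.min(best[a], best[a - c] + 1);
                }
            }
        }
        return best;
    }

    // Step 2: backtrace. At each amount, select the alternative whose cached
    // value is best[a] - 1; no renewed search over solutions is needed.
    // Assumes the amount is reachable (here coin 1 guarantees it).
    static List<Integer> backtrace(int[] coins, int[] best, int amount) {
        List<Integer> solution = new ArrayList<>();
        int a = amount;
        while (a > 0) {
            for (int c : coins) {
                if (c <= a && best[a - c] == best[a] - 1) {
                    solution.add(c);
                    a -= c;
                    break;
                }
            }
        }
        return solution;
    }

    public static void main(String[] args) {
        int[] coins = {1, 4, 5};
        int[] best = minCoins(coins, 13);
        System.out.println(best[13]);                  // 3
        System.out.println(backtrace(coins, best, 13)); // [4, 4, 5]
    }
}
```

Note that only the value table is stored; the actual solution is reconstructed on demand, which is the saving the paragraph above describes.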

### Recommended Problems

#### C level

1. Consider the following recursive function:
```java
public int f(int n, int m) {
    if (n == 0) {
        return m;
    }
    return f(n - 1, m + 1) + f(n - 1, m);
}
```
What is the time complexity (in Θ notation) of f, as a function of n and m? What is its time complexity if the function is memoized?