Dynamic programming is an algorithm design paradigm that provides effective and elegant solutions to a wide class of problems. The basic idea is to recursively divide a complex problem into a number of simpler subproblems; store the solutions to each of these subproblems; and, ultimately, use the stored answers to solve the original problem. By caching solutions to subproblems, dynamic programming can sometimes avoid exponential waste.

Dynamic programming can be applied to optimization problems that exhibit two properties: optimal substructure (an optimal solution to the problem can be assembled from optimal solutions to its subproblems) and overlapping subproblems (a naive recursive solution solves the same subproblems many times over).

There are two implementation strategies for dynamic programming: top-down (memoization) and bottom-up (tabulation).

Memoization. Implements dynamic programming as a recursive procedure. To solve a subproblem, we simply call the recursive procedure on it, with one crucial optimization: whenever we finish a recursive call, we cache its result, and whenever we begin a recursive call, we return the cached result if there is one.
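This caching scheme can be sketched in Java. The Fibonacci example below is illustrative only (it is not one of the problems in this handout); the class and method names are made up for the sketch.

```java
import java.util.HashMap;
import java.util.Map;

public class Fib {
    // Cache of previously computed results: n -> fib(n).
    private static final Map<Integer, Long> cache = new HashMap<>();

    public static long fib(int n) {
        if (n <= 1) {
            return n;               // base cases: fib(0) = 0, fib(1) = 1
        }
        Long cached = cache.get(n);
        if (cached != null) {
            return cached;          // return the cached result if there is one
        }
        long result = fib(n - 1) + fib(n - 2);
        cache.put(n, result);       // cache the result as the call finishes
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // prints 12586269025
    }
}
```

Without the cache this recursion takes exponential time; with it, each value of n is computed once, so fib(50) returns immediately.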

Bottom-up dynamic programming. Implements dynamic programming by identifying all the subproblems of a given problem and the dependencies among subproblems. The subproblems are then solved in dependency order (i.e., starting with a subproblem that depends on no others, and working up to the original problem). Bottom-up dynamic programming is typically implemented using loops rather than recursion.
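As an illustrative sketch (a made-up Fibonacci example, not part of the problems below), the bottom-up strategy fills a table in dependency order with a loop:

```java
public class FibTable {
    public static long fib(int n) {
        if (n <= 1) {
            return n;               // base cases
        }
        long[] table = new long[n + 1];
        table[0] = 0;
        table[1] = 1;
        // Solve subproblems in dependency order:
        // table[i] depends only on table[i - 1] and table[i - 2],
        // both of which are already filled in when we reach i.
        for (int i = 2; i <= n; i++) {
            table[i] = table[i - 1] + table[i - 2];
        }
        return table[n];
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // prints 12586269025
    }
}
```

Note that no recursion (and no cache-lookup machinery) is needed: the loop order guarantees every dependency is solved before it is used.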

Backtracing. It is often convenient to solve optimization problems in two steps. The first step computes the optimal value of the solution using dynamic programming. The second step, backtracing, computes the optimal solution by using a top-down procedure. Backtracing saves the cost of computing and storing the optimal solution to every subproblem.
Top-down computation of the optimal solution to the problem typically involves selecting among a set of alternative subproblems for a problem. Since we have already cached the optimal values to each subproblem, we may use the optimal value to select among the alternatives rather than searching through all of them.
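The two-step approach can be sketched with a made-up coin-change example (the coin denominations, amounts, and names below are illustrative assumptions, not from the handout): step 1 tabulates only the optimal values, and step 2 backtraces through the cached values to recover one optimal solution.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CoinChange {
    // Step 1: bottom-up DP for the optimal *value* only.
    // best[v] = minimum number of coins summing to v (MAX_VALUE = unreachable).
    static int[] minCoinsTable(int[] coins, int amount) {
        int[] best = new int[amount + 1];
        Arrays.fill(best, Integer.MAX_VALUE);
        best[0] = 0;
        for (int v = 1; v <= amount; v++) {
            for (int c : coins) {
                if (c <= v && best[v - c] != Integer.MAX_VALUE) {
                    best[v] = Math.min(best[v], best[v - c] + 1);
                }
            }
        }
        return best;
    }

    // Step 2: backtracing. Walk top-down from `amount`; at each step, use the
    // cached values to select an alternative that achieves the optimum,
    // rather than re-searching all alternatives.
    static List<Integer> backtrace(int[] coins, int amount, int[] best) {
        List<Integer> solution = new ArrayList<>();
        int v = amount;
        while (v > 0) {
            for (int c : coins) {
                if (c <= v && best[v - c] == best[v] - 1) {
                    solution.add(c);   // this coin leads to an optimal subproblem
                    v -= c;
                    break;
                }
            }
        }
        return solution;
    }

    public static void main(String[] args) {
        int[] coins = {1, 4, 5};
        int[] best = minCoinsTable(coins, 13);
        System.out.println(best[13]);                  // prints 3
        System.out.println(backtrace(coins, 13, best)); // prints [4, 4, 5]
    }
}
```

Only the table of optimal values is stored; the backtrace reconstructs one optimal solution on demand, which is the saving described above.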

Recommended Problems

C level

  1. Consider the following recursive function:
          public int f(int n, int m) {
            if (n == 0) {
              return m;
            }
            return f(n - 1, m + 1) + f(n - 1, m);
          }
    What is the time complexity (in Θ notation) of f, as a function of n and m? What is its time complexity if the function is memoized?

B level

  1. Let G be a directed graph with vertices 1, 2, ... n and with positive edge weights. For any vertices x, m, y, let d(x,m,y) be the weight of the shortest path from x to y that does not pass through any intermediate vertex numbered m or greater. Write a dynamic programming recurrence for d(x,m,y).

A level

  1. A subsequence of a string a = a0a1...an is a string of the form ai1ai2...aim where 0 <= i1 < i2 < ... < im <= n. That is, a subsequence consists of characters that belong to the original string, in the same order that they appear in the original string. For example, subsequences of "Hello, world" include "Hello", "Held", "lord", ...
    The longest common subsequence of a pair of strings a and b is a string that is a subsequence of both a and b, and which has maximal length among all common subsequences of a and b. Develop an efficient algorithm for computing the length of the longest common subsequence of two strings. The time complexity must not exceed O(nm), where n and m are the lengths of a and b.
  2. Use your solution to the previous exercise to compute the longest common subsequence of two strings.