### ALGORITHM DESIGN

Greedy algorithms. A greedy algorithm solves an optimization problem by making locally optimal choices at each step. Greedy algorithms do not, in general, compute globally optimal solutions. In some cases they do (though proving so typically requires a non-trivial argument), and in others they can still produce good (but not optimal) results.
Familiar examples (all of which do compute globally optimal solutions):

• Kruskal's minimum spanning tree algorithm (repeatedly add lowest-weight edge that does not create a cycle)
• Prim's minimum spanning tree algorithm (repeatedly add lowest-weight edge that connects a new vertex to the spanning tree)
• Huffman coding (repeatedly combine two smallest-frequency tries)
• Dijkstra's shortest path algorithm (repeatedly relax the unvisited vertex with the shortest distance to the source)
• Ford-Fulkerson max flow algorithm (repeatedly add augmenting paths)
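As a concrete illustration of the first example, here is a minimal sketch of Kruskal's algorithm in Python, using a union-find structure with path compression to detect cycles. The edge representation `(weight, u, v)` and the function name are choices made here for illustration.

```python
def kruskal(n, edges):
    """Compute an MST of a connected graph with vertices 0..n-1.

    edges is a list of (weight, u, v) tuples.
    """
    parent = list(range(n))

    def find(x):
        # Walk up to the root, flattening the path as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):   # greedy: lowest-weight edges first
        ru, rv = find(u), find(v)
        if ru != rv:                # edge does not create a cycle
            parent[ru] = rv
            mst.append((w, u, v))
    return mst
```

The greedy choice (always take the cheapest safe edge) happens to be globally optimal here, which is exactly the kind of fact that requires a separate correctness argument (the cut property).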

Network flow. Many problems can be modeled as problems on edge-weighted graphs and digraphs:

• Shortest paths
• Maxflow
• Minimum spanning tree

Reducing a problem to one of these fundamental network problems is often an effective strategy. Familiar examples:

• Compute the mincut of a flow network by finding its maxflow
• Bipartite matching (connect the source to one cell of the bipartition and the sink to the other, and compute a maximum flow; edges with non-zero flow belong to a maximum matching)
• Find paths of least importance in an image via shortest paths (seam carving)
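The bipartite-matching reduction above can be sketched directly. The following is a minimal illustration, assuming unit capacities and a BFS-based augmenting-path search (Edmonds-Karp); the vertex numbering and function name are conventions chosen here, not part of the original text.

```python
from collections import deque

def bipartite_matching(nl, nr, pairs):
    """Size of a maximum matching between left vertices 0..nl-1 and
    right vertices 0..nr-1, where pairs lists the allowed (l, r) edges."""
    n = nl + nr + 2                 # vertices: source, left side, right side, sink
    s, t = 0, n - 1
    cap = [[0] * n for _ in range(n)]
    for l in range(nl):
        cap[s][1 + l] = 1           # source -> each left vertex
    for r in range(nr):
        cap[1 + nl + r][t] = 1      # each right vertex -> sink
    for l, r in pairs:
        cap[1 + l][1 + nl + r] = 1  # left -> right for each allowed pair

    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        prev = [-1] * n
        prev[s] = s
        q = deque([s])
        while q and prev[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and prev[v] == -1:
                    prev[v] = u
                    q.append(v)
        if prev[t] == -1:
            return flow             # no augmenting path: flow is maximum
        v = t
        while v != s:               # push one unit of flow along the path
            u = prev[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1
```

Because every capacity is 1, each augmenting path adds one matched pair, so the max-flow value equals the size of a maximum matching.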

Divide-and-conquer. Divide-and-conquer algorithms solve a problem by breaking it into subproblems, recursively solving each subproblem, and combining the results.
Familiar examples:

• Mergesort (divide array in half, sort each half, and merge the sorted halves)
• Quicksort (select a pivot, partition the array into elements ≤ and ≥ the pivot, and recursively sort each part in place)

Dynamic programming. Dynamic programming is a design strategy that is similar to divide-and-conquer. The defining characteristic of dynamic programming is that the subproblems overlap, and we store the solution to each subproblem to avoid the cost of re-computing it.
Familiar examples:

• Shortest paths in directed acyclic graphs by relaxing vertices in topological order
• Bellman-Ford
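A minimal sketch of Bellman-Ford follows. The overlapping subproblems are "shortest distance to v using at most k edges": pass k of the outer loop extends the solutions from pass k − 1, and the `dist` array stores each subproblem's answer so it is never recomputed. The edge representation chosen here is `(u, v, weight)`.

```python
import math

def bellman_ford(n, edges, source):
    """Shortest distances from source in a digraph with vertices 0..n-1
    and no negative cycles. edges is a list of (u, v, weight)."""
    dist = [math.inf] * n
    dist[source] = 0
    for _ in range(n - 1):          # after pass k, dist is correct for
        for u, v, w in edges:       # paths of at most k edges
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w   # relax edge u -> v
    return dist
```

Note that, unlike Dijkstra, this handles negative edge weights (as long as there is no negative cycle).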

Randomization. A randomized algorithm is an algorithm whose run-time (or output) depends on the results of random coin flips. Randomized algorithms are typically evaluated on the basis of their expected running time (the average of all possible run-times, weighted by their probabilities).
Familiar examples:

• Quicksort (avoid worst-case performance in practice by shuffling input array)
• Quickselect (avoid worst-case performance in practice by shuffling input array)
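The quickselect example can be sketched as follows. This version uses Lomuto partitioning for brevity; the initial shuffle is what makes the expected running time linear regardless of the input order.

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (0-indexed) of a."""
    a = list(a)
    random.shuffle(a)               # defends against worst-case input orderings
    lo, hi = 0, len(a) - 1
    while True:
        # Lomuto partition: move pivot a[hi] to its final sorted position p.
        pivot = a[hi]
        p = lo
        for i in range(lo, hi):
            if a[i] < pivot:
                a[i], a[p] = a[p], a[i]
                p += 1
        a[p], a[hi] = a[hi], a[p]
        if p == k:
            return a[p]             # pivot landed exactly at rank k
        elif p < k:
            lo = p + 1              # recurse (iteratively) on the right part
        else:
            hi = p - 1              # recurse on the left part
```

Unlike quicksort, quickselect recurses into only one side of the partition, which is why its expected running time is linear rather than linearithmic.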

### Recommended Problems

#### C level

1. Let G be a directed graph. A coloring of G is a function mapping each vertex to a color, so that no two adjacent vertices are assigned the same color. Greedy graph coloring is an algorithm that computes a coloring as follows. Call the available colors the palette, and suppose that there is a natural order on the colors (e.g., the smallest color is red, followed by blue, followed by green, and so on). Traverse the vertices of the graph (in any order), at each step assigning the vertex the lowest color in the palette that is not already assigned to one of its neighbors. Does greedy graph coloring produce a globally optimal result (i.e., using the fewest colors from the palette)? Why or why not?
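For experimenting with this question, here is a minimal sketch of the greedy coloring procedure. Colors are represented as integers 0, 1, 2, ... (0 is the "smallest" color), the graph as an adjacency mapping, and the traversal order is an explicit parameter; these representations are choices made here, not part of the problem statement.

```python
def greedy_coloring(adj, order):
    """Greedily color a graph given as an adjacency mapping
    {vertex: list of neighbors}, visiting vertices in the given order."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:            # lowest color not used by a neighbor
            c += 1
        color[v] = c
    return color
```

Trying the sketch on small graphs with different traversal orders is a good way to form a conjecture before writing the argument.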

#### B level

1. Design an algorithm that generates a number uniformly at random between 0 and n that is not divisible by 7 or 11, and analyze its running time.
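One possible approach to sketch (before working out the analysis yourself) is rejection sampling: draw uniformly and retry on rejected values. The function name below is a placeholder, and the sketch assumes n ≥ 1 so that at least one acceptable value exists.

```python
import random

def random_not_div_7_11(n):
    """Uniformly random integer in [0, n] divisible by neither 7 nor 11.

    Assumes n >= 1. Since roughly 1/7 + 1/11 - 1/77 of candidates are
    rejected, each draw is accepted with probability about 60/77, so the
    expected number of iterations is a small constant.
    """
    while True:
        x = random.randint(0, n)    # uniform over {0, 1, ..., n}
        if x % 7 != 0 and x % 11 != 0:
            return x                # accepted values remain uniform
```

(Note that 0 is divisible by every integer, so it is always rejected.)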