While dynamic programming can be successfully applied to a variety of optimization problems, many of them admit an even more straightforward solution using a *greedy approach*. Rather than solving multiple subproblems to find the optimum, a greedy algorithm solves only one, making the greedy choice at each step. Implementation of greedy algorithms is usually more straightforward and more efficient, but *proving* that a greedy strategy produces optimal results requires additional work.

**Problem**

Given a set of *activities* *A* of length *n*

A = <a_{1}, a_{2}, ..., a_{n}>

with *starting times*

S = <s_{1}, s_{2}, ..., s_{n}>

and *finishing times*

F = <f_{1}, f_{2}, ..., f_{n}>

such that 0 ≤ *s*_{i} < *f*_{i} < ∞, we define two activities *a*_{i} and *a*_{j} to be *compatible* if

f_{i} ≤ s_{j} or f_{j} ≤ s_{i}

i.e. one activity ends before the other begins so they do not overlap.

Find a *maximal* set of compatible activities, e.g. scheduling the most activities in a lecture hall. Note that we want to find the maximum *number* of activities, **not** necessarily the maximum *use* of the resource.
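The compatibility condition is a single comparison. A minimal sketch in Python (the activity values below are hypothetical, purely for illustration):

```python
def compatible(s_i, f_i, s_j, f_j):
    """Two activities are compatible if one finishes
    before (or exactly when) the other starts."""
    return f_i <= s_j or f_j <= s_i

# Hypothetical (start, finish) pairs:
print(compatible(1, 4, 4, 7))  # back-to-back activities are compatible
print(compatible(1, 5, 4, 7))  # overlapping activities are not
```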

**Dynamic Programming Solution**

*Step 1: Characterize optimality*

Without loss of generality, we will assume that the *a*'s are sorted in non-decreasing order of finishing times, i.e. *f*_{1} ≤ *f*_{2} ≤ ... ≤ *f*_{n}.

Define the set *S*_{ij}

S_{ij} = {a_{k} ∈ A : f_{i} ≤ s_{k} < f_{k} ≤ s_{j}}

as the subset of activities that can occur between the completion of *a*_{i} (*f*_{i}) and the start of *a*_{j} (*s*_{j}).

Note that *S*_{ij} = ∅ for *i* ≥ *j* since otherwise *f*_{i} ≤ *s*_{j} < *f*_{j} ⇒ *f*_{i} < *f*_{j} which is a contradiction for *i* ≥ *j* by the assumption that the activities are in sorted order.

Furthermore let *A*_{ij} be the *maximal* set of activities for *S*_{ij}. Using a "cut-and-paste" argument, if *A*_{ij} contains activity *a*_{k} then we can write

A_{ij} = A_{ik} ∪ {a_{k}} ∪ A_{kj}

where *A*_{ik} and *A*_{kj} must also be optimal (otherwise if we could find subsets with more activities that were still compatible with *a*_{k} then it would contradict the assumption that *A*_{ij} was optimal).

*Step 2: Define the recursive solution (top-down)*

Let *c[i,j]* = |*A*_{ij}|, then

c[i,j] = 0 if S_{ij} = ∅

c[i,j] = max{c[i,k] + c[k,j] + 1 : a_{k} ∈ S_{ij}} otherwise

i.e. compute *c[i,k]* + *c[k,j]* + 1 for each *a*_{k} ∈ *S*_{ij} (*k* = *i*+1, ..., *j*-1) and select the max.

*Step 3: Compute the maximal set size (bottom-up)*

Construct an *n* x *n* table, which can be done in polynomial time since for each *c[i,j]* we examine no more than *n* subproblems, giving an upper bound on the worst case of O(*n*^{3}).
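A bottom-up sketch of this table construction might look like the following (the sentinel activities *a*_{0} with *f*_{0} = 0 and *a*_{n+1} with *s*_{n+1} = ∞ are a common convention so the answer is *c[0, n+1]*; the function name is illustrative):

```python
import math

def activity_dp(s, f):
    """O(n^3) dynamic-programming solution for illustration.
    s, f: start/finish times sorted by non-decreasing finish time.
    Sentinels a_0 (f_0 = 0) and a_{n+1} (s_{n+1} = inf) are added
    so the final answer is c[0][n+1]."""
    n = len(s)
    s = [0] + list(s) + [math.inf]
    f = [0] + list(f) + [math.inf]
    c = [[0] * (n + 2) for _ in range(n + 2)]
    # Solve subproblems in order of increasing interval length j - i.
    for length in range(2, n + 2):
        for i in range(0, n + 2 - length):
            j = i + length
            for k in range(i + 1, j):
                # a_k belongs to S_ij only if it starts after f_i
                # and finishes before s_j.
                if f[i] <= s[k] and f[k] <= s[j]:
                    c[i][j] = max(c[i][j], c[i][k] + 1 + c[k][j])
    return c[0][n + 1]
```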

**BUT WE DON'T NEED TO DO ALL THAT WORK!** Instead at each step we could simply select (*greedily*) the activity that finishes first and is compatible with the previous activities. Intuitively this choice leaves the most time for other future activities.

**Greedy Algorithm Solution**

To use the greedy approach, we must *prove* that the greedy choice produces an optimal solution (although not necessarily the *only* solution).

Consider any non-empty subproblem *S*_{ij} with activity *a*_{m} having the earliest finishing time, i.e.

f_{m} = min{f_{k} : a_{k} ∈ S_{ij}}

then the following two conditions must hold

- *a*_{m} is used in an optimal subset of *S*_{ij}
- *S*_{im} = ∅, leaving *S*_{mj} as the only subproblem

meaning that the greedy solution produces an optimal solution.

*Proof*

Let *A*_{ij} be an optimal solution for *S*_{ij} and *a*_{k} be the first activity in *A*_{ij}.

→ If *a*_{k} = *a*_{m} then the condition holds.

→ If *a*_{k} ≠ *a*_{m} then construct *A*_{ij}^{'} = *A*_{ij} - {*a*_{k}} ∪ {*a*_{m}}. Since *f*_{m} ≤ *f*_{k} ⇒ *A*_{ij}^{'} is still optimal.

If *S*_{im} is non-empty ⇒ there exists *a*_{k} with

f_{i} ≤ s_{k} < f_{k} ≤ s_{m} < f_{m}

⇒ *f*_{k} < *f*_{m} which contradicts the assumption that *f*_{m} is the minimum finishing time. Thus *S*_{im} = ∅.

Thus instead of having 2 subproblems each with up to *j*-*i*-1 choices per problem, we have reduced it to 1 subproblem with 1 choice.

*Algorithm*

Always start by choosing the first activity (since it finishes first), then repeatedly choose the next compatible activity until none remain. The algorithm can be implemented either recursively or iteratively in O(n) time (assuming the activities are sorted by finishing times) since each activity is examined only once.
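A minimal iterative sketch of this procedure (assuming, as above, that the start/finish arrays are already sorted by non-decreasing finishing time; the function name is illustrative):

```python
def greedy_activity_selector(s, f):
    """Iterative greedy selector. Returns the (1-based) indices of a
    maximal compatible set in O(n) time, assuming s and f are sorted
    by non-decreasing finishing time."""
    selected = [1]          # the first activity always finishes first
    last_finish = f[0]
    for k in range(1, len(s)):
        # Greedy choice: take the next activity that starts
        # after the last selected one finishes.
        if s[k] >= last_finish:
            selected.append(k + 1)
            last_finish = f[k]
    return selected
```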

**Example**

Consider the following set of activities represented graphically in non-decreasing order of finishing times

Using the greedy strategy an optimal solution is {1, 4, 8, 11}. Note another optimal solution not produced by the greedy strategy is {2, 4, 8, 11}.
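Since the figure is not reproduced here, the following sketch assumes the start/finish times from the classic textbook version of this example (an assumption, not data from this document) and simply verifies that both index sets above are pairwise compatible:

```python
# Assumed start/finish times for the 11 activities (illustrative).
s = [1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16]

def is_compatible_set(indices):
    """Check that every pair of activities in the (1-based)
    index set is compatible, i.e. no two overlap."""
    acts = sorted((s[i - 1], f[i - 1]) for i in indices)
    return all(acts[k][1] <= acts[k + 1][0] for k in range(len(acts) - 1))

print(is_compatible_set([1, 4, 8, 11]))  # greedy solution
print(is_compatible_set([2, 4, 8, 11]))  # alternative optimal solution
```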

A general procedure for creating a greedy algorithm is:

- Determine the optimal substructure (like dynamic programming)
- Derive a recursive solution (like dynamic programming)
- For every recursion, show one of the optimal solutions is the *greedy* one.
- Demonstrate that by selecting the *greedy* choice, *all* other subproblems are empty.
- Develop a recursive/iterative implementation.

Usually we try to cast the problem such that we only need to consider one subproblem and that the greedy solution to the subproblem is optimal. Then the subproblem along with the greedy choice produces the optimal solution to the original problem.

Often seemingly similar problems warrant the use of one or the other approach. For example consider the *knapsack problem*. Suppose a thief wishes to maximize the value of stolen goods subject to the limitation that whatever they take must fit into a fixed size knapsack (or subject to a maximum weight).

**0-1 Problem**

If there are *n* items with value *v*_{i} and weight *w*_{i} where the *v*_{i}'s and *w*_{i}'s are integers, find a subset of items with maximum total value for total weight ≤ *W*. This version requires *dynamic programming* to solve since taking the most valuable per pound item may not produce optimal results (if it precludes taking additional items).
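A standard O(*nW*) dynamic-programming sketch for this version, with illustrative item values chosen so that greedily taking the best value-per-pound item first is suboptimal:

```python
def knapsack_01(values, weights, W):
    """Classic O(nW) dynamic program for the 0-1 knapsack.
    dp[w] holds the best total value achievable with capacity w."""
    dp = [0] * (W + 1)
    for v, wt in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for w in range(W, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[W]

# Illustrative items: greedy by value-per-pound would take the
# (60, 10) item first and end up with only 160, but the optimum
# takes the second and third items for 220.
print(knapsack_01([60, 100, 120], [10, 20, 30], 50))  # → 220
```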

**Fractional Problem**

Assume that fractions of items can be taken. This version can utilize a *greedy algorithm* where we simply take as much of the most valuable-per-pound item as possible, then move on to the next most valuable, until the weight limit is reached.
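A greedy sketch for the fractional version (item values are illustrative):

```python
def fractional_knapsack(values, weights, W):
    """Greedy fractional knapsack: take items in decreasing
    value-per-pound order, splitting the last item if needed."""
    items = sorted(zip(values, weights),
                   key=lambda it: it[0] / it[1], reverse=True)
    total = 0.0
    for v, wt in items:
        if W <= 0:
            break
        take = min(wt, W)          # take the whole item, or what fits
        total += v * (take / wt)   # fractional value of what was taken
        W -= take
    return total

# With capacity 50: all of the first two items plus 2/3 of the third.
print(fractional_knapsack([60, 100, 120], [10, 20, 30], 50))  # → 240.0
```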