
Algorithm Design & Analysis
Chapter 4: Dynamic Programming & Greedy Algorithms
T.E. (Computer)
By I. S. Borse, SSVPS BSD COE, Dhule

Outline: Chapter 4
1. Dynamic Programming (Introduction)
   i) Multistage Graph
   ii) Optimal Binary Search Tree (OBST)
   iii) 0/1 Knapsack Problem
   iv) Travelling Salesman Problem
2. Greedy Algorithms (Introduction)
   i) Job Sequencing
   ii) Optimal Merge Patterns

Dynamic Programming
Dynamic programming is an algorithm design method that can be used when the solution to a problem may be viewed as the result of a sequence of decisions.


The General Method


Dynamic programming
An algorithm design method that can be used when the solution can be viewed as the result of a sequence of decisions.
Some such problems are solvable by the greedy method, under this condition: an optimal sequence of decisions can be found by making the decisions one at a time and never making an erroneous decision.
For many other problems, it is not possible to make stepwise decisions based only on local information in the manner of the greedy method.


The General Method


Enumeration vs. dynamic programming
Enumeration: enumerating all possible decision sequences and picking out the best has prohibitive time and storage requirements.
Dynamic programming: drastically reduces time and storage by avoiding decision sequences that cannot possibly be optimal, making explicit appeal to the principle of optimality.
Definition [Principle of optimality]: An optimal sequence of decisions has the property that whatever the initial state and decision are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision.

Principle of optimality: Suppose that in solving a problem we must make a sequence of decisions D1, D2, …, Dn. If this sequence is optimal, then the last k decisions, 1 ≤ k ≤ n, must also be optimal.
e.g., the shortest path problem: if i, i1, i2, …, j is a shortest path from i to j, then i1, i2, …, j must be a shortest path from i1 to j.
In summary, if a problem can be described by a multistage graph, then it can be solved by dynamic programming.


The General Method


Greedy method vs. dynamic programming

Greedy method
Only one decision sequence is ever generated

Dynamic programming
Many decision sequences may be generated, but sequences containing suboptimal subsequences are discarded because, by the principle of optimality, they cannot be optimal.


The General Method


Notation and formulation for the principle
S0: the initial problem state.
n decisions di, 1 ≤ i ≤ n, have to be made to solve the problem, and D1 = {r1, r2, …, rj} is the set of possible decision values for d1.
Si is the problem state that results when ri is chosen, and Γi is an optimal sequence of decisions with respect to Si.
Then, when the principle of optimality holds, an optimal sequence with respect to S0 is the best of the decision sequences ri, Γi, 1 ≤ i ≤ j.

Dynamic Programming
Forward approach and backward approach:
If the recurrence relations are formulated using the forward approach, they are solved backwards, i.e., beginning with the last decision.
If the relations are formulated using the backward approach, they are solved forwards.
To solve a problem by dynamic programming:
1. Find the recurrence relations.
2. Represent the problem by a multistage graph.

Steps for Dynamic Programming


1) The problem can be divided into stages, with a decision required at each stage.
2) Each stage has a number of states associated with it.
3) The decision at one stage transforms one state into a state of the next stage.
4) Given the current state, the optimal decision for each of the remaining stages does not depend on previous states or decisions.
5) There exists a recursive relationship that identifies the optimal solution for stage j, given that stage j+1 has already been solved.
6) The final stage must be solvable by itself.

The shortest path


To find a shortest path in a multi-stage graph
[Figure: a small multistage graph from S to T with edge costs]

Apply the greedy method: the shortest path from S to T is 1 + 2 + 5 = 8.



The shortest path in multistage graphs


e.g.
[Figure: a three-stage graph with source S, intermediate vertices A through F, sink T, and the edge costs used in the computations below]

The greedy method cannot be applied to this case: (S, A, D, T), 1 + 4 + 18 = 23. The real shortest path is (S, C, F, T), 5 + 2 + 2 = 9.

Multistage Graphs
Definition: multistage graph G = (V, E)
  A directed graph in which the vertices are partitioned into k ≥ 2 disjoint sets Vi, 1 ≤ i ≤ k.
  If <u, v> ∈ E, then u ∈ Vi and v ∈ Vi+1 for some i, 1 ≤ i < k.
  |V1| = |Vk| = 1, with s (source) ∈ V1 and t (sink) ∈ Vk.
  c(i, j) = cost of edge <i, j>.
Definition: multistage graph problem
  Find a minimum-cost path from s to t, e.g., in a 5-stage graph.

Multistage Graphs

[Figure: the example 5-stage graph with 12 vertices used in the computations below]

Multistage Graphs
DP formulation
Every s-to-t path is the result of a sequence of k − 2 decisions.
The principle of optimality holds (why?).
p(i, j) = a minimum-cost path from vertex j in Vi to vertex t.
cost(i, j) = cost of path p(i, j).
cost(i, j) = min{ c(j, l) + cost(i+1, l) : l ∈ Vi+1, <j, l> ∈ E }
cost(k−1, j) = c(j, t) if <j, t> ∈ E, and ∞ otherwise.
Then compute cost(k−2, j) for all j ∈ Vk−2, then cost(k−3, j) for all j ∈ Vk−3, …, and finally cost(1, s).

Multistage Graphs

(k=5)
Stage 5: cost(5,12) = 0.0
Stage 4: cost(4,9) = min{4 + cost(5,12)} = 4
         cost(4,10) = min{2 + cost(5,12)} = 2
         cost(4,11) = min{5 + cost(5,12)} = 5
Stage 3: cost(3,6) = min{6 + cost(4,9), 5 + cost(4,10)} = 7
         cost(3,7) = min{4 + cost(4,9), 3 + cost(4,10)} = 5
         cost(3,8) = min{5 + cost(4,10), 6 + cost(4,11)} = 7

Multistage Graphs

Stage 2: cost(2,2) = min{4 + cost(3,6), 2 + cost(3,7), 1 + cost(3,8)} = 7
         cost(2,3) = min{2 + cost(3,6), 7 + cost(3,7)} = 9
         cost(2,4) = min{11 + cost(3,8)} = 18
         cost(2,5) = min{11 + cost(3,7), 8 + cost(3,8)} = 15
Stage 1: cost(1,1) = min{9 + cost(2,2), 7 + cost(2,3), 3 + cost(2,4), 2 + cost(2,5)} = 16
Important note: this avoids recomputing cost(3,6), cost(3,7), and cost(3,8) when computing cost(2,2).

Multistage Graphs
// The input is a k-stage graph with n vertices indexed in order of stages.
// c[i][j] is the cost of edge <i, j> (a large INF value when <i, j> is not
// an edge); p[1:k] returns a minimum-cost path from vertex 1 to vertex n.
// (The book's version passes a graph type G; here the cost-adjacency
// matrix c is passed directly so the routine is self-contained.)
void Fgraph(float c[][MAXSIZE], int k, int n, int p[])
{
    float cost[MAXSIZE];
    int d[MAXSIZE];
    cost[n] = 0.0f;
    for (int j = n - 1; j >= 1; j--) {   // compute cost[j]
        // Let r be a vertex such that <j, r> is an edge of G and
        // c[j][r] + cost[r] is minimum.
        int r = j + 1;
        for (int l = j + 1; l <= n; l++)
            if (c[j][l] + cost[l] < c[j][r] + cost[r]) r = l;
        cost[j] = c[j][r] + cost[r];
        d[j] = r;                        // remember the decision
    }
    // Find a minimum-cost path.
    p[1] = 1; p[k] = n;
    for (int j = 2; j <= k - 1; j++) p[j] = d[p[j - 1]];
}
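A minimal driver for Fgraph is sketched below, using the edge costs recoverable from the 5-stage example worked out above (MAXSIZE, INF, and the driver itself are illustrative assumptions, not part of the original slides):

#include <cstdio>

const int MAXSIZE = 16;
const float INF = 1.0e30f;   // "no edge" marker
float c[MAXSIZE][MAXSIZE];

// ... Fgraph as defined above ...

int main()
{
    int k = 5, n = 12, p[MAXSIZE];
    for (int i = 1; i <= n; i++)              // initialize: no edges
        for (int j = 1; j <= n; j++) c[i][j] = INF;
    // Edges of the 5-stage example, taken from the stage computations above.
    int e[][3] = { {1,2,9},{1,3,7},{1,4,3},{1,5,2},
                   {2,6,4},{2,7,2},{2,8,1},{3,6,2},{3,7,7},
                   {4,8,11},{5,7,11},{5,8,8},
                   {6,9,6},{6,10,5},{7,9,4},{7,10,3},{8,10,5},{8,11,6},
                   {9,12,4},{10,12,2},{11,12,5} };
    for (auto &ed : e) c[ed[0]][ed[1]] = (float)ed[2];
    Fgraph(c, k, n, p);
    for (int j = 1; j <= k; j++) printf("%d ", p[j]);  // prints: 1 2 7 10 12
    return 0;
}

With these costs Fgraph reports the minimum-cost path 1, 2, 7, 10, 12 of length 16, matching cost(1,1) = 16 above.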


Multistage Graphs
Backward approach
bcost(i, j) = min{ bcost(i−1, l) + c(l, j) : l ∈ Vi−1, <l, j> ∈ E }
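For completeness, a sketch of a routine solving this backward recurrence forwards (Bgraph is a hypothetical name mirroring Fgraph; same assumptions about c, MAXSIZE, and vertex numbering):

// Backward approach: bcost[j] = cost of a min-cost path from the source
// (vertex 1) to vertex j; d[j] records the predecessor chosen for j.
void Bgraph(float c[][MAXSIZE], int k, int n, int p[])
{
    float bcost[MAXSIZE];
    int d[MAXSIZE];
    bcost[1] = 0.0f;
    for (int j = 2; j <= n; j++) {      // vertices are indexed in stage order
        int r = 1;
        for (int l = 1; l < j; l++)     // pick l minimizing bcost[l] + c[l][j]
            if (bcost[l] + c[l][j] < bcost[r] + c[r][j]) r = l;
        bcost[j] = bcost[r] + c[r][j];
        d[j] = r;
    }
    p[1] = 1; p[k] = n;                 // recover the path from the sink back
    for (int j = k - 1; j >= 2; j--) p[j] = d[p[j + 1]];
}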



Dynamic programming approach


[Figure: the example graph redrawn, with d(X, T) denoting the length of a shortest path from X to T]

d(S, T) = min{1 + d(A, T), 2 + d(B, T), 5 + d(C, T)}
d(A, T) = min{4 + d(D, T), 11 + d(E, T)} = min{4 + 18, 11 + 13} = 22

Dynamic programming
d(B, T) = min{9 + d(D, T), 5 + d(E, T), 16 + d(F, T)} = min{9 + 18, 5 + 13, 16 + 2} = 18
d(C, T) = min{2 + d(F, T)} = 2 + 2 = 4
d(S, T) = min{1 + d(A, T), 2 + d(B, T), 5 + d(C, T)} = min{1 + 22, 2 + 18, 5 + 4} = 9
The above way of reasoning is called backward reasoning.

Backward approach (forward reasoning)


d(S, A) = 1, d(S, B) = 2, d(S, C) = 5
d(S, D) = min{d(S, A) + d(A, D), d(S, B) + d(B, D)} = min{1 + 4, 2 + 9} = 5
d(S, E) = min{d(S, A) + d(A, E), d(S, B) + d(B, E)} = min{1 + 11, 2 + 5} = 7
d(S, F) = min{d(S, B) + d(B, F), d(S, C) + d(C, F)} = min{2 + 16, 5 + 2} = 7

d(S, T) = min{d(S, D) + d(D, T), d(S, E) + d(E, T), d(S, F) + d(F, T)} = min{5 + 18, 7 + 13, 7 + 2} = 9


Optimal binary search trees


e.g., binary search trees for the identifiers 3, 7, 9, 12:

[Figure: four different binary search trees, (a)-(d), over the keys 3, 7, 9, 12]

Optimal binary search trees


n identifiers: a1 < a2 < a3 < … < an.
Pi, 1 ≤ i ≤ n: the probability that ai is searched.
Qi, 0 ≤ i ≤ n: the probability that x is searched, where ai < x < ai+1 (a0 = −∞, an+1 = +∞).

Σ(i=1..n) Pi + Σ(i=0..n) Qi = 1

[Figure: a binary search tree over the identifiers 4, 5, 8, 10, 11, 12, 14, with external nodes E0 through E7]

Identifiers: 4, 5, 8, 10, 11, 12, 14.
Internal node: successful search, Pi.
External node: unsuccessful search, Qi.
The expected cost of a binary search tree:

Σ(i=1..n) Pi · level(ai) + Σ(i=0..n) Qi · (level(Ei) − 1)

The level of the root is 1.

The dynamic programming approach


Let C(i, j) denote the cost of an optimal binary search tree containing ai, …, aj.
The cost of the optimal binary search tree with ak as its root:

C(1, n) = min(1≤k≤n) { Pk + [Q0 + Σ(i=1..k−1)(Pi + Qi) + C(1, k−1)] + [Qk + Σ(i=k+1..n)(Pi + Qi) + C(k+1, n)] }

[Figure: ak at the root; the left subtree contains a1 … ak−1 (weights P1 … Pk−1, Q0 … Qk−1) with cost C(1, k−1); the right subtree contains ak+1 … an (weights Pk+1 … Pn, Qk … Qn) with cost C(k+1, n)]

General formula
C(i, j) = min(i≤k≤j) { Pk + Qi−1 + Σ(m=i..k−1)(Pm + Qm) + C(i, k−1) + Qk + Σ(m=k+1..j)(Pm + Qm) + C(k+1, j) }
        = min(i≤k≤j) { C(i, k−1) + C(k+1, j) } + Qi−1 + Σ(m=i..j)(Pm + Qm)

[Figure: the same decomposition with ak at the root and subtree costs C(i, k−1) and C(k+1, j)]

Computation relationships of subtrees


e.g., n = 4:

C(1,4) depends on C(1,3) and C(2,4); these in turn depend on C(1,2), C(2,3), and C(3,4).

Time complexity: O(n^3).
When j − i = m there are (n − m) values C(i, j) to compute, and each can be computed in O(m) time:

O( Σ(1≤m≤n) m(n − m) ) = O(n^3)
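The recurrence translates directly into an O(n^3) table computation. The following is a minimal C++ sketch (illustrative, not the textbook's code); the weights are example values, and W(i, j) abbreviates Qi−1 + Σ(m=i..j)(Pm + Qm):

#include <cstdio>
#include <cfloat>

const int N = 4;                       // number of identifiers a1..aN
float P[N + 1] = {0, 3, 3, 1, 1};      // Pi (example weights, scaled)
float Q[N + 1] = {2, 3, 1, 1, 1};      // Qi, with Q[0] = Q0
float C[N + 2][N + 1];                 // C[i][j]; C[i][i-1] = 0 (empty tree)

// W(i, j) = Q[i-1] + sum over m = i..j of (P[m] + Q[m]).
float W(int i, int j)
{
    float w = Q[i - 1];
    for (int m = i; m <= j; m++) w += P[m] + Q[m];
    return w;
}

int main()
{
    for (int i = 1; i <= N + 1; i++) C[i][i - 1] = 0;   // empty trees
    for (int m = 1; m <= N; m++)                        // increasing tree size
        for (int i = 1; i + m - 1 <= N; i++) {
            int j = i + m - 1;
            float best = FLT_MAX;
            for (int k = i; k <= j; k++) {              // try each root ak
                float cost = C[i][k - 1] + C[k + 1][j];
                if (cost < best) best = cost;
            }
            C[i][j] = best + W(i, j);
        }
    printf("C(1,%d) = %g\n", N, C[1][N]);  // 32 with these example weights
}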

0/1 knapsack problem


n objects, weights W1, W2, …, Wn, profits P1, P2, …, Pn, knapsack capacity M.

maximize Σ(1≤i≤n) Pi·xi
subject to Σ(1≤i≤n) Wi·xi ≤ M, with xi = 0 or 1, 1 ≤ i ≤ n

e.g.:
  i    1   2   3
  Wi   10  3   5
  Pi   40  20  30
  M = 10

The multistage graph solution


The 0/1 knapsack problem can be described by a multistage graph.

[Figure: a multistage graph whose stages correspond to the decisions x1, x2, x3; each edge is labeled with the profit earned (e.g., 40 for x1 = 1, 20 for x2 = 1, 30 for x3 = 1, 0 for xi = 0), and each sink node with the chosen bit string, e.g., 011]

The Dynamic Programming Approach


The longest path represents the optimal solution: x1 = 0, x2 = 1, x3 = 1, with Σ Pi·xi = 20 + 30 = 50.
Let fi(Q) be the value of an optimal solution to objects 1, 2, …, i with capacity Q:
fi(Q) = max{ fi−1(Q), fi−1(Q − Wi) + Pi }
The optimal solution is fn(M).
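A minimal sketch of this recurrence on the example instance above (Wi = 10, 3, 5; Pi = 40, 20, 30; M = 10); the array-based formulation and names are illustrative:

#include <cstdio>
#include <algorithm>

int main()
{
    const int n = 3, M = 10;
    int W[n + 1] = {0, 10, 3, 5};     // weights (1-indexed)
    int P[n + 1] = {0, 40, 20, 30};   // profits
    int f[n + 1][M + 1] = {};         // f[i][Q]; f[0][Q] = 0

    for (int i = 1; i <= n; i++)
        for (int Q = 0; Q <= M; Q++) {
            f[i][Q] = f[i - 1][Q];                    // leave object i
            if (Q >= W[i])                            // or take it
                f[i][Q] = std::max(f[i][Q], f[i - 1][Q - W[i]] + P[i]);
        }
    printf("f(n, M) = %d\n", f[n][M]);                // 50 for this instance
}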


All Pairs Shortest Paths


Problem definition: determine a matrix A such that A(i, j) is the length of a shortest path from i to j.
One method: apply the single-source algorithm ShortestPaths with each vertex as the source; each vertex requires O(n^2) time, so the total time is O(n^3). Restriction: no negative edges allowed.
DP algorithm: the principle of optimality holds (does it? why?), under the weaker restriction that there is no cycle of negative length.


All Pairs Shortest Paths


DP algorithm (continued):
A^k(i, j) = the length of a shortest path from i to j going through no vertex of index greater than k.

A^k(i, j) = min{ A^(k−1)(i, j), A^(k−1)(i, k) + A^(k−1)(k, j) }, k ≥ 1, with A^0(i, j) = cost(i, j)

All Pairs Shortest Paths


Algorithm
// cost[1:n][1:n] is the cost adjacency matrix of a graph with n vertices;
// A[i][j] is the cost of a shortest path from vertex i to vertex j.
// cost[i][i] = 0.0, for 1 <= i <= n.
void AllPaths(float cost[][SIZE], float A[][SIZE], int n)
{
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++)
            A[i][j] = cost[i][j];            // copy cost into A
    for (int k = 1; k <= n; k++)             // allow k as intermediate vertex
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j++)
                A[i][j] = min(A[i][j], A[i][k] + A[k][j]);
}

There are three nested loops, each running from 1 to n, so the time complexity is O(n^3).
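A small hypothetical driver for AllPaths follows (SIZE, INF, min, and the 3-vertex cost matrix are illustrative assumptions):

#include <cstdio>
#include <algorithm>
using std::min;

const int SIZE = 8;
const float INF = 1.0e30f;

// ... AllPaths as defined above ...

int main()
{
    int n = 3;
    float cost[SIZE][SIZE], A[SIZE][SIZE];
    float c[4][4] = { {},                 // row 0 unused (1-indexed)
                      {0, 0,   4, 11},
                      {0, 6,   0,  2},
                      {0, 3, INF,  0} };
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++) cost[i][j] = c[i][j];
    AllPaths(cost, A, n);
    printf("A[1][3] = %g\n", A[1][3]);    // 6: path 1 -> 2 -> 3 (4 + 2)
}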

The Traveling Salesperson Problem


Problem: TSP is a permutation problem (not a subset problem); usually a permutation problem is harder than the corresponding subset problem, because n! > 2^n.
Given a directed graph G = (V, E) with edge costs cij:
A tour is a directed simple cycle that includes every vertex in V. The TSP is to find a tour of minimum cost.
Many applications:
1. Routing a postal van to pick up mail from mailboxes located at n different sites
2. Planning robot-arm movements to tighten the nuts at n different positions
3. Planning production in which n different commodities are manufactured on the same set of machines

The Traveling Salesperson Problem


DP formulation:
Assumption: a tour starts and ends at vertex 1. The principle of optimality holds (why?).
g(i, S) = length of a shortest path starting at vertex i, going through all vertices in S, and terminating at vertex 1.

g(1, V − {1}) = min{ c1k + g(k, V − {1, k}) : 2 ≤ k ≤ n }
g(i, S) = min{ cij + g(j, S − {j}) : j ∈ S }

Solving the recurrence relation:
g(i, ∅) = ci1, 1 ≤ i ≤ n; then obtain g(i, S) for all S of size 1, then size 2, then size 3, …, and finally g(1, V − {1}).


The Traveling Salesperson Problem


[Figure: a 4-vertex directed graph with the cost matrix below]

Cost matrix:
    0  10  15  20
    5   0   9  10
    6  13   0  12
    8   8   9   0

For |S| = 0:
  g(2, ∅) = c21 = 5    g(3, ∅) = c31 = 6    g(4, ∅) = c41 = 8
For |S| = 1:
  g(2, {3}) = c23 + g(3, ∅) = 15    g(2, {4}) = c24 + g(4, ∅) = 18
  g(3, {2}) = c32 + g(2, ∅) = 18    g(3, {4}) = c34 + g(4, ∅) = 20
  g(4, {2}) = c42 + g(2, ∅) = 13    g(4, {3}) = c43 + g(3, ∅) = 15

The Traveling Salesperson Problem


For |S| = 2:
  g(2, {3,4}) = min{c23 + g(3, {4}), c24 + g(4, {3})} = 25
  g(3, {2,4}) = min{c32 + g(2, {4}), c34 + g(4, {2})} = 25
  g(4, {2,3}) = min{c42 + g(2, {3}), c43 + g(3, {2})} = 23
For |S| = 3:
  g(1, {2,3,4}) = min{c12 + g(2, {3,4}), c13 + g(3, {2,4}), c14 + g(4, {2,3})}
                = min{35, 40, 43} = 35
Time complexity: let N be the number of g(i, S) values that must be computed before g(1, V − {1}) can be computed:

N = Σ(k=0..n−2) (n − 1) · C(n−2, k) = (n − 1) · 2^(n−2)

Total time = O(n^2 · 2^n). This is better than enumerating all n! different tours.
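The recurrence can be implemented with bitmask subsets. The following is an illustrative C++ sketch of the g(i, S) table (not the textbook's code), run on the 4-vertex cost matrix above:

#include <cstdio>
#include <algorithm>

int main()
{
    const int n = 4, INF = 1 << 28;
    int c[n + 1][n + 1] = { {},
        {0, 0, 10, 15, 20},
        {0, 5,  0,  9, 10},
        {0, 6, 13,  0, 12},
        {0, 8,  8,  9,  0} };
    // g[i][S]: shortest path from i through every vertex of S, then back to
    // vertex 1; S is a subset of {2..n}, with bit (j-2) standing for vertex j.
    static int g[n + 1][1 << (n - 1)];
    for (int i = 2; i <= n; i++) g[i][0] = c[i][1];        // g(i, empty) = ci1
    for (int S = 1; S < (1 << (n - 1)); S++)               // increasing |S|
        for (int i = 2; i <= n; i++) {
            if (S & (1 << (i - 2))) continue;              // i must not be in S
            int best = INF;
            for (int j = 2; j <= n; j++)                   // min over j in S
                if (S & (1 << (j - 2)))
                    best = std::min(best, c[i][j] + g[j][S ^ (1 << (j - 2))]);
            g[i][S] = best;
        }
    int full = (1 << (n - 1)) - 1, ans = INF;              // g(1, V - {1})
    for (int k = 2; k <= n; k++)
        ans = std::min(ans, c[1][k] + g[k][full ^ (1 << (k - 2))]);
    printf("minimum tour cost = %d\n", ans);               // 35 for this matrix
}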

The traveling salesperson (TSP) problem


e.g., a directed graph:

[Figure: a 4-vertex directed graph with the edge costs of the matrix below]

Cost matrix:
        1   2   3   4
   1    ∞   2  10   5
   2    2   ∞   9   ∞
   3    4   3   ∞   4
   4    6   8   7   ∞

The multistage graph solution


A multistage graph can describe all possible tours of a directed graph.

[Figure: the multistage graph of partial tours (1) → (1,2), (1,3), (1,4) → (1,2,3), (1,2,4), (1,3,2), (1,3,4), (1,4,2), (1,4,3) → complete tours, with edges weighted by the corresponding cij]

Find the shortest path: (1, 4, 3, 2, 1), with cost 5 + 7 + 3 + 2 = 17.

The representation of a node


Suppose that we have 6 vertices in the graph. We can combine {1, 2, 3, 4} and {1, 3, 2, 4} into one node.

[Figure: (a) the nodes (1,2,3) → (1,2,3,4) and (1,3,2) → (1,3,2,4) are combined into (b) the single node (4),(5,6), reached from (2),(4,5,6) and (3),(4,5,6)]

(3),(4,5,6) means that the last vertex visited is 3 and the remaining vertices to be visited are (4, 5, 6).


The dynamic programming approach


Let g(i, S) be the length of a shortest path starting at vertex i, going through all vertices in S, and terminating at vertex 1.
The length of an optimal tour:

g(1, V − {1}) = min{ c1k + g(k, V − {1, k}) : 2 ≤ k ≤ n }

The general form:

g(i, S) = min{ cij + g(j, S − {j}) : j ∈ S }

Time complexity: for each of the n − 1 starting vertices i there are C(n−2, k) sets S of size k, and each g(i, S) with |S| = k takes O(k) time, so the total time is

Σ(k=1..n−2) (n − 1) · C(n−2, k) · k = O(n^2 · 2^n)

Review: Dynamic Programming


Dynamic programming is another strategy for designing algorithms. Use it when the problem breaks down into recurring small subproblems.


Review: Dynamic Programming


Summary of the basic idea:
Optimal substructure: an optimal solution to the problem consists of optimal solutions to subproblems.
Overlapping subproblems: few subproblems in total, but many recurring instances of each.
Solve bottom-up, building a table of solved subproblems that are used to solve larger ones.

Variations:
Table could be 3-dimensional, triangular, a tree, etc.


Greedy Algorithms


Overview
Like dynamic programming, used to solve optimization problems. Problems exhibit optimal substructure (like DP). Problems also exhibit the greedy-choice property.
When we have a choice to make, make the one that looks best right now. Make a locally optimal choice in hope of getting a globally optimal solution.

Greedy Strategy
The choice that seems best at the moment is the one we go with.
Prove that when there is a choice to make, one of the optimal choices is the greedy choice; therefore it's always safe to make the greedy choice.
Show that all but one of the subproblems resulting from the greedy choice are empty.


Elements of Greedy Algorithms


Greedy-choice Property.
A globally optimal solution can be arrived at by making a locally optimal (greedy) choice.

Optimal Substructure.


Greedy Algorithms
A greedy algorithm always makes the choice that looks best at the moment
My everyday examples:
Walking to the Corner Playing a bridge hand

The hope: a locally optimal choice will lead to a globally optimal solution. For some problems, it works.

Dynamic programming can be overkill; greedy algorithms tend to be easier to code



Review: The Knapsack Problem


More formally, the 0-1 knapsack problem:
The thief must choose among n items, where the ith item is worth vi dollars and weighs wi pounds. Carrying at most W pounds, maximize value.
Note: assume vi, wi, and W are all integers; "0-1" because each item must be taken or left in its entirety.

A variation, the fractional knapsack problem:


The thief can take fractions of items. Think of items in the 0-1 problem as gold ingots, and in the fractional problem as buckets of gold dust.

Review: The Knapsack Problem And Optimal Substructure


Both variations exhibit optimal substructure. To show this for the 0-1 problem, consider the most valuable load weighing at most W pounds.
If we remove item j from the load, what do we know about the remaining load? Answer: the remainder must be the most valuable load weighing at most W − wj that the thief could take, excluding item j.


Solving The Knapsack Problem


The optimal solution to the fractional knapsack problem can be found with a greedy algorithm
How?

The optimal solution to the 0-1 problem cannot be found with the same greedy strategy
Greedy strategy: take items in order of dollars/pound. Example: 3 items weighing 10, 20, and 30 pounds; the knapsack can hold 50 pounds.
Suppose item 2 is worth $100. Assign values to the other items so that the greedy strategy will fail.
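A sketch of the greedy rule for the fractional variant (the item values are illustrative; the point is the sort by dollars per pound):

#include <cstdio>
#include <algorithm>

struct Item { double v, w; };   // value (dollars), weight (pounds)

int main()
{
    Item items[] = { {60, 10}, {100, 20}, {120, 30} };  // example values
    double W = 50, total = 0;
    // Sort by value per pound, densest first.
    std::sort(items, items + 3, [](const Item& a, const Item& b) {
        return a.v / a.w > b.v / b.w;
    });
    for (const Item& it : items) {
        double take = std::min(it.w, W);   // take as much as fits
        total += it.v * (take / it.w);
        W -= take;
    }
    printf("fractional optimum = %g\n", total);  // 240 here
}

With these example values the 0-1 greedy would pick only the 10- and 20-pound items ($160), while taking items 2 and 3 yields $220, illustrating why the greedy strategy fails for the 0-1 variant.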


Greedy Choice Property


Dynamic programming? Memoize? Yes, but...
The activity selection problem also exhibits the greedy-choice property:
Locally optimal choice ⇒ globally optimal solution.
Theorem 17.1: if S is an activity selection problem sorted by finish time, then there is an optimal solution A ⊆ S such that {1} ⊆ A.
Sketch of proof: if there is an optimal solution B that does not contain activity 1, we can always replace the first activity in B with activity 1 (why?). The result has the same number of activities, and is thus optimal.
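A minimal sketch of the resulting greedy algorithm (the activity data is illustrative; the input is assumed already sorted by finish time):

#include <cstdio>

int main()
{
    // Activities sorted by finish time: (start, finish).
    int s[] = {1, 3, 0, 5, 8, 5};
    int f[] = {2, 4, 6, 7, 9, 9};
    int n = 6, lastFinish = 0;
    for (int i = 0; i < n; i++)
        if (s[i] >= lastFinish) {        // compatible with the last chosen one
            printf("pick activity %d [%d, %d)\n", i + 1, s[i], f[i]);
            lastFinish = f[i];
        }
}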

Review: The Knapsack Problem


The famous knapsack problem:
A thief breaks into a museum. Fabulous paintings, sculptures, and jewels are everywhere. The thief has a good eye for the value of these objects, and knows that each will fetch hundreds or thousands of dollars on the clandestine art collectors market. But, the thief has only brought a single knapsack to the scene of the robbery, and can take away only what he can carry. What items should the thief take to maximize the haul?

The Knapsack Problem: Greedy Vs. Dynamic


The fractional problem can be solved greedily The 0-1 problem cannot be solved with a greedy approach
As you have seen, however, it can be solved with dynamic programming

3/8/2009

ADA Unit -4

I.S Borse

58
