By I.S.Borse SSVPS BSD COE ,DHULE
3/8/2009 ADA Unit -4 I.S Borse 1
Outline Chapter 4
1. Dynamic Programming (Introduction)
   i) Multistage Graph
   ii) Optimal Binary Search Tree (OBST)
   iii) 0/1 Knapsack Problem
   iv) Travelling Salesman Problem
2. Greedy Algorithms (Introduction)
   i) Job Sequencing
   ii) Optimal Merge Patterns
Dynamic Programming
Dynamic Programming is an algorithm design method that can be used when the solution to a problem may be viewed as the result of a sequence of decisions.
Principle of optimality: Suppose that in solving a problem we have to make a sequence of decisions D1, D2, …, Dn. If this sequence is optimal, then the last k decisions, 1 ≤ k ≤ n, must also be optimal.
e.g. the shortest-path problem: if i, i1, i2, …, j is a shortest path from i to j, then i1, i2, …, j must be a shortest path from i1 to j.
In summary, if a problem can be described by a multistage graph, then it can be solved by dynamic programming.
Principle of optimality
Greedy method: only one decision sequence is ever generated.
Dynamic programming: many decision sequences may be generated, but sequences containing suboptimal subsequences are discarded, since by the principle of optimality they cannot be optimal.
Dynamic Programming
Forward approach and backward approach: Note that if the recurrence relations are formulated using the forward approach, then the relations are solved backwards, i.e., beginning with the last decision. On the other hand, if the relations are formulated using the backward approach, they are solved forwards.
To solve a problem by dynamic programming:
- Find out the recurrence relations.
- Represent the problem by a multistage graph.
[Figure: a weighted graph with source S, sink T, and intermediate vertices A–F]
The greedy method cannot be applied to this case: (S, A, D, T) 1+4+18 = 23. The real shortest path is (S, C, F, T) 5+2+2 = 9.
Multistage Graphs
Definition: multistage graph G(V, E)
A directed graph in which the vertices are partitioned into k ≥ 2 disjoint sets Vi, 1 ≤ i ≤ k
If <u, v> ∈ E, then u ∈ Vi and v ∈ Vi+1 for some i, 1 ≤ i < k
|V1| = |Vk| = 1, with s (source) ∈ V1 and t (sink) ∈ Vk
c(i, j) = cost of edge <i, j>
Definition: Multistage Graph Problem
Find a minimum-cost path from s to t, e.g., in a 5-stage graph
Multistage Graphs
Multistage Graphs
DP formulation
Every s-to-t path is the result of a sequence of k−2 decisions. The principle of optimality holds (why?).
p(i, j) = a minimum-cost path from vertex j in Vi to vertex t
cost(i, j) = cost of path p(i, j)
cost(i, j) = min over l ∈ Vi+1 with <j, l> ∈ E of { c(j, l) + cost(i+1, l) }
cost(k−1, j) = c(j, t) if <j, t> ∈ E, ∞ otherwise
Then compute cost(k−2, j) for all j ∈ Vk−2, then cost(k−3, j) for all j ∈ Vk−3, …, and finally cost(1, s).
Multistage Graphs
(k=5)
Stage 5: cost(5,12) = 0
Stage 4: cost(4,9) = min{4+cost(5,12)} = 4; cost(4,10) = min{2+cost(5,12)} = 2; cost(4,11) = min{5+cost(5,12)} = 5
Stage 3: cost(3,6) = min{6+cost(4,9), 5+cost(4,10)} = 7; cost(3,7) = min{4+cost(4,9), 3+cost(4,10)} = 5; cost(3,8) = min{5+cost(4,10), 6+cost(4,11)} = 7
Multistage Graphs
Stage 2: cost(2,2) = min{4+cost(3,6), 2+cost(3,7), 1+cost(3,8)} = 7; cost(2,3) = min{2+cost(3,6), 7+cost(3,7)} = 9; cost(2,4) = min{11+cost(3,8)} = 18; cost(2,5) = min{11+cost(3,7), 8+cost(3,8)} = 15
Stage 1: cost(1,1) = min{9+cost(2,2), 7+cost(2,3), 3+cost(2,4), 2+cost(2,5)} = 16
Important note: storing each cost(i, j) once computed avoids recomputing cost(3,6), cost(3,7), and cost(3,8) while computing cost(2,2) through cost(2,5).
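The stage-by-stage computation above can be written out directly. A minimal Python sketch (not the slides' Fgraph code), with the edge costs transcribed from the cost(i, j) computations above:

```python
# Forward-approach DP on the 5-stage example graph; edge costs are
# transcribed from the cost(i, j) computations in the text.
edges = {
    (1, 2): 9, (1, 3): 7, (1, 4): 3, (1, 5): 2,
    (2, 6): 4, (2, 7): 2, (2, 8): 1,
    (3, 6): 2, (3, 7): 7,
    (4, 8): 11,
    (5, 7): 11, (5, 8): 8,
    (6, 9): 6, (6, 10): 5,
    (7, 9): 4, (7, 10): 3,
    (8, 10): 5, (8, 11): 6,
    (9, 12): 4, (10, 12): 2, (11, 12): 5,
}
n = 12
cost = {n: 0}      # cost[j] = cheapest cost from j to the sink
decision = {}      # decision[j] = next vertex on a cheapest path
for j in range(n - 1, 0, -1):        # vertices are numbered stage by stage
    r = min((l for (u, l) in edges if u == j),
            key=lambda l: edges[(j, l)] + cost[l])
    cost[j] = edges[(j, r)] + cost[r]
    decision[j] = r

path = [1]
while path[-1] != n:
    path.append(decision[path[-1]])
print(cost[1], path)   # 16 [1, 2, 7, 10, 12]
```

This reproduces cost(1,1) = 16, and the stored decisions recover one minimum-cost path, just as the d[] array does in Fgraph.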
Multistage Graphs
void Fgraph(graph G, int k, int n, int p[])
// The input is a k-stage graph G = (V, E) with n vertices indexed in order
// of stages; c[i][j] is the cost of edge <i, j> (INF if <i, j> is not in E).
// On return, p[1:k] holds a minimum-cost path from vertex 1 to vertex n.
{
    float cost[MAXSIZE];
    int d[MAXSIZE];
    cost[n] = 0.0;
    for (int j = n - 1; j >= 1; j--) {
        // Choose r so that <j, r> is an edge of G and c[j][r] + cost[r] is minimum.
        int r = -1;
        for (int t = j + 1; t <= n; t++)
            if (c[j][t] != INF && (r == -1 || c[j][t] + cost[t] < c[j][r] + cost[r]))
                r = t;
        cost[j] = c[j][r] + cost[r];
        d[j] = r;                     // remember the decision made at j
    }
    // Recover a minimum-cost path from the decisions d[].
    p[1] = 1; p[k] = n;
    for (int j = 2; j <= k - 1; j++) p[j] = d[p[j - 1]];
}
Multistage Graphs
Backward approach
bcost(i, j) = min over l ∈ Vi−1 with <l, j> ∈ E of { bcost(i−1, l) + c(l, j) }
[Figure: the example graph S, A, B, C, D, E, F, T again, annotated with the costs-to-go d(A, T), d(B, T), d(C, T), d(D, T), d(E, T), d(F, T)]
Dynamic programming
d(B, T) = min{9+d(D, T), 5+d(E, T), 16+d(F, T)} = min{9+18, 5+13, 16+2} = 18
d(C, T) = min{2+d(F, T)} = 2+2 = 4
d(S, T) = min{1+d(A, T), 2+d(B, T), 5+d(C, T)} = min{1+22, 2+18, 5+4} = 9
The above way of reasoning is called backward reasoning.
Forward reasoning: d(S, T) = min{d(S, D)+d(D, T), d(S, E)+d(E, T), d(S, F)+d(F, T)} = min{5+18, 7+13, 7+2} = 9
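The backward reasoning above amounts to a memoized recursion. A minimal Python sketch, with the edge costs transcribed from the d(·) computations in the text:

```python
from functools import lru_cache

# Successor lists of the example graph (edge costs taken from the text).
succ = {
    'S': [('A', 1), ('B', 2), ('C', 5)],
    'A': [('D', 4)],
    'B': [('D', 9), ('E', 5), ('F', 16)],
    'C': [('F', 2)],
    'D': [('T', 18)],
    'E': [('T', 13)],
    'F': [('T', 2)],
}

@lru_cache(maxsize=None)
def d(v):
    """d(v) = d(v, T): cheapest cost from v to T (backward reasoning)."""
    if v == 'T':
        return 0
    return min(c + d(w) for (w, c) in succ[v])

print(d('A'), d('B'), d('C'), d('S'))   # 22 18 4 9
```

The lru_cache memoization is what distinguishes this from plain recursion: each d(v) is computed once, exactly as in the tabular DP.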
Optimal Binary Search Tree (OBST)
[Figure: four possible binary search trees, (a)–(d), for the same identifier set]
Expected cost of a binary search tree:
Σ (i = 1 to n) Pi · level(ai) + Σ (i = 0 to n) Qi · (level(Ei) − 1)
[Figure: a binary search tree for the identifiers 4, 5, 8, 10, 11, 12, 14, with external nodes E0–E7]
Identifiers: 4, 5, 8, 10, 11, 12, 14
Internal node: successful search, Pi
External node: unsuccessful search, Qi
Choosing ak as the root:
C(1, n) = min (1 ≤ k ≤ n) { Pk + [Q0 + Σ (i = 1 to k−1) (Pi + Qi) + C(1, k−1)] + [Qk + Σ (i = k+1 to n) (Pi + Qi) + C(k+1, n)] }
[Figure: a tree with root ak, left subtree C(1, k−1), right subtree C(k+1, n)]
General formula
C(i, j) = min (i ≤ k ≤ j) { Pk + Q(i−1) + Σ (m = i to k−1) (Pm + Qm) + C(i, k−1) + Qk + Σ (m = k+1 to j) (Pm + Qm) + C(k+1, j) }
        = min (i ≤ k ≤ j) { C(i, k−1) + C(k+1, j) } + Q(i−1) + Σ (m = i to j) (Pm + Qm)
[Figure: a tree with root ak, left subtree C(i, k−1), right subtree C(k+1, j)]
[Figure: C(1,3) and C(2,4) are computed from C(1,2), C(2,3), C(3,4); the C(i, j) are computed in order of increasing j − i]
Time complexity: O(n³)
When j − i = m, there are (n − m) values C(i, j) to compute, and each can be computed in O(m) time:
O( Σ (1 ≤ m ≤ n) m(n − m) ) = O(n³)
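The O(n³) recurrence can be turned into a short table-filling routine. A sketch, assuming as test data the classic 4-identifier instance with integer weights P = (3, 3, 1, 1) and Q = (2, 3, 1, 1, 1) (probabilities × 16) — an assumption, since the slides do not give a numeric instance:

```python
def obst_cost(P, Q):
    """C(i, j) = min over i <= k <= j of {C(i, k-1) + C(k+1, j)} + W(i, j),
    with W(i, j) = Q[i-1] + sum(P[m] + Q[m] for m = i..j) and C(i, i-1) = 0.
    P is used 1-indexed (P[1..n]); Q is 0-indexed (Q[0..n])."""
    n = len(P) - 1
    C = [[0] * (n + 2) for _ in range(n + 2)]   # C[i][i-1] stays 0
    W = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 2):
        W[i][i - 1] = Q[i - 1]
    for gap in range(n):                        # increasing j - i
        for i in range(1, n - gap + 1):
            j = i + gap
            W[i][j] = W[i][j - 1] + P[j] + Q[j]
            C[i][j] = min(C[i][k - 1] + C[k + 1][j]
                          for k in range(i, j + 1)) + W[i][j]
    return C[1][n]

P = [0, 3, 3, 1, 1]      # P[0] unused; assumed example weights
Q = [2, 3, 1, 1, 1]
print(obst_cost(P, Q))   # 32, i.e. an expected cost of 32/16
```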
0/1 Knapsack Problem
Maximize Σ (i = 1 to n) Pi·xi subject to Σ (i = 1 to n) Wi·xi ≤ M, xi = 0 or 1, 1 ≤ i ≤ n
e.g.
  i : 1  2  3
  Wi: 10 3  5
  Pi: 40 20 30
  M = 10
[Figure: the tree of 0/1 decisions x1, x2, x3 for this instance; the leaves are labelled with assignments such as 100, 011, 010, 001, 000]
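For the instance above, the best assignment can also be found with the standard one-dimensional DP table (a sketch, not the decision-tree method pictured on the slide):

```python
def knapsack01(W, P, M):
    """f[m] = best profit achievable with capacity m, each item used at most once."""
    f = [0] * (M + 1)
    for w, p in zip(W, P):
        # Iterate capacities downwards so each item is taken at most once.
        for m in range(M, w - 1, -1):
            f[m] = max(f[m], f[m - w] + p)
    return f[M]

# The instance from the slide: W = (10, 3, 5), P = (40, 20, 30), M = 10.
print(knapsack01([10, 3, 5], [40, 20, 30], 10))   # 50, reached by x = (0, 1, 1)
```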
All-Pairs Shortest Paths
Ak(i, j) = min{ Ak−1(i, j), Ak−1(i, k) + Ak−1(k, j) },  k ≥ 1
There are three nested loops, each running from 1 to n, so the time complexity is O(n³).
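The three nested loops can be written out directly; the 3×3 cost matrix below is an assumed example (inf marks a missing edge):

```python
INF = float('inf')

def all_pairs(A):
    """Ak(i, j) = min(Ak-1(i, j), Ak-1(i, k) + Ak-1(k, j)) for each intermediate k."""
    n = len(A)
    A = [row[:] for row in A]          # keep the input matrix intact
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if A[i][k] + A[k][j] < A[i][j]:
                    A[i][j] = A[i][k] + A[k][j]
    return A

# Assumed 3-vertex example.
A = [[0, 4, 11],
     [6, 0, 2],
     [3, INF, 0]]
print(all_pairs(A))   # [[0, 4, 6], [5, 0, 2], [3, 7, 0]]
```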
Travelling Salesman Problem
g(1, V − {1}) = min (2 ≤ k ≤ n) { c1k + g(k, V − {1, k}) }
g(i, S) = min (j ∈ S) { cij + g(j, S − {j}) }
Solving the recurrence relation: g(i, ∅) = ci1, 1 ≤ i ≤ n
Then obtain g(i, S) for all S of size 1, then for all S of size 2, then for all S of size 3, …, and finally obtain g(1, V − {1}).
[Figure: a directed graph on vertices 1–4 with its cost matrix]
Cost matrix c[i][j]:
  0 10 12 20
  5  0  9 10
  6 13  0 12
  8  8  9  0
For |S| = 0: g(2, ∅) = c21 = 5; g(3, ∅) = c31 = 6; g(4, ∅) = c41 = 8
For |S| = 1: g(2, {3}) = c23 + g(3, ∅) = 15; g(2, {4}) = c24 + g(4, ∅) = 18; g(3, {2}) = c32 + g(2, ∅) = 18; g(3, {4}) = c34 + g(4, ∅) = 20; g(4, {2}) = c42 + g(2, ∅) = 13; g(4, {3}) = c43 + g(3, ∅) = 15
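The recurrence can be memoized over subsets. A sketch on the 4-vertex example; the c[i][j] values below follow the g(·) computations above, but the first row (costs out of vertex 1) is read off the damaged cost-matrix figure, so treat those three entries as an assumption:

```python
from functools import lru_cache

# c[i][j] with 0-based indices: vertex 1 of the text is index 0, etc.
c = [[0, 10, 12, 20],
     [5,  0,  9, 10],
     [6, 13,  0, 12],
     [8,  8,  9,  0]]

@lru_cache(maxsize=None)
def g(i, S):
    """g(i, S): cheapest path that starts at i, visits every vertex in S
    exactly once, and ends back at vertex 0 (vertex 1 of the text)."""
    if not S:
        return c[i][0]                 # g(i, empty) = c_i1
    return min(c[i][j] + g(j, tuple(k for k in S if k != j)) for j in S)

print(g(0, (1, 2, 3)))   # 35, the optimal tour length
```

Note that g(1, (2,)) returns 15, matching g(2, {3}) = 15 computed above (the indices are shifted by one because Python is 0-based).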
The number of g(i, S) values to compute is
N = Σ (k = 0 to n−2) (n − 1) · C(n−2, k) = (n − 1) · 2^(n−2)
Total time = O(n²·2ⁿ). This is better than enumerating all n! different tours.
[Figure: a 4-vertex directed graph with its cost matrix, used to build the multistage graph of all tours]
A multistage graph can describe all possible tours of a directed graph. Finding the shortest path in it, (1, 4, 3, 2, 1) with cost 5+7+3+2 = 17, gives the optimal tour.
[Figure: parts (a) and (b) of the multistage graph of tours]
A node labelled (3),(4,5,6) means that the last vertex visited is 3 and the remaining vertices to be visited are (4, 5, 6).
g(1, V − {1}) = min (2 ≤ k ≤ n) { c1k + g(k, V − {1, k}) }
g(i, S) = min (j ∈ S) { cij + g(j, S − {j}) }
Time complexity: for |S| = k there are (n − 1) · C(n−2, k) values g(i, S), each computed in O(n − k) time, so the total is
O( Σ over k of (n − 1) · C(n−2, k) · (n − k) ) = O(n²·2ⁿ)
Variations:
Table could be 3-dimensional, triangular, a tree, etc.
Greedy Algorithms
Overview
Like dynamic programming, greedy algorithms are used to solve optimization problems. The problems exhibit optimal substructure (as in DP), and they also exhibit the greedy-choice property.
When we have a choice to make, make the one that looks best right now. Make a locally optimal choice in hope of getting a globally optimal solution.
Greedy Strategy
The choice that seems best at the moment is the one we go with. Prove that when there is a choice to make, one of the optimal choices is the greedy choice; therefore, it's always safe to make the greedy choice. Show that all but one of the subproblems resulting from the greedy choice are empty.
Optimal Substructure.
Greedy Algorithms
A greedy algorithm always makes the choice that looks best at the moment
My everyday examples:
Walking to the corner
Playing a bridge hand
The hope: a locally optimal choice will lead to a globally optimal solution For some problems, it works
The optimal solution to the 0-1 knapsack problem cannot be found with the same greedy strategy.
Greedy strategy: take items in order of dollars/pound. Example: 3 items weighing 10, 20, and 30 pounds; the knapsack can hold 50 pounds.
Suppose item 2 is worth $100. Assign values to the other items so that the greedy strategy will fail.
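One assignment that makes the ratio-greedy fail (assumed values for the exercise, the classic counterexample): let the 10-, 20-, and 30-pound items be worth $60, $100, and $120.

```python
from itertools import combinations

weights = [10, 20, 30]
values = [60, 100, 120]    # dollars/pound: 6, 5, 4
M = 50

# Greedy: take whole items in decreasing dollars/pound order while they fit.
order = sorted(range(3), key=lambda i: values[i] / weights[i], reverse=True)
cap, greedy_value = M, 0
for i in order:
    if weights[i] <= cap:
        cap -= weights[i]
        greedy_value += values[i]

# Exact 0-1 optimum by brute force over all subsets of the 3 items.
best = max(sum(values[i] for i in s)
           for r in range(4)
           for s in combinations(range(3), r)
           if sum(weights[i] for i in s) <= M)

print(greedy_value, best)   # 160 220: greedy takes items 1 and 2, optimum is items 2 and 3
```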