Dynamic Programming


Description

This document contains detailed information about dynamic programming, the 0/1 knapsack problem, the forward and backward knapsack approaches, optimal binary search trees (OBST), and the traveling salesperson problem (TSP) using dynamic programming.

Transcript of Dynamic Programming

Page 1: Dynamic Programming

DYNAMIC PROGRAMMING

• Problems like the knapsack problem and the shortest-path problem can be solved by the greedy method, in which optimal decisions are made one at a time.

• For many problems, it is not possible to make stepwise decisions in such a manner that the sequence of decisions made is optimal.

Page 2: Dynamic Programming

DYNAMIC PROGRAMMING (Contd..)

Example:

• Suppose a shortest path from vertex i to vertex j is to be found.

• Let Ai be the set of vertices adjacent from vertex i. Which of the vertices of Ai should be the second vertex on the path?

• One way to solve the problem is to enumerate all decision sequences and pick out the best.

• In dynamic programming the principle of optimality is used to reduce the decision sequences.

Page 3: Dynamic Programming

DYNAMIC PROGRAMMING (Contd..)

Principle of optimality:

• An optimal sequence of decisions has the property that whatever the initial state and decisions are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision.

• In the greedy method only one decision sequence is generated.

• In dynamic programming many decision sequences may be generated.

Page 4: Dynamic Programming

0/1 knapsack problem

Example [0/1 knapsack problem]: the xi's in the 0/1 knapsack problem are restricted to either 0 or 1. Using KNAP(l, j, y) to represent the problem:

Maximize ∑_{l≤i≤j} pixi

subject to ∑_{l≤i≤j} wixi ≤ y, ..................(1)

xi = 0 or 1, l ≤ i ≤ j

The 0/1 knapsack problem is KNAP(1, n, M).
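As Page 2 noted, one way to solve KNAP(1, n, M) is to enumerate all decision sequences and pick out the best; for n objects there are 2^n of them. A minimal Python sketch of this brute-force baseline (the function name and instance data are illustrative only; the instance is the one used in the examples later in these notes):

from itertools import product

def knap_brute_force(p, w, M):
    """Enumerate all 2^n 0/1 decision vectors and keep the best feasible one."""
    n = len(p)
    best_profit, best_x = 0, (0,) * n
    for x in product((0, 1), repeat=n):                # all decision sequences
        weight = sum(wi * xi for wi, xi in zip(w, x))
        profit = sum(pi * xi for pi, xi in zip(p, x))
        if weight <= M and profit > best_profit:       # feasible and better
            best_profit, best_x = profit, x
    return best_profit, best_x

# n = 3, (p1,p2,p3) = (1,2,5), (w1,w2,w3) = (2,3,4), M = 6
print(knap_brute_force([1, 2, 5], [2, 3, 4], 6))       # -> (6, (1, 0, 1))

Dynamic programming avoids this exponential enumeration by using the principle of optimality to reduce the decision sequences that must be examined.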

Page 5: Dynamic Programming

0/1 knapsack problem: principle of optimality

• Let y1, y2, ..., yn be an optimal sequence of 0/1 values for x1, x2, ..., xn respectively.

• If y1 = 0, then y2, y3, ..., yn must constitute an optimal sequence for KNAP(2, n, M).

• If it does not, then y1, y2, ..., yn is not an optimal sequence for KNAP(1, n, M).

• If y1 = 1, then y2, ..., yn must be an optimal sequence for KNAP(2, n, M - w1).

• If it is not, there is another 0/1 sequence z2, ..., zn such that ∑ pizi has a greater value.

• Thus the principle of optimality holds.

Page 6: Dynamic Programming

0/1 knapsack problem: principle of optimality (contd..)

• Let gj(y) be the value of an optimal solution to KNAP(j+1, n, y).

• Then g0(M) is the value of an optimal solution to KNAP(1, n, M).

• From the principle of optimality,

g0(M) = max{g1(M), g1(M - w1) + p1}

• g0(M) is the maximum profit, which is the value of the optimal solution.

Page 7: Dynamic Programming

0/1 knapsack problem: principle of optimality (contd..)

• The principle of optimality can be applied to intermediate states and decisions just as it is applied to initial states and decisions.

• Let y1, y2, ..., yn be an optimal solution to KNAP(1, n, M).

• Then, for each j, 1≤j≤n, y1, ..., yj and yj+1, ..., yn must be optimal solutions to the problems KNAP(1, j, ∑_{1≤i≤j} wiyi) and KNAP(j+1, n, M - ∑_{1≤i≤j} wiyi) respectively.

Page 8: Dynamic Programming

Recurrence Relation

• Then gi(y) = max{gi+1(y),   (xi+1 = 0 case)
                   gi+1(y - wi+1) + pi+1}  ...(1)   (xi+1 = 1 case)

• Equation (1) is known as a recurrence relation.

• Dynamic programming algorithms solve the recurrence relation to obtain a solution to the given problem.

• To solve the 0/1 knapsack problem, we use the knowledge gn(y) = 0 for all y ≥ 0, because gn(y) is the value of an optimal solution (profit) to the problem KNAP(n+1, n, y), which is obviously zero for any y.

Page 9: Dynamic Programming

Solving Recurrence Relation

• Substituting i = n-1 and using gn(y) = 0 in relation (1) above, we obtain gn-1(y).

• Then, using gn-1(y), we obtain gn-2(y), and so on, until we get g0(M) (with i = 0), which is the solution of the knapsack problem.
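A minimal top-down sketch of this process in Python: g(i, y) looks ahead to g(i+1, ·), with the base case gn(y) = 0 for y ≥ 0 and -∞ for y < 0. The instance data is the example used later in these notes; the function name is illustrative:

from functools import lru_cache

p = (1, 2, 5)   # profits p1..pn (example instance from later in these notes)
w = (2, 3, 4)   # weights w1..wn
n, M = 3, 6

@lru_cache(maxsize=None)
def g(i, y):
    """Value of an optimal solution to KNAP(i+1, n, y)."""
    if y < 0:
        return float('-inf')     # infeasible capacity
    if i == n:
        return 0                 # KNAP(n+1, n, y): no objects left
    # recurrence (1): x_{i+1} = 0 versus x_{i+1} = 1
    return max(g(i + 1, y), g(i + 1, y - w[i]) + p[i])

print(g(0, M))   # g0(M) -> 6, the maximum profit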

Page 10: Dynamic Programming

Forward and backward approaches

• In the forward approach, decision xi is made in terms of optimal decision sequences for xi+1, ..., xn (i.e. we look ahead).

• In the backward approach, decision xi is made in terms of optimal decision sequences for x1, ..., xi-1 (i.e. we look backward).

Page 11: Dynamic Programming

Forward and backward approaches: 0/1 knapsack problem

• For the 0/1 knapsack problem,

gi(y) = max{gi+1(y), gi+1(y - wi+1) + pi+1} .........(1)

• (1) is the forward approach, as gn-1(y) is obtained using gn(y).

• fj(y) = max{fj-1(y), fj-1(y - wj) + pj} ..........(2)

• (2) is the backward approach, as fj(y) is obtained using fj-1(y). Here fj(y) is the value of an optimal solution to KNAP(1, j, y). (2) may be solved by beginning with

f0(y) = 0 for all y ≥ 0 (no-element case) and fj(y) = -∞ for y < 0 (no-capacity case)
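A bottom-up sketch of the backward approach in Python, tabulating fj(y) for the integer capacities y = 0..M (again a sketch on the example instance used later in these notes, with an illustrative function name):

def knap_backward(p, w, M):
    """Tabulate f_j(y) = max{f_{j-1}(y), f_{j-1}(y - w_j) + p_j} for y = 0..M."""
    f = [0] * (M + 1)                  # f_0(y) = 0 for all y >= 0
    for pj, wj in zip(p, w):           # bring in object j = 1..n
        nxt = f[:]                     # start from f_{j-1}
        for y in range(wj, M + 1):     # for y < wj the -infinity branch loses
            nxt[y] = max(f[y], f[y - wj] + pj)
        f = nxt
    return f[M]                        # f_n(M)

print(knap_backward((1, 2, 5), (2, 3, 4), 6))   # -> 6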

Page 12: Dynamic Programming

FEATURES OF DYNAMIC PROGRAMMING SOLUTIONS

• It is easier to obtain the recurrence relation using the backward approach.

• Dynamic programming algorithms often have polynomial complexity.

• Optimal solutions to subproblems are retained so as to avoid recomputing their values.

Page 13: Dynamic Programming

OPTIMAL BINARY SEARCH TREES

• Definition: binary search tree (BST). A binary search tree T is a binary tree; either it is empty, or each node contains an identifier and

(i) all identifiers in the left subtree of T are less than the identifier in the root node of T;

(ii) all identifiers in the right subtree of T are greater than the identifier in the root node of T;

(iii) the left and right subtrees of T are also BSTs.

Page 14: Dynamic Programming

ALGORITHM TO SEARCH FOR AN IDENTIFIER IN THE TREE 'T'

Procedure SEARCH(T, X, i)

// Search T for X; each node has fields LCHILD, IDENT, RCHILD //
// Return address i pointing to the identifier X //
// Initially T points to the tree; on exit IDENT(i) = X or i = 0 //

i ← T

Page 15: Dynamic Programming

Algorithm to search for an identifier in the tree 'T' (Contd..)

while i ≠ 0 do

case
  : X < IDENT(i) : i ← LCHILD(i)
  : X = IDENT(i) : return i
  : X > IDENT(i) : i ← RCHILD(i)
endcase

repeat

end SEARCH
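The same search as a minimal Python sketch; the Node class and field names simply mirror the LCHILD/IDENT/RCHILD fields assumed above:

class Node:
    def __init__(self, ident, lchild=None, rchild=None):
        self.ident, self.lchild, self.rchild = ident, lchild, rchild

def search(t, x):
    """Return the node whose identifier is x, or None (the i = 0 case)."""
    i = t
    while i is not None:
        if x < i.ident:
            i = i.lchild       # X < IDENT(i)
        elif x > i.ident:
            i = i.rchild       # X > IDENT(i)
        else:
            return i           # X = IDENT(i)
    return None

# Usage: a tiny BST over {'do', 'if', 'stop'} with 'if' at the root.
root = Node('if', Node('do'), Node('stop'))
print(search(root, 'stop').ident)   # -> stop
print(search(root, 'while'))        # -> None (unsuccessful search)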

Page 16: Dynamic Programming

Optimal Binary Search Trees - Example

(Figure: a binary search tree over the identifiers if, for, while, repeat and loop: if at level 1, for and while at level 2, repeat at level 3 and loop at level 4.)

If each identifier is searched for with equal probability, the average number of comparisons for the above tree is (1+2+2+3+4)/5 = 12/5.

Page 17: Dynamic Programming

OPTIMAL BINARY SEARCH TREES (Contd..)

• Let us assume that the given set of identifiers is {a1, a2, ..., an} with a1 < a2 < ... < an.

• Let Pi be the probability with which we search for ai.

• Let Qi be the probability that an identifier x being searched for satisfies ai < x < ai+1, 0≤i≤n, where a0 = -∞ and an+1 = +∞.

Page 18: Dynamic Programming

OPTIMAL BINARY SEARCH TREES (Contd..)

• Then ∑_{0≤i≤n} Q(i) is the probability of an unsuccessful search, and ∑_{1≤i≤n} P(i) + ∑_{0≤i≤n} Q(i) = 1. Given this data, let us construct an optimal binary search tree for (a1, ..., an).

• In place of each empty subtree, we add an external node, denoted by a square.

• Internal nodes are denoted by circles.

Page 19: Dynamic Programming

OPTIMAL BINARY SEARCH TREES (Contd..)

(Figure: the binary search tree over if, for, while, repeat and loop, redrawn with external nodes (squares) in place of the empty subtrees.)

Page 20: Dynamic Programming

Construction of optimal binary search trees

• A BST with n identifiers will have n internal nodes and n+1 external nodes.

• Successful searches terminate at internal nodes; unsuccessful searches terminate at external nodes.

• If a successful search terminates at an internal node at level L, then L iterations of the while loop in the algorithm are needed.

• Hence the expected cost contribution from the internal node for ai is P(i) * level(ai).

Page 21: Dynamic Programming

OPTIMAL BINARY SEARCH TREES (Contd..)

• Unsuccessful searches terminate at external nodes, i.e. with i = 0.

• The identifiers not in the binary search tree may be partitioned into n+1 equivalence classes Ei, 0≤i≤n:

E0 contains all X such that X < a1

Ei contains all X such that ai < X < ai+1, 1≤i<n

En contains all X such that X > an

Page 22: Dynamic Programming

OPTIMAL BINARY SEARCH TREES (Contd..)

• For identifiers in the same class Ei, the search terminates at the same external node.

• If the failure node for Ei is at level L, then only L-1 iterations of the while loop are made.

• The cost contribution of the failure node for Ei is Q(i) * (level(Ei) - 1).

Page 23: Dynamic Programming

OPTIMAL BINARY SEARCH TREES (Contd..)

• Thus the expected cost of a binary search tree is:

∑_{1≤i≤n} P(i) * level(ai) + ∑_{0≤i≤n} Q(i) * (level(Ei) - 1) ......(2)

• An optimal binary search tree for {a1, ..., an} is a BST for which (2) is minimum.

• Example: let {a1, a2, a3} = {do, if, stop}.
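A small helper makes (2) concrete. A sketch (the function name is illustrative, and the level assignments below are read off tree (b) of the following example, the balanced tree with if at the root):

from fractions import Fraction

def expected_cost(P, Q, level_a, level_E):
    """Equation (2): sum P(i)*level(a_i) + sum Q(i)*(level(E_i) - 1)."""
    return (sum(p * l for p, l in zip(P, level_a)) +
            sum(q * (l - 1) for q, l in zip(Q, level_E)))

# Tree (b): do, if, stop at levels 2, 1, 2; all four failure nodes at level 3.
P = [Fraction(1, 7)] * 3
Q = [Fraction(1, 7)] * 4
print(expected_cost(P, Q, [2, 1, 2], [3, 3, 3, 3]))   # -> 13/7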

Page 24: Dynamic Programming

OPTIMAL BINARY SEARCH TREES (Contd..)

(Figure: trees (a), (b) and (c) of the five possible binary search trees for {a1, a2, a3} = {do, if, stop}, drawn over levels 1-3 with the failure nodes Q(0)-Q(3) shown as squares.)

Page 25: Dynamic Programming

OPTIMAL BINARY SEARCH TREES (Contd..)

(Figure: trees (d) and (e), the remaining two binary search trees for {do, if, stop}.)

Page 26: Dynamic Programming

OPTIMAL BINARY SEARCH TREES (Contd..)

• With equal probabilities, P(i) = Q(i) = 1/7.

• Let us find an OBST among these.

• Cost(tree a) = ∑_{1≤i≤n} P(i)*level(ai) + ∑_{0≤i≤n} Q(i)*(level(Ei) - 1)
  = 1/7 [1+2+3 + (2-1)+(3-1)+(4-1)+(4-1)]
  = 1/7 [1+2+3 + 1+2+3+3] = 15/7

• Cost(tree b) = 1/7 [1+2+2 + 2+2+2+2] = 13/7

• Cost(tree c) = Cost(tree d) = Cost(tree e) = 15/7

Hence tree b is optimal.

Page 27: Dynamic Programming

OPTIMAL BINARY SEARCH TREES (Contd..)

• If P(1) = 0.5, P(2) = 0.1, P(3) = 0.05, Q(0) = 0.15, Q(1) = 0.1, Q(2) = 0.05 and Q(3) = 0.05, find the OBST.

• Cost(tree a) = 0.5×3 + 0.1×2 + 0.05×1 + 0.15×3 + 0.1×3 + 0.05×2 + 0.05×1 = 2.65

• Cost(tree b) = 1.9, Cost(tree c) = 1.5, Cost(tree d) = 2.05, Cost(tree e) = 1.6

Hence tree c is optimal.

Page 28: Dynamic Programming

OPTIMAL BINARY SEARCH TREES (Contd..)

• To obtain an OBST using dynamic programming, we need to make a sequence of decisions regarding the construction of the tree.

• The first decision is which of the ai is to be the root.

• Let us choose ak as the root. Then the internal nodes for a1, ..., ak-1 and the external nodes for the classes E0, E1, ..., Ek-1 will lie in the left subtree L of the root.

• The remaining nodes will be in the right subtree R.

Page 29: Dynamic Programming

OPTIMAL BINARY SEARCH TREES (Contd..)

Define

Cost(L) = ∑_{1≤i<k} P(i)*level(ai) + ∑_{0≤i<k} Q(i)*(level(Ei) - 1)

Cost(R) = ∑_{k<i≤n} P(i)*level(ai) + ∑_{k≤i≤n} Q(i)*(level(Ei) - 1)

• Let Tij be the tree with nodes ai+1, ..., aj and the nodes corresponding to Ei, Ei+1, ..., Ej.

• Let W(i,j) represent the weight of tree Tij.

Page 30: Dynamic Programming

OPTIMAL BINARY SEARCH TREES (Contd..)

W(i,j) = P(i+1) + ... + P(j) + Q(i) + Q(i+1) + ... + Q(j) = Q(i) + ∑_{l=i+1}^{j} [Q(l) + P(l)]

• The expected cost of the search tree in figure (a) below (let us call it T) is

P(k) + Cost(L) + Cost(R) + W(0,k-1) + W(k,n)

• W(0,k-1) is the sum of the probabilities corresponding to the nodes, and to the equivalence classes, to the left of ak.

• W(k,n) is the sum of the probabilities corresponding to those on the right of ak.

(Figure (a): an OBST with root ak, left subtree L and right subtree R.)

Page 31: Dynamic Programming

OPTIMAL BINARY SEARCH TREES (Contd..)

• The expected cost of the tree is

P(k) + Cost(L) + Cost(R) + W(0,k-1) + W(k,n) ..........(3)

• If T is optimal, then (3) must be minimum; hence Cost(L) must be minimum over all binary search trees containing a1, ..., ak-1 and E0, E1, ..., Ek-1.

• Similarly, Cost(R) must be minimum.

• Let C(i,j) represent the cost of an optimal tree Tij (containing the nodes ai+1, ..., aj and Ei, ..., Ej).

• Then Cost(L) = C(0,k-1) and Cost(R) = C(k,n).

Substituting for Cost(L) and Cost(R) in (3):

Page 32: Dynamic Programming

OPTIMAL BINARY SEARCH TREES (Contd..)

• The expected cost of the search tree,

P(k) + C(0,k-1) + C(k,n) + W(0,k-1) + W(k,n) .....(4)

is minimized by choosing k such that (4) is minimum. The minimum cost of the BST with nodes a1, a2, ..., an and E0, ..., En is therefore

C(0,n) = min_{1≤k≤n} {C(0,k-1) + C(k,n) + P(k) + W(0,k-1) + W(k,n)} .....(5)

• Substituting i in place of 0 and j in place of n in (5):

C(i,j) = min_{i<k≤j} {C(i,k-1) + C(k,j) + P(k) + W(i,k-1) + W(k,j)}
       = min_{i<k≤j} {C(i,k-1) + C(k,j)} + W(i,j) ..........(6)

since P(k) + W(i,k-1) + W(k,j)
= P(k) + [Q(i) + ∑_{l=i+1}^{k-1} (Q(l)+P(l))] + [Q(k) + ∑_{l=k+1}^{j} (Q(l)+P(l))]
= Q(i) + ∑_{l=i+1}^{j} (Q(l)+P(l)) = W(i,j)

Page 33: Dynamic Programming

OPTIMAL BINARY SEARCH TREES (Contd..)

• Equation (6) may be solved for C(0,n) by first computing all C(i,j) with j-i = 1 (using C(i,i) = 0 and W(i,i) = Q(i), 0≤i≤n).

• We can then compute all C(i,j) with j-i = 2, then all C(i,j) with j-i = 3, etc.

• If during these computations the root R(i,j) of each tree Tij is recorded, then the OBST may be constructed.
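A sketch of this procedure in Python (illustrative names; a plain search over k rather than the restricted range discussed later). It computes W, C and R in order of increasing j-i and rebuilds the tree from R; run on the do/if/read/while example of the next pages it reproduces C(0,4) = 32 and root a2:

def obst(P, Q):
    """Compute cost C(i,j), weight W(i,j) and root R(i,j) of the trees T_ij.
    P[1..n] are the success probabilities, Q[0..n] the failure probabilities."""
    n = len(P) - 1                    # P[0] is unused so indices match the text
    W = [[0] * (n + 1) for _ in range(n + 1)]
    C = [[0] * (n + 1) for _ in range(n + 1)]
    R = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        W[i][i] = Q[i]                # C(i,i) = 0 and R(i,i) = 0 already hold
    for m in range(1, n + 1):         # trees with m nodes
        for i in range(n - m + 1):
            j = i + m
            W[i][j] = W[i][j - 1] + P[j] + Q[j]          # equation (7)
            k = min(range(i + 1, j + 1),                 # equation (6)
                    key=lambda r: C[i][r - 1] + C[r][j])
            C[i][j] = W[i][j] + C[i][k - 1] + C[k][j]
            R[i][j] = k

    def build(i, j):                  # rebuild T_ij from the recorded roots
        if i == j:
            return None               # empty tree
        k = R[i][j]
        return (k, build(i, k - 1), build(k, j))

    return C[0][n], build(0, n)

# (a1..a4) = (do, if, read, while); P and Q scaled by 16 as in the example.
cost, tree = obst([0, 3, 3, 1, 1], [2, 3, 1, 1, 1])
print(cost)   # -> 32
print(tree)   # -> (2, (1, None, None), (3, None, (4, None, None)))

The returned triple says a2 (if) is the root, a1 (do) is its left subtree, and a3 (read), with right child a4 (while), is its right subtree, matching the tree constructed on the following pages.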

Page 34: Dynamic Programming

Optimal Binary Search Trees – Example

Example: Let n = 4 and (a1, a2, a3, a4) = (do, if, read, while).

• Let P(1:4) = (3,3,1,1) and Q(0:4) = (2,3,1,1,1).

• The P's and Q's have been multiplied by 16 for convenience.

• Initially, W(i,i) = Q(i), C(i,i) = 0 and R(i,i) = 0, 0≤i≤4.

W(i,j) = Q(i) + ∑_{l=i+1}^{j} [Q(l) + P(l)]
       = P(j) + Q(j) + Q(i) + ∑_{l=i+1}^{j-1} [Q(l) + P(l)]
       = P(j) + Q(j) + W(i,j-1) ..........(7)

Page 35: Dynamic Programming

Optimal Binary Search Trees – Example (contd..)

C(i,j) = min_{i<k≤j} {C(i,k-1) + C(k,j)} + W(i,j) ..........(8)

Using (7): W(0,1) = P(1) + Q(1) + W(0,0) = 3 + 3 + Q(0) = 3 + 3 + 2 = 8

Using (8): C(0,1) = W(0,1) + min{C(0,0) + C(1,1)} = 8 + min{0+0} = 8

R(0,1) = 1, because R(i,j) is the value of k that minimizes (8).

Page 36: Dynamic Programming

Optimal Binary Search Trees – Example (contd..)

W(1,2) = P(2) + Q(2) + W(1,1) = P(2) + Q(2) + Q(1) = 3 + 1 + 3 = 7

C(1,2) = W(1,2) + min{C(1,1) + C(2,2)} = 7 + min{0+0} = 7

R(1,2) = 2

W(2,3) = P(3) + Q(3) + W(2,2) = 1 + 1 + Q(2) = 3

C(2,3) = W(2,3) + min{C(2,2) + C(3,3)} = 3   (minimizing k = 3)

R(2,3) = 3

W(3,4) = P(4) + Q(4) + W(3,3) = 1 + 1 + Q(3) = 3

C(3,4) = W(3,4) + min{C(3,3) + C(4,4)} = 3   (minimizing k = 4)

R(3,4) = 4

Page 37: Dynamic Programming

Optimal Binary Search Trees – Example (contd..)

• Knowing W(i,i+1) and C(i,i+1), 0≤i<4, we compute W(i,i+2), C(i,i+2) and R(i,i+2), 0≤i<3.

• This process is repeated until W(0,4), C(0,4) and R(0,4) are obtained.

• Using R(i,j), let us see how to construct the OBST.

• C(0,4) = 32 is the minimum cost of a BST for {a1, a2, a3, a4} = {do, if, read, while}.

Page 38: Dynamic Programming

Optimal Binary Search Trees – Example (contd..)

• The root of tree T04 is a2, since R(0,4) = 2.

• The left subtree of T04 is T01 and the right subtree is T24.

• R(0,1) = 1, so a1 is the root of T01.

• R(2,4) = 3, so a3 is the root of T24.

• T01 has T00 and T11 as subtrees; T24 has T22 and T34.

• For T00, T11 and T22 the root is 0 (they are empty); for T34 the root is a4, since R(3,4) = 4.

Page 39: Dynamic Programming

Optimal Binary Search Trees – Example (contd..)

(Figure: the OBST. The root is a2 = if, with left child do and right child read; while is the right child of read.)

Page 40: Dynamic Programming

Optimal Binary Search Trees – Example (contd..)

We can verify the cost of the OBST:

= ∑_{1≤i≤n} P(i)*level(ai) + ∑_{0≤i≤n} Q(i)*(level(Ei) - 1)

= 3×2 + 3×1 + 1×2 + 1×3 + 2×2 + 3×2 + 1×2 + 1×3 + 1×3

= 6 + 3 + 2 + 3 + 4 + 6 + 2 + 3 + 3 = 32

Page 41: Dynamic Programming

COMPLEXITY OF THE PROCEDURE FOR CONSTRUCTING OBST

C(i,j) = min_{i<k≤j} {C(i,k-1) + C(k,j)} + W(i,j)

• The C(i,j)'s are computed for j-i = 1, 2, ..., n. If j-i = m, then there are n-m+1 C(i,j)'s to compute.

• (For n = 4 and j-i = 1, they are C(0,1), C(1,2), C(2,3) and C(3,4).)

• Each C(i,j) requires the computation of m quantities, so each C(i,j) can be computed in O(m) time.

• The total time for all C(i,j)'s with j-i = m is therefore O((n-m+1)m) = O(nm - m^2).

Page 42: Dynamic Programming

COMPLEXITY OF THE PROCEDURE FOR CONSTRUCTING OBST (Contd..)

• The total time to evaluate all the C(i,j)'s and R(i,j)'s is

∑_{1≤m≤n} (nm - m^2) = n ∑_{1≤m≤n} m - ∑_{1≤m≤n} m^2 = n·n(n+1)/2 - n(n+1)(2n+1)/6 = O(n^3)

Page 43: Dynamic Programming

COMPLEXITY OF THE PROCEDURE FOR CONSTRUCTING OBST (Contd..)

• The complexity can be reduced to O(n^2) using D. E. Knuth's result, which limits the search for the minimizing k in min_{i<k≤j} {C(i,k-1) + C(k,j)} + W(i,j) to the range R(i,j-1) ≤ k ≤ R(i+1,j).

• Using the values of R(i,j), the tree T0n can be constructed in O(n) time.

Page 44: Dynamic Programming

ALGORITHM FOR OBST

Procedure OBST(P, Q, n)

// Given n distinct identifiers a1 < a2 < ... < an and probabilities P(i), 1≤i≤n, and Q(i), 0≤i≤n, //
// the cost C(i,j) of the optimal binary search trees Tij is computed //
// R(i,j) is the root of tree Tij and W(i,j) is the weight of Tij //

real P(1:n), Q(0:n), C(0:n, 0:n), W(0:n, 0:n); integer R(0:n, 0:n)

Page 45: Dynamic Programming

ALGORITHM FOR OBST (Contd..)

for i ← 0 to n-1 do
    (W(i,i), R(i,i), C(i,i)) ← (Q(i), 0, 0)
    (W(i,i+1), R(i,i+1), C(i,i+1)) ← (Q(i)+Q(i+1)+P(i+1), i+1, Q(i)+Q(i+1)+P(i+1))   // optimal trees with one node //
repeat
(W(n,n), R(n,n), C(n,n)) ← (Q(n), 0, 0)
for m ← 2 to n do          // find optimal trees with m nodes //
    for i ← 0 to n-m do
        j ← i+m

Page 46: Dynamic Programming

ALGORITHM FOR OBST (Contd..)

        W(i,j) ← W(i,j-1) + P(j) + Q(j)
        k ← a value l in the range R(i,j-1) ≤ l ≤ R(i+1,j) that minimizes {C(i,l-1) + C(l,j)}
        C(i,j) ← W(i,j) + C(i,k-1) + C(k,j)
        R(i,j) ← k
    repeat
repeat
end OBST

Page 47: Dynamic Programming

0/1 KNAPSACK PROBLEM

• The solution of the 0/1 knapsack problem is a sequence of decisions on x1, ..., xn together with the total profit; each of x1, ..., xn may be 0 or 1.

• Let fj(X) be the value of an optimal solution to KNAP(1, j, X).

• Since the principle of optimality holds,

fn(M) = max{fn-1(M),   (corresponding to xn = 0)
            fn-1(M - wn) + pn}   (corresponding to xn = 1)

Page 48: Dynamic Programming

0/1 KNAPSACK PROBLEM (Contd..)

• In general,

fi(X) = max{fi-1(X), fi-1(X - wi) + pi} ....(1)

• (1) may be solved by beginning with f0(X) = 0 for all X ≥ 0 (no-element case) and fi(X) = -∞ for X < 0 (no-capacity case).

• f1, ..., fn may then be solved for successively.

EXAMPLE:

• Let n = 3, (w1, w2, w3) = (2,3,4),

• (p1, p2, p3) = (1,2,5), M = 6.

Page 49: Dynamic Programming

0/1 KNAPSACK PROBLEM (Contd..)

SOLUTION:

f0(X) = 0 for all X ≥ 0; fi(X) = -∞ if X < 0

f0(6) = 0

f1(6) = max{f0(6), f0(6-2) + 1} = max{0, 1} = 1

f2(6) = max{f1(6), f1(6-3) + 2} = max{1, f1(3) + 2}

f1(3) = max{f0(3), f0(3-2) + 1} = max{0, 1} = 1

f2(6) = max{1, 3} = 3

f3(6) = max{f2(6), f2(6-4) + 5} = max{3, f2(2) + 5}

f2(2) = max{f1(2), f1(2-3) + 2} = max{1, f1(-1) + 2} = 1   (as f1(-1) = -∞)

f3(6) = max{3, f2(2) + 5} = max{3, 6} = 6

Page 50: Dynamic Programming

0/1 KNAPSACK PROBLEM (Contd..)

• Similarly, we can compute f1(1) = 0, f2(1) = 0, f1(2) = 1 and f2(3) = 2.

• Each fi is completely specified by the pairs (Pj, Wj), where Wj is a value of X at which fi takes a jump, Pj = fi(Wj), and the pair (P0, W0) = (0,0) is introduced.

• As we always choose the maximum profit, if Wj < Wj+1, 0≤j<r, then Pj < Pj+1.

Page 51: Dynamic Programming

0/1 KNAPSACK PROBLEM (Contd..)

• Also, fi(X) = fi(Wj) for all X such that Wj ≤ X < Wj+1, 0≤j<r (there is no increase in profit until X reaches the next jump Wj+1), and fi(X) = fi(Wr) for all X ≥ Wr (as (Pr, Wr) is the last pair).

• (fj(X) is the value of an optimal solution to KNAP(1, j, X).)

• (For such X's as 2.5 or 3.5 we do not have a stored profit, so we take the profit of the previous pair.)

Page 52: Dynamic Programming

0/1 KNAPSACK PROBLEM (Contd..)

• Let us define the sets S^{i-1}, S^i_1 and S^i as follows:

• S^{i-1} is the set of all pairs (P, W) for fi-1, including (0, 0).

• S^i_1 = {(P, W) | (P - pi, W - wi) ∈ S^{i-1}}

• (This is the set of pairs for the case fi(X) = fi-1(X - wi) + pi.)

• S^i is obtained by merging together S^{i-1} and S^i_1. (This corresponds to taking the maximum of fi-1(X) and fi-1(X - wi) + pi.)

Page 53: Dynamic Programming

0/1 KNAPSACK PROBLEM (Contd..)

• If one of S^{i-1} and S^i_1 has a pair (Pj, Wj) and the other has a pair (Pk, Wk) with Pj ≤ Pk and Wj ≥ Wk, then the pair (Pj, Wj) is discarded.

• This is known as the dominance rule or purging rule: (Pk, Wk) dominates (Pj, Wj).

For example:

S^0 contains the tuple for x1 = 0: S^0 = {(0,0)}

S^1_1 contains the tuple for x1 = 1, i.e. (p1, w1) = (1,2): S^1_1 = {(1,2)}

S^1 contains the tuples for both x1 = 0 and x1 = 1: S^1 = {(0,0), (1,2)}

Page 54: Dynamic Programming

0/1 knapsack problem example

n = 3, (w1, w2, w3) = (2,3,4), (p1, p2, p3) = (1,2,5) and M = 6.

• S^0 = {(0,0)}. S^1_1 is obtained by adding (p1, w1) = (1,2) to (0,0); thus S^1_1 = {(1,2)}.

• S^1 is obtained by combining S^0 and S^1_1; hence S^1 = {(0,0), (1,2)}.

• S^2_1 = {(2,3), (3,5)}, obtained by adding (p2, w2) = (2,3) to the elements of S^1.

• S^2 = {(0,0), (1,2), (2,3), (3,5)}; S^3_1 = {(5,4), (6,6), (7,7), (8,9)}

• S^3 = {(0,0), (1,2), (2,3), (5,4), (6,6), (7,7), (8,9)}

• The pair (3,5) is eliminated from S^3, as 3 < 5 and 5 > 4: by the purging rule, (5,4) dominates (3,5).

Page 55: Dynamic Programming

0/1 knapsack problem example (contd..)

• The computation of the xi's is as follows:

• Consider the last pair in S^n; let it be (P, W).

• We need to determine x1, x2, ..., xn such that ∑_{1≤i≤n} pixi = P and ∑_{1≤i≤n} wixi = W.

• Check whether (P, W) is also in S^{n-1}. If (P, W) ∈ S^{n-1}, then set xn = 0; otherwise xn = 1, and (P, W) has come from (P - pn, W - wn).

• Now check whether (P - pn, W - wn) belongs to S^{n-2} or not; if it does, xn-1 = 0, otherwise xn-1 = 1.

• The process is repeated.

Page 56: Dynamic Programming

0/1 knapsack problem example (contd..)

• Let us apply this procedure to the example with M = 6.

• The last tuple in S^3 is (6,6). (6,6) ∉ S^2, so x3 = 1.

• (6,6) came from (6 - p3, 6 - w3) = (1,2). (1,2) ∈ S^2 and (1,2) is also in S^1, so set x2 = 0.

• (1,2) ∉ S^0, so x1 = 1.

• Hence an optimal solution is (x1, x2, x3) = (1, 0, 1).
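A sketch of the whole set-based procedure in Python (illustrative names): it builds S^0, ..., S^n with merge and purge, then traces the xi's back exactly as above:

def dknap_sets(p, w, M):
    """0/1 knapsack via the sets S^i of (profit, weight) pairs, with purging."""
    n = len(p)
    S = [[(0, 0)]]                                     # S^0
    for i in range(n):
        S1 = [(pp + p[i], ww + w[i])                   # S^{i+1}_1
              for (pp, ww) in S[i] if ww + w[i] <= M]
        merged = sorted(S[i] + S1, key=lambda t: (t[1], -t[0]))
        purged = []
        for pp, ww in merged:                          # purging (dominance) rule
            if not purged or pp > purged[-1][0]:
                purged.append((pp, ww))
        S.append(purged)                               # S^{i+1}
    P, W = S[n][-1]                                    # last (best) pair in S^n
    x = [0] * n
    for i in range(n, 0, -1):                          # trace back the x_i's
        if (P, W) not in S[i - 1]:                     # pair came from x_i = 1
            x[i - 1] = 1
            P, W = P - p[i - 1], W - w[i - 1]
    return S[n][-1][0], x

print(dknap_sets([1, 2, 5], [2, 3, 4], 6))   # -> (6, [1, 0, 1])

Unlike the hand computation above, pairs heavier than M, such as (7,7) and (8,9), are never generated here; the detailed algorithm on the following pages applies the same restriction through the variable u.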

Page 57: Dynamic Programming

Informal Knapsack Algorithm

Procedure DKP(p, w, n, M)

S^0 ← {(0,0)}
for i ← 1 to n do
    S^i_1 ← {(P1, W1) | (P1 - pi, W1 - wi) ∈ S^{i-1} and W1 ≤ M}
    // S^i_1 is constructed by adding (pi, wi) to tuples in S^{i-1} //
    S^i ← MERGE-PURGE(S^{i-1}, S^i_1)
repeat
(PX, WX) ← last tuple in S^{n-1}

Page 58: Dynamic Programming

Informal Knapsack Algorithm (Contd..)

(PY, WY) ← (P1 + pn, W1 + wn), where W1 is the largest integer W in any tuple in S^{n-1} such that W + wn ≤ M

// compute xn //
// As PX is in S^{n-1} and PY is in S^n, PX > PY means we need not choose PY, so xn ← 0 //

8   if PX > PY then xn ← 0 else xn ← 1
9   endif

// compute xn-1, ..., x1 similarly //

end DKP

Page 59: Dynamic Programming

Detailed knapsack algorithm

• In the implementation, P and W are treated as one-dimensional arrays.

• Pointers F(i), i = 0, 1, ..., n, are used, with F(i) pointing to the first element in S^i, 0≤i<n, and F(n) pointing to one more than the last element of S^{n-1}.

Page 60: Dynamic Programming

Detailed knapsack algorithm

Procedure DKNAP(p, w, n, M, m)

real p(n), w(n), P(m), W(m), pp, ww, M    // p(n), w(n) are the given arrays //
integer F(0:n), l, h, u, i, j, k, next    // P(m), W(m) hold the computed pairs //

    F(0) ← 1; P(1) ← W(1) ← 0    // S^0 //
    l ← h ← 1                    // start and end of S^0 //
    F(1) ← next ← 2              // next free spot in P and W //
4   for i ← 1 to n do

Page 61: Dynamic Programming

Detailed knapsack algorithm (Contd..)

5   k ← l    // k is the temporary index for tuples in S^{i-1}; it points to the next tuple in S^{i-1} that has to be merged into S^i //
6   u ← largest k such that l ≤ k ≤ h and W(k) + wi ≤ M

// this is to remove tuples like (7,7) and (8,9) in S^3 in the example //
// pairs for which W(j) + wi > M are not generated //

7   for j ← l to u do    // generate S^i_1 and merge //

Page 62: Dynamic Programming

Detailed knapsack algorithm (Contd..)

8   (pp, ww) ← (P(j) + pi, W(j) + wi)    // next element in S^i_1 //
9   while k ≤ h and W(k) ≤ ww do
10      P(next) ← P(k); W(next) ← W(k)
11      next ← next + 1; k ← k + 1
12  repeat
13  if k ≤ h and W(k) = ww then pp ← max(pp, P(k));
14      k ← k + 1
15  endif

Page 63: Dynamic Programming

Detailed knapsack algorithm (Contd..)

16  if pp > P(next-1) then (P(next), W(next)) ← (pp, ww);
17      next ← next + 1
18  endif
19  while k ≤ h and P(k) ≤ P(next-1) do    // use purging rule //
20      k ← k + 1
21  repeat
22  repeat

Page 64: Dynamic Programming

Detailed knapsack algorithm (Contd..)

    // merge remaining terms from S^{i-1} //
23  while k ≤ h do
24      (P(next), W(next)) ← (P(k), W(k))
25      next ← next + 1; k ← k + 1
26  repeat
    // initialize for S^i //
27  l ← h + 1; h ← next - 1; F(i+1) ← next
28  repeat
29  call PARTS    // PARTS implements lines 8-9 of DKP, i.e. the computation of the xi's //
30  end DKNAP

Page 65: Dynamic Programming

Detailed knapsack algorithm (Contd..)

Example:

• n = 3, M = 6, (w1, w2, w3) = (2,3,4), (p1, p2, p3) = (1,2,5)

• S^0 = {(0,0)}; S^1_1 = {(1,2)}; S^1 = {(0,0), (1,2)}; S^2_1 = {(2,3), (3,5)}

• S^2 = {(0,0), (1,2), (2,3), (3,5)}; S^3_1 = {(5,4), (6,6), (7,7), (8,9)}

• S^3 = {(0,0), (1,2), (2,3), (5,4), (6,6), (7,7), (8,9)}

Page 66: Dynamic Programming

Detailed knapsack algorithm (Contd..)

• Let us see how S^3 is generated using the procedure DKNAP. The arrays P and W (after S^3 has been generated) are:

index: 1  2  3  4  5  6  7  8  9  10 11 12
P:     0  0  1  0  1  2  3  0  1  2  5  6
W:     0  0  2  0  2  3  5  0  2  3  4  6

with F(0) = 1, F(1) = 2, F(2) = 4 and F(3) = 8.

l ← 4, h ← 7; F(2+1) = F(3) ← next = 8 (line 27); k ← l = 4

Page 67: Dynamic Programming

Detailed knapsack algorithm (Contd..)

• u ← the largest k such that 4 ≤ k ≤ 7 and W(k) + w3 ≤ 6.

• Such k is 5, because w3 = 4 and 4 + 2 = 6, while 4 + 3 = 7 > 6 and 4 + 5 = 9 > 6.

• Now S^3_1 is generated and merged with S^2 in lines 7-22.

• The starting address of each set S^i is stored in the array F: F(0) = 1 means F(0) points to the first pair, i.e. the address of the first pair is 1.

Page 68: Dynamic Programming

Detailed knapsack algorithm (Contd..)

for j ← 4 to 5 do (line 7):

With j = 4: (pp, ww) ← (P(4) + p3, W(4) + w3) = (0+5, 0+4) = (5,4), since (p3, w3) = (5,4).

S^2 occupies positions 4-7: (P,W) = (0,0), (1,2), (2,3), (3,5).

In lines 9-12, with k = 4, 5, 6, the tuples (P(8), W(8)) = (0,0), (P(9), W(9)) = (1,2) and (P(10), W(10)) = (2,3) are generated; the while loop is exited with k = 7, as W(7) = 5 > ww = 4, and next = 11.

Page 69: Dynamic Programming

Detailed knapsack algorithm (Contd..)

• Line 13 is not true for any k, as W(k) ≠ ww = 4.

• In lines 16-18, (pp, ww) = (5,4) is included in S^3, as pp = 5 > P(next-1) = P(10) = 2, i.e. greater than the profit gained so far: (P(11), W(11)) = (5,4) and next ← 12.

• In lines 19-21, P(7) = 3 ≤ P(11) = 5, so (3,5) is not included (the case W(k) ≥ ww but P(k) ≤ pp: the purging rule).

• Hence k is incremented to 8.

Page 70: Dynamic Programming

Detailed knapsack algorithm (Contd..)

• Similarly, with j ← 5, (pp, ww) ← (P(5) + p3, W(5) + w3) = (6,6).

• In lines 16-18, (P(12), W(12)) = (6,6) is included in S^3.

• Lines 23-26 merge any remaining tuples of S^{i-1} (the W(k) > ww and P(k) > pp case); in this example the loop is not entered, as k has already reached 8 > h = 7.

Page 71: Dynamic Programming

Detailed knapsack algorithm (Contd..)

• Thus S^3 is {(0,0), (1,2), (2,3), (5,4), (6,6)},

which is {(P(8), W(8)), (P(9), W(9)), (P(10), W(10)), (P(11), W(11)), (P(12), W(12))}.

• F(3) = 8 points to the start of this set, i.e. to (P(8), W(8)).

Page 72: Dynamic Programming

THE TRAVELING SALESPERSON PROBLEM

• The 0/1 knapsack problem is a subset-selection problem: given n objects, there are 2^n different subsets of the objects.

• The solution is one of these 2^n subsets.

• In permutation problems like the Traveling Salesperson Problem (TSP) there are n! different permutations of n objects (n! > 2^n).

Page 73: Dynamic Programming

THE TRAVELING SALESPERSON PROBLEM (Contd..)

The TSP is as follows:

• Let G = (V, E) be a directed graph with edge costs Cij, where Cij > 0 for all i and j, and Cij = +∞ if <i,j> ∉ E.

• Let |V| = n and assume n > 1.

• A tour of G is a directed cycle that includes every vertex in V.

• The cost of a tour is the sum of the costs of the edges on the tour.

• The TSP is to find a tour of minimum cost.

Page 74: Dynamic Programming


THE TRAVELING SALESPERSON PROBLEM (Contd..)

Applications of TSP:-

1. A postal van is to pick up mail from mail boxes located at n different sites. The route taken by a postal van is a tour and the problem is to find a tour of minimum length.

2. A minimum cost tour of a robot arm used to tighten the nuts on some piece of machinery on an assembly line will minimize the time needed for the arm to complete its task.

Page 75: Dynamic Programming

TSP AND THE PRINCIPLE OF OPTIMALITY

• Assume a tour to be a simple path that starts and ends at vertex 1.

• Every tour consists of an edge <1,k> for some k ∈ V-{1} and a path from vertex k to vertex 1.

• The path from vertex k to vertex 1 goes through each vertex in V-{1,k} exactly once.

• If the tour is optimal, then the path from k to 1 must be a shortest k to 1 path going through all vertices in V-{1,k}.

• Hence the principle of optimality holds.

Page 76: Dynamic Programming

TSP AND THE PRINCIPLE OF OPTIMALITY (Contd..)

• Let g(i,S) be the length of a shortest path starting at vertex i, going through all vertices in S, and terminating at vertex 1.

• S is the set of vertices through which the shortest path goes.

• g(1, V-{1}) is the length of an optimal salesperson tour.

• From the principle of optimality, it follows that

g(1, V-{1}) = min_{2≤k≤n} {C1k + g(k, V-{1,k})} ......(1)   (forward approach)

Page 77: Dynamic Programming

TSP AND THE PRINCIPLE OF OPTIMALITY (Contd..)

• In general, for i ∉ S,

g(i,S) = min_{j∈S} {Cij + g(j, S-{j})} ..........(2)

• (1) may be solved for g(1, V-{1}) if we know g(k, V-{1,k}) for all choices of k.

• These values are obtained from (2).

• g(i,Ø) = Ci1, 1≤i≤n.

• g(i,Ø) is the length of the shortest path starting at i, going through no vertex, and ending at 1.

Page 78: Dynamic Programming

Solution of TSP using Dynamic programming

• Using equation (2), we obtain g(i,S) for all S of size 1, then for all S of size 2, etc.

• For all |S| < n-1, g(i,S) is computed with i ≠ 1, 1 ∉ S, i ∉ S.

Example: consider the directed graph on the vertices {1, 2, 3, 4} with the edge-length matrix

      1   2   3   4
1  [  0  10  15  20 ]
2  [  5   0   9  10 ]
3  [  6  13   0  12 ]
4  [  8   8   9   0 ]

(Figure: (a) the directed graph; (b) its edge-length matrix.)

Page 79: Dynamic Programming

Solution of TSP using Dynamic programming (contd..)

• Let us solve the problem using the dynamic programming approach.

As g(i,Ø) = Ci1, 1≤i≤n:

g(2,Ø) = C21 = 5; g(3,Ø) = C31 = 6; g(4,Ø) = C41 = 8

• We know that g(i,S) = min_{j∈S} {Cij + g(j, S-{j})}.

• For |S| = 1:

g(2,{3}) = C23 + g(3,Ø) = 9 + 6 = 15

g(2,{4}) = C24 + g(4,Ø) = 10 + 8 = 18

g(3,{2}) = C32 + g(2,Ø) = 13 + 5 = 18

g(3,{4}) = C34 + g(4,Ø) = 12 + 8 = 20

g(4,{2}) = C42 + g(2,Ø) = 8 + 5 = 13

g(4,{3}) = C43 + g(3,Ø) = 9 + 6 = 15

Page 80: Dynamic Programming

Solution of TSP using Dynamic programming (contd..)

• Next, we compute g(i,S) with |S| = 2, i ≠ 1, 1 ∉ S, i ∉ S:

g(2,{3,4}) = min{C23 + g(3,{4}), C24 + g(4,{3})} = min{9+20, 10+15} = min{29, 25} = 25

g(3,{2,4}) = min{C32 + g(2,{4}), C34 + g(4,{2})} = min{13+18, 12+13} = min{31, 25} = 25

g(4,{2,3}) = min{C42 + g(2,{3}), C43 + g(3,{2})} = min{8+15, 9+18} = min{23, 27} = 23

Page 81: Dynamic Programming

Solution of TSP using Dynamic programming (contd..)

Finally, we compute g(i,S) with |S| = 3:

g(1,{2,3,4}) = min{C12 + g(2,{3,4}), C13 + g(3,{2,4}), C14 + g(4,{2,3})}
             = min{10+25, 15+25, 20+23}
             = min{35, 40, 43} = 35

• Thus an optimal tour of the graph has length 35.

Page 82: Dynamic Programming

Solution of TSP using Dynamic programming (contd..)

• A tour of this length may be constructed if, for each g(i,S) with |S| = 1, 2 and 3, we retain the value of j that minimizes it.

• Let J(i,S) be this minimizing value of j.

• J(1,{2,3,4}) = 2, as C12 + g(2,{3,4}) = 35 is the minimum.

Page 83: Dynamic Programming

Solution of TSP using Dynamic programming (contd..)

• The remaining tour may be obtained from g(2,{3,4}).

• Observe that J(2,{3,4}) = 4 and J(4,{3}) = 3.

• The optimal tour starts at 1, goes through the vertices 2, 4 and 3 in that order, and ends at 1;

i.e. 1, 2, 4, 3, 1 are the vertices on the optimal tour.
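A sketch of this dynamic-programming solution (the Held-Karp algorithm) in Python, using frozensets for S and a table J for tour reconstruction; the names are illustrative, and vertices are 0-based internally, so "vertex 1" of the text is index 0. Run on the 4-vertex example above, it reproduces the tour 1, 2, 4, 3, 1 of length 35:

from itertools import combinations

def tsp(C):
    """g(i,S) = min over j in S of C[i][j] + g(j, S - {j}); tours start and end at vertex 0."""
    n = len(C)
    rest = range(1, n)                                    # vertices 2..n of the text
    g = {(i, frozenset()): C[i][0] for i in rest}         # g(i, empty set) = C_i1
    J = {}
    for size in range(1, n - 1):                          # |S| = 1, 2, ..., n-2
        for S in map(frozenset, combinations(rest, size)):
            for i in rest:
                if i in S:
                    continue
                cost, j = min((C[i][j] + g[(j, S - {j})], j) for j in S)
                g[(i, S)], J[(i, S)] = cost, j
    full = frozenset(rest)                                # equation (1): close the tour
    length, j = min((C[0][k] + g[(k, full - {k})], k) for k in rest)
    tour, S = [1, j + 1], full - {j}                      # report 1-based vertices
    while S:
        j = J[(j, S)]
        tour.append(j + 1)
        S = S - {j}
    return length, tour + [1]

C = [[0, 10, 15, 20],
     [5,  0,  9, 10],
     [6, 13,  0, 12],
     [8,  8,  9,  0]]
print(tsp(C))   # -> (35, [1, 2, 4, 3, 1])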

Page 84: Dynamic Programming

Complexity of TSP

• The time complexity of TSP by this method is Θ(n^2 2^n); the space needed is Θ(n 2^n).

• The complexity involves the computation of g(1, V-{1}), i.e. of all the g(i,S).

• Let N be the number of g(i,S)'s to be computed.

• For each value of |S| there are n-1 choices of i.

• The number of distinct sets S of size k not including 1 and i is (n-2)Ck.

Hence N = ∑_{k=0,...,n-2} (n-1) (n-2)Ck

Page 85: Dynamic Programming

Complexity of TSP (contd..)

N = (n-1) ∑_{k=0,...,n-2} (n-2)Ck = (n-1) 2^{n-2}

(applying the binomial theorem below with a = 1 and x = 1)

The binomial theorem is as follows:

(a + x)^n = a^n nC0 + a^{n-1} nC1 x + a^{n-2} nC2 x^2 + ... + nCn x^n

As each g(i,S) is computed as a minimum over at most n terms, the total time is N·O(n) = O(n^2 2^n).

As the g(i,S)'s are to be stored for further computations, the storage required is Θ(n 2^n).