Introduction to Algorithms Dynamic Programming – Part 2
CSE 680, Prof. Roger Crawfis

Transcript of Introduction to Algorithms Dynamic Programming – Part 2

Page 1: Introduction to Algorithms  Dynamic Programming – Part 2

Introduction to Algorithms Dynamic Programming – Part 2

CSE 680Prof. Roger Crawfis

Page 2:

The 0/1 Knapsack Problem

Given: A set S of n items, with each item i having:
• wi - a positive weight
• bi - a positive benefit

Goal: Choose items with maximum total benefit but with weight at most W.

If we are not allowed to take fractional amounts, then this is the 0/1 knapsack problem. In this case, we let T denote the set of items we take.

Objective: maximize ∑i∈T bi

Constraint: ∑i∈T wi ≤ W

Page 3:

Example

Given: A set S of n items, with each item i having:
• bi - a positive “benefit”
• wi - a positive “weight”

Goal: Choose items with maximum total benefit but with weight at most W.

Items:    1     2     3     4     5
Weight:   4 in  2 in  2 in  6 in  2 in
Benefit:  $20   $3    $6    $25   $80

“Knapsack”: box of width 9 in

Solution:
• item 5 ($80, 2 in)
• item 3 ($6, 2 in)
• item 1 ($20, 4 in)

Page 4:

First Attempt

Sk: Set of items numbered 1 to k. Define B[k] = best selection from Sk. Problem: does not have sub-problem optimality:

Consider set S={(3,2),(5,4),(8,5),(4,3),(10,9)} of(benefit, weight) pairs and total weight W = 20

Best for S4: {(3,2), (5,4), (8,5), (4,3)}, i.e. all four items, with benefit 20 and weight 14.

Best for S5: {(3,2), (5,4), (8,5), (10,9)}, with benefit 26 and weight 20. The optimal set for S5 drops item (4,3), so it cannot be obtained by extending the optimal set for S4.
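The failure of sub-problem optimality can be checked by brute force. A small Python sketch (not from the slides; `best_subset` is an illustrative helper) that enumerates every subset of the (benefit, weight) pairs above:

```python
from itertools import combinations

def best_subset(items, W):
    """Exhaustively find the max-benefit subset with total weight <= W."""
    best, best_set = 0, ()
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            weight = sum(w for _, w in combo)
            benefit = sum(b for b, _ in combo)
            if weight <= W and benefit > best:
                best, best_set = benefit, combo
    return best, best_set

S = [(3, 2), (5, 4), (8, 5), (4, 3), (10, 9)]  # (benefit, weight) pairs
W = 20
b4, s4 = best_subset(S[:4], W)  # best for S4
b5, s5 = best_subset(S[:5], W)  # best for S5
print(b4, s4)  # 20: S4's optimum takes all four items
print(b5, s5)  # 26: S5's optimum drops (4, 3), so it does not extend S4's optimum
```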

Page 5:

Second Attempt

Sk: Set of items numbered 1 to k. Define B[k,w] to be the best selection from Sk with

weight at most w This does have sub-problem optimality.

I.e., the best subset of Sk with weight at most w is either:
• the best subset of Sk-1 with weight at most w, or
• the best subset of Sk-1 with weight at most w - wk, plus item k.

B[k, w] = B[k-1, w]                               if wk > w
B[k, w] = max{ B[k-1, w], B[k-1, w-wk] + bk }     otherwise
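This recurrence can be transcribed directly as memoized Python (an illustration, not from the slides; it reuses the (benefit, weight) data from the Page 4 counterexample, 1-indexed as in the slide's notation):

```python
from functools import lru_cache

# Page-4 data: b[k], w[k] for k = 1..5 (index 0 unused, matching 1-indexing)
benefits = [None, 3, 5, 8, 4, 10]
weights  = [None, 2, 4, 5, 3, 9]

@lru_cache(maxsize=None)
def B(k, w):
    """B[k, w]: best total benefit using items 1..k with weight limit w."""
    if k == 0:
        return 0
    if weights[k] > w:                      # item k cannot fit at all
        return B(k - 1, w)
    return max(B(k - 1, w),                 # best without item k
               B(k - 1, w - weights[k]) + benefits[k])  # best with item k

print(B(4, 20))  # 20, the S4 optimum
print(B(5, 20))  # 26, the S5 optimum
```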

Page 6:

Knapsack Example

item  weight  value
1     2       $12
2     1       $10
3     3       $20
4     2       $15

Knapsack of capacity W = 5: w1 = 2, v1 = 12; w2 = 1, v2 = 10; w3 = 3, v3 = 20; w4 = 2, v4 = 15.

Max item allowed (rows) vs. max weight (columns):

k\w   0   1   2   3   4   5
0     0   0   0   0   0   0
1     0   0  12  12  12  12
2     0  10  12  22  22  22
3     0  10  12  22  30  32
4     0  10  15  25  30  37

Page 7:

Algorithm

Since B[k,w] is defined in terms of B[k-1,*], we can use two arrays instead of a full matrix.

Running time is O(nW).

Not a polynomial-time algorithm since W may be large.

Called a pseudo-polynomial time algorithm.

Algorithm 01Knapsack(S, W):
  Input: set S of n items with benefit bi and weight wi; maximum weight W
  Output: benefit of best subset of S with weight at most W
  let A and B be arrays of length W + 1
  for w ← 0 to W do
    B[w] ← 0
  for k ← 1 to n do
    copy array B into array A
    for w ← wk to W do
      if A[w - wk] + bk > A[w] then
        B[w] ← A[w - wk] + bk
  return B[W]
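A runnable sketch of the two-array scheme in Python (illustrative; `knapsack_01` is not a name from the slides). A holds row B[k-1, *], B holds row B[k, *]:

```python
def knapsack_01(benefits, weights, W):
    """Two-array version of 01Knapsack: returns the best achievable benefit."""
    B = [0] * (W + 1)
    for b_k, w_k in zip(benefits, weights):
        A = B[:]                          # copy B into A (previous row)
        for w in range(w_k, W + 1):
            if A[w - w_k] + b_k > A[w]:   # taking item k beats skipping it
                B[w] = A[w - w_k] + b_k
    return B[W]

# Page-6 example: capacity 5, items (w, v) = (2,12), (1,10), (3,20), (2,15)
print(knapsack_01([12, 10, 20, 15], [2, 1, 3, 2], 5))  # 37
```

Running time is O(nW) with only O(W) extra space, matching the slide's analysis.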

Page 8:

Longest Common Subsequence (LCS)

A subsequence of a sequence/string S is obtained by deleting zero or more symbols from S. For example, the following are some subsequences of “president”: pred, sdn, predent. In other words, the letters of a subsequence of S appear in order in S, but they are not required to be consecutive.

The longest common subsequence problem is to find a maximum length common subsequence between two sequences.
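The "appear in order, not necessarily consecutive" test is a simple greedy scan. A possible Python check (illustrative; `is_subsequence` is not from the slides):

```python
def is_subsequence(s, t):
    """True iff s can be obtained from t by deleting zero or more symbols."""
    it = iter(t)
    # 'ch in it' advances the iterator until ch is found, so matches stay in order
    return all(ch in it for ch in s)

print(is_subsequence("pred", "president"))     # True
print(is_subsequence("sdn", "president"))      # True
print(is_subsequence("predent", "president"))  # True
print(is_subsequence("dp", "president"))       # False: letters out of order
```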

Page 9:

LCS

For instance:
Sequence 1: president
Sequence 2: providence
Their LCS is priden.

president

providence

Page 10:

LCS

Another example:

Sequence 1: algorithm
Sequence 2: alignment
One of their LCSs is algm.

a l g o r i t h m

a l i g n m e n t

Page 11:

Naïve Algorithm

For every subsequence of X, check whether it’s a subsequence of Y .

Time: Θ(n·2^m).
• 2^m subsequences of X to check.
• Each subsequence takes Θ(n) time to check: scan Y for the first letter, then the second, and so on.
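The naïve algorithm can be written out directly, which makes the Θ(n·2^m) cost tangible (an illustrative sketch, not from the slides; it tries longer subsequences first so it can stop at the first hit):

```python
from itertools import combinations

def is_subsequence(s, t):
    """Theta(n) greedy scan of t for the letters of s, in order."""
    it = iter(t)
    return all(ch in it for ch in s)

def lcs_naive(X, Y):
    """Check subsequences of X against Y, longest first: Theta(n * 2^m) time."""
    for r in range(len(X), 0, -1):
        for idxs in combinations(range(len(X)), r):   # all index subsets of size r
            cand = "".join(X[i] for i in idxs)
            if is_subsequence(cand, Y):
                return cand
    return ""

result = lcs_naive("algorithm", "alignment")
print(result)  # some length-4 LCS (the slides note "algm" is one of them)
```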

Page 12:

Optimal Substructure

Notation: prefix Xi = x1,...,xi is the first i letters of X.

Theorem Let Z = z1, . . . , zk be any LCS of X and Y .

1. If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1.

2. If xm ≠ yn, then either zk ≠ xm and Z is an LCS of Xm-1 and Y,

3. or zk ≠ yn and Z is an LCS of X and Yn-1.

Page 13:

Optimal Substructure

Proof: (case 1: xm = yn)

Any common subsequence Z’ that does not end in xm = yn can be made longer by adding xm = yn to the end. Therefore,

(1) longest common subsequence (LCS) Z must end in xm = yn.

(2) Zk-1 is a common subsequence of Xm-1 and Yn-1, and

(3) there is no longer CS of Xm-1 and Yn-1, or Z would not be an LCS.


Page 14:

Optimal Substructure

Proof: (case 2: xm ≠ yn and zk ≠ xm)

Since Z does not end in xm,

(1) Z is a common subsequence of Xm-1 and Y, and

(2) there is no longer CS of Xm-1 and Y, or Z would not be an LCS.


Page 15:

Recursive Solution

Define c[i, j] = length of LCS of Xi and Yj . We want c[m,n].

c[i, j] = 0                              if i = 0 or j = 0
c[i, j] = c[i-1, j-1] + 1                if i, j > 0 and xi = yj
c[i, j] = max(c[i-1, j], c[i, j-1])      if i, j > 0 and xi ≠ yj
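A direct memoized transcription of this recurrence (an illustration, not from the slides; Python strings are 0-indexed, so x_i is X[i-1]):

```python
from functools import lru_cache

def lcs_length(X, Y):
    """c[i, j] = length of an LCS of the prefixes X_i and Y_j; returns c[m, n]."""
    @lru_cache(maxsize=None)
    def c(i, j):
        if i == 0 or j == 0:
            return 0
        if X[i - 1] == Y[j - 1]:             # x_i = y_j
            return c(i - 1, j - 1) + 1
        return max(c(i - 1, j), c(i, j - 1)) # x_i != y_j
    return c(len(X), len(Y))

print(lcs_length("president", "providence"))  # 6, e.g. 'priden'
print(lcs_length("algorithm", "alignment"))   # 4, e.g. 'algm'
```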

Page 16:

Recursive Solution

c[prefix, prefix′] = 0                                                  if either prefix is empty
c[prefix, prefix′] = c[prefix - end, prefix′ - end] + 1                 if end(prefix) = end(prefix′)
c[prefix, prefix′] = max(c[prefix - end, prefix′], c[prefix, prefix′ - end])   otherwise

The call tree for c[springtime, printing]:

c[springtime, printing]
→ c[springtim, printing], c[springtime, printin]
→ c[springti, printing], c[springtim, printin], c[springtim, printin], c[springtime, printi]
→ c[springt, printing], c[springti, printin], c[springtim, printi], c[springtime, print], …

Note that c[springtim, printin] already appears twice at the second level: the subproblems overlap.

Page 17:

Recursive Solution

(Same recurrence as before, with the subproblem values recorded in a table: one row per letter of “springtime” (s, p, r, i, n, g, t, i, m, e) and one column per letter of “printing” (p, r, i, n, t, i, n, g).)

• Keep track of c[,] in a table of nm entries:
• top/down
• bottom/up

Page 18:

How to compute LCS?

Let A = a1a2…am and B = b1b2…bn. Define len(i, j) as the length of an LCS between a1a2…ai and b1b2…bj.

With proper initializations, len(i, j) can be computed as follows.

len(i, j) = 0                                  if i = 0 or j = 0
len(i, j) = len(i-1, j-1) + 1                  if i, j > 0 and ai = bj
len(i, j) = max(len(i, j-1), len(i-1, j))      if i, j > 0 and ai ≠ bj

Page 19:

LCS Algorithm

procedure LCS-Length(A, B)
1. for i ← 0 to m do len(i, 0) ← 0
2. for j ← 1 to n do len(0, j) ← 0
3. for i ← 1 to m do
4.   for j ← 1 to n do
5.     if ai = bj then
         len(i, j) ← len(i-1, j-1) + 1; prev(i, j) ← “↖”
6.     else if len(i-1, j) ≥ len(i, j-1)
7.     then len(i, j) ← len(i-1, j); prev(i, j) ← “↑”
8.     else len(i, j) ← len(i, j-1); prev(i, j) ← “←”
9. return len and prev
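A runnable version of LCS-Length (an illustrative sketch, not from the slides; 'D', 'U', 'L' stand in for the ↖, ↑, ← prev markers, and the table is named `length` to avoid shadowing Python's built-in `len`):

```python
def lcs_length_prev(A, B):
    """Bottom-up LCS-Length: returns the len and prev tables."""
    m, n = len(A), len(B)
    length = [[0] * (n + 1) for _ in range(m + 1)]   # row/col 0 stay all zeros
    prev = [[None] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if A[i - 1] == B[j - 1]:                  # a_i = b_j
                length[i][j] = length[i - 1][j - 1] + 1
                prev[i][j] = 'D'                      # diagonal
            elif length[i - 1][j] >= length[i][j - 1]:
                length[i][j] = length[i - 1][j]
                prev[i][j] = 'U'                      # up
            else:
                length[i][j] = length[i][j - 1]
                prev[i][j] = 'L'                      # left
    return length, prev

length, prev = lcs_length_prev("president", "providence")
print(length[9][10])  # 6, the bottom-right entry of the table on the next slide
```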

Page 20:

    j   0   1   2   3   4   5   6   7   8   9  10
 i          p   r   o   v   i   d   e   n   c   e
 0      0   0   0   0   0   0   0   0   0   0   0
 1  p   0   1   1   1   1   1   1   1   1   1   1
 2  r   0   1   2   2   2   2   2   2   2   2   2
 3  e   0   1   2   2   2   2   2   3   3   3   3
 4  s   0   1   2   2   2   2   2   3   3   3   3
 5  i   0   1   2   2   2   3   3   3   3   3   3
 6  d   0   1   2   2   2   3   4   4   4   4   4
 7  e   0   1   2   2   2   3   4   5   5   5   5
 8  n   0   1   2   2   2   3   4   5   6   6   6
 9  t   0   1   2   2   2   3   4   5   6   6   6

Running time and memory: O(mn) and O(mn).

Page 21:

The Backtracking Algorithm

procedure Output-LCS(A, prev, i, j)
1. if i = 0 or j = 0 then return
2. if prev(i, j) = “↖” then
     Output-LCS(A, prev, i-1, j-1)
     print A[i]
3. else if prev(i, j) = “↑” then Output-LCS(A, prev, i-1, j)
4. else Output-LCS(A, prev, i, j-1)
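Putting the two procedures together in Python (an illustrative sketch, not from the slides; it rebuilds the len/prev tables and then backtracks, collecting letters into a list instead of printing):

```python
def lcs(A, B):
    """Build the len/prev tables, then backtrack as in Output-LCS."""
    m, n = len(A), len(B)
    length = [[0] * (n + 1) for _ in range(m + 1)]
    prev = [[None] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if A[i - 1] == B[j - 1]:
                length[i][j], prev[i][j] = length[i - 1][j - 1] + 1, 'D'
            elif length[i - 1][j] >= length[i][j - 1]:
                length[i][j], prev[i][j] = length[i - 1][j], 'U'
            else:
                length[i][j], prev[i][j] = length[i][j - 1], 'L'

    out = []
    def output_lcs(i, j):                 # the backtracking procedure
        if i == 0 or j == 0:
            return
        if prev[i][j] == 'D':             # diagonal: this letter is in the LCS
            output_lcs(i - 1, j - 1)
            out.append(A[i - 1])          # "print A[i]"
        elif prev[i][j] == 'U':
            output_lcs(i - 1, j)
        else:
            output_lcs(i, j - 1)
    output_lcs(m, n)
    return "".join(out)

print(lcs("president", "providence"))  # priden
```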

Page 22:

[Same len table as on Page 20, with the prev arrows traced back from entry (9, 10); the ↖ entries along the path spell out the common subsequence.]

Output: priden

Example

Page 23:

Transitive Closure

Compute the transitive closure of a directed graph: if there exists a (non-trivial) path from node i to node j, then add edge (i, j) to the resulting graph.

Also called reachability: if a can reach b and b can reach c, then a can reach c.

From Wikipedia.org: http://en.wikipedia.org/wiki/File:Transitive-Closure.PNG

Page 24:

Warshall’s Algorithm

On the kth iteration, the algorithm determines, for every pair of vertices i, j, whether a path exists from i to j with just the vertices 1,…,k allowed as intermediate nodes.

R(k)[i,j] = R(k-1)[i,j]                       (path using just 1,…,k-1)
            or
            R(k-1)[i,k] and R(k-1)[k,j]       (path from i to k and from k to j, each using just 1,…,k-1)

[Diagram: vertices i, k, j, with the path from i to j routed through k.]

Initial condition?

Page 25:

Warshall’s Algorithm

Constructs transitive closure T as the last matrix in the sequence of n-by-n matrices R(0), … , R(k), … , R(n) where

R(k)[i,j] = 1 iff there is a nontrivial path from i to j with only the first k vertices allowed as intermediates

Note that R(0) = A (adjacency matrix), R(n) = T (transitive closure)

Example: paths with only node 1 allowed as an intermediate.

[Two copies of the 4-node digraph (nodes 1–4), before and after adding the edges discovered via node 1.]
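The sequence R(0), …, R(n) can be computed with a short Python sketch (illustrative, not from the slides; `warshall` returns every intermediate matrix so the steps on the next slide can be checked):

```python
def warshall(A):
    """Warshall's algorithm: returns the list [R(0), R(1), ..., R(n)]."""
    n = len(A)
    R = [row[:] for row in A]                # R(0) = adjacency matrix
    seq = [[row[:] for row in R]]
    for k in range(n):                       # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
        seq.append([row[:] for row in R])    # snapshot R(k+1)
    return seq

# Adjacency matrix of the 4-node example (edges 1->3, 2->1, 2->4, 4->2)
A = [[0, 0, 1, 0],
     [1, 0, 0, 1],
     [0, 0, 0, 0],
     [0, 1, 0, 0]]
seq = warshall(A)
print(seq[-1])  # R(4) = T, the transitive closure
```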

Page 26:

Warshall’s Algorithm

Complete example:

R(0):
0 0 1 0
1 0 0 1
0 0 0 0
0 1 0 0

R(1):
0 0 1 0
1 0 1 1
0 0 0 0
0 1 0 0

R(2):
0 0 1 0
1 0 1 1
0 0 0 0
1 1 1 1

R(3):
0 0 1 0
1 0 1 1
0 0 0 0
1 1 1 1

R(4):
0 0 1 0
1 1 1 1
0 0 0 0
1 1 1 1

[The 4-node digraph is shown alongside each matrix.]

Page 27:

[Repeats the matrices R(0) through R(4) from the previous slide, shown beside the 4-node digraph at each step.]

Page 28:

Warshall’s Algorithm

Time efficiency: Θ(n³).
Space efficiency: matrices can be written over their predecessors (with some care), so it's Θ(n²).
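The Θ(n²) space bound comes from keeping a single matrix and overwriting it in place; this is safe because during pass k neither row k nor column k changes. A sketch of that in-place variant (illustrative; `transitive_closure` is not a name from the slides):

```python
def transitive_closure(A):
    """In-place Warshall: one n-by-n matrix overwritten each pass, Theta(n^2) space."""
    n = len(A)
    T = [row[:] for row in A]            # copy so the input matrix survives
    for k in range(n):
        row_k = T[k]
        for i in range(n):
            if T[i][k]:                  # only rows that can reach k gain entries
                T[i] = [T[i][j] or row_k[j] for j in range(n)]
    return T

A = [[0, 0, 1, 0],
     [1, 0, 0, 1],
     [0, 0, 0, 0],
     [0, 1, 0, 0]]
print(transitive_closure(A))  # [[0,0,1,0], [1,1,1,1], [0,0,0,0], [1,1,1,1]]
```

Skipping rows with T[i][k] = 0 is a common constant-factor optimization; the asymptotic bounds are unchanged.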