Lecture 12 - Stanford University

Lecture 12: More Bellman-Ford, Floyd-Warshall, and Dynamic Programming!

Transcript of Lecture 12 - Stanford University

Page 1: Lecture 12 - Stanford University

Lecture 12: More Bellman-Ford, Floyd-Warshall, and Dynamic Programming!

Page 2: Lecture 12 - Stanford University

Announcements

• HW5 due Friday
• Midterms have been graded!
  • Available on Gradescope.
  • Mean/Median: 66 (it was a hard test!)
  • Max: 97
  • Std. Dev: 14
• Please look at the solutions and come to office hours if you have questions about your midterm!

Page 3: Lecture 12 - Stanford University

Recall

• A weighted directed graph:

[Figure: a weighted directed graph with vertices s, u, v, a, b, t. Two paths from s to t are highlighted: one of cost 22, and one of cost 10, which is the shortest path from s to t.]

• Weights on edges represent costs.
• The cost of a path is the sum of the weights along that path.
• A shortest path from s to t is a directed path from s to t with the smallest cost.
• The single-source shortest path problem is to find the shortest path from s to v for all v in the graph.

Page 4: Lecture 12 - Stanford University

Last time

• Dijkstra's algorithm!
• Bellman-Ford algorithm!
• Both solve single-source shortest path in weighted graphs.

[Figure: the same weighted directed graph on vertices s, u, v, a, b, t.]

We didn't quite finish with the Bellman-Ford algorithm, so let's do that now.

Page 5: Lecture 12 - Stanford University

Bellman-Ford vs. Dijkstra

Both start the same way:
• d[v] = ∞ for all v in V
• d[s] = 0

Dijkstra(G,s):
• While there are not-sure nodes:
  • Pick the not-sure node u with the smallest estimate d[u].
  • For v in u.outNeighbors:
    • d[v] ← min(d[v], d[u] + w(u,v))
  • Mark u as sure.

Bellman-Ford(G,s):
• For i = 0, …, n-2:
  • For u in V:
    • For v in u.outNeighbors:
      • d[v] ← min(d[v], d[u] + w(u,v))

Instead of picking u cleverly, just update for all of the u's.
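The two procedures above can be sketched in runnable Python. This is a hedged sketch, not the course's reference implementation: it assumes the graph is a dict mapping each vertex to a list of (out-neighbor, weight) pairs, and a binary heap stands in for "pick the not-sure node with the smallest estimate."

```python
import heapq
import math

def dijkstra(graph, s):
    """Repeatedly pick the not-sure node with the smallest estimate."""
    d = {v: math.inf for v in graph}
    d[s] = 0
    sure = set()
    pq = [(0, s)]                      # heap of (estimate, vertex)
    while pq:
        du, u = heapq.heappop(pq)
        if u in sure:
            continue                   # stale heap entry
        sure.add(u)                    # mark u as sure
        for v, w in graph[u]:
            if du + w < d[v]:
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))
    return d

def bellman_ford(graph, s):
    """Instead of picking u cleverly, update for all of the u's, n-1 times."""
    d = {v: math.inf for v in graph}
    d[s] = 0
    for _ in range(len(graph) - 1):    # i = 0, ..., n-2
        for u in graph:
            for v, w in graph[u]:
                d[v] = min(d[v], d[u] + w)
    return d
```

On a graph with non-negative weights the two agree; Bellman-Ford just does more work to get there.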

Page 6: Lecture 12 - Stanford University

For pedagogical reasons which we will see later today…

• We are actually going to change this to be dumber.
• Keep n arrays: d(0), d(1), …, d(n-1)

Bellman-Ford*(G,s):
• d(0)[v] = ∞ for all v in V
• d(0)[s] = 0
• For i = 0, …, n-2:
  • For u in V:
    • For v in u.outNeighbors:
      • d(i+1)[v] ← min(d(i)[v], d(i)[u] + w(u,v))
• Then dist(s,v) = d(n-1)[v]

Page 7: Lecture 12 - Stanford University

Another way of writing this

• We are actually going to change this to be dumber.
• Keep n arrays: d(0), d(1), …, d(n-1)

Bellman-Ford*(G,s):
• d(0)[v] = ∞ for all v in V
• d(0)[s] = 0
• For i = 0, …, n-2:
  • For v in V:
    • d(i+1)[v] ← min(d(i)[v], min_{u in v.inNbrs} {d(i)[u] + w(u,v)})
• Then dist(s,v) = d(n-1)[v]

The for loop over u gets picked up in this min.
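The Bellman-Ford* pseudocode above, with all n arrays kept around, can be sketched directly in Python. This is a sketch under the same graph-as-dict assumption as before (each vertex maps to a list of (out-neighbor, weight) pairs); the in-neighbor lists are precomputed so the inner min matches the pseudocode.

```python
import math

def bellman_ford_star(graph, s):
    """Return all n arrays d(0), ..., d(n-1); d[n-1][v] = dist(s, v)."""
    vertices = list(graph)
    n = len(vertices)
    # in_nbrs[v] = list of (u, w) such that there is an edge u -> v
    in_nbrs = {v: [] for v in vertices}
    for u in vertices:
        for v, w in graph[u]:
            in_nbrs[v].append((u, w))
    d = [{v: math.inf for v in vertices} for _ in range(n)]
    d[0][s] = 0
    for i in range(n - 1):             # i = 0, ..., n-2
        for v in vertices:
            best = min((d[i][u] + w for u, w in in_nbrs[v]), default=math.inf)
            d[i + 1][v] = min(d[i][v], best)
    return d
```

Keeping every d(i) is wasteful (a later slide fixes this), but it makes the meaning of each round explicit.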

Page 8: Lecture 12 - Stanford University

Bellman-Ford

How far is a node from Gates?

[Figure: a graph with vertices Gates, Packard, CS161, Union, Dish and edge weights 1, 1, 4, 20, 22, 25. The only estimate so far is Gates = 0.]

• For i = 0, …, n-2:
  • For v in V:
    • d(i+1)[v] ← min(d(i)[v], min_u {d(i)[u] + w(u,v)})

        Gates  Packard  CS161  Union  Dish
d(0):     0      ∞        ∞      ∞     ∞
d(1):
d(2):
d(3):
d(4):

Page 9: Lecture 12 - Stanford University

Bellman-Ford

How far is a node from Gates?     i = 0

[Figure: the same graph; after this round the estimates are Gates = 0, Packard = 1, Dish = 25.]

        Gates  Packard  CS161  Union  Dish
d(0):     0      ∞        ∞      ∞     ∞
d(1):     0      1        ∞      ∞     25
d(2):
d(3):
d(4):

• For i = 0, …, n-2:
  • For v in V:
    • d(i+1)[v] ← min(d(i)[v], min_u {d(i)[u] + w(u,v)})

Page 10: Lecture 12 - Stanford University

Bellman-Ford

How far is a node from Gates?     i = 1

[Figure: the same graph; the estimates are now Gates = 0, Packard = 1, CS161 = 2, Union = 45, Dish = 23.]

        Gates  Packard  CS161  Union  Dish
d(0):     0      ∞        ∞      ∞     ∞
d(1):     0      1        ∞      ∞     25
d(2):     0      1        2      45    23
d(3):
d(4):

• For i = 0, …, n-2:
  • For v in V:
    • d(i+1)[v] ← min(d(i)[v], min_u {d(i)[u] + w(u,v)})

Page 11: Lecture 12 - Stanford University

Bellman-Ford

How far is a node from Gates?     i = 2

[Figure: the same graph; the estimates are now Gates = 0, Packard = 1, CS161 = 2, Union = 6, Dish = 23.]

        Gates  Packard  CS161  Union  Dish
d(0):     0      ∞        ∞      ∞     ∞
d(1):     0      1        ∞      ∞     25
d(2):     0      1        2      45    23
d(3):     0      1        2      6     23
d(4):

• For i = 0, …, n-2:
  • For v in V:
    • d(i+1)[v] ← min(d(i)[v], min_u {d(i)[u] + w(u,v)})

Page 12: Lecture 12 - Stanford University

Bellman-Ford

How far is a node from Gates?     i = 3

[Figure: the same graph; the estimates are unchanged: Gates = 0, Packard = 1, CS161 = 2, Union = 6, Dish = 23.]

        Gates  Packard  CS161  Union  Dish
d(0):     0      ∞        ∞      ∞     ∞
d(1):     0      1        ∞      ∞     25
d(2):     0      1        2      45    23
d(3):     0      1        2      6     23
d(4):     0      1        2      6     23

• For i = 0, …, n-2:
  • For v in V:
    • d(i+1)[v] ← min(d(i)[v], min_u {d(i)[u] + w(u,v)})
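The rounds of this worked example can be reproduced in code. This is a sketch; the edge list below is inferred from the figure (Gates→Packard 1, Packard→CS161 1, CS161→Union 4, Gates→Dish 25, Dish→Union 20, Packard→Dish 22), chosen so the rounds match the table above.

```python
import math

edges = [("Gates", "Packard", 1), ("Packard", "CS161", 1),
         ("CS161", "Union", 4), ("Gates", "Dish", 25),
         ("Dish", "Union", 20), ("Packard", "Dish", 22)]
nodes = ["Gates", "Packard", "CS161", "Union", "Dish"]

d = {v: math.inf for v in nodes}
d["Gates"] = 0
history = [dict(d)]                 # history[i] is the row d(i) of the table
for i in range(len(nodes) - 1):     # i = 0, ..., n-2
    new = dict(d)
    for u, v, w in edges:           # one Bellman-Ford round
        new[v] = min(new[v], d[u] + w)
    d = new
    history.append(dict(d))
```

Printing `history` row by row reproduces the table, including the intermediate Union = 45 (via Dish) that later drops to 6 (via CS161).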

Page 13: Lecture 12 - Stanford University

Interpretation of d(i)

        Gates  Packard  CS161  Union  Dish
d(0):     0      ∞        ∞      ∞     ∞
d(1):     0      1        ∞      ∞     25
d(2):     0      1        2      45    23
d(3):     0      1        2      6     23
d(4):     0      1        2      6     23

d(i)[v] is equal to the cost of the shortest path between s and v with at most i edges.

[Figure: the Gates graph again, with final distances Gates = 0, Packard = 1, CS161 = 2, Union = 6, Dish = 23.]

Page 14: Lecture 12 - Stanford University

Why does Bellman-Ford work?

• Inductive hypothesis:
  • d(i)[v] is equal to the cost of the shortest path between s and v with at most i edges.
• Conclusion:

Page 15: Lecture 12 - Stanford University

Aside: simple paths

Assume there is no negative cycle.

• Then not only are there shortest paths, but actually there's always a simple shortest path.
• A simple path in a graph with n vertices has at most n-1 edges in it.

"Simple" means that the path has no cycles in it.

[Figure: two small examples. In one, a path s → u → v → t already visits every vertex: can't add another edge without making a cycle! In the other, a path from s to t contains a cycle: this cycle isn't helping. Just get rid of it.]

Page 16: Lecture 12 - Stanford University

Why does it work?

• Inductive hypothesis:
  • d(i)[v] is equal to the cost of the shortest path between s and v with at most i edges.
• Conclusion(s):
  • d(n-1)[v] is equal to the cost of the shortest path between s and v with at most n-1 edges.
  • If there are no negative cycles, d(n-1)[v] is equal to the cost of the shortest path.

Notice that negative edge weights are fine. Just not negative cycles.

Page 17: Lecture 12 - Stanford University

Note on implementation

• Don't actually keep all n arrays around.
• Just keep two at a time: "last round" and "this round"

        Gates  Packard  CS161  Union  Dish
d(0):     0      ∞        ∞      ∞     ∞
d(1):     0      1        ∞      ∞     25
d(2):     0      1        2      45    23
d(3):     0      1        2      6     23
d(4):     0      1        2      6     23

Only need the last two rows (d(3) and d(4)) in order to compute d(4).
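The two-array note above can be sketched like this (a sketch, assuming the same graph-as-dict representation used earlier): only the previous round's estimates are kept, not all n arrays.

```python
import math

def bellman_ford_two_rows(graph, s):
    """Bellman-Ford keeping only 'last round' and 'this round'."""
    last = {v: math.inf for v in graph}
    last[s] = 0
    for _ in range(len(graph) - 1):    # the usual n-1 rounds
        this = dict(last)              # start this round from last round
        for u in graph:
            for v, w in graph[u]:
                this[v] = min(this[v], last[u] + w)
        last = this                    # this round becomes last round
    return last
```

This drops the memory from O(n^2) to O(n) while computing exactly the same final distances.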

Page 18: Lecture 12 - Stanford University

This seems much slower than Dijkstra

• And it is: Running time O(mn)
• However, it's also more flexible in a few ways.
  • Can handle negative edges
  • If we keep on doing these iterations, then changes in the network will propagate through.

• For i = 0, …, n-2:
  • For v in V:
    • d(i+1)[v] ← min(d(i)[v], min_{u in v.nbrs} {d(i)[u] + w(u,v)})
• Then dist(s,v) = d(n-1)[v]

Page 19: Lecture 12 - Stanford University

Negative cycles

[Figure: the Gates graph with some weights changed to 1, 1, -3, 10, -2, creating a negative cycle.]

• For i = 0, …, n-2:
  • For v in V:
    • d(i+1)[v] ← min(d(i)[v], min_{u in v.nbrs} {d(i)[u] + w(u,v)})

        Gates  Packard  CS161  Union  Dish
d(0):     0      ∞        ∞      ∞     ∞
d(1):     0      1        ∞      ∞     -3
d(2):     0      -5       2      7     -3
d(3):     -4     -5       -4     6     -3
d(4):     -4     -5       -4     6     -7

This is not looking good!

Page 20: Lecture 12 - Stanford University

Negative edge weights

[Figure: the same graph with weights 1, 1, -3, 10, -2.]

• For i = 0, …, n-2:
  • For v in V:
    • d(i+1)[v] ← min(d(i)[v], min_{u in v.nbrs} {d(i)[u] + w(u,v)})

        Gates  Packard  CS161  Union  Dish
d(0):     0      ∞        ∞      ∞     ∞
d(1):     0      1        ∞      ∞     -3
d(2):     0      -5       2      7     -3
d(3):     -4     -5       -4     6     -3
d(4):     -4     -5       -4     6     -7
d(5):     -4     -9       -4     3     -7

But we can tell that it's not looking good: Some stuff changed!

Page 21: Lecture 12 - Stanford University

Negative cycles in Bellman-Ford

• If there are no negative cycles:
  • Everything works as it should, and stabilizes.
• If there are negative cycles:
  • Not everything works as it should…
    • Note: it couldn't possibly work, since shortest paths aren't well-defined if there are negative cycles.
  • The d[v] values will keep changing.
• Solution:
  • Go one round more and see if things change.

Page 22: Lecture 12 - Stanford University

Bellman-Ford algorithm

Bellman-Ford*(G,s):
• d(0)[v] = ∞ for all v in V
• d(0)[s] = 0
• For i = 0, …, n-1:
  • For v in V:
    • d(i+1)[v] ← min(d(i)[v], min_{u in v.inNeighbors} {d(i)[u] + w(u,v)})
• If d(n-1) != d(n):
  • Return NEGATIVE CYCLE ☹
• Otherwise, dist(s,v) = d(n-1)[v]

Running time: O(mn)
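The full algorithm above, with the extra round for cycle detection, can be sketched as follows. This is a hedged sketch of an in-place variant (estimates are updated within a round rather than kept in synchronized arrays); after n-1 rounds the distances are exact when there is no negative cycle, so any improvement in round n signals a negative cycle reachable from s.

```python
import math

def bellman_ford_detect(graph, s):
    """Return distances from s, or None if a negative cycle is detected."""
    d = {v: math.inf for v in graph}
    d[s] = 0
    for _ in range(len(graph) - 1):    # the usual n-1 rounds
        for u in graph:
            for v, w in graph[u]:
                if d[u] + w < d[v]:
                    d[v] = d[u] + w
    # round n: if any estimate still improves, then d(n) != d(n-1)
    for u in graph:
        for v, w in graph[u]:
            if d[u] + w < d[v]:
                return None            # NEGATIVE CYCLE
    return d
```

Note that, as in the slides, this only detects negative cycles reachable from s; vertices the cycle can't reach from s keep their ∞ estimates and never change.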

Page 23: Lecture 12 - Stanford University

Bellman-Ford is also used in practice.

• e.g., Routing Information Protocol (RIP) uses something like Bellman-Ford.
  • Older protocol, not used as much anymore.
• Each router keeps a table of distances to every other router.
• Periodically we do a Bellman-Ford update.
  • Aka, for an edge (u,v):
    • d(i+1)[v] ← min(d(i)[v], d(i)[u] + w(u,v))
• This means that if there are changes in the network, this will propagate. (maybe slowly…)

Destination     Cost to get there   Send to whom?
172.16.1.0      34                  172.16.1.1
10.20.40.1      10                  192.168.1.2
10.155.120.1    9                   10.13.50.0
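A toy version of the table update above can be sketched like this. This is a hypothetical, simplified sketch (not actual RIP packet handling): the function `dv_update`, its table format, and the router names in the usage are all illustrative inventions. Each table maps a destination to a (cost, next-hop) pair, and a router merges a neighbor's advertised distances with the Bellman-Ford rule.

```python
def dv_update(my_table, neighbor, neighbor_table, link_cost):
    """One Bellman-Ford-style update: fold a neighbor's advertised
    distances into our routing table. Returns True if anything changed."""
    changed = False
    for dest, (cost, _) in neighbor_table.items():
        offer = link_cost + cost       # d[u] + w(u,v), in the slide's notation
        if dest not in my_table or offer < my_table[dest][0]:
            my_table[dest] = (offer, neighbor)
            changed = True
    return changed
```

Repeating this whenever a neighbor advertises a change is exactly how updates "propagate (maybe slowly)" through the network.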

Page 24: Lecture 12 - Stanford University

Recap: shortest paths

• BFS:
  • (+) O(n+m)
  • (-) only unweighted graphs
• Dijkstra's algorithm:
  • (+) weighted graphs
  • (+) O(n log(n) + m) if you implement it right.
  • (-) no negative edge weights
  • (-) very "centralized" (need to keep track of all the vertices to know which to update).
• The Bellman-Ford algorithm:
  • (+) weighted graphs, even with negative weights
  • (+) can be done in a distributed fashion, every vertex using only information from its neighbors.
  • (-) O(nm)

Page 25: Lecture 12 - Stanford University

Important thing about Bellman-Ford for the rest of this lecture

        Gates  Packard  CS161  Union  Dish
d(0):     0      ∞        ∞      ∞     ∞
d(1):     0      1        ∞      ∞     25
d(2):     0      1        2      45    23
d(3):     0      1        2      6     23
d(4):     0      1        2      6     23

d(i)[v] is equal to the cost of the shortest path between s and v with at most i edges.

[Figure: the Gates graph again, with final distances Gates = 0, Packard = 1, CS161 = 2, Union = 6, Dish = 23.]

Page 26: Lecture 12 - Stanford University

Bellman-Ford is an example of… Dynamic Programming!

Today:
• Example of Dynamic programming:
  • Fibonacci numbers.
  • (And Bellman-Ford)
• What is dynamic programming, exactly?
  • And why is it called "dynamic programming"?
• Another example: Floyd-Warshall algorithm
  • An "all-pairs" shortest path algorithm

Page 27: Lecture 12 - Stanford University

Pre-Lecture exercise: How not to compute Fibonacci Numbers

• Definition:
  • F(n) = F(n-1) + F(n-2), with F(0) = F(1) = 1.
• The first several are: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …
• Question:
  • Given n, what is F(n)?

Page 28: Lecture 12 - Stanford University

Candidate algorithm

def Fibonacci(n):
    if n == 0 or n == 1:
        return 1
    return Fibonacci(n-1) + Fibonacci(n-2)

(Seems to work, according to the IPython notebook…)

Running time?
• T(n) = T(n-1) + T(n-2) + O(1)
• T(n) ≥ T(n-1) + T(n-2) for n ≥ 2
• So T(n) grows at least as fast as the Fibonacci numbers themselves…
  • Fun fact, that's like φ^n, where φ = (1+√5)/2 is the golden ratio.
• aka, EXPONENTIALLY QUICKLY ☹

See CLRS Problem 4-4 for a walkthrough of how fast the Fibonacci numbers grow!
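One way to see the blow-up concretely is to count the calls: the call count T(n) satisfies T(n) = T(n-1) + T(n-2) + 1 with T(0) = T(1) = 1, which works out (by induction) to exactly 2F(n) - 1 calls. A small sketch that returns both the value and the call count:

```python
def fib_calls(n):
    """Naive Fibonacci that also counts how many calls it makes."""
    if n == 0 or n == 1:
        return 1, 1
    f1, c1 = fib_calls(n - 1)
    f2, c2 = fib_calls(n - 2)
    return f1 + f2, c1 + c2 + 1     # one extra for this call itself
```

For example, `fib_calls(10)` returns (89, 177): computing F(10) = 89 this way takes 2·89 - 1 = 177 calls, and the count keeps growing like φ^n.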

Page 29: Lecture 12 - Stanford University

What’sgoingon?

ConsiderFib(8)

8

76

6554

44 543332

2 2 2 2 3 3 42 32 31 1 110

10 10 10 10 10 102121 21

21

10 10 10 10

etc

That’salotof

repeated

computation!

Page 30: Lecture 12 - Stanford University

Maybe this would be better:

[Figure: a single chain 8 → 7 → 6 → 5 → 4 → 3 → 2 → 1 → 0; each value is computed once.]

def fasterFibonacci(n):
    F = [1, 1, None, None, …, None]  \\ F has length n+1
    for i = 2, …, n:
        F[i] = F[i-1] + F[i-2]
    return F[n]

Much better running time!
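The loop above as runnable Python (a direct transcription; the only subtlety is that the table needs n+1 slots to hold F[0] through F[n]):

```python
def fasterFibonacci(n):
    F = [1, 1] + [None] * (n - 1)      # table F[0..n], length n+1
    for i in range(2, n + 1):
        F[i] = F[i - 1] + F[i - 2]     # each entry from the two before it
    return F[n]
```

One addition per table entry, so O(n) additions instead of exponentially many calls.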

Page 31: Lecture 12 - Stanford University

This was an example of…

Page 32: Lecture 12 - Stanford University

What is dynamic programming?

• It is an algorithm design paradigm
  • like divide-and-conquer is an algorithm design paradigm.
• Usually it is for solving optimization problems
  • e.g., shortest path
  • (Fibonacci numbers aren't an optimization problem, but they are a good example…)

Page 33: Lecture 12 - Stanford University

Elements of dynamic programming

1. Optimal sub-structure:
• Big problems break up into sub-problems.
  • Fibonacci: F(i) for i ≤ n
  • Bellman-Ford: Shortest paths with at most i edges, for i ≤ n
• The solution to a problem can be expressed in terms of solutions to smaller sub-problems.
  • Fibonacci: F(i+1) = F(i) + F(i-1)
  • Bellman-Ford: d(i+1)[v] ← min{d(i)[v], min_u {d(i)[u] + weight(u,v)}}
    (d(i)[v] is the shortest path with at most i edges from s to v; d(i)[u] is the shortest path with at most i edges from s to u.)

Page 34: Lecture 12 - Stanford University

Elements of dynamic programming

2. Overlapping sub-problems:
• The sub-problems overlap a lot.
  • Fibonacci: Lots of different F[j] will use F[i].
  • Bellman-Ford: Lots of different entries of d(i+1) will use d(i)[v].
• This means that we can save time by solving a sub-problem just once and storing the answer.

Page 35: Lecture 12 - Stanford University

Elements of dynamic programming

• Optimal substructure.
  • Optimal solutions to sub-problems are sub-solutions to the optimal solution of the original problem.
• Overlapping subproblems.
  • The subproblems show up again and again.
• Using these properties, we can design a dynamic programming algorithm:
  • Keep a table of solutions to the smaller problems.
  • Use the solutions in the table to solve bigger problems.
  • At the end we can use information we collected along the way to find the solution to the whole thing.

Page 36: Lecture 12 - Stanford University

Two ways to think about and/or implement DP algorithms

• Top down
• Bottom up

This picture isn't hugely relevant but I like it.

Page 37: Lecture 12 - Stanford University

Bottom up approach: what we just saw.

• For Fibonacci:
  • Solve the small problems first
    • fill in F[0], F[1]
  • Then bigger problems
    • fill in F[2]
  • …
  • Then bigger problems
    • fill in F[n-1]
  • Then finally solve the real problem.
    • fill in F[n]

Page 38: Lecture 12 - Stanford University

Bottom up approach: what we just saw.

• For Bellman-Ford:
  • Solve the small problems first
    • fill in d(0)
  • Then bigger problems
    • fill in d(1)
  • …
  • Then bigger problems
    • fill in d(n-2)
  • Then finally solve the real problem.
    • fill in d(n-1)

Page 39: Lecture 12 - Stanford University

Top down approach

• Think of it like a recursive algorithm.
• To solve the big problem:
  • Recurse to solve smaller problems
    • Those recurse to solve smaller problems
      • etc…
• The difference from divide and conquer:
  • Memo-ization
  • Keep track of what small problems you've already solved to prevent re-solving the same problem twice.

Page 40: Lecture 12 - Stanford University

Example of top-down Fibonacci

define a global list F = [1, 1, None, None, …, None]

def Fibonacci(n):
    if F[n] != None:
        return F[n]
    else:
        F[n] = Fibonacci(n-1) + Fibonacci(n-2)
        return F[n]
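A self-contained runnable version of the same idea; passing the memo table as an argument (created on the first call) stands in for the slide's global list, a small deviation to keep the sketch self-contained:

```python
def fibonacci(n, F=None):
    if F is None:
        F = [1, 1] + [None] * (n - 1)  # memo table F[0..n], like the global list
    if F[n] is None:                   # not solved yet: recurse and store
        F[n] = fibonacci(n - 1, F) + fibonacci(n - 2, F)
    return F[n]                        # otherwise just look it up
```

Each F[i] is computed at most once, so this is O(n) additions: the same work as the bottom-up version, just discovered recursively.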

Page 41: Lecture 12 - Stanford University

Memo-ization visualization

[Figure: the recursion tree for Fib(8) again. With memoization, each repeated subtree is computed only the first time it appears; later occurrences are just lookups.]

Page 42: Lecture 12 - Stanford University

Memo-ization Visualization ctd

[Figure: what's left of the tree is a single chain 8 → 7 → 6 → 5 → 4 → 3 → 2 → 1 → 0: the calls that actually do work.]

define a global list F = [1, 1, None, None, …, None]

def Fibonacci(n):
    if F[n] != None:
        return F[n]
    else:
        F[n] = Fibonacci(n-1) + Fibonacci(n-2)
        return F[n]

Page 43: Lecture 12 - Stanford University

What have we learned?

• Paradigm in algorithm design.
• Uses optimal substructure
• Uses overlapping subproblems
• Can be implemented bottom-up or top-down.
• It's a fancy name for a pretty common-sense idea:

Page 44: Lecture 12 - Stanford University

Why "dynamic programming"?

• Programming refers to finding the optimal "program."
  • as in, a shortest route is a plan, aka a program.
• Dynamic refers to the fact that it's multi-stage.
  • But also it's just a fancy-sounding name.

Manipulating computer code in an action movie?

Page 45: Lecture 12 - Stanford University

Why "dynamic programming"?

• Richard Bellman invented the name in the 1950's.
• At the time, he was working for the RAND Corporation, which was basically working for the Air Force, and government projects needed flashy names to get funded.
• From Bellman's autobiography:
  • "It's impossible to use the word, dynamic, in the pejorative sense… I thought dynamic programming was a good name. It was something not even a Congressman could object to."

Page 46: Lecture 12 - Stanford University

Floyd-Warshall Algorithm: Another example of DP

• This is an algorithm for All-Pairs Shortest Paths (APSP)
• That is, I want to know the shortest path from u to v for ALL pairs u,v of vertices in the graph.
  • Not just from a special single source s.

[Figure: a small graph on vertices s, u, v, t with edge weights 5, 2, 2, 1, -2.]

Distances (rows = source, columns = destination):

     s   u   v   t
s    0   2   4   2
u    1   0   2   0
v    ∞   ∞   0  -2
t    ∞   ∞   ∞   0

Page 47: Lecture 12 - Stanford University

Floyd-Warshall Algorithm: Another example of DP

• This is an algorithm for All-Pairs Shortest Paths (APSP)
• That is, I want to know the shortest path from u to v for ALL pairs u,v of vertices in the graph.
  • Not just from a special single source s.
• Naïve solution (if we want to handle negative edge weights):
  • For all s in G:
    • Run Bellman-Ford on G starting at s.
  • Time O(n ⋅ nm) = O(n^2 m),
    • maybe as bad as n^4 if m = n^2

Page 48: Lecture 12 - Stanford University

Optimal substructure

Label the vertices 1, 2, …, n.

Sub-problem(k-1):
For all pairs u,v, find the cost of the shortest path from u to v, so that all the internal vertices on that path are in {1, …, k-1}.

Let D(k-1)[u,v] be the solution to Sub-problem(k-1).

[Figure: the vertices 1, 2, 3, …, k-1 form a highlighted set; u, v, k, k+1, …, n lie outside it. (We omit some edges in the picture.) A highlighted path is the shortest path from u to v through the highlighted set; it has length D(k-1)[u,v].]

Page 49: Lecture 12 - Stanford University

Optimal substructure

Sub-problem(k-1):
For all pairs u,v, find the cost of the shortest path from u to v, so that all the internal vertices on that path are in {1, …, k-1}.

Let D(k-1)[u,v] be the solution to Sub-problem(k-1).

[Figure: the same picture as the previous slide.]

Question: How can we find D(k)[u,v] using D(k-1)?

Page 50: Lecture 12 - Stanford University

How can we find D(k)[u,v] using D(k-1)?

D(k)[u,v] is the cost of the shortest path from u to v so that all internal vertices on that path are in {1, …, k}.

[Figure: the set {1, …, k-1} together with vertex k is now highlighted; u and v lie outside.]

Page 51: Lecture 12 - Stanford University

How can we find D(k)[u,v] using D(k-1)?

D(k)[u,v] is the cost of the shortest path from u to v so that all internal vertices on that path are in {1, …, k}.

Case 1: we don't need vertex k.

D(k)[u,v] = D(k-1)[u,v]

Page 52: Lecture 12 - Stanford University

How can we find D(k)[u,v] using D(k-1)?

D(k)[u,v] is the cost of the shortest path from u to v so that all internal vertices on that path are in {1, …, k}.

Case 2: we need vertex k.

Page 53: Lecture 12 - Stanford University

Case 2 continued

Case 2: we need vertex k.

• Suppose there are no negative cycles.
• Then WLOG the shortest path from u to v through {1, …, k} is simple.
• If that path passes through k, it must look like this:

[Figure: a path that goes from u through {1, …, k-1} to k, and then from k through {1, …, k-1} to v.]

• The first piece is the shortest path from u to k through {1, …, k-1}.
  • sub-paths of shortest paths are shortest paths
• Same for the second piece.

D(k)[u,v] = D(k-1)[u,k] + D(k-1)[k,v]

Page 54: Lecture 12 - Stanford University

How can we find D(k)[u,v] using D(k-1)?

• D(k)[u,v] = min{ D(k-1)[u,v], D(k-1)[u,k] + D(k-1)[k,v] }
  • Case 1: cost of shortest path through {1, …, k-1}
  • Case 2: cost of shortest path from u to k and then from k to v, through {1, …, k-1}
• Optimal substructure:
  • We can solve the big problem using smaller problems.
• Overlapping sub-problems:
  • D(k-1)[k,v] can be used to help compute D(k)[u,v] for lots of different u's.

Page 55: Lecture 12 - Stanford University

How can we find D(k)[u,v] using D(k-1)?

• D(k)[u,v] = min{ D(k-1)[u,v], D(k-1)[u,k] + D(k-1)[k,v] }
  • Case 1: cost of shortest path through {1, …, k-1}
  • Case 2: cost of shortest path from u to k and then from k to v, through {1, …, k-1}
• Using our paradigm, this immediately gives us an algorithm!

Page 56: Lecture 12 - Stanford University

Floyd-Warshall algorithm

• Initialize n-by-n arrays D(k) for k = 0, …, n
  • D(k)[u,u] = 0 for all u, for all k
  • D(k)[u,v] = ∞ for all u ≠ v, for all k
  • D(0)[u,v] = weight(u,v) for all (u,v) in E.
• For k = 1, …, n:
  • For pairs u,v in V^2:
    • D(k)[u,v] = min{ D(k-1)[u,v], D(k-1)[u,k] + D(k-1)[k,v] }
• Return D(n)

The base case checks out: the only paths through zero other vertices are edges directly from u to v.

This is a bottom-up algorithm.
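The pseudocode above becomes a very short Python sketch. Two hedged simplifications: vertices are labeled 0, …, n-1 instead of 1, …, n, and a single array is updated in place (safe here, since in round k neither D[u][k] nor D[k][v] can change: D[k][k] = 0). The test edge list is inferred from the s, u, v, t figure a few slides back.

```python
import math

def floyd_warshall(n, edges):
    """n vertices labeled 0..n-1; edges is a list of (u, v, weight)."""
    # Base case D(0): 0 on the diagonal, edge weights, ∞ elsewhere
    D = [[0 if u == v else math.inf for v in range(n)] for u in range(n)]
    for u, v, w in edges:
        D[u][v] = w
    for k in range(n):                 # now allow k as an internal vertex
        for u in range(n):
            for v in range(n):
                # Case 1: skip k; Case 2: go u -> k -> v
                D[u][v] = min(D[u][v], D[u][k] + D[k][v])
    return D
```

Three nested loops over n vertices: the O(n^3) running time is plain to see.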

Page 57: Lecture 12 - Stanford University

We've basically just shown

• Theorem:
  If there are no negative cycles in a weighted directed graph G, then the Floyd-Warshall algorithm, running on G, returns a matrix D(n) so that:
  D(n)[u,v] = distance between u and v in G.

Work out the details of the proof! (Or see Lecture Notes for a few more details.)

• Running time: O(n^3)
  • Better than running Bellman-Ford n times!
  • Not really better than running Dijkstra n times.
  • But it's simpler to implement and handles negative weights.
• Storage:
  • Need to store two n-by-n arrays, and the original graph.
  • As with Bellman-Ford, we don't really need to store all n of the D(k).

Page 58: Lecture 12 - Stanford University

What if there are negative cycles?

• Just like Bellman-Ford, Floyd-Warshall can detect negative cycles:
  • Negative cycle ⇔ ∃ v s.t. there is a path from v to v that goes through all n vertices that has cost < 0.
  • Negative cycle ⇔ ∃ v s.t. D(n)[v,v] < 0.
• Algorithm:
  • Run Floyd-Warshall as before.
  • If there is some v so that D(n)[v,v] < 0:
    • return negative cycle.
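One way to implement this check is to run the same triple loop and then scan the diagonal; this is a sketch of the idea (self-contained, under the same 0-indexed in-place conventions as before).

```python
import math

def has_negative_cycle(n, edges):
    """Floyd-Warshall, then check the diagonal: some D[v][v] < 0
    exactly when the graph contains a negative cycle."""
    D = [[0 if u == v else math.inf for v in range(n)] for u in range(n)]
    for u, v, w in edges:
        D[u][v] = w
    for k in range(n):
        for u in range(n):
            for v in range(n):
                D[u][v] = min(D[u][v], D[u][k] + D[k][v])
    # A vertex on a negative cycle can reach itself with cost < 0
    return any(D[v][v] < 0 for v in range(n))
```

For a 3-cycle of total cost -3 this returns True; for the same cycle with all weights +1 it returns False.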

Page 59: Lecture 12 - Stanford University

What have we learned?

• The Floyd-Warshall algorithm is another example of dynamic programming.
• It computes All-Pairs Shortest Paths in a directed weighted graph in time O(n^3).

Page 60: Lecture 12 - Stanford University

Another Example of DP?

• Longest simple path (say all edge weights are 1):

[Figure: a small graph on vertices s, a, b, t.]

What is the longest simple path from s to t?

Page 61: Lecture 12 - Stanford University

This is an optimization problem…

• Can we use Dynamic Programming?
  • Optimal Substructure?
  • Longest path from s to t = longest path from s to a + longest path from a to t?

[Figure: the same graph on s, a, b, t.]

NOPE!

Page 62: Lecture 12 - Stanford University

This doesn't give optimal sub-structure

Optimal solutions to subproblems don't give us an optimal solution to the big problem. (At least if we try to do it this way.)

• The subproblems we came up with aren't independent:
  • Once we've chosen the longest path from a to t, which uses b, our longest path from s to a shouldn't be allowed to use b, since b was already used.

[Figure: the same graph on s, a, b, t.]

• Actually, the longest simple path problem is NP-complete.
  • We don't know of any polynomial-time algorithms for it, DP or otherwise!

Page 63: Lecture 12 - Stanford University

Recap

• Two more shortest-path algorithms:
  • Bellman-Ford for single-source shortest path
  • Floyd-Warshall for all-pairs shortest path
• Dynamic programming!
  • This is a fancy name for:
    • Break up an optimization problem into smaller problems.
    • The optimal solutions to the sub-problems should be sub-solutions to the original problem.
    • Build the optimal solution iteratively by filling in a table of sub-solutions.
    • Take advantage of overlapping sub-problems!

Page 64: Lecture 12 - Stanford University

Next time

• More examples of dynamic programming!

We will stop bullets with our action-packed coding skills, and also maybe find longest common subsequences.

Before next time
• Pre-lecture exercise: finding optimal substructure