Lecture 12 - Stanford University
Lecture 12: More Bellman-Ford, Floyd-Warshall, and Dynamic Programming!
Announcements
• HW5 due Friday
• Midterms have been graded!
  • Available on Gradescope.
  • Mean/Median: 66 (it was a hard test!)
  • Max: 97
  • Std. Dev: 14
• Please look at the solutions and come to office hours if you have questions about your midterm!
Recall
• A weighted directed graph:
• Weights on edges represent costs.
• The cost of a path is the sum of the weights along that path.
• A shortest path from s to t is a directed path from s to t with the smallest cost.
• The single-source shortest path problem is to find the shortest path from s to v for all v in the graph.
[Figure: a weighted directed graph on vertices s, u, v, a, b, t. One highlighted path from s to t has cost 22; another has cost 10 and is the shortest path from s to t.]
Last time
• Dijkstra's algorithm!
• Bellman-Ford algorithm!
• Both solve single-source shortest path in weighted graphs.
[Figure: the same weighted directed graph on s, u, v, a, b, t.]
We didn't quite finish with the Bellman-Ford algorithm, so let's do that now.
Bellman-Ford vs. Dijkstra

Bellman-Ford(G,s):
• d[v] = ∞ for all v in V
• d[s] = 0
• For i = 0, …, n-2:
  • For u in V:
    • For v in u.outNeighbors:
      • d[v] ← min(d[v], d[u] + w(u,v))

Dijkstra(G,s):
• d[v] = ∞ for all v in V
• d[s] = 0
• While there are not-sure nodes:
  • Pick the not-sure node u with the smallest estimate d[u].
  • For v in u.outNeighbors:
    • d[v] ← min(d[v], d[u] + w(u,v))
  • Mark u as sure.

Instead of picking u cleverly, Bellman-Ford just updates for all of the u's.
For pedagogical reasons which we will see later today…
• We are actually going to change this to be dumber.
• Keep n arrays: d(0), d(1), …, d(n-1)

Bellman-Ford*(G,s):
• d(0)[v] = ∞ for all v in V
• d(0)[s] = 0
• For i = 0, …, n-2:
  • For u in V:
    • For v in u.outNeighbors:
      • d(i+1)[v] ← min(d(i)[v], d(i)[u] + w(u,v))
• Then dist(s,v) = d(n-1)[v]
Another way of writing this

Bellman-Ford*(G,s):
• d(0)[v] = ∞ for all v in V
• d(0)[s] = 0
• For i = 0, …, n-2:
  • For v in V:
    • d(i+1)[v] ← min(d(i)[v], min over u in v.inNeighbors of {d(i)[u] + w(u,v)})
• Then dist(s,v) = d(n-1)[v]

The for-loop over u gets picked up in this min.
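As a concrete sketch, Bellman-Ford* can be written in Python as follows. The edge-list representation and the function name are my choices, not the slides'; `d[i][v]` plays the role of the d(i)[v] arrays above:

```python
from math import inf

def bellman_ford_star(n, edges, s):
    """Bellman-Ford* sketch: keep all n arrays d(0), ..., d(n-1).

    n     -- number of vertices, labeled 0 .. n-1
    edges -- list of directed edges (u, v, w)  (assumed representation)
    s     -- source vertex
    Returns d, where d[i][v] is the cost of the shortest path from s to v
    using at most i edges; in particular d[n-1][v] is dist(s, v).
    """
    d = [[inf] * n for _ in range(n)]
    d[0][s] = 0
    for i in range(n - 1):
        d[i + 1] = d[i][:]               # default: keep last round's estimate
        for (u, v, w) in edges:
            if d[i][u] + w < d[i + 1][v]:
                d[i + 1][v] = d[i][u] + w
    return d
```

Each of the n-1 rounds relaxes every edge once, so the running time is O(mn).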
Bellman-Ford example: How far is each node from Gates?
[Figure: a campus graph on Gates, Packard, CS161, Union, Dish, with edge weights 1, 1, 4, 25, 20, 22.]
• For i = 0, …, n-2:
  • For v in V:
    • d(i+1)[v] ← min(d(i)[v], min over u of {d(i)[u] + w(u,v)})

Running the iterations i = 0, 1, 2, 3 fills in the table row by row:

        Gates  Packard  CS161  Union  Dish
d(0)    0      ∞        ∞      ∞      ∞
d(1)    0      1        ∞      ∞      25
d(2)    0      1        2      45     23
d(3)    0      1        2      6      23
d(4)    0      1        2      6      23
Interpretation of d(i)
• d(i)[v] is equal to the cost of the shortest path between s and v with at most i edges.
[The same campus-graph figure and table as above.]
Why does Bellman-Ford work?
• Inductive hypothesis:
  • d(i)[v] is equal to the cost of the shortest path between s and v with at most i edges.
• Conclusion: (coming up, after a quick aside)
Aside: simple paths
• Assume there is no negative cycle.
• Then not only are there shortest paths, but actually there's always a simple shortest path.
  • "Simple" means that the path has no cycles in it.
• A simple path in a graph with n vertices has at most n-1 edges in it.
[Figures: a simple path from s to t through u, v, x: can't add another edge without making a cycle. A path from s to t that loops through v and y: this cycle isn't helping, just get rid of it.]
Why does it work?
• Inductive hypothesis:
  • d(i)[v] is equal to the cost of the shortest path between s and v with at most i edges.
• Conclusion(s):
  • d(n-1)[v] is equal to the cost of the shortest path between s and v with at most n-1 edges.
  • If there are no negative cycles, d(n-1)[v] is equal to the cost of the shortest path.

Notice that negative edge weights are fine. Just not negative cycles.
Note on implementation
• Don't actually keep all n arrays around.
• Just keep two at a time: "last round" and "this round".
[Table figure: to compute d(4), we only need d(3) and the d(4) row being filled in.]
This seems much slower than Dijkstra
• And it is: running time O(mn).
• However, it's also more flexible in a few ways.
  • Can handle negative edges.
  • If we keep on doing these iterations, then changes in the network will propagate through.

• For i = 0, …, n-2:
  • For v in V:
    • d(i+1)[v] ← min(d(i)[v], min over u in v.inNeighbors of {d(i)[u] + w(u,v)})
• Then dist(s,v) = d(n-1)[v]
Negative cycles
[Figure: the campus graph on Gates, Packard, CS161, Union, Dish, now with edge weights 1, 1, -3, 10, -2, which create a negative cycle.]
• For i = 0, …, n-2:
  • For v in V:
    • d(i+1)[v] ← min(d(i)[v], min over u in v.inNeighbors of {d(i)[u] + w(u,v)})

        Gates  Packard  CS161  Union  Dish
d(0)    0      ∞        ∞      ∞      ∞
d(1)    0      1        ∞      ∞      -3
d(2)    0      -5       2      7      -3
d(3)    -4     -5       -4     6      -3
d(4)    -4     -5       -4     6      -7

This is not looking good! But we can tell that it's not looking good: if we run one more round,

d(5)    -4     -9       -4     3      -7

some stuff changed!
Negative cycles in Bellman-Ford
• If there are no negative cycles:
  • Everything works as it should, and stabilizes.
• If there are negative cycles:
  • Not everything works as it should…
    • Note: it couldn't possibly work, since shortest paths aren't well-defined if there are negative cycles.
  • The d[v] values will keep changing.
• Solution:
  • Go one round more and see if things change.
Bellman-Ford algorithm

Bellman-Ford*(G,s):
• d(0)[v] = ∞ for all v in V
• d(0)[s] = 0
• For i = 0, …, n-1:
  • For v in V:
    • d(i+1)[v] ← min(d(i)[v], min over u in v.inNeighbors of {d(i)[u] + w(u,v)})
• If d(n-1) != d(n):
  • Return NEGATIVE CYCLE!
• Otherwise, dist(s,v) = d(n-1)[v]

Running time: O(mn)
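A Python sketch of the full algorithm, including the negative-cycle check. Following the implementation note above, it keeps only two arrays ("last round" and "this round") rather than all n; the edge-list input is an assumption on my part:

```python
from math import inf

def bellman_ford(n, edges, s):
    """Bellman-Ford with negative-cycle detection (sketch).

    Runs up to n rounds of updates; the extra n-th round is the check.
    Returns d with d[v] = dist(s, v), or None if the values were still
    changing (i.e., a negative cycle is reachable from s).
    """
    last = [inf] * n
    last[s] = 0
    for _ in range(n):
        this = last[:]
        for (u, v, w) in edges:
            if last[u] + w < this[v]:
                this[v] = last[u] + w
        if this == last:        # nothing changed: stabilized, we're done
            return last
        last = this
    return None                 # d(n) != d(n-1): negative cycle
```

Returning early once a round changes nothing is safe: the update is deterministic, so once two consecutive rounds agree, every later round agrees too.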
Bellman-Ford is also used in practice.
• e.g., Routing Information Protocol (RIP) uses something like Bellman-Ford.
  • Older protocol, not used as much anymore.
• Each router keeps a table of distances to every other router.
• Periodically we do a Bellman-Ford update. Aka, for an edge (u,v):
  • d(i+1)[v] ← min(d(i)[v], d(i)[u] + w(u,v))
• This means that if there are changes in the network, this will propagate. (Maybe slowly…)

Destination     Cost to get there   Send to whom?
172.16.1.0      34                  172.16.1.1
10.20.40.1      10                  192.168.1.2
10.155.120.1    9                   10.13.50.0
Recap: shortest paths
• BFS:
  • (+) O(n+m)
  • (-) only unweighted graphs
• Dijkstra's algorithm:
  • (+) weighted graphs
  • (+) O(n log(n) + m) if you implement it right.
  • (-) no negative edge weights
  • (-) very "centralized" (need to keep track of all the vertices to know which to update).
• The Bellman-Ford algorithm:
  • (+) weighted graphs, even with negative weights
  • (+) can be done in a distributed fashion, every vertex using only information from its neighbors.
  • (-) O(nm)
Important thing about B-F for the rest of this lecture
• d(i)[v] is equal to the cost of the shortest path between s and v with at most i edges.
[The campus-graph figure and table from earlier.]
Bellman-Ford is an example of… Dynamic Programming!

Today:
• Example of dynamic programming:
  • Fibonacci numbers.
  • (And Bellman-Ford)
• What is dynamic programming, exactly?
  • And why is it called "dynamic programming"?
• Another example: Floyd-Warshall algorithm
  • An "all-pairs" shortest path algorithm
Pre-lecture exercise: How not to compute Fibonacci numbers
• Definition:
  • F(n) = F(n-1) + F(n-2), with F(0) = F(1) = 1.
• The first several are:
  1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …
• Question:
  • Given n, what is F(n)?
Candidate algorithm

def Fibonacci(n):
    if n == 0 or n == 1:
        return 1
    return Fibonacci(n - 1) + Fibonacci(n - 2)

(Seems to work, according to the IPython notebook…)
Running time?
• T(n) = T(n-1) + T(n-2) + O(1)
• T(n) ≥ T(n-1) + T(n-2) for n ≥ 2
• So T(n) grows at least as fast as the Fibonacci numbers themselves…
• Fun fact: F(n) grows like 𝜙ⁿ, where 𝜙 = (1+√5)/2 is the golden ratio.
• aka, EXPONENTIALLY QUICKLY!
See CLRS Problem 4-4 for a walkthrough of how fast the Fibonacci numbers grow!
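One way to see the blow-up concretely is to count calls. The counting function below is my own sanity-check sketch, not from the lecture; its recurrence C(n) = 1 + C(n-1) + C(n-2) mirrors the recurrence for T(n):

```python
def count_calls(n):
    """Number of calls that the naive Fibonacci(n) makes, including
    the top-level call: C(n) = 1 + C(n-1) + C(n-2), C(0) = C(1) = 1."""
    if n == 0 or n == 1:
        return 1
    return 1 + count_calls(n - 1) + count_calls(n - 2)
```

For example, count_calls(20) is 21891, even though there are only 21 distinct sub-problems: the call count works out to 2·F(n) - 1, which grows exponentially in n.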
What's going on? Consider Fib(8):
[Figure: the recursion tree for Fib(8). Fib(8) calls Fib(7) and Fib(6); those call Fib(6), Fib(5), Fib(5), Fib(4); and so on, so small sub-problems like Fib(1) and Fib(0) get recomputed over and over.]
That's a lot of repeated computation!
Maybe this would be better:
[Figure: the values 8, 7, 6, 5, 4, 3, 2, 1, 0, each computed just once, in a chain.]

def fasterFibonacci(n):
    F = [1, 1] + [None] * (n - 1)   # F[0..n]; F[0] = F[1] = 1
    for i in range(2, n + 1):
        F[i] = F[i - 1] + F[i - 2]
    return F[n]

Much better running time!
This was an example of…

What is dynamic programming?
• It is an algorithm design paradigm
  • like divide-and-conquer is an algorithm design paradigm.
• Usually it is for solving optimization problems
  • e.g., shortest path
  • (Fibonacci numbers aren't an optimization problem, but they are a good example…)
Elements of dynamic programming
1. Optimal sub-structure:
• Big problems break up into sub-problems.
  • Fibonacci: F(i) for i ≤ n
  • Bellman-Ford: shortest paths with at most i edges, for i ≤ n
• The solution to a problem can be expressed in terms of solutions to smaller sub-problems.
  • Fibonacci: F(i+1) = F(i) + F(i-1)
  • Bellman-Ford: d(i+1)[v] ← min{d(i)[v], min over u of {d(i)[u] + weight(u,v)}}
[Figure: a shortest path with at most i edges from s to u, extended by one edge to give a shortest path with at most i+1 edges from s to v.]
Elements of dynamic programming
2. Overlapping sub-problems:
• The sub-problems overlap a lot.
  • Fibonacci: lots of different F[j] will use F[i].
  • Bellman-Ford: lots of different entries of d(i+1) will use d(i)[v].
• This means that we can save time by solving a sub-problem just once and storing the answer.
Elements of dynamic programming
• Optimal substructure.
  • Optimal solutions to sub-problems are sub-solutions to the optimal solution of the original problem.
• Overlapping subproblems.
  • The subproblems show up again and again.
• Using these properties, we can design a dynamic programming algorithm:
  • Keep a table of solutions to the smaller problems.
  • Use the solutions in the table to solve bigger problems.
  • At the end we can use information we collected along the way to find the solution to the whole thing.
Two ways to think about and/or implement DP algorithms
• Top down
• Bottom up
(This picture isn't hugely relevant, but I like it.)
Bottom up approach: what we just saw.
• For Fibonacci:
  • Solve the small problems first
    • fill in F[0], F[1]
  • Then bigger problems
    • fill in F[2]
  • …
  • Then bigger problems
    • fill in F[n-1]
  • Then finally solve the real problem.
    • fill in F[n]
Bottom up approach: what we just saw.
• For Bellman-Ford:
  • Solve the small problems first
    • fill in d(0)
  • Then bigger problems
    • fill in d(1)
  • …
  • Then bigger problems
    • fill in d(n-2)
  • Then finally solve the real problem.
    • fill in d(n-1)
Top down approach
• Think of it like a recursive algorithm.
• To solve the big problem:
  • Recurse to solve smaller problems
  • Those recurse to solve smaller problems
  • etc.
• The difference from divide and conquer: memo-ization
  • Keep track of which small problems you've already solved, to prevent re-solving the same problem twice.
Example of top-down Fibonacci

define a global list F = [1, 1, None, None, …, None]

def Fibonacci(n):
    if F[n] is not None:    # already solved? just look it up
        return F[n]
    F[n] = Fibonacci(n - 1) + Fibonacci(n - 2)
    return F[n]
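As a self-contained variant of the slide's code, here is the same top-down idea with the memo table passed explicitly as a dictionary (the dictionary, instead of a pre-sized global list, is my choice, not the slide's):

```python
def fibonacci(n, memo=None):
    """Top-down Fibonacci with memo-ization: each sub-problem is
    solved at most once, so the running time is O(n)."""
    if memo is None:
        memo = {0: 1, 1: 1}          # base cases: F(0) = F(1) = 1
    if n not in memo:                # not solved yet? recurse, then store
        memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo)
    return memo[n]
```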
Memo-ization visualization
[Figure: the Fib(8) recursion tree again. With memo-ization, repeated sub-problems are answered by table lookups, and the tree collapses to the chain 8, 7, 6, 5, 4, 3, 2, 1, 0.]

define a global list F = [1, 1, None, None, …, None]

def Fibonacci(n):
    if F[n] is not None:
        return F[n]
    F[n] = Fibonacci(n - 1) + Fibonacci(n - 2)
    return F[n]
What have we learned?
• Dynamic programming is a paradigm in algorithm design.
• Uses optimal substructure.
• Uses overlapping subproblems.
• Can be implemented bottom-up or top-down.
• It's a fancy name for a pretty common-sense idea: don't solve the same sub-problem twice.
Why "dynamic programming"?
• Programming refers to finding the optimal "program."
  • as in, a shortest route is a plan, aka a program.
• Dynamic refers to the fact that it's multi-stage.
  • But also it's just a fancy-sounding name.
(Manipulating computer code in an action movie?)
Why "dynamic programming"?
• Richard Bellman invented the name in the 1950's.
• At the time, he was working for the RAND Corporation, which was basically working for the Air Force, and government projects needed flashy names to get funded.
• From Bellman's autobiography:
  • "It's impossible to use the word, dynamic, in the pejorative sense… I thought dynamic programming was a good name. It was something not even a Congressman could object to."
Floyd-Warshall Algorithm: another example of DP
• This is an algorithm for All-Pairs Shortest Paths (APSP).
• That is, I want to know the shortest path from u to v for ALL pairs u, v of vertices in the graph.
  • Not just from a special single source s.
[Figure: a small graph on s, u, v, t with edge weights 5, 2, 2, 1, -2.]

Distances (row = source, column = destination):

      s   u   v   t
  s   0   2   4   2
  u   1   0   2   0
  v   ∞   ∞   0  -2
  t   ∞   ∞   ∞   0
Floyd-Warshall Algorithm: another example of DP
• Naïve solution (if we want to handle negative edge weights):
  • For all s in G:
    • Run Bellman-Ford on G starting at s.
  • Time O(n · nm) = O(n²m),
    • maybe as bad as n⁴ if m = n².
Optimal substructure
• Label the vertices 1, 2, …, n.
• Sub-problem(k-1): For all pairs u, v, find the cost of the shortest path from u to v, so that all the internal vertices on that path are in {1, …, k-1}.
• Let D(k-1)[u,v] be the solution to Sub-problem(k-1).
[Figure: vertices 1, 2, …, k-1 shown as a blue set, with k, k+1, …, n outside it; we omit some edges in the picture. The shortest path from u to v through the blue set has length D(k-1)[u,v].]
Question: How can we find D(k)[u,v] using D(k-1)?
How can we find D(k)[u,v] using D(k-1)?
• D(k)[u,v] is the cost of the shortest path from u to v so that all internal vertices on that path are in {1, …, k}.
• Case 1: we don't need vertex k.
  • Then D(k)[u,v] = D(k-1)[u,v].
• Case 2: we need vertex k.
Case 2 continued (we need vertex k)
• Suppose there are no negative cycles.
• Then WLOG the shortest path from u to v through {1, …, k} is simple.
• If that path passes through k, it must look like this: u to k, then k to v.
• The first piece is the shortest path from u to k through {1, …, k-1}.
  • sub-paths of shortest paths are shortest paths
• Same for the second piece, from k to v.
• So in Case 2: D(k)[u,v] = D(k-1)[u,k] + D(k-1)[k,v].
How can we find D(k)[u,v] using D(k-1)?
• D(k)[u,v] = min{ D(k-1)[u,v], D(k-1)[u,k] + D(k-1)[k,v] }
  • Case 1: the cost of the shortest path through {1, …, k-1}.
  • Case 2: the cost of the shortest path from u to k and then from k to v, through {1, …, k-1}.
• Optimal substructure:
  • We can solve the big problem using smaller problems.
• Overlapping sub-problems:
  • D(k-1)[k,v] can be used to help compute D(k)[u,v] for lots of different u's.
• Using our paradigm, this recurrence immediately gives us an algorithm!
Floyd-Warshall algorithm

Floyd-Warshall(G):
• Initialize n-by-n arrays D(k) for k = 0, …, n
  • D(k)[u,u] = 0 for all u, for all k
  • D(k)[u,v] = ∞ for all u ≠ v, for all k
  • D(0)[u,v] = weight(u,v) for all (u,v) in E.
• For k = 1, …, n:
  • For pairs u, v in V²:
    • D(k)[u,v] = min{ D(k-1)[u,v], D(k-1)[u,k] + D(k-1)[k,v] }
• Return D(n)

The base case checks out: the only paths through zero other vertices are edges directly from u to v.
This is a bottom-up algorithm.
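In Python, the whole algorithm fits in a few lines. The sketch below takes an edge list (my assumed representation), and, per the storage note later in the lecture, it updates a single n-by-n array in place instead of keeping all n of the D(k):

```python
from math import inf

def floyd_warshall(n, edges):
    """Floyd-Warshall sketch on vertices 0 .. n-1.

    edges -- list of directed edges (u, v, w)  (assumed representation)
    Returns D with D[u][v] = cost of the shortest path from u to v
    (assuming no negative cycles).  Runs in O(n^3) time.
    """
    D = [[inf] * n for _ in range(n)]
    for u in range(n):
        D[u][u] = 0                     # base case: the empty path
    for (u, v, w) in edges:
        D[u][v] = min(D[u][v], w)       # base case: a single edge
    for k in range(n):                  # now allow k as an internal vertex
        for u in range(n):
            for v in range(n):
                if D[u][k] + D[k][v] < D[u][v]:
                    D[u][v] = D[u][k] + D[k][v]
    return D
```

Updating in place is safe here: during round k, the entries D[u][k] and D[k][v] don't change (since D[k][k] = 0), so they still hold the round-(k-1) values the recurrence needs.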
We've basically just shown
• Theorem: If there are no negative cycles in a weighted directed graph G, then the Floyd-Warshall algorithm, running on G, returns a matrix D(n) so that D(n)[u,v] = distance between u and v in G.
  • (Work out the details of the proof! Or see the Lecture Notes for a few more details.)
• Running time: O(n³)
  • Better than running BF n times!
  • Not really better than running Dijkstra n times.
  • But it's simpler to implement and handles negative weights.
• Storage:
  • Need to store two n-by-n arrays, and the original graph.
  • As with Bellman-Ford, we don't really need to store all n of the D(k).
What if there are negative cycles?
• Just like Bellman-Ford, Floyd-Warshall can detect negative cycles:
  • Negative cycle ⇔ there is some v with a path from v back to v, allowed to pass through any of the n vertices, of cost < 0.
  • Negative cycle ⇔ there is some v with D(n)[v,v] < 0.
• Algorithm:
  • Run Floyd-Warshall as before.
  • If there is some v so that D(n)[v,v] < 0:
    • return negative cycle.
What have we learned?
• The Floyd-Warshall algorithm is another example of dynamic programming.
• It computes All-Pairs Shortest Paths in a directed weighted graph in time O(n³).
Another example of DP?
• Longest simple path (say all edge weights are 1):
[Figure: a small graph on s, a, b, t.]
What is the longest simple path from s to t?
This is an optimization problem…
• Can we use dynamic programming?
• Optimal substructure?
  • Longest path from s to t = longest path from s to a + longest path from a to t?
[Figure: the graph on s, a, b, t again.]
NOPE!
This doesn't give optimal sub-structure
• Optimal solutions to subproblems don't give us an optimal solution to the big problem. (At least if we try to do it this way.)
• The subproblems we came up with aren't independent:
  • Once we've chosen the longest path from a to t, which uses b,
  • our longest path from s to a shouldn't be allowed to use b,
  • since b was already used.
• Actually, the longest simple path problem is NP-complete.
  • We don't know of any polynomial-time algorithms for it, DP or otherwise!
Recap
• Two more shortest-path algorithms:
  • Bellman-Ford for single-source shortest path
  • Floyd-Warshall for all-pairs shortest path
• Dynamic programming!
  • This is a fancy name for:
    • Break up an optimization problem into smaller problems.
    • The optimal solutions to the sub-problems should be sub-solutions to the original problem.
    • Build the optimal solution iteratively by filling in a table of sub-solutions.
    • Take advantage of overlapping sub-problems!
Next time
• More examples of dynamic programming!
• We will stop bullets with our action-packed coding skills, and also maybe find longest common subsequences.

Before next time
• Pre-lecture exercise: finding optimal substructure