
linear programming

David Steurer

Algorithms, Computing, and Probability, Fall 2018, ETH Zürich


linear programming

given: a linear function (objective) c⊺x and a system of linear inequalities (constraints) {Ax ⩽ b}

find: an optimal solution x∗ that maximizes the objective among all solutions that satisfy the constraints (feasible solutions)


application: network flow

[figure: example flow network with source s, sink t, and intermediate nodes o, p, q, r; each edge is labeled flow/capacity with values 3/3, 0/2, 1/4, 2/2, 3/3, 2/2, 2/3, 3/3]

network flow
given: directed graph G = (V, E) with edge capacities c: E → R⩾0, source s ∈ V and target t ∈ V
find: a maximum flow from s to t in G within the capacities c

linear programming formulation
variables: f_e ⩾ 0 for all e ∈ E
capacity: f_e ⩽ c(e) for all e ∈ E
conservation: ∑_{e ∈ δ⁺_G(u)} f_e = ∑_{e ∈ δ⁻_G(u)} f_e for all u ∈ V ∖ {s, t}
objective: maximize ∑_{e ∈ δ⁺_G(s)} f_e − ∑_{e ∈ δ⁻_G(s)} f_e (net flow out of the source)
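This formulation translates directly into code. The following is a minimal sketch (my own illustration, not part of the lecture) of how one might solve the max-flow LP with scipy.optimize.linprog; the small example graph and its capacities are assumptions chosen for illustration, not the network from the figure.

```python
# Max-flow as a linear program, solved with scipy (illustrative sketch).
import numpy as np
from scipy.optimize import linprog

nodes = ["s", "o", "p", "q", "r", "t"]
edges = [("s", "o", 3), ("s", "p", 3), ("o", "p", 2), ("o", "q", 3),
         ("p", "r", 2), ("q", "r", 2), ("q", "t", 2), ("r", "t", 3)]
m = len(edges)
cap = [c for (_, _, c) in edges]

# conservation: for every u other than s and t, (flow out of u) - (flow into u) = 0
internal = [u for u in nodes if u not in ("s", "t")]
A_eq = np.zeros((len(internal), m))
for i, u in enumerate(internal):
    for j, (tail, head, _) in enumerate(edges):
        if tail == u:
            A_eq[i, j] = 1.0    # edge leaves u
        if head == u:
            A_eq[i, j] = -1.0   # edge enters u
b_eq = np.zeros(len(internal))

# objective: maximize net flow out of s == minimize its negation
obj = np.zeros(m)
for j, (tail, head, _) in enumerate(edges):
    if tail == "s":
        obj[j] = -1.0
    if head == "s":
        obj[j] = 1.0

# capacity and nonnegativity enter as variable bounds 0 <= f_e <= c(e)
res = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=[(0, ci) for ci in cap])
print("max flow value:", -res.fun)
```

Since linprog minimizes, the objective vector is the negated net flow out of s; the capacity constraints are handled through the variable bounds.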


geometry of linear programs

hyperplane: set of solutions {x ∣ a⊺x = b} to a linear equation

half-space: set of solutions {x ∣ a⊺x ⩽ b} to a linear inequality

polyhedron: finite intersection of half-spaces, {x ∣ Ax ⩽ b} (feasible region of a linear program)

lemma: every polyhedron is convex (contains the line segment between any two of its points)

proof: if Ax ⩽ b and Ay ⩽ b, then for every λ ∈ [0, 1], A(λx + (1 − λ)y) = λ·Ax + (1 − λ)·Ay ⩽ λ·b + (1 − λ)·b = b

corollary: the set of optima of a linear program is convex


basic feasible solution

consider a linear program with constraints {Ax ⩽ b}

definition: a feasible solution x ∈ Rⁿ is basic if it satisfies n linearly independent constraints from {Ax ⩽ b} with equality

in other words, x is a basic feasible solution (BFS) if Ax ⩽ b and there exists a subset S of n rows such that A_S x = b_S and A_S has rank n

claim: a linear program with m constraints has at most (m choose n) BFSs

proof: every subset S gives rise to at most one BFS (why?)

geometry: a BFS is a corner of the polyhedron P = {x ∣ Ax ⩽ b}: for every BFS x, there exists a linear objective such that x is the unique optimum
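To make the counting claim concrete, here is a small sketch (my own, not from the slides) that enumerates candidate basic solutions of {Ax ⩽ b} exactly as in the proof: every size-n subset S of rows with A_S of rank n determines at most one point, which is kept only if it is feasible.

```python
# Enumerate basic feasible solutions of {Ax <= b} by brute force (illustrative sketch).
from itertools import combinations
import numpy as np

def basic_feasible_solutions(A, b, tol=1e-9):
    m, n = A.shape
    found = []
    for S in combinations(range(m), n):
        A_S, b_S = A[list(S)], b[list(S)]
        if np.linalg.matrix_rank(A_S) < n:
            continue                        # rows not linearly independent
        x = np.linalg.solve(A_S, b_S)       # unique candidate for this subset
        if np.all(A @ x <= b + tol):        # keep it only if it is feasible
            found.append(x)
    return found

# hypothetical example: the unit square 0 <= x1, x2 <= 1 written as Ax <= b
A = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], dtype=float)
b = np.array([1, 1, 0, 0], dtype=float)
print(basic_feasible_solutions(A, b))       # the four corners of the square
```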


basic feasible solution

claim: no BFS is a strict convex combination of two points in P

proof: suppose a BFS x with A_S x = b_S (A_S of rank n) satisfies x = λy + (1 − λ)z for y, z ∈ P and 0 < λ < 1; then A_S y ⩽ b_S and A_S z ⩽ b_S average to A_S x = b_S, which forces A_S y = A_S z = b_S, and since A_S has rank n this gives y = z = x


basic feasible solutions are optimal

theorem: every feasible, bounded linear program in equational form has a BFS as optimum

there may be (infinitely) many optima, but at least one of them is a BFS!

recall: equational form without loss of generality

finite (but exponential) algorithm for linear programming: enumerate all basic feasible solutions (how?) and output one with the best objective value

proof of theorem: suppose the objective is to maximize c⊺x subject to the constraints {Ax = b, x ⩾ 0}; assume the LP is bounded

enough to show: for every feasible x₀ ∈ Rⁿ, we can find a BFS with objective ⩾ c⊺x₀

among all feasible x with c⊺x ⩾ c⊺x₀, choose one x̃ with the maximum number of zero coordinates (will show: x̃ is a BFS)

w.l.o.g.: exactly the first k ⩾ 0 coordinates of x̃ are non-zero

claim: the first k columns of A are linearly independent

if the claim holds, then x̃ is indeed a BFS (why?)

proof of claim: (by contradiction) otherwise, ∃w ≠ 0 with Aw = 0 and w_{k+1} = … = w_n = 0; w.l.o.g. c⊺w ⩾ 0 (otherwise replace w by −w)

case 1: if c⊺w > 0 and w ⩾ 0, then the solutions x(t) = x̃ + t·w for t ⩾ 0 are feasible and have unbounded objective; contradicting the boundedness of the LP

case 2: if c⊺w ⩾ 0 and w has a negative coordinate, then ∃t ⩾ 0 such that x′ = x̃ + t·w is feasible with c⊺x′ ⩾ c⊺x₀ and more zero coordinates than x̃; contradicting the choice of x̃
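The "finite (but exponential) algorithm" mentioned before the proof can be sketched as follows for the equational form {Ax = b, x ⩾ 0}. This is my own illustration (assuming A has full row rank m), not code from the lecture: enumerate candidate bases, compute the corresponding basic solutions, and return the feasible one with the best objective.

```python
# Brute-force LP solving by enumerating basic feasible solutions (illustrative sketch).
from itertools import combinations
import numpy as np

def brute_force_lp(c, A, b, tol=1e-9):
    m, n = A.shape
    best_x, best_val = None, -np.inf
    for B in combinations(range(n), m):          # candidate bases (column subsets)
        A_B = A[:, B]
        if np.linalg.matrix_rank(A_B) < m:
            continue
        x = np.zeros(n)
        x[list(B)] = np.linalg.solve(A_B, b)     # basic solution for this basis
        if np.all(x >= -tol) and c @ x > best_val:   # keep feasible ones only
            best_x, best_val = x, c @ x
    return best_x, best_val
```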


algorithms for linear programming

simplex method [Dantzig '46]
move along adjacent corners of the feasible polyhedron until an optimal solution is found
works well in practice but has super-polynomial running time in the worst case

ellipsoid method [Shor–Judin–Nemirovski '70, Khachiyan '79]
first polynomial-time algorithm (but not practical)

interior-point methods [Karmarkar '84, …]
best known worst-case running time; also practical
lots of recent work [e.g., Lee–Sidford '14], even leading to faster network-flow algorithms [e.g., Madry]

linear programming (LP) as algorithmic technique

one of the most powerful and versatile tools for algorithm design

for many problems, the algorithms with the best known provable guarantees rely on LP as a subroutine

examples: traveling salesman problem, network design (e.g., Steiner tree), edge-disjoint paths, compressed sensing, nonnegative matrix factorization

typical pattern for such algorithms
1. construct a linear program (relaxation)
2. find an optimal solution x∗ to the linear program
3. post-process x∗ to get a solution for the original problem (rounding)

typically, most of the “analysis work” is in step 3 and most of the “algorithmic work” is in step 2


computational complexity of linear programming

LP as a decision problem: given a system of linear inequalities, decide if it is satisfiable or not

LP in NP: if a system of linear inequalities is satisfiable, there is a polynomial-size certificate for that

LP in coNP: if a system of linear inequalities is not satisfiable, there is a polynomial-size certificate for that

LP in P: there is a polynomial-time algorithm to decide the satisfiability of systems of linear inequalities


bounds on LP solutions (or LP in NP)


notation: ⟨X⟩ = (binary) encoding size of the object X

theorem: for every bounded linear program L, there exists an optimal solution x∗ such that ⟨x∗⟩ ⩽ ⟨L⟩^{O(1)}

proof: it is enough to bound the encoding size of basic feasible solutions, because there always exists a basic feasible solution that is optimal

in the next slides, we bound the encoding size of basic feasible solutions


lemma: every A ∈ Zⁿˣⁿ satisfies ⟨det A⟩ ⩽ O(⟨A⟩)

proof: since ⟨A⟩ ⩾ n²,

|det A| = |∑_{π ∈ S_n} sign(π) ∏_{i=1}^{n} a_{i,π(i)}| ⩽ n! · 2^{⟨A⟩} = 2^{⟨A⟩ + O(n log n)} ⩽ 2^{O(⟨A⟩)} □


let L be an LP in equational form

suppose L has linear constraints {Ax = b, x ⩾ 0} with b ∈ Zᵐ and A ∈ Zᵐˣⁿ

lemma: every basic feasible solution x̃ of L satisfies ∀j ∈ [n]. ⟨x̃_j⟩ ⩽ O(⟨A⟩ + ⟨b⟩ + n²)

proof: since x̃ is a BFS, it is the solution of a non-singular linear system {Ãx = b̃}, where each equation is either one of the equations in {Ax = b} or of the form x_i = 0

by Cramer's rule, x̃_j = det Ã_j / det Ã, where Ã_j is obtained from Ã by replacing the j-th column with b̃

therefore, ⟨x̃_j⟩ ⩽ ⟨det Ã_j⟩ + ⟨det Ã⟩ ⩽ O(⟨A⟩ + ⟨b⟩ + n²) □
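As a small illustration of the Cramer's-rule step (my own sketch with a hypothetical integer system, not from the slides), each coordinate of the solution is a ratio of two integer determinants, which is what the encoding-size bound exploits.

```python
# Cramer's rule on a small non-singular integer system (illustrative sketch).
import numpy as np

A_tilde = np.array([[2, 1, 0],
                    [1, 3, 1],
                    [0, 0, 1]], dtype=float)   # hypothetical system A~ x = b~
b_tilde = np.array([4, 6, 0], dtype=float)

x = np.linalg.solve(A_tilde, b_tilde)          # the basic solution itself
d = np.linalg.det(A_tilde)
for j in range(3):
    A_j = A_tilde.copy()
    A_j[:, j] = b_tilde                        # replace the j-th column by b~
    print(x[j], "=", np.linalg.det(A_j), "/", d)   # x_j = det(A~_j) / det(A~)
```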


duality and Farkas lemma (or LP in coNP)


certificates of satisfiability and unsatisfiability

we saw: the satisfiability of a system of linear inequalities always has a polynomial-size certificate

how could we certify that a system of linear inequalities is unsatisfiable?


unsatisfiability of systems of linear equations

theorem: a linear system {Ax = b} is unsatisfiable iff the following linear system is satisfiable: {A⊺y = 0, b⊺y = 1}

idea: for every y, we can derive the implied equation {y⊺Ax = y⊺b} from the equations {Ax = b}

the theorem says that {Ax = b} is unsatisfiable if and only if we can derive in this way the syntactically unsatisfiable equation {0⊺x = 1}


unsatisfiability of systems of linear inequalities

Farkas lemma (form I): {Ax ⩽ b} is unsatisfiable iff the following system of linear inequalities is satisfiable: {A⊺y = 0, b⊺y = −1, y ⩾ 0}

idea: for every y ⩾ 0, we can derive the implied constraint {y⊺Ax ⩽ y⊺b} from the constraints {Ax ⩽ b}; if y satisfies the above conditions, this implied constraint itself is unsatisfiable

equivalent form II: a system {Ax = b, x ⩾ 0} is unsatisfiable if and only if the system {A⊺y ⩾ 0, b⊺y = −1} is satisfiable

complexity interpretation: LP in coNP
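As a sketch of this certificate in action (my own illustration, not from the slides), a form-II Farkas certificate can itself be found by solving a small auxiliary LP; the example system below is hypothetical and chosen to be obviously unsatisfiable.

```python
# Search for a Farkas certificate y with A^T y >= 0 and b^T y = -1 (illustrative sketch).
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0]])    # constraints: x1 + x2 = -1, x >= 0 (unsatisfiable)
b = np.array([-1.0])
m, n = A.shape

# feasibility problem in y: A^T y >= 0 (written as -A^T y <= 0) and b^T y = -1
res = linprog(c=np.zeros(m),
              A_ub=-A.T, b_ub=np.zeros(n),
              A_eq=b.reshape(1, m), b_eq=np.array([-1.0]),
              bounds=[(None, None)] * m)
if res.success:
    y = res.x
    print("certificate y:", y, " A^T y =", A.T @ y, " b^T y =", b @ y)
```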


geometric interpretation

the columns a_1, …, a_n of the matrix A generate the cone C = {x_1·a_1 + ⋯ + x_n·a_n ∣ x_1, …, x_n ⩾ 0}

the system {Ax = b, x ⩾ 0} is unsatisfiable iff b ∉ C

geometric observation: b ∉ C iff there exists a half-space H = {x ∣ y⊺x ⩾ 0} with a_1, …, a_n ∈ H but b ∉ H

in other words: b ∉ C iff ∃y with A⊺y ⩾ 0 but b⊺y < 0

certificates for optimization problems

we saw: systems of linear inequalities have polynomial-size certificates for both satisfiability and unsatisfiability (namely, basic feasible solutions for the system or its dual)

for optimization problems, our goal is to certify lower and upper bounds on the optimal objective value

maximize c⊺x subject to Ax ⩽ b and x ⩾ 0 (P)

to certify lower bounds, we can use basic feasible solutions for (P)

how to certify upper bounds?

idea: to certify upper bounds for (P), we derive {c⊺x ⩽ γ} by combining the constraints {Ax ⩽ b, x ⩾ 0}

concretely, if y ⩾ 0 satisfies A⊺y ⩾ c, then we can derive the inequality {c⊺x ⩽ γ} for γ = b⊺y (indeed, for every feasible x: c⊺x ⩽ (A⊺y)⊺x = y⊺(Ax) ⩽ y⊺b, using x ⩾ 0 and y ⩾ 0)

the following optimization problem searches for the best such upper bound on (P):

minimize b⊺y subject to A⊺y ⩾ c and y ⩾ 0 (D)

strong-duality theorem: (P) and (D) have the same optimal value (assuming at least one is feasible)


strong-duality theorem: the following optimization problems have the same optimal value (assuming at least one is feasible)

maximize c⊺x subject to Ax ⩽ b and x ⩾ 0 (P)
minimize b⊺y subject to A⊺y ⩾ c and y ⩾ 0 (D)

by convention: an infeasible maximization has optimal value −∞ and an infeasible minimization has optimal value ∞

special cases: max-flow min-cut, Hall's theorem (bipartite matching), and Menger-type theorems (disjoint paths)

weak duality: optimal value of (P) ⩽ optimal value of (D) (holds by construction)
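A quick numerical sanity check of the theorem (my own sketch with made-up data, not from the slides): solve (P) and (D) with an off-the-shelf LP solver and compare the optimal values.

```python
# Primal and dual of a tiny LP have the same optimal value (illustrative sketch).
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])
c = np.array([1.0, 1.0])

# (P): maximize c^T x s.t. Ax <= b, x >= 0   (linprog minimizes, so negate c)
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# (D): minimize b^T y s.t. A^T y >= c, y >= 0   (rewrite A^T y >= c as -A^T y <= -c)
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)

print("optimal value of (P):", -primal.fun)
print("optimal value of (D):", dual.fun)    # equal up to numerical tolerance
```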

observation: if one of (P) or (D) is unbounded, the other has to be infeasible (by weak duality)

thus, we can assume one of (P) or (D) is feasible and bounded

proof of the strong-duality theorem for the case that (P) is feasible and bounded:

for γ the optimal value of (P) and ε ⩾ 0, consider the system

{Ax ⩽ b, c⊺x ⩾ γ + ε, x ⩾ 0} (∗)

for ε > 0, the system (∗) is infeasible

by the Farkas lemma, ∃z ⩾ 0, y ⩾ 0 such that A⊺y ⩾ z·c and b⊺y < z·(γ + ε)

since (∗) is feasible for ε = 0, we must have b⊺y ⩾ z·γ

thus z > 0; by rescaling y, we can assume z = 1; now y is feasible for (D) with objective value b⊺y < γ + ε; since ε > 0 was arbitrary and weak duality gives b⊺y ⩾ γ, the optimal value of (D) equals γ □


Ellipsoid method (or LP in P)


Khachiyan’s achievement received anattention in the nonscientific press that is—toour knowledge—unpreceded in mathematics.Newspapers and journals like TheGuardian, Der Spiegel, NieuweRotterdamsche Courant, Nepszabadsag,The Daily Yomiuri wrote about the “majorbreakthrough in the solution of real-worldproblems”. The ellipsoid method evenjumped on the front page of The New YorkTimes: “A Soviet Discovery Rocks World ofMathematics” (November 7, 1979). Much ofthe excitement of the journalists was,however, due to exaggerations andmisinterpretations - see LAWLER (1980) foran account of the treatment of theimplications of the ellipsoid method in thepublic press.

(from textbook by Grötschel, Lovasz, and Schrijver)


relaxed feasibility problem

given: R > ε > 0, polyhedron P = {x ∈ Rⁿ ∣ Ax ⩽ b}
promise: ∃y ∈ Rⁿ. Ball(y, ε) ⊆ P ⊆ Ball(0, R)
goal: find some point y ∈ P

will see later: a polynomial-time algorithm for this problem can be turned into a polynomial-time algorithm for linear programming


given: , polyhedron

promise:

goal: find some point

bird’s eye view of ellipsoid methodcompute successively tighter outer-approximations for

concretely, sequence of ellipsoids such that

where and center of is in

key property: alg. ensures

consequence: number of iterations as

R > ε > 0 P = {x ∈ R ∣ Ax ⩽ b}n

∃y ∈ R . Ball(y, ε) ⊆ P ⊆ Ball(0,R)n

y ∈ P

P

E ,… ,E ⊆ R0 Nn

E ,E ,⋯ ,E ⊇ P0 1 N

E = Ball(0,R)0 EN P

vol(E ) ⩽ e ⋅ vol(E )k+1−1/n

k

N ⩽ poly(n, ⟨R⟩, ⟨ε⟩)

( ) = ⩽ ⩽ eRε n

vol(Ball(0,R))vol(Ball(0,ε))

vol(E )0vol(E )N −N/n
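A minimal numerical sketch (not part of the slides) of the iteration count hidden in this chain: taking logarithms gives N ⩽ n² ln(R/ε).

```python
import math

def ellipsoid_iteration_bound(n: int, R: float, eps: float) -> int:
    """Upper bound on the number of ellipsoid-method iterations, assuming the
    key property vol(E_{k+1}) <= exp(-1/n) * vol(E_k) from the slide above.

    From (eps/R)^n <= vol(E_N)/vol(E_0) <= exp(-N/n), taking logarithms
    yields N <= n^2 * ln(R/eps)."""
    assert R > eps > 0
    return math.ceil(n * n * math.log(R / eps))

# example: n = 10 variables, outer radius R = 2^20, inner radius eps = 2^-20
print(ellipsoid_iteration_bound(10, 2.0**20, 2.0**-20))  # 2773
```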

Pages 114–120:

finding the next ellipsoid

suppose we already computed ellipsoids E_0 ⊃ ⋯ ⊃ E_k ⊇ P

if the center s_k of E_k satisfies {Ax ⩽ b}, we can stop (set N := k)

otherwise, s_k violates a constraint {a_i^⊺ x ⩽ b_i} in {Ax ⩽ b}

in this case, we let E_{k+1} be the minimum-volume ellipsoid that contains the half-ellipsoid E_k ∩ {x ∣ a_i^⊺ (x − s_k) ⩽ 0} ⊇ P (the half-ellipsoid still contains P because every x ∈ P satisfies a_i^⊺ x ⩽ b_i < a_i^⊺ s_k)

remains to prove: vol E_{k+1} ⩽ e^{−1/n} ⋅ vol E_k

also remains to prove (but will skip): no numerical issues
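The slides only characterize E_{k+1} as the minimum-volume ellipsoid containing the half-ellipsoid; the sketch below uses the standard closed-form central-cut update (an assumption here, not spelled out on the slides), with E_k represented as {x ∣ (x − s)^⊺ Q^{−1} (x − s) ⩽ 1} for a positive-definite matrix Q.

```python
import numpy as np

def central_cut_update(s, Q, a):
    """One ellipsoid-method step for E_k = {x : (x-s)^T Q^{-1} (x-s) <= 1}.

    Given the normal a of a violated constraint a^T x <= b_i (only the
    direction a matters for a cut through the center), return the standard
    minimum-volume ellipsoid (s', Q') containing E_k ∩ {x : a^T (x - s) <= 0}."""
    n = len(s)
    g = Q @ a / np.sqrt(a @ Q @ a)          # scaled cut direction
    s_next = s - g / (n + 1)                # new center, shifted into the kept half
    Q_next = (n**2 / (n**2 - 1.0)) * (Q - (2.0 / (n + 1)) * np.outer(g, g))
    return s_next, Q_next

# sanity check of the volume shrinkage: vol ratio = sqrt(det Q'/det Q)
n = 5
s, Q = np.zeros(n), np.eye(n)               # E_0 = unit ball
a = np.eye(n)[0]                            # violated constraint with normal e_1
s1, Q1 = central_cut_update(s, Q, a)
ratio = np.sqrt(np.linalg.det(Q1) / np.linalg.det(Q))
print(ratio, np.exp(-1.0 / (2 * n + 2)))    # ≈ 0.904 <= 0.920
```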

Pages 121–127:

geometry of ellipsoids

theorem: let E ⊆ R^n be an ellipsoid with center s and let H ⊆ R^n be a halfspace with s on its boundary; then there exists an ellipsoid E′ ⊆ R^n that contains E ∩ H such that vol E′ ⩽ e^{−1/(2n+2)} ⋅ vol E

what's an ellipsoid? any non-singular affine transformation x ↦ Mx + s of the unit ball E_0 = Ball(0, 1) = {x ∣ x^⊺ x ⩽ 1}

what's the volume of an ellipsoid? an affine transformation x ↦ Mx + s changes any volume by a multiplicative factor of ∣det M∣; therefore, vol E = ∣det M∣ ⋅ vol E_0 if E = M(E_0) + s

strategy for proof of theorem: first prove the unit-ball case E = E_0, then extend to the general case by "abstract nonsense"
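A small sketch (illustration only) of the two volume facts just stated; it also uses the closed form vol Ball(0, 1) = π^{n/2} / Γ(n/2 + 1), which is standard but not stated on the slides.

```python
import math
import numpy as np

def unit_ball_volume(n: int) -> float:
    """Volume of Ball(0, 1) in R^n: pi^(n/2) / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def ellipsoid_volume(M: np.ndarray) -> float:
    """Volume of E = M(E_0) + s; the shift s does not affect the volume."""
    n = M.shape[0]
    return abs(np.linalg.det(M)) * unit_ball_volume(n)

# example: an axis-aligned ellipsoid with semi-axes 3, 2, 1 in R^3
print(ellipsoid_volume(np.diag([3.0, 2.0, 1.0])))  # 6 * (4/3) * pi ≈ 25.13
print(6 * 4.0 / 3.0 * math.pi)
```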

Pages 128–135:

unit-ball case E = E_0 = {x ∣ x^⊺ x ⩽ 1}

let H_0 be a halfspace with 0 on its boundary

by rotational symmetry, we can assume H_0 = {x ∣ x_1 ⩾ 0}

let E_0′ = M(E_0) + t ⋅ e_1, where M = diag(a, b, …, b)

claim: if (1 − t)²/a² ⩽ 1 and t²/a² + 1/b² ⩽ 1, then E_0′ ⊇ E_0 ∩ H_0

note E_0′ = {y ∣ (y − t ⋅ e_1)^⊺ M^{−2} (y − t ⋅ e_1) ⩽ 1}; the conditions ensure e_1, e_2 ∈ E_0′ …

claim: the minimum value of det M = a ⋅ b^{n−1} subject to these conditions satisfies det M ⩽ e^{−1/(2n+2)}
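The slides do not name the minimizer; the usual textbook choice is t = 1/(n+1), a = n/(n+1), b = n/√(n²−1) (an assumption here, not taken from the slides). The sketch below checks numerically that this choice meets both conditions with equality and that a ⋅ b^{n−1} ⩽ e^{−1/(2n+2)}.

```python
import math

def standard_choice(n: int):
    """Textbook minimizer of det M = a * b^(n-1) under the two conditions
    in the claim (assumed here, not given on the slides):
    t = 1/(n+1), a = n/(n+1), b = n/sqrt(n^2 - 1)."""
    t = 1.0 / (n + 1)
    a = n / (n + 1.0)
    b = n / math.sqrt(n**2 - 1.0)
    return t, a, b

for n in (2, 5, 50):
    t, a, b = standard_choice(n)
    cond1 = (1 - t) ** 2 / a**2          # should be <= 1 (here exactly 1)
    cond2 = t**2 / a**2 + 1 / b**2       # should be <= 1 (here exactly 1)
    det_M = a * b ** (n - 1)
    print(n, round(cond1, 6), round(cond2, 6), det_M <= math.exp(-1 / (2 * n + 2)))
```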

Pages 136–141:

general case E = T(E_0) + s

then, H_0 = T^{−1}(H − s) is a halfspace with 0 on its boundary

let E_0′ be the minimum-volume ellipsoid containing E_0 ∩ H_0

we know (from the unit-ball case): vol E_0′ ⩽ e^{−1/(2n+2)} ⋅ vol E_0

choose E′ = T(E_0′) + s; then E′ ⊇ T(E_0 ∩ H_0) + s = E ∩ H

then vol(E′) / vol(E) = vol(E_0′) / vol(E_0) ⩽ e^{−1/(2n+2)}, since applying T scales both volumes by the same factor ∣det T∣
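A tiny sketch (illustration only) of this "abstract nonsense": writing E_0′ = M′(E_0) + t′, we get E′ = (T M′)(E_0) + (T t′ + s), so vol E and vol E′ both pick up the same factor ∣det T∣ and the unit-ball ratio survives.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# E_0' = M'(E_0) + t' from the unit-ball case; T is the affine map with E = T(E_0) + s.
# Both matrices are chosen arbitrarily here, just to illustrate the volume identity.
M_prime = np.diag(rng.uniform(0.5, 1.0, size=n))
T = rng.normal(size=(n, n))                        # almost surely non-singular

# vol E = |det T| * vol(E_0) and vol E' = |det(T @ M')| * vol(E_0),
# so vol(E')/vol(E) equals |det M'| = vol(E_0')/vol(E_0).
ratio_general = abs(np.linalg.det(T @ M_prime)) / abs(np.linalg.det(T))
ratio_unit_ball = abs(np.linalg.det(M_prime))
print(np.isclose(ratio_general, ratio_unit_ball))  # True
```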

Pages 142–148:

feasibility: relaxed vs non-relaxed

last time: poly-time algorithm for the relaxed feasibility problem

given: R > ε > 0, polyhedron P = {x ∈ R^n ∣ Ax ⩽ b}

promise: ∃y ∈ R^n. Ball(y, ε) ⊆ P ⊆ Ball(0, R)

goal: find some point y ∈ P

today: turn it into an algorithm for general linear programming

first step: algorithm for the (non-relaxed) feasibility problem

given: system of linear inequalities {Ax ⩽ b}

goal: decide whether the system has a solution or not

strategy: compute a new system {Ãx ⩽ b̃} such that

1. if {Ax ⩽ b} is feasible, then {Ãx ⩽ b̃} satisfies the relaxed feasibility promise for some R, ε with ⟨R⟩ + ⟨ε⟩ ⩽ poly(⟨A⟩, ⟨b⟩)

2. if {Ax ⩽ b} is infeasible, then {Ãx ⩽ b̃} is infeasible

it follows that the relaxed-feasibility algorithm finds a solution to {Ãx ⩽ b̃} if and only if {Ax ⩽ b} is feasible ⇝ can decide feasibility of {Ax ⩽ b} in polynomial time (see the skeleton after this slide)

remains to show: construction of {Ãx ⩽ b̃}
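A skeleton (not from the slides) of the decision procedure just described. Here construct_tilde_system and relaxed_feasibility_oracle are hypothetical placeholders for the construction on the next two slides and for the ellipsoid method above; they are passed in as arguments so that nothing undefined is referenced.

```python
import numpy as np

def decide_feasibility(A, b, construct_tilde_system, relaxed_feasibility_oracle):
    """Decide whether {Ax <= b} has a solution, via the relaxed problem.

    Skeleton only: construct_tilde_system(A, b) must return (A_t, b_t, R, eps)
    with properties 1 and 2 above, and relaxed_feasibility_oracle(A_t, b_t, R, eps)
    is the ellipsoid-based algorithm, which returns a point of the polyhedron
    whenever the promise holds (and may return None otherwise)."""
    A_t, b_t, R, eps = construct_tilde_system(A, b)
    y = relaxed_feasibility_oracle(A_t, b_t, R, eps)
    # by property 2 no feasible point of {A_t x <= b_t} exists when {Ax <= b} is
    # infeasible, so it suffices to check the returned point against the system
    return y is not None and bool(np.all(A_t @ y <= b_t))

# usage: decide_feasibility(A, b, construct_tilde_system=..., relaxed_feasibility_oracle=...)
```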

Page 149:

not-too-large promise

lemma: {Ax ⩽ b} is feasible if and only if the following system is feasible, for R = 100^{⟨A⟩+⟨b⟩}:

{Ax ⩽ b, −R ⋅ 1 ⩽ x ⩽ R ⋅ 1}

the lemma follows from the fact that feasible systems of linear inequalities always have solutions with small encoding size

Pages 150–161:

not-too-small promise

let {Ax ⩽ b} be a system of linear inequalities

for η ⩾ 0, let P_η be the set of solutions of {Ax ⩽ b + η ⋅ 1}

lemma: ∃η, ε > 0 with ⟨η⟩ + ⟨ε⟩ ⩽ poly(⟨A⟩, ⟨b⟩) such that

1. if P_0 ≠ ∅, then Ball(y, ε) ⊆ P_η for some y ∈ R^n

2. if P_0 = ∅, then P_η = ∅

proof of 1: let y ∈ P_0

then, Ax = Ay + A(x − y) ⩽ b + ∥A(x − y)∥_∞ ⋅ 1

also ∥A(x − y)∥_∞ ⩽ 2^{⟨A⟩} ⋅ ∥x − y∥_2

therefore, Ax ⩽ b + η ⋅ 1 whenever ∥x − y∥_2 ⩽ η ⋅ 2^{−⟨A⟩}, so we may take ε = η ⋅ 2^{−⟨A⟩} □

proof of 2: surprising that this part of the lemma is true

strategy: argue about infeasibility certificates

if P_0 = ∅, then by Farkas' lemma ∃y ⩾ 0 with A^⊺ y = 0 and b^⊺ y = −1; we may also assume ⟨y⟩ ⩽ poly(⟨A⟩, ⟨b⟩)

claim: y also certifies P_η = ∅

proof: (b + η ⋅ 1)^⊺ y ⩽ −1 + η ⋅ 2^{O(⟨y⟩)} < 0 for small enough η (using 1^⊺ y ⩽ 2^{O(⟨y⟩)}) □
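A short sketch of this certificate argument: any y with y ⩾ 0, A^⊺ y = 0, b^⊺ y = −1 also refutes {Ax ⩽ b + η ⋅ 1} as long as η ⋅ 1^⊺ y < 1; the check below mirrors the displayed inequality.

```python
import numpy as np

def refutes(A: np.ndarray, b: np.ndarray, y: np.ndarray, eta: float = 0.0) -> bool:
    """Check that y is a Farkas certificate of infeasibility for {Ax <= b + eta*1}:
    y >= 0, A^T y = 0, and (b + eta*1)^T y < 0.  Any feasible x would give the
    contradiction 0 = (A^T y)^T x = y^T (Ax) <= y^T (b + eta*1) < 0."""
    ones = np.ones(len(b))
    return bool(np.all(y >= 0)
                and np.allclose(A.T @ y, 0)
                and (b + eta * ones) @ y < 0)

# tiny infeasible system: x <= 0 and -x <= -1 (i.e. x >= 1)
A = np.array([[1.0], [-1.0]])
b = np.array([0.0, -1.0])
y = np.array([1.0, 1.0])            # A^T y = 0, b^T y = -1
print(refutes(A, b, y, eta=0.0))    # True
print(refutes(A, b, y, eta=0.1))    # True: -1 + 0.1 * (1^T y) = -0.8 < 0
print(refutes(A, b, y, eta=1.0))    # False: certificate degrades once eta * 1^T y >= 1
```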

Pages 162–167:

decision vs search

so far: we saw a poly-time algorithm for deciding the feasibility of systems of linear inequalities

next: if a solution exists, we can also find one in poly time

idea: look for a basic feasible solution, because it is enough to know the list of tight constraints for such a solution

strategy: given a system {Ax = b, x ⩾ 0}, find a ⊆-maximal coordinate set S such that {Ax = b, x ⩾ 0, x_S = 0} remains feasible

can find such a set by starting with S = ∅ and growing S one coordinate at a time, using the algorithm for deciding feasibility

turns out: if we found such a set, the linear equations {Ax = b, x_S = 0} uniquely determine a feasible solution x for the given system
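A sketch of this strategy in Python. SciPy's linprog (assumed available) is used only as a stand-in feasibility oracle; on the slides this role is played by the ellipsoid-based decision procedure.

```python
import numpy as np
from scipy.optimize import linprog

def is_feasible(A, b, zero_set):
    """Stand-in oracle: decide feasibility of {Ax = b, x >= 0, x_S = 0}."""
    n = A.shape[1]
    bounds = [(0.0, 0.0) if j in zero_set else (0.0, None) for j in range(n)]
    res = linprog(np.zeros(n), A_eq=A, b_eq=b, bounds=bounds, method="highs")
    return res.status == 0

def find_basic_feasible_solution(A, b):
    """Grow a ⊆-maximal set S with {Ax = b, x >= 0, x_S = 0} feasible,
    then read off the solution determined by the tight constraints."""
    n = A.shape[1]
    S = set()
    for j in range(n):                      # try to force each coordinate to 0
        if is_feasible(A, b, S | {j}):
            S.add(j)
    # solve {Ax = b, x_S = 0}; for a maximal S this determines the BFS uniquely
    E = np.vstack([A] + [np.eye(n)[j] for j in sorted(S)])
    d = np.concatenate([b, np.zeros(len(S))])
    x, *_ = np.linalg.lstsq(E, d, rcond=None)
    return x

# example: x1 + x2 + x3 = 1, x >= 0 has basic feasible solutions at the unit vectors
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
print(find_basic_feasible_solution(A, b))   # e.g. [0. 0. 1.]
```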

Pages 168–171:

feasible vs optimal solutions

last step: turn our algorithm for finding a feasible solution of a linear program into an algorithm for finding an optimal solution

maximize c^⊺ x subject to Ax ⩽ b and x ⩾ 0   (P)

good general strategy: use binary search to find the largest γ such that {c^⊺ x ⩾ γ, Ax ⩽ b, x ⩾ 0} is feasible

shortcut for linear programs: use strong duality; find a "matching pair" of feasible solutions to (P) and its dual (D)

minimize b^⊺ y subject to A^⊺ y ⩾ c and y ⩾ 0   (D)

concretely, find feasible solutions x and y for the following system; any such pair is guaranteed to consist of optimal solutions for (P) and (D)

{c^⊺ x = b^⊺ y, Ax ⩽ b, x ⩾ 0, A^⊺ y ⩾ c, y ⩾ 0}
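A sketch (illustration only) assembling this combined primal-dual system as a single inequality system M z ⩽ d over z = (x, y), which is the form the feasibility algorithm above expects; the equality c^⊺ x = b^⊺ y is split into two opposite inequalities. The example instance and its optimal pair below are chosen by hand.

```python
import numpy as np

def primal_dual_system(A: np.ndarray, b: np.ndarray, c: np.ndarray):
    """Assemble {c^T x = b^T y, Ax <= b, x >= 0, A^T y >= c, y >= 0} as one
    inequality system M z <= d over z = (x, y); any feasible z is an optimal
    pair for (P) and (D) by strong duality."""
    m, n = A.shape
    M = np.vstack([
        np.concatenate([c, -b])[None, :],            #  c^T x - b^T y <= 0
        np.concatenate([-c, b])[None, :],            # -c^T x + b^T y <= 0
        np.hstack([A, np.zeros((m, m))]),            #  A x           <= b
        np.hstack([-np.eye(n), np.zeros((n, m))]),   # -x             <= 0
        np.hstack([np.zeros((n, n)), -A.T]),         # -A^T y         <= -c
        np.hstack([np.zeros((m, n)), -np.eye(m)]),   # -y             <= 0
    ])
    d = np.concatenate([[0.0, 0.0], b, np.zeros(n), -c, np.zeros(m)])
    return M, d

# example: maximize x1 + x2 subject to x1 + 2*x2 <= 4, 3*x1 + x2 <= 6, x >= 0;
# the optimal primal/dual pair (found by hand) satisfies the combined system
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([1.0, 1.0])
M, d = primal_dual_system(A, b, c)
z = np.concatenate([[1.6, 1.2], [0.4, 0.2]])   # x = (1.6, 1.2), y = (0.4, 0.2)
print(np.all(M @ z <= d + 1e-9))               # True
```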
