Transcript of Algorithms Lecture 10 (Lecturer: Moni Naor): Linear Programming in Small Dimension

Page 1:
Algorithms

Lecture 10

Lecturer: Moni Naor

Page 2:

Linear Programming in Small Dimension

Canonical form of linear programming

Maximize:    c1·x1 + c2·x2 + … + cd·xd

Subject to:  a1,1·x1 + a1,2·x2 + … + a1,d·xd ≤ b1

             a2,1·x1 + a2,2·x2 + … + a2,d·xd ≤ b2

             ...

             an,1·x1 + an,2·x2 + … + an,d·xd ≤ bn

n – number of constraints, d – number of variables (the dimension)
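To make the canonical form concrete, here is a minimal sketch that feeds a toy instance (coefficients invented for illustration) to an off-the-shelf solver; scipy's linprog minimizes, so c is negated to maximize:

```python
import numpy as np
from scipy.optimize import linprog

# A toy instance with d = 2 variables and n = 3 constraints:
#   maximize  3*x1 + 2*x2
#   s.t.      x1 + x2 <= 4,   x1 <= 2,   x2 <= 3
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
b = np.array([4.0, 2.0, 3.0])

# linprog minimizes, so negate c; variables are free, as in the canonical form
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2)
print(res.x)      # optimal vertex, here (2, 2)
print(-res.fun)   # optimal value, here 10
```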

Page 3:

Linear Programming in Two Dimensions

[Figure: the feasible region of a two-dimensional LP, with the optimal vertex marked.]

Page 4:

What is special in low dimension

• Only d constraints determine the solution
  – The optimal value on those d constraints determines the global one
  – The problem is reduced to finding those d constraints that matter
  – We know that equality holds in those constraints
• Generic algorithms (the two bounds are compared numerically in the sketch below):
  – Fourier-Motzkin elimination: (n/2)^(2^d)
  – Worst case of Simplex: the number of basic feasible solutions (vertices), C(n,d)
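A quick comparison of the two bounds for an illustrative size (the values of n and d are made up) shows how much faster the doubly-exponential Fourier-Motzkin bound grows:

```python
from math import comb

n, d = 30, 4   # illustrative sizes, not from the lecture

fourier_motzkin = (n // 2) ** (2 ** d)   # (n/2)^(2^d) constraint blow-up
simplex_vertices = comb(n, d)            # C(n,d) basic feasible solutions

print(f"Fourier-Motzkin bound: {fourier_motzkin:.3e}")   # ~6.6e+18
print(f"C({n},{d}) vertices:   {simplex_vertices}")      # 27405
```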

Page 5:

Key Observation

• If we know that an inequality constraint is defining:
  – we can reduce the number of variables, by
    • Projection
    • Substitution (illustrated below)

[Figure: the feasible region with the optimal vertex on the line 4x1 - 6x2 = 4.]
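For the substitution route, using the line from the figure: treating 4x1 - 6x2 = 4 as an equality eliminates x2 from everything else. A minimal sympy sketch (the objective and the extra constraint are hypothetical, only to show the mechanics):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# Defining constraint held with equality (from the slide's figure):
defining = sp.Eq(4*x1 - 6*x2, 4)
x2_expr = sp.solve(defining, x2)[0]          # x2 = (4*x1 - 4)/6

# Hypothetical objective and second constraint, just for illustration:
objective = 3*x1 + 2*x2
other = x1 + x2 - 5                          # stands for x1 + x2 <= 5

print(sp.simplify(objective.subs(x2, x2_expr)))   # objective in x1 alone
print(sp.simplify(other.subs(x2, x2_expr)))       # constraint in x1 alone
```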

Page 6:

Incremental Algorithm BB

Input: a set of n constraints H on d variables
Output: the set of defining constraints

0. If |H| = d, output BB(H) = H
1. Pick a random constraint h ∈ H;
   recursively find BB(H \ {h})
2. If the optimum of BB(H \ {h}) does not violate h, output BB(H \ {h});
   else project all the constraints onto h and
   recursively solve this (n-1, d-1) LP program
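This recursion is essentially Seidel's randomized LP algorithm, usually implemented iteratively: process the constraints in random order and, whenever the new constraint is violated, re-solve on its boundary. Below is a minimal two-dimensional sketch, assuming an artificial bounding box [-B, B]^2 (so the LP is bounded) and non-degenerate constraints; the names and tolerances are mine, not the lecture's.

```python
import random

def solve_1d(c, cons, lo, hi):
    """Maximize c*t subject to a*t <= b for each (a, b) in cons,
    with t restricted to [lo, hi]. Returns None if infeasible."""
    for a, b in cons:
        if a > 1e-12:
            hi = min(hi, b / a)
        elif a < -1e-12:
            lo = max(lo, b / a)
        elif b < -1e-9:
            return None                    # 0*t <= b with b < 0: infeasible
    if lo > hi + 1e-9:
        return None
    return hi if c >= 0 else lo

def seidel_2d(c, H, B=1e6):
    """Maximize c.x subject to a.x <= b for (a, b) in H, assuming the
    optimum lies inside the artificial box [-B, B]^2 and no constraint
    is degenerate (a != (0, 0))."""
    H = H[:]
    random.shuffle(H)
    # Start from the optimum of the box alone:
    x = [B if c[0] >= 0 else -B, B if c[1] >= 0 else -B]
    box = [((1, 0), B), ((-1, 0), B), ((0, 1), B), ((0, -1), B)]
    seen = []
    for a, b in H:
        if a[0]*x[0] + a[1]*x[1] <= b + 1e-9:
            seen.append((a, b))
            continue
        # x violates (a, b), so the new optimum satisfies a.x = b exactly.
        # Substitute the line into all constraints seen so far, leaving a
        # one-variable LP: the (n-1, d-1) projection step from the slide.
        if abs(a[1]) >= abs(a[0]):           # eliminate x2 = m - k*x1
            k, m = a[0] / a[1], b / a[1]
            c1 = c[0] - c[1] * k
            cons = [(p[0] - p[1]*k, q - p[1]*m) for p, q in seen + box]
            t = solve_1d(c1, cons, -B, B)
            if t is None:
                return None                  # LP infeasible
            x = [t, m - k*t]
        else:                                # eliminate x1 = m - k*x2
            k, m = a[1] / a[0], b / a[0]
            c1 = c[1] - c[0] * k
            cons = [(p[1] - p[0]*k, q - p[0]*m) for p, q in seen + box]
            t = solve_1d(c1, cons, -B, B)
            if t is None:
                return None
            x = [m - k*t, t]
        seen.append((a, b))
    return x

# Example: maximize x1 + x2 s.t. x1 <= 2, x2 <= 3, x1 + x2 <= 4  ->  (2, 2)
print(seidel_2d((1, 1), [((1, 0), 2), ((0, 1), 3), ((1, 1), 4)]))
```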

Page 7:

Correctness, Termination and Analysis

• Correctness: by induction…
• Termination: if a non-defining constraint is chosen, there is no need to rerun
• Analysis: the probability that h is one of the defining constraints is d/n (backwards analysis: the removed h is equally likely to be any of the n constraints, and at most d of them are defining)


Page 8:

Analysis

The probability that the removed h is one of the defining constraints is d/n, giving the recurrence

  T(d,n) = d/n · T(d-1,n-1) + T(d,n-1)
         ≤ d/n · (d-1)!·(n-1) + d!·(n-1)    (by the induction hypothesis T(d,n) ≤ d!·n)
         = (n-1)·d!·(1/n + 1)
         ≤ n·d!
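A quick numeric sanity check of this bound; the base cases below (T = 1 once n ≤ d) are my assumption, chosen only to make the recursion well-founded:

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def T(d, n):
    # Expected-cost recurrence from the slide; base cases are an assumption.
    if n <= d or d == 0:
        return 1.0
    return (d / n) * T(d - 1, n - 1) + T(d, n - 1)

for d in (2, 3, 4):
    n = 50
    print(d, n, T(d, n), factorial(d) * n)   # T(d,n) stays below d!*n
```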


Page 9:

How to improve

The algorithm is wasteful: whenever the solution of the subproblem violates the newly chosen constraint, a new solution is computed from scratch.


Page 10:

Random Sampling idea

Build the basis by adding the constraints in a manner related to history.

Input: a set of n constraints H on d variables
Output: the set of defining constraints

0. If |H| ≤ c·d², return simplex on H
1. S ← ∅
2. Repeat:
   • Pick a random R ⊂ H of size r
   • Solve recursively on S ∪ R; the solution is u
   • V ← the set of constraints in H violated by u
   • If |V| ≤ t, then S ← S ∪ V
   Until V = ∅
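A compact sketch of this Clarkson-style sampling loop, with scipy's linprog standing in for both "simplex on H" and the recursive call, an artificial box keeping the subproblems bounded, and my own choice of constants; it illustrates the scheme rather than reproducing the lecture's code:

```python
import random
import numpy as np
from scipy.optimize import linprog

def sampling_lp(c, A, b):
    """Maximize c.x s.t. A x <= b by Clarkson-style random sampling.
    Assumes the LP is feasible with its optimum inside [-1e6, 1e6]^d."""
    n, d = A.shape
    r = int(d * np.sqrt(n)) + 1
    t = 2 * n * d / r                       # threshold from the analysis

    def base(idx):
        # Stand-in for "simplex on H" / the recursive call; the box
        # bounds keep every subproblem bounded (assumption).
        res = linprog(-np.asarray(c), A_ub=A[idx], b_ub=b[idx],
                      bounds=[(-1e6, 1e6)] * d)
        return res.x

    if n <= 9 * d * d:                      # small instance: solve directly
        return base(list(range(n)))

    S = set()
    while True:
        R = set(random.sample(range(n), r))
        u = base(sorted(S | R))
        V = [i for i in range(n) if A[i] @ u > b[i] + 1e-9]
        if not V:
            return u                        # nothing violated: done
        if len(V) <= t:                     # successful augmentation
            S |= set(V)

# Example: the same toy instance as before (solved via the base case).
A = np.array([[1., 1.], [1., 0.], [0., 1.]])
b = np.array([4., 2., 3.])
print(sampling_lp([1., 1.], A, b))          # -> (2, 2)
```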

Page 11:

Correctness, Termination and Analysis

Claim: each time we augment S (S ← S ∪ V), we add to S a new constraint from the "real" basis B of H:
– If u did not violate any constraint in B, it would be optimal
– So V must contain an element of B that was not in S before

• Since |B| = d, we can augment S at most d times
• Therefore the number of constraints in a recursive call is bounded by |R| + |S| ≤ r + d·t
• The important factor for the analysis: the probability of a successful augmentation

Page 12:

Sampling Lemma

For any H and S ⊂ H, the expected (over R) number of constraints of H that violate u (the optimum on S ∪ R) is at most nd/(r+1) ≤ nd/r.

Proof: let X(R,h) = 1 iff h is violated by u(S ∪ R). We need to bound

  E_R[Σ_h X(R,h)] = (1/#R) Σ_{R: |R|=r} Σ_h X(R,h)

Instead, consider all subsets Q = R ∪ {h} of size r+1:

  = (1/#R) Σ_{Q: |Q|=r+1} Σ_{h∈Q} X(Q\{h}, h)
  = (#Q/#R) · (r+1) · Prob_{Q, h∈Q}[X(Q\{h}, h) = 1]
  ≤ n · d/(r+1)

since (#Q/#R)·(r+1) = n-r ≤ n, and for a fixed Q at most d of its r+1 elements h can be violated by the optimum of S ∪ Q \ {h} (only the defining constraints of S ∪ Q), so the probability is at most d/(r+1).
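A Monte Carlo illustration of the lemma for the S = ∅ case: random halfplanes tangent to the unit circle, with an artificial box so the sampled subproblems stay bounded (the instance, box, and tolerances are mine):

```python
import random
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d, r, trials = 200, 2, 20, 200

# Random halfplanes a.x <= 1 with |a| = 1 (all tangent to the unit circle).
theta = rng.uniform(0, 2 * np.pi, n)
A = np.stack([np.cos(theta), np.sin(theta)], axis=1)
b = np.ones(n)
c = np.array([1.0, 0.0])

total = 0
for _ in range(trials):
    R = random.sample(range(n), r)
    res = linprog(-c, A_ub=A[R], b_ub=b[R], bounds=[(-10, 10)] * d)
    u = res.x                                  # optimum on the sample R
    total += int(np.sum(A @ u > b + 1e-9))     # constraints of H violated by u

print("observed E|V| ~", total / trials)
print("lemma bound nd/(r+1) =", n * d / (r + 1))
```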

Page 13:

Analysis

• Setting t = 2nd/r implies (from Markov's inequality):
  – the expected number of recursive calls until a successful (|V| ≤ t) augmentation is constant
• The number of constraints in a recursive call is bounded by r + O(d²n/r)
• Setting r = d·n^(1/2) makes this O(r)

Total expected running time:

  T(n) ≤ 2d · T(d·n^(1/2)) + O(d²n)

Result: O((log n)^(log d) · (Simplex time)) + O(d²n)

This can be improved to O(d^d + d²n), and further to O(d^(d^(1/2)) + d²n) using [Kalai, Matousek-Sharir-Welzl].