NATCOR Convex Optimization: Linear Programming 1
Julian Hall
School of Mathematics, University of Edinburgh
5 June 2018
What is linear programming (LP)?
The most important model used in optimal decision-making
Grew out of US Army Air Force logistics problems in WW2
First practical LP problem was formulated by George Dantzig in 1946
Dantzig invented the principal solution technique, the simplex algorithm, in 1947
In the “Top 10 algorithms of the 20th century”
G. B. Dantzig (1914-2005)
“The algorithm that runs the world”, New Scientist (2012)
LP and the simplex method have revolutionised organised human decision-making
J. A. J. Hall, NATCOR Convex Optimization: Linear Programming 1
What is linear programming (LP)?
The diet problem
Given
Costs of foodstuffs
Nutrient information and daily requirements
What is the cheapest diet?
A linear programming model
Let xj be the amount of food j purchased (xj ≥ 0), for j = 1, . . . , n
Cost cj of food j gives total cost f = ∑_{j=1}^{n} cj xj to be minimized
Let aij be the amount of nutrient i in food j
Matching the daily requirement bi gives ∑_{j=1}^{n} aij xj = bi, for i = 1, . . . , m
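The diet model drops straight into an LP solver. A minimal sketch with hypothetical data (the costs, nutrient matrix and requirements below are invented for illustration), using scipy.optimize.linprog, which minimises cᵀx subject to Ax = b and x ≥ 0:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 3 foods, 2 nutrients (invented for illustration).
c = np.array([2.0, 3.0, 1.5])           # cost c_j per unit of food j
A = np.array([[1.0, 2.0, 0.5],          # a_ij: nutrient i per unit of food j
              [3.0, 1.0, 2.0]])
b = np.array([10.0, 12.0])              # daily requirements b_i

# Cheapest diet: minimise c^T x subject to Ax = b, x >= 0.
res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3, method="highs")
print(res.x, res.fun)                   # optimal amounts and total cost
```

The solver returns the cheapest feasible diet; with only two nutrient equations and three foods, the feasible region here is a line segment and the optimum sits at one of its two endpoints (vertices).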
Solving LP problems: The standard simplex method
General LP problem in standard form is
maximize f = cTx such that Ax = b and x ≥ 0
with m equations and n variables (m < n)
[Figure: the standard simplex tableau, showing the reduced cost row cᵀ with entry cq, the pivotal column aq with pivotal entry apq, the RHS column b with entry bp, and the nonbasic (N) and basic (B) column partitions]
The standard (tableau) simplex method
Choose a column q with a positive entry in c
Choose a row p with the least ratio between components of b and aq
Update the tableau
Need a better idea of what’s going on and how to solve LP problems efficiently
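The three tableau steps above can be sketched in code. This is my own illustrative dense implementation, not the lecturer's: it assumes a feasible starting basis is supplied, ignores degeneracy, and has no unboundedness check.

```python
import numpy as np

def tableau_simplex(A, b, c, basis):
    """Standard (tableau) simplex for: maximize c^T x s.t. Ax = b, x >= 0,
    from a given feasible basis. Dense O(mn) work per iteration."""
    m, n = A.shape
    Binv = np.linalg.inv(A[:, basis])
    T = np.zeros((m + 1, n + 1))
    T[:m, :n] = Binv @ A                       # B^-1 A
    T[:m, n] = Binv @ b                        # reduced RHS
    T[m, :n] = c - A.T @ (Binv.T @ c[basis])   # reduced costs
    while True:
        q = int(np.argmax(T[m, :n]))
        if T[m, q] <= 1e-9:                    # no positive entry: optimal
            break
        col = T[:m, q]
        ratios = np.where(col > 1e-9, T[:m, n] / np.where(col > 1e-9, col, 1.0), np.inf)
        p = int(np.argmin(ratios))             # least-ratio row
        T[p, :] /= T[p, q]                     # pivot on a_pq
        for i in range(m + 1):
            if i != p:
                T[i, :] -= T[i, q] * T[p, :]
        basis[p] = q
    x = np.zeros(n)
    x[basis] = T[:m, n]
    return x, float(c @ x)

# maximize x1 + 2 x2 s.t. x1 + x2 <= 4, x2 <= 3, with slacks x3, x4:
x, f = tableau_simplex(np.array([[1.0, 1, 1, 0], [0, 1, 0, 1]]),
                       np.array([4.0, 3]), np.array([1.0, 2, 0, 0]), [2, 3])
```

Every iteration updates the whole (m+1)×(n+1) tableau, which is exactly the O(mn) cost per iteration criticised later in the lecture.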
Solving LP problems: The feasible region
General LP problem is
maximize f = cTx such that Ax = b and x ≥ 0
with m equations and n variables (m < n)
The feasible region
The solution set of Ax = b is an (n − m)-dimensional hyperplane in Rn
Intersection with x ≥ 0 is the feasible region K
K is a polyhedron
A vertex has
n − m zero components
m components given by Ax = b
A solution of the LP occurs at a vertex of K
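The vertex characterisation suggests a brute-force check: a vertex is a basic feasible solution, found by picking m columns of A, solving for the m basic components and testing nonnegativity. A sketch (my own illustration, viable only for tiny n):

```python
import itertools
import numpy as np

def vertices(A, b, tol=1e-9):
    """Enumerate vertices of K = {x : Ax = b, x >= 0} by trying every
    choice of m basic columns; the other n - m components are zero."""
    m, n = A.shape
    verts = []
    for cols in itertools.combinations(range(n), m):
        B = A[:, cols]
        if abs(np.linalg.det(B)) < tol:
            continue                           # singular: not a basis
        xB = np.linalg.solve(B, b)
        if np.all(xB >= -tol):                 # basic AND feasible
            x = np.zeros(n)
            x[list(cols)] = xB
            verts.append(x)
    return verts

# An n = 3, m = 1 example in the spirit of the figure, x1 + x2 + x3 = 1:
V = vertices(np.array([[1.0, 1.0, 1.0]]), np.array([1.0]))
```

Here K is a triangle with the three unit vectors as its vertices, matching the count C(n, m) = 3 of possible bases.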
[Figure: the feasible region K for an example with n = 3, m = 1, drawn in coordinates x1, x2, x3]
The simplex algorithm: Choosing where to move
[Figure: a vertex x of the feasible region K, with edges along which f changes at rates c1 and c3]
General LP problem is
maximize f = cTx such that Ax = b and x ≥ 0
At a vertex x, consider moving along an edge of K to improve the value of f
Corresponds to increasing a variable xj from zero
Rate of change cj of f with xj is known
If no positive cj then x is optimal so stop
Increase variable xq with greatest cj
The simplex algorithm: Choosing how far to go
[Figure: moving from vertex x (objective f) along edge direction d to x + αd, where the objective is f + αc1]
Increase variable xq with greatest cj > 0
Edge direction d corresponding to xq is known
Move along x + αd
Stop when the first component of x + αd is zeroed
xq increases to α > 0
Objective increases by α cq > 0
Repeat!
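The step-length choice ("how far to go") is a ratio test. A minimal sketch, assuming the current point x and the edge direction d are known:

```python
import numpy as np

def step_length(x, d, tol=1e-9):
    """Largest alpha with x + alpha*d >= 0, and the index of the first
    component zeroed; (inf, None) if nothing decreases (unbounded LP)."""
    dec = d < -tol                             # only decreasing components limit alpha
    ratios = np.where(dec, -x / np.where(dec, d, -1.0), np.inf)
    if not np.isfinite(ratios).any():
        return np.inf, None
    i = int(np.argmin(ratios))
    return float(ratios[i]), i

# At x = (0, 0, 4, 3), moving along d = (0, 1, -1, -1): x4 hits zero first.
alpha, i = step_length(np.array([0.0, 0, 4, 3]), np.array([0.0, 1, -1, -1]))
```

Only components that decrease along d can block the step; if none do, f increases without bound and the LP has no finite optimum.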
The simplex algorithm: Basic and nonbasic variables
At a vertex x
There are n − m nonbasic variables xN with value zero
Their indices form a set N
The columns of A corresponding to N form the matrix N
The components of c corresponding to N form the vector cN
There are m basic variables xB with values uniquely defined by the m equations
Their indices form a set B
The columns of A corresponding to B form the nonsingular basis matrix B
The components of c corresponding to B form the vector cB
The partitioned LP in standard form is then
maximize f = cBᵀxB + cNᵀxN
subject to BxB + NxN = b
xB ≥ 0, xN ≥ 0
The simplex algorithm: The reduced objective and optimality
Using the partitioned equations
BxB + NxN = b ⇒ xB = B⁻¹(b − NxN) = b̂ − B⁻¹NxN, where b̂ = B⁻¹b
Hence, substituting for xB , the objective function is
f = cBᵀxB + cNᵀxN = cBᵀ(b̂ − B⁻¹NxN) + cNᵀxN = f̂ + ĉᵀxN
where
f̂ = cBᵀb̂ is the objective value when xN = 0
ĉ = cN − NᵀB⁻ᵀcB is the vector of reduced costs
Behaviour of f as xN increases from zero yields the optimality condition ĉ ≤ 0
This is the key to solving LP problems
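The derivation above is a few lines of linear algebra. A sketch, assuming a basis partition is given; `b_hat`, `f_hat` and `c_hat` are the reduced quantities from the slide:

```python
import numpy as np

def reduced_quantities(A, b, c, basic, nonbasic):
    """Form b_hat = B^-1 b, f_hat = c_B^T b_hat and the reduced costs
    c_hat = c_N - N^T B^-T c_B for the given basis partition."""
    B, N = A[:, basic], A[:, nonbasic]
    b_hat = np.linalg.solve(B, b)
    f_hat = float(c[basic] @ b_hat)
    pi = np.linalg.solve(B.T, c[basic])        # B^T pi = c_B
    c_hat = c[nonbasic] - N.T @ pi
    return b_hat, f_hat, c_hat

# maximize x1 + 2 x2 s.t. x1 + x2 + x3 = 4, x2 + x4 = 3, at basis {x1, x2}:
b_hat, f_hat, c_hat = reduced_quantities(
    np.array([[1.0, 1, 1, 0], [0, 1, 0, 1]]),
    np.array([4.0, 3]), np.array([1.0, 2, 0, 0]), [0, 1], [2, 3])
# All reduced costs are nonpositive, so this vertex is optimal.
```

Note that ĉ is computed via the transposed system Bᵀπ = cB rather than by forming B⁻ᵀ explicitly; this is the form the revised simplex method exploits.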
The simplex algorithm: Data requirements
Each simplex iteration requires
Reduced costs ĉ = cN − NᵀB⁻ᵀcB
Reduced RHS b̂ = B⁻¹b
Edge direction d corresponding to increasing xq
[Figure: the simplex tableau again, highlighting cq, aq, apq, b and bp within the N and B partitions]
Know xB = b̂ − B⁻¹NxN
xN = xqeq when increasing just xq
Hence xB = b̂ − xq × B⁻¹Neq
Edge direction is thus
d = [ eq ; −âq ], where âq = B⁻¹(Neq) = B⁻¹aq
The simplex algorithm: Computational requirements
Find reduced costs ĉ = cN − Nᵀ(B⁻ᵀcB) thus:
Solve Bᵀπ = cB
Form ĉ = cN − Nᵀπ
Find basic edge direction âq = B⁻¹aq thus:
Solve Bâq = aq
Reduced RHS b̂ is maintained by updating
The revised simplex method
Data required by the algorithm can be found by solving square systems of equations
Bᵀπ = cB and Bâq = aq
Efficiency depends on
How B−1 is represented
How the following relation between successive matrices B is exploited
B := B + (aq − ap)epᵀ, where ap = Bep is the leaving column
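Putting the two solves and the basis change together gives a sketch of the revised simplex loop (my own illustration: dense solves stand in for the factorization of B that a real code would keep and update):

```python
import numpy as np

def revised_simplex(A, b, c, basis, tol=1e-9):
    """Revised simplex sketch for: maximize c^T x s.t. Ax = b, x >= 0,
    from a feasible basis. No degeneracy handling."""
    m, n = A.shape
    basis = list(basis)
    while True:
        B = A[:, basis]
        b_hat = np.linalg.solve(B, b)                  # reduced RHS
        pi = np.linalg.solve(B.T, c[basis])            # B^T pi = c_B
        nonbasic = [j for j in range(n) if j not in basis]
        c_hat = c[nonbasic] - A[:, nonbasic].T @ pi    # reduced costs
        if np.all(c_hat <= tol):                       # optimality: c_hat <= 0
            x = np.zeros(n)
            x[basis] = b_hat
            return x, float(c @ x)
        q = nonbasic[int(np.argmax(c_hat))]            # entering variable
        a_hat_q = np.linalg.solve(B, A[:, q])          # B a_hat_q = a_q
        pos = a_hat_q > tol
        if not pos.any():
            raise ValueError("LP is unbounded")
        ratios = np.where(pos, b_hat / np.where(pos, a_hat_q, 1.0), np.inf)
        basis[int(np.argmin(ratios))] = q              # leaving row p

x, f = revised_simplex(np.array([[1.0, 1, 1, 0], [0, 1, 0, 1]]),
                       np.array([4.0, 3]), np.array([1.0, 2, 0, 0]), [2, 3])
```

Unlike the tableau version, nothing of size m×n is ever updated: each iteration touches only the two square systems and one column of A.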
The simplex algorithm: Efficient implementation
Standard simplex method
Maintains B⁻¹N, b̂ and ĉ in a rectangular tableau
Requires O(mn) storage and O(mn) computation per iteration
Inefficient and prohibitively expensive for large problems
Revised simplex method
Computes âq = B⁻¹aq as required, forms ĉ and updates b̂ = B⁻¹b
Requires up to O(m²) storage and O(m²) computation per iteration, but vastly less for sparse LP problems
Efficient for large (sparse) problems
Important to exploit matrix sparsity
The simplex algorithm: Recap
[Figure: moving from vertex x (objective f) along edge direction d to x + αd, where the objective is f + αc1]
Increase variable xq with greatest cj > 0
Edge direction d corresponding to xq is known
Move along x + αd
Stop when the first component of x + αd is zeroed
xq increases to α > 0
Objective increases by α cq > 0
Repeat!
The simplex algorithm: Does it terminate? In how many iterations?
Proof of termination
In each iteration the objective increases by α cq > 0
So, cannot re-visit vertices
Number of vertices is finite
So, the algorithm terminates
In practice the algorithm visits O(m + n) ≪ n!/(m!(n − m)!) vertices
Could it visit all the vertices?
[Figure: a simplex path from an initial vertex x0 to the optimal vertex x∗]
The simplex algorithm: The worst case can happen!
Klee-Minty problems
maximize ∑_{j=1}^{n} 10^{j−1} xj
subject to xi + 2 ∑_{j=i+1}^{n} 10^{j−i} xj ≤ 100^{n−i}, for i = 1, . . . , n
x1 ≥ 0, . . . , xn ≥ 0
Problems have n variables and n constraints
Feasible region has 2n vertices
Simplex algorithm visits all of them, so its running time is exponential in the worst case
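The Klee-Minty data as stated above are easy to generate; n stays tiny in experiments because the number of vertices, and hence the worst-case iteration count, doubles with each extra variable. A sketch:

```python
import numpy as np

def klee_minty(n):
    """Build c, A, b for the Klee-Minty LP as stated on the slide:
    maximize sum_j 10^(j-1) x_j
    s.t. x_i + 2 sum_{j>i} 10^(j-i) x_j <= 100^(n-i), x >= 0."""
    c = np.array([10.0 ** (j - 1) for j in range(1, n + 1)])
    A = np.eye(n)                                # x_i term in constraint i
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            A[i - 1, j - 1] = 2.0 * 10.0 ** (j - i)
    b = np.array([100.0 ** (n - i) for i in range(1, n + 1)])
    return c, A, b

c, A, b = klee_minty(3)
```

The constraints are inequalities (≤), so slack variables would be added before applying a standard-form simplex code.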
Led to better simplex variants which weight cj by ‖dj‖ (Goldfarb and Reid, 1977)
Bigger LP problems
Suppose N people are choosing diets with limits on the available food
Person k has an individual LP: min ckᵀxk s.t. Akxk = bk and xk ≥ 0
Person k consumes xkj = [xk]j of food j, so ∑_{k=1}^{N} xkj ≤ bj
This availability constraint links the individual LPs
Dantzig-Wolfe structure
min f = c1ᵀx1 + . . . + cNᵀxN
s.t. A01x1 + . . . + A0NxN ≤ b0
A1x1 = b1
. . .
ANxN = bN
x1 ≥ 0, . . . , xN ≥ 0
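The block-angular constraint matrix can be assembled directly. A sketch with hypothetical per-person blocks, stacking the linking (availability) rows above a block diagonal of the individual systems:

```python
import numpy as np
from scipy.linalg import block_diag

def block_angular(A0_blocks, A_blocks):
    """Assemble [[A01 ... A0N], [diag(A1, ..., AN)]] from the linking
    blocks A0k and the per-person equation blocks Ak."""
    linking = np.hstack(A0_blocks)      # availability constraints couple everyone
    own = block_diag(*A_blocks)         # each person's own Ak xk = bk rows
    return np.vstack([linking, own])

# Two people, two foods each: one linking row plus two 2x2 diagonal blocks.
M = block_angular([np.ones((1, 2)), np.ones((1, 2))],
                  [np.eye(2), np.eye(2)])
```

The point of the structure is that M is mostly zeros: a structure-exploiting method never stores or factorises the off-diagonal zero blocks.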
Solution methods can exploit problem structure
Simplex method: Why use it?
Interior point methods (IPM) are the “modern” alternative to the simplex method
For single LP problems IPM are generally faster
For some classes of single LP problems the simplex method is faster
When solving sequences of related LP problems the simplex method is preferable
Branch-and-bound for discrete optimization
Sequential linear programming for nonlinear optimization
Why?
Simplex method yields a basic feasible solution
Simplex method can be re-started easily from an optimal solution of one LP to solve a related LP quickly
71 years old and going strong!
Linear Programming 1: Summary
Described the simplex algorithm
Identified that efficient implementation depends on techniques for solving related systems of equations
Remains to consider how to exploit LP problem structure and matrix sparsity: Wednesday 14:30–15:30; 16:00–17:00