
GAMS/SNOPT: AN SQP ALGORITHM FOR LARGE-SCALE CONSTRAINED OPTIMIZATION

PHILIP E. GILL∗ WALTER MURRAY†

MICHAEL A. SAUNDERS† ARNE DRUD‡

ERWIN KALVELAGEN§

May 10, 2002

1 Introduction

This section describes the GAMS interface to the general-purpose NLP solver SNOPT (Sparse Nonlinear Optimizer), which implements a sequential quadratic programming (SQP) method for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. The optimization problem is assumed to be stated in the form

    NP    minimize_x or maximize_x   f(x)
          subject to   F(x) ∼ b1
                       Gx ∼ b2
                       l ≤ x ≤ u,                                      (1)

where x ∈ ℝⁿ, f(x) is a linear or nonlinear smooth objective function, l and u are constant lower and upper bounds, F(x) is a set of nonlinear constraint functions, G is a sparse matrix, ∼ is a vector of relational operators (≤, ≥ or =), and b1 and b2 are right-hand side constants. F(x) ∼ b1 are the nonlinear constraints of the model and Gx ∼ b2 form the linear constraints.

The gradients of f and Fi are automatically provided by GAMS, using its automatic differentiation engine.

∗Department of Mathematics, University of California, San Diego, La Jolla, CA 92093-0112.
†Department of EESOR, Stanford University, Stanford, CA 94305-4023.
‡ARKI Consulting and Development, Bagsvaerd, Denmark.
§GAMS Development Corp., Washington DC.


The bounds may include special values -INF or +INF to indicate lj = −∞ or uj = +∞ for appropriate j. Free variables have both bounds infinite and fixed variables have lj = uj.
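As a small illustration (a sketch of ours, not from the source; the variable names are invented), these GAMS statements set the three bound patterns just described:

    variables xfree, xupper, xfixed;

    * free variable: both bounds infinite (the default for "variables")
    xfree.lo = -INF;  xfree.up = +INF;

    * only an upper bound: l = -inf, u = 10
    xupper.lo = -INF; xupper.up = 10;

    * fixed variable: lower bound equals upper bound
    xfixed.lo = 2;    xfixed.up = 2;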

1.1 Problem Types

If the nonlinear functions are absent, the problem is a linear program (LP) and SNOPT applies the primal simplex method [2]. Sparse basis factors are maintained by LUSOL [11] as in MINOS [14].

If only the objective is nonlinear, the problem is linearly constrained (LC) and tends to solve more easily than the general case with nonlinear constraints (NC). Note that GAMS models have an objective variable instead of an objective function. The GAMS/SNOPT link will try to substitute out the objective variable and reformulate the model such that SNOPT will see a true objective function.

For both linearly and nonlinearly constrained problems SNOPT applies a sparse sequential quadratic programming (SQP) method [6], using limited-memory quasi-Newton approximations to the Hessian of the Lagrangian. The merit function for steplength control is an augmented Lagrangian, as in the dense SQP solver NPSOL [7, 9].

In general, SNOPT requires less matrix computation than NPSOL and fewer evaluations of the functions than the nonlinear algorithms in MINOS [12, 13]. It is suitable for nonlinear problems with thousands of constraints and variables, but not thousands of degrees of freedom. (Thus, for large problems there should be many constraints and bounds, and many of them should be active at a solution.)

1.2 Selecting the SNOPT Solver

The GAMS system can be instructed to use the SNOPT solver by incorporating the following option in the GAMS model:

option NLP=SNOPT;

If the model contains non-smooth functions like abs(x) or max(x, y), you can try to get it solved by SNOPT using

option DNLP=SNOPT;

These models have discontinuous derivatives, however, and SNOPT was not designed for solving such models. Discontinuities in the gradients can sometimes be tolerated if they are not too close to an optimum.
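A common modeling workaround (our own suggestion, not from the source, and not specific to SNOPT) is to replace a non-smooth function by a smooth approximation, at the cost of a small modeling error controlled by a parameter delta:

    scalar delta  smoothing parameter /1e-6/;

    variables x, y, absx, maxxy;
    equations eabs, emax;

    * smooth stand-in for abs(x): sqrt(x^2 + delta)
    eabs.. absx =e= sqrt( sqr(x) + delta );

    * smooth stand-in for max(x,y): 0.5*(x + y + |x-y|), smoothed the same way
    emax.. maxxy =e= 0.5*( x + y + sqrt( sqr(x - y) + delta ) );

The resulting model is an NLP with smooth derivatives everywhere, so option NLP=SNOPT; applies.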

It is also possible to specify NLP=SNOPT or DNLP=SNOPT on the command line, as in:

> gamslib chem
> gams chem nlp=snopt


2 Description of the method

Here we briefly describe the main features of the SQP algorithm used in SNOPT and introduce some terminology. The SQP algorithm is fully described in [6].

2.1 Objective function reconstruction

The first step GAMS/SNOPT performs is to try to reconstruct the objective function. In GAMS, optimization models minimize or maximize an objective variable. SNOPT, however, works with an objective function. One way of dealing with this is to add a dummy linear function with just the objective variable. Consider the following GAMS fragment:

obj.. z =e= sum(i, sqr(resid(i)));

model m /all/;
solve m using nlp minimizing z;

This can be cast in form (1) by saying minimize z subject to z = Σᵢ resid²ᵢ and the other constraints in the model. Although simple, this approach is not always preferable. Especially when all constraints are linear it is important to minimize Σᵢ resid²ᵢ directly. This can be achieved by a simple reformulation: z can be substituted out. The substitution mechanism carries out the substitution if all of the following conditions hold:

• the objective variable z is a free continuous variable (no bounds are defined on z),

• z appears linearly in the objective function,

• the objective function is formulated as an equality constraint,

• z is only present in the objective function and not in other constraints.
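For example (an illustrative sketch of ours, not from the source), in the fragment below the objective variable z satisfies all four conditions, while merely mentioning z in the second constraint would violate the last condition and block the substitution:

    variables z, x;
    equations obj, side;

    * z is free, appears linearly, and is defined by an equality: substitutable
    obj..  z =e= sqr(x - 1);

    * if z also appeared here, e.g. "side.. x + z =l= 10;",
    * the last condition would fail and z would be kept as a variable
    side.. x =l= 10;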

For many models it is very important that the nonlinear objective function be used by SNOPT. For instance, the model chem.gms from the model library solves in 16 iterations. When we add the bound

energy.lo = 0;

on the objective variable energy, thus preventing it from being substituted out, SNOPT will not be able to find a feasible point for the given starting point.

This reformulation mechanism has been extended for substitutions along the diagonal. For example, the GAMS model

variables x,y,z;
equations e1,e2;
e1.. z =e= y;
e2.. y =e= sqr(1+x);
model m /all/;
option nlp=snopt;
solve m using nlp minimizing z;


will be reformulated as an unconstrained optimization problem

    minimize f(x) = (1 + x)².

These additional reformulations can be turned off by using the statement option reform = 0; (see §4.1).

2.2 Constraints and slack variables

The m general constraints of the problem (1) are formed by F(x) ∼ b1 and Gx ∼ b2. SNOPT will add to each general constraint a slack variable si with appropriate bounds. The problem defined by (1) can therefore be rewritten in the following equivalent form:

    minimize_{x,s}   f(x)

    subject to   ( F(x) )                ( x )
                 (  Gx  ) − s = 0,   l ≤ ( s ) ≤ u,

where a maximization problem is cast into a minimization by multiplying the objective function by −1.

The linear and nonlinear general constraints become equalities of the form F(x) − sN = 0 and Gx − sL = 0, where sL and sN are known as the linear and nonlinear slacks.

2.3 Major iterations

The basic structure of SNOPT's solution algorithm involves major and minor iterations. The major iterations generate a sequence of iterates (xk) that satisfy the linear constraints and converge to a point that satisfies the first-order conditions for optimality. At each iterate xk a QP subproblem is used to generate a search direction towards the next iterate xk+1. The constraints of the subproblem are formed from the linear constraints Gx − sL = 0 and the nonlinear constraint linearization

    F(xk) + F′(xk)(x − xk) − sN = 0,

where F′(xk) denotes the Jacobian: a matrix whose rows are the first derivatives of F(x) evaluated at xk. The QP constraints therefore comprise the m linear constraints

    F′(xk)x − sN = −F(xk) + F′(xk)xk,
    Gx − sL = 0,

where x and s are bounded by l and u as before. If the m × n matrix A and m-vector b are defined as

    A = ( F′(xk) )             b = ( −F(xk) + F′(xk)xk )
        (    G   )    and          (          0        ),


then the QP subproblem can be written as

    minimize_{x,s}   q(x)   subject to   Ax − s = b,   l ≤ (x, s) ≤ u,   (2)

where q(x) is a quadratic approximation to a modified Lagrangian function [6].

2.4 Minor iterations

Solving the QP subproblem is itself an iterative procedure, with the minor iterations of an SQP method being the iterations of the QP method. At each minor iteration, the constraints Ax − s = b are (conceptually) partitioned into the form

BxB + SxS + NxN = b,

where the basis matrix B is square and nonsingular. The elements of xB, xS and xN are called the basic, superbasic and nonbasic variables respectively; they are a permutation of the elements of x and s. At a QP solution, the basic and superbasic variables will lie somewhere between their bounds, while the nonbasic variables will be equal to one of their upper or lower bounds. At each iteration, xS is regarded as a set of independent variables that are free to move in any desired direction, namely one that will improve the value of the QP objective (or the sum of infeasibilities). The basic variables are then adjusted in order to ensure that (x, s) continues to satisfy Ax − s = b. The number of superbasic variables (nS say) therefore indicates the number of degrees of freedom remaining after the constraints have been satisfied. In broad terms, nS is a measure of how nonlinear the problem is. In particular, nS will always be zero at a solution for LP problems.

If it appears that no improvement can be made with the current definition of B, S and N, a nonbasic variable is selected to be added to S, and the process is repeated with the value of nS increased by one. At all stages, if a basic or superbasic variable encounters one of its bounds, the variable is made nonbasic and the value of nS is decreased by one.

Associated with each of the m equality constraints in Ax − s = b are the dual variables πi. Similarly, each variable in (x, s) has an associated reduced gradient dj. The reduced gradients for the variables x are the quantities g − Aᵀπ, where g is the gradient of the QP objective, and the reduced gradients for the slacks are the dual variables π. The QP subproblem is optimal if dj ≥ 0 for all nonbasic variables at their lower bounds, dj ≤ 0 for all nonbasic variables at their upper bounds, and dj = 0 for other variables, including superbasics. In practice, an approximate QP solution is found by relaxing these conditions on dj (see the Minor optimality tolerance in §4.3).


2.5 The merit function

After a QP subproblem has been solved, new estimates of the NP solution are computed using a linesearch on the augmented Lagrangian merit function

    M(x, s, π) = f(x) − πᵀ(F(x) − sN) + ½ (F(x) − sN)ᵀ D (F(x) − sN),   (3)

where D is a diagonal matrix of penalty parameters. If (xk, sk, πk) denotes the current solution estimate and (x̂k, ŝk, π̂k) denotes the optimal QP solution, the linesearch determines a step αk (0 < αk ≤ 1) such that the new point

    (xk+1, sk+1, πk+1) = (xk, sk, πk) + αk (x̂k − xk, ŝk − sk, π̂k − πk)   (4)

gives a sufficient decrease in the merit function. When necessary, the penalties in D are increased by the minimum-norm perturbation that ensures descent for M [9]. As in NPSOL, sN is adjusted to minimize the merit function as a function of s prior to the solution of the QP subproblem. For more details, see [7, 3].

2.6 Treatment of constraint infeasibilities

SNOPT makes explicit allowance for infeasible constraints. Infeasible linear constraints are detected first by solving a problem of the form

    FLP   minimize_{x,v,w}   eᵀ(v + w)
          subject to   Gx − v + w ∼ b2,   l ≤ x ≤ u,   v, w ≥ 0,

where e is a vector of ones. This is equivalent to minimizing the sum of the general linear constraint violations subject to the simple bounds. (In the linear programming literature, the approach is often called elastic programming. We also describe it as minimizing the ℓ1 norm of the infeasibilities.)

If the linear constraints are infeasible (v ≠ 0 or w ≠ 0), SNOPT terminates without computing the nonlinear functions.

If the linear constraints are feasible, all subsequent iterates satisfy the linear constraints. (Such a strategy allows linear constraints to be used to define a region in which the functions can be safely evaluated.) SNOPT proceeds to solve NP as given, using search directions obtained from a sequence of quadratic programming subproblems (2).

If a QP subproblem proves to be infeasible or unbounded (or if the dual variables π for the nonlinear constraints become large), SNOPT enters "elastic" mode and solves the problem

    NP(γ)   minimize_{x,v,w}   f(x) + γ eᵀ(v + w)

            subject to   ( F(x) − v + w )
                         (      Gx      ) ∼ b,   l ≤ x ≤ u,   v, w ≥ 0,


where γ is a nonnegative parameter (the elastic weight), and f(x) + γeᵀ(v + w) is called a composite objective. If γ is sufficiently large, this is equivalent to minimizing the sum of the nonlinear constraint violations subject to the linear constraints and bounds. A similar ℓ1 formulation of NP is fundamental to the Sℓ1QP algorithm of Fletcher [4]. See also Conn [1].

3 Starting points and advanced bases

A good starting point may be essential for solving nonlinear models. We show how such a starting point can be specified in a GAMS environment, and how SNOPT will use this information.

A related issue is the use of "restart" information in case a number of related models is solved in a row. Starting from an optimal point of a previous solve statement is often beneficial in such situations. In a GAMS environment this means reusing primal and dual information, which is stored in the .L and .M fields of variables and equations.

3.1 Starting points

To specify a starting point for SNOPT use the .L level values in GAMS. For example, to set all variables xi,j := 1 use x.l(i,j)=1;. The default level values are zero.

Setting a good starting point can be crucial for getting good results. As an (artificial) example consider the problem where we want to find the smallest circle that contains a number of points (xi, yi):

    Example   minimize_{r,a,b}   r
              subject to   (xi − a)² + (yi − b)² ≤ r²,   r ≥ 0.

This problem can be modeled in GAMS as follows.

set i points /p1*p10/;

parameters
   x(i)  x coordinates,
   y(i)  y coordinates;

* fill with random data
x(i) = uniform(1,10);
y(i) = uniform(1,10);

variables
   a  x coordinate of center of circle
   b  y coordinate of center of circle
   r  radius;

equations
   e(i)  points must be inside circle;

e(i).. sqr(x(i)-a) + sqr(y(i)-b) =l= sqr(r);


r.lo = 0;

model m /all/;
option nlp=snopt;
solve m using nlp minimizing r;

Without help, SNOPT will not be able to find an optimal solution. The problem will be declared infeasible. In this case, providing a good starting point is very easy. If we define

    xmin = minᵢ xi,   ymin = minᵢ yi,
    xmax = maxᵢ xi,   ymax = maxᵢ yi,

then good estimates are

    a = (xmin + xmax)/2,
    b = (ymin + ymax)/2,
    r = √((a − xmin)² + (b − ymin)²).

Thus we include in our model:

parameters xmin,ymin,xmax,ymax;
xmin = smin(i, x(i));
ymin = smin(i, y(i));
xmax = smax(i, x(i));
ymax = smax(i, y(i));

* set starting point
a.l = (xmin+xmax)/2;
b.l = (ymin+ymax)/2;
r.l = sqrt( sqr(a.l-xmin) + sqr(b.l-ymin) );

and now the model solves very easily.

Level values can also be set implicitly as a result of assigning bounds. When a variable is bounded away from zero, for instance by the statement Y.LO = 1;, the SOLVE statement will override the default level of zero of such a variable in order to make it feasible.

Note: another way to formulate the model would be to minimize r² instead of r. This allows SNOPT to solve the problem even with the default starting point.
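That alternative could look as follows (a sketch of ours, reusing x(i) and y(i) from the example above; the variable name rsq and the equation name e2 are our own):

    variables a, b, rsq  squared radius;
    equations e2(i)      points must be inside circle;

    e2(i).. sqr(x(i)-a) + sqr(y(i)-b) =l= rsq;

    rsq.lo = 0;

    model m2 /e2/;
    option nlp=snopt;
    solve m2 using nlp minimizing rsq;
    * recover the radius afterwards as sqrt(rsq.l)

Because the squared radius enters the constraints linearly, the constraint functions are convex quadratics in (a, b, rsq), which is an easier shape for the SQP method than the original formulation.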

3.2 Advanced basis

GAMS automatically passes on level values and basis information from one solve to the next. Thus, when we have two solve statements in a row, with just a few changes in between, SNOPT will typically need very few iterations to find an optimal solution in the second solve. For instance, when we add a second solve to the chem.gms model from the model library:


model mixer chemical mix for N2H4+O2 / all /;

solve mixer minimizing energy using nlp;
solve mixer minimizing energy using nlp;

we observe the following log:

[erwin@hamilton]$ gams chem nlp=snopt
GAMS 2.50A  Copyright (C) 1987-1999 GAMS Development. All rights reserved
Licensee: hamilton                                   G990622:1048CP-LNX
          GAMS Development
--- Starting compilation
--- chem.gms(48) 1 Mb
--- Starting execution
--- chem.gms(42) 1 Mb
--- Generating model MIXER
--- chem.gms(46) 1 Mb
--- 5 rows, 12 columns, and 37 non-zeroes.
--- Executing SNOPT

GAMS/SNOPT X86/LINUX version 5.3.4-007-035
P. E. Gill, UC San Diego
W. Murray and M. A. Saunders, Stanford University

Work space allocated -- 0.02 Mb

 Major Minor    Step   nObj Objective    Optimal    nS PD
     0     5 0.0E+00      1 3.292476E+01 2.1E-01     0 TF r
     1     4 1.0E+00      2 4.517191E+01 2.2E-01     1 TF n r
     2    10 8.6E-02      5 4.533775E+01 1.7E-01     6 TF s
     3     3 1.0E+00      6 4.608439E+01 7.6E-02     6 TF
     4     2 1.0E+00      7 4.667813E+01 1.4E-01     5 TF
     5     2 1.0E+00      8 4.751149E+01 2.2E-02     6 TF
     6     3 1.0E+00      9 4.757024E+01 2.1E-02     4 TF
     7     2 7.1E-01     11 4.763634E+01 3.3E-02     5 TF
     8     3 1.0E+00     12 4.768395E+01 3.3E-02     7 TF
     9     3 4.6E-01     14 4.769958E+01 1.9E-02     5 TF
    10     2 1.0E+00     15 4.770539E+01 1.5E-02     6 TF

 Major Minor    Step   nObj Objective    Optimal    nS PD
    11     1 1.0E+00     16 4.770639E+01 6.2E-04     6 TF
    12     1 1.0E+00     17 4.770650E+01 5.0E-03     6 TF
    13     1 1.0E+00     18 4.770651E+01 1.6E-04     6 TF
    14     1 1.0E+00     19 4.770651E+01 1.8E-05     6 TF
    15     1 1.0E+00     20 4.770651E+01 2.7E-05     6 TF
    16     1 1.0E+00     21 4.770651E+01 7.6E-07     6 TT

EXIT - Optimal Solution found.

--- Restarting execution
--- chem.gms(46) 1 Mb
--- Reading solution for model MIXER
--- chem.gms(46) 1 Mb
--- Generating model MIXER
--- chem.gms(47) 1 Mb
--- 5 rows, 12 columns, and 37 non-zeroes.
--- Executing SNOPT

GAMS/SNOPT X86/LINUX version 5.3.4-007-035
P. E. Gill, UC San Diego
W. Murray and M. A. Saunders, Stanford University

Work space allocated -- 0.02 Mb

 Major Minor    Step   nObj Objective    Optimal    nS PD
     0     0 0.0E+00      1 4.770651E+01 7.4E-07     0 TT r

EXIT - Optimal Solution found.

--- Restarting execution
--- chem.gms(47) 1 Mb
--- Reading solution for model MIXER
--- chem.gms(47) 1 Mb
*** Status: Normal completion
[erwin@hamilton]$

The first solve takes 16 iterations, while the second solve needs exactly zero iterations.

Basis information is passed on using the marginals of the variables and equations. In general the rule is:

    X.M = 0    basic
    X.M ≠ 0    nonbasic if level value is at bound, superbasic otherwise

A marginal value of EPS means that the numerical value of the marginal is zero, but that the status is nonbasic or superbasic. The user can specify a basis by assigning zero or nonzero values to the .M values. It is further noted that if too many .M values are zero, the basis is rejected. This happens for instance when two subsequent models are too different. This decision is made based on the value of the bratio option (see §4.1).
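Following that rule, a basis can be suggested by hand (an illustrative sketch of ours; the variable names x and y are hypothetical):

    * x should start basic: zero marginal
    x.m(i) = 0;

    * y should start nonbasic at its lower bound:
    * level at the bound plus a nonzero (EPS) marginal
    y.l(j) = y.lo(j);
    y.m(j) = EPS;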

4 Options

In many cases NLP models can be solved with GAMS/SNOPT without using solver options. For special situations it is possible to specify non-standard values for some or all of the options.

4.1 GAMS options

The following GAMS options affect the behavior of SNOPT.

NLP
This option selects the NLP solver. Example: option NLP=SNOPT;. See also §1.2.

DNLP
Selects the DNLP solver for models with discontinuous or non-differentiable functions. Example: option DNLP=SNOPT;. See also §1.2.

RMINLP
Selects the Relaxed Non-linear Mixed-Integer (RMINLP) solver. By relaxing the integer conditions in an MINLP, the model becomes effectively an NLP. Example: option RMINLP=SNOPT;. See also §1.2.


iterlim
Sets the (minor) iteration limit. Example: option iterlim=50000;. The default is 10000. SNOPT will stop as soon as the number of minor iterations exceeds the iteration limit. In that case the current solution will be reported.

reslim
Sets the time limit or resource limit. Depending on the architecture this is wall clock time or CPU time. SNOPT will stop as soon as more than reslim seconds have elapsed since SNOPT started. The current solution will be reported in this case. Example: option reslim = 600;. The default is 1000 seconds.

domlim
Sets the domain violation limit. Domain errors are evaluation errors in the nonlinear functions. An example of a domain error is trying to evaluate √x for x < 0. Other examples include taking logs of negative numbers, and evaluating x^y for x < 0 (x^y is evaluated as exp(y log x)). When such a situation occurs the number of domain errors is increased by one, and SNOPT will stop if this number exceeds the limit. If the limit has not been reached, a reasonable number is returned (e.g., in the case of √x, x < 0 a zero is passed back) and SNOPT is asked to continue. In many cases SNOPT will be able to recover from these domain errors, especially when they happen at some intermediate point. Nevertheless it is best to add appropriate bounds or linear constraints to ensure that these domain errors don't occur. For example, when an expression log(x) is present in the model, add a statement like x.lo = 0.001;. Example: option domlim=100;. The default value is 0.

bratio
Basis acceptance test. When several models are solved in a row, GAMS automatically passes dual information to SNOPT so that it can reconstruct an advanced basis. When too many new variables or constraints enter the model, it may be better not to use existing basis information, but to crash a new basis instead. The bratio determines how quickly an existing basis is discarded. A value of 1.0 will discard any basis, while a value of 0.0 will retain any basis. Example: option bratio=1.0;. Default: bratio = 0.25.

sysout
Debug listing. When turned on, extra information printed by SNOPT will be added to the listing file. Example: option sysout=on;. Default: sysout = off.

work
The work option sets the amount of memory SNOPT can use. By default an estimate is used based on the model statistics (number of (nonlinear) equations, number of (nonlinear) variables, number of (nonlinear) nonzeroes, etc.). In most cases this is sufficient to solve the model. In some extreme cases SNOPT may need more memory, and the user can specify this with this option. For historical reasons work is specified in "double words" or 8-byte quantities. For example, option work=100000; will ask for 0.76 MB (a megabyte being defined as 1024 × 1024 bytes).

reform
This option will instruct the reformulation mechanism described in §2.1 to substitute out equality equations. The default value of 100 will cause the procedure to try further substitutions along the diagonal after the objective variable has been removed. Any other value will prohibit this diagonal procedure. Example: option reform = 0;. Default: reform = 100.

4.2 Model suffixes

A model identifier in GAMS can have several suffixes to which you can assign values. A small GAMS fragment illustrates this:

model m /all/;
m.iterlim = 3000;
solve m minimizing z using nlp;

Options set by assigning to the suffixed model identifier override the global options. For example,

model m /all/;
m.iterlim = 3000;
option iterlim = 100;
solve m minimizing z using nlp;

will use an iteration limit of 3000 for this solve.

m.iterlim
Sets the iteration limit. Overrides the global iteration limit. Example: m.iterlim=50000; The default is 10000. See also §4.1.

m.reslim
Sets the resource or time limit. Overrides the global resource limit. Example: m.reslim=600; The default is 1000 seconds. See also §4.1.

m.bratio
Sets the basis acceptance test parameter. Overrides the global setting. Example: m.bratio=1.0; The default is 0.25. See also §4.1.

m.scaleopt
Whether or not to scale the model using user-supplied scale factors. The user can provide scale factors using the .scale variable and equation suffix. For example, x.scale(i,j) = 100; will assign a scale factor of 100 to all xi,j variables. The variables SNOPT will see are scaled by a factor 1/variable scale, so the modeler should use scale factors that represent the order of magnitude of the variable. In that case SNOPT will see variables that are scaled around 1.0. Similarly, equation scales can be assigned to equations, which are scaled by a factor 1/equation scale. Example: m.scaleopt=1; will turn scaling on. The default is not to use scaling, and the default scale factors are 1.0. Automatic scaling is provided by the SNOPT option scale option.
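A small sketch of user scaling (our own example; the names flow, defflow, m and z are hypothetical):

    * flows are of order 1e4, the balance equations of order 1e6
    flow.scale(i,j)  = 1e4;
    defflow.scale(i) = 1e6;

    * turn user scaling on for model m
    m.scaleopt = 1;
    solve m using nlp minimizing z;

SNOPT then works with flow/1e4 and defflow/1e6 internally, so both are of order 1.0.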

m.optfile
Sets whether or not to use a solver option file. Solver-specific SNOPT options are specified in a file called snopt.opt, see §4.3. To tell SNOPT to use this file, add the statement m.optfile = 1;. The default is not to use an option file.

m.workspace
The workspace option sets the amount of memory that SNOPT can use. By default an estimate is used based on the model statistics (number of (nonlinear) equations, number of (nonlinear) variables, number of (nonlinear) nonzeroes, etc.). In most cases this is sufficient to solve the model. In some extreme cases SNOPT may need more memory, and the user can specify this with this option. The amount of memory is specified in MB. Example: m.workspace = 5;.

4.3 SNOPT options

SNOPT options are specified in a file called snopt.opt which should reside in the working directory (this is the project directory when running models from the IDE). To tell SNOPT to read this file, use the statement m.optfile = 1; in the model (see §4.2).

An example of a valid snopt.opt is:

Hessian full memory
Hessian frequency 20

Users familiar with the SNOPT distribution from Stanford University will notice that the begin and end keywords are missing. These markers are optional in GAMS/SNOPT and will be ignored. Therefore, the following option file is also accepted:

begin
Hessian full memory
Hessian frequency 20
end

All options are case-insensitive. A line is a comment line if it starts with an asterisk, *, in column one.

Here follows a description of all SNOPT options that are possibly useful in a GAMS environment:


Check frequency i
Every ith minor iteration after the most recent basis factorization, a numerical test is made to see if the current solution x satisfies the general linear constraints (including linearized nonlinear constraints, if any). The constraints are of the form Ax − s = b, where s is the set of slack variables. To perform the numerical test, the residual vector r = b − Ax + s is computed. If the largest component of r is judged to be too large, the current basis is refactorized and the basic variables are recomputed to satisfy the general constraints more accurately.
Check frequency 1 is useful for debugging purposes, but otherwise this option should not be needed.
Default: check frequency 60.

Crash option i
Except on restarts, a CRASH procedure is used to select an initial basis from certain rows and columns of the constraint matrix ( A  −I ). The Crash option i determines which rows and columns of A are eligible initially, and how many times CRASH is called. Columns of −I are used to pad the basis where necessary.

Crash option 0: The initial basis contains only slack variables: B = I.
Crash option 1: CRASH is called once, looking for a triangular basis in all rows and columns of the matrix A.
Crash option 2: CRASH is called twice (if there are nonlinear constraints). The first call looks for a triangular basis in linear rows, and the iteration proceeds with simplex iterations until the linear constraints are satisfied. The Jacobian is then evaluated for the first major iteration and CRASH is called again to find a triangular basis in the nonlinear rows (retaining the current basis for linear rows).
Crash option 3: CRASH is called up to three times (if there are nonlinear constraints). The first two calls treat linear equalities and linear inequalities separately. As before, the last call treats nonlinear rows before the first major iteration.

If i ≥ 1, certain slacks on inequality rows are selected for the basis first. (If i ≥ 2, numerical values are used to exclude slacks that are close to a bound.) CRASH then makes several passes through the columns of A, searching for a basis matrix that is essentially triangular. A column is assigned to "pivot" on a particular row if the column contains a suitably large element in a row that has not yet been assigned. (The pivot elements ultimately form the diagonals of the triangular basis.) For remaining unassigned rows, slack variables are inserted to complete the basis.
Default: Crash option 3 for linearly constrained problems and Crash option 0 for problems with nonlinear constraints.

Crash tolerance r
The Crash tolerance r allows the starting procedure CRASH to ignore certain "small" nonzeros in each column of A. If amax is the largest element in column j, other nonzeros aij in the column are ignored if |aij| ≤ amax × r. (To be meaningful, r should be in the range 0 ≤ r < 1.)
When r > 0.0, the basis obtained by CRASH may not be strictly triangular, but it is likely to be nonsingular and almost triangular. The intention is to obtain a starting basis containing more columns of A and fewer (arbitrary) slacks. A feasible solution may be reached sooner on some problems.
For example, suppose the first m columns of A are the matrix shown under LU factor tolerance; i.e., a tridiagonal matrix with entries −1, 4, −1. To help CRASH choose all m columns for the initial basis, we would specify Crash tolerance r for some value of r > 1/4.
Default: Crash tolerance 0.1

Elastic weight ω
This parameter, denoted by ω, determines the initial weight γ associated with problem NP(γ).
At any given major iteration k, elastic mode is started if the QP subproblem is infeasible, or the QP dual variables are larger in magnitude than ω(1 + ‖g(xk)‖2), where g is the objective gradient. In either case, the QP is re-solved in elastic mode with γ = ω(1 + ‖g(xk)‖2).
Thereafter, γ is increased (subject to a maximum allowable value) at any point that is optimal for problem NP(γ), but not feasible for NP. After the rth increase, γ = ω·10^r (1 + ‖g(xk1)‖2), where xk1 is the iterate at which γ was first needed.
Default: Elastic weight 10000.0

Expand frequency i
This option is part of the EXPAND anti-cycling procedure [8] designed to make progress even on highly degenerate problems.
For linear models, the strategy is to force a positive step at every iteration, at the expense of violating the bounds on the variables by a small amount. Suppose that the Minor feasibility tolerance is δ. Over a period of i iterations, the tolerance actually used by SNOPT increases from 0.5δ to δ (in steps of 0.5δ/i).
For nonlinear models, the same procedure is used for iterations in which there is only one superbasic variable. (Cycling can occur only when the current solution is at a vertex of the feasible region.) Thus, zero steps are allowed if there is more than one superbasic variable, but otherwise positive steps are enforced.
Increasing the expand frequency helps reduce the number of slightly infeasible nonbasic variables (most of which are eliminated during a resetting procedure). However, it also diminishes the freedom to choose a large pivot element (see Pivot tolerance).
Default: Expand frequency 10000
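The growing tolerance can be sketched as follows (a hypothetical helper; SNOPT's internal bookkeeping differs in detail):

```python
def working_feasibility_tol(delta, i, k):
    """Working bound tolerance at minor iteration k under the EXPAND
    scheme: it ramps from 0.5*delta up toward delta in steps of
    0.5*delta/i over a period of i iterations, then is reset."""
    return 0.5 * delta + (k % i) * (0.5 * delta / i)
```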

Factorization frequency k


At most k basis changes will occur between factorizations of the basis matrix.

• With linear programs, the basis factors are usually updated every iteration. The default k is reasonable for typical problems. Smaller values (say k = 75 or k = 50) may be more efficient on problems that are rather dense or poorly scaled.

• When the problem is nonlinear, fewer basis updates will occur as an optimum is approached. The number of iterations between basis factorizations will therefore increase. During these iterations a test is made regularly (according to the Check frequency) to ensure that the general constraints are satisfied. If necessary the basis will be refactorized before the limit of k updates is reached.

Default: Factorization frequency 100 for linear programs and Factorization frequency 50 for nonlinear models.

Feasibility tolerance t
See Minor feasibility tolerance.
Default: Feasibility tolerance 1.0e-6

Feasible point only
This option means “Ignore the objective function” while finding a feasible point for the linear and nonlinear constraints. It can be used to check that the nonlinear constraints are feasible.
Default: turned off.

Feasible exit
If SNOPT is about to terminate with nonlinear constraint violations, the option Feasible exit requests further effort to satisfy the nonlinear constraints while ignoring the objective function.
Default: turned off.

Hessian Full memory
This option selects the full storage method for storing and updating the approximate Hessian. (SNOPT uses a quasi-Newton approximation to the Hessian of the Lagrangian. A BFGS update is applied after each major iteration.)
If Hessian Full memory is specified, the approximate Hessian is treated as a dense matrix and the BFGS updates are applied explicitly. This option is most efficient when the number of nonlinear variables n1 is not too large (say, less than 75). In this case, the storage requirement is fixed and one can expect 1-step Q-superlinear convergence to the solution.
Default: turned on when the number of nonlinear variables n1 ≤ 75.

Hessian Limited memory
This option selects the limited memory storage method for storing and


updating the approximate Hessian. (SNOPT uses a quasi-Newton approximation to the Hessian of the Lagrangian. A BFGS update is applied after each major iteration.)
Hessian Limited memory should be used on problems where the number of nonlinear variables n1 is very large. In this case a limited-memory procedure is used to update a diagonal Hessian approximation Hr a limited number of times. (Updates are accumulated as a list of vector pairs. They are discarded at regular intervals after Hr has been reset to their diagonal.)
Default: turned on when the number of nonlinear variables n1 > 75.

Hessian frequency i
If Hessian Full is selected and i BFGS updates have already been carried out, the Hessian approximation is reset to the identity matrix. (For certain problems, occasional resets may improve convergence, but in general they should not be necessary.)
Hessian Full memory and Hessian frequency = 20 have a similar effect to Hessian Limited memory and Hessian updates = 20 (except that the latter retains the current diagonal during resets).
Default: Hessian frequency 99999999 (i.e. never).

Hessian updates i
If Hessian Limited memory is selected and i BFGS updates have already been carried out, all but the diagonal elements of the accumulated updates are discarded and the updating process starts again.
Broadly speaking, the more updates stored, the better the quality of the approximate Hessian. However, the more vectors stored, the greater the cost of each QP iteration. The default value is likely to give a robust algorithm without significant expense, but faster convergence can sometimes be obtained with significantly fewer updates (e.g., i = 5).
Default: Hessian updates 20 (only when the limited memory storage model is used).

Infeasible exit ok
If SNOPT is about to terminate with nonlinear constraint violations, the companion option Feasible exit requests further effort to satisfy the nonlinear constraints while ignoring the objective function. Infeasible exit ok suppresses this extra effort.
Default: turned on.

Iterations limit k
This is the maximum number of minor iterations allowed (i.e., iterations of the simplex method or the QP algorithm), summed over all major iterations. This option overrides the GAMS iterlim option.
Default: specified by GAMS.

Linesearch tolerance t
This controls the accuracy with which a steplength will be located along


the direction of search each iteration. At the start of each linesearch a target directional derivative for the merit function is identified. This parameter determines the accuracy to which this target value is approximated.

• t must be a real value in the range 0.0 ≤ t ≤ 1.0.

• The default value t = 0.9 requests just moderate accuracy in the linesearch.

• If the nonlinear functions are cheap to evaluate (this is usually the case for GAMS models), a more accurate search may be appropriate; try t = 0.1, 0.01 or 0.001. The number of major iterations might decrease.

• If the nonlinear functions are expensive to evaluate, a less accurate search may be appropriate. In the case of running under GAMS where all gradients are known, try t = 0.99. (The number of major iterations might increase, but the total number of function evaluations may decrease enough to compensate.)

Default: Linesearch tolerance 0.9.

Log frequency k
See Print frequency.
Default: Log frequency 1

LU factor tolerance r1

LU update tolerance r2

These tolerances affect the stability and sparsity of the basis factorization B = LU during refactorization and updating, respectively. They must satisfy r1, r2 ≥ 1.0. The matrix L is a product of matrices of the form

    ( 1     )
    ( µ   1 )

where the multipliers µ satisfy |µ| ≤ ri. Smaller values of ri favor stability, while larger values favor sparsity.

• For large and relatively dense problems, r1 = 5.0 (say) may give a useful improvement in stability without impairing sparsity to a serious degree.

• For certain very regular structures (e.g., band matrices) it may be necessary to reduce r1 and/or r2 in order to achieve stability. For


example, if the columns of A include a submatrix of the form

    (  4  −1                 )
    ( −1   4  −1             )
    (     −1   4  −1         )
    (        ...  ...  ...   )
    (         −1   4  −1     )
    (             −1   4     )

both r1 and r2 should be in the range 1.0 ≤ ri < 4.0.

Defaults for linear models: LU factor tolerance 100.0 and LU update tolerance 10.0. The defaults for nonlinear models are LU factor tolerance 5.0 and LU update tolerance 5.0.

LU density tolerance r1
The density tolerance r1 is used during LU factorization of the basis matrix. Columns of L and rows of U are formed one at a time, and the remaining rows and columns of the basis are altered appropriately. At any stage, if the density of the remaining matrix exceeds r1, the Markowitz strategy for choosing pivots is terminated. The remaining matrix is factored by a dense LU procedure. Raising the density tolerance towards 1.0 may give slightly sparser LU factors, with a slight increase in factorization time.
Default: LU density tolerance 0.6

LU singularity tolerance r2
The singularity tolerance r2 helps guard against ill-conditioned basis matrices. When the basis is refactorized, the diagonal elements of U are tested as follows: if |Ujj| ≤ r2 or |Ujj| < r2 maxi |Uij|, the jth column of the basis is replaced by the corresponding slack variable. (This is most likely to occur after a restart, or at the start of a major iteration.)
In some cases, the Jacobian may converge to values that make the basis exactly singular. (For example, a whole row of the Jacobian could be zero at an optimal solution.) Before exact singularity occurs, the basis could become very ill-conditioned and the optimization could progress very slowly (if at all). Setting a larger tolerance r2 = 1.0e-5, say, may help cause a judicious change of basis.
Default: LU singularity tolerance 3.7e-11 for most machines. This value corresponds to ε^(2/3), where ε is the relative machine precision.
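The diagonal test can be sketched as follows (a hypothetical helper, not SNOPT's code; U is given as a dense list of rows):

```python
def slack_replacements(U, r2):
    """Column indices j of upper-triangular U failing the singularity
    test |U_jj| <= r2  or  |U_jj| < r2 * max_i |U_ij|; SNOPT would
    replace these basis columns by the corresponding slacks."""
    bad = []
    for j in range(len(U)):
        ujj = abs(U[j][j])
        colmax = max(abs(U[i][j]) for i in range(len(U)))
        if ujj <= r2 or ujj < r2 * colmax:
            bad.append(j)
    return bad
```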

Major feasibility tolerance εr

This specifies how accurately the nonlinear constraints should be satisfied. The default value of 1.0e-6 is appropriate when the linear and nonlinear constraints contain data to about that accuracy.


Let rowerr be the maximum nonlinear constraint violation, normalized by the size of the solution. It is required to satisfy

    rowerr = maxi violi / ‖x‖ ≤ εr,    (5)

where violi is the violation of the ith nonlinear constraint (i = 1 : nnCon, nnCon being the number of nonlinear constraints).
In the GAMS/SNOPT iteration log, rowerr appears as the quantity labeled “Feasibl”. If some of the problem functions are known to be of low accuracy, a larger Major feasibility tolerance may be appropriate.
Default: Major feasibility tolerance 1.0e-6.
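In outline, the test (5) amounts to the following (a sketch; the exact norm SNOPT uses for ‖x‖ is not spelled out here, so a simple clamped infinity norm is assumed):

```python
def rowerr(viol, x):
    """Maximum nonlinear constraint violation normalized by the size of
    the solution, as in (5). viol[i] is the violation of constraint i."""
    xnorm = max(1.0, max(abs(v) for v in x))  # assumed normalization
    return max(viol) / xnorm
```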

Major optimality tolerance εd

This specifies the final accuracy of the dual variables. On successful termination, SNOPT will have computed a solution (x, s, π) such that

    maxgap = maxj gapj / ‖π‖ ≤ εd,    (6)

where gapj is an estimate of the complementarity gap for variable j (j = 1 : n + m). The gaps are computed from the final QP solution using the reduced gradients dj = gj − πT aj (where gj is the jth component of the objective gradient, aj is the associated column of the constraint matrix ( A  −I ), and π is the set of QP dual variables):

    gapj =  dj min{xj − lj, 1}    if dj ≥ 0;
           −dj min{uj − xj, 1}    if dj < 0.

In the GAMS/SNOPT iteration log, maxgap appears as the quantity labeled “Optimal”.
Default: Major optimality tolerance 1.0e-6.
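The two-case formula for gapj can be written directly (a sketch, not SNOPT's code):

```python
def comp_gap(d, x, l, u):
    """Complementarity gap for one variable: d is the reduced gradient
    d_j = g_j - pi'a_j, and (l, u) are the bounds on x_j."""
    if d >= 0:
        return d * min(x - l, 1.0)
    return -d * min(u - x, 1.0)
```

A variable sitting at a bound with a correctly signed reduced gradient contributes a zero gap; a large gap flags a variable that is strictly between its bounds yet still has a significant reduced gradient.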

Major iterations limit k
This is the maximum number of major iterations allowed. It is intended to guard against an excessive number of linearizations of the constraints.
Default: Major iterations limit max{1000, 3 max{m,n}}.

Major print level p
This controls the amount of output to the GAMS listing file each major iteration. This output is only visible if the sysout option is turned on (see §4.1). Major print level 1 gives normal output for linear and nonlinear problems, and Major print level 11 gives additional details of the Jacobian factorization that commences each major iteration.
In general, the value being specified may be thought of as a binary number of the form

Major print level JFDXbs

where each letter stands for a digit that is either 0 or 1 as follows:


s  a single line that gives a summary of each major iteration. (This entry in JFDXbs is not strictly binary since the summary line is printed whenever JFDXbs ≥ 1.)

b  BASIS statistics, i.e., information relating to the basis matrix whenever it is refactorized. (This output is always provided if JFDXbs ≥ 10.)

X  xk, the nonlinear variables involved in the objective function or the constraints.

D  πk, the dual variables for the nonlinear constraints.

F  F(xk), the values of the nonlinear constraint functions.

J  J(xk), the Jacobian.

To obtain output of any items JFDXbs, set the corresponding digit to 1, otherwise to 0.
If J=1, the Jacobian will be output column-wise at the start of each major iteration. Column j will be preceded by the value of the corresponding variable xj and a key to indicate whether the variable is basic, superbasic or nonbasic. (Hence if J=1, there is no reason to specify X=1 unless the objective contains more nonlinear variables than the Jacobian.) A typical line of output is

3 1.250000D+01 BS 1 1.00000E+00 4 2.00000E+00

which would mean that x3 is basic at value 12.5, and the third column of the Jacobian has elements of 1.0 and 2.0 in rows 1 and 4.
Major print level 0 suppresses most output, except for error messages.
Default: Major print level 00001

Major step limit r
This parameter limits the change in x during a linesearch. It applies to all nonlinear problems, once a “feasible solution” or “feasible subproblem” has been found.

1. A linesearch determines a step α over the range 0 < α ≤ β, where β is 1 if there are nonlinear constraints, or the step to the nearest upper or lower bound on x if all the constraints are linear. Normally, the first steplength tried is α1 = min(1, β).

2. In some cases, such as f(x) = ae^(bx) or f(x) = ax^b, even a moderate change in the components of x can lead to floating-point overflow. The parameter r is therefore used to define a limit β̄ = r(1 + ‖x‖)/‖p‖ (where p is the search direction), and the first evaluation of f(x) is at the potentially smaller steplength α1 = min(1, β̄, β).

3. Wherever possible, upper and lower bounds on x should be used to prevent evaluation of nonlinear functions at meaningless points. The Major step limit provides an additional safeguard. The default


value r = 2.0 should not affect progress on well behaved problems, but setting r = 0.1 or 0.01 may be helpful when rapidly varying functions are present. A “good” starting point may be required. An important application is to the class of nonlinear least-squares problems.

4. In cases where several local optima exist, specifying a small value for r may help locate an optimum near the starting point.

Default: Major step limit 2.0.
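Item 2 above can be summarized as follows (a sketch; x_norm and p_norm stand for ‖x‖ and ‖p‖, and the function name is hypothetical):

```python
def first_steplength(r, x_norm, p_norm, beta):
    """First trial steplength alpha_1 = min(1, beta_bar, beta), where
    beta_bar = r*(1 + ||x||)/||p|| is the Major step limit safeguard
    and beta is the largest step allowed by the constraints."""
    beta_bar = r * (1.0 + x_norm) / p_norm
    return min(1.0, beta_bar, beta)
```

With the default r = 2.0 the safeguard rarely binds; a small r (0.1 or 0.01) shrinks the first trial step when the search direction is large relative to the current point.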

Minor iterations limit k
This is the maximum number of minor iterations allowed for each QP subproblem in the SQP algorithm. Current experience is that the major iterations converge more reliably if the QP subproblems are allowed to solve accurately. Thus, k should be a large value.
In the major iteration log, a t at the end of a line indicates that the corresponding QP was terminated by the limit k.
Note that the SNOPT option Iterations limit or the GAMS iterlim option defines an independent limit on the total number of minor iterations (summed over all QP subproblems).
Default: Minor iterations limit max{1000, 5 max{m,n}}

Minor feasibility tolerance t
SNOPT tries to ensure that all variables eventually satisfy their upper and lower bounds to within the tolerance t. This includes slack variables. Hence, general linear constraints should also be satisfied to within t. Feasibility with respect to nonlinear constraints is judged by the Major feasibility tolerance (not by t).

• If the bounds and linear constraints cannot be satisfied to within t, the problem is declared infeasible. Let sInf be the corresponding sum of infeasibilities. If sInf is quite small, it may be appropriate to raise t by a factor of 10 or 100. Otherwise, some error in the data should be suspected.

• Nonlinear functions will be evaluated only at points that satisfy the bounds and linear constraints. If there are regions where a function is undefined, every attempt should be made to eliminate these regions from the problem. For example, if f(x) = √x1 + log x2, it is essential to place lower bounds on both variables. If t = 1.0e-6, the bounds x1 ≥ 10⁻⁵ and x2 ≥ 10⁻⁴ might be appropriate. (The log singularity is more serious. In general, keep x as far away from singularities as possible.)

• If Scale option ≥ 1, feasibility is defined in terms of the scaled problem (since it is then more likely to be meaningful).

• In reality, SNOPT uses t as a feasibility tolerance for satisfying the bounds on x and s in each QP subproblem. If the sum of infeasibilities


cannot be reduced to zero, the QP subproblem is declared infeasible. SNOPT is then in elastic mode thereafter (with only the linearized nonlinear constraints defined to be elastic). See the Elastic options.

Default: Minor feasibility tolerance 1.0e-6.

Minor optimality tolerance t
This is used to judge optimality for each QP subproblem. Let the QP reduced gradients be dj = gj − πT aj, where gj is the jth component of the QP gradient, aj is the associated column of the QP constraint matrix, and π is the set of QP dual variables.

• By construction, the reduced gradients for basic variables are always zero. The QP subproblem will be declared optimal if the reduced gradients for nonbasic variables at their lower or upper bounds satisfy

    dj/‖π‖ ≥ −t   or   dj/‖π‖ ≤ t

respectively, and if |dj|/‖π‖ ≤ t for superbasic variables.

• In the above tests, ‖π‖ is a measure of the size of the dual variables. It is included to make the tests independent of a large scale factor on the objective function. The quantity actually used is defined by

    ‖π‖ = max{σ/√m, 1},   where σ = Σi |πi|.

• If the objective is scaled down to be very small, the optimality test reduces to comparing dj against t.

Default: Minor optimality tolerance 1.0e-6.
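The three tests can be collected into one check (a sketch; the list arguments are hypothetical groupings of the reduced gradients by variable status):

```python
import math

def qp_optimal(d_at_lower, d_at_upper, d_superbasic, pi, t):
    """QP optimality test with the scaling ||pi|| = max(sigma/sqrt(m), 1),
    where sigma is the sum of |pi_i| over the m dual variables."""
    pinorm = max(sum(abs(p) for p in pi) / math.sqrt(len(pi)), 1.0)
    return (all(d / pinorm >= -t for d in d_at_lower)
            and all(d / pinorm <= t for d in d_at_upper)
            and all(abs(d) / pinorm <= t for d in d_superbasic))
```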

Minor print level k
This controls the amount of output to the GAMS listing file during solution of the QP subproblems. This option is only useful if the sysout option is turned on (see §4.1). The value of k has the following effect:

0    No minor iteration output except error messages.
≥ 1  A single line of output each minor iteration (controlled by Print frequency).
≥ 10 Basis factorization statistics generated during the periodic refactorization of the basis (see Factorization frequency). Statistics for the first factorization each major iteration are controlled by the Major print level.

Default: Minor print level 0.

Optimality tolerance t
See Minor optimality tolerance.
Default: Optimality tolerance 1.0e-6.


Partial Price i
This parameter is recommended for large problems that have significantly more variables than constraints. It reduces the work required for each “pricing” operation (when a nonbasic variable is selected to become superbasic).

• When i = 1, all columns of the constraint matrix ( A  −I ) are searched.

• Otherwise, A and I are partitioned to give i roughly equal segments Aj, Ij (j = 1 to i). If the previous pricing search was successful on Aj, Ij, the next search begins on the segments Aj+1, Ij+1. (All subscripts here are modulo i.)

• If a reduced gradient is found that is larger than some dynamic tolerance, the variable with the largest such reduced gradient (of appropriate sign) is selected to become superbasic. If nothing is found, the search continues on the next segments Aj+2, Ij+2, and so on.

• Partial price t (or t/2 or t/3) may be appropriate for time-stage models having t time periods.

Default: Partial price 10 for linear models and Partial price 1 for nonlinear models.

Pivot tolerance r
During solution of QP subproblems, the pivot tolerance is used to prevent columns entering the basis if they would cause the basis to become almost singular.

• When x changes to x + αp for some search direction p, a “ratio test” is used to determine which component of x reaches an upper or lower bound first. The corresponding element of p is called the pivot element.

• Elements of p are ignored (and therefore cannot be pivot elements) if they are smaller than the pivot tolerance r.

• It is common for two or more variables to reach a bound at essentially the same time. In such cases, the Minor feasibility tolerance (say t) provides some freedom to maximize the pivot element and thereby improve numerical stability. Excessively small values of t should therefore not be specified.

• To a lesser extent, the Expand frequency (say f) also provides some freedom to maximize the pivot element. Excessively large values of f should therefore not be specified.

Default: Pivot tolerance 3.7e-11 on most machines. This corresponds to ε^(2/3), where ε is the machine precision.

Print frequency k
When sysout is turned on (see §4.1) and Minor print level ≥ 1, a line


of the QP iteration log will be printed on the listing file every kth minor iteration.
Default: Print frequency 1.

Scale option i
Three scale options are available as follows:

Scale option 0: No scaling. This is recommended if it is known that x and the constraint matrix (and Jacobian) never have very large elements (say, larger than 1000).

Scale option 1: Linear constraints and variables are scaled by an iterative procedure that attempts to make the matrix coefficients as close as possible to 1.0 (see Fourer [5]). This will sometimes improve the performance of the solution procedures.

Scale option 2: All constraints and variables are scaled by the iterative procedure. Also, an additional scaling is performed that takes into account columns of ( A  −I ) that are fixed or have positive lower bounds or negative upper bounds.
If nonlinear constraints are present, the scales depend on the Jacobian at the first point that satisfies the linear constraints. Scale option 2 should therefore be used only if (a) a good starting point is provided, and (b) the problem is not highly nonlinear.

Default: Scale option 2 for linear models and Scale option 1 for NLPs.

Scale tolerance r
This parameter affects how many passes might be needed through the constraint matrix. On each pass, the scaling procedure computes the ratio of the largest and smallest nonzero coefficients in each column:

    ρj = maxi |aij| / mini |aij|    (aij ≠ 0).

If maxj ρj is less than r times its previous value, another scaling pass is performed to adjust the row and column scales. Raising r from 0.9 to 0.99 (say) usually increases the number of scaling passes through A. At most 10 passes are made.
Default: Scale tolerance 0.9.

Scale Print
This option causes the row-scales r(i) and column-scales c(j) to be printed. The scaled matrix coefficients are āij = aij c(j)/r(i), and the scaled bounds on the variables and slacks are l̄j = lj/c(j), ūj = uj/c(j), where c(j) ≡ r(j − n) if j > n.
The listing file will only show these values if the sysout option is turned on (see §4.1).
Default: turned off.


Solution Yes
This option causes the SNOPT solution file to be printed to the GAMS listing file. It is only visible if the sysout option is turned on (see §4.1).
Default: turned off.

Superbasics limit i
This places a limit on the storage allocated for superbasic variables. Ideally, i should be set slightly larger than the “number of degrees of freedom” expected at an optimal solution.
For linear programs, an optimum is normally a basic solution with no degrees of freedom. (The number of variables lying strictly between their bounds is no more than m, the number of general constraints.) The default value of i is therefore 1.
For nonlinear problems, the number of degrees of freedom is often called the “number of independent variables”. Normally, i need not be greater than n1 + 1, where n1 is the number of nonlinear variables. For many problems, i may be considerably smaller than n1. This will save storage if n1 is very large.

Unbounded objective value fmax

Unbounded step size αmax

These parameters are intended to detect unboundedness in nonlinear problems. (They may not achieve that purpose!) During a line search, f is evaluated at points of the form x + αp, where x and p are fixed and α varies. If |f| exceeds fmax or α exceeds αmax, iterations are terminated with the exit message Problem is unbounded (or badly scaled).
In a GAMS environment no floating-point overflow errors should occur when singularities are present during the evaluation of f(x + αp) before the test can be made.
Defaults: Unbounded objective value 1.0e+15 and Unbounded step size 1.0e+18.

Violation limit τ
This keyword defines an absolute limit on the magnitude of the maximum constraint violation after the line search. On completion of the line search, the new iterate xk+1 satisfies the condition

    vi(xk+1) ≤ τ max{1, vi(x0)},

where x0 is the point at which the nonlinear constraints are first evaluated and vi(x) is the ith nonlinear constraint violation, vi(x) = max(0, li − Fi(x), Fi(x) − ui).
The effect of this violation limit is to restrict the iterates to lie in an expanded feasible region whose size depends on the magnitude of τ. This makes it possible to keep the iterates within a region where the objective is expected to be well-defined and bounded below. If the objective is bounded below for all values of the variables, then τ may be any large


positive value.
Default: Violation limit 10.
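The condition can be checked per constraint as follows (a sketch, not SNOPT's code; v0 holds the violations vi(x0) at the first evaluation point):

```python
def within_violation_limit(F_new, l, u, v0, tau):
    """True if every nonlinear constraint violation at the new iterate
    satisfies v_i(x_{k+1}) <= tau * max(1, v_i(x_0)), where
    v_i(x) = max(0, l_i - F_i(x), F_i(x) - u_i)."""
    for Fi, li, ui, vi0 in zip(F_new, l, u, v0):
        vi = max(0.0, li - Fi, Fi - ui)
        if vi > tau * max(1.0, vi0):
            return False
    return True
```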

5 The SNOPT log

When GAMS/SNOPT solves a linearly constrained problem the following log is visible on the screen:

[erwin@hamilton]$ gamslib chem
Model chem.gms retrieved
[erwin@hamilton]$ gams chem nlp=snopt
GAMS 2.50A Copyright (C) 1987-1999 GAMS Development. All rights reserved
Licensee: hamilton G990622:1048CP-LNX
GAMS Development
--- Starting compilation
--- chem.gms(47) 1 Mb
--- Starting execution
--- chem.gms(42) 1 Mb
--- Generating model MIXER
--- chem.gms(46) 1 Mb
--- 5 rows, 12 columns, and 37 non-zeroes.
--- Executing SNOPT

GAMS/SNOPT X86/LINUX version 5.3.4-007-035
P. E. Gill, UC San Diego
W. Murray and M. A. Saunders, Stanford University

Work space allocated -- 0.02 Mb

Major Minor Step nObj Objective Optimal nS PD
0 5 0.0E+00 1 3.292476E+01 2.1E-01 0 TF r
1 4 1.0E+00 2 4.517191E+01 2.2E-01 1 TF n r
2 10 8.6E-02 5 4.533775E+01 1.7E-01 6 TF s
3 3 1.0E+00 6 4.608439E+01 7.6E-02 6 TF
4 2 1.0E+00 7 4.667813E+01 1.4E-01 5 TF
5 2 1.0E+00 8 4.751149E+01 2.2E-02 6 TF
6 3 1.0E+00 9 4.757024E+01 2.1E-02 4 TF
7 2 7.1E-01 11 4.763634E+01 3.3E-02 5 TF
8 3 1.0E+00 12 4.768395E+01 3.3E-02 7 TF
9 3 4.6E-01 14 4.769958E+01 1.9E-02 5 TF

10 2 1.0E+00 15 4.770539E+01 1.5E-02 6 TF

Major Minor Step nObj Objective Optimal nS PD
11 1 1.0E+00 16 4.770639E+01 6.2E-04 6 TF
12 1 1.0E+00 17 4.770650E+01 5.0E-03 6 TF
13 1 1.0E+00 18 4.770651E+01 1.6E-04 6 TF
14 1 1.0E+00 19 4.770651E+01 1.8E-05 6 TF
15 1 1.0E+00 20 4.770651E+01 2.7E-05 6 TF
16 1 1.0E+00 21 4.770651E+01 7.6E-07 6 TT

EXIT - Optimal Solution found.

--- Restarting execution
--- chem.gms(46) 1 Mb
--- Reading solution for model MIXER
--- chem.gms(46) 1 Mb
*** Status: Normal completion
[erwin@hamilton]$

For a nonlinearly constrained problem, the log is somewhat different:

[erwin@hamilton]$ gamslib chenery
Model chenery.gms retrieved


[erwin@hamilton]$ gams chenery nlp=snopt
GAMS 2.50A Copyright (C) 1987-1999 GAMS Development. All rights reserved
Licensee: hamilton G990622:1048CP-LNX
GAMS Development
--- Starting compilation
--- chenery.gms(240) 1 Mb
--- Starting execution
--- chenery.gms(222) 1 Mb
--- Generating model CHENRAD
--- chenery.gms(225) 1 Mb
--- 39 rows, 44 columns, and 133 non-zeroes.
--- Executing SNOPT

GAMS/SNOPT X86/LINUX version 5.3.4-007-035
P. E. Gill, UC San Diego
W. Murray and M. A. Saunders, Stanford University

Work space allocated -- 0.07 Mb

Major Minor Step nCon Merit Feasibl Optimal nS Penalty PD
0 39 0.0E+00 1 -1.653933E+07 1.4E+00 1.4E+00 6 0.0E+00 FF r i
1 6 1.0E+00 2 6.215366E+03 9.8E-01 1.4E+00 6 0.0E+00 FF n r
2 2 1.0E+00 3 -4.424844E+03 5.6E-01 7.8E-01 7 2.8E-01 FF s
3 1 1.0E+00 4 2.756620E+02 1.1E-01 2.1E-01 7 2.8E-01 FF
4 1 1.0E+00 6 5.640617E+02 5.6E-03 2.2E-01 7 2.8E-01 FF m
5 6 1.0E+00 8 6.188177E+02 9.9E-03 2.6E-01 5 2.8E-01 FF m
6 2 7.2E-01 11 6.827737E+02 2.7E-02 2.0E-01 4 2.8E-01 FF m
7 4 5.9E-01 14 7.516259E+02 3.4E-02 9.4E-02 6 2.8E-01 FF m
8 1 1.0E+00 15 8.437315E+02 6.7E-03 1.7E+00 6 2.8E-01 FF
9 3 1.0E+00 17 8.756771E+02 7.1E-03 3.5E-01 4 2.8E-01 FF m

10 5 3.1E-01 21 9.010440E+02 2.4E-02 1.1E+00 6 2.8E-01 FF m

Major Minor Step nCon Merit Feasibl Optimal nS Penalty PD
11 2 2.6E-01 24 9.168958E+02 2.5E-02 6.9E-01 5 2.8E-01 FF
12 1 4.8E-01 26 9.404851E+02 2.9E-02 5.0E-01 5 2.8E-01 FF
13 3 1.0E+00 27 9.983802E+02 1.6E-02 1.3E-01 6 2.8E-01 FF
14 1 1.0E+00 28 1.013533E+03 6.2E-04 4.8E-02 6 2.8E-01 FF
15 2 1.0E+00 29 1.021295E+03 1.1E-02 1.2E-02 5 2.8E-01 FF
16 1 1.0E+00 30 1.032156E+03 5.2E-03 1.1E-02 5 2.8E-01 FF
17 2 1.0E+00 31 1.033938E+03 6.7E-05 1.4E-02 4 2.8E-01 FF
18 2 1.0E+00 32 1.036764E+03 4.5E-04 1.0E-02 3 2.8E-01 FF
19 2 1.0E+00 33 1.037592E+03 6.5E-05 3.0E-02 2 2.8E-01 FF
20 1 1.0E+00 34 1.039922E+03 4.6E-04 4.4E-02 2 2.8E-01 FF
21 2 1.0E+00 35 1.040566E+03 1.4E-05 7.8E-02 3 2.8E-01 FF

Major Minor Step nCon Merit Feasibl Optimal nS Penalty PD
22 4 1.0E+00 36 1.056256E+03 1.3E-02 5.6E-02 4 2.8E-01 FF
23 2 1.0E+00 37 1.053213E+03 1.9E-04 3.1E-03 3 3.9E-01 FF
24 2 1.0E+00 38 1.053464E+03 2.2E-05 4.7E-03 2 3.9E-01 FF
25 1 1.0E+00 39 1.053811E+03 3.7E-05 1.8E-02 2 3.9E-01 FF
26 1 1.0E+00 40 1.055352E+03 1.1E-03 3.9E-02 2 3.9E-01 FF
27 1 1.0E+00 41 1.055950E+03 1.3E-03 1.5E-02 2 3.9E-01 FF
28 1 1.0E+00 42 1.056047E+03 1.5E-06 1.3E-02 2 3.9E-01 FF
29 1 1.0E+00 43 1.056991E+03 3.2E-04 2.2E-02 2 3.9E-01 FF
30 1 1.0E+00 44 1.058439E+03 2.7E-03 1.8E-02 2 3.9E-01 FF
31 3 1.0E+00 45 1.058885E+03 2.2E-04 9.3E-03 2 3.9E-01 FF
32 1 1.0E+00 46 1.058918E+03 3.3E-05 1.6E-03 2 3.9E-01 FF

Major Minor Step nCon Merit Feasibl Optimal nS Penalty PD33 1 1.0E+00 47 1.058920E+03 4.6E-06 1.1E-05 2 3.9E-01 FF34 1 1.0E+00 48 1.058920E+03 5.1E-10 5.3E-07 2 3.9E-01 TT

EXIT - Optimal Solution found.

--- Restarting execution
--- chenery.gms(225) 1 Mb
--- Reading solution for model CHENRAD
--- chenery.gms(239) 1 Mb


*** Status: Normal completion
[erwin@hamilton]$

GAMS prints the number of equations, variables and nonzero elements of the model it generated. This gives an indication of the size of the model. SNOPT then reports how much memory it allocated to solve the model, based on an estimate. If the user had specified a different amount using the work option or the workspace model suffix, there would be a message like
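As a sketch (the model name and the 2 Mb figure are illustrative, not from the example run), the workspace model suffix is set before the solve statement:

```gams
* Illustrative only: request about 2 Mb of work space for SNOPT
mymodel.workspace = 2;
solve mymodel using nlp minimizing obj;
```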

Work space requested by user     -- 0.76 Mb
Work space requested by solver   -- 0.02 Mb

The SNOPT log shows the following columns:

Major The current major iteration number.

Minor is the number of iterations required by both the feasibility and optimality phases of the QP subproblem. Generally, Minor will be 1 in the later iterations, since theoretical analysis predicts that the correct active set will be identified near the solution (see §2).

Step The step length α taken along the current search direction p. The variables x have just been changed to x + αp. On reasonably well-behaved problems, the unit step will be taken as the solution is approached.

nObj The number of times the nonlinear objective function has been evaluated. nObj is printed as a guide to the amount of work required for the line search.

nCon The number of times SNOPT evaluated the nonlinear constraint functions.

Merit is the value of the augmented Lagrangian merit function (3). This function will decrease at each iteration unless it was necessary to increase the penalty parameters (see §2). As the solution is approached, Merit will converge to the value of the objective at the solution.
In elastic mode, the merit function is a composite function involving the constraint violations weighted by the elastic weight.
If the constraints are linear, this item is labeled Objective, the value of the objective function. It will decrease monotonically to its optimal value.

Feasibl is the value of rowerr, the maximum component of the scaled nonlinear constraint residual (5). The solution is regarded as acceptably feasible if Feasibl is less than the Major feasibility tolerance.
If the constraints are linear, all iterates are feasible and this entry is not printed.

Optimal is the value of maxgap, the maximum complementarity gap (6). It is an estimate of the degree of nonoptimality of the reduced costs. Both Feasibl and Optimal are small in the neighborhood of a solution.


nS The current number of superbasic variables.

Penalty is the Euclidean norm of the vector of penalty parameters used in the augmented Lagrangian merit function (not printed if the constraints are linear).

PD is a two-letter indication of the status of the convergence tests involving primal and dual feasibility of the iterates (see (5) and (6) in the description of Major feasibility tolerance and Major optimality tolerance). Each letter is T if the test is satisfied, and F otherwise.
If either of the indicators is F when SNOPT terminates with 0 EXIT -- optimal solution found, the user should check the solution carefully.

The summary line may include additional code characters that indicate what happened during the course of the iteration.

c Central differences have been used to compute the unknown components of the objective and constraint gradients. This should not happen in a GAMS environment.

d During the linesearch it was necessary to decrease the step in order to obtain a maximum constraint violation conforming to the value of Violation limit.

l The norm-wise change in the variables was limited by the value of the Major step limit. If this output occurs repeatedly during later iterations, it may be worthwhile to increase the value of Major step limit.

i If SNOPT is not in elastic mode, an “i” signifies that the QP subproblem is infeasible. This event triggers the start of nonlinear elastic mode, which remains in effect for all subsequent iterations. Once in elastic mode, the QP subproblems are associated with the elastic problem NP(γ).
If SNOPT is already in elastic mode, an “i” indicates that the minimizer of the elastic subproblem does not satisfy the linearized constraints. (In this case, a feasible point for the usual QP subproblem may or may not exist.)

M An extra evaluation of the problem functions was needed to define an acceptable positive-definite quasi-Newton update to the Lagrangian Hessian. This modification is only done when there are nonlinear constraints.

m This is the same as “M” except that it was also necessary to modify the update to include an augmented Lagrangian term.

R The approximate Hessian has been reset by discarding all but the diagonal elements. This reset will be forced periodically by the Hessian frequency and Hessian updates keywords. However, it may also be necessary to reset an ill-conditioned Hessian from time to time.


r The approximate Hessian was reset after ten consecutive major iterations in which no BFGS update could be made. The diagonals of the approximate Hessian are retained if at least one update has been done since the last reset. Otherwise, the approximate Hessian is reset to the identity matrix.

s A self-scaled BFGS update was performed. This update is always used when the Hessian approximation is diagonal, and hence always follows a Hessian reset.

S This is the same as “s” except that it was necessary to modify the self-scaled update to maintain positive definiteness.

n No positive-definite BFGS update could be found. The approximate Hessian is unchanged from the previous iteration.

t The minor iterations were terminated at the Minor iteration limit.

u The QP subproblem was unbounded.

w A weak solution of the QP subproblem was found.

Finally SNOPT prints an exit message. See §5.1.

5.1 EXIT conditions

When the solution procedure terminates, an EXIT -- message is printed to summarize the final result. Here we describe each message and suggest possible courses of action.

0  EXIT -- optimal solution found
This is the message we all hope to see! It is certainly preferable to every other message, and we naturally want to believe what it says, because this is surely one situation where the computer knows best.

In all cases, a distinct level of caution is in order, even if it can wait until next morning. For example, if the objective value is much better than expected, we may have obtained an optimal solution to the wrong problem! Almost any item of data could have that effect if it has the wrong value. Verifying that the problem has been defined correctly is one of the more difficult tasks for a model builder.

If nonlinearities exist, one must always ask the question: could there be more than one local optimum? When the constraints are linear and the objective is known to be convex (e.g., a sum of squares) then all will be well if we are minimizing the objective: a local minimum is a global minimum in the sense that no other point has a lower function value. (However, many points could have the same objective value, particularly if the objective is largely linear.) Conversely, if we are maximizing a convex function, a local maximum cannot be expected to be global, unless there are sufficient constraints to confine the feasible region.


Similar statements could be made about nonlinear constraints defining convex or concave regions. However, the functions of a problem are more likely to be neither convex nor concave. Our advice is always to specify a starting point that is as good an estimate as possible, and to include reasonable upper and lower bounds on all variables, in order to confine the solution to the specific region of interest. We expect modelers to know something about their problem, and to make use of that knowledge as they themselves know best.

One other caution about “Optimal solution” messages. Some of the variables or slacks may lie outside their bounds more than desired, especially if scaling was requested. Max Primal infeas refers to the largest bound infeasibility and which variable is involved. If it is too large, consider restarting with a smaller Minor feasibility tolerance (say 10 times smaller) and perhaps Scale option 0.
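Such settings go into a SNOPT option file (snopt.opt); a sketch with illustrative values:

```
* illustrative snopt.opt entries
Minor feasibility tolerance  1.0e-7
Scale option                 0
```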

Similarly, Max Dual infeas indicates which variable is most likely to be at a non-optimal value. Broadly speaking, if

    Max Dual infeas / Norm of pi = 10^(-d),

then the objective function would probably change in the dth significant digit if optimization could be continued. If d seems too large, consider restarting with smaller Major and Minor optimality tolerances.
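The relation can be illustrated with a small GAMS fragment (the two input values are hypothetical, not taken from an actual run):

```gams
* Illustrative: estimate how many significant digits of the objective are settled
scalar maxDualInfeas / 1.0e-6 /
       normPi        / 1.0e+2 /
       d;
d = -log10(maxDualInfeas/normPi);
display d;   * here d = 8: roughly the 8th significant digit is still in doubt
```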

Finally, Nonlinear constraint violn shows the maximum infeasibility for nonlinear rows. If it seems too large, consider restarting with a smaller Major feasibility tolerance.

1  EXIT -- the problem is infeasible
When the constraints are linear, this message can probably be trusted. Feasibility is measured with respect to the upper and lower bounds on the variables and slacks. Among all the points satisfying the general constraints Ax - s = 0, there is apparently no point that satisfies the bounds on x and s. Violations as small as the Minor feasibility tolerance are ignored, but at least one component of x or s violates a bound by more than the tolerance.

When nonlinear constraints are present, infeasibility is much harder to recognize correctly. Even if a feasible solution exists, the current linearization of the constraints may not contain a feasible point. In an attempt to deal with this situation, when solving each QP subproblem, SNOPT is prepared to relax the bounds on the slacks associated with nonlinear rows.

If a QP subproblem proves to be infeasible or unbounded (or if the Lagrange multiplier estimates for the nonlinear constraints become large), SNOPT enters so-called “nonlinear elastic” mode. The subproblem includes the original QP objective and the sum of the infeasibilities, suitably weighted using the Elastic weight parameter. In elastic mode, the nonlinear rows are made “elastic”, i.e., they are allowed to violate their specified bounds. Variables subject to elastic bounds are known as elastic variables. An elastic variable is free to violate one or both of its original upper or lower bounds. If the original problem has a feasible solution and the elastic weight is sufficiently large, a feasible point eventually will be obtained for the perturbed constraints, and optimization can continue on the subproblem. If the nonlinear problem has no feasible solution, SNOPT will tend to determine a “good” infeasible point if the elastic weight is sufficiently large. (If the elastic weight were infinite, SNOPT would locally minimize the nonlinear constraint violations subject to the linear constraints and bounds.)

Unfortunately, even though SNOPT locally minimizes the nonlinear constraint violations, there may still exist other regions in which the nonlinear constraints are satisfied. Wherever possible, nonlinear constraints should be defined in such a way that feasible points are known to exist when the constraints are linearized.

2  EXIT -- the problem is unbounded (or badly scaled)
   EXIT -- violation limit exceeded -- the problem may be unbounded

For linear problems, unboundedness is detected by the simplex method when a nonbasic variable can apparently be increased or decreased by an arbitrary amount without causing a basic variable to violate a bound. A message prior to the EXIT message will give the index of the nonbasic variable. Consider adding an upper or lower bound to the variable. Also, examine the constraints that have nonzeros in the associated column, to see if they have been formulated as intended.

Very rarely, the scaling of the problem could be so poor that numerical error will give an erroneous indication of unboundedness. Consider using the Scale option.

For nonlinear problems, SNOPT monitors both the size of the current objective function and the size of the change in the variables at each step. If either of these is very large (as judged by the Unbounded parameters, see §4), the problem is terminated and declared UNBOUNDED. To avoid large function values, it may be necessary to impose bounds on some of the variables in order to keep them away from singularities in the nonlinear functions.

The second message indicates an abnormal termination while enforcing the limit on the constraint violations. This exit implies that the objective is not bounded below in the feasible region defined by expanding the bounds by the value of the Violation limit.

3  EXIT -- major iteration limit exceeded
   EXIT -- minor iteration limit exceeded
   EXIT -- too many iterations

Either the Iterations limit or the Major iterations limit was exceeded before the required solution could be found. Check the iteration log to be sure that progress was being made. If so, repeat the run with higher limits. If not, consider specifying new initial values for some of the nonlinear variables.
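A sketch of raising the limits (the model name, objective name, and values are illustrative): the GAMS iterlim option bounds the total minor iterations, while the major iteration limit would be set in a SNOPT option file.

```gams
option iterlim = 100000;   * raise the GAMS (minor) iteration limit
mymodel.optfile = 1;       * read snopt.opt, which could contain
                           * "Major iterations limit 500"
solve mymodel using nlp minimizing obj;
```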

4  EXIT -- requested accuracy could not be achieved
A feasible solution has been found, but the requested accuracy in the dual infeasibilities could not be achieved. An abnormal termination has occurred, but SNOPT is within 10^-2 of satisfying the Major optimality tolerance. Check that the Major optimality tolerance is not too small.


5  EXIT -- the superbasics limit is too small: nnn
The problem appears to be more nonlinear than anticipated. The current set of basic and superbasic variables have been optimized as much as possible and a PRICE operation is necessary to continue, but there are already nnn superbasics (and no room for any more).
In general, raise the Superbasics limit s by a reasonable amount, bearing in mind the storage needed for the reduced Hessian (about s^2/2 double words).
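As a rough illustration of that storage estimate (the superbasics count of 2000 is hypothetical):

```gams
* Illustrative: reduced-Hessian storage for s superbasic variables
scalar s / 2000 /, mbytes;
mbytes = 0.5*sqr(s)*8/1.0e6;   * s^2/2 double words at 8 bytes each
display mbytes;                * about 16 Mb for s = 2000
```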

6  EXIT -- constraint and objective values could not be calculated
This exit should not occur in a GAMS environment.

7  EXIT -- subroutine funobj seems to be giving incorrect gradients
This exit should not occur in a GAMS environment.

8  EXIT -- subroutine funcon seems to be giving incorrect gradients
This exit should not occur in a GAMS environment.

9  EXIT -- the current point cannot be improved upon
The algorithm could not find a better solution although optimality was not achieved within the optimality tolerance. Possibly scaling can lead to better function values and derivatives. Raising the optimality tolerance will probably make this message go away.

10 EXIT -- cannot satisfy the general constraints
An LU factorization of the basis has just been obtained and used to recompute the basic variables xB, given the present values of the superbasic and nonbasic variables. A step of “iterative refinement” has also been applied to increase the accuracy of xB. However, a row check has revealed that the resulting solution does not satisfy the current constraints Ax - s = 0 sufficiently well.

This probably means that the current basis is very ill-conditioned. Try Scale option 1 if scaling has not yet been used and there are some linear constraints and variables.

For certain highly structured basis matrices (notably those with band structure), a systematic growth may occur in the factor U. Try setting the LU factor tolerance to 2.0 (or possibly even smaller, but not less than 1.0).
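A sketch of the corresponding option-file entry, with the value suggested above:

```
* illustrative snopt.opt entry
LU factor tolerance  2.0
```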

12 EXIT -- terminated from subroutine s1User
This message appears when the resource limit was reached (see §4.1) or when the solver was interrupted by a Ctrl-C keyboard signal.

20 EXIT -- not enough integer/real storage for the basis factors
Increase the workspace by using the work option or the workspace model suffix.

21 EXIT -- error in basis package
A preceding message will describe the error in more detail. This error should not happen easily in a GAMS environment.

22 EXIT -- singular basis after nnn factorization attempts
This exit is highly unlikely to occur. The first factorization attempt will have found the basis to be structurally or numerically singular. (Some diagonals of the triangular matrix U were respectively zero or smaller than a certain tolerance.) The associated variables are replaced by slacks and the modified basis is refactorized, but singularity persists. This must mean that the problem is badly scaled, or the LU factor tolerance is too much larger than 1.0.

30 EXIT -- the basis file dimensions do not match this problem
This exit should not occur in a GAMS environment.

31 EXIT -- the basis file state vector does not match this problem
This exit should not occur in a GAMS environment.

32 EXIT -- system error. Wrong no. of basic variables: nnn
This exit should never happen, and may indicate a configuration problem with your GAMS/SNOPT version.

42 EXIT -- not enough 8-character storage to start solving the problem
Increase the workspace by using the work option or the workspace model suffix.

43 EXIT -- not enough integer storage to start solving the problem
Increase the workspace by using the work option or the workspace model suffix.

44 EXIT -- not enough real storage to start solving the problem
Increase the workspace by using the work option or the workspace model suffix.

45 EXIT -- Function evaluation error limit exceeded
The function evaluation error limit was reached. When evaluating a nonlinear function either in the objective or in the constraints, an evaluation error occurred and the allowed number of such errors was exceeded. Either increase the domlim option (see §4.1) or preferably add bounds or linear constraints such that these errors cannot happen. The errors are most often caused by declaring x to be a free variable while the model contains functions like √x or log(x). Overflows in exponentiation x^y are also a common cause for this exit. Inspect the listing file for messages like

**** ERROR(S) IN EQUATION PRODF
     2 INSTANCES OF - UNDEFINED LOG OPERATION (RETURNED -0.1E5)

The equation name mentioned in this message gives a good indication where to look in the model.
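For example, a lower bound keeps a variable away from the singularity of log(x); the variable name and bound value are illustrative:

```gams
* Illustrative: prevent evaluation errors in log(x) and sqrt(x)
positive variable x;
x.lo = 1.0e-3;   * keep x strictly away from zero
```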

6 Listing file messages

The listing file (.lst file) also contains feedback on how the SNOPT solver performed on a particular model. For the chem.gms model, the solve summary looks like the following:

               S O L V E      S U M M A R Y

     MODEL   MIXER               OBJECTIVE  ENERGY
     TYPE    NLP                 DIRECTION  MINIMIZE
     SOLVER  SNOPT               FROM LINE  46


**** SOLVER STATUS     1 NORMAL COMPLETION
**** MODEL STATUS      2 LOCALLY OPTIMAL
**** OBJECTIVE VALUE              -47.7065

 RESOURCE USAGE, LIMIT         0.010      1000.000
 ITERATION COUNT, LIMIT       45         10000
 EVALUATION ERRORS             0             0

GAMS/SNOPT X86/LINUX version 5.3.4-007-035
P. E. Gill, UC San Diego
W. Murray and M. A. Saunders, Stanford University

Work space allocated -- 0.02 Mb

EXIT - Optimal Solution found.

 Major, Minor Iterations    16    45
 Funobj, Funcon calls       22     0
 Superbasics                 6
 Aggregations                0
 Interpreter Usage        0.01 100.0%

Work space used by solver -- 0.02 Mb

The solver completed normally at a local (or possibly global) optimum. A complete list of possible solver status and model status values is in Tables 1 and 2.

The resource usage (time used), iteration count and number of evaluation errors during nonlinear function and gradient evaluation are all within their limits. These limits can be increased by the options reslim, iterlim and domlim (see §4.1).
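A sketch of raising these limits in the GAMS model file (the values are illustrative):

```gams
option reslim  = 3600;     * seconds of solver time allowed
option iterlim = 100000;   * minor iteration limit
option domlim  = 10;       * tolerated evaluation errors
```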

The possible EXIT messages are listed in §5.1. The statistics following the EXIT message are as follows.

Major, minor iterations. The number of major and minor iterations for this optimization task. Note that the number of minor iterations is the same as reported by ITERATION COUNT.

Funobj, Funcon calls. The number of times SNOPT evaluated the objective function f(x) or the constraint functions Fi(x) and their gradients. For a linearly constrained problem the number of funcon calls should be zero.

Superbasics. This is the number of superbasic variables in the reported solution. See §2.4.

Aggregations. The number of equations removed from the model by the objective function recovery algorithm (see §2.1).

Interpreter usage. This line shows how much time was spent evaluating functions and gradients. Owing to the low resolution of the clock and the small size of this model, it was concluded that 100% of the time was spent inside the routines that do function and gradient evaluations. For larger models these numbers are more accurate.


Model status                      Remarks

1 OPTIMAL                         Applies only to linear models.
2 LOCALLY OPTIMAL                 A local optimum in an NLP was found. It
                                  may or may not be a global optimum.
3 UNBOUNDED                       For LPs this message is reliable. A badly
                                  scaled NLP can also cause this message to
                                  appear.
4 INFEASIBLE                      Applies to LPs: the model is infeasible.
5 LOCALLY INFEASIBLE              Applies to NLPs: given the starting point,
                                  no feasible solution could be found,
                                  although feasible points may exist.
6 INTERMEDIATE INFEASIBLE         The search was stopped (e.g., because of
                                  an iteration or time limit) and the
                                  current point violates some constraints
                                  or bounds.
7 INTERMEDIATE NONOPTIMAL         The search was stopped (e.g., because of
                                  an iteration or time limit) and the
                                  current point is feasible but violates
                                  the optimality conditions.
8 INTEGER SOLUTION                Does not apply to SNOPT.
9 INTERMEDIATE NON-INTEGER        Does not apply to SNOPT.
10 INTEGER INFEASIBLE             Does not apply to SNOPT.
ERROR UNKNOWN                     Check the listing file for error messages.
ERROR NO SOLUTION                 Check the listing file for error messages.

Table 1: Model status values


Solver status                          Remarks

1 NORMAL COMPLETION                    SNOPT completed the optimization task.
2 ITERATION INTERRUPT                  The iteration limit was hit. Increase
                                       the iterlim option (see §4.1).
3 RESOURCE INTERRUPT                   The time limit was hit. Increase the
                                       reslim option (see §4.1).
4 TERMINATED BY SOLVER                 Check the listing file.
5 EVALUATION ERROR LIMIT               The domlim error limit was exceeded.
                                       See §4.1.
6 UNKNOWN ERROR                        Check the listing file for error
                                       messages.
ERROR SETUP FAILURE                    Id.
ERROR SOLVER FAILURE                   Id.
ERROR INTERNAL SOLVER FAILURE          Id.
ERROR SYSTEM FAILURE                   Id.

Table 2: Solver status values
