INFORMS OS Today
The Newsletter of the INFORMS Optimization Society

Volume 1 Number 1 March 2011

Contents

Chair’s Column
Nominations for OS Prizes
Nominations for OS Officers
Announcing the 4th OS Conference

Featured Articles

Hooked on Optimization
George Nemhauser

First Order Methods for Large Scale Optimization: Error Bounds and Convergence Analysis
Zhi-Quan Luo

Probability Inequalities for Sums of Random Matrices and Their Applications in Optimization
Anthony Man-Cho So

Fast Multiple Splitting Algorithms for Convex Optimization
Shiqian Ma

Send your comments and feedback to the Editor:

Shabbir Ahmed
School of Industrial & Systems Engineering
Georgia Tech, Atlanta, GA 30332
([email protected])

Chair’s Column
Jon Lee

IBM T.J. Watson Research Center
Yorktown Heights, NY 10598

([email protected])

With this inaugural issue, the INFORMS Optimization Society is publishing a new newsletter: “INFORMS OS Today.” It is a great pleasure to welcome Shabbir Ahmed ([email protected]) as its Editor. Our plan is to start by publishing one issue each Spring. At some point we may add a Fall issue. Please let us know what you think about the newsletter.

The current issue features articles by the 2010 OS prize winners: George Nemhauser (Khachiyan Prize for Life-time Accomplishments in Optimization), Zhi-Quan (Tom) Luo (Farkas Prize for Mid-career Researchers), Anthony Man-Cho So (Prize for Young Researchers), and Shiqian Ma (Student Paper Prize). Each of these articles describes the prize-winning work in a compact form.

Also in this issue, we have announcements of key activities for the OS: calls for nominations for the 2011 OS prizes, a call for nominations of candidates for OS officers, and the 2012 OS Conference (to be held at the University of Miami, February 24–26, 2012). I want to emphasize that one of our most important activities is our participation at the annual INFORMS meetings, the next one being at the Charlotte Convention Center, Charlotte, North Carolina, November 13–16, 2011. Our participation at that meeting is centered on the OS-sponsored clusters. The clusters and cluster chairs for that meeting mirror our list of Vice Chairs:


• Steven Dirkse, Computational Optimization and Software ([email protected])

• Oleg A. Prokopyev, Global Optimization ([email protected])

• Oktay Gunluk, Integer Programming ([email protected])

• Miguel Anjos, Linear Programming and Complementarity ([email protected])

• Mauricio Resende, Networks ([email protected])

• Frank E. Curtis, Nonlinear Programming ([email protected])

• Huseyin Topaloglu, Stochastic Programming ([email protected])

Their hard work is the means by which we have a strong presence within INFORMS — and we should have a strong presence which reflects the OS membership of about 1000! Please contact any of the Vice Chairs to get involved.

Additionally for the Charlotte meeting, jointly with the INFORMS Computing Society, the OS is co-sponsoring a mini-cluster on “Surrogate and derivative-free optimization.” Please contact either of the co-chairs for this cluster, Nick Sahinidis ([email protected]) and Christine Shoemaker ([email protected]), to get involved.

Pietro Belotti is the new OS webmaster, and he will be pleased to get your feedback on our revamped website: www.informs.org/Community/Optimization-Society.

Finally, I would like to take this opportunity to thank our Most-Recent Past Chair Nick Sahinidis and our Secretary/Treasurer Marina Epelman for making my job as Chair an enjoyable experience.

All of the OS officers and I look forward to seeing you next at Charlotte in November — in particular, at the OS Business Meeting, which is always a great opportunity to have a couple of glasses of wine and catch up with members of our community.

Hooked on Optimization
George L. Nemhauser

Industrial and Systems Engineering
Georgia Institute of Technology, Atlanta, GA 30332

([email protected])

This article is based on a presentation I gave at a session sponsored by the Optimization Society at the INFORMS 2010 annual meeting, where I had the great honor of being chosen as the first recipient of the Khachiyan Prize for Life-time Accomplishments in Optimization. I would like to thank the prize committee members Martin Grotschel, Arkadi Nemirovski, Panos Pardalos and Tamas Terlaky, and my friends and colleagues Bill Cook, Gerard Cornuejols and Bill Pulleyblank, who nominated me.

I was first introduced to optimization in a course in operations research taught by Jack Mitten when I was an M.S. student in chemical engineering at Northwestern University in 1958. Chemical processes are a great source of optimization problems, but at that time not much was being done except for the use of linear programming for blending problems in the petro-chemical industry. When I learned about dynamic programming in this course and saw the schematic diagram of a multi-time period inventory problem, I realized that it sort of looked like a flow diagram for a multi-stage chemical process, which I would now think of as a path in a graph. The difference was that the serial structure would not suffice for many chemical processes. We needed more general graph structures that included feedback loops or directed cycles. So Jack Mitten, also a chemical engineer by training, and I embarked on extending dynamic programming type decomposition to more general structures. I realized that I needed to learn more about optimization than fluid flow, so after completing my Master’s degree I switched to the I.E. department, with its fledgling program in operations research, for my PhD, and this work on dynamic programming became my thesis topic. I quickly learned about the trials and tribulations of research and publication. As my work was about to be completed, I submitted a paper on it and was informed rather quickly that there was another paper in process on the same topic, written by the very distinguished chemical engineer/mathematician Rutherford (Gus) Aris, a professor at the University of Minnesota. Aris was an Englishman, and a true scholar and gentleman. He was also a noted calligrapher and a professor of classics. After recovering from the initial shock of this news, I carefully read Aris’ paper on dynamic programming with feedback loops, and discovered a fundamental flaw in his algorithm. It took some time to convince him of this, and we ultimately co-authored a paper [2], which was essentially my algorithm. This was a good introduction to the fact that the academic world was not an ivory tower. It also motivated my first book, Introduction to Dynamic Programming [19].

In 1961 I became an Assistant Professor in the department of Operations Research and Industrial Engineering at Johns Hopkins University. My first student Bill Hardgrave and I were trying to understand why the traveling salesman problem (TSP) seemed so much harder to solve than the shortest path problem (SPP). We thought we might have had a breakthrough when we figured out how to reduce the TSP to the longest path problem (LPP). Perhaps this was an early example of a polynomial reduction. But in our minds it was not, as is currently done, for the purpose of proving NP-hardness, a concept that would only appear more than a decade later. Our idea was that the LPP was closer to the SPP, so maybe we could solve the TSP by finding an efficient algorithm for the LPP. Being engineers, we liked the notion of physical models, and especially the string model of Minty for solving the SPP: pull from both ends, and the first path to become tight is a shortest path.

[Photo: George L. Nemhauser]

For the longest path, build the graph with rubber bands instead of string: pull from both ends, and the last rubber band to remain slack is an edge of the longest path. Now you could proceed recursively. We actually built these rubber band models for graphs with up to about eight nodes, and our conjecture seemed to hold for our examples. However, we were never able to prove the conjecture, as we were unable to describe the physical model mathematically. Nevertheless, the work was published in Operations Research [15], my first of many papers in the journal, of which I became the editor several years later. We also submitted the paper to the 4th International Symposium on Mathematical Programming held in 1962, but it was rejected. So I didn’t get to one of these meetings until the 8th one in 1973. But I haven’t missed one since then, and I wonder if I hold the record for consecutive attendance at these conferences.

Being an engineer, I tended to learn about optimization techniques and the underlying mathematics only as I needed them to solve problems. My first real involvement with integer programming (IP), other than teaching a little about cutting planes and branch-and-bound, came from a seminar presentation on school districting. This was around the time when congressional districting at the state level was being examined by the federal courts. Many states (my current home state of Georgia was one of the worst) had congressional districts that were way out of balance with respect to the principle of “one man, one vote.” The average congressional district should have had a population of around 400,000 then, but in Georgia they ranged from about 100,000 to nearly 800,000. The Supreme Court had just declared such districting plans to be unconstitutional, and redistricting was a hot topic. I had recently read a paper on solving vehicle routing problems using a set partitioning model, and it was easy to see that set partitioning was the right model for districting.

The difficulty was to describe feasible districts by population, compactness, natural boundaries and possibly political requirements. These districts would be constructed from much smaller population units such as counties in rural areas and smaller geographical units in urban areas. The cost of a district would be its population deviation from the mean given by the state’s population divided by the number of congressional districts, i.e., the population of a perfect district. Given the potential districts, we then have the set partitioning problem of choosing a minimum cost set of districts such that each population unit is in exactly one district. My student Rob Garfinkel worked on this districting problem for his doctoral research. Rob had been a programmer before beginning his graduate studies, and he made great use of his machine language programming skills in implementing our implicit enumeration algorithm [9]. Of course, our solutions were suboptimal since we generated all of our candidate districts up front. Branch-and-price hadn’t entered my mind then.
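In a standard form (my notation for illustration; the precise model in [9] may differ in detail), with binary variable $x_j$ indicating whether candidate district $D_j$ is chosen and $c_j$ its population-deviation cost, the districting model is the set partitioning problem
$$\min \; \sum_{j} c_j x_j \quad \text{s.t.} \quad \sum_{j \,:\, i \in D_j} x_j = 1 \ \text{ for every population unit } i, \qquad x_j \in \{0, 1\} \ \text{ for all } j.$$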

Soon after that work was completed, in 1969 I left Hopkins for Cornell, determined to learn more and do more IP. I was convinced that IP modeling was very robust for operations research applications, but the models were hard to solve. One of the difficulties in getting students involved in integer programming was the absence of an IP book that could serve as a text for graduate students and also as a reference for researchers. Fortunately, when I moved to Cornell, Garfinkel took a position at the University of Rochester, only 90 miles from Ithaca, and we began collaborating on our book Integer Programming, which was published in 1972 [10].

Also, motivated by the set partitioning work, I got my first NSF grant in IP, which was to study theory and algorithms for set packing, partitioning and covering. I worked with my student Les Trotter on polyhedral structure and algorithms for the node packing and independent set problem [22]. Our work was influenced by Ray Fulkerson, who joined the Cornell faculty a couple of years after I did and taught us, among other things, the beautiful theory connecting perfect graphs and the class of IPs known as node packing. While Ray was not computationally oriented, he had insight on what made integer programs hard and gave us the family of Steiner triple set covering problems that are now part of the MIPLIB library of IP test problems and still remain difficult to solve [8]. Ray also started the cooperation between Cornell and Waterloo, where Jack Edmonds and his students were doing exciting work on matroids, matching and other combinatorial optimization problems. I first met Bill Cunningham, Bill Pulleyblank and Vasek Chvatal when they were graduate students at Waterloo.

I became interested in transportation, and especially in using optimization to solve equilibrium problems of traffic assignment. Together with my student Deepak Merchant, we wrote the first papers on dynamic traffic assignment [18].

Marshall Fisher visited Cornell in 1974, and together with my student Gerard Cornuejols, we worked on a facility location problem that was motivated by a problem of locating bank accounts so that companies could maximize the time they could hold money they owed, and hence their cash flow. I think this was some of the early work on the analysis of approximation algorithms that is now so popular, and we were fortunate to receive the Lanchester prize for a paper we published on it [5].

I began my long and very fruitful collaboration with Laurence Wolsey in the fall of 1975, when I was the research director of the Center for Operations Research and Econometrics (CORE) at the University of Louvain. In generalizing the facility location analysis, Laurence, Marshall and I figured out that submodularity was the underlying principle [7]. Laurence and I wrote several papers during this period, but nothing on computational integer programming. I think that was typical of the 1970s. We just didn’t have the software or the hardware to experiment with computational ideas. At the time, I think it was fair to say that from a practical standpoint branch-and-bound was it.

However, beginning in the late 1970s and early 1980s, a new era of computational integer programming based on integrating polyhedral combinatorics with branch-and-bound was born. As Laurence Wolsey and I saw, and participated in, these exciting developments, we thought that the field needed a new integer programming book so that researchers would have good access to the rapid changes that were taking place. I think we first talked about this project in the late 1970s, but I was chairman of the OR department at Cornell and didn’t have the time until I had my sabbatical leave in 1983-84 at CORE. We worked very hard on the book that academic year, not realizing that it would take us five more years to complete it [23]. We were honored when in 1989 it received the Lanchester prize. We tried not to get too distracted by all of the unsolved problems that arise in the process of putting results from papers into a book. But we couldn’t resist studying the basic question of finding for mixed-integer problems the analog of Chvatal’s rounding for pure integer programming. Solving that question led to what we called mixed-integer rounding, which has become a key tool for deriving cuts based on structure for mixed-integer programs [24].
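As a one-line illustration of mixed-integer rounding (the standard textbook form, not a formula reproduced from [24]): for the set $\{(x, y) \in \mathbb{Z} \times \mathbb{R}_+ : x + y \ge b\}$ with fractional part $f = b - \lfloor b \rfloor > 0$, the MIR inequality
$$x + \frac{y}{f} \;\ge\; \lceil b \rceil$$
is valid, and it cuts off the fractional point $(b, 0)$ of the linear relaxation.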

After returning from CORE, I spent one more year at Cornell and then moved to Georgia Tech. By this time I really wanted to work on some real problems and to apply integer programming in practice. Ellis Johnson came to Tech soon thereafter, having played a leadership role in the development of IBM’s OSL code for mixed integer programming. Thanks to Ellis, IBM provided support for us to teach MIP short courses to people from industry using OSL. This opened up tremendous research opportunities with industry and led to long-term relations with several airlines and other companies, mainly on supply chain optimization problems [1, 14, 16, 26]. These projects with industry generated many PhD dissertations, including those of Pam Vance, Dan Adelman, Greg Glockner, Ladislav Lettowsky, Diego Klabjan, Andrew Schaefer, Jay Rosenberger and Jeff Day. Martin Savelsbergh and Cindy Barnhart joined our faculty and participated actively in our collaborations with industry. Some of this work led us to thinking about column generation in integer programming and to branch-and-price [4]. During this period, as Georgia Tech Faculty Representative for Athletics, I got involved with Mike Trick in sports scheduling. Our first work was scheduling basketball for the Atlantic Coast Conference [21]. Later on we formed a company called the Sports Scheduling Group, and we now, together with my student Kelly Easton, schedule major league baseball as well as several other sports conferences.

OSL had a much better MIP solver than earlier commercial codes, using some of the new technology like lifted cover cuts. But the ability to experiment with new ideas for cuts, branching, preprocessing and primal heuristics was still missing. My student Gabriel Sigismondi and I began the development of a MIP research code that was a branch-and-bound shell that would call an LP solver and would give the user all of the LP information needed to write subroutines for cuts, branching, etc. Martin Savelsbergh joined us from the Netherlands as a postdoc and became the key person in the creation of our research code called MINTO [20], which for many years was the code widely used by researchers in integer programming to evaluate the implementation of their theoretical results. Many of the ideas from MINTO, developed with Martin, were incorporated in the early releases of CPLEX’s MIP code. In fact, our student Zonghao Gu became the key person in the development of the MIP code for subsequent releases of CPLEX and Gurobi. Several of my Georgia Tech students, including Alper Atamturk, Ismael de Farias, Zonghao Gu, Andrew Miller, Jean-Philippe Richard and Juan Pablo Vielma, produced many new results in integer programming [3, 6, 11, 25, 27].

More recently, I have been collaborating with my colleague Shabbir Ahmed. Together with students Yongpei Guan, Jim Luedtke and Juan Pablo Vielma, we have been making some progress on polyhedral aspects of stochastic integer programming [12, 13, 17].

I have been extremely fortunate throughout my academic career to have worked with an incredible group of PhD students and postdocs. I owe them a great debt, and they help keep me a young 73. So I would like to end this note by recognizing all of my graduated PhD students and postdoctoral associates by listing all of their names.

PHD STUDENTS

1961-69 Johns Hopkins (11): Bill Hardgrave, Hugh Bradley, Mike Thomas, Zev Ulmann, Gil Howard, Hank Nuttle, Dennis Eklof, Robert Garfinkel, Gus Widhelm, Joe Bowman, Sherwood Frey.

1970-85 Cornell (15): Les Trotter, Jim Fergusson, Deepak Merchant, Pierre Dejax, Mike Ball, Glenn Weber, Gerard Cornuejols, Wen-Lian Hsu, Yoshi Ikura, Gerard Chang, Ronny Aboudi, Victor Masch, Russ Rushmeier, Gabriel Sigismondi, Sung-Soo Park.

1985-2010 Georgia Tech (29): Heesang Lee, Anuj Mehrotra, Pam Vance, Erick Wikum, John Zhu, Zonghao Gu, Y. Wang, Ismael de Farias, Dasong Cao, Dan Adelman, Greg Glockner, Alper Atamturk, Ladislav Lettowsky, Diego Klabjan, Dieter Vandenbussche, Ahmet Keha, Yongpei Guan, Yetkin Ileri, Jay Rosenberger, Jean Philippe Richard, Kelly Easton, Jeff Day, James Luedtke, Faram Engineer, Renan Garcia, Michael Hewitt, Gizem Keysan, Alejandro Toriello, Byungsoo Na.

POST DOCTORAL STUDENTS (6): Ram Pandit, Natashia Boland, Petra Bauer, Emilie Danna, Bram Verweij, Menal Guzelsoy.

REFERENCES

[1] D. Adelman and G.L. Nemhauser. Price-directed control of remnant inventory systems. Operations Research, 47:889–898, 1999.

[2] R. Aris, G.L. Nemhauser and D.J. Wilde. Optimization of multistage cycle and branching systems by serial procedures. AIChE Journal, 10:913–919, 1964.

[3] A. Atamturk, G.L. Nemhauser and M. Savelsbergh. The mixed vertex packing problem. Mathematical Programming, 89:35–54, 2000.

[4] C. Barnhart, E.L. Johnson, G.L. Nemhauser, M. Savelsbergh and P. Vance. Branch-and-price: Column generation for solving huge integer programs. Operations Research, 46:316–329, 1998.

[5] G. Cornuejols, M.L. Fisher and G.L. Nemhauser. Location of bank accounts to optimize float: An analytic study of exact and approximate algorithms. Management Science, 23:789–810, 1977.

[6] I. de Farias and G.L. Nemhauser. A polyhedral study of the cardinality constrained knapsack problem. Mathematical Programming, 96:439–467, 2003.

[7] M.L. Fisher, G.L. Nemhauser and L.A. Wolsey. An analysis of approximations for maximizing submodular set functions-I. Mathematical Programming, 14:265–294, 1978.

[8] D.R. Fulkerson, G.L. Nemhauser and L.E. Trotter, Jr. Two computationally difficult set covering problems that arise in computing the 1-width of incidence matrices of Steiner triple systems. Mathematical Programming Study, 2:72–81, 1974.

[9] R.S. Garfinkel and G.L. Nemhauser. Optimal political districting by implicit enumeration techniques. Management Science, 16:495–508, 1970.

[10] R.S. Garfinkel and G.L. Nemhauser. Integer Programming. Wiley, 1972.

[11] Z. Gu, G.L. Nemhauser and M. Savelsbergh. Lifted flow cover inequalities for mixed 0-1 integer programs. Mathematical Programming, 85:439–467, 1999.

[12] Y. Guan, S. Ahmed and G.L. Nemhauser. Cutting planes for multi-stage stochastic integer programs. Operations Research, 57:287–298, 2009.

[13] Y. Guan, S. Ahmed, G.L. Nemhauser and A. Miller. A branch-and-cut algorithm for the stochastic uncapacitated lot-sizing problem. Mathematical Programming, 105:55–84, 2006.

[14] C. Hane, C. Barnhart, E.L. Johnson, R. Marsten, G.L. Nemhauser and G. Sigismondi. The fleet assignment problem: Solving a large-scale integer program. Mathematical Programming, 70:211–232, 1995.

[15] W.W. Hardgrave and G.L. Nemhauser. On the relation between the traveling-salesman and the longest-path problems. Operations Research, 10:647–657, 1962.

[16] D. Klabjan, E.L. Johnson and G.L. Nemhauser. Solving large airline crew scheduling problems: Random pairing generation and strong branching. Computational Optimization and Applications, 20:73–91, 2001.

[17] J. Luedtke, S. Ahmed and G.L. Nemhauser. An integer programming approach to linear programs with probabilistic constraints. Mathematical Programming, 122:247–272, 2010.

[18] D. Merchant and G.L. Nemhauser. A model and an algorithm for the dynamic traffic assignment problem. Transportation Science, 12:183–199, 1978.

[19] G.L. Nemhauser. Introduction to Dynamic Programming. Wiley, 1966.

[20] G.L. Nemhauser, M. Savelsbergh and G. Sigismondi. MINTO: A Mixed INTeger Optimizer. Operations Research Letters, 15:47–58, 1994.

[21] G.L. Nemhauser and M. Trick. Scheduling a major college basketball conference. Operations Research, 46:1–8, 1998.

[22] G.L. Nemhauser and L.E. Trotter, Jr. Vertex packings: Structural properties and algorithms. Mathematical Programming, 8:232–248, 1975.

[23] G.L. Nemhauser and L.A. Wolsey. Integer and Combinatorial Optimization. Wiley, 1988.

[24] G.L. Nemhauser and L.A. Wolsey. A recursive procedure to generate all cuts for 0-1 mixed integer programs. Mathematical Programming, 46:379–390, 1990.

[25] J.P. Richard, I. de Farias and G.L. Nemhauser. Lifted inequalities for 0-1 mixed integer programming: Basic theory and algorithms. Mathematical Programming, 98:89–113, 2003.

[26] J. Rosenberger, A. Schaefer, D. Goldsman, E. Johnson, A. Kleywegt and G.L. Nemhauser. A stochastic model of airline operations. Transportation Science, 36:357–377, 2002.

[27] J.P. Vielma, S. Ahmed and G.L. Nemhauser. Mixed-integer models for nonseparable piecewise linear optimization: Unifying framework and extensions. Operations Research, 58:303–315, 2010.

First Order Methods for Large Scale Optimization: Error Bounds and Convergence Analysis

Zhi-Quan Luo
Department of Electrical and Computer Engineering
University of Minnesota, Minneapolis, MN 55455, USA
([email protected])

1. Preamble

Needless to say, I am deeply honored by the recognition of the 2010 Farkas prize. Over the years, my research has focused on various algorithmic and complexity issues in optimization that are strongly motivated by engineering applications. I think the 2010 Farkas prize is, more than anything, a recognition of this type of interdisciplinary optimization work.

Of course, doing research is not so much about winning a prize as it is about having fun, especially if you have an opportunity to work with smart colleagues and students. In this regard, I have been blessed. Collaborations with brilliant people like Paul Tseng and Jos Sturm have been a real source of inspiration. I am really fortunate to have worked closely with my professors at MIT, John Tsitsiklis and Dimitri Bertsekas, and later with colleagues like Jong-Shi Pang, Yinyu Ye and Shuzhong Zhang. They have taught me many things. Their energy and wisdom have shaped my research and enriched my academic life. To all of them and to the prize committee, I am deeply grateful.

If I have to pick one individual who had the most impact on my research over the last 20 years, it would be my close friend Paul Tseng, with whom I published 20 journal papers. That amounts to one joint paper per year, for 20 years! This in itself is a true testament to the depth and breadth of our collaboration. It is to the fond memories of Paul that I dedicate this article.

In 1989, while I was finishing up my Ph.D. thesis at MIT, Paul Tseng, then a postdoc with Bertsekas, posed an open question to me as a challenge: establish the convergence of the matrix splitting algorithm for convex QP with box constraints. We worked really hard for a few months and eventually settled this question. The set of techniques we developed for solving this problem turned out to be quite useful for a host of other problems and algorithms. This then led to a series of joint papers [1–7] published in the early 1990s, in which we developed a general framework for the convergence analysis of first order methods for large scale optimization. This framework is broad and allows one to establish a linear rate of convergence in the absence of strong convexity. Most interestingly, some of this work has a direct bearing on the L1-regularized sparse optimization found in many contemporary applications such as compressive sensing and image processing. In the following, I will briefly outline this algorithmic framework and the main analysis tools (i.e., error bounds), and mention their connections to compressive sensing applications.

2. A General Framework

Let $f : \mathbb{R}^n \to \mathbb{R}$ be a continuously differentiable function whose gradient is Lipschitz continuous on some nonempty closed convex set $X$ in $\mathbb{R}^n$. We are interested in finding a stationary point of $f$ over $X$, i.e., a point $x \in \mathbb{R}^n$ satisfying
$$x = [x - \nabla f(x)]^+_X,$$
where $[\cdot]^+_X$ denotes the orthogonal projection onto $X$. We assume throughout that $\inf_{x \in X} f(x) > -\infty$ and that the set of stationary points, denoted by $\bar{X}$, is nonempty.

[Photo: Zhi-Quan Luo and Paul Tseng]

We consider a class of feasible descent methods that update the iterates according to
$$x^{r+1} := [x^r - \alpha^r \nabla f(x^r) + e^r]^+_X, \quad r = 0, 1, \ldots \tag{1}$$
where $e^r$ is a sufficiently small “error” vector satisfying
$$\|e^r\| \le \kappa_1 \|x^{r+1} - x^r\|, \quad \kappa_1 > 0,$$
and the step size $\alpha^r$ can be chosen either as a suitable constant, via exact line search, or by the Armijo rule. This is a broad class that includes the gradient projection algorithm of Goldstein and of Levitin and Polyak, a certain matrix splitting algorithm for convex QP, coordinate descent methods, the extragradient method of Korpelevich, and the proximal minimization algorithm of Martinet, among others. The aforementioned methods have been studied extensively. Unfortunately, the existing analysis (prior to 1990) typically requires some nondegeneracy assumption on the problem (such as uniqueness of the solution) which does not hold for many “real-world” problems.

A new line of analysis was introduced by Tseng and myself, whose key lies in a certain error bound which estimates the distance to the solution set $\bar{X}$ from a point $x \in X$ near $\bar{X}$: for every $v \ge \inf_{x \in X} f(x)$ there exist scalars $\delta > 0$ and $\tau > 0$ such that
$$\mathrm{dist}(x, \bar{X}) \le \tau \left\| x - [x - \nabla f(x)]^+_X \right\| \tag{2}$$
for all $x \in X$ with $f(x) \le v$ and $\|x - [x - \nabla f(x)]^+_X\| \le \delta$. In addition, we make a mild assumption on $f$ and $\bar{X}$ regarding the “proper” separation of the isocost surfaces of $f$ on the solution set $\bar{X}$: there exists a scalar $\epsilon > 0$ such that
$$x, y \in \bar{X}, \; f(x) \ne f(y) \;\Rightarrow\; \|x - y\| \ge \epsilon. \tag{3}$$

This condition holds whenever $f$ takes on only a finite number of values on $\bar{X}$, or whenever the connected components of $\bar{X}$ are properly separated from each other. In particular, it holds automatically when $f$ is convex.

By using the error bound (2) and the isocost surface separation condition (3), it has been shown that a sequence generated by iterations of the form (1) converges at least linearly to a stationary point in $\bar{X}$ (even if $\bar{X}$ is not a singleton). This result implies, as an immediate consequence, a linear rate of convergence for the extragradient method, the proximal minimization algorithm, and the coordinate descent method, respectively. A nice feature of these convergence results is that they do not require any nondegeneracy assumption on the problem.

We summarize below the known sufficient conditions for both (2) and (3) to hold.

(a) (Strongly convex case) $f$ is strongly convex.

(b) (Quadratic case) $f$ is quadratic, and $X$ is a polyhedral set.

(c) (Composite case) $f(x) = \langle q, x \rangle + g(Ex)$ for all $x$, where $E$ is an $m \times n$ matrix with no zero column, $q$ is a vector in $\mathbb{R}^n$, and $g$ is a strongly convex differentiable function on $\mathbb{R}^m$ with $\nabla g$ Lipschitz continuous on $\mathbb{R}^m$; $X$ is a polyhedral set.

(d) (Dual functional case) $f(x) = \langle q, x \rangle + \max_{y \in Y} \{ \langle Ex, y \rangle - g(y) \}$ for all $x$, where $Y$ is a polyhedral set in $\mathbb{R}^m$, $E$ is an $m \times n$ matrix with no zero column, $q$ is a vector in $\mathbb{R}^n$, and $g$ is a strongly convex differentiable function on $\mathbb{R}^m$ with $\nabla g$ Lipschitz continuous on $\mathbb{R}^m$; $X$ is a polyhedral set.

3. L1-regularized Minimization

In contemporary applications of large scale convex optimization, we are interested in the L1-regularized convex QP:
$$\min_{x \in \mathbb{R}^n} \; f(x) = \lambda \|x\|_1 + \frac{1}{2} \|Ax - b\|^2, \tag{4}$$
where $\lambda > 0$ is a constant, $A$ is an $m \times n$ matrix, and $b$ is a vector in $\mathbb{R}^m$. The optimization problem (4) arises naturally in compressive sensing applications where the goal is to obtain a sparse solution to a large but under-determined linear system $Ax = b$. Notice that if the matrix $A$ has full column rank, then $f(x)$ is strictly convex. However, in practical applications (e.g., compressive sensing), $A$ is a fat matrix (i.e., $m \ll n$) and the objective function $f(x)$ may not be strictly convex, so there can be multiple optimal solutions for (4).

The main difference between (4) and the problem in Section 2 is the inclusion of the non-smooth term $\|x\|_1$ in the objective function. Can we extend the rate of convergence analysis framework described in Section 2 to this non-smooth setup? The answer is positive. In particular, Paul Tseng in 2001 (before L1-minimization became a hot research topic) showed a general result [8] for the coordinate descent method as long as the non-smooth term is separable across variables. Specialized to (4), Paul’s result implies that each limit point generated by the coordinate descent algorithm for solving (4) is a global optimal solution. Recently, we have further strengthened this result to show that the entire sequence of iterates converges linearly to a global optimal solution of (4), without any nondegeneracy assumption.
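As an illustration of the algorithm in question (my own sketch, not code from [8]), cyclic coordinate descent for (4) takes only a few lines; each one-dimensional subproblem is solved exactly by soft thresholding.

import numpy as np

# A minimal sketch of cyclic coordinate descent for problem (4):
#   min_x  lam*||x||_1 + 0.5*||Ax - b||^2.
# Each coordinate subproblem has a closed-form soft-thresholding solution.

def soft_threshold(v, t):
    return np.sign(v) * max(abs(v) - t, 0.0)

def cd_lasso(A, b, lam, num_passes=100):
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x                      # residual, maintained incrementally
    col_sq = (A ** 2).sum(axis=0)      # ||A_j||^2 for each column j
    for _ in range(num_passes):
        for j in range(n):
            # Minimize over x_j with all other coordinates held fixed.
            rho = A[:, j] @ r + col_sq[j] * x[j]
            x_new = soft_threshold(rho, lam) / col_sq[j]
            r += A[:, j] * (x[j] - x_new)
            x[j] = x_new
    return x

# Fat matrix (m << n), as in compressive sensing: multiple optima are
# possible, yet every limit point of the iterates is globally optimal.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 100))
b = rng.standard_normal(20)
print(np.count_nonzero(np.round(cd_lasso(A, b, lam=1.0), 6)))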

More generally, Paul Tseng considered [9] a class of proximal gradient methods for minimizing over a convex set $X$ an objective function $f(x) = f_1(x) + f_2(x)$, where both $f_1$ and $f_2$ are convex, but $f_2$ might be non-smooth. He showed that as long as the local error bound holds around the optimal solution set, global linear convergence can be expected even in this non-smooth setup. Remarkably, Paul was able to establish a local error bound for the Group Lasso case where
$$f(x) = \lambda \sum_{J \in \mathcal{J}} \|x_J\|_2 + \frac{1}{2} \|Ax - b\|^2.$$
Here $\mathcal{J}$ is a partition of $\{1, 2, \ldots, n\}$ and $x_J$ is the subvector of $x$ with components taken from $J \in \mathcal{J}$. It is likely that this framework can be further extended to other types of non-smooth convex optimization problems/algorithms used for recovering sparse or low-rank solutions.

REFERENCES

[1] Z.-Q. Luo and P. Tseng. Error bound and reduced-gradient projection algorithms for convex minimization over a polyhedral set. SIAM Journal on Optimization, 3:43–59, 1993.

[2] Z.-Q. Luo and P. Tseng. On the convergence rate of dual ascent methods for strictly convex minimization. Mathematics of Operations Research, 18:846–867, 1993.

[3] Z.-Q. Luo and P. Tseng. Error bounds and convergence analysis of feasible descent methods: A general approach. Annals of Operations Research, 46:157–178, 1993.

[4] Z.-Q. Luo and P. Tseng. On the linear convergence of descent methods for convex essentially smooth minimization. SIAM Journal on Control & Optimization, 30(2):408–425, 1992.

[5] Z.-Q. Luo and P. Tseng. Error bound and convergence analysis of matrix splitting algorithms for the affine variational inequality problem. SIAM Journal on Optimization, 2(1):43–54, 1992.

[6] Z.-Q. Luo and P. Tseng. On the convergence of the coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications, 72(1):7–35, 1992.

[7] Z.-Q. Luo and P. Tseng. On the convergence of a matrix splitting algorithm for the symmetric linear complementarity problem. SIAM Journal on Control & Optimization, 29(5):1037–1060, 1991.

[8] P. Tseng. Convergence of a block coordinate descent method for nondifferentiable minimization. Journal of Optimization Theory and Applications, 109:475–494, 2001.

[9] P. Tseng. Approximation accuracy, gradient methods, and error bound for structured convex optimization. Mathematical Programming, Series B, 125(2):263–295, 2010.


Probability Inequalities for Sums of Random Matrices and Their Applications in Optimization

Anthony Man-Cho So
Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong
([email protected])

In this article, I will highlight the main ideas in my recent work Moment Inequalities for Sums of Random Matrices and Their Applications in Optimization (accepted for publication in Mathematical Programming, 2009) and discuss some of the recent developments.

The story begins with a groundbreaking work of Nemirovski [4], which revealed a close relationship between the theoretical properties of a host of optimization problems and the behavior of a sum of certain random matrices. Indeed, Nemirovski showed that the construction of so-called safe tractable approximations of certain chance constrained linear matrix inequalities, as well as the analysis of a semidefinite relaxation of certain non-convex quadratic optimization problems, can be achieved by answering the following question:

Question (Q) Let $\xi_1, \ldots, \xi_h$ be independent mean zero random variables, each of which is either (i) supported on $[-1, 1]$, or (ii) normally distributed with unit variance. Furthermore, let $Q_1, \ldots, Q_h$ be arbitrary $m \times n$ matrices satisfying
$$\sum_{i=1}^{h} Q_i Q_i^T \preceq I \quad \text{and} \quad \sum_{i=1}^{h} Q_i^T Q_i \preceq I.$$
Under what conditions on $t > 0$ will we have an exponential decay of the tail probability
$$\Pr\left( \Big\| \sum_{i=1}^{h} \xi_i Q_i \Big\|_\infty \ge t \right)?$$
Here, $\|A\|_\infty$ denotes the spectral norm of the $m \times n$ matrix $A$.

[Photo: Nick Sahinidis presenting the 2010 Young Researcher Prize to Anthony Man-Cho So]

In a technical tour de force, Nemirovski showed that whenever $t \ge \Omega((m+n)^{1/6})$, the aforementioned tail probability will have an exponential decay. Moreover, he conjectured that the same result would hold for much smaller values of $t$, namely, for $t \ge \Omega(\sqrt{\ln(m+n)})$. As argued in [4], the threshold $\Omega(\sqrt{\ln(m+n)})$ is in some sense the best one could hope for. Moreover, if the above conjecture is true, then it would immediately imply improved performance guarantees for various optimization problems. Therefore, there is great interest in determining the validity of this conjecture.

As it turns out, the behavior of the matrix-valued random variable $S_h \equiv \sum_{i=1}^{h} \xi_i Q_i$ has been extensively studied in the functional analysis and probability theory literature. One of the tools that is particularly useful for addressing Nemirovski’s conjecture is the so-called Khintchine-type inequalities. Roughly speaking, such inequalities provide upper bounds on the $p$-norm of the random variable $\|S_h\|_\infty$ in terms of suitable normalizations of the matrices $Q_1, \ldots, Q_h$. Once these bounds are available, it is easy to derive tail bounds for $\|S_h\|_\infty$ using Markov’s inequality. In my work, I showed that the validity of Nemirovski’s conjecture is in fact a simple consequence of the so-called non-commutative Khintchine’s inequality in functional analysis [3] (see also [1]). Using this result, I was then able to obtain the best known performance guarantees for a host of optimization problems.

Of course, this is not the end of the story. In fact, it is clear from recent developments that the behavior of a sum of random matrices plays an important role in the design and analysis of many optimization algorithms. One representative example is the analysis of the nuclear norm minimization heuristic (see, e.g., [6] and the references therein). Furthermore, there has been some effort in developing easy-to-use recipes for deriving tail bounds on the spectral norm of a sum of random matrices [5, 7]. In a recent work [2], we applied such recipes to construct safe tractable approximations of chance-constrained linear matrix inequalities with dependent perturbations. Our results generalize the existing ones in several directions and further demonstrate the relevance of probability inequalities for matrix-valued random variables in the context of optimization.

In summary, recent advances in probability theory have yielded many powerful tools for analyzing matrix-valued random variables. These tools will be invaluable as we tackle matrix-valued random variables that arise quite naturally in many optimization problems.

REFERENCES

[1] A. Buchholz. Operator Khintchine inequality in non-commutative probability. Mathematische Annalen, 319:1–16, 2001.

[2] S.-S. Cheung, A. M.-C. So, and K. Wang. Chance-constrained linear matrix inequalities with dependent perturbations: A safe tractable approximation approach. Preprint, 2011.

[3] F. Lust-Piquard. Inegalites de Khintchine dans $C_p$ ($1 < p < \infty$). Comptes Rendus de l'Academie des Sciences de Paris, Serie I, 303(7):289–292, 1986.

[4] A. Nemirovski. Sums of random symmetric matrices and quadratic optimization under orthogonality constraints. Mathematical Programming, Series B, 109(2–3):283–317, 2007.

[5] R. I. Oliveira. Sums of random Hermitian matrices and an inequality by Rudelson. Electronic Communications in Probability, 15:203–212, 2010.

[6] B. Recht. A simpler approach to matrix completion. Preprint, available at http://arxiv.org/abs/0910.0651, 2009.

[7] J. A. Tropp. User-friendly tail bounds for sums of random matrices. Preprint, available at http://arxiv.org/abs/1004.4389, 2010.

Fast Multiple Splitting Algorithms for Convex Optimization

Shiqian Ma
Department of IEOR, Columbia University
New York, NY 10027
([email protected])

Our paper “Fast Multiple Splitting Algorithms for Convex Optimization” [3] considers alternating direction type methods for minimizing the sum of $K$ ($K \ge 2$) convex functions $f_i(x)$, $i = 1, \ldots, K$:
$$\min_{x \in \mathbb{R}^n} \; F(x) \equiv \sum_{i=1}^{K} f_i(x). \tag{1}$$

Many problems arising in machine learning, medical imaging, signal processing, etc., can be formulated as minimization problems of the form (1). For example, the robust principal component analysis problem can be cast as minimizing the sum of two convex functions (see [2]):
$$\min_{X \in \mathbb{R}^{m \times n}} \; \|X\|_* + \rho \|M - X\|_1, \tag{2}$$
where $\|X\|_*$ is the nuclear norm of $X$, which is defined as the sum of the singular values of $X$, $\|Y\|_1 := \sum_{ij} |Y_{ij}|$, $M \in \mathbb{R}^{m \times n}$, and $\rho > 0$ is a weighting parameter. As another example, one version of the compressed sensing MRI problem can be cast as minimizing the sum of three convex functions (see [5]):
$$\min_{x \in \mathbb{R}^n} \; \alpha \, TV(x) + \beta \|Wx\|_1 + \tfrac{1}{2} \|Ax - b\|^2, \tag{3}$$
where $TV(x)$ is the total variation function, $W$ is a wavelet transform, $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and $\alpha > 0$, $\beta > 0$ are weighting parameters. Although these problems can all be reformulated as conic programming problems (e.g., (2) can be reformulated as a semidefinite programming problem and (3) can be reformulated as a second-order cone programming problem), polynomial-time methods such as interior-point methods are not practical for these problems because they are usually of huge size. However, these problems have a common property: although it is not easy to minimize the sum of the $f_i$’s, it is relatively easy to solve problems of the form
$$\min_{x \in \mathbb{R}^n} \; \tau f_i(x) + \tfrac{1}{2} \|x - z\|_2^2$$
for each of the functions $f_i(x)$, for any $\tau > 0$ and $z \in \mathbb{R}^n$. Thus, alternating direction type methods that split the objective function and minimize each function $f_i$ alternatingly can be effective for these problems.
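For instance (a sketch of the standard closed-form proximal maps, my own illustration rather than code from [3]), the subproblems for the $\ell_1$ norm appearing in (2) and (3) and for the nuclear norm in (2) both reduce to soft thresholding:

import numpy as np

# Two of the "easy" proximal subproblems min_x tau*f_i(x) + 0.5*||x - z||_2^2.

def prox_l1(z, tau):
    """f_i(x) = ||x||_1: componentwise soft thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def prox_nuclear(Z, tau):
    """f_i(X) = ||X||_*: soft thresholding of the singular values."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

z = np.array([3.0, -0.5, 1.2])
print(prox_l1(z, tau=1.0))               # -> [ 2.  -0.   0.2]
print(prox_nuclear(np.eye(3) * 2.0, 1.0))  # singular values 2 shrink to 1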

The main contribution of [3] is two classes of alternating direction type methods, which are called multiple splitting algorithms (MSA) in [3], with provable iteration complexity bounds for obtaining an ε-optimal solution. For the unconstrained problem
$$\min_{x} \; f(x), \tag{4}$$
$x$ is called an ε-optimal solution to (4) if $f(x) \le f(x^*) + \epsilon$, where $x^*$ is an optimal solution to (4). Iteration complexity results for gradient methods have been well studied in the literature. It is known that if one applies the classical gradient method with step length $\tau_k$,
$$x^{k+1} := x^k - \tau_k \nabla f(x^k), \tag{5}$$
to solve (4), an ε-optimal solution is obtained in $O(1/\epsilon)$ iterations under certain conditions. In [6], Nesterov proposed techniques to accelerate (5). One of his acceleration techniques implements the following at each iteration:
$$x^{k+1} := y^k - \tau_k \nabla f(y^k), \qquad y^{k+1} := x^{k+1} + \frac{k-1}{k+2}\left(x^{k+1} - x^k\right). \tag{6}$$

[Photo: (Left to right) Miguel Anjos, Sven Leyffer, Shiqian Ma and Nick Sahinidis]

Nesterov proved that the iteration complexity of (6) is $O(1/\sqrt{\epsilon})$ for obtaining an ε-optimal solution [6, 7]. He also proved that $O(1/\sqrt{\epsilon})$ is the best bound one can get if one uses only first-order information.
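As a quick illustration (my own sketch, not code from [3] or [6]), the following Python fragment runs the classical iteration (5) and the accelerated iteration (6) side by side on a smooth least-squares objective with step length $1/L$; the data are invented for demonstration only.

import numpy as np

# Compare the gradient iteration (5) with Nesterov's accelerated
# iteration (6) on the smooth convex function f(x) = 0.5*||Ax - b||^2.

def gradient(A, b, x):
    return A.T @ (A @ x - b)

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 100))
b = rng.standard_normal(200)
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
x_gd = x = y = np.zeros(100)

for k in range(1, 301):
    x_gd = x_gd - gradient(A, b, x_gd) / L       # iteration (5)
    x_new = y - gradient(A, b, y) / L            # iteration (6), gradient step
    y = x_new + (k - 1.0) / (k + 2.0) * (x_new - x)  # momentum extrapolation
    x = x_new

f = lambda z: 0.5 * np.linalg.norm(A @ z - b) ** 2
print("plain gradient :", f(x_gd))
print("accelerated    :", f(x))   # typically much closer to the optimum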

Our MSA algorithms are based on the idea of splitting the objective function into $K$ parts and applying a linearization technique to minimize each function $f_i$ plus a proximal term, alternatingly. The basic version of MSA iterates the following updates:
$$x^i_{(k+1)} := p_i\big(w^i_{(k)}, \ldots, w^i_{(k)}\big), \quad \forall i = 1, \ldots, K,$$
$$w^i_{(k+1)} := \frac{1}{K} \sum_{j=1}^{K} x^j_{(k+1)}, \quad \forall i = 1, \ldots, K, \tag{7}$$
with initial points $w^1_{(0)} = \cdots = w^K_{(0)} = x^0$, where
$$p_i(v^1, \ldots, v^{i-1}, v^{i+1}, \ldots, v^K) := \arg\min_u Q_i(v^1, \ldots, v^{i-1}, u, v^{i+1}, \ldots, v^K),$$
$$Q_i(v^1, \ldots, v^{i-1}, u, v^{i+1}, \ldots, v^K) := f_i(u) + \sum_{j=1, j \ne i}^{K} \bar{f}_j(u, v^j),$$
and
$$\bar{f}_i(u, v) := f_i(v) + \langle \nabla f_i(v), u - v \rangle + \frac{1}{2\mu} \|u - v\|^2.$$
Notice that $Q_i(v^1, \ldots, v^{i-1}, u, v^{i+1}, \ldots, v^K)$ is an approximation to $F(u)$ that keeps the function $f_i(u)$ unchanged and linearizes the other functions $f_j(u)$, $j \ne i$, while adding proximal terms.
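To illustrate how (7) operates (a sketch under my own simplifying assumptions, not code from [3]), the following Python fragment applies the basic MSA to $K$ smooth quadratics $f_i(u) = \tfrac{1}{2}\|A_i u - b_i\|^2$; for such $f_i$, each $p_i$ subproblem reduces to a linear system.

import numpy as np

# Basic MSA (7) for K smooth quadratics f_i(u) = 0.5*||A_i u - b_i||^2.
# Since all w^i_(k) coincide here, p_i minimizes over u:
#   f_i(u) + sum_{j != i} [ f_j(w) + <grad f_j(w), u - w> + ||u - w||^2/(2 mu) ].

rng = np.random.default_rng(0)
K, n = 3, 20
As = [rng.standard_normal((30, n)) for _ in range(K)]
bs = [rng.standard_normal(30) for _ in range(K)]

grads = [lambda u, A=A, b=b: A.T @ (A @ u - b) for A, b in zip(As, bs)]
L_max = max(np.linalg.norm(A, 2) ** 2 for A in As)
mu = 1.0 / L_max                      # step choice consistent with Theorem 1

w = np.zeros(n)
for _ in range(200):
    xs = []
    for i in range(K):
        # Normal equations of the p_i subproblem (K-1 proximal-linearized terms).
        H = As[i].T @ As[i] + (K - 1) / mu * np.eye(n)
        rhs = As[i].T @ bs[i]
        for j in range(K):
            if j != i:
                rhs += w / mu - grads[j](w)
        xs.append(np.linalg.solve(H, rhs))
    w = sum(xs) / K                   # w^i_(k+1): average of the x^j_(k+1)

F = lambda u: sum(0.5 * np.linalg.norm(A @ u - b) ** 2 for A, b in zip(As, bs))
print("F at best split iterate:", min(F(x) for x in xs))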

The following theorem gives the iteration complexity result for Algorithm MSA.

Theorem 1 Suppose $x^*$ is an optimal solution to problem (1). Assume the $f_i$’s are smooth functions and their gradients are Lipschitz continuous with Lipschitz constants $L(f_i)$, $i = 1, \ldots, K$. If $\mu \le 1/\max_i L(f_i)$, then the sequence $\{x^i_{(k)}, w^i_{(k)}\}_{i=1}^{K}$ generated by MSA (7) satisfies:
$$\min_{i=1,\ldots,K} F(x^i_{(k)}) - F(x^*) \le \frac{(K-1)\|x^0 - x^*\|^2}{2\mu k}.$$
Thus (7) produces a sequence that converges in objective function value. Also, when $\mu \ge \beta/\max_i L(f_i)$, where $0 < \beta \le 1$, the number of iterations required to get an ε-optimal solution is $O(1/\epsilon)$.

The proof of this theorem basically follows an argument used by Beck and Teboulle in [1] for proving the iteration complexity of the Iterative Shrinkage/Thresholding Algorithm.

A fast version of MSA (FaMSA) is also presented in [3]. Under the same assumptions as in Theorem 1, the iteration complexity bound of FaMSA is improved to $O(1/\sqrt{\epsilon})$, while the computational effort in each iteration is almost the same as in MSA.

To the best of our knowledge, the iteration complexity results for MSA and FaMSA are the first ones of this type that have been given for alternating direction methods involving three or more directions (i.e., functions).

Our theorems require the functions to be smooth in order to prove the complexity bounds. However, if any of the functions is not smooth, as is the case in (2) and (3), we can use the smoothing technique proposed by Nesterov in [8] to smooth them. This smoothing technique also guarantees that the gradients of the smoothed functions are Lipschitz continuous.

The numerical results in [3] show that the multiple splitting algorithms are much faster than the classical interior-point method for solving some randomly generated Fermat-Weber problems. Also, since the multiple splitting algorithms in [3] are all Jacobi type algorithms, their performance should be even better if they can be implemented in a parallel computing environment.

Gauss-Seidel type alternating linearization meth-ods are studied in another paper [4].

REFERENCES

[1] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.

[2] E. J. Candes, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Preprint, available at http://arxiv.org/abs/0912.3599, 2009.

[3] D. Goldfarb and S. Ma. Fast multiple splitting algorithms for convex optimization. Technical report, Department of IEOR, Columbia University. Preprint available at http://arxiv.org/abs/0912.4570, 2009.

[4] D. Goldfarb, S. Ma, and K. Scheinberg. Fast alternating linearization methods for minimizing the sum of two convex functions. Technical report, Department of IEOR, Columbia University. Preprint available at http://arxiv.org/abs/0912.4571, 2010.

[5] S. Ma, W. Yin, Y. Zhang, and A. Chakraborty. An efficient algorithm for compressed MR imaging using total variation and wavelets. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–8, 2008.

[6] Y. E. Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k^2). Dokl. Akad. Nauk SSSR, 269:543–547, 1983.

[7] Y. E. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, 2004.

[8] Y. E. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, Series A, 103:127–152, 2005.

Nominations for Society Prizes Sought

The Society awards four prizes, now annually, at the INFORMS annual meeting. We seek nominations and applications for each of them, due by June 30, 2011. Details for each of the prizes, including eligibility rules and past winners, can be found by following the links from http://www.informs.org/Community/Optimization-Society/Prizes.

Each of the four awards includes a cash amount of US$1,000 and a citation certificate. The award winners will be invited to give a presentation in a special session sponsored by the Optimization Society during the INFORMS annual meeting in Charlotte, NC in November 2011 (the winners will be responsible for their own travel expenses to the meeting).

The Khachiyan Prize is awarded for outstanding life-time contributions to the field of optimization by an individual or team. The topic of the contribution must belong to the field of optimization in its broadest sense. Recipients of the INFORMS John von Neumann Theory Prize or the MPS/SIAM Dantzig Prize in prior years are not eligible for the Khachiyan Prize. The prize committee for the Khachiyan Prize is as follows:

• George Nemhauser (Chair), [email protected]

• Tamas Terlaky

• Yurii Nesterov

• Lex Schrijver

Nominations and applications for the Khachiyan Prize should be made via EasyChair: https://www.easychair.org/conferences/?conf=ioskp2011. Please direct any inquiries to the prize-committee chair.


The Farkas Prize is awarded for outstanding contributions to the field of optimization by a researcher or a team of researchers. The contribution may be a published paper, a submitted and accepted paper, a series of papers, a book, a monograph, or software. The author(s) must have been awarded their terminal degree within the twenty-five calendar years preceding the year of the award. The prize committee for the Farkas Prize is as follows:

• Gerard Cornuejols (Chair), [email protected]

• Alexander Shapiro

• David Shmoys

• Stephen Wright

Nominations and applications for the Farkas Prize should be made via email to the prize-committee chair. Please direct any inquiries to the prize-committee chair.

The Prize for Young Researchers is awarded to one or more young researcher(s) for an outstanding paper in optimization that is submitted to and accepted, or published in, a refereed professional journal. The paper must be published in, or submitted to and accepted by, a refereed professional journal within the four calendar years preceding the year of the award. All authors must have been awarded their terminal degree within the eight calendar years preceding the year of the award. The prize committee for the Prize for Young Researchers is as follows:

• Jim Renegar (Chair), [email protected]

• Dan Bienstock

• Endre Boros

• Tom McCormick

Nominations and applications for the Prize for Young Researchers should be made via email to the prize-committee chair. Please direct any inquiries to the prize-committee chair.

The Student Paper Prize is awarded to one or more student(s) for an outstanding paper in optimization that is submitted to and accepted, or published in, a refereed professional journal within the three calendar years preceding the year of the award. Every nominee/applicant must be a student on the first of January of the year of the award. Any co-author(s) not nominated for the award should send a letter indicating that the majority of the nominated work was performed by the nominee(s). The prize committee for the Student Paper Prize is as follows:

• Matthias Koppe (Chair), [email protected]

• Katya Scheinberg

• Jean-Philippe Richard

Nominations and applications for the Student Paper Prize should be made via email to the prize-committee chair. Please direct any inquiries to the prize-committee chair.

Nominations of Candidates for Society Officers Sought

Nick Sahinidis will have completed his term as Most-Recent Past-Chair of the Society at the conclusion of the 2011 annual INFORMS meeting. Jon Lee is continuing as Chair through 2012. Marina Epelman will have completed her (extended) term as Secretary/Treasurer, also at the conclusion of the INFORMS meeting. The Society is indebted to Nick and Marina for their work.

We would also like to thank four Society Vice-Chairs who will be completing their two-year terms at the conclusion of the INFORMS meeting: Miguel Anjos, Steven Dirkse, Oktay Gunluk and Mauricio Resende.

We are currently seeking nominations of candidates for the following positions:

• Chair-Elect

• Secretary/Treasurer

• Vice-Chair for Computational Optimization and Software

• Vice-Chair for Integer Programming

• Vice-Chair for Linear Programming and Complementarity

• Vice-Chair for Networks

Self-nominations for all of these positions are encouraged.


To ensure smooth transition of the chairmanship of the Society, the Chair-Elect is elected and serves a one-year term before assuming a two-year position as Chair; thus this is a three-year commitment. As stated in the Society Bylaws, “The Chair shall be the chief administrative officer of the OS and shall be responsible for the development and execution of the Society’s program. He/she shall (a) call and organize meetings of the OS, (b) appoint ad hoc committees as required, (c) appoint chairs and members of standing committees, (d) manage the affairs of the OS between meetings, and (e) preside at OS Council meetings and Society membership meetings.”

According to Society Bylaws, “The Secretary/Treasurer shall conduct the correspondence of the OS, keep the minutes and records of the Society, maintain contact with INFORMS, receive reports of activities from those Society Committees that may be established, conduct the election of officers and Members of Council for the OS, make arrangements for the regular meetings of the Council and the membership meetings of the OS. As treasurer, he/she shall also be responsible for disbursement of the Society funds as directed by the OS Council, prepare and distribute reports of the financial condition of the OS, help prepare the annual budget of the Society for submission to INFORMS. It will be the responsibility of the outgoing Secretary/Treasurer to make arrangements for the orderly transfer of all the Society’s records to the person succeeding him/her.” The Secretary/Treasurer shall serve a two-year term.

According to Society Bylaws, “The main responsibility of the Vice Chairs will be to help INFORMS Local Organizing committees identify cluster chairs and/or session chairs for the annual meetings. In general, the Vice Chairs shall serve as the point of contact with their sub-disciplines.” Vice Chairs shall serve two-year terms.

Please send your nominations or self-nominations to Marina Epelman ([email protected]), including contact information for the nominee, by Sunday, July 31, 2011. Online elections will begin in mid-August, with new officers taking up their duties at the conclusion of the 2011 annual INFORMS meeting.

Announcing the Fourth INFORMS Optimization Society Conference

The School of Business Administration at the University of Miami is pleased to host the Fourth INFORMS Optimization Society conference, to take place on the Coral Gables campus of the University of Miami, February 24–26, 2012. The theme of the conference is “Optimization and Analytics: New Frontiers in Theory and Practice.” The Conference Co-Chairs are Anuj Mehrotra and Michael A. Trick. The Organizing Committee includes Edward Baker, Hari Natarajan and Tallys Yunes. Further details will gradually appear on the conference web site at http://www.bus.miami.edu/events/ios2012/index.html.

Previous editions of the conference were held in San Antonio (2006), Atlanta (2008) and Gainesville (2010). Some of you may know/remember that the School of Business Administration at the University of Miami hosted the wonderful Mixed-Integer Programming meeting, MIP 2006. Anyone who was at that meeting knows that it is a delightful venue, especially in February.