
DISCIPLINED CONVEX PROGRAMMING

A DISSERTATION
SUBMITTED TO THE DEPARTMENT OF ELECTRICAL ENGINEERING
AND THE COMMITTEE ON GRADUATE STUDIES
OF STANFORD UNIVERSITY
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

Michael Charles Grant
December 2004

© Copyright by Michael Charles Grant 2005. All Rights Reserved.

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.

Stephen Boyd (Principal Adviser)

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.

Yinyu Ye

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.

Michael Saunders

Approved for the University Committee on Graduate Studies.

Preface

Convex programming unifies and generalizes least squares (LS), linear programming (LP), and quadratic programming (QP). It has received considerable attention recently for a number of reasons: its attractive theoretical properties; the development of efficient, reliable numerical algorithms; and the discovery of a wide variety of applications in both scientific and non-scientific fields. Courses devoted entirely to convex programming are available at Stanford and elsewhere.

For these reasons, convex programming has the potential to become a ubiquitous numerical technology alongside LS, LP, and QP. Nevertheless, there remains a significant impediment to its more widespread adoption: the high level of expertise in both convex analysis and numerical algorithms required to use it. For potential users whose focus is the application, this prerequisite poses a formidable barrier, especially if it is not yet certain that the outcome will be better than with other methods. In this dissertation, we introduce a modeling methodology called disciplined convex programming whose purpose is to lower this barrier.

As its name suggests, disciplined convex programming imposes a set of conventions to follow when constructing problems. Compliant problems are called, appropriately, disciplined convex programs, or DCPs. The conventions are simple and teachable, taken from basic principles of convex analysis, and inspired by the practices of experts who regularly study and apply convex optimization today. The conventions do not limit generality, but they do allow the steps required to solve DCPs to be automated and enhanced. For instance, determining whether an arbitrary mathematical program is convex is an intractable task, but determining whether that same problem is a DCP is straightforward. A number of common numerical methods for optimization can be adapted to solve DCPs. The conversion of DCPs to solvable form can be fully automated, and the natural problem structure in DCPs can be exploited to improve performance.
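The verification step described above can be made concrete with a short sketch. The example below uses Python and the CVXPY package, a later modeling system built on the disciplined convex programming ruleset; the dissertation itself does not prescribe this package (its own framework is the MATLAB-based cvx), so the specific API here is an assumption for illustration only. A norm-approximation problem is assembled from atoms of known curvature, and checking DCP compliance then reduces to a syntactic walk over the expression tree rather than a general, intractable convexity analysis.

```python
import cvxpy as cp
import numpy as np

# Arbitrary problem data, for illustration only.
np.random.seed(0)
A = np.random.randn(20, 5)
b = np.random.randn(20)

# Build the model from atoms with known curvature, combined
# according to the disciplined convex programming ruleset.
x = cp.Variable(5)
residual = A @ x - b                           # affine in x
objective = cp.Minimize(cp.norm(residual, 1))  # convex atom of an affine argument
constraints = [cp.norm(x, "inf") <= 1]         # convex <= constant: a valid DCP constraint

prob = cp.Problem(objective, constraints)

# Compliance is verified syntactically, not by analyzing the
# underlying function; this is the "straightforward" check.
print(prob.is_dcp())  # True

# Conversion to solvable form and the solver call are automated.
prob.solve()
print(prob.value)
```

Had the objective been written as a nonconvex composition, such as the square root of a residual entry, the same check would reject the problem immediately instead of attempting to solve it.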

Disciplined convex programming also provides a framework for collaboration between users with different levels of expertise. In short, disciplined convex programming allows applications-oriented users to focus on modeling and, as they would with LS, LP, and QP, to leave the underlying mathematical details to experts.

Acknowledgments

To the members of my reading committee, Stephen Boyd, Yinyu Ye, and Michael Saunders: thank you for your guidance, criticism, and endorsement. I am truly fortunate to have studied under men of such tremendous (and well-deserved) reputation. If my bibliography were trimmed to include only the works you have authored, it would remain an undeniably impressive list.

To my adviser Stephen Boyd: it has been a true privilege to be under your tutelage. Your knowledge is boundless, your compassion for your students is heartfelt, and your perspective on engineering, mathematics, and academia is unique and attractive. I would not have returned to Stanford to finish this work for any other professor. I look forward to our continued collaboration.

To the Durand 110 crowd and all of my other ISL colleagues, including David Banjerdpongchai, Ragu Balakrishnan, Young-Man Cho, Laurent El-Ghaoui, Maryam Fazel, Eric Feron, Marc Goldburg, Arash Hassibi, Haitham Hindi, Herve Lebret, Miguel Lobo, Costis Maglaras, Denise Murphy, Greg Raleigh, Lieven Vandenberghe, and Clive Wu: thank you for your collaboration, encouragement, and friendship.

To Alexandre d'Aspremont, Jon Dattoro, Arpita Ghosh, Kan-Lin Hsiung, Siddharth Joshi, Seung Jean Kim, Kwangmoo Koh, Alessandro Magnani, Almir Mutapcic, Dan O'Neill, Sikandar Samar, Jun Sun, Lin Xiao, and Sunghee Yun: you have made this old veteran feel welcome in Professor Boyd's group again, and I wish you true success in your research.

To Doug Chrissan, Paul Dankoski, and V.K. Jones: thank you for your faithful friendship. Just so you know, it is a holiday at my house every day. Forgive me, though, because I still prefer Jing Jing over Su Hong to Go.

To my parents, my brother, and my entire family: this would simply never have happened without your support, encouragement, and love. Thank you in particular, Mom and Dad, for your tireless efforts to provide us with a first-class education, and not just in school.

Last but not least, to my darling wife Callie: how can I adequately express my love and gratitude to you? Thank you for your unwavering support during this endeavor, particularly in these last stages, and especially through the perils of home remodeling and the joys of parenthood. May the time spent completing this work represent but a paragraph in the book of our long lives together.

Contents

Preface
Acknowledgments

1 Introduction
  1.1 Motivation
    1.1.1 The Euclidean norm
    1.1.2 The Manhattan norm
    1.1.3 The Chebyshev norm
    1.1.4 The Hölder norm
    1.1.5 An uncommon choice
    1.1.6 The expertise barrier
    1.1.7 Lowering the barrier
  1.2 Convex programming
    1.2.1 Theoretical properties
    1.2.2 Numerical algorithms
    1.2.3 Applications
    1.2.4 Convexity and differentiability
  1.3 Convexity verification
    1.3.1 Smooth NLP analysis
    1.3.2 Parsing methods
    1.3.3 Empirical analysis
  1.4 Modeling frameworks
  1.5 Synopsis
  1.6 Conventions

2 Motivation
  2.1 Introduction
  2.2 Analysis
    2.2.1 The dual problem
    2.2.2 Definitions
    2.2.3 Feasibility
    2.2.4 Optimality
    2.2.5 Sensitivity
    2.2.6 Summary
  2.3 The solver
  2.4 The smooth case
  2.5 The Euclidean norm
  2.6 The Manhattan norm
  2.7 The Chebyshev norm
  2.8 The Hölder ℓp norm
  2.9 The largest-L norm
  2.10 Conclusion