Bhattacharyya et al (2009) Linear ctrl theory.pdf


Transcript of Bhattacharyya et al (2009) Linear ctrl theory.pdf

• Linear Control Theory

    Structure, Robustness, and Optimization


• AUTOMATION AND CONTROL ENGINEERING
    A Series of Reference Books and Textbooks

    Series Editors

1. Nonlinear Control of Electric Machinery, Darren M. Dawson, Jun Hu, and Timothy C. Burg
2. Computational Intelligence in Control Engineering, Robert E. King
3. Quantitative Feedback Theory: Fundamentals and Applications, Constantine H. Houpis and Steven J. Rasmussen
4. Self-Learning Control of Finite Markov Chains, A. S. Poznyak, K. Najim, and E. Gómez-Ramírez
5. Robust Control and Filtering for Time-Delay Systems, Magdi S. Mahmoud
6. Classical Feedback Control: With MATLAB, Boris J. Lurie and Paul J. Enright
7. Optimal Control of Singularly Perturbed Linear Systems and Applications: High-Accuracy Techniques, Zoran Gajić and Myo-Taeg Lim
8. Engineering System Dynamics: A Unified Graph-Centered Approach, Forbes T. Brown
9. Advanced Process Identification and Control, Enso Ikonen and Kaddour Najim
10. Modern Control Engineering, P. N. Paraskevopoulos
11. Sliding Mode Control in Engineering, edited by Wilfrid Perruquetti and Jean-Pierre Barbot
12. Actuator Saturation Control, edited by Vikram Kapila and Karolos M. Grigoriadis
13. Nonlinear Control Systems, Zoran Vukić, Ljubomir Kuljača, Dali Donlagić, and Sejid Tesnjak
14. Linear Control System Analysis & Design: Fifth Edition, John D'Azzo, Constantine H. Houpis and Stuart Sheldon
15. Robot Manipulator Control: Theory & Practice, Second Edition, Frank L. Lewis, Darren M. Dawson, and Chaouki Abdallah
16. Robust Control System Design: Advanced State Space Techniques, Second Edition, Chia-Chi Tsui
17. Differentially Flat Systems, Hebertt Sira-Ramirez and Sunil Kumar Agrawal

FRANK L. LEWIS, PH.D., FELLOW IEEE, FELLOW IFAC
Professor
Automation and Robotics Research Institute
The University of Texas at Arlington

SHUZHI SAM GE, PH.D., FELLOW IEEE
Professor
Interactive Digital Media Institute
The National University of Singapore


• 18. Chaos in Automatic Control, edited by Wilfrid Perruquetti and Jean-Pierre Barbot
19. Fuzzy Controller Design: Theory and Applications, Zdenko Kovacic and Stjepan Bogdan
20. Quantitative Feedback Theory: Fundamentals and Applications, Second Edition, Constantine H. Houpis, Steven J. Rasmussen, and Mario Garcia-Sanz
21. Neural Network Control of Nonlinear Discrete-Time Systems, Jagannathan Sarangapani
22. Autonomous Mobile Robots: Sensing, Control, Decision Making and Applications, edited by Shuzhi Sam Ge and Frank L. Lewis
23. Hard Disk Drive: Mechatronics and Control, Abdullah Al Mamun, GuoXiao Guo, and Chao Bi
24. Stochastic Hybrid Systems, edited by Christos G. Cassandras and John Lygeros
25. Wireless Ad Hoc and Sensor Networks: Protocols, Performance, and Control, Jagannathan Sarangapani
26. Modeling and Control of Complex Systems, edited by Petros A. Ioannou and Andreas Pitsillides
27. Intelligent Freight Transportation, edited by Petros A. Ioannou
28. Feedback Control of Dynamic Bipedal Robot Locomotion, Eric R. Westervelt, Jessy W. Grizzle, Christine Chevallereau, Jun Ho Choi, and Benjamin Morris
29. Optimal and Robust Estimation: With an Introduction to Stochastic Control Theory, Second Edition, Frank L. Lewis, Lihua Xie, and Dan Popa
30. Intelligent Systems: Modeling, Optimization, and Control, Yung C. Shin and Chengying Xu
31. Optimal Control: Weakly Coupled Systems and Applications, Zoran Gajic, Myo-Taeg Lim, Dobrila Skataric, Wu-Chung Su, and Vojislav Kecman
32. Deterministic Learning Theory for Identification, Control, and Recognition, Cong Wang and David J. Hill
33. Linear Control Theory: Structure, Robustness, and Optimization, Shankar P. Bhattacharyya, Aniruddha Datta, and Lee H. Keel




• Shankar P. Bhattacharyya
Texas A&M University
College Station, Texas, U.S.A.

Aniruddha Datta
Texas A&M University
College Station, Texas, U.S.A.

L. H. Keel
Tennessee State University
Nashville, Tennessee, U.S.A.

CRC Press is an imprint of the Taylor & Francis Group, an informa business
Boca Raton   London   New York

Linear Control Theory
Structure, Robustness, and Optimization

• CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2009 by Taylor & Francis Group, LLC. CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works
Printed in the United States of America on acid-free paper
10 9 8 7 6 5 4 3 2 1

International Standard Book Number-13: 978-0-8493-4063-5 (Hardcover)

    This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

    Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

    Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data

Datta, Aniruddha, 1963-
Linear control theory : structure, robustness, and optimization / authors, Aniruddha Datta, Lee H. Keel, Shankar P. Bhattacharyya.
p. cm. (Automation and control engineering)
"A CRC title."
Includes bibliographical references and index.
ISBN 978-0-8493-4063-5 (hardcover : alk. paper)
1. Linear control systems. I. Keel, L. H. (Lee H.) II. Bhattacharyya, S. P. (Shankar P.), 1946- III. Title. IV. Series.

TJ220.D38 2009    629.832 dc22    2008052001

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com


  • DEDICATION

To my Guru, Ustad (Baba) Ali Akbar Khan, the greatest musician in the world. Baba opened my eye to Nada Brahma through the divine music of the Seni Maihar Gharana.

    S.P. Bhattacharyya

    To My Beloved Wife Anindita

    A. Datta

    To My Beloved Wife Kuisook

    L.H. Keel


  • Contents

    PREFACE xvii

    I THREE TERM CONTROLLERS 1

    1 PID CONTROLLERS:

    AN OVERVIEW OF CLASSICAL THEORY 3

1.1 Introduction to Control . . . 3
1.2 The Magic of Integral Control . . . 5
1.3 PID Controllers . . . 8
1.4 Classical PID Controller Design . . . 10
    1.4.1 The Ziegler-Nichols Step Response Method . . . 10
    1.4.2 The Ziegler-Nichols Frequency Response Method . . . 11
    1.4.3 PID Settings Using the Internal Model Controller Design Technique . . . 15
    1.4.4 Dominant Pole Design: The Cohen-Coon Method . . . 17
    1.4.5 New Tuning Approaches . . . 17
1.5 Integrator Windup . . . 19
    1.5.1 Setpoint Limitation . . . 20
    1.5.2 Back-Calculation and Tracking . . . 20
    1.5.3 Conditional Integration . . . 21
1.6 Exercises . . . 21
1.7 Notes and References . . . 24

2 PID CONTROLLERS FOR DELAY-FREE LTI SYSTEMS 25

2.1 Introduction . . . 25
2.2 Stabilizing Set . . . 27
2.3 Signature Formulas . . . 29
    2.3.1 Computation of σ(p) . . . 30
    2.3.2 Alternative Signature Expression . . . 32
2.4 Computation of the PID Stabilizing Set . . . 33
2.5 PID Design with Performance Requirements . . . 38
    2.5.1 Signature Formulas for Complex Polynomials . . . 40
    2.5.2 Complex PID Stabilization Algorithm . . . 42
    2.5.3 PID Design with Guaranteed Gain and Phase Margins . . . 44
    2.5.4 Synthesis of PID Controllers with an H∞ Criterion . . . 44
    2.5.5 PID Controller Design for H∞ Robust Performance . . . 49



2.6 Exercises . . . 52
2.7 Notes and References . . . 54

3 PID CONTROLLERS FOR SYSTEMS WITH TIME DELAY 55

3.1 Introduction . . . 55
3.2 Characteristic Equations for Delay Systems . . . 57
3.3 The Padé Approximation and Its Limitations . . . 60
    3.3.1 First Order Padé Approximation . . . 62
    3.3.2 Higher Order Padé Approximations . . . 65
3.4 The Hermite-Biehler Theorem for Quasi-polynomials . . . 69
    3.4.1 Applications to Control Theory . . . 71
3.5 Stability of Systems with a Single Delay . . . 77
3.6 PID Stabilization of First Order Systems with Time Delay . . . 85
    3.6.1 The PID Stabilization Problem . . . 86
    3.6.2 Open-Loop Stable Plant . . . 87
    3.6.3 Open-Loop Unstable Plant . . . 105
3.7 PID Stabilization of Arbitrary LTI Systems with a Single Time Delay . . . 116
    3.7.1 Connection between Pontryagin's Theory and the Nyquist Criterion . . . 117
    3.7.2 Problem Formulation and Solution Approach . . . 121
    3.7.3 Proportional Controllers . . . 123
    3.7.4 PI Controllers . . . 126
    3.7.5 PID Controllers for an Arbitrary LTI Plant with Delay . . . 128
3.8 Proofs of Lemmas 3.3, 3.4, and 3.5 . . . 135
    3.8.1 Preliminary Results . . . 135
    3.8.2 Proof of Lemma 3.3 . . . 139
    3.8.3 Proof of Lemma 3.4 . . . 141
    3.8.4 Proof of Lemma 3.5 . . . 141
3.9 Proofs of Lemmas 3.7 and 3.9 . . . 144
    3.9.1 Proof of Lemma 3.7 . . . 144
    3.9.2 Proof of Lemma 3.9 . . . 145
3.10 An Example of Computing the Stabilizing Set . . . 148
3.11 Exercises . . . 150
3.12 Notes and References . . . 151

4 DIGITAL PID CONTROLLER DESIGN 153

4.1 Introduction . . . 153
4.2 Preliminaries . . . 155
4.3 Tchebyshev Representation and Root Clustering . . . 156
    4.3.1 Tchebyshev Representation of Real Polynomials . . . 156
    4.3.2 Interlacing Conditions for Root Clustering and Schur Stability . . . 158
    4.3.3 Tchebyshev Representation of Rational Functions . . . 159


    4.4 Root Counting Formulas . . . . . . . . . . . . . . . . . . . . . 160

    4.4.1 Phase Unwrapping and Root Distribution . . . . . . . 160

    4.4.2 Root Counting and Tchebyshev Representation . . . . 161

    4.5 Digital PI, PD, and PID Controllers . . . . . . . . . . . . . . 163

    4.6 Computation of the Stabilizing Set . . . . . . . . . . . . . . . 165

    4.6.1 Constant Gain Stabilization . . . . . . . . . . . . . . . 165

    4.6.2 Stabilization with PI Controllers . . . . . . . . . . . . 168

    4.6.3 Stabilization with PD Controllers . . . . . . . . . . . . 169

    4.7 Stabilization with PID Controllers . . . . . . . . . . . . . . . 170

    4.7.1 Maximally Deadbeat Control . . . . . . . . . . . . . . 173

    4.7.2 Maximal Delay Tolerance Design . . . . . . . . . . . . 175

    4.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179

    4.9 Notes and References . . . . . . . . . . . . . . . . . . . . . . . 179

    5 FIRST ORDER CONTROLLERS FOR LTI SYSTEMS 181

    5.1 Root Invariant Regions . . . . . . . . . . . . . . . . . . . . . . 181

    5.2 An Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185

    5.3 Robust Stabilization by First Order Controllers . . . . . . . . 189

5.4 H∞ Design with First Order Controllers . . . . . . . . . . . . 190

    5.5 First Order Discrete-Time Controllers . . . . . . . . . . . . . 195

    5.5.1 Computation of Root Distribution Invariant Regions . 196

    5.5.2 Delay Tolerance . . . . . . . . . . . . . . . . . . . . . . 201

    5.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205

    5.7 Notes and References . . . . . . . . . . . . . . . . . . . . . . . 206

    6 CONTROLLER SYNTHESIS FREE OF

    ANALYTICAL MODELS 207

    6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208

    6.2 Mathematical Preliminaries . . . . . . . . . . . . . . . . . . . 210

    6.3 Phase, Signature, Poles, Zeros, and Bode Plots . . . . . . . . 215

    6.4 PID Synthesis for Delay Free Continuous-Time Systems . . . 218

    6.5 PID Synthesis for Systems with Delay . . . . . . . . . . . . . 222

    6.6 PID Synthesis for Performance . . . . . . . . . . . . . . . . . 224

    6.7 An Illustrative Example: PID Synthesis . . . . . . . . . . . . 227

    6.8 Model Free Synthesis for First Order Controllers . . . . . . . 232

6.9 Model Free Synthesis of First Order Controllers for Performance . . . 237

    6.10 Data Based Design vs. Model Based Design . . . . . . . . . . 240

    6.11 Data-Robust Design via Interval Linear Programming . . . . 243

    6.11.1 Data Robust PID Design . . . . . . . . . . . . . . . . 244

    6.12 Computer-Aided Design . . . . . . . . . . . . . . . . . . . . . 251

    6.13 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255

    6.14 Notes and References . . . . . . . . . . . . . . . . . . . . . . . 257


    7 DATA DRIVEN SYNTHESIS OF THREE

    TERM DIGITAL CONTROLLERS 259

    7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259

7.2 Notation and Preliminaries . . . 260
7.3 PID Controllers for Discrete-Time Systems . . . 262
7.4 Data Based Design: Impulse Response Data . . . 270
    7.4.1 Example: Stabilizing Set from Impulse Response . . . 273
    7.4.2 Sets Satisfying Performance Requirements . . . 276
7.5 First Order Controllers for Discrete-Time Systems . . . 278
7.6 Computer-Aided Design . . . 282
7.7 Exercises . . . 288

    7.8 Notes and References . . . . . . . . . . . . . . . . . . . . . . . 289

    II ROBUST PARAMETRIC CONTROL 291

    8 STABILITY THEORY FOR POLYNOMIALS 293

    8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293

8.2 The Boundary Crossing Theorem . . . 294
    8.2.1 Zero Exclusion Principle . . . 301
8.3 The Hermite-Biehler Theorem . . . 302
    8.3.1 Hurwitz Stability . . . 302
    8.3.2 Hurwitz Stability for Complex Polynomials . . . 310
    8.3.3 Schur Stability . . . 312
    8.3.4 General Stability Regions . . . 319
8.4 Schur Stability Test . . . 319
8.5 Hurwitz Stability Test . . . 322
    8.5.1 Root Counting and the Routh Table . . . 326
    8.5.2 Complex Polynomials . . . 327
8.6 Exercises . . . 328
8.7 Notes and References . . . 332

9 STABILITY OF A LINE SEGMENT 333

9.1 Introduction . . . 333
9.2 Bounded Phase Conditions . . . 334
9.3 Segment Lemma . . . 341
    9.3.1 Hurwitz Case . . . 341
9.4 Schur Segment Lemma via Tchebyshev Representation . . . 345
9.5 Some Fundamental Phase Relations . . . 348
    9.5.1 Phase Properties of Hurwitz Polynomials . . . 348
    9.5.2 Phase Relations for a Segment . . . 356
9.6 Convex Directions . . . 359
9.7 The Vertex Lemma . . . 369
9.8 Exercises . . . 374
9.9 Notes and References . . . 378


    10 STABILITY MARGIN COMPUTATION 379

10.1 Introduction . . . 379
10.2 The Parametric Stability Margin . . . 380
    10.2.1 The Stability Ball in Parameter Space . . . 380
    10.2.2 The Image Set Approach . . . 382
10.3 Stability Margin Computation . . . 384
    10.3.1 ℓ2 Stability Margin . . . 387
    10.3.2 Discontinuity of the Stability Margin . . . 391
    10.3.3 ℓ2 Stability Margin for Time-Delay Systems . . . 392
    10.3.4 ℓ∞ and ℓ1 Stability Margins . . . 395
10.4 The Mapping Theorem . . . 396
    10.4.1 Robust Stability via the Mapping Theorem . . . 399
    10.4.2 Refinement of the Convex Hull Approximation . . . 402
10.5 Stability Margins of Multilinear Interval Systems . . . 405
    10.5.1 Examples . . . 407
10.6 Robust Stability of Interval Matrices . . . 416
    10.6.1 Unity Rank Perturbation Structure . . . 416
    10.6.2 Interval Matrix Stability via the Mapping Theorem . . . 417
    10.6.3 Numerical Examples . . . 418
10.7 Robustness Using a Lyapunov Approach . . . 423
    10.7.1 Robustification Procedure . . . 427
10.8 Exercises . . . 432
10.9 Notes and References . . . 441

11 STABILITY OF A POLYTOPE 443

11.1 Introduction . . . 443
11.2 Stability of Polytopic Families . . . 444
    11.2.1 Exposed Edges and Vertices . . . 445
    11.2.2 Bounded Phase Conditions for Checking Robust Stability of Polytopes . . . 448
    11.2.3 Extremal Properties of Edges and Vertices . . . 452
11.3 The Edge Theorem . . . 455
    11.3.1 Edge Theorem . . . 456
    11.3.2 Examples . . . 464
11.4 Stability of Interval Polynomials . . . 470
    11.4.1 Kharitonov's Theorem for Real Polynomials . . . 470
    11.4.2 Kharitonov's Theorem for Complex Polynomials . . . 478
    11.4.3 Interlacing and Image Set . . . 481
    11.4.4 Image Set Based Proof of Kharitonov's Theorem . . . 484
    11.4.5 Image Set Edge Generators and Exposed Edges . . . 485
    11.4.6 Extremal Properties of the Kharitonov Polynomials . . . 487
    11.4.7 Robust State Feedback Stabilization . . . 492
11.5 Stability of Interval Systems . . . 498
    11.5.1 Problem Formulation and Notation . . . 500
    11.5.2 The Generalized Kharitonov Theorem . . . 504


    11.5.3 Comparison with the Edge Theorem . . . 514
    11.5.4 Examples . . . 515
    11.5.5 Image Set Interpretation . . . 521
11.6 Polynomic Interval Families . . . 522
    11.6.1 Robust Positivity . . . 524
    11.6.2 Robust Stability . . . 528
    11.6.3 Application to Controller Synthesis . . . 532
11.7 Exercises . . . 536
11.8 Notes and References . . . 546

12 ROBUST CONTROL DESIGN 549

12.1 Introduction . . . 549
12.2 Interval Control Systems . . . 551
12.3 Frequency Domain Properties . . . 553
12.4 Nyquist, Bode, and Nichols Envelopes . . . 563
12.5 Extremal Stability Margins . . . 573
    12.5.1 Guaranteed Gain and Phase Margins . . . 574
    12.5.2 Worst Case Parametric Stability Margin . . . 574
12.6 Robust Parametric Classical Design . . . 577
    12.6.1 Guaranteed Classical Design . . . 577
    12.6.2 Optimal Controller Parameter Selection . . . 588
12.7 Robustness Under Mixed Perturbations . . . 591
    12.7.1 Small Gain Theorem . . . 592
    12.7.2 Small Gain Theorem for Interval Systems . . . 593
12.8 Robust Small Gain Theorem . . . 602
12.9 Robust Performance . . . 606
12.10 The Absolute Stability Problem . . . 609
12.11 Characterization of the SPR Property . . . 615
    12.11.1 SPR Conditions for Interval Systems . . . 617
12.12 The Robust Absolute Stability Problem . . . 625
12.13 Exercises . . . 632
12.14 Notes and References . . . 637

III OPTIMAL AND ROBUST CONTROL 639

13 THE LINEAR QUADRATIC REGULATOR 641

13.1 An Optimal Control Problem . . . 641
    13.1.1 Principle of Optimality . . . 642
    13.1.2 Hamilton-Jacobi-Bellman Equation . . . 642
13.2 The Finite-Time LQR Problem . . . 645
    13.2.1 Solution of the Matrix Riccati Differential Equation . . . 647
    13.2.2 Cross Product Terms . . . 647
13.3 The Infinite Horizon LQR Problem . . . 648
    13.3.1 General Conditions for Optimality . . . 648
    13.3.2 The Infinite Horizon LQR Problem . . . 650


13.4 Solution of the Algebraic Riccati Equation . . . 651
13.5 The LQR as an Output Zeroing Problem . . . 658
13.6 Return Difference Relations . . . 660
13.7 Guaranteed Stability Margins for the LQR . . . 661
    13.7.1 Gain Margin . . . 663
    13.7.2 Phase Margin . . . 663
    13.7.3 Single Input Case . . . 664
13.8 Eigenvalues of the Optimal Closed Loop System . . . 665
    13.8.1 Closed-Loop Spectrum . . . 665
13.9 Optimal Dynamic Compensators . . . 667
    13.9.1 Dual Compensators . . . 670
13.10 Servomechanisms and Regulators . . . 672
    13.10.1 Notation and Problem Formulation . . . 672
    13.10.2 Reference and Disturbance Signal Classes . . . 673
    13.10.3 Solution of the Servomechanism Problem . . . 673
13.11 Exercises . . . 680
13.12 Notes and References . . . 687

14 SISO H∞ AND ℓ1 OPTIMAL CONTROL 689

14.1 Introduction . . . 689
14.2 The Small Gain Theorem . . . 693
14.3 L-Stability and Robustness via the Small Gain Theorem . . . 704
14.4 YJBK Parametrization of All Stabilizing Compensators (Scalar Case) . . . 709
14.5 Control Problems in the H∞ Framework . . . 716
14.6 H∞ Optimal Control: SISO Case . . . 725
    14.6.1 Dual Spaces . . . 729
    14.6.2 Inner Product Spaces . . . 733
    14.6.3 Orthogonality and Alignment in Noninner Product Spaces . . . 736
    14.6.4 The All-Pass Property of H∞ Optimal Controllers . . . 737
    14.6.5 The Single-Input Single-Output Solution . . . 742
14.7 ℓ1 Optimal Control: SISO Case . . . 747
14.8 Exercises . . . 755
14.9 Notes and References . . . 757

15 H∞ OPTIMAL MULTIVARIABLE CONTROL 759

15.1 H∞ Optimal Control Using Hankel Theory . . . 759
    15.1.1 H∞ and Hankel Operators . . . 759
    15.1.2 State Space Computations of the Hankel Norm . . . 763
    15.1.3 State Space Computation of an All-Pass Extension . . . 769
    15.1.4 H∞ Optimal Control Based on the YJBK Parametrization and Hankel Approximation Theory . . . 771
    15.1.5 LQ Return Difference Equality . . . 776
    15.1.6 State Space Formulas for Coprime Factorizations . . . 780


15.2 The State Space Solution of H∞ Optimal Control . . . 784
    15.2.1 The H∞ Solution . . . 784
    15.2.2 The H2 Solution . . . 795
15.3 Exercises . . . 804
15.4 Notes and References . . . 806

A SIGNAL SPACES 807

A.1 Vector Spaces and Norms . . . 807
A.2 Metric Spaces . . . 819
A.3 Equivalent Norms and Convergence . . . 825
A.4 Relations between Normed Spaces . . . 828
A.5 Notes and References . . . 832

B NORMS FOR LINEAR SYSTEMS 833

B.1 Induced Norms for Linear Maps . . . 833
B.2 Properties of Fourier and Laplace Transforms . . . 844
    B.2.1 Fourier Transforms . . . 846
    B.2.2 Laplace Transforms . . . 848
B.3 Lp/lp Norms of Convolutions of Signals . . . 849
    B.3.1 L1 Theory . . . 849
    B.3.2 Lp Theory . . . 850
B.4 Induced Norms of Convolution Maps . . . 852
B.5 Notes and References . . . 865

IV EPILOGUE 867

ROBUSTNESS AND FRAGILITY 869

Feedback, Robustness, and Fragility . . . 869
Examples . . . 871
Discussion . . . 883
Notes and References . . . 885

    REFERENCES 887

    Index 905

  • PREFACE

This book describes three major areas of Control Engineering Theory. In Part I we develop results directed at the design of PID and first order controllers for continuous and discrete time linear systems possibly containing delays. This class of problems is crucially important in applications. The main features of our results are the computation of complete sets of controllers achieving stability and performance. They are developed for model based as well as measurement based approaches. In the latter case controller synthesis is based on measured responses only and no identification is required. The results of Part I constitute a modernized version of Classical Control Theory appropriate to the computer-aided design environment of the 21st century.

In Part II we deal with the Robust Stability and Performance of systems under parametric as well as unstructured uncertainty. Several elegant and sharp results such as Kharitonov's Theorem and its extensions, the Edge Theorem and the Mapping Theorem are described. The main thrust of the results is to reduce the verification of stability and performance over the entire uncertainty set to certain extremal test sets, which are points or lines. These results are useful to engineers as aids to robustness analysis and synthesis of control systems.

Part III deals with Optimal Control of linear systems. We develop the standard theories of the Linear Quadratic Regulator (LQR), H∞ and ℓ1 optimal control, and associated results. In the LQR chapter we include results on the servomechanism problem.

We have been using this material successfully in a second graduate level course in Control Systems for some time. It is our opinion that it gives a balanced coverage of elegant mathematical theory and useful engineering oriented results that can serve the needs of a diverse group of students from Electrical, Mechanical, Chemical, Aerospace, and Civil Engineering as well as Computer Science and Mathematics. It is possible to cover the entire book in a 14-week semester with a judicious choice of reading assignments.

Many of the results described in the book were obtained in collaboration with our graduate students and it is a pleasure to acknowledge the many contributions of P.M.G. Ferreira, Hervé Chapellat, Ming-Tzu Ho, Guillermo J. Silva, Hao Xu, Sandipan Mitra, and Richard Tantaris.

Part I contains material published in the earlier monograph PID Controllers for Time-Delay Systems by Guillermo J. Silva, A. Datta, and S.P. Bhattacharyya, Birkhäuser, 2005. Much of the material of Part II appeared in the earlier book Robust Control: The Parametric Approach by S.P. Bhattacharyya,



H. Chapellat, and L.H. Keel, Prentice Hall, 1995. A.D. would like to thank Professor M. G. Safonov of the University of Southern California for teaching him the basics of H∞ control theory almost two decades ago. Indeed, a lot of the material in Part III of this book is based on a Special Topics course taught by Professor Safonov at USC in the Spring of 1990. The authors would also like to thank Dr. Nripendra Sarkar, Dr. Ranadip Pal, Ms. Rouella Mendonca, and Mr. Ritwik Layek for assistance with Latex and figures on several occasions.

A book of this size and scope inevitably has errors and we welcome corrective feedback from the readers. We also apologize in advance for any omissions or inaccuracies in referencing and would want to compensate for this in future editions.

S. P. Bhattacharyya
A. Datta
L. H. Keel

June 23, 2008
College Station, Texas
USA

  • Part I

THREE TERM CONTROLLERS

In this part we deal with the analysis, synthesis, and design of the important special class of controllers known as three term controllers. Specifically we cover the design of Proportional-Integral-Derivative (PID) controllers and first order controllers. Each of these controllers has three adjustable parameters, is widely used in the control industry across several technologies and engineering disciplines, and offers the possibility of using 2-D and 3-D graphics as design aids. In Chapter 1, we briefly review some classical methods of designing PID controllers. This is followed by Chapter 2, which is devoted to computing the complete PID stabilizing set for a linear time invariant (LTI) continuous time plant without time delays, using recent results on root counting. We illustrate how this set is used in computer-aided design to satisfy multiple design specifications. In Chapter 3, the above results are extended to LTI plants cascaded with a delay. These are especially important in process industries. In Chapter 4, we develop corresponding results for digital PID controllers for discrete time plants, again determining complete sets of controllers satisfying several specifications. In Chapter 5, we cover the design of first order controllers for continuous time and discrete time LTI plants. The stabilizing and performance attaining sets can be displayed using 2-D and 3-D graphics. Chapters 6 and 7 focus on direct data driven synthesis of controllers. It is shown that complete sets of stabilizing PID and first order controllers for an LTI system can be calculated directly from the frequency response or impulse response data of the plant without producing an identified analytical model. The design methods presented in this part have been implemented in LabView and Matlab. The Matlab toolbox is in the public domain and can be downloaded from http://procit.chungbuk.ac.kr.

• 1  PID CONTROLLERS: AN OVERVIEW OF CLASSICAL THEORY

In this chapter we give a quick overview of control theory, explaining why integral feedback control works so well, describing PID (Proportional-Integral-Derivative) controllers, and summarizing some of the classical techniques for PID controller design. This background will also serve to motivate recent breakthroughs on PID control, presented in the subsequent chapters.

    1.1 Introduction to Control

Control theory and control engineering deal with dynamic systems such as aircraft, spacecraft, ships, trains, and automobiles, chemical and industrial processes such as distillation columns and rolling mills, electrical systems such as motors, generators, and power systems, and machines such as numerically controlled lathes and robots. In each case the setting of the control problem is represented by the following elements:

1. There are certain dependent variables, called outputs, to be controlled, which must be made to behave in a prescribed way. For instance it may be necessary to assign the temperature and pressure at various points in a process, or the position and velocity of a vehicle, or the voltage and frequency in a power system, to given desired fixed values, despite uncontrolled and unknown variations at other points in the system.

2. Certain independent variables, called inputs, such as voltage applied to the motor terminals, or valve position, are available to regulate and control the behavior of the system. Other dependent variables, such as position, velocity, or temperature, are accessible as dynamic measurements on the system.

3. There are unknown and unpredictable disturbances impacting the system. These could be, for example, the fluctuations of load in a power system, disturbances such as wind gusts acting on a vehicle, external weather conditions acting on an air conditioning plant, or the fluctuating load torque on an elevator motor, as passengers enter and exit.



4. The equations describing the plant dynamics, and the parameters contained in these equations, are not known at all or at best known imprecisely. This uncertainty can arise even when the physical laws and equations governing a process are known well, for instance, because these equations were obtained by linearizing a nonlinear system about an operating point. As the operating point changes so do the system parameters.

These considerations suggest the following general representation of the plant or system to be controlled. In Figure 1.1 the inputs or outputs shown could actually be representing a vector of signals. In such cases the plant is said to be a multivariable plant as opposed to the case where the signals are scalar, in which case the plant is said to be a scalar or monovariable plant.

Figure 1.1
A general plant. (Block diagram: control inputs and disturbances enter the dynamic system or plant; measurements and outputs to be controlled leave it.)

Control is exercised by feedback, which means that the corrective control input to the plant is generated by a device that is driven by the available measurements. Thus, the controlled system can be represented by the feedback or closed-loop system shown in Figure 1.2.

The control design problem is to determine the characteristics of the controller so that the controlled outputs can be

    1. Set to prescribed values called references;

    2. Maintained at the reference values despite the unknown disturbances;

3. Conditions (1) and (2) are met despite the inherent uncertainties and changes in the plant dynamic characteristics.


Figure 1.2
A feedback control system. (Block diagram: the controller, driven by the reference inputs and the measurements, generates the control inputs to the plant; the plant is also subject to disturbances and produces the controlled output.)

The first condition above is called tracking, the second, disturbance rejection, and the third, robustness of the system. The simultaneous satisfaction of (1), (2), and (3) is called robust tracking and disturbance rejection, and control systems designed to achieve this are called robust servomechanisms.

In the next section we discuss how integral and PID control are useful in the design of robust servomechanisms.

    1.2 The Magic of Integral Control

Integral control is used almost universally in the control industry to design robust servomechanisms. Integral action is most easily implemented by computer control. It turns out that hydraulic, pneumatic, electronic, and mechanical integrators are also commonly used elements in control systems. In this section we explain how integral control works in general to achieve robust tracking and disturbance rejection.

Let us first consider an integrator as shown in Figure 1.3. The input-output relationship is

y(t) = K ∫₀ᵗ u(τ) dτ + y(0)    (1.1)

or, in differential form,

dy(t)/dt = K u(t)    (1.2)


Figure 1.3
An integrator. (Block with input u(t) and output y(t).)

where K is the integrator gain.

Now suppose that the output y(t) is a constant. It follows from (1.2) that

dy(t)/dt = 0 = K u(t) for all t > 0.    (1.3)

Equation (1.3) proves the following important facts about the operation of an integrator:

1. If the output of an integrator is constant over a segment of time, then the input must be identically zero over that same segment.

    2. The output of an integrator changes as long as the input is nonzero.

The simple facts stated above suggest how an integrator can be used to solve the servomechanism problem. If a plant output y(t) is to track a constant reference value r, despite the presence of unknown constant disturbances, it is enough to

    a. attach an integrator to the plant and make the error

e(t) = r − y(t)

    the input to the integrator;

b. ensure that the closed-loop system is asymptotically stable so that under constant reference and disturbance inputs, all signals, including the integrator output, reach constant steady-state values.

This is depicted in the block diagram shown in Figure 1.4. If the system shown in Figure 1.4 is asymptotically stable, and the inputs r and d (disturbances) are constant, it follows that all signals in the closed loop will tend to constant values. In particular the integrator output v(t) tends to a constant value. Therefore, by the fundamental fact about the operation of an integrator established above, it follows that the integrator input tends to zero. Since we have arranged that this input is the tracking error it follows that e(t) = r − y(t) tends to zero and hence y(t) tracks r as t → ∞.


Figure 1.4
Servomechanism. (Block diagram: the error e = r − y drives an integrator whose output v feeds the controller; the controller output u drives the plant, which is subject to the disturbance d, and the measurement ym is fed back.)

We emphasize that the steady-state tracking property established above is very robust. It holds as long as the closed loop is asymptotically stable and is (1) independent of the particular values of the constant disturbances or references, (2) independent of the initial conditions of the plant and controller, and (3) independent of whether the plant and controller are linear or nonlinear. Thus, the tracking problem is reduced to guaranteeing that stability is assured. In many practical systems stability of the closed-loop system can even be ensured without detailed and exact knowledge of the plant characteristics and parameters; this is known as robust stability.
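The argument above is easy to check numerically. The following minimal sketch (not from the book; the plant, gains, and disturbance value are hypothetical) applies pure integral control to a stable first-order plant with an unknown constant disturbance; the output settles at the reference even though the controller uses no knowledge of the plant parameters or of the disturbance.

```python
# Minimal sketch (not from the book): integral control of a hypothetical
# first-order plant  dy/dt = -a*y + b*(u + d)  with integrator  du/dt = K*e,
# e = r - y, simulated by forward Euler.  The output settles at r for any
# constant disturbance d, as long as the closed loop is stable.

def simulate(K=2.0, a=1.0, b=1.0, d=0.7, r=1.0, dt=1e-3, t_end=20.0):
    y, u = 0.0, 0.0                       # plant output and integrator state
    for _ in range(int(t_end / dt)):
        e = r - y                         # tracking error drives the integrator
        u += K * e * dt                   # integrator output
        y += (-a * y + b * (u + d)) * dt  # plant with unknown constant disturbance
    return y, r - y

if __name__ == "__main__":
    y_final, e_final = simulate()
    print(f"final output = {y_final:.4f}, final error = {e_final:.2e}")
```

Changing d, a, or b (within the range where the loop remains stable) does not change the steady-state value of the output, which is exactly the robustness property described above.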

We next discuss how several plant outputs y1, y2, . . . , ym can be pinned down to prescribed but arbitrary constant reference values r1, r2, . . . , rm in the presence of unknown but constant disturbances d1, d2, . . . , dq. The previous argument can be extended to this multivariable case by attaching m integrators to the plant and driving each integrator with its corresponding error input ei = ri − yi, i = 1, . . . , m. This is shown in the configuration in Figure 1.5.

Once again it follows that as long as the closed-loop system is stable, all signals in the system must tend to constant values and integral action forces the ei(t), i = 1, . . . , m to tend to zero asymptotically, regardless of the actual values of the disturbances dj, j = 1, . . . , q or the values of the ri, i = 1, . . . , m. The existence of steady-state inputs u1, u2, . . . , ur that make yi = ri, i = 1, . . . , m for arbitrary ri, i = 1, . . . , m requires that the plant equations relating yi, i = 1, . . . , m to uj, j = 1, . . . , r be invertible for constant inputs. In the case of linear time-invariant systems this is equivalent to the requirement that the corresponding transfer matrix have rank equal to m at s = 0. Sometimes this is restated as two conditions: (1) r ≥ m, or at least as many control inputs as outputs to be controlled, and (2) G(s) has no transmission zero at s = 0. The architecture of the block diagram of Figure 1.5 is easily modified to handle servomechanism problems for more general classes of reference and disturbance signals such as ramps or sinusoids of specified frequency. The only modification required is that the integrators be replaced by corresponding signal generators of these external signals. See the treatment of this general servo problem in Chapter 13.


Figure 1.5
Multivariable servomechanism. (Block diagram: the errors r1 − y1, . . . , rm − ym drive m integrators feeding the controller, whose outputs u1, . . . , ur drive the plant; the plant is subject to the disturbances d1, . . . , dq and produces the outputs y1, . . . , ym.)


In general, the addition of an integrator to the plant tends to make the system less stable. This is because the integrator is an inherently unstable device; for instance, its response to a step input, a bounded signal, is a ramp, an unbounded signal. Therefore, the problem of stabilizing the closed loop becomes a critical issue even when the plant is stable to begin with.

Since integral action and thus the attainment of zero steady-state error is independent of the particular value of the integrator gain K, we can see that this gain can be used to try to stabilize the system. This single degree of freedom is sometimes insufficient for attaining stability and an acceptable transient response, and additional gains are introduced as explained in the next section. This leads naturally to the PID controller structure commonly used in industry.

    1.3 PID Controllers

In the last section we saw that when an integrator is part of an asymptotically stable system and constant inputs are applied to the system, the integrator input is forced to become zero. This simple and powerful principle is the basis for the design of linear, nonlinear, single-input single-output, and multivariable servomechanisms.


All we have to do is (1) attach as many integrators as outputs to be regulated, (2) drive the integrators with the tracking errors required to be zeroed, and (3) stabilize the closed-loop system by using any adjustable parameters.

As argued in the last section the input zeroing property is independent of the gain cascaded to the integrator. Therefore, this gain can be freely used to attempt to stabilize the closed-loop system and achieve performance specifications such as a good transient response and robust stability. Additional free parameters for stabilization can be obtained, without destroying the input zeroing property, by adding parallel branches to the controller, processing, in addition to the integral of the error, the error itself and its derivative, when it can be obtained. This leads to the PID controller structure shown in Figure 1.6.

Figure 1.6
PID controller. (Block diagram: the error e(t) is fed in parallel through a differentiator with gain kd, an integrator with gain ki, and a proportional gain kp; the three branches are summed to form u(t).)

As long as the closed loop is stable it is clear that the input to the integrator will be driven to zero independent of the values of the gains. Thus, the function of the gains kp, ki, and kd is to stabilize the closed-loop system if possible and to adjust the transient response of the system and robustify the stability of the system.

In general the derivative can be computed or obtained if the error is varying slowly. Since the response of the derivative to high-frequency inputs is much higher than its response to slowly varying signals (see Figure 1.7), the derivative term is usually omitted if the error signal is corrupted by high-frequency noise. In such cases the derivative gain kd is set to zero or equivalently the differentiator is switched off and the controller is a proportional-integral or PI controller. Such controllers are most common in industry.

In subsequent chapters of the book, we present solutions to the problem of stabilization of a linear time-invariant plant by a PID controller. Both delay-free systems and systems with time delay are considered. These solutions uncover the entire set of stabilizing controllers in a computationally tractable way.


Figure 1.7
Response of derivative to signal and noise. (The differentiator's response to a slowly varying signal is compared with its much larger response to high-frequency noise.)

In the rest of this introductory chapter we briefly discuss the classical techniques for PID controller design. Many of them are based on empirical observations.

    1.4 Classical PID Controller Design

    1.4.1 The Ziegler-Nichols Step Response Method

The PID controller we are concerned with is implemented as follows:

C(s) = kp + ki/s + kd s    (1.4)

where kp is the proportional gain, ki is the integral gain, and kd is the derivative gain. The derivative term is often replaced by

kd s / (1 + Td s),

where Td is a small positive value that is usually fixed. This circumvents the problem of pure differentiation when the error signals are contaminated by noise.

The Ziegler-Nichols step response method is an experimental open-loop tuning method and is applicable to open-loop stable plants. This method first characterizes the plant by two parameters A and L obtained from its step response. A and L can be determined graphically from a measurement of the step response of the plant as illustrated in Figure 1.8. First, the point on the step response curve with the maximum slope is determined and the tangent is drawn. The intersection of the tangent with the vertical axis gives A, while the intersection of the tangent with the horizontal axis gives L.

    tuning method and is applicable to open-loop stable plants. This method firstcharacterizes the plant by two parameters A and L obtained from its stepresponse. A and L can be determined graphically from a measurement of thestep response of the plant as illustrated in Figure 1.8. First, the point on thestep response curve with the maximum slope is determined and the tangentis drawn. The intersection of the tangent with the vertical axis gives A, whilethe intersection of the tangent with the horizontal axis gives L. Once A and


Figure 1.8
Graphical determination of parameters A and L. (Step response with the tangent drawn at the point of maximum slope; its intercepts with the vertical and horizontal axes give A and L, respectively.)

Once A and L are determined, the PID controller parameters are then given in terms of A and L by the following formulas:

kp = 1.2/A,  ki = 0.6/(A L),  kd = 0.6 L/A.

These formulas for the controller parameters were selected to obtain an amplitude decay ratio of 0.25, which means that the first overshoot decays to 1/4 of its original value after one oscillation. Intense experimentation showed that this criterion gives a small settling time.
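For convenience, the two-parameter tuning rule above translates directly into code; the helper below is an illustrative sketch (not part of the book's toolbox) that returns the Ziegler-Nichols step response settings for measured values of A and L.

```python
def zn_step_response(A, L):
    """Ziegler-Nichols step response settings from the tangent intercepts
    A and L measured on the open-loop step response (Section 1.4.1)."""
    kp = 1.2 / A
    ki = 0.6 / (A * L)
    kd = 0.6 * L / A
    return kp, ki, kd

# Example with hypothetical measurements A = 0.2, L = 0.5:
print(zn_step_response(0.2, 0.5))   # (6.0, 6.0, 1.5)
```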

    1.4.2 The Ziegler-Nichols Frequency Response Method

The Ziegler-Nichols frequency response method is a closed-loop tuning method. This method first determines the point where the Nyquist curve of the plant G(s) intersects the negative real axis. It can be obtained experimentally in the following way: Turn the integral and differential actions off and set the controller to be in the proportional mode only and close the loop as shown in Figure 1.9. Slowly increase the proportional gain kp until a periodic oscillation in the output is observed. This critical value of kp is called the ultimate gain (ku). The resulting period of oscillation is referred to as the ultimate period (Tu). Based on ku and Tu, the Ziegler-Nichols frequency response method gives the following simple formulas for setting PID controller parameters:


kp = 0.6 ku,  ki = 1.2 ku/Tu,  kd = 0.075 ku Tu.    (1.5)
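In the same spirit, the settings (1.5) can be wrapped in a small helper once ku and Tu have been measured; the sketch below is illustrative only.

```python
def zn_frequency_response(ku, Tu):
    """Ziegler-Nichols frequency response settings (1.5) from the measured
    ultimate gain ku and ultimate period Tu."""
    kp = 0.6 * ku
    ki = 1.2 * ku / Tu
    kd = 0.075 * ku * Tu
    return kp, ki, kd
```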

Figure 1.9
The closed-loop system with the proportional controller. (Unity feedback loop with reference r = 0, proportional controller kp, plant G(s), and output y.)

This method can be interpreted in terms of the Nyquist plot. Using PID control it is possible to move a given point on the Nyquist curve to an arbitrary position in the complex plane. Now, the first step in the frequency response method is to determine the point (−1/ku, 0) where the Nyquist curve of the open-loop transfer function intersects the negative real axis. We will study how this point is changed by the PID controller. Using (1.5) in (1.4), the frequency response of the controller at the ultimate frequency ωu is

C(jωu) = 0.6 ku − j(1.2 ku/(Tu ωu)) + j(0.075 ku Tu ωu) = 0.6 ku (1 + j0.4671),  since Tu ωu = 2π.

From this we see that the controller gives a phase advance of 25 degrees at the ultimate frequency. The loop gain is then

Gloop(jωu) = G(jωu) C(jωu) = −0.6(1 + j0.4671) = −0.6 − j0.28.

Thus, the point (−1/ku, 0) is moved to the point (−0.6, −0.28). The distance from this point to the critical point −1 + j0 is almost 0.5. This means that the frequency response method gives a sensitivity greater than 2.
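The arithmetic behind these statements can be verified directly; the snippet below (a check, not a design tool) evaluates C(jωu) for the settings (1.5), forms the loop gain at the crossing point (−1/ku, 0), and computes the distance from the moved point to the critical point −1 + j0.

```python
import cmath
import math

ku, Tu = 1.0, 1.0                      # the moved point is independent of ku and Tu
wu = 2.0 * math.pi / Tu                # ultimate frequency: Tu * wu = 2*pi
C = 0.6 * ku + 1j * (0.075 * ku * Tu * wu - 1.2 * ku / (Tu * wu))
print(C / (0.6 * ku))                  # approx (1 + 0.4671j)
print(math.degrees(cmath.phase(C)))    # phase advance, about 25 degrees
loop = (-1.0 / ku) * C                 # G(jwu) = -1/ku at the ultimate frequency
print(loop)                            # approx (-0.6 - 0.28j)
print(abs(loop + 1.0))                 # distance to -1 + j0, about 0.49
```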


The procedure described above for measuring the ultimate gain and ultimate period requires that the closed-loop system be operated close to instability. To avoid damaging the physical system, this procedure needs to be executed carefully. Without bringing the system to the verge of instability, an alternative method was proposed by Åström and Hägglund using a relay to generate a relay oscillation for measuring the ultimate gain and ultimate period. This is done by using the relay feedback configuration shown in Figure 1.10. In Figure 1.10, the relay is adjusted to induce a self-sustaining oscillation in the loop.

Figure 1.10
Block diagram of relay feedback. (Unity feedback loop with r = 0 in which a relay with output levels ±d replaces the controller ahead of the plant G(s).)

This relay feedback can be used to determine the ultimate gain and ultimate period. The relay block is a nonlinear element that can be represented by a describing function. This describing function is obtained by applying a sinusoidal signal a sin(ωt) at the input of the nonlinearity and calculating the ratio of the Fourier coefficient of the first harmonic at the output to a. This function can be thought of as an equivalent gain of the nonlinear system. For the case of the relay its describing function is given by

N(a) = 4d/(πa)

where a is the amplitude of the sinusoidal input signal and d is the relay amplitude. The conditions for the presence of limit cycle oscillations can be derived by investigating the propagation of a sinusoidal signal around the loop. Since the plant G(s) acts as a low pass filter, the higher harmonics produced by the nonlinear relay will be attenuated at the output of the plant. Hence, the condition for oscillation is that the fundamental sine waveform comes back with the same amplitude and phase after traversing through the loop. This means that for sustained oscillations at a frequency of ω, we must have

G(jω) N(a) = −1.    (1.6)

This equation can be solved by plotting the Nyquist plot of G(s) and the line −1/N(a). As shown in Figure 1.11, the plot of −1/N(a) is the negative real axis, so the solution to (1.6) is given by the two conditions:


|G(jωu)| = πa/(4d) = 1/ku,

and

arg[G(jωu)] = −π.

Figure 1.11
Nyquist plots of the plant G(jω) and the describing function −1/N(a). (The −1/N(a) locus is the negative real axis; the intersection with G(jω) occurs at ω = ωu.)

The ultimate gain and ultimate period can now be determined by measuring the amplitude and period of the oscillations. This relay feedback technique is widely used in automatic PID tuning.
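In code, the relay experiment reduces to two measurements: the relay amplitude d and the observed output oscillation amplitude a give ku = 4d/(πa), and the oscillation period gives Tu directly. The sketch below uses hypothetical measured values.

```python
import math

def relay_estimate(a, d, period):
    """Estimate the ultimate gain and period from a relay experiment:
    a      - measured amplitude of the output oscillation,
    d      - relay amplitude,
    period - measured period of the oscillation."""
    ku = 4.0 * d / (math.pi * a)   # from |G(jwu)| = pi*a/(4d) = 1/ku
    return ku, period

# Hypothetical measurements, then the settings (1.5):
ku, Tu = relay_estimate(a=0.5, d=1.0, period=6.3)
kp, ki, kd = 0.6 * ku, 1.2 * ku / Tu, 0.075 * ku * Tu
```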

REMARK 1.1 Both Ziegler-Nichols tuning methods require very little knowledge of the plants and simple formulas are given for controller parameter settings. These formulas are obtained by extensive simulations of many stable and simple plants. The main design criterion of these methods is to obtain a quarter amplitude decay ratio for the load disturbance response. It has been criticized because little emphasis is given to measurement noise, sensitivity to process variations, and setpoint response. Even though these methods provide good rejection of load disturbance, the resulting closed-loop system can be poorly damped and sometimes can have poor stability margins.


1.4.3 PID Settings Using the Internal Model Controller Design Technique

The internal model controller (IMC) structure has become popular in process control applications. This structure, in which the controller includes an explicit model of the plant, is particularly appropriate for the design and implementation of controllers for open-loop stable systems. The fact that many of the plants encountered in process control happen to be open-loop stable possibly accounts for the popularity of IMC among practicing engineers. In this section, we consider the IMC configuration for a stable plant G(s) as shown in Figure 1.12. The IMC controller consists of a stable IMC parameter Q(s) and a model of the plant G(s), which is usually referred to as the internal model. F(s) is the IMC filter chosen to enhance robustness with respect to the modelling error and to make the overall IMC parameter Q(s)F(s) proper. From Figure 1.12 the equivalent feedback controller C(s) is

C(s) = F(s)Q(s) / (1 − F(s)Q(s)G(s)).

Figure 1.12
The IMC configuration. (The internal model controller consists of the filter F(s), the IMC parameter Q(s), and the internal model G(s) in parallel with the plant G(s); the difference between the plant output y and the model output is fed back.)

The IMC design objective considered in this section is to choose Q(s) which minimizes the L2 norm of the tracking error r − y, that is, achieves an H2-optimal control design. In general, complex models lead to complex IMC H2-optimal controllers. However, it has been shown that, for first-order plants with deadtime and a step command signal, the IMC H2-optimal design results in a controller with a PID structure. This will be clearly borne out by the following discussion.

Assume that the plant to be controlled is a first-order model with deadtime:

G(s) = k e^{−Ls} / (1 + Ts).

  • 16 THREE TERM CONTROLLERS

    The control objective is to minimize the L2 norm of the tracking error dueto setpoint changes. Using Parsevals Theorem, this is equivalent to choosingQ(s) for which min [1 G(s)Q(s)]R(s)2 is achieved, where R(s) =

    1sis the

    Laplace transform of the unit step command.Approximating the deadtime with a first-order Pade approximation, we

    have

    eLs =1 L2 s

    1 + L2 s.

    The resulting rational transfer function of the internal model G(s) is given by

    G(s) =

    (k

    1 + Ts

    )(1 L2 s

    1 + L2 s

    ).

    Choosing Q(s) to minimize the H2 norm of [1 G(s)Q(s)]R(s), we obtain

    Q(s) =1 + Ts

    k.

    Since this Q(s) is improper, we choose

    F (s) =1

    1 + s

    where > 0 is a small number. The equivalent feedback controller becomes

    C(s) =F (s)Q(s)

    1 F (s)Q(s)G(s)=

    (1 + Ts)(1 + L2 s)

    ks(L+ + L2 s)

    =(1 + Ts)(1 + L2 s)

    ks(L+ ). (1.7)

    From (1.7) we can extract the following parameters for a standard PID con-troller:

    kp =2T + L

    2k(L+ ),

    ki =1

    k(L+ ),

    kd =TL

    2k(L+ ).

    Since a first-order Pade approximation was used for the time delay, ensuringthe robustness of the design to modeling errors is all the more important.This can be done by properly selecting the design variable to achieve theappropriate compromise between performance and robustness. Morari andZafiriou [157] have proposed that a suitable choice for is > 0.2T and > 0.25L.

  • PID CONTROLLERS: AN OVERVIEW OF CLASSICAL THEORY 17

    REMARK 1.2 The IMC PID design procedure minimizes the L2 normof the tracking error due to setpoint changes. Therefore, as expected, thisdesign method gives good response to setpoint changes. However, for lagdominant plants the method gives poor load disturbance response because ofthe pole-zero cancellation inherent in the design methodology.

    1.4.4 Dominant Pole Design: The Cohen-Coon Method

    Dominant pole design attempts to position a few poles to achieve certaincontrol performance specifications. The Cohen-Coon method is a dominantpole design method. This tuning method is based on the first-order plantmodel with deadtime:

    G(s) =k

    1 + TseLs.

    The key feature of this tuning method is to attempt to locate three dominantpoles, a pair of complex poles and one real pole, such that the amplitude decayratio for load disturbance response is 0.25 and the integrated error

    0 |e(t)|dtis minimized. Thus, the Cohen-Coon method gives good load disturbancerejection. Based on analytical and numerical computation, Cohen and Coongave the following PID controller parameters in terms of k, T , and L:

    kp =1.35(1 0.82b)

    a(1 b),

    ki =1.35(1 0.82b)(1 0.39b)

    aL(1 b)(2.5 2b),

    kd =1.35L(0.37 0.37b)

    a(1 b)

    where

    a =kL

    T, b =

    L

    L+ T.

    Note that for small b, the controller parameters given by the above formulasare close to the parameters obtained by the Ziegler-Nichols step responsemethod.

    1.4.5 New Tuning Approaches

    The tuning methods described in the previous subsections are easy to useand require very little information about the plant to be controlled. However,since they do not capture all aspects of desirable PID performance, manyother new approaches have been developed. These methods can be classifiedinto three categories.

  • 18 THREE TERM CONTROLLERS

    1.4.5.1 Time Domain Optimization Methods

    The idea behind these methods is to choose the PID controller parametersto minimize an integral cost functional. Zhuang and Atherton [215] usedan integral criterion with data from a relay experiment. The time-weightedsystem error integral criterion was chosen as

    Jn() =

    0

    tne2(, t)dt

    where is a vector containing the controller parameters and e(, t) representsthe error signal. Experimentation showed that for n = 1, the controller ob-tained produced a step response of desirable form. This gave birth to theintegral square time error (ISTE) criterion. Another contribution is due toPessen [167], who used the integral absolute error (IAE) criterion:

    J() =

    0

    |e(, t)|dt.

    In order to minimize the above integral cost functions, Parsevals Theoremcan be invoked to express the time functions in terms of their Laplace trans-forms. Definite integrals of the form encountered in this approach have beenevaluated in terms of the coefficients of the numerator and denominator ofthe Laplace transforms (see [161]). Once the integration is carried out, theparameters of the PID controller are adjusted in such a way as to minimizethe integral cost function. Recently Atherton and Majhi [9] proposed a modi-fied form of the PID controller (see Figure 1.13). In this structure an internalproportional-derivative (PD) feedback is used to change the poles of the planttransfer function to more desirable locations and then a PI controller is usedin the forward loop. The parameters of the controller are obtained by mini-mization of the ISTE criterion.

    r(t)

    +

    PI

    d(t)

    +

    u(t)

    Plant

    PD

    y(t)

    Figure 1.13

    PI-PD feedback control structure.

  • PID CONTROLLERS: AN OVERVIEW OF CLASSICAL THEORY 19

    1.4.5.2 Frequency Domain Shaping

    These methods seek a set of controller parameters that give a desired frequencyresponse. Astrom and Hagglund [8] proposed the idea of using a set of rulesto achieve a desired phase margin specification. In the same spirit, Ho, Hang,and Zhou [102] developed a PID self-tuning method with specifications onthe gain and phase margins. Another contribution by Voda and Landau [199]presented a method to shape the compensated system frequency response.

    1.4.5.3 Optimal Control Methods

    This new trend has been motivated by the desire to incorporate several con-trol system performance objectives such as reference tracking, disturbancerejection, and measurement noise rejection. Grimble and Johnson [88] in-corporated all these objectives into an LQG optimal control problem. Theyproposed an algorithm to minimize an LQG-cost function where the controllerstructure is fixed to a particular PID industrial form. In a similar fashion,Panagopoulos, Astrom, and Hagglund [164] presented a method to design PIDcontrollers that captures demands on load disturbance rejection, set point re-sponse, measurement noise, and model uncertainty. Good load disturbancerejection was obtained by minimization of the integral control error. Good setpoint response was obtained by using a structure with two degrees of freedom.Measurement noise was dealt with by filtering. Robustness was achieved byrequiring a maximum sensitivity of less than a specified value.

    1.4.5.4 Design for Multiple Specifications

    Recent work based on finding the complete set of stabilizing PID controllershave opened up the possibility of finding sets of controllers achieving multiplespecifications. This is described in later chapters of this book.

    1.5 Integrator Windup

    An important element of the control strategy discussed in Section 1.2 is theactuator, which applies the control signal u to the plant. However, all ac-tuators have limitations that make them nonlinear elements. For instance,a valve cannot be more than fully opened or fully closed. During the regu-lar operation of a control system, it can very well happen that the controlvariable reaches the actuator limits. When this situation arises, the feedbackloop is broken and the system runs as an open loop because the actuator willremain at its limit independently of the process output. In this scenario, if thecontroller is of the PID type, the error will continue to be integrated. This inturn means that the integral term may become very large, which is commonly

  • 20 THREE TERM CONTROLLERS

    referred to as windup. In order to return to a normal state, the error signalneeds to have an opposite sign for a long period of time. As a consequenceof all this, a system with a PID controller may give large transients when theactuator saturates.

    The phenomenon of windup has been known for a long time. It may occurin connection with large setpoint changes or it may be caused by large distur-bances or equipment malfunction. Several techniques are available to avoidwindup when using an integral term in the controller. We describe some ofthese techniques in this section.

    1.5.1 Setpoint Limitation

    The easiest way to avoid integrator windup is to introduce limiters on thesetpoint variations so that the controller output will never reach the actuatorbounds. However, this approach has several disadvantages: (a) it leads toconservative bounds; (b) it imposes limitations on the controller performance;and (c) it does not prevent windup caused by disturbances.

    1.5.2 Back-Calculation and Tracking

    This technique is illustrated in Figure 1.14. If we compare this figure toFigure 1.6, we see that the controller has an extra feedback path. This pathis generated by measuring the actual actuator output u(t) and forming theerror signal es(t) as the difference between the output of the controller v(t)and the signal u(t). This signal es(t) is fed to the input of the integratorthrough a gain 1/Tt.

    e(t)

    ki

    kp

    Differentiator kd

    Integrator+

    + Actuator

    1

    Tt

    es(t)

    u(t)v(t)

    +

    Figure 1.14

    Controller with antiwindup.

    When the actuator is within its operating range, the signal es(t) is zero. Thus,

  • PID CONTROLLERS: AN OVERVIEW OF CLASSICAL THEORY 21

    it will not have any effect on the normal operation of the controller. Whenthe actuator saturates, the signal es(t) is different from zero. The normalfeedback path around the process is broken because the process input remainsconstant. However, there is a new feedback path around the integrator dueto es(t) 6= 0 and this prevents the integrator from winding up. The rate atwhich the controller output is reset is governed by the feedback gain 1/Tt. Theparameter Tt can thus be interpreted as the time constant that determineshow quickly the integral action is reset. In general, the smaller the value of Tt,the faster the integrator is reset. However, if the parameter Tt is chosen toosmall, spurious errors can cause saturation of the output, which accidentallyresets the integrator. Astrom and Hagglund [7] recommend Tt to be largerthan kd

    kand smaller than k

    ki.

    1.5.3 Conditional Integration

    Conditional integration is an alternative to the back-calculation technique.It simply consists of switching off the integral action when the control is farfrom the steady state. This means that the integral action is only used whencertain conditions are fulfilled, otherwise the integral term is kept constant.We now consider a couple of these switching conditions.One simple approach is to switch off the integral action when the control

    error e(t) is large. Another one is to switch off the integral action whenthe actuator saturates. However, both approaches have a disadvantage: thecontroller may get stuck at a nonzero control error e(t) if the integral termhas a large value at the time of switch off.Because of the previous disadvantage, a better approach is to switch off

    integration when the controller is saturated and the integrator update is suchthat it causes the control signal to become more saturated. For example,consider that the controller becomes saturated at its upper bound. Integrationis then switched off if the control error is positive, but not if it is negative.

    1.6 Exercises

    1.1 Carry out a Ziegler-Nichols step response design for the plant

    K

    1 + sT esL

    where K = 1, T = 1, L = 1. Find the gain and phase margins of the system.

    1.2 Repeat Problem 1.1 with K = 1 and

  • 22 THREE TERM CONTROLLERS

    (a) T = 1 and L [1, 10],

    (b) L = 1 and T [1, 10].

    In each case, determine the gain and phase margins and their variations withrespect to T and L.

    1.3 Prove that any strictly proper first order plant with transfer function

    P (s) =K

    1 + sT

    can be stabilized by the Proportional-Integral controller

    C(s) = Kp +Ki

    s.

    (a) Find the stabilizing sets S+ and S for T < 0 (unstable plant) and T > 0(stable plant), respectively in (Kp,Ki) space, and show that

    S+ S =

    andS+ S = IR2.

    (b) Determine the subsets of S+ and S for which the closed-loop charac-teristic roots are (i) real, (ii) complex.

    (c) Show that the steady state error to a ramp input can be made arbitrarilysmall.

    1.4 Consider the second order plant

    P (s) =K(s z)

    s2 + a1s+ a0=

    K(s z)

    (s p1)(s p2), K, z > 0

    with the feedback controller C(s) = Kp.

    (a) Prove that stabilization by constant gain is possible if and only if

    a1 0 (2.2)

    where T is usually fixed at a small positive value.For this chapter, the plant transfer function G(s) will be assumed to be

    rational so that

    G(s) =N(s)

    D(s)(2.3)

    where N(s), D(s) are polynomials in the Laplace variable s. With this plantdescription,the closed-loop characteristic polynomial becomes

    (s, kp, ki, kd) = sD(s) + (ki + kds2)N(s) + kpsN(s). (2.4)

    or with D(s) = D(s)(1 + sT ),

    (s, kp, ki, kd) = sD(s) + (ki + kds2)N(s) + kpsN(s). (2.5)

    depending on the particular type of C(s) being discussed. The problem ofstabilization using a PID controller is to determine the values of kp, ki and

  • PID CONTROLLERS FOR DELAY-FREE LTI SYSTEMS 27

    kd for which the closed-loop characteristic polynomial is Hurwitz, that is, hasall its roots in the open left half plane. Since plants with a zero at the origin(N(0) = 0) cannot be stabilized by PID controllers we exclude such plants atthe outset. Finding the complete set of stabilizing parameters is an importantfirst step in searching for subsets attaining various design objectives. We willnow develop an algorithm for computationally characterizing this set for agiven plant with a rational transfer function.

    2.2 Stabilizing Set

    The set of controllers of a given structure that stabilizes the closed loop is offundamental importance since every design must belong to this set and anyperformance specifications that are imposed must be achieved over this set.Write

    k := [kp, ki, kd] (2.6)

    and letSo := {k : (s,k) is Hurwitz} (2.7)

    denote the set of PID controllers that stabilize the closed-loop for the givenplant characterized by the transfer function (N(s), D(s)). It is emphasizedthat due to the presence of integral action on the error any controller inSo already provides asymptotic tracking and disturbance rejection for stepinputs. In general, additional design specifications on stability margins andtransient response are also required and subsets representing these must besought within So.The three dimensional set So is simply described by (2.7) but not necessarily

    simple to calculate. For example, a naive application of the Routh-Hurwitzcriterion to (s,k) will result in a description of So in terms of highly nonlinearand intractable inequalities.

    Example 2.1

    Consider the problem of choosing stabilizing PID gains for the plant G(s) =N(s)D(s) where

    D(s) = s5 + 8s4 + 32s3 + 46s2 + 46s+ 17

    N(s) = s3 4s2 + s+ 2.

    The closed-loop characteristic polynomial is

    (s, kp, ki, kd) = sD(s) + (ki + kps+ kds2)N(s)

    = s6 + (kd + 8)s5 + (kp 4kd + 32)s

    4 + (ki 4kp + kd + 46)s3

    +(4ki + kp + 2kd + 46)s2 + (ki + 2kp + 17)s+ 2ki

  • 28 THREE TERM CONTROLLERS

    Using the Routh-Hurwitz criterion to determine the stabilizing values for kp,ki and kd, we see that the following inequalities must hold.

    kd + 8 > 0

    kpkd 4k2d ki + 12kp kd + 210 > 0

    kikpkd 4k2pkd + 16kpk

    2d 6k

    3d k

    2i + 16kikp

    +63kikd 48k2p + 48kpkd

    263k2d + 428ki 336kp 683kd + 6852 > 0

    4k2i kpkd + 16kik2pkd 52kikpk

    2d

    6k3pkd + 24k2pk

    2d 6kpk

    3d 12k

    4d + 4k

    3i

    64k2i kp 264k2i kd + 198kik

    2p 9kikpkd + 1238kik

    2d

    72k3p 213k2pkd + 957kpk

    2d 1074k

    3d 1775k

    2i

    +2127kikp + 7688kikd 3924k2p + 3027kpkd 11322k

    2d

    10746ki 31338kp 1836kd + 206919 > 0

    6k3i kpkd + 24k2i k

    2pkd 84k

    2i kpk

    2d 6kik

    3pkd + 60kik

    2pk

    2d

    102kikpk3d 12k

    4pkd + 48k

    3pk

    2d 12k

    2pk

    3d 24kpk

    4d

    +6k4i 96k3i kp 390k

    3i kd + 294k

    2i k

    2p 285k

    2i kpkd

    +1476k2i k2d 60kik

    3p + 969kik

    2pkd 1221kikpk

    2d

    132kik3d 144k

    4p 528k

    3pkd + 2322k

    2pk

    2d 2250kpk

    3d

    204k4d 2487k3i + 273k

    2i kp 2484k

    2i kd + 5808kik

    2p

    +10530kikpkd + 34164kik2d 9072k

    3p + 2433k

    2pkd

    6375kpk2d 18258k

    3d 92961k

    2i + 79041kikp

    +184860kikd 129384k2p + 47787kpkd 192474k

    2d

    549027ki 118908kp 31212kd + 3517623 > 0

    2ki > 0

    (2.8)

    Clearly, the above inequalities are highly nonlinear and there is no straightforward method for obtaining a solution.

    In the sequel, we develop a novel and computationally efficient approach todetermining So. This is based on some root counting or signature formulas,which we develop next.

  • PID CONTROLLERS FOR DELAY-FREE LTI SYSTEMS 29

    2.3 Signature Formulas

    Let p(s) denote a polynomial of degree n with real coefficients and withoutzeros on the j axis. Write

    p(s) := p0 + p2s2 +

    peven(s2)

    +s(p1 + p3s

    2 + )

    podd(s2)

    (2.9)

    so thatp(j) = pr() + jpi() (2.10)

    where pr(), pi() are polynomials in with real coefficients with

    pr() = peven(2), (2.11)

    pi() = podd(2). (2.12)

    DEFINITION 2.1 The standard signum function sgn : R {1, 0, 1}is defined by

    sgn[x] =

    1 if x < 00 if x = 01 if x > 0.

    DEFINITION 2.2 Let p(s) be a given polynomial of degree n with realcoefficients and without zeros on the imaginary axis. Let C denote the openleft-half plane (LHP), C+ the open right-half plane (RHP), and l and r thenumbers of roots of p(s) in C and C+, respectively. Let p(j) denote theangle of p(j) and 21p(j) the net change, in radians, in the phase orangle of p(j) as runs from 1 to 2.

    LEMMA 2.1

    0 p(j) =

    2(l r). (2.13)

    PROOF Each LHP root contributes and each RHP root contributes to the net change in phase of p(j) as runs from to , and (2.13)follows from the symmetry about the real axis of the roots since p(s) has real

    coefficients.

    We call l r, the Hurwitz signature of p(s), and denote it as:

    (p) := l r. (2.14)

  • 30 THREE TERM CONTROLLERS

    2.3.1 Computation of (p)

    By the previous lemma the computation of (p) amounts to a determinationof the total phase change of p(j). To see how the total phase change may becalculated consider typical plots of p(j) where runs from 0 to + as inFigure 2.2. We note that the frequencies 0, 1, 2, 3, 4 are the points wherethe plot cuts or touches the real axis. In Figure 2.2(a), 3 is a point wherethe plot touches but does not cut the real axis.

    = 0

    12

    3

    4 = 0

    1

    23

    +

    +

    (b)(a)

    Figure 2.2

    (a) Plot of p(j) for p(s) of even degree. (b) Plot of p(j) for p(s) of odddegree.

    In Figure 2.2(a), we have

    0 p(j) = 10 p(j)

    0

    +21p(j)

    +32p(j) 0

    +43p(j)

    +4p(j) 0

    . (2.15)

    Observe that,

    10 p(j) = sgn[pi(0+)](sgn[pr(0) sgn[pr(1)]

    )2

    21p(j) = sgn[pi(+1 )](sgn[pr(1) sgn[pr(2)]

    )2

  • PID CONTROLLERS FOR DELAY-FREE LTI SYSTEMS 31

    32p(j) = sgn[pi(+2 )](sgn[pr(2) sgn[pr(3)]

    )2

    (2.16)

    43p(j) = sgn[pi(+3 )](sgn[pr(3) sgn[pr(4)]

    )2

    +4 p(j) = sgn[pi(+4 )](sgn[pr(4) sgn[pr()]

    )2

    and

    sgn[pi(+1 )] = sgn[pi(0

    +)]

    sgn[pi(+2 )] = sgn[pi(

    +1 )] = +sgn[pi(0

    +)] (2.17)

    sgn[pi(+3 )] = +sgn[pi(

    +2 )] = +sgn[pi(0

    +)]

    sgn[pi(+4 )] = sgn[pi(

    +3 )] = sgn[pi(0

    +)]

    and note also that 0, 1, 2, 4 are the real zeros of pi() of odd multiplicitywhereas 3 is a real zero of evenmultiplicity. From these relations, it is evidentthat (2.15) may be rewritten, skipping the terms involving 3 the root of evenmultiplicity so that

    0 p(j) = 10 p(j) +

    21p(j) + 42p(j) +

    4p(j)

    =

    2

    (sgn[pi(0

    +)](sgn[pr(0)] sgn[pr(1)]

    )sgn[pi(0

    +)](sgn[pr(1)] sgn[pr(2)]

    )+sgn[pi(0

    +)](sgn[pr(2)] sgn[pr(4)]

    )sgn[pi(0

    +)](sgn[pr(4)] sgn[pr()]

    )). (2.18)

    Equation (2.18) can be rewritten as

    0 p(j) =

    2sgn[pi(0

    +)](sgn[pr(0)] 2sgn[pr(1)] + 2sgn[pr(2)]

    2sgn[pr(4)] + sgn[pr()]). (2.19)

    In the case of Figure 2.2(b), that is, when p(s) is of odd degree, we have

    0 p(j) = 10 p(j)

    +

    +21p(j) 0

    +32p(j)

    ++3 p(j) 2

    (2.20)

    and 10 p(j), 21p(j), 32p(j) are as before whereas

    3p(j) =

    2sgn[pi(

    +3 )]sgn[pr(3)]. (2.21)

    We also have, as before,

    sgn[pi(+j )] = (1)

    jsgn[pi(0+)], j = 1, 2, 3 (2.22)

  • 32 THREE TERM CONTROLLERS

    Combining (2.20) - (2.22), we have, finally, for Figure 2.2(b),

    0 p(j) =

    2sgn[pi(0

    +)](sgn[pr(0)] 2sgn[pr(1)] + 2sgn[pr(2)]

    2sgn[pr(3)]). (2.23)

    We can now easily generalize the above formulas for the signature, based onLemma 2.1.

    THEOREM 2.1

    Let p(s) be a polynomial of degree n with real coefficients, without zeros onthe imaginary axis. Write

    p(j) = pr() + jpi()

    and let 0, 1, 3, , l1 denote the real nonnegative zeros of pi() withodd multiplicities with 0 = 0. ThenIf n is even,

    (p) = sgn[pi(0+)]

    sgn[pr(0)] + 2 l1

    j=1

    (1)jsgn[pr(j)] + (1)lsgn[pr()]

    .

    If n is odd,

    (p) = sgn[pi(0+)]

    sgn[pr(0)] + 2 l1

    j=1

    (1)jsgn[pr(j)]

    .

    2.3.2 Alternative Signature Expression

    In the previous subsection, we gave expressions for the signature of a poly-nomial p(s) in terms of the signs of the real part of p(j) at the zeros of theimaginary part. Here we dualize these formulas, that is, we develop signatureexpressions in terms of the signs of the imaginary part at the zeros of the realpart.Let v(s) denote a polynomial of degree n with real coefficients without j

    axis zeros. Write as before

    v(s) = veven(s2) + svodd(s

    2)

    so that

    v(j) = vr() + jvi()

    with

    vr() = veven(2), vi() = vodd(

    2).

  • PID CONTROLLERS FOR DELAY-FREE LTI SYSTEMS 33

    Let 0 < 1 < 2 < < l1 denote the real positive distinct zeros of vr()of odd multiplicities, and let l =.Observe that

    j+1j v(j) =

    2sgn

    [vr(

    +j )] (sgn [vi(j+1)] sgn [vi(j)]

    ),

    j = 1, 2, , l 2. (2.24)

    When n is odd,

    l1v(j) =

    2sgn

    [vr(

    +l1)

    ] (sgn [vi()] sgn [vi(l1)]

    )(2.25)

    and for n even, we have

    l1v(j) =

    2sgn

    [vr(

    +l1)

    ]sgn [vi(l1)] . (2.26)

    Also, it is easily verified that in both cases, even and odd,

    sgn[vr(

    +j+1)

    ]= sgn

    [vr(

    +j )]. (2.27)

    Combining (2.24) - (2.27) with

    0 v(j) = (l r)

    2= (v)

    2,

    we have the alternative signature formulas given below:

    THEOREM 2.2

    If n is even,

    (v) = sgn [vr(0)]

    (2sgn [vi(1)]2sgn [vi(2)]+ +(1)

    l22sgn [vi(l1)]

    ).

    If n is odd,

    (v) = sgn [vr(0)]

    (2sgn [vi(1)] 2sgn [vi(2)] +

    + (1)l22sgn [vi(l1] + (1)l1sgn [vi()]

    ).

    2.4 Computation of the PID Stabilizing Set

    Consider the plant, with rational transfer function

    P (s) =N(s)

    D(s)

  • 34 THREE TERM CONTROLLERS

    with the PID feedback controller

    C(s) =kps+ ki + kds

    2

    s(1 + sT ), T > 0.

    The closed-loop characteristic polynomial is

    (s) = sD(s)(1 + sT ) +(kps+ ki + kds

    2)N(s). (2.28)

    We form the new polynomial

    (s) := (s)N(s) (2.29)

    and note that the even-odd decomposition of (s) is of the form:

    (s) = even(s2, ki, kd) + sodd(s

    2, kp). (2.30)

    The polynomial (s) exhibits the parameter separation property, namely, thatkp appears only in the odd part and ki, kd only in the even part. This willfacilitate the computation of the stabilizing set using signature concepts.Let deg[D(s)] = n, deg[N(s)] = m n, and let z+ and z denote the

    number of RHP and LHP zeros of the plant, respectively, that is, zeros ofN(s). We assume, as a convenient technical assumption, that the plant hasno j axis zeros.

    THEOREM 2.3

    The closed-loop system is stable if and only if

    () = nm+ 2 + 2z+. (2.31)

    PROOF Closed-loop stability is equivalent to the requirement that then+ 2 zeros of (s) lie in the open LHP. This is equivalent to

    () = n+ 2 (2.32)

    and to

    () = n+ 2 + z+ z

    = n+ 2 + z+ (m z+)

    = (nm) + 2 + 2z+.

    Based on this, we can develop the following procedure to calculate S0, thestabilizing set:

  • PID CONTROLLERS FOR DELAY-FREE LTI SYSTEMS 35

    a) First, fix kp = k

    p and let 0 < 1 < 2 < < l1 denote the real,positive, finite frequencies which are zeros of

    odd(2, kp) = 0 (2.33)

    of odd multiplicities. Let 0 := 0 and l :=.

    b) Writej = sgn

    [vodd(0

    +, kp)]

    and determine strings of integers, i0, i1, such that:If n+m is even,

    j(i0 2i1 + 2i2 + + (1)

    l12il1 + (1)lil)= nm+ 2 + 2z+

    (2.34)If n+m is odd,

    j(i0 2i1 + 2i2 + + (1)

    l12il1)= nm+ 2 + 2z+ (2.35)

    c) Let I1, I2, I3, denote distinct strings {i0, i1, } satisfying (2.34) or(2.35). Then the stabilizing sets in ki, kd space, for kp = k

    p are givenby the linear inequalities

    even(2t , ki, kd

    )it > 0 (2.36)

    where the it range over each of the strings I1, I2, .

    d) For each string Ij , (2.36) generates a convex stability set Sj(k

    p) and thecomplete stabilizing set for fixed kp is the union of these convex sets

    S(kp) = jSj(k

    p). (2.37)

    e) The complete stabilizing set in (kp, ki, kd) space can be found by sweepingkp over the real axis and repeating the calculations (2.33) - (2.37). From(2.34) and (2.35), we can see that the range of sweeping can be restrictedto those values such that the number of roots l 1 can satisfy (2.34) or(2.35) in the most favorable case. For n+m even, this requires that

    2 + 2(l 1) nm+ 2 + 2z+

    or

    l 1 nm+ 2z+

    2(2.38)

    and for n+m odd, we need

    1 + 2(l 1) nm+ 2 + 2z+

    or

    l 1 nm+ 1 + 2z+

    2. (2.39)

    Thus, kp needs to be swept over those ranges where (2.33) is satisfiedwith l 1 given by (2.38) or (2.39).

  • 36 THREE TERM CONTROLLERS

    REMARK 2.1 If the PID controller with pure derivative action is used(T = 0) it is easy to see that the signature requirement for stability becomes

    () = nm+ 1 + 2z+.

    The following example illustrates the detailed calculations involved in deter-mining the stabilizing (kp, ki, kd) gain values.

    Example 2.2

    Consider the problem of determining stabilizing PID gains for the plant P (s) =N(s)D(s) where

    N(s) = s3 2s2 s 1

    D(s) = s6 + 2s5 + 32s4 + 26s3 + 65s2 8s+ 1.

    In this example we use the PID controller with T = 0. The closed-loopcharacteristic polynomial is

    (s, kp, ki, kd) = sD(s) + (ki + kds2)N(s) + kpsN(s).

    Here n = 6 and m = 3

    Ne(s2) = 2s2 1,

    No(s2) = s2 1,

    De(s2) = s6 + 32s4 + 65s2 + 1,

    Do(s2) = 2s4 + 26s2 8,

    andN(s) =

    (2s2 1

    ) s

    (s2 1

    ).

    Therefore, we obtain

    (s) = (s, kp, ki, kd)N(s)

    =[s2(s8 35s6 87s4 + 54s2 + 9

    )+(ki + kds

    2) (s6 + 6s4 + 3s2 + 1

    )]+s[(4s8 89s6 128s4 75s2 1

    )+ kp (s

    6 + 6s4 + 3s2 + 1)]

    so that

    (j, kp, ki, kd) =[p1() +

    (ki kd

    2)p2()

    ]+ j [q1() + kpq2()]

    where

    p1() = 10 358 + 876 + 544 92

    p2() = 6 + 64 32 + 1

    q1() = 49 + 897 1285 + 753

    q2() = 7 + 65 33 + .

  • PID CONTROLLERS FOR DELAY-FREE LTI SYSTEMS 37

    We find that z+ = 1 so that the signature requirement on (s) for stability is,

    () = nm+ 1 + 2z+ = 6.

    Since the degree of (s) is even, we see from the signature formulas that q()must have at least two positive real roots of odd multiplicity. The range ofkp such that q(, kp) has at least 2 real, positive, distinct, finite zeros withodd multiplicities was determined to be (24.7513, 1) which is the allowablerange for kp. For a fixed kp (24.7513, 1), for instance kp = 18, we have

    q(,18) = q1() 18q2()

    = 49 + 717 2365 + 1293 19.

    Then the real, nonnegative, distinct finite zeros of q(,18) with odd multi-plicities are

    0 = 0, 1 = 0.5195, 2 = 0.6055, 3 = 1.8804, 4 = 3.6848.

    Also define 5 =. Since

    sgn[q(0,18)] = 1,

    it follows that every admissible string

    I = {i0, i1, i2, i3, i4, i5}

    must satisfy{i0 2i1 + 2i2 2i3 + 2i4 i5} (1) = 6.

    Hence, the admissible strings are

    I1 = {1,1,1, 1,1, 1}

    I2 = {1, 1, 1, 1,1, 1}

    I3 = {1, 1,1,1,1, 1}

    I4 = {1, 1,1, 1, 1, 1}

    I5 = {1, 1,1, 1,1,1}.

    For I1 it follows that the stabilizing (ki, kd) values corresponding to kp = 18must satisfy the string of inequalities:

    p1(0) +(ki kd

    20

    )p2(0) < 0

    p1(1) +(ki kd

    21

    )p2(1) < 0

    p1(2) +(ki kd

    22

    )p2(2) < 0

    p1(3) +(ki kd

    23

    )p2(3) > 0

    p1(4) +(ki kd

    24

    )p2(4) < 0

    p1(5) +(ki kd

    25

    )p2(5) > 0

  • 38 THREE TERM CONTROLLERS

    Substituting for 0, 1, 2, 3, 4 and 5 in the above expressions, we obtain

    ki < 0

    ki 0.2699kd < 4.6836

    ki 0.3666kd < 10.0797 (2.40)

    ki 3.5358kd > 3.912

    ki 13.5777kd < 140.2055.

    The set of values of (ki, kd) for which (2.40) holds can be solved by linearprogramming and is denoted by S1. For I2, we have

    ki < 0

    ki 0.2699kd > 4.6836

    ki 0.3666kd > 10.0797 (2.41)

    ki 3.5358kd > 3.912

    ki 13.5777kd < 140.2055.

    The set of values of (ki, kd) for which (2.41) holds can also be solved by linearprogramming and is denoted by S2. Similarly, we obtain

    S3 = for I3

    S4 = for I4

    S5 = for I5.

    Then, the stabilizing set of (ki, kd) values when kp = 18 is given by

    S(18) = x=1,2,,5Sx

    = S1 S2.

    The set S(18) and the corresponding S1 and S2 are shown in Figure 2.3. Bysweeping over different kp values within the interval (24.7513, 1) and repeat-ing the above procedure at each stage, we can generate the set of stabilizing(kp, ki, kd) values. This set is shown in Figure 2.4.

    2.5 PID Design with Performance Requirements

    Control system performance can be specified by requirements on closed-loopstability margins such as guarantees on gain or phase margins, time-delay mar-gins as well as time domain performance specifications such as low overshootand fast settling time. Sometimes frequency domain inequalities or equiva-

    lently an H norm constraint on a closed-loop transfer function G(s) =N(s)D(s) :

    G(s) < (2.42)

  • PID