
  • Handbook of Constraint Programming

    Edited by F. Rossi, P. van Beek and T. Walsh

    Elsevier

  • Foreword

    UGO MONTANARI
    Pisa, Italy

    March 2006


  • Contents

    Foreword v

    Contents vii

    I Foundations 1

    1 Introduction 3
    Francesca Rossi, Peter van Beek, Toby Walsh

    1.1 Structure and Content . . . 3

    2 Constraint Satisfaction: An Emerging Paradigm 11
    Eugene C. Freuder and Alan K. Mackworth

    2.1 The Early Days . . . 11
    2.2 The Constraint Satisfaction Problem: Representation and Reasoning . . . 14
    2.3 Conclusions . . . 21

    3 Constraint Propagation 27
    Christian Bessiere

    3.1 Background . . . 28
    3.2 Formal Viewpoint . . . 31
    3.3 Arc Consistency . . . 35
    3.4 Higher Order Consistencies . . . 48
    3.5 Domain-based Consistencies Stronger than AC . . . 55
    3.6 Domain-based Consistencies Weaker than AC . . . 60
    3.7 Constraint Propagation as Iteration of Reduction Rules . . . 66
    3.8 Specific Constraints . . . 68

    4 Backtracking Search Algorithms 83
    Peter van Beek

    4.1 Preliminaries . . . 84
    4.2 Branching Strategies . . . 85
    4.3 Constraint Propagation . . . 88
    4.4 Nogood Recording . . . 94
    4.5 Non-Chronological Backtracking . . . 100
    4.6 Heuristics for Backtracking Algorithms . . . 103
    4.7 Randomization and Restart Strategies . . . 109
    4.8 Best-First Search . . . 114
    4.9 Optimization . . . 115
    4.10 Comparing Backtracking Algorithms . . . 116

    5 Tractable Structures for Constraint Satisfaction Problems 133
    Rina Dechter

    5.1 Background . . . 134
    5.2 Structure-Based Tractability in Inference . . . 137
    5.3 Trading Time and Space by Hybrids of Search and Inference . . . 155
    5.4 Structure-based Tractability in Search . . . 163
    5.5 Summary and Bibliographical Notes . . . 165

    6 The Complexity of Constraint Languages 169
    David Cohen and Peter Jeavons

    6.1 Basic Definitions . . . 170
    6.2 Examples of Constraint Languages . . . 171
    6.3 Developing an Algebraic Theory . . . 175
    6.4 Applications of the Algebraic Theory . . . 182
    6.5 Constraint Languages Over an Infinite Set . . . 187
    6.6 Multi-Sorted Constraint Languages . . . 188
    6.7 Alternative Approaches . . . 193
    6.8 Future Directions . . . 198

    7 Global Constraints 205
    Willem-Jan van Hoeve and Irit Katriel

    7.1 Notation and Preliminaries . . . 206
    7.2 Examples of Global Constraints . . . 212
    7.3 Complete Filtering Algorithms . . . 218
    7.4 Optimization Constraints . . . 225
    7.5 Partial Filtering Algorithms . . . 229
    7.6 Global Variables . . . 236
    7.7 Conclusion . . . 239

    8 Local Search Methods 245
    Holger H. Hoos and Edward Tsang

    8.1 Introduction . . . 246
    8.2 Randomised Iterative Improvement Algorithms . . . 252
    8.3 Tabu Search and Related Algorithms . . . 254
    8.4 Penalty-based Local Search Algorithms . . . 258
    8.5 Other Approaches . . . 264
    8.6 Local Search for Constraint Optimisation Problems . . . 265
    8.7 Frameworks and Toolkits for Local Search . . . 267
    8.8 Conclusions and Outlook . . . 268

    9 Soft Constraints 279
    Pedro Meseguer, Francesca Rossi, Thomas Schiex

    9.1 Background: Classical Constraints . . . 280
    9.2 Specific Frameworks . . . 281
    9.3 Generic Frameworks . . . 285
    9.4 Relations among Soft Constraint Frameworks . . . 289
    9.5 Search . . . 295
    9.6 Inference . . . 298
    9.7 Combining Search and Inference . . . 311
    9.8 Using Soft Constraints . . . 314
    9.9 Promising Directions for Further Research . . . 319

    10 Symmetry in Constraint Programming 327
    Ian P. Gent, Karen E. Petrie, Jean-Francois Puget

    10.1 Symmetries and Group Theory . . . 329
    10.2 Definitions . . . 335
    10.3 Reformulation . . . 338
    10.4 Adding Constraints Before Search . . . 341
    10.5 Dynamic Symmetry Breaking Methods . . . 348
    10.6 Combinations of Symmetry Breaking Methods . . . 360
    10.7 Successful Applications . . . 361
    10.8 Symmetry Expression and Detection . . . 362
    10.9 Further Research Themes . . . 364
    10.10 Conclusions . . . 366

    11 Modelling 375
    Barbara M. Smith

    11.1 Preliminaries . . . 376
    11.2 Representing a Problem . . . 377
    11.3 Propagation and Search . . . 377
    11.4 Viewpoints . . . 379
    11.5 Expressing the Constraints . . . 380
    11.6 Auxiliary Variables . . . 384
    11.7 Implied Constraints . . . 385
    11.8 Reformulations of CSPs . . . 389
    11.9 Combining Viewpoints . . . 392
    11.10 Symmetry and Modelling . . . 396
    11.11 Optimization Problems . . . 398
    11.12 Supporting Modelling and Reformulation . . . 399

    II Extensions and Applications 405

    12 Constraint Logic Programming 407
    Kim Marriott, Peter J. Stuckey, Mark Wallace

    12.1 History of CLP . . . 409
    12.2 Semantics of Constraint Logic Programs . . . 411
    12.3 CLP for Conceptual Modeling . . . 423
    12.4 CLP for Design Modeling . . . 428
    12.5 Search in CLP . . . 435
    12.6 Impact of CLP . . . 440
    12.7 Future of CLP and Interesting Research Questions . . . 442

    13 Constraints in Procedural and Concurrent Languages 451
    Thom Fruhwirth, Laurent Michel, and Christian Schulte

    13.1 Procedural and Object-Oriented Languages . . . 452
    13.2 Concurrent Constraint Programming . . . 463
    13.3 Rule-Based Languages . . . 471
    13.4 Challenges and Opportunities . . . 483
    13.5 Conclusion . . . 484

    14 Finite Domain Constraint Programming Systems 493
    Christian Schulte and Mats Carlsson

    14.1 Architecture for Constraint Programming Systems . . . 494
    14.2 Implementing Constraint Propagation . . . 504
    14.3 Implementing Search . . . 511
    14.4 Systems Overview . . . 515
    14.5 Outlook . . . 517

    15 Operations Research Methods in Constraint Programming 525
    J. N. Hooker

    15.1 Schemes for Incorporating OR into CP . . . 525
    15.2 Plan of the Chapter . . . 526
    15.3 Linear Programming . . . 528
    15.4 Mixed Integer/Linear Modeling . . . 532
    15.5 Cutting Planes . . . 534
    15.6 Relaxation of Global Constraints . . . 537
    15.7 Relaxation of Piecewise Linear and Disjunctive Constraints . . . 543
    15.8 Lagrangean Relaxation . . . 545
    15.9 Dynamic Programming . . . 548
    15.10 Branch-and-Price Methods . . . 552
    15.11 Benders Decomposition . . . 554
    15.12 Toward Integration of CP and OR . . . 558

    16 Continuous and Interval Constraints 569
    Frederic Benhamou and Laurent Granvilliers

    16.1 From Discrete to Continuous Constraints . . . 572
    16.2 The Branch-and-Reduce Framework . . . 573
    16.3 Consistency Techniques . . . 575
    16.4 Numerical Operators . . . 581
    16.5 Hybrid Techniques . . . 585
    16.6 First Order Constraints . . . 588
    16.7 Applications and Software Packages . . . 591
    16.8 Conclusion . . . 593

    17 Constraints over Structured Domains 603
    Carmen Gervet

    17.1 History and Applications . . . 604
    17.2 Constraints over Regular and Constructed Sets . . . 607
    17.3 Constraints over Finite Set Intervals . . . 611
    17.4 Influential Extensions to Subset Bound Solvers . . . 617
    17.5 Constraints over Maps, Relations and Graphs . . . 626
    17.6 Constraints over Lattices and Hierarchical Trees . . . 629
    17.7 Implementation Aspects . . . 629
    17.8 Applications . . . 631
    17.9 Further Topics . . . 631

    18 Randomness and Structure 637
    Carla Gomes and Toby Walsh

    18.1 Random Constraint Satisfaction . . . 638
    18.2 Random Satisfiability . . . 642
    18.3 Random Problems with Structure . . . 646
    18.4 Runtime Variability . . . 649
    18.5 History . . . 655
    18.6 Conclusions . . . 656

    19 Temporal CSPs 663
    Manolis Koubarakis

    19.1 Preliminaries . . . 664
    19.2 Constraint-Based Formalisms for Reasoning About Time . . . 667
    19.3 Efficient Algorithms for Temporal CSPs . . . 675
    19.4 More Expressive Queries for Temporal CSPs . . . 679
    19.5 First-Order Temporal Constraint Languages . . . 681
    19.6 The Scheme of Indefinite Constraint Databases . . . 683
    19.7 Conclusions . . . 689

    20 Distributed Constraint Programming 697
    Boi Faltings

    20.1 Definitions . . . 699
    20.2 Distributed Search . . . 700
    20.3 Improvements and Variants . . . 711
    20.4 Distributed Local Search . . . 716
    20.5 Open Constraint Programming . . . 719
    20.6 Further Issues . . . 722
    20.7 Conclusion . . . 724

    21 Uncertainty and Change 729
    Kenneth N. Brown and Ian Miguel

    21.1 Background and Definitions . . . 730
    21.2 Example: Course Scheduling . . . 730
    21.3 Uncertain Problems . . . 731
    21.4 Problems that Change . . . 736
    21.5 Pseudo-dynamic Formalisms . . . 750
    21.6 Challenges and Future Trends . . . 751
    21.7 Summary . . . 753

    22 Constraint-Based Scheduling and Planning 759
    Philippe Baptiste, Philippe Laborie, Claude Le Pape, Wim Nuijten

    22.1 Constraint Programming Models for Scheduling . . . 761
    22.2 Constraint Programming Models for Planning . . . 769
    22.3 Constraint Propagation for Resource Constraints . . . 776
    22.4 Constraint Propagation on Optimization Criteria . . . 783
    22.5 Heuristic Search . . . 787
    22.6 Conclusions . . . 792

    23 Vehicle Routing 799
    Philip Kilby and Paul Shaw

    23.1 The Vehicle Routing Problem . . . 800
    23.2 Operations Research Approaches . . . 802
    23.3 Constraint Programming Approaches . . . 807
    23.4 Constraint Programming in Search . . . 817
    23.5 Using Constraint Programming as a Subproblem Solver . . . 821
    23.6 CP-VRP in the Real World . . . 823
    23.7 Conclusions . . . 826

    24 Configuration 835
    Ulrich Junker

    24.1 What Is Configuration? . . . 836
    24.2 Configuration Knowledge . . . 842
    24.3 Constraint Models for Configuration . . . 851
    24.4 Problem Solving for Configuration . . . 861
    24.5 Conclusion . . . 866

    25 Constraint Applications in Networks 873
    Helmut Simonis

    25.1 Electricity Networks . . . 874
    25.2 Water (Oil) Networks . . . 876
    25.3 Data Networks . . . 877
    25.4 Conclusion . . . 896

    26 Bioinformatics and Constraints 903
    Rolf Backofen and David Gilbert

    26.1 What Biologists Want from Bioinformatics . . . 904
    26.2 The Central Dogma . . . 905
    26.3 A Classification of Problem Areas . . . 906
    26.4 Sequence Related Problems . . . 906
    26.5 Structure Related Problems . . . 920
    26.6 Function Related Problems . . . 933
    26.7 Microarrays . . . 935


    Index 943

  • Part I

    Foundations

  • Handbook of Constraint Programming
    Edited by F. Rossi, P. van Beek and T. Walsh
    © 2006 Elsevier. All rights reserved.

    Chapter 1

    Introduction

    Francesca Rossi, Peter van Beek, Toby Walsh

    Constraint programming is a powerful paradigm for solving combinatorial search problems that draws on a wide range of techniques from artificial intelligence, computer science, and operations research. Constraint programming is programming partly in the sense of programming in mathematical programming. The user declaratively states the constraints on the feasible solutions for a set of decision variables. However, constraint programming is also programming in the sense of computer programming. The user additionally needs to program a search strategy. This typically draws upon standard methods like chronological backtracking and constraint propagation, but may use customized code like a problem-specific branching heuristic. Constraint programming is both inclusive and diverse. The field welcomes the full spectrum of research, from the theoretical through the experimental to the applied, and brings together a wide array of disciplines including algorithms, artificial intelligence, combinatorial optimization, computer systems, computational logic, operations research, and programming languages.

    The aim of The Handbook of Constraint Programming is to capture the full breadth and depth of this field and to be encyclopedic in its scope and coverage. The intended audience of the handbook is researchers, graduate students, upper-year undergraduates, and practitioners who wish to learn about the state of the art in constraint programming. Each chapter of the handbook is intended to be a self-contained survey of a topic, and is written by one or more authors who are leading researchers in the area. Readers wishing to learn more background material than provided in a particular chapter may wish to consult one or more of the excellent introductory textbooks that are available (see, e.g., [1, 2, 3, 4]).

    1.1 Structure and Content

    The handbook is organized in two parts. The first part covers the basic foundations of constraint programming, including the history, basic search methods, computational complexity, and important issues in modeling a problem as a constraint program. The second part covers extensions to the basic framework, as well as applications.


    Part I: Foundations

    In Chapter 2, Eugene C. Freuder and Alan K. Mackworth survey the emergence of constraint satisfaction as a new paradigm within artificial intelligence and computer science. Covering the two decades from 1965 to 1985, Freuder and Mackworth trace the development of two streams of work, which they call the language stream and the algorithm stream. The focus of the language stream was on declarative programming languages and systems for developing applications of constraints. The language stream gave many special-purpose declarative languages and also general programming languages such as constraint logic programming. The focus of the algorithm stream was on algorithms and heuristics for the constraint satisfaction framework. The algorithm stream gave constraint propagation algorithms such as algorithms for arc consistency and also heuristics and constraint propagation within backtracking search. Ultimately, the language stream and the algorithm stream merged to form the core of the new field of constraint programming.

    In Chapter 3, Christian Bessiere surveys the extensive literature on constraint propagation. Constraint propagation is a central concept, perhaps the central concept, in the theory and practice of constraint programming. Constraint propagation is a form of reasoning in which, from a subset of the constraints and the domains, more restrictive constraints or more restrictive domains are inferred. The inferences are justified by local consistency properties that characterize necessary conditions for values or sets of values to belong to a solution. Arc consistency is currently the most important local consistency property in practice and has received the most attention in the literature. The importance of constraint propagation is that it can greatly simplify a constraint problem and so improve the efficiency of a search for a solution. There are two main approaches to performing constraint propagation: the rules iteration approach and the algorithmic approach. In the rules iteration approach, reduction rules specify conditions under which domain reductions can be performed for a constraint. In the algorithmic approach, generic and special-purpose algorithms are designed for constraints.
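
    The arc consistency reasoning sketched above can be illustrated with a small AC-3-style example. This sketch is ours, not the handbook's; the representation of domains as sets and of constraints as a map from directed arcs to binary predicates is an illustrative assumption:

```python
from collections import deque

def revise(domains, constraints, x, y):
    """Remove values of x that have no supporting value of y under arc (x, y)."""
    check = constraints[(x, y)]
    removed = False
    for vx in list(domains[x]):
        if not any(check(vx, vy) for vy in domains[y]):
            domains[x].discard(vx)
            removed = True
    return removed

def ac3(domains, constraints):
    """Enforce arc consistency by iterating revisions to a fixed point.

    Returns False if some domain is wiped out (inconsistency detected).
    """
    queue = deque(constraints)                    # all directed arcs (x, y)
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False                      # empty domain: no solution
            # the domain of x shrank, so revisit arcs pointing at x
            queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
    return True
```

    For instance, with domains x, y ∈ {1, 2, 3} and the constraint x < y posted as the two directed arcs, `ac3` shrinks x to {1, 2} and y to {2, 3}.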

    The main algorithmic techniques for solving constraint satisfaction problems (CSPs) are backtracking search and local search. In Chapter 4, Peter van Beek surveys backtracking search algorithms. A backtracking search algorithm performs a depth-first traversal of a search tree, where the branches out of a node represent alternative choices that may have to be examined in order to find a solution and the constraints are used to prune subtrees containing no solutions. Backtracking search algorithms come with a guarantee that a solution will be found if one exists, and can be used to show that a CSP does not have a solution and to find a provably optimal solution. Many techniques for improving the efficiency of a backtracking search algorithm have been suggested and evaluated, including constraint propagation, nogood recording, backjumping, heuristics for variable and value ordering, and randomization and restart strategies.
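
    The depth-first traversal described above can be sketched as a minimal chronological backtracking routine. This is our own hedged illustration, not code from the handbook; the `conflicts` callback standing in for constraint checking is an assumption:

```python
def backtrack(assignment, variables, domains, conflicts):
    """Depth-first backtracking search for a CSP (a minimal sketch).

    `conflicts(var, val, assignment)` reports whether assigning val to var
    violates a constraint with the current partial assignment.
    """
    if len(assignment) == len(variables):
        return dict(assignment)                   # all variables assigned: solution
    var = next(v for v in variables if v not in assignment)
    for val in domains[var]:                      # branch on each candidate value
        if not conflicts(var, val, assignment):   # prune subtrees with no solutions
            assignment[var] = val
            solution = backtrack(assignment, variables, domains, conflicts)
            if solution is not None:
                return solution
            del assignment[var]                   # undo: chronological backtrack
    return None                                   # all branches failed
```

    For 4-queens, say, the variables are columns, the values are rows, and a suitable `conflicts` checks shared rows and diagonals against the partial assignment.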

    A fundamental challenge in constraint programming is to understand the computational complexity of problems involving constraints. In their most general form, constraint satisfaction problems (CSPs) are NP-hard. To counter this pessimistic result, much work has been done on identifying restrictions on constraint satisfaction problems such that solving an instance can be done efficiently; that is, in polynomial time in the worst case. Finding tractable classes of constraint problems is of theoretical interest of course, but also of practical interest in the design of constraint programming languages and effective constraint solvers. The restrictions on CSPs that lead to tractability fall into two classes: restricting the topology of the underlying graph of the CSP and restricting the type of the allowed constraints. In Chapter 5, Rina Dechter surveys how the complexity of solving CSPs varies with the topology of the underlying constraint graph. The results depend on properties of the constraint graph, such as the well-known graph parameter tree-width. In Chapter 6, David Cohen and Peter Jeavons survey how the complexity of solving CSPs varies with the type of allowed constraints. Here, the results depend on algebraic properties of the constraint relations.

    In Chapter 7, Willem-Jan van Hoeve and Irit Katriel survey global constraints. A global constraint is a constraint that can be defined over arbitrary subsets of the variables. The canonical example of a global constraint is the all-different constraint, which states that the variables in the constraint must be pairwise different. The power of global constraints is two-fold. First, global constraints ease the task of modeling an application using constraint programming. The all-different constraint, for example, is a pattern that recurs in many applications, including rostering, timetabling, sequencing, and scheduling applications. Second, special-purpose constraint propagation algorithms can be designed which take advantage of the semantics of the constraint and are therefore much more efficient. Van Hoeve and Katriel show that designing constraint propagation algorithms for global constraints draws on a wide variety of disciplines including graph theory, flow theory, matching theory, linear programming, and finite automata.
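
    The extra pruning power of reasoning about all-different as a whole, rather than as pairwise disequalities, can be seen in a deliberately naive sketch of ours (practical propagators compute the same filtering with bipartite matching rather than enumeration, as the chapter's complete filtering algorithms describe):

```python
from itertools import product

def alldiff_filter(domains):
    """Keep only the values that occur in some solution of alldifferent.

    Brute force for illustration only: enumerate every complete assignment
    and record which values participate in a pairwise-different one.
    """
    variables = list(domains)
    supported = {v: set() for v in variables}
    for combo in product(*(domains[v] for v in variables)):
        if len(set(combo)) == len(combo):         # all values pairwise different
            for v, val in zip(variables, combo):
                supported[v].add(val)
    return supported
```

    With x, y ∈ {1, 2} and z ∈ {1, 2, 3}, arc consistency on the pairwise disequalities removes nothing, while filtering the constraint as a whole reduces z to {3}: the two values 1 and 2 must be consumed by x and y.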

    In Chapter 8, Holger H. Hoos and Edward Tsang survey local search algorithms for solving constraint satisfaction problems. A local search algorithm performs a walk in a directed graph, where the nodes represent alternative assignments to the variables that may have to be examined and the number of violated constraints is used to guide the search for a solution. Local search algorithms cannot be used to show a CSP does not have a solution or to find a provably optimal solution. However, such algorithms are often effective at finding a solution if one exists and can be used to find an approximation to an optimal solution. Many techniques and strategies for improving local search algorithms have been proposed and evaluated, including randomized iterative improvement, tabu search, penalty-based approaches, and alternative neighborhood and move strategies.
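
    A well-known instance of such a guided walk is the min-conflicts heuristic, sketched here as our own illustration (the `violations` callback and parameter names are assumptions, not an interface from the chapter):

```python
import random

def min_conflicts(variables, domains, violations, max_steps=10000, seed=0):
    """Min-conflicts local search over complete assignments.

    `violations(var, val, assignment)` counts the constraints violated by
    var = val against the rest of the assignment. Like all local search,
    this is incomplete: it may return None even when a solution exists.
    """
    rng = random.Random(seed)
    assignment = {v: rng.choice(list(domains[v])) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables
                      if violations(v, assignment[v], assignment) > 0]
        if not conflicted:
            return assignment                     # no violated constraints
        var = rng.choice(conflicted)              # repair a conflicted variable
        assignment[var] = min(domains[var],
                              key=lambda val: violations(var, val, assignment))
    return None
```

    Note the contrast with backtracking: the walk moves between complete assignments, repairing the one that currently violates constraints, rather than extending a consistent partial assignment.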

    In many real world problems, not all constraints are hard. Some constraints may be soft and express preferences that we would like to satisfy but do not insist upon. Other real world problems may be over-constrained. In both cases, an extension of the basic framework of constraint satisfaction to soft constraints is useful. In Chapter 9, Pedro Meseguer, Francesca Rossi, and Thomas Schiex survey the different formalisms of soft constraints proposed in the literature. They describe the relationships between these different formalisms. In addition, they discuss how solving methods have been generalized to deal with soft constraints.

    The first part ends with two chapters concerned with modeling real world problems as CSPs. Symmetry occurs in many real world problems: machines in a factory might be identical, nurses might have the same skills, delivery trucks might have the same capacity, etc. Symmetry can also be introduced when we model a problem as a CSP. For example, if we introduce a decision variable for each machine, then we can permute those variables representing identical machines. Such symmetry enlarges the search space and must be dealt with if we are to solve problems of the size met in practice. In Chapter 10, Ian P. Gent, Karen E. Petrie, and Jean-Francois Puget survey the different forms of symmetry in constraint programming. They describe the three basic techniques used to deal with symmetry: reformulating the problem, adding symmetry-breaking constraints, or modifying the search strategy to ignore symmetric states. Symmetry is one example of the sort of issues that need to be considered when modeling a problem as a CSP. In Chapter 11, Barbara M. Smith surveys a range of other issues in modeling a problem as a CSP. This includes deciding on an appropriate viewpoint (e.g., if we are scheduling exams, do the variables represent exams and their values the times, or do the variables represent the times and their values the exams?), adding implied constraints to help prune the search space, and introducing auxiliary variables to make it easier to state the constraints or to improve propagation.
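
    The effect of adding symmetry-breaking constraints can be shown on a hypothetical toy model of our own with two interchangeable variables, say two identical machines whose loads must sum to a target (the model and all names are illustrative assumptions):

```python
from itertools import product

def solutions(domains_list, constraints):
    """Enumerate all complete assignments satisfying every constraint."""
    return [combo for combo in product(*domains_list)
            if all(c(combo) for c in constraints)]

# Toy model: two interchangeable machines whose loads must sum to 4.
doms = [range(5), range(5)]
base = [lambda a: a[0] + a[1] == 4]
ordered = base + [lambda a: a[0] <= a[1]]   # symmetry-breaking ordering

all_sols = solutions(doms, base)       # includes symmetric pairs such as (1, 3) and (3, 1)
canonical = solutions(doms, ordered)   # one representative per symmetry class
```

    Ordering the interchangeable variables keeps exactly one assignment from each symmetric pair, shrinking the space the solver must explore without losing any essentially distinct solution.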

    Part II: Extensions and Applications

    To increase the uptake, ease of use, extensibility, and flexibility of constraint technology, constraints and search have been integrated into several programming languages and programming paradigms. In Chapter 12, Kim Marriott, Peter J. Stuckey, and Mark Wallace survey constraint logic programming (CLP), the integration of constraint solving into logic programming languages. Constraint solving and logic programming are both declarative paradigms, so their integration is quite natural. Further, the fact that constraints can be seen as relations or predicates, that a set of constraints can be viewed as the conjunction of the individual constraints, and that backtracking search is a basic methodology for solving a set of constraints, makes constraint solving very compatible with logic programming, which is based on predicates, logical conjunctions, and backtracking search. Marriott, Stuckey, and Wallace cover the elegant semantics of CLP, show the power of CLP in modeling constraint satisfaction problems, and describe how to define specific search routines in CLP for solving the model.

    In Chapter 13, Thom Fruhwirth, Laurent Michel, and Christian Schulte survey the integration of constraints into procedural and object-oriented languages, concurrent languages, and rule-based languages. Integrating constraint solving into these more traditional programming paradigms faces new challenges, as these paradigms generally lack support for declarative programming. These challenges include (i) allowing the specification of new search routines, while maintaining declarativeness, (ii) the design of declarative modeling languages that are user-friendly and based on well-known programming metaphors, and (iii) the integration of constraint solving into multi-paradigm languages. Fruhwirth, Michel, and Schulte include a discussion of the technical aspects of integrating constraints into each programming paradigm, as well as the advantages and disadvantages of each paradigm.

    In Chapter 14, Christian Schulte and Mats Carlsson survey finite domain constraint programming systems. One of the key properties of constraint programming systems is the provision of widely reusable services, such as constraint propagation and backtracking search, for constructing constraint-based applications. Schulte and Carlsson discuss which services are provided by constraint programming systems and also the key principles and techniques in implementing and coordinating these services. For many applications, the constraint propagation, backtracking search, and other services provided by the constraint programming system are sufficient. However, some applications require more, and most constraint programming systems are extensible, allowing the user to define, for example, new constraint propagators or new search strategies. Schulte and Carlsson also provide an overview of several well-known finite domain constraint programming systems.


    Operations research (OR) and constraint programming (CP) are complementary frameworks with similar goals. In Chapter 15, John N. Hooker surveys some of the schemes for incorporating OR methods into CP. In constraint programming, constraints are used to reduce the domains of the variables. One method for incorporating an OR method is to apply it to a constraint to reduce the domains. For example, if a subset of the constraints are linear inequalities, the domain of a variable in the subset can possibly be reduced by minimizing and maximizing the variable using linear programming on the subset of linear constraints. This example is an instance of a popular scheme for incorporating OR into CP: create a relaxation of the CP problem in the form of an OR model, such as a linear programming model. Other schemes for creating hybrid OR/CP combinations decompose a problem so that CP and OR are each used on the parts of the problem to which they are best suited. Hooker shows that OR/CP combinations using both relaxation and decomposition can bring substantial computational benefits.
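    The domain-reduction idea above is easy to picture in a simplified form. The sketch below is illustrative only, not Hooker's scheme: instead of solving a full linear-programming relaxation, it applies the weaker but related idea of bounds propagation over linear inequalities, tightening one variable's bounds from the extreme values of the others. All names and data are invented for the example.

```python
# Bounds propagation over linear inequalities sum(a_i * x_i) <= b: a weaker,
# hand-rolled stand-in for the LP min/max relaxation described in the text.

def tighten(bounds, ineqs):
    """bounds: {var: (lo, hi)}; ineqs: list of (coeffs, b), coeffs = {var: a}."""
    changed = True
    while changed:                          # iterate to a fixed point
        changed = False
        for coeffs, b in ineqs:
            for v, a in coeffs.items():
                # smallest possible contribution of the other variables
                rest = sum(a2 * (bounds[u][0] if a2 > 0 else bounds[u][1])
                           for u, a2 in coeffs.items() if u != v)
                lo, hi = bounds[v]
                if a > 0 and (b - rest) / a < hi:     # x_v <= (b - rest) / a
                    bounds[v], changed = (lo, (b - rest) / a), True
                elif a < 0 and (b - rest) / a > lo:   # x_v >= (b - rest) / a
                    bounds[v], changed = ((b - rest) / a, hi), True
    return bounds

bounds = tighten({"x": (0, 10), "y": (0, 10)},
                 [({"x": 1, "y": 1}, 4),    # x + y <= 4
                  ({"x": 1, "y": -1}, 0)])  # x - y <= 0
```

On this data the fixed point shrinks both domains to [0, 4]; the true LP relaxation of the same two inequalities would tighten x further, to [0, 2], which is one reason the LP-based scheme is worth its extra machinery.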

    Real-world problems often take us beyond finite domain variables. For example, to reason about power consumption we might want a variable to range over the reals, and to reason about communication networks we might want a variable to range over paths in a graph. Constraint programming has therefore been extended to deal with more than just finite (or enumerated) domains of values. In Chapter 16, Frédéric Benhamou and Laurent Granvilliers survey constraints over continuous and interval domains. The extension of backtracking search over finite domains to interval constraints is called branch-and-reduce: branching splits an interval and reduce narrows the intervals using a generalization of local consistency and interval arithmetic. Hybrid techniques combining symbolic reasoning and constraint propagation have also been designed. Benhamou and Granvilliers also discuss some of the applications of interval constraints and the available interval constraint software packages. In Chapter 17, Carmen Gervet surveys constraints over structured domains. Many combinatorial search problems, such as bin packing, set covering, and network design, can be naturally represented in the language of sets, multi-sets, strings, graphs and other structured objects. Constraint propagation has therefore been extended to deal with constraints over variables which range over such datatypes.
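    The branch-and-reduce loop can be sketched for a single constraint. The toy solver below is an invented example, not the published algorithm: for x·x = c over an interval, the reduce step discards boxes whose interval image [lo², hi²] cannot contain c, and the branch step bisects what remains. Real interval solvers contract intervals with hull or box consistency rather than merely discarding them.

```python
import math

# Toy branch-and-reduce for the single constraint x*x == c on [lo, hi], lo >= 0.

def solve(lo, hi, c=2.0, eps=1e-9):
    """Return a width-eps interval containing a solution, or None."""
    if lo * lo > c or hi * hi < c:          # reduce: interval image misses c
        return None
    if hi - lo <= eps:
        return (lo, hi)
    mid = (lo + hi) / 2.0                   # branch: bisect the interval
    return solve(lo, mid, c, eps) or solve(mid, hi, c, eps)

box = solve(0.0, 2.0)                       # brackets sqrt(2)
```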

    Early work in empirical comparisons of algorithms for solving constraint satisfaction problems was hampered by a lack of realistic or hard test problems. The situation improved with the discovery of hard random problems that arise at a phase transition and the investigation of alternative random models of constraint satisfaction, satisfiability, and optimization problems. Experiments could now be performed which compared the algorithms on the hardest problems and systematically explored the entire space of random problems to see where one algorithm bettered another. In Chapter 18, Carla Gomes and Toby Walsh survey these alternative random models. In addition to their interest as an experimental testbed, insight gained from the study of hard problems has also led to the design of better algorithms. As one example, Gomes and Walsh discuss the technique of randomization and restarts for improving the efficiency of backtracking search algorithms.

    In Chapter 19, Manolis Koubarakis surveys temporal constraint satisfaction problems for representing and reasoning with temporal information. Temporal reasoning is important in many application areas, including natural language understanding, database systems, medical information systems, planning, and scheduling, and constraint satisfaction techniques play a large role in temporal reasoning. Constraint-based temporal reasoning formalisms for representing qualitative, metric, and combined qualitative-metric temporal information have been proposed in the literature, and many efficient constraint satisfaction algorithms are known for these formalisms. Koubarakis also demonstrates the application-driven need for more expressive queries over temporal constraint satisfaction (especially queries combining temporal and non-temporal information) and surveys various proposals that address this need, including the scheme of indefinite constraint databases.

    In Chapter 20, Boi Faltings surveys distributed constraint satisfaction. In distributed constraint satisfaction, constraint solving happens under the control of different independent agents, where each agent controls a single variable. The canonical example of the usefulness of this formalism is meeting scheduling, where each person has their own constraints and there are privacy concerns that restrict the flow of information, but many applications have been identified. Backtracking search and its improvements have been extended to the distributed case. In synchronous backtracking, messages are passed from agent to agent with only one agent being active at any one time. A message consists of either a partial instantiation or a message that signals the need to backtrack. In asynchronous backtracking, all agents are active at once, and messages are sent to coordinate the assignments that are made to their individual variables. Asynchronous backtracking has been the focus of most of the work in distributed constraint satisfaction. Faltings also surveys the literature on open constraint satisfaction, a form of distributed CSP where the domains of the variables and the constraints may be incomplete or not fully known.

    The basic framework of constraint programming makes two assumptions that do not hold in many real world problems: that the problem being modeled is static and that the constraints are known with certainty. For example, factory scheduling is inherently dynamic and uncertain since the full set of jobs may not be known in advance, machines may break down, employees may be late or ill, and so on. In Chapter 21, Kenneth N. Brown and Ian Miguel survey the uses and extensions of constraint programming for handling problems subject to change and uncertainty. For dynamically changing problems, two of the alternatives are to record information about the problem structure during the solving process, such as explanations or nogoods, so that re-solving can be done efficiently, and to search for robust solutions that anticipate expected changes. For uncertain problems, different types of uncertainty can be identified: the problem itself may be intrinsically imprecise; there may be a set of possible realizations of the problem, one of which will be the final version; and there may be probability distributions over the full realizations. As well, many CSP formalisms have been proposed for handling uncertainty, including fuzzy, mixed, uncertain, probabilistic, stochastic, and recurrent CSPs.

    Constraint programming has proven useful (indeed, it is often the method of choice) in important applications from industry, business, manufacturing, and science. In the last five chapters of the handbook, some of these applications of constraint programming are highlighted. Each of the chapters emphasizes why constraint programming has been successful in the given application domain. As well, in the best traditions of application-driven research, the chapters describe how focusing on real-world applications has led to basic discoveries and improvements to existing constraint programming techniques. In a fruitful cycle, these discoveries and improvements then led to new and more successful applications.

    In Chapter 22, Philippe Baptiste, Philippe Laborie, Claude Le Pape, and Wim Nuijten survey constraint programming approaches to scheduling and planning. Scheduling is the task of assigning resources to a set of activities to minimize a cost function. Scheduling arises in diverse settings, including the allocation of gates to incoming planes at an airport, crews to an assembly line, and processes to a CPU. Planning is a generalization of scheduling where the set of activities to be scheduled is not known in advance. Constraint programming approaches to scheduling and planning have aimed at generality, with the ability to seamlessly handle real-world side constraints. As well, much effort has gone into improved implied constraints, such as global constraints, edge-finding constraints and timetabling constraints, which lead to powerful constraint propagation. Baptiste et al. show that one of the reasons for the success of a constraint programming approach is its ability to integrate efficient special purpose algorithms within a flexible and expressive paradigm. Additional advantages of a constraint programming approach include the ability to form hybrids of backtracking search and local search and the ease with which domain-specific scheduling and planning heuristics can be incorporated within the search routines.

    In Chapter 23, Philip Kilby and Paul Shaw survey constraint programming approaches to vehicle routing. Vehicle routing is the task of constructing routes for vehicles to visit customers at minimum cost. A vehicle has a maximum capacity which cannot be exceeded and the customers may specify time windows in which deliveries are permitted. Much work on constraint programming approaches to vehicle routing has focused on alternative constraint models and additional implied constraints to increase the amount of pruning performed by constraint propagation. Kilby and Shaw show that constraint programming is well-suited for vehicle routing because of its ability to handle real-world (or side) constraints. Vehicle routing problems that arise in practice often have unique constraints that are particular to a business entity. In non-constraint programming approaches, such side constraints often have to be handled in an ad hoc manner. In constraint programming, a wide variety of side constraints can be handled simply by adding them to the core model.

    In Chapter 24, Ulrich Junker surveys constraint programming approaches to configuration. Configuration is the task of assembling or configuring a customized system from a catalog of components. Configuration arises in diverse settings, including the assembly of home entertainment systems, cars and trucks, and travel packages. Junker shows that constraint programming is well-suited to configuration because of (i) its flexibility in modeling and the declarativeness of the constraint model, (ii) the ability to explain a failure to find a customized system when the configuration task is over-constrained and to subsequently relax the user's constraints, (iii) the ability to perform interactive configuration, where the user makes a sequence of choices and after each choice constraint propagation is used to restrict future possible choices, and (iv) the ability to incorporate reasoning about the user's preferences.

    In Chapter 25, Helmut Simonis surveys constraint programming approaches to applications that arise in electrical, water, oil, and data (such as the Internet) distribution networks. The applications include design, risk analysis, and operational control of the networks. Simonis discusses the best alternative formulations, or constraint models, for these problems. The constraint programming work on networks vividly illustrates the advantages of application-driven research. The limited success in this domain of classical constraint programming approaches such as backtracking search led to improvements in hybrid approaches which combine both backtracking and local search or combine both constraint programming and operations research methods. A research hurdle that must still be overcome, however, is the complexity and implementation effort that is required to construct a successful hybrid system for an application.

    In Chapter 26, Rolf Backofen and David Gilbert survey constraint programming approaches to problems that arise in bioinformatics. Bioinformatics is the study of informatics and computational problems that arise in molecular biology, evolution, and genetics. Perhaps the first and most well-known example problem in bioinformatics is DNA sequence alignment. More recently, constraint programming approaches have made significant progress on the important problem of protein structure prediction. The ultimate goals and implications of bioinformatics are profound: better drug design, identification of genetic risk factors, gene therapy, and genetic modification of food crops and animals.


    Handbook of Constraint Programming
    Edited by F. Rossi, P. van Beek and T. Walsh
    © 2006 Elsevier. All rights reserved

    Chapter 2

    Constraint Satisfaction: An Emerging Paradigm

    Eugene C. Freuder and Alan K. Mackworth

    This chapter focuses on the emergence of constraint satisfaction, with constraint languages, as a new paradigm within artificial intelligence and computer science during the period from 1965 (when Golomb and Baumert published "Backtrack programming" [34]) to 1985 (when Mackworth and Freuder published "The complexity of some polynomial network consistency algorithms for constraint satisfaction problems" [55]). The rest of this handbook will cover much of the material introduced here in more detail, as well as, of course, continuing on from 1986 into 2006.

    2.1 The Early Days

    Constraint satisfaction, in its basic form, involves finding a value for each one of a set of problem variables where constraints specify that some subsets of values cannot be used together. As a simple example of constraint satisfaction, consider the task of choosing component parts for the assembly of a bicycle, such as the frame, wheels, brakes, sprockets and chain, that are all mutually compatible.

    Constraint satisfaction, like most fields of artificial intelligence, can be separated into (overlapping) concerns with representation and reasoning. The former can be divided into generic and application-specific concerns, the latter into search and inference. While constraint satisfaction has often been pigeon-holed as a form of search, its real importance lies in its broad representational scope: it can be used effectively to model many other forms of reasoning (e.g. temporal reasoning) and applied to many problem domains (e.g. scheduling). For this reason, constraint satisfaction problems are sometimes encountered in application domains that are unaware that an academic community has been studying the subject for years: one reason for the importance of a handbook such as this. Furthermore, while heuristic search methods are a major concern, the distinguishing feature of constraint satisfaction as a branch of artificial intelligence is arguably the emphasis on inference, in the form of constraint propagation, as opposed to search.


    Constraint satisfaction problems have been tackled by a dizzying array of methods, from automata theory to ant algorithms, and are a topic of interest in many fields of computer science and beyond. These connections add immeasurably to the richness of the subject, but are largely beyond the scope of this chapter. Here we will focus on the basic methods involved in the establishment of constraint satisfaction as a branch of artificial intelligence. This new branch of artificial intelligence, together with related work on programming languages and systems that we can only touch upon here, laid the groundwork for the flourishing of interest in constraint programming languages after 1985.

    Constraint satisfaction, of course, predates 1965. The real world problems that we now identify as constraint satisfaction problems, like workforce scheduling, have naturally always been with us. The toy 8-queens problem, which preoccupied so many of the early constraint satisfaction researchers in artificial intelligence, is said to have been proposed in 1848 by the chess player Max Bezzel. Mythology claims that a form of backtrack search, a powerful search paradigm that has become a central tool for constraint satisfaction, was used by Theseus in the labyrinth in Crete. Backtrack search was used in recreational mathematics in the nineteenth century [51], and was an early subject of study as computer science and operations research emerged as academic disciplines after World War II. Bitner and Reingold [2] credit Lehmer with first using the term "backtrack" in the 1950s [50]. Various forms of constraint satisfaction and propagation appeared in the computer science literature in the 1960s [16, 15, 34, 75].

    In artificial intelligence, interest in constraint satisfaction developed in two streams. In some sense a common ancestor of both streams is Ivan Sutherland's groundbreaking 1963 MIT Ph.D. thesis, "Sketchpad: A man-machine graphical communication system" [73].

    In one stream, the versatility of constraints led to applications in a variety of domains, and associated programming languages and systems. This stream we can call the language stream. In 1964 Wilkes proposed that algebraic equations be allowed as constraint statements in procedural Algol-like programming languages, with relaxation used to satisfy the constraints [80]. Around 1967, Elcock developed a declarative language, Absys, based on the manipulation of equational constraints [22]. Burstall employed a form of constraint manipulation as early as 1969 in a program for solving cryptarithmetic puzzles [9]. In the very first issue of Artificial Intelligence in 1970, Fikes described REF-ARF, where the REF language formed part of a general problem-solving system employing constraint satisfaction and propagation as one of its methods [23]. Kowalski used a form of constraint propagation for theorem proving [48]. Sussman and others at MIT applied a form of constraint propagation to analysis, synthesis and fault localization for circuits [6, 17, 18, 67, 71], and Sussman with Steele developed the CONSTRAINTS language [72]. Borning used constraints in his ThingLab simulation laboratory [4, 5], whose kernel was an extension of the Smalltalk language; Laurière used constraints in Alice, a language for solving combinatorial problems [49]. In the planning domain, Eastman did constraint-structured space planning with GSP, the General Space Planner [21], Stefik used constraint posting in MOLGEN, which planned gene-cloning experiments in molecular genetics [68, 69], and Descotte and Latombe's GARI system, which generated the machining plans of mechanical parts, embedded a planner which made compromises among antagonistic constraints [20]. Fox, Allen and Strohm developed ISIS-II [25], a constraint-directed reasoning system for factory job-shop scheduling.

    In the other stream, an interest in constraint solving algorithms grew out of the machine vision community; we cite some of the early work here. We refer to this stream as the algorithm stream. The landmark Waltz filtering (arc consistency) constraint propagation algorithm appeared in a Ph.D. thesis on scene labeling [79], building upon work of Huffman [41] and Clowes [10]. Montanari developed path consistency and established a general framework for representing and reasoning about constraints in a seminal paper entitled "Networks of constraints: fundamental properties and applications to picture processing" [60]. Mackworth exploited constraints for machine vision [52], before providing a general framework for "Consistency in networks of relations" and new algorithms for arc and path consistency [53]. Freuder generalized arc and path consistency to k-consistency [26] shortly after completing a Ph.D. thesis on active vision. Barrow and Tenenbaum, with MSYS [1] and IGS [74], were also early users of constraints for image interpretation. Rosenfeld, Hummel and Zucker, in "Scene labeling by relaxation operations", explored the continuous labeling problem, where constraints are not hard, specifying that values can or cannot be used together, but soft, specifying degrees of compatibility [65]. Haralick, Davis, Rosenfeld and Milgram discussed "Reduction operations for constraint satisfaction" [38], and Haralick and Shapiro generalized those results in a two-part paper on "The consistent labeling problem" [36, 37]. Together with J. R. Ullman, they even discussed special hardware for constraint propagation and parallel search computation in [76].

    The language and algorithm streams diverged, and both became more detached from specific application domains. While applications and commercial exploitation did proliferate, the academic communities focused more on general methods. While the generality and scientific rigor of constraint programming is one of its strengths, we face a continuing challenge to reconnect these streams more firmly with their semantic problem-solving roots.

    The language stream became heavily influenced by logic programming, in the form of constraint logic programming, and focused on the development of programming languages and libraries. Hewitt's Planner language [40] and its partial implementation as Micro-Planner [70] can be seen as an early logic programming language [3]. The major early milestone, though, was the development of Prolog by Colmerauer and others around 1972 [14] and the "logic as a programming language" movement [39, 47]. Prolog can be framed as an early constraint programming language, solving equality constraints over terms (including variables) using the unification algorithm as the constraint solver. Colmerauer pushed this view much further in his introduction of Prolog II in 1982 [13, 12]. The integration of constraint propagation algorithms into interpreters for Planner-like languages was proposed by Mackworth [53]. Van Hentenryck developed and implemented CHIP (Constraint Handling in Prolog) as a fully-fledged constraint logic programming language [77]. In a parallel development, Jaffar et al. developed the CLP(X) family of constraint logic programming languages [42], including CLP(R) [44]. For more on these developments in the language stream see the surveys in [11, 43] and other chapters in this handbook.

    The algorithm stream, influenced by the paradigm of artificial intelligence as search, as exemplified in Nilsson's early textbook [61], and by the development of the science of algorithms, as exemplified by Knuth's The Art of Computer Programming [45], focused on algorithms and heuristics. The second stream remained more firmly within artificial intelligence, developing as one of the artificial intelligence communities built around reasoning paradigms: constraint-based reasoning [29], case-based reasoning, and the like. It also focused increasingly on the simple, but powerful and general, constraint satisfaction problem (CSP) formulation and its variants. We shall focus primarily on this stream, and the development of the CSP paradigm, in this chapter.


    The challenge then became to reintegrate the language and algorithm streams, along with related disciplines, such as mathematical programming and constraint databases, into a single constraint programming community. This process began in earnest in the 1990s when Paris Kanellakis, Jean-Louis Lassez, and Vijay Saraswat chaired a workshop that soon led to the formation of an annual International Conference on Principles and Practice of Constraint Programming, and, at the instigation of Zsofia Ruttkay, Gene Freuder established the Constraints journal, which provides a common forum for the many disciplines interested in constraint programming and constraint satisfaction and optimization, and the many application domains in which constraint technology is employed.

    2.2 The Constraint Satisfaction Problem: Representation and Reasoning

    Here we consider the representation of constraint satisfaction problems, the varieties of reasoning used by algorithms to solve them, and the analysis of those solution methods.

    2.2.1 Representation

    The classic definition of a Constraint Satisfaction Problem (CSP) is as follows. A CSP P is a triple P = ⟨X, D, C⟩ where X is an n-tuple of variables X = ⟨x1, x2, . . . , xn⟩, D is a corresponding n-tuple of domains D = ⟨D1, D2, . . . , Dn⟩ such that xi ∈ Di, and C is a t-tuple of constraints C = ⟨C1, C2, . . . , Ct⟩. A constraint Cj is a pair ⟨RSj, Sj⟩ where RSj is a relation on the variables in Sj = scope(Cj). In other words, RSj is a subset of the Cartesian product of the domains of the variables in Sj.1

    A solution to the CSP P is an n-tuple A = ⟨a1, a2, . . . , an⟩ where ai ∈ Di and each Cj is satisfied in that RSj holds on the projection of A onto the scope Sj. In a given task one may be required to find the set of all solutions, sol(P), to determine if that set is non-empty, or just to find any solution, if one exists. If the set of solutions is empty the CSP is unsatisfiable. This simple but powerful framework captures a wide range of significant applications in fields as diverse as artificial intelligence, operations research, scheduling, supply chain management, graph algorithms, computer vision and computational linguistics, to name but a few.
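    This definition transcribes almost directly into code. The sketch below is a toy illustration with invented data: constraints are represented extensionally as (relation, scope) pairs, and sol(P) is computed by checking every n-tuple, the blind enumeration discussed later in this chapter.

```python
from itertools import product

# A CSP as the triple (X, D, C); a solution is an n-tuple A whose projection
# onto each scope S_j lies in the relation R_Sj.

def solutions(variables, domains, constraints):
    """variables: list; domains: {var: list}; constraints: [(relation, scope)]
    with relation a set of tuples over the scope's variables."""
    sols = []
    for values in product(*(domains[x] for x in variables)):
        a = dict(zip(variables, values))
        if all(tuple(a[x] for x in scope) in rel for rel, scope in constraints):
            sols.append(values)
    return sols

# toy instance: x < y with both domains {1, 2, 3}
lt = {(u, v) for u in (1, 2, 3) for v in (1, 2, 3) if u < v}
sols = solutions(["x", "y"], {"x": [1, 2, 3], "y": [1, 2, 3]}, [(lt, ["x", "y"])])
```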

    The classic CSP paradigm can be both specialized and generalized in a variety of important ways. One important specialization considers the extensionality/intensionality of the domains and constraints. If all the domains in D are finite sets, with extensional representations, then they, and the constraint relations, may be represented and manipulated extensionally. However, even if the domains and the relations are intensionally represented, many of the techniques described in this chapter and elsewhere in the handbook still apply. If the size of the scope of each constraint is limited to 1 or 2 then the constraints are unary and binary and the CSP can be directly represented as a constraint graph with variables as vertices and constraints as edges. If the arity of constraints is not so limited then a hypergraph is required, with a hyperedge for each p-ary constraint (p > 2) connecting the p vertices involved. The satisfiability of propositional formulae, SAT, is another specialization of CSP, where the domains are restricted to be {T, F} and the constraints are clauses. 3-SAT, the archetypal NP-complete decision problem, is a further restriction, where the scope of each constraint (clause) is 3 or fewer variables.

    1 This is the conventional definition, which we will adhere to here. A more parsimonious definition of a CSP would dispense with D entirely, leaving the role of Di to be played by a unary constraint Cj with scope(Cj) = ⟨xi⟩.

    The classic view of CSPs was initially developed by Montanari [60] and Mackworth [53]. It has strong roots in, and links with, SAT [16, 15, 54], relational algebra and database theory [58], computer vision [10, 41, 79] and graphics [73].

    Various generalizations of the classic CSP model have been developed subsequently. One of the most significant is the Constraint Optimization Problem (COP), for which there are several significantly different formulations, and the nomenclature is not always consistent [19]. Perhaps the simplest COP formulation retains the CSP limitation of allowing only hard Boolean-valued constraints but adds a cost function over the variables that must be minimized. This arises often, for example, in scheduling applications.
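    The simplest COP formulation just described (hard constraints plus a cost function to minimize) can be sketched by extending blind enumeration with a running minimum. The scheduling flavour below is an invented toy: two unit-length tasks that must not overlap, minimizing the makespan.

```python
from itertools import product

# COP sketch: keep the CSP's hard constraints, add a cost to minimize.

def best_solution(variables, domains, constraints, cost):
    best = None
    for values in product(*(domains[x] for x in variables)):
        a = dict(zip(variables, values))
        if all(check(a) for check in constraints):      # hard constraints
            if best is None or cost(a) < cost(best):
                best = a
    return best

# two unit tasks on one machine: start times must differ by at least 1
b = best_solution(
    ["s1", "s2"],
    {"s1": [0, 1, 2], "s2": [0, 1, 2]},
    [lambda a: abs(a["s1"] - a["s2"]) >= 1],
    cost=lambda a: max(a["s1"], a["s2"]) + 1,           # makespan
)
```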

    2.2.2 Reasoning: Inference and Search

    We will consider the algorithms for solving CSPs under two broad categories, inference and search, and various combinations of those two approaches. If the domains Di are all finite then the finite search space for putative solutions is Σ = ⋈i Di (where ⋈ is the join operator of relational algebra [58]). Σ can, in theory, be enumerated and each n-tuple tested to determine if it is a solution. This blind enumeration technique can be improved upon using two distinct orthogonal strategies: inference and search. In inference techniques, local constraint propagation can eliminate large subspaces from Σ on the grounds that they must be devoid of solutions. Search systematically explores Σ, often eliminating subspaces with a single failure. The success of both strategies hinges on the simple fact that a CSP is conjunctive: to solve it, all of the constraints must be satisfied, so that a local failure on a subset of variables rules out all putative solutions with the same projection onto those variables. These two basic strategies are usually combined in most applications.

    2.2.3 Inference: Constraint Propagation Using Network Consistency

    The major development in inference techniques for CSPs was the discovery and development, in the 1970s, of network consistency algorithms for constraint propagation. Here we will give an overview of that development.

    Analysis of using backtracking to solve CSPs shows that it almost always displays pathological thrashing behaviors [3]. Thrashing is the repeated exploration of failing subtrees of the backtrack search tree that are essentially identical, differing only in assignments to variables irrelevant to the failure of the subtree. Because there is typically an exponential number of such irrelevant assignments, thrashing is often the most significant factor in the running time of backtracking.

    The first key insight behind all the consistency algorithms is that much thrashing behaviour can be identified and eliminated, once and for all, by tightening the constraints, making implicit constraints explicit, using tractable, efficient polynomial-time algorithms. The second insight is that the level, or scope, of consistency, the size of the set of variables involved in the local context, can be adjusted as a parameter from 1 up to n, each increase in level requiring correspondingly more work.

    For simplicity, we will initially describe the development of the consistency algorithms for CSPs with finite domains and unary and binary constraints only, though neither restriction is necessary, as we shall see. We assume the reader is familiar with the basic elements of graph theory, set theory and relational algebra.

    Consider a CSP P = ⟨X, D, C⟩ as defined above. The unary constraints are Ci = ⟨R{xi}, ⟨xi⟩⟩. We use the shorthand notation Ri to stand for R{xi}. Similarly, the binary constraints are of the form Cs = ⟨R{xi,xj}, ⟨xi, xj⟩⟩ where i ≠ j. We use Rij to stand for R{xi,xj}.

    Node consistency is the simplest consistency algorithm. Node i, comprised of vertex i representing variable xi with domain Di, is node consistent iff Di ⊆ Ri. If node i is not node consistent it can be made so by computing:

    D′i = Di ∩ Ri
    Di ← D′i

    A single pass through the nodes makes the network node consistent. The resulting CSP is P′ = ⟨X, D′, C⟩ where D′ = ⟨D′1, D′2, . . . , D′n⟩. We say P′ = NC(P). Clearly sol(P) = sol(P′). Let Σ′ = ⋈i D′i; then |Σ′| ≤ |Σ|.
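    As a concrete illustration (toy data, with predicates standing in for the unary relations Ri), node consistency is a single filtering pass:

```python
# Node consistency: D'_i = D_i ∩ R_i, one pass over the nodes.

def node_consistency(domains, unary):
    """domains: {i: set of values}; unary: {i: predicate for R_i}."""
    return {i: {v for v in D if unary.get(i, lambda _: True)(v)}
            for i, D in domains.items()}

d = node_consistency({1: {1, 2, 3, 4}, 2: {1, 2, 3, 4}},
                     {1: lambda v: v % 2 == 0})     # R_1: x1 is even
```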

    Arc consistency is a technique for further tightening the domains using the binary constraints. Consider node i with domain Di. Suppose there is a non-trivial relation Rij between variables xi and xj. We consider the arcs ⟨i, j⟩ and ⟨j, i⟩ separately. Arc ⟨i, j⟩ is arc consistent iff:

    Di ⊆ πi(Rij ⋈ Dj)

    where π is the projection operator. That is, for every member of Di, there is a corresponding element in Dj that satisfies Rij. Arc ⟨i, j⟩ can be tested for arc consistency, and made consistent if it is not so, by computing:

    D′i = Di ∩ πi(Rij ⋈ Dj)
    Di ← D′i

    (This is a semijoin [58].) In other words, delete all elements of Di that have no corresponding element in Dj satisfying Rij. A network is arc consistent iff all its arcs are consistent. If all the arcs are already consistent, a single pass through them is all that is needed to verify this. If, however, at least one arc has to be made consistent (i.e. D′i ≠ Di: there is a deletion from Di) then one must recheck some number of arcs. The basic arc consistency algorithm simply checks all the arcs repeatedly until a fixed point of no further domain reductions is reached. This algorithm is known as AC-1 [53].

    Waltz [79] realized that a more intelligent arc consistency bookkeeping scheme would only recheck those arcs that could have become inconsistent as a direct result of deletions from Di. Waltz's algorithm, now known as AC-2 [53], propagates the revisions of the domains through the arcs until, again, a fixed point is reached. AC-3, presented by Mackworth [53], is a generalization and simplification of AC-2. AC-3 is still the most widely used and effective consistency algorithm. For each of these algorithms let P′ = AC(P) be the result of enforcing arc consistency on P. Then clearly sol(P) = sol(P′) and |Δ′| ≤ |Δ|.
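    The propagation idea behind AC-3 (revise an arc, then re-queue only the arcs pointing at the revised variable) can be sketched as follows. This is an illustrative rendering, not the handbook's algorithm text; binary constraints are represented here as sets of allowed value pairs, and all names are ours:

```python
from collections import deque

def ac3(domains, constraints):
    """Enforce arc consistency, mutating `domains` in place.

    domains:     dict variable -> set of values (Di)
    constraints: dict arc (i, j) -> set of allowed (vi, vj) pairs,
                 with both orientations of each constraint listed
    Returns False iff some domain is wiped out (no solution).
    """
    def revise(i, j):
        # Delete from Di every value with no support in Dj.
        rel = constraints[(i, j)]
        dropped = {a for a in domains[i]
                   if not any((a, b) in rel for b in domains[j])}
        domains[i] -= dropped
        return bool(dropped)

    queue = deque(constraints)            # start with every arc
    while queue:
        i, j = queue.popleft()
        if revise(i, j):
            if not domains[i]:
                return False
            # Only arcs directed at the revised variable can break.
            queue.extend((k, m) for (k, m) in constraints
                         if m == i and k != j)
    return True
```

    On the constraint x < y with both domains {1, 2, 3}, this leaves Dx = {1, 2} and Dy = {2, 3}.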

    The best framework for understanding all the network consistency algorithms is to see them as removing local inconsistencies from the network which can never be part of any global solution. When those inconsistencies are removed they may propagate to cause inconsistencies in neighbouring arcs that were previously consistent. Those inconsistencies are in turn removed, so the algorithm eventually arrives, monotonically, at a fixed point consistent network and halts. An inconsistent network has the same set of solutions as the consistent network that results from applying a consistency algorithm to it; however, if one subsequently applies, say, a backtrack search to the consistent network the resultant thrashing behaviour can be no worse and almost always is much better, assuming the same variable and value ordering.

  • E. C. Freuder, A. K. Mackworth 17

    Path consistency [60] is the next level of consistency to consider. In arc consistency we tighten the unary constraints using local binary constraints. In path consistency we analogously tighten the binary constraints using the implicit induced constraints on triples of variables.

    A path of length two from node i through node m to node j, ⟨i, m, j⟩, is path consistent iff:

        Rij ⊆ πij(Rim ⋈ Dm ⋈ Rmj)

    That is, for every pair of values ⟨a, b⟩ allowed by the explicit relation Rij there is a value c for xm such that ⟨a, c⟩ is allowed by Rim and ⟨c, b⟩ is allowed by Rmj.

    Path ⟨i, m, j⟩ can be tested for path consistency and made consistent, if it is not, by computing:

        Rij′ = Rij ∩ πij(Rim ⋈ Dm ⋈ Rmj),  Rij ← Rij′

    If the binary relations are represented as Boolean bit matrices then the combination of the join and projection operations (which is relational composition) becomes Boolean matrix multiplication, and the ∩ operation becomes simply pairwise bit ∧ operations. In other words, for all the values ⟨a, b⟩ allowed by Rij, if there is no value c for xm allowed by Rim and Rmj, the path is made consistent by changing that bit value in Rij from 1 to 0. The way to think of this is that the implicit constraint on ⟨i, j⟩ imposed by node m through the relational composition Rim · Rmj is made explicit in the new constraint Rij′ when path ⟨i, m, j⟩ is made consistent.
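    Under the bit-matrix representation, the composition-and-intersection step for a single path can be sketched as follows (a small illustration with 0/1 list-of-lists matrices; Dm is assumed here to be already folded into Rim, and all names are ours):

```python
def compose(R1, R2):
    """Relational composition of 0/1 matrices:
    (R1 . R2)[a][b] = OR over c of (R1[a][c] AND R2[c][b])."""
    return [[int(any(R1[a][c] and R2[c][b] for c in range(len(R2))))
             for b in range(len(R2[0]))] for a in range(len(R1))]

def revise_path(Rij, Rim, Rmj):
    """Rij <- Rij AND (Rim . Rmj): pairwise bit operations."""
    comp = compose(Rim, Rmj)
    return [[Rij[a][b] & comp[a][b] for b in range(len(Rij[0]))]
            for a in range(len(Rij))]
```

    Each bit of Rij that has no supporting value c in the composition is flipped from 1 to 0, exactly as described above.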

    As with arc consistency, the simplest algorithm for enforcing path consistency for the entire network is to check and ensure path consistency for each length 2 path ⟨i, m, j⟩. If any path has to be made consistent then the entire pass through the paths is repeated. This is algorithm PC-1 [53, 60].

    The algorithm PC-2 [53] determines, when any path is made consistent, the set of other paths that could have become inconsistent because they use the arc between that pair of vertices, and queues those paths, if necessary, for further checking. PC-2 realizes substantial savings over PC-1, just as AC-3 is more efficient than AC-1 [55].

    Typically, after path consistency is established, there are non-trivial binary constraints between all pairs of nodes. As shown by Montanari [60], if all paths of length 2 are consistent then all paths of any length are consistent, so longer paths need not be considered. Once path consistency is established, there is a chain of values along any path satisfying the relations between any pair of values allowed at the start and the end of the path. This does not mean that there is necessarily a solution to the CSP. Even if a path traverses the entire network with a chain of compatible values, if that path self-intersects at a node the two values on the path at that node may be different. Indeed, it is a property of both arc consistency and path consistency that consistency may be established with non-empty domains and relations even though there may be no global solution. Low-level consistency, with no empty domains, is a necessary but not sufficient condition for the existence of a solution. So, if consistency does empty any domain or relation there is no global solution.

  • 18 2. Constraint Satisfaction: An Emerging Paradigm

    Parenthetically, we note that our abstract descriptions of these algorithms, in terms of relational algebra, are specifications, not implementations. Implementations can often achieve efficiency savings by, for example, exploiting the semantics of a constraint such as the all-different global constraint, alldiff, that requires each variable in its scope to assume a different value.

    Briefly, let us establish that consistency algorithms do not require the finite domain or binary constraint restrictions on the CSP model. As long as we can perform the π, ⋈ and ∩ operations on the domain and relational representations, these algorithms are perfectly adequate.

    Consider, for example, the trivial CSP P = ⟨{x1, x2}, {[0, 3], [2, 5]}, {⟨=, {x1, x2}⟩}⟩ where x1 and x2 are reals. That is, x1 ∈ D1 = [0, 3], x2 ∈ D2 = [2, 5]. Arc consistency on arc ⟨1, 2⟩ reduces D1 to [2, 3] and arc consistency on arc ⟨2, 1⟩ reduces D2 to [2, 3].
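    This two-arc revision on real intervals can be illustrated directly (a toy sketch for the single constraint x1 = x2; the function name and the tuple representation of intervals are ours):

```python
def ac_equal_intervals(d1, d2):
    """Make arcs <1,2> and <2,1> consistent for the constraint x1 = x2
    over real intervals: both domains shrink to their intersection."""
    lo, hi = max(d1[0], d2[0]), min(d1[1], d2[1])
    if lo > hi:
        return None          # empty intersection: no solution
    return (lo, hi), (lo, hi)

print(ac_equal_intervals((0, 3), (2, 5)))   # ((2, 3), (2, 3))
```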

    If some of the constraints are p-ary (p > 2) we can generalize arc consistency. In this case we can represent each p-ary constraint C = ⟨RSj, Sj⟩ as a hyperedge connecting the vertices representing the variables in Sj. Consider a vertex xi ∈ Sj. We say we make the directional hyperarc ⟨xi, Sj − {xi}⟩ generalized arc consistent by computing:

        Di′ = Di ∩ πi(RSj ⋈ (⋈m∈Sj−{xi} Dm)),  Di ← Di′

    In other words, the hyperarc is made generalized arc consistent, if necessary, by deleting from Di any element that is not compatible with some tuple of its neighbours under the relation RSj. As with AC-3, any changes in Di may propagate to any other hyperarcs directed at node i. This is the generalized arc consistency algorithm GAC [53]. One can also specialize arc consistency: Mackworth, Mulder and Havens exploited the properties of tree-structured variable domains in a hierarchical arc consistency algorithm HAC [57].
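    One revision of such a hyperarc can be sketched as follows (an illustrative helper under our own representation of a p-ary relation as a set of allowed tuples over an ordered scope; this is not the handbook's GAC code):

```python
def gac_revise(domains, scope, allowed, xi):
    """Revise hyperarc <xi, scope - {xi}>: delete from D_xi every value
    that appears in no allowed tuple whose other components are still
    in their variables' domains.  Returns True iff something was removed.

    scope:   ordered tuple of the constraint's variables
    allowed: set of allowed value tuples over `scope` (the relation)
    """
    k = scope.index(xi)
    supported = {t[k] for t in allowed
                 if t[k] in domains[xi]
                 and all(t[m] in domains[scope[m]]
                         for m in range(len(scope)) if m != k)}
    removed = domains[xi] != supported
    domains[xi] = supported
    return removed
```

    For the ternary constraint x + y = z with Dz already reduced to {4}, revising the hyperarc at x prunes Dx from {1, 2} to {2}.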

    While there is no immediately obvious graph-theoretic concept analogous to nodes, arcs and paths to motivate a higher form of consistency, the fact that consideration of paths of length two is, in fact, sufficient for path consistency provides a natural motivation for the concept of k-consistency introduced by Freuder in 1978 [26]. k-consistency requires that given consistent values for any k − 1 variables, there exists a value for any kth variable, such that all k values are consistent (i.e. the k values form a solution to the subproblem induced by the k variables). Thus 2-consistency is equivalent to arc consistency, and 3-consistency to path consistency. Freuder provided a synthesis algorithm for finding all the solutions to a CSP without search by achieving higher and higher levels of consistency.

    Freuder went on in 1985 to generalize further to (i, j)-consistency [28]. A constraint network is (i, j)-consistent if, given consistent values for any i variables, there exist values for any other j variables, such that all i + j values together are consistent. k-consistency is (k − 1, 1)-consistency. Special attention was paid to (1, j)-consistency, which is a generalization of what would now be termed singleton consistency.


    2.2.4 Search: Backtracking

    Backtrack is the fundamental complete search method for constraint satisfaction problems, in the sense that one is guaranteed to find a solution if one exists. Even in 1965, Golomb and Baumert, in a JACM paper simply entitled "Backtrack programming" [34], were able to observe that the method had already been independently discovered many times. Golomb and Baumert believed their paper to be the first attempt to formulate the scope and methods of backtrack programming in its full generality, while acknowledging the fairly general exposition given five years earlier by Walker [78].

    Indeed, Golomb and Baumert's formulation is almost too general for our purposes here in that it is presented as an optimization problem, with the objective to maximize a function of the variables. Arguably Golomb and Baumert are presenting branch and bound programming, where upper and lower bounds on what is possible or desirable at any point in the search can provide additional pruning of the search. What we would now call a classic CSP, the 8-queens problem, they formulate by specifying a function whose value is 0 when the queens do not attack each other, and 1 otherwise. It is worth noting also that in this optimization context, again even in 1965, Golomb and Baumert acknowledge the existence of "learning programs" and "hill climbing programs" that converge on relative maxima. They observe dryly that while the backtrack algorithm lacks such glamorous qualities as learning and progress, it has the more prosaic virtue of being exhaustive.

    Basic backtrack search builds up a partial solution by choosing values for variables until it reaches a dead end, where the partial solution cannot be consistently extended. When it reaches a dead end it undoes the last choice it made and tries another. This is done in a systematic manner that guarantees that all possibilities will be tried. It improves on simply enumerating and testing all candidate solutions by brute force in that it checks to see if the constraints are satisfied each time it makes a new choice, rather than waiting until a complete solution candidate containing values for all variables is generated. The backtrack search process is often represented as a search tree, where each node (below the root) represents a choice of a value for a variable, and each branch represents a candidate partial solution. Discovering that a partial solution cannot be extended then corresponds to pruning a subtree from consideration. Other noteworthy early papers on backtracking include Bitner and Reingold's "Backtrack programming techniques" [2] and Fillmore and Williamson's "On backtracking: a combinatorial description of the algorithm" [24], which used group theory to address symmetry issues.
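    The basic procedure can be sketched as a short recursive routine (an illustration, not a historical reconstruction; the `ok` predicate, which tests a partial assignment against all constraints, is our own interface):

```python
def backtrack(domains, ok, assignment=None):
    """Chronological backtracking: choose a value for the next variable,
    check the constraints immediately, recurse, and undo the last
    choice on a dead end.  `ok(partial)` tests a partial assignment."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(domains):
        return dict(assignment)            # complete solution found
    var = next(v for v in domains if v not in assignment)
    for val in domains[var]:
        assignment[var] = val
        if ok(assignment):                 # check now, not at the leaves
            solution = backtrack(domains, ok, assignment)
            if solution is not None:
                return solution
        del assignment[var]                # undo and try the next value
    return None                            # dead end: backtrack
```

    Checking `ok` on every partial assignment is exactly what prunes a whole subtree as soon as a choice proves inconsistent.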

    Heuristic search methods to support general purpose problem solving paradigms were studied intensely from the early days of artificial intelligence, and backtracking played a role in the form of depth-first search of state spaces, problem reduction graphs, and game trees [61]. In the 1970s, as constraint satisfaction emerged as a paradigm of its own, backtrack in the full sense we use the term here, for search involving constraint networks, gained prominence in the artificial intelligence literature, leading to the publication in the Artificial Intelligence journal at the beginning of the 1980s of Haralick and Elliott's "Increasing Tree Search Efficiency for Constraint Satisfaction Problems" [35]. This much-cited paper provided what was, for the time, an especially thorough statistical and experimental evaluation of the predominant approaches to refining backtrack search.

    There are two major themes in the early work on improving backtracking: controlling search and interleaving inference (constraint propagation) with search. Both of these themes are again evident even in Golomb and Baumert. They observe that all other things being equal, it is more efficient to make the next choice from the set [domain] with fewest elements, an instance of what Haralick and Elliott dubbed the "fail first" principle, and they discuss "preclusion", where a choice for one variable rules out inconsistent choices for other variables, a form of what Haralick and Elliott called lookahead that they presented as "forward checking". Of course, preclusion and the smallest domain heuristic nicely complement one another.
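    Preclusion, or forward checking, can be sketched as a single filtering step applied after each tentative choice (an illustration; binary constraints are represented as sets of allowed value pairs, and all names are ours):

```python
def forward_check(domains, constraints, var, val):
    """Preclusion: after tentatively assigning var := val, prune from
    every directly constrained domain the values inconsistent with it.

    constraints: dict arc (i, j) -> set of allowed (vi, vj) pairs
    Returns the pruned copy of `domains`, or None on a wipe-out
    (a dead end detected before any further search).
    """
    new = {v: set(d) for v, d in domains.items()}
    new[var] = {val}
    for (i, j), rel in constraints.items():
        if i == var:
            new[j] = {b for b in new[j] if (val, b) in rel}
            if not new[j]:
                return None
    return new
```

    The smallest-domain heuristic then simply picks the variable whose pruned domain has the fewest remaining values, which is why the two techniques complement each other.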

    In general, one can look for efficient ways to manage search both going forward and backward. When we move forward, extending partial solutions, we make choices about the order in which we consider variables, values and constraints. This order can make an enormous difference in the amount of work we have to do. When we move backwards after hitting a dead end, we do not have to do this chronologically by simply undoing the last choice we made. We can be smarter about it. In general, constraint propagation, most commonly in the form of partial or complete arc consistency, can be carried out before, and/or during, search, in an attempt to prune the search space.

    Haralick and Elliott compared several forms of lookahead, carrying out different degrees of partial arc consistency propagation after choosing a value. Oddly, their "full lookahead" still did not maintain full arc consistency. However, restoring full arc consistency after choosing values had been proposed as early as 1974 by Gaschnig [31], and McGregor had even experimented with interleaving path consistency with search [59]. Mackworth observed that one could generalize to the alternation of constraint manipulation and case analysis, and proposed an algorithm that decomposed problems by splitting a variable domain in half and then restoring arc consistency on the subproblems [53].

    Basic backtrack search backtracks chronologically to undo the last choice and try something else. This can result in silly behavior, where the algorithm tries alternatives for choices that clearly had no bearing on the failure that induced the backtracking. Stallman and Sussman, in the context of circuit analysis, with "dependency-directed backtracking" [67], Gaschnig with "backjumping" [33], and Bruynooghe with "intelligent backtracking" [8] all addressed this problem. These methods in some sense remember the reasons for failure in order to backtrack over legitimate culprits. Stallman and Sussman went further by learning new constraints ("nogoods") from failure, which could be used to prune further search. Gaschnig used another form of memory in his "backmarking" algorithm to avoid redundant checking for consistency when backtracking [32].

    2.2.5 Analysis

    While it was recognized early on that solving CSPs is in general NP-hard, a variety of analytical techniques were brought to bear to evaluate, predict or compare algorithm performance and relate problem complexity to problem structure. In particular, there are tradeoffs to evaluate between the effort required to avoid search, e.g. by exercising more intelligent control or carrying out more inference, and the reduction in search effort obtained.

    Knuth [46] and Purdom [63] used probing techniques to estimate the efficiency of backtrack programs. Haralick and Elliott carried out a statistical analysis [35], which was refined by Nudel [62] to compute expected complexities for classes of problems defined by basic problem parameters. Brown and Purdom investigated average time behavior [7, 64]. Mackworth and Freuder carried out algorithmic complexity analyses of worst case behavior for various tractable propagation algorithms [55]. They showed the time complexity for arc consistency to be linear in the number of constraints, settling an unresolved issue. This result turned out to be important for constraint programming languages that used arc consistency as a primitive operation [56]. Of course, experimental evaluation was common, though in the early days there was perhaps too much reliance on the n-queens problem, and too little understanding of the potential pitfalls of experiments with random problems.

    Problem complexity can be related to problem structure. Seidel [66] developed a dynamic programming synthesis algorithm, using a decomposition technique based on graph cutsets, that related problem complexity to a problem parameter that he called "front length". Freuder [27] proved that problems with tree-structured constraint graphs were tractable by introducing the structural concept of the width of a constraint graph, and demonstrating a connection between width and consistency level that ensured that tree-structured problems could be solved with backtrack-free search after arc consistency preprocessing. He subsequently related complexity to problem structure in terms of maximal biconnected components [28] and stable sets [30].

    2.3 Conclusions

    This chapter has not been a complete history, and certainly not an exhaustive survey. We have focused on the major themes of the early period, but it is worth noting that many very modern-sounding topics were also already appearing at this early stage. For example, even in 1965 Golomb and Baumert were making allusions to symmetry and problem reformulation.

    Golomb and Baumert concluded in 1965 [34]:

    Thus the success or failure of backtrack often depends on the skill and ingenuity of the programmer in his ability to adapt the basic methods to the problem at hand and in his ability to reformulate the problem so as to exploit the characteristics of his own computing device. That is, backtrack programming (as many other types of programming) is somewhat of an art.

    As the rest of this handbook will demonstrate, much progress has been made in making even more powerful methods available to the constraint programmer. However, constraint programming is still somewhat of an art. The challenge going forward will be to make constraint programming more of an engineering activity and constraint technology more transparently accessible to the non-programmer.

    Acknowledgements

    We are grateful to Peter van Beek for all his editorial comments, help and support during the preparation of this chapter. This material is based upon works supported by the Science Foundation Ireland under Grant No. 00/PI.1/C075 and by the Natural Sciences and Engineering Research Council of Canada. Alan Mackworth is supported by a Canada Research Chair in Artificial Intelligence.


    Bibliography

    [1] H. G. Barrow and J. M. Tenenbaum. MSYS: A system for reasoning about scenes. SRI AI Center, 1975.

    [2] J. R. Bitner and E. M. Reingold. Backtrack programming techniques. Comm. ACM, 18:651-656, 1975.

    [3] D. G. Bobrow and B. Raphael. New programming languages for artificial intelligence research. ACM Computing Surveys, 6(3):153-174, Sept. 1974.

    [4] A. Borning. ThingLab: an object-oriented system for building simulations using constraints. In R. Reddy, editor, Proceedings of the 5th International Joint Conference on Artificial Intelligence, pages 497-498, Cambridge, MA, Aug. 1977. William Kaufmann. ISBN 0-86576-057-8.

    [5] A. Borning. ThingLab: A constraint-oriented simulation laboratory. Report CS-79-746, Computer Science Dept., Stanford University, CA, 1979.

    [6] A. Brown. Qualitative knowledge, causal reasoning and the localization of failures. Technical Report AITR-362, MIT Artificial Intelligence Laboratory, Nov. 1976. URL http://dspace.mit.edu/handle/1721.1/6921.

    [7] C. A. Brown and P. W. Purdom Jr. An average time analysis of backtracking. SIAM J. Comput., 10:583-593, 1981.

    [8] M. Bruynooghe. Solving combinatorial search problems by intelligent backtracking. Information Processing Letters, 12:36-39, 1981.

    [9] R. M. Burstall. A program for solving word sum puzzles. Computer Journal, 12(1):48-51, Feb. 1969.

    [10] M. B. Clowes. On seeing things. Artificial Intelligence, 2:79-116, 1971.

    [11] J. Cohen. Constraint logic programming languages. CACM, 33(7):52-68, July 1990. ISSN 0001-0782. URL http://www.acm.org/pubs/toc/Abstracts/0001-0782/79209.html.

    [12] A. Colmerauer. Prolog II reference manual and theoretical model. Technical report, Groupe d'Intelligence Artificielle, Université d'Aix-Marseille II, Luminy, Oct. 1982.

    [13] A. Colmerauer. Prolog and infinite trees. In K. L. Clark and S.-A. Tärnlund, editors, Logic Programming, pages 231-251. Academic Press, 1982.

    [14] A. Colmerauer and P. Roussel. The birth of Prolog. In R. L. Wexelblat, editor, Proceedings of the Conference on History of Programming Languages, volume 28(3) of ACM Sigplan Notices, pages 37-52, New York, NY, USA, Apr. 1993. ACM Press. ISBN 0-89791-570-4.

    [15] M. Davis and H. Putnam. A computing procedure for quantification theory. J. ACM, 7:201-215, 1960.

    [16] M. Davis, G. Logemann, and D. Loveland. A machine program for theorem-proving. Comm. ACM, 5:394-397, 1962.

    [17] J. de Kleer. Local methods for localizing faults in electronic circuits. Technical Report AIM-394, MIT Artificial Intelligence Laboratory, Nov. 1976. URL http://dspace.mit.edu/handle/1721.1/6921.

    [18] J. de Kleer and G. J. Sussman. Propagation of constraints applied to circuit synthesis. Technical Report AIM-485, MIT Artificial Intelligence Laboratory, Sept. 1978. URL http://hdl.handle.net/1721.1/5745.

    [19] R. Dechter. Constraint Processing. Morgan Kaufmann, 2003.

    [20] Y. Descotte and J.-C. Latombe. GARI: A problem solver that plans how to machine mechanical parts. In International Joint Conference on Artificial Intelligence (IJCAI 81), pages 766-772, 1981.

    [21] C. M. Eastman. Automated space planning. Artificial Intelligence, 4(1):41-64, 1973.

    [22] E. W. Elcock. Absys: the first logic programming language - A retrospective and a commentary. Journal of Logic Programming, 9(1):1-17, July 1990.

    [23] R. E. Fikes. REF-ARF: A system for solving problems stated as procedures. Artificial Intelligence, 1:27-120, 1970.

    [24] J. P. Fillmore and S. G. Williamson. On backtracking: A combinatorial description of the algorithm. SIAM Journal on Computing, 3(1):41-55, Mar. 1974.

    [25] M. S. Fox, B. P. Allen, and G. Strohm. Job-shop scheduling: An investigation in constraint-directed reasoning. In AAAI-82, Proceedings, pages 155-158, 1982.

    [26] E. C. Freuder. Synthesizing constraint expressions. Comm. ACM, 21:958-966, 1978.

    [27] E. C. Freuder. A sufficient condition for backtrack-free search. J. ACM, 29:24-32, 1982.

    [28] E. C. Freuder. A sufficient condition for backtrack-bounded search. J. ACM, 32:755-761, 1985.

    [29] E. C. Freuder and A. K. Mackworth. Introduction to the special volume on constraint-based reasoning. Artificial Intelligence, 58:1-2, 1992.

    [30] E. C. Freuder and M. J. Quinn. Taking advantage of stable sets of variables in constraint satisfaction problems. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, pages 1076-1078, Los Angeles, 1985.

    [31] J. Gaschnig. A constraint satisfaction method for inference making. In Proc. 12th Annual Allerton Conf. on Circuit System Theory, pages 866-874, U. Illinois, 1974.

    [32] J. Gaschnig. A general backtracking algorithm that eliminates most redundant tests. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence, page 457, Cambridge, Mass., 1977.

    [33] J. Gaschnig. Experimental case studies of backtrack vs. Waltz-type vs. new algorithms for satisficing assignment problems. In Proceedings of the Second Canadian Conference on Artificial Intelligence, pages 268-277, Toronto, 1978.

    [34] S. Golomb and L. Baumert. Backtrack programming. J. ACM, 12:516-524, 1965.

    [35] R. M. Haralick and G. L. Elliott. Increasing tree search efficiency for constraint satisfaction problems. Artificial Intelligence, 14:263-313, 1980.

    [36] R. M. Haralick and L. G. Shapiro. The consistent labeling problem: Part I. IEEE Trans. Pattern Analysis and Machine Intelligence, 1(2):173-184, Apr. 1979.

    [37] R. M. Haralick and L. G. Shapiro. The consistent labeling problem: Part II. IEEE Trans. Pattern Analysis and Machine Intelligence, 2(3):193-203, May 1980.

    [38] R. M. Haralick, L. S. Davis, A. Rosenfeld, and D. L. Milgram. Reduction operations for constraint satisfaction. Inf. Sci., 14(3):199-219, 1978. URL http://dx.doi.org/10.1016/0020-0255(78)90043-9.

    [39] P. J. Hayes. Computation and deduction. In Proc. 2nd International Symposium on Mathematical Foundations of Computer Science, pages 105-118. Czechoslovakian Academy of Sciences, 1973.

    [40] C. Hewitt. PLANNER: A language for proving theorems in robots. In Proceedings of the First International Joint Conference on Artificial Intelligence, pages 295-301, Bedford, MA, 1969. Mitre Corporation.

    [41] D. A. Huffman. Impossible objects as nonsense sentences. In B. Meltzer and D. Michie, editors, Machine Intelligence 6, pages 295-323. Edinburgh Univ. Press, 1971.

    [42] J. Jaffar and J.-L. Lassez. Constraint logic programming. In Fourteenth Annual ACM Symposium on Principles of Programming Languages (POPL), pages 111-119, München, 1987.

    [43] J. Jaffar and M. J. Maher. Constraint logic programming: A survey. Journal of Logic Programming, 19/20:503-581, 1994.

    [44] J. Jaffar, S. Mi