Borm_Chapters in Game Theory-In Honor of Stef Tijs


CHAPTERS IN GAME THEORY


THEORY AND DECISION LIBRARY

General Editors: W. Leinfellner (Vienna) and G. Eberlein (Munich)

Series A: Philosophy and Methodology of the Social Sciences

Series B: Mathematical and Statistical Methods

Series C: Game Theory, Mathematical Programming and Operations Research

Series D: System Theory, Knowledge Engineering and Problem Solving

SERIES C: GAME THEORY, MATHEMATICAL PROGRAMMING AND OPERATIONS RESEARCH

VOLUME 31

Editor-in-Chief: H. Peters (Maastricht University); Honorary Editor: S.H. Tijs (Tilburg); Editorial Board: E.E.C. van Damme (Tilburg), H. Keiding (Copenhagen), J.-F. Mertens (Louvain-la-Neuve), H. Moulin (Rice University), S. Muto (Tokyo University), T. Parthasarathy (New Delhi), B. Peleg (Jerusalem), T.E.S. Raghavan (Chicago), J. Rosenmüller (Bielefeld), A. Roth (Pittsburgh), D. Schmeidler (Tel-Aviv), R. Selten (Bonn), W. Thomson (Rochester, NY).

Scope: Particular attention is paid in this series to game theory and operations research, their formal aspects and their applications to economic, political and social sciences as well as to socio-biology. It will encourage high standards in the application of game-theoretical methods to individual and social decision making.

The titles published in this series are listed at the end of this volume.


CHAPTERS IN GAME THEORY

In honor of Stef Tijs

Edited by

PETER BORM
University of Tilburg,

The Netherlands

and

HANS PETERS
University of Maastricht,

The Netherlands

KLUWER ACADEMIC PUBLISHERS
NEW YORK, BOSTON, DORDRECHT, LONDON, MOSCOW


eBook ISBN: 0-306-47526-X
Print ISBN: 1-4020-7063-2

©2004 Kluwer Academic Publishers
New York, Boston, Dordrecht, London, Moscow

Print ©2002 Kluwer Academic Publishers

All rights reserved

No part of this eBook may be reproduced or transmitted in any form or by any means, electronic, mechanical, recording, or otherwise, without written consent from the Publisher.

Created in the United States of America

Visit Kluwer Online at: http://kluweronline.com
and Kluwer's eBookstore at: http://ebooks.kluweronline.com


Preface

On the occasion of the 50th birthday of Stef Tijs in 1987 a volume of surveys in game theory in Stef's honor was composed.¹ All twelve authors who contributed to that book still belong to the twenty-nine authors involved in the present volume, published fifteen years later on the occasion of Stef's 65th birthday. Twenty-five of these twenty-nine authors wrote (or write, in one case) their Ph.D. theses under the supervision of Stef Tijs. The other four contributors are indebted to Stef Tijs to a different but hardly less decisive degree.

What makes a person deserve to be the honorable subject of a scientific liber amicorum, and that on at least two occasions in his life? If that person is called Stef Tijs, then the answer includes at least the following reasons. First of all, until now Stef has supervised about thirty Ph.D. students in game theory alone. More important than sheer numbers, most of these students stayed in academia; for instance, all those who contributed to the 1987 volume. It is beyond doubt that this fact has everything to do with the devotion, enthusiasm, and deep knowledge invested by Stef in guiding students. Moreover, the number of his internationally published papers has increased from about sixty in 1987 to about two hundred now. His papers cover every field in game theory, and extend to related areas such as social choice theory, mathematical economics, and operations research. Last but not least, Stef's numerous coauthors come from and live in all parts of this world: he has been a true missionary in game theory, and the contributors to this volume are proud to be among his apostles.

¹ H.J.M. Peters and O.J. Vrieze, eds., Surveys in Game Theory and Related Topics, CWI Tract 39, Amsterdam, 1987.

PETER BORM
HANS PETERS

Tilburg/Maastricht
February 2002


About Stef Tijs

The first work of Stef Tijs in game theory was his Ph.D. dissertation Semi-infinite and infinite matrix games and bimatrix games (1975). He took his Ph.D. at the University of Nijmegen, where he had held a position since 1960. His Ph.D. advisors were A. van Rooij and F. Delbaen. From 1975 on he gradually started building a game theory school in the Netherlands with a strong international focus. In 1991 he left Nijmegen to continue his research at Tilburg University. In 2000 he was awarded a doctorate honoris causa at the Miguel Hernandez University in Elche, Spain.

About this book

The authors of this book were asked to write on topics belonging to their expertise and having a connection with the work of Stef Tijs. Each contribution has been reviewed by two other authors. This has resulted in fourteen chapters on different subjects: some of these can be considered surveys, while other chapters present new results. Most contributions can be positioned somewhere in between these categories. We briefly describe the contents of each chapter. For references, the reader should consult the list of references in the chapter under consideration.

Chapter 1, Stochastic cooperative games: theory and applications by Peter Borm and Jeroen Suijs, considers cooperative decision making under risk. It provides a brief survey of three existing models, introduced by Charnes and Granot (1973), Suijs et al. (1999), and Timmer et al. (2000), respectively. It also compares their performance with respect to two applications: the allocation of the random maintenance cost of a communication network tree to its users, and the division of a stochastic estate among the creditors in a bankruptcy situation.

Chapter 2, Sequencing games: a survey by Imma Curiel, Herbert Hamers, and Flip Klijn, gives an overview of the start and the main developments in the research area that studies the interaction between sequencing situations and cooperative game theory. It focuses on results related to balancedness and convexity of sequencing games.

In Chapter 3, Game theory and the market by Eric van Damme and Dave Furth, it is argued that both cooperative and non-cooperative game models can substantially increase our understanding of the functioning of actual markets. In the first part of the chapter, by going back to the work of the founding fathers von Neumann, Morgenstern, and Nash, a brief historical sketch of the differences and complementarities between the two types of models is provided. In the second part, the main point is illustrated by means of examples of bargaining, oligopolistic interaction, and auctions.

In Chapter 4, On the number of extreme points of the core of a transferable utility game by Jean Derks and Jeroen Kuipers, it is derived from a more general result that the upper core and the core of a transferable utility game have at most n! different extreme points, with n the number of players. This maximum number is attained by strict convex games, but other games may have this property as well. It is shown that n! different extreme core points can only be obtained by strict exact games, but that not all such games have n! different extreme points.
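The marginal vectors behind this count can be made concrete. The toy script below uses our own three-player example v(S) = |S|², which is strictly convex; it is not taken from the chapter. It computes all n! marginal vectors and confirms that they are distinct core elements:

```python
from itertools import combinations, permutations

def marginal_vector(v, n, order):
    """Marginal-contribution vector for one order of entry of the players."""
    x = [0.0] * n
    S = frozenset()
    for i in order:
        x[i] = v[S | {i}] - v[S]
        S = S | {i}
    return tuple(x)

def in_core(v, n, x):
    """Efficiency plus coalitional rationality for every proper coalition."""
    if abs(sum(x) - v[frozenset(range(n))]) > 1e-9:
        return False
    return all(sum(x[i] for i in S) >= v[S] - 1e-9
               for S in v if len(S) < n)

# A strictly convex three-player game: v(S) = |S|^2, v(empty set) = 0.
n = 3
v = {frozenset(S): len(S) ** 2
     for k in range(n + 1) for S in combinations(range(n), k)}

marginals = {marginal_vector(v, n, order) for order in permutations(range(n))}
print(len(marginals))                            # n! = 6 distinct vectors
print(all(in_core(v, n, x) for x in marginals))  # True: all lie in the core
```

For this game the marginal contributions along any order are 1, 3, 5, so each of the six orders yields a different extreme point, matching the n! bound discussed in the chapter.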

In Chapter 5, Consistency and potentials in cooperative TU-games: Sobolev's reduced game revived by Theo Driessen, a consistency property for a wide class of game-theoretic solutions that possess a potential representation is studied. The consistency property is based on a modified reduced game related to Sobolev's. A detailed exposition of the developed theory is given for semivalues of cooperative TU-games, and for the Shapley and Banzhaf values in particular.

In Chapter 6, On the set of equilibria of a bimatrix game: a survey by Mathijs Jansen, Peter Jurg, and Dries Vermeulen, the methods used by different authors to write the set of equilibria of a bimatrix game as the union of a finite number of polytopes are surveyed.

Chapter 7, Concave and convex serial cost sharing by Maurice Koster, introduces the concave and the convex serial rule, two new cost sharing rules that are closely related to the serial cost sharing rule of Moulin and Shenker (1992). It is shown that the concave serial rule is the unique rule that minimizes the range of cost shares subject to the excess lower bounds. Analogous results are derived for the convex serial rule. In particular, these characterizations show that the serial cost sharing rule is consistent with diametrically opposed equity properties, depending on the nature of the cost function: the serial rule equals the concave (convex) serial rule in case of a concave (convex) cost function.
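For background, the underlying serial rule of Moulin and Shenker (1992) admits a compact description: with demands ordered q₁ ≤ … ≤ qₙ and intermediate levels sₖ = q₁ + … + qₖ + (n − k)qₖ, agent k pays Σ_{j≤k} (C(sⱼ) − C(sⱼ₋₁))/(n − j + 1). A minimal sketch of that original rule follows; the chapter's new concave and convex variants are not implemented here:

```python
def serial_cost_shares(demands, cost):
    """Moulin-Shenker serial cost sharing. demands: individual demands;
    cost: nondecreasing cost function with cost(0) == 0. Returns the
    cost shares in the original player order."""
    n = len(demands)
    order = sorted(range(n), key=lambda i: demands[i])  # ascending demands
    q = [demands[i] for i in order]
    # Intermediate demand levels s_k = q_1 + ... + q_k + (n - k) * q_k.
    s = [0.0]
    for k in range(1, n + 1):
        s.append(sum(q[:k]) + (n - k) * q[k - 1])
    shares_sorted, acc = [], 0.0
    for k in range(1, n + 1):
        acc += (cost(s[k]) - cost(s[k - 1])) / (n - k + 1)
        shares_sorted.append(acc)
    shares = [0.0] * n
    for pos, i in enumerate(order):
        shares[i] = shares_sorted[pos]
    return shares

# Convex cost C(y) = y^2 with demands 1, 2, 3:
print(serial_cost_shares([1, 2, 3], lambda y: y ** 2))  # [3.0, 11.0, 22.0]
```

The shares sum to C(q₁ + … + qₙ) by construction, and with a convex cost function larger demanders pay disproportionately more, which is the equity behavior the chapter's characterizations pin down.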

In Chapter 8, Centrality orderings in social networks by Herman Monsuur and Ton Storcken, a centrality ordering arranges the vertices in a social network according to their centrality position in that network. Centrality addresses notions like focal points of communication, potential of communicational control, and being close to other network vertices. In social network studies these notions play an important role. Here the focus is on the conceptual issue of what makes a position in a network more central than another position. Characterizations of the cover, the median, and the degree centrality orderings are discussed.

In Chapter 9, The Shapley transfer procedure for NTU-games by Gert-Jan Otten and Hans Peters, the Shapley transfer procedure (Shapley, 1969) is extended in order to associate with every solution correspondence for transferable utility games satisfying certain regularity conditions, a solution for nontransferable utility games. An existence and a characterization result are presented. These are applied to the Shapley value, the core, the nucleolus, and the τ-value.

Chapter 10, The nucleolus as equilibrium price by Jos Potters, Hans Reijnierse, and Anita van Gellekom, studies exchange economies with indivisible goods and money. The notions of a stable equilibrium and regular prices are introduced. It is shown that the nucleolus concept for TU-games can be used to single out specific regular prices. Algorithms to compute the nucleolus can therefore be used to determine regular price vectors.

Chapter 11, Network formation, costs, and potential games by Marco Slikker and Anne van den Nouweland, studies strategic-form games of network formation in which an exogenous allocation rule is used to determine the players' payoffs in various networks. It is shown that such games are potential games if the cost-extended Myerson value is used as the exogenous allocation rule. The question of which networks are formed according to the potential maximizer, a refinement of Nash equilibrium for potential games, is then studied.

Chapter 12, Contributions to the theory of stochastic games by Frank Thuijsman and Koos Vrieze, presents an introduction to the history and the state of the art of the theory of stochastic games. Dutch contributions to the field, initiated by Stef Tijs, are addressed in particular. Several examples are provided to clarify the issues.

Chapter 13, Linear (semi-)infinite programs and cooperative games byJudith Timmer and Natividad Llorca, gives an overview of cooperativegames arising from linear semi-infinite or infinite programs.

Chapter 14, Population uncertainty and equilibrium selection: a maximum likelihood approach by Mark Voorneveld and Henk Norde, introduces a general class of games with population uncertainty and, in line with the maximum likelihood principle, stresses those strategy profiles that are most likely to yield an equilibrium in the game selected by chance. Under mild topological restrictions, an existence result for maximum likelihood equilibria is derived. Also, it is shown how maximum likelihood equilibria can be used as an equilibrium selection device for finite strategic games.

About the authors

PETER BORM ([email protected]) is affiliated with the Department of Econometrics of the University of Tilburg. He wrote his Ph.D. thesis, On game theoretic models and solution concepts, under the supervision of Stef Tijs.

IMMA CURIEL ([email protected]) is affiliated with the Department of Mathematics and Statistics of the University of Maryland, Baltimore County. She wrote her Ph.D. thesis, Cooperative game theory and applications, under the supervision of Stef Tijs.

ERIC VAN DAMME ([email protected]) is affiliated with CentER, University of Tilburg. He wrote his master's thesis under the supervision of Stef Tijs and his Ph.D. thesis, Refinements of the Nash equilibrium concept, under the supervision of Jaap Wessels and Reinhard Selten.

JEAN DERKS ([email protected]) is affiliated with the Department of Mathematics of the University of Maastricht. His Ph.D. thesis, On polyhedral cones of cooperative games, was written under the supervision of Stef Tijs and Koos Vrieze.

THEO DRIESSEN ([email protected]) is affiliated with the Department of Mathematical Sciences of the University of Twente. He wrote his Ph.D. thesis, Contributions to the theory of cooperative games: the τ-value and k-convex games, under the supervision of Stef Tijs and Michael Maschler.

DAVE FURTH ([email protected]) is affiliated with the Faculty of Economics and Econometrics of the University of Amsterdam. He wrote his Ph.D. thesis on oligopoly theory with Arnold Heertje and has been a regular guest of the game theory seminars organized since 1983 by Stef Tijs.

ANITA VAN GELLEKOM ([email protected]) works for a nonprofit institution. Her Ph.D. thesis, Cost and profit sharing in a cooperative environment, was written under the supervision of Stef Tijs.


HERBERT HAMERS ([email protected]) is affiliated with the Department of Econometrics of the University of Tilburg. Stef Tijs supervised his Ph.D. thesis, Sequencing and delivery situations: a game theoretic approach.

MATHIJS JANSEN ([email protected]) is affiliated with the Department of Quantitative Economics of the University of Maastricht. His Ph.D. supervisors were Frits Ruymgaart and T.E.S. Raghavan, and his thesis, Equilibria and optimal threat strategies in two-person games, was written in close cooperation with Stef Tijs.

PETER JURG ([email protected]) works for a private software company. He wrote his thesis, Some topics in the theory of bimatrix games, under the supervision of Stef Tijs.

FLIP KLIJN ([email protected]) is affiliated with the Department of Statistics and Operations Research of the University of Vigo, Spain. His Ph.D. thesis, A game theoretic approach to assignment problems, was written under the supervision of Stef Tijs.

MAURICE KOSTER ([email protected]) is affiliated with the Department of Economics and Econometrics of the University of Amsterdam. Stef Tijs supervised his thesis, Cost sharing in production situations and network exploitation.

JEROEN KUIPERS ([email protected]) is affiliated with the Department of Mathematics of the University of Maastricht. His Ph.D. thesis, Combinatorial methods in cooperative game theory, was supervised by Stef Tijs and Koos Vrieze.

NATIVIDAD LLORCA ([email protected]) is a Ph.D. student, under the supervision of Stef Tijs, at the Department of Statistics and Applied Mathematics of the University of Elche, Spain.

HERMAN MONSUUR ([email protected]) is affiliated with the Royal Netherlands Naval College, section International Security Studies. His Ph.D. thesis, Choice, ranking and circularity in asymmetric relations, was supervised by Stef Tijs.

HENK NORDE ([email protected]) is a member of the Department of Econometrics of the University of Tilburg, where his main research is in the area of game theory. He wrote a Ph.D. thesis in the field of differential equations, supervised by Leonid Frank at the University of Nijmegen.


ANNE VAN DEN NOUWELAND ([email protected]) is affiliated with the Department of Economics of the University of Oregon, Eugene. She wrote her Ph.D. thesis, Games and graphs in economic situations, under the supervision of Stef Tijs.

GERT-JAN OTTEN ([email protected]) works for KPN Telecom. His Ph.D. thesis, On decision making in cooperative situations, was supervised by Stef Tijs.

HANS PETERS ([email protected]) is affiliated with the Department of Quantitative Economics of the University of Maastricht. He wrote his Ph.D. thesis, Bargaining game theory, under the supervision of Stef Tijs.

JOS POTTERS ([email protected]) is affiliated with the Department of Mathematics of the University of Nijmegen. He wrote a Ph.D. thesis on a subject in geometry at the University of Leiden and has cooperated with Stef Tijs in the area of game theory since the beginning of the eighties.

HANS REIJNIERSE ([email protected]) is affiliated with the Department of Econometrics of the University of Tilburg. He wrote his Ph.D. thesis, Games, graphs, and algorithms, supervised by Stef Tijs.

MARCO SLIKKER ([email protected]) is affiliated with the Department of Technology Management of the Eindhoven University of Technology. Stef Tijs supervised his Ph.D. thesis, Decision making and cooperation restrictions.

TON STORCKEN ([email protected]) is affiliated with the Department of Quantitative Economics of the University of Maastricht. His Ph.D. thesis, Possibility theorems for social welfare functions, was supervised by Pieter Ruys, Stef Tijs, and Harrie de Swart.

JEROEN SUIJS ([email protected]) is affiliated with the CentER Accounting Research Group. He wrote his Ph.D. thesis, Cooperative decision making in a stochastic environment, under the supervision of Stef Tijs.

FRANK THUIJSMAN ([email protected]) is affiliated with the Department of Mathematics of the University of Maastricht. He wrote his Ph.D. thesis, Optimality and equilibria in stochastic games, under the supervision of Stef Tijs and Koos Vrieze.

JUDITH TIMMER ([email protected]) is affiliated with the Department of Mathematical Sciences of the University of Twente. Stef Tijs supervised her Ph.D. thesis, Cooperative behaviour, uncertainty and operations research.

DRIES VERMEULEN ([email protected]) is affiliated with the Department of Quantitative Economics of the University of Maastricht. His Ph.D. thesis, Stability in non-cooperative game theory, was written under the supervision of Stef Tijs.

MARK VOORNEVELD ([email protected]) works at the University of Stockholm and wrote a Ph.D. thesis, Potential games and interactive decisions with multiple criteria, supervised by Stef Tijs.

KOOS VRIEZE ([email protected]) is affiliated with the Department of Mathematics of the University of Maastricht. His Ph.D. thesis, Stochastic games with finite state and action spaces, was written under the supervision of Henk Tijms and Stef Tijs.


Contents

1 Stochastic Cooperative Games: Theory and Applications
BY PETER BORM AND JEROEN SUIJS
1.1 Introduction
1.2 Cooperative Decision-Making under Risk
1.2.1 Chance-Constrained Games
1.2.2 Stochastic Cooperative Games with Transfer Payments
1.2.3 Stochastic Cooperative Games without Transfer Payments
1.3 Cost Allocation in a Network Tree
1.4 Bankruptcy Problems with Random Estate
1.5 Concluding Remarks

2 Sequencing Games: a Survey
BY IMMA CURIEL, HERBERT HAMERS, AND FLIP KLIJN
2.1 Introduction
2.2 Games Related to Sequencing Games
2.3 Sequencing Situations and Sequencing Games
2.4 On Sequencing Games with Ready Times or Due Dates
2.5 On Sequencing Games with Multiple Machines
2.6 On Sequencing Games with More Admissible Rearrangements

3 Game Theory and the Market
BY ERIC VAN DAMME AND DAVE FURTH
3.1 Introduction
3.2 Von Neumann, Morgenstern and Nash
3.3 Bargaining
3.4 Markets
3.5 Auctions
3.6 Conclusion

4 On the Number of Extreme Points of the Core of a Transferable Utility Game
BY JEAN DERKS AND JEROEN KUIPERS
4.1 Introduction
4.2 Main Results
4.3 The Core of a Transferable Utility Game
4.4 Strict Exact Games
4.5 Concluding Remarks

5 Consistency and Potentials in Cooperative TU-Games: Sobolev's Reduced Game Revived
BY THEO DRIESSEN
5.1 Introduction
5.2 Consistency Property for Solutions that Admit a Potential
5.3 Consistency Property for Pseudovalues: a Detailed Exposition
5.4 Concluding Remarks
5.5 Two Technical Proofs

6 On the Set of Equilibria of a Bimatrix Game: a Survey
BY MATHIJS JANSEN, PETER JURG, AND DRIES VERMEULEN
6.1 Introduction
6.2 Bimatrix Games and Equilibria
6.3 Some Observations by Nash
6.4 The Approach of Vorobev and Kuhn
6.5 The Approach of Mangasarian and Winkels
6.6 The Approach of Winkels
6.7 The Approach of Jansen
6.8 The Approach of Quintas
6.9 The Approach of Jurg and Jansen
6.10 The Approach of Vermeulen and Jansen

7 Concave and Convex Serial Cost Sharing
BY MAURICE KOSTER
7.1 Introduction
7.2 The Cost Sharing Model
7.3 The Convex and the Concave Serial Cost Sharing Rule

8 Centrality Orderings in Social Networks
BY HERMAN MONSUUR AND TON STORCKEN
8.1 Introduction
8.2 Examples of Centrality Orderings
8.3 Cover Centrality Ordering
8.4 Degree Centrality Ordering
8.5 Median Centrality Ordering
8.6 Independence of the Characterizing Conditions

9 The Shapley Transfer Procedure for NTU-Games
BY GERT-JAN OTTEN AND HANS PETERS
9.1 Introduction
9.2 Main Concepts
9.3 Nonemptiness of Transfer Solutions
9.4 A Characterization
9.5 Applications
9.5.1 The Shapley Value
9.5.2 The Core
9.5.3 The Nucleolus
9.5.4 The τ-Value
9.6 Concluding Remarks

10 The Nucleolus as Equilibrium Price
BY JOS POTTERS, HANS REIJNIERSE, AND ANITA VAN GELLEKOM
10.1 Introduction
10.2 Preliminaries
10.2.1 Economies with Indivisible Goods and Money
10.2.2 Preliminaries about TU-Games
10.3 Stable Equilibria
10.4 The Existence of Price Equilibria: Necessary and Sufficient Conditions
10.5 The Nucleolus as Regular Price Vector

11 Network Formation, Costs, and Potential Games
BY MARCO SLIKKER AND ANNE VAN DEN NOUWELAND
11.1 Introduction
11.2 Literature Review
11.3 Network Formation Model in Strategic Form
11.4 Potential Games
11.5 Potential Maximizer

12 Contributions to the Theory of Stochastic Games
BY FRANK THUIJSMAN AND KOOS VRIEZE
12.1 The Stochastic Game Model
12.2 Zero-Sum Stochastic Games
12.3 General-Sum Stochastic Games

13 Linear (Semi-)Infinite Programs and Cooperative Games
BY JUDITH TIMMER AND NATIVIDAD LLORCA
13.1 Introduction
13.2 Semi-infinite Programs and Games
13.2.1 Flow Games
13.2.2 Linear Production Games
13.2.3 Games Involving Linear Transformation of Products
13.3 Infinite Programs and Games
13.3.1 Assignment Games
13.3.2 Transportation Games
13.4 Concluding Remarks

14 Population Uncertainty and Equilibrium Selection: a Maximum Likelihood Approach
BY MARK VOORNEVELD AND HENK NORDE
14.1 Introduction
14.2 Preliminaries
14.2.1 Topology
14.2.2 Measure Theory
14.2.3 Game Theory
14.3 Games with Population Uncertainty
14.4 Maximum Likelihood Equilibria
14.5 Measurability
14.6 Random Action Sets
14.7 Random Games
14.8 Robustness Against Randomization
14.9 Weakly Strict Equilibria
14.10 Approximate Maximum Likelihood Equilibria


Chapter 1

Stochastic Cooperative Games: Theory and Applications

BY PETER BORM AND JEROEN SUIJS

1.1 Introduction

Cooperative behavior generally emerges for the individual benefit of the people and organizations involved. Whether it is an international agreement like the GATT or the local neighborhood association, the main driving force behind cooperation is the participants' belief that it will improve their welfare. Although these believed welfare improvements may provide the necessary incentives to explore the possibilities of cooperation, they are not sufficient to establish and maintain cooperation. They are only the beginning of a bargaining process in which the coalition partners have to agree on which actions to take and how to allocate any joint benefits that may result from these actions. Any prohibitive objections in this bargaining process may eventually break up cooperation.

Since its introduction in von Neumann and Morgenstern (1944), cooperative game theory has served as a mathematical tool to describe and analyze cooperative behavior as mentioned above. The literature, however, mainly focuses on a deterministic setting in which the synergy between potential coalitional partners is known with certainty beforehand. An actual example in this regard is provided by the automobile industry, where some major car manufacturers collectively design new models so as to save on design costs. Since they know the cost of designing a new car, they also know how much they will save on design expenditures by cooperating.

P. Borm and H. Peters (eds.), Chapters in Game Theory, 1–26. © 2002 Kluwer Academic Publishers. Printed in the Netherlands.

In everyday life, however, not everything is certain, and many of the decisions that people make are made without precise knowledge of the consequences. Moreover, the risks that people face as a result of their social and economic activities may affect their cooperative behavior in various instances. A typical example in this respect is a joint venture. Investing in a new project is risky, and therefore a company may prefer to share these risks by cooperating in a joint venture with other companies. A joint venture is thus arranged before the project starts, when it is still unknown to what extent it will be a success. A similar argument applies to investment pools and funds, where investors pool their capital and make joint investments to benefit from risk sharing and risk diversification. As opposed to joint ventures and investment pools, risk sharing need not be the primary incentive for cooperation. In many other cases, cooperation arises for other reasons and risk is just involved with the actions and decisions of the coalition partners. Small retailers, for instance, organize themselves in a buyers' cooperative to stipulate better prices when purchasing their inventory. Any economic risks, however, are not reduced by such cooperation.

The first game theoretical literature on cooperative decision-making under risk dates from the early 1970s, with the introduction of chance-constrained games by Charnes and Granot (1973). Research on this subject was almost non-existent in the following decades, until recently it was picked up again by Suijs et al. (1999) and Timmer et al. (2000). This chapter provides a brief survey of the three existing models and compares their performance in two situations: the allocation of random maintenance costs of a communication network tree to its users, and the division of a stochastic estate among creditors in a bankruptcy situation.

Chance-constrained games were introduced in Charnes and Granot (1973) to encompass situations where the benefits obtained by the agents are random variables. Their attention is focused on dividing the benefits of the grand coalition. Although the benefits are random, the authors allocate a deterministic amount in two stages. In the first stage, before the realization of the benefits is known, payoffs are promised to the individuals. In the second stage, when the realization is known, the payoffs promised in the first stage are modified if needed. In several papers, Charnes and Granot introduce allocation rules for the first stage, like the prior core, the prior Shapley value, and the prior nucleolus. To modify these so-called prior allocations in the second stage, they define the two-stage nucleolus. We confine our discussion to the prior core and refer to Charnes and Granot (1976, 1977) and Granot (1977) for the other solution concepts.

Suijs et al. (1999) introduced stochastic cooperative games, which deal with the same kind of problems as chance-constrained games do, albeit in a completely different way. A drawback of the model introduced by Charnes and Granot (1973) is that it does not explicitly take into account the individuals' behavior towards risk. The effects of risk-averse behavior, for example, are difficult to trace in this model. The model introduced in Suijs et al. (1999) explicitly includes the preferences of the individuals. Any kind of behavior towards risk, from risk-loving to risk-averse, can be expressed by these preferences. Another major difference is the way in which the benefits are allocated. As opposed to a two-stage allocation, which assigns a deterministic payoff to each agent, an allocation in a stochastic cooperative game assigns a random payoff to each agent. Furthermore, for a two-stage allocation the agents must come to an agreement twice. In the first stage, before the realization of the payoff is known, they have to agree on a prior allocation. In the second stage, once the realization is known, they have to agree on how the prior payoff is modified. In stochastic cooperative games the agents decide on the allocation before the realization is known. As a result, random payoffs are allocated so that no further decisions have to be taken once the realization of the payoff is known.

The model introduced by Timmer et al. (2000) is based on the model of stochastic cooperative games introduced by Suijs et al. (1999). The difference lies in the way random payoffs are allocated. Suijs et al. (1999) distinguishes two parts in an allocation. The first part concerns the allocation of the risk. In this regard, non-negative multiples of random payoffs are allocated to the agents. The second part then concerns deterministic transfer payments between the agents. The inclusion of deterministic transfer payments enables the agents to conclude mutual insurance deals. In exchange for a deterministic amount of money, i.e., an insurance premium, agents may be willing to bear a larger part of the risk. In order to exclude these insurance possibilities from the analysis, Timmer et al. (2000) does not allow for any deterministic transfer payments.
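On our reading of this allocation format, each agent i receives d_i + r_i · X, where X is the coalition's random payoff, the deterministic transfers d_i sum to zero, and the fractions r_i are nonnegative and sum to one. The sketch below uses our own symbols and illustrative numbers; the precise definitions are in Suijs et al. (1999):

```python
import random

def allocate(d, r, realization):
    """Pay agent i the amount d[i] + r[i] * realization. The transfers d
    sum to zero and the fractions r are nonnegative and sum to one, so the
    realized payoff is fully distributed, whatever value it takes."""
    assert abs(sum(d)) < 1e-9 and abs(sum(r) - 1.0) < 1e-9 and min(r) >= 0
    return [d_i + r_i * realization for d_i, r_i in zip(d, r)]

# A risk-averse agent 0 pays a deterministic premium of 2 to agent 1,
# who in exchange bears 70 percent of the random payoff.
random.seed(0)
X = random.gauss(100, 20)                      # one sampled realization
payoffs = allocate(d=[-2.0, 2.0], r=[0.3, 0.7], realization=X)
print(abs(sum(payoffs) - X) < 1e-9)            # True: nothing is lost
```

In the Timmer et al. (2000) variant described above, the transfers d are identically zero, so an allocation is determined by the fractions r alone and such insurance deals are ruled out.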


Besides a theoretical discussion of the abovementioned models, we will compare their performance in two possible applications. The focus of our analysis will be on the existence of core allocations.

The first application concerns the allocation of the random maintenance costs of a communication network tree that connects a service provider to its clients. Typical examples one can think of in this context are cities that are connected to a local power plant, a cable TV network, or computer workstations that are connected to a central server. Megiddo (1978) considered this cost allocation problem in a deterministic setting. It was shown that the corresponding TU-game has a nonempty core. Maintenance costs, however, are generally random in nature, for one does not know up front when connections are going to fail and what the resulting costs of repair will be. In this regard, random variables are more appropriate to describe the maintenance costs. In addition, by using random variables we are able to take the reliability or quality of a connection into account. Low-quality connections may be cheap in construction, but they are also more likely to require repair and/or maintenance to keep them in operation. So, by assuming that these costs are deterministic one passes over the important aspect of a network's reliability.
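For the deterministic benchmark, a standard core allocation of tree maintenance costs splits each edge's cost equally among the users whose connection runs through that edge. The sketch below uses made-up tree data and illustrates this classical deterministic construction, not the stochastic model analyzed later in the chapter; it verifies the core conditions by brute force:

```python
from itertools import combinations

# A small rooted tree: the provider is node 0; parent[i] is the node that
# user i's connection runs through, and edge_cost[i] is the maintenance
# cost of the edge from i up to parent[i].  (Illustrative data.)
parent = {1: 0, 2: 1, 3: 1}
edge_cost = {1: 6.0, 2: 2.0, 3: 4.0}
users = sorted(parent)

def path(i):
    """Edges (identified by their lower node) on i's path to the provider."""
    edges = []
    while i != 0:
        edges.append(i)
        i = parent[i]
    return edges

def coalition_cost(S):
    """Cost of the minimal subtree connecting coalition S to the provider."""
    needed = {e for i in S for e in path(i)}
    return sum(edge_cost[e] for e in needed)

# Split each edge's cost equally among all users whose path uses it.
followers = {e: [i for i in users if e in path(i)] for e in edge_cost}
share = {i: sum(edge_cost[e] / len(followers[e]) for e in path(i))
         for i in users}

# Core check for the cost game: no coalition pays more than its
# stand-alone cost, and the total cost is exactly covered.
assert abs(sum(share.values()) - coalition_cost(users)) < 1e-9
ok = all(sum(share[i] for i in S) <= coalition_cost(S) + 1e-9
         for k in range(1, len(users)) for S in combinations(users, k))
print(ok)  # True
```

A coalition's members never pay more than their own subtree would cost, because each edge share only counts users who actually rely on that edge; replacing the fixed edge costs by random variables is exactly the step the chapter's stochastic models address.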

The second application concerns the division of the estate of a bankrupt enterprise among its creditors. In this paper we assume that the exact value of the estate is uncertain. Generally, the value of a firm's assets (e.g. inventory, production facilities, knowledge) is ambiguous in case of bankruptcy as market values are no longer appropriate for valuation purposes. We assume the creditors all have a deterministic claim on this random estate. Since the value of the estate is insufficient to meet all claims, an allocation problem arises. O'Neill (1982) offers a game theoretical analysis of bankruptcy problems in a deterministic setting. It is shown that the core of bankruptcy games is nonempty. In fact, any allocation that gives each creditor at least zero and at most his claim is a core allocation.

This chapter is organized as follows. Section 1.2 provides a theoretical discussion of the three existing types of cooperative games that can deal with random payoffs. In particular, we focus on the core of these games and the corresponding requirements for it to be nonempty. Section 1.3 considers the cost allocation problem in a network tree, while Section 1.4 considers the bankruptcy problem. Finally, Section 1.5 concludes.


STOCHASTIC COOPERATIVE GAMES 5

1.2 Cooperative Decision-Making under Risk

For cooperative games with transferable utility, the payoff of a coalition is assumed to be known with certainty. In many cases, though, the payoffs to coalitions can be uncertain. This would not raise a problem if the agents could await the realizations of the payoffs before deciding which coalitions to form and which allocations to settle on. But if the formation of coalitions and allocations has to take place before the payoffs are realized, the framework of TU-games is no longer appropriate.

This section presents three different models especially designed to deal with situations in which the benefits from cooperation are best described by random variables. The following models will pass in review consecutively: chance-constrained games (cf. Charnes and Granot, 1973), stochastic cooperative games with transfer payments (cf. Suijs et al., 1999), and stochastic cooperative games without transfer payments (cf. Timmer et al., 2000).

1.2.1 Chance-Constrained Games

Chance-constrained games as introduced by Charnes and Granot (1973) extend the theory of cooperative games in characteristic function form to situations where the benefits from cooperation are random variables. So, when several agents decide to cooperate, they do not exactly know the benefits that this cooperation generates. What they do know is the probability distribution function of these benefits. Let V(S) denote the random variable describing the benefits of coalition S. Furthermore, denote its probability distribution function by Thus,

for all Then a chance-constrained game is defined by the pair (N, V), where N is a finite set of agents and V is the characteristic function assigning to each coalition the nonnegative random benefits V(S). Note that chance-constrained games are based on the formulation of TU-games with the deterministic benefits replaced by stochastic benefits V(S).

For dividing the benefits of the grand coalition, the authors propose two-stage allocations. In the first stage, when the realization of the benefits is still unknown, each agent is promised a certain payoff. These so-called prior payoffs are such that there is a fair chance that they are realized. Once the benefits are known, the total payoff allocated in the prior payoff can differ from what is actually available. In that case, we come to the second stage and modify the prior payoff in accordance with the realized benefits.

Let us start with discussing the prior allocations. A prior payoff is denoted by a vector with the interpretation that agent receives the amount To comply with the condition that there is a reasonable probability that the promised payoffs can be kept, the prior payoff must be such that

with This condition assures that the total amount that is allocated is not too low or too high. Note that expression (1.1) can also be written as

where denotes the quantile. The quantile of a random payoff X is the largest value such that the realization of X will be less than it with at most the given probability. Formally, the quantile of X is defined as the generalized inverse of the probability distribution function of X.
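As a minimal computational sketch of this definition (assuming the quantile is the generalized inverse of the distribution function), the closed-form quantiles of the two distributions used later in the applications can be computed and checked by simulation:

```python
import math
import random

# alpha-quantile of X: the largest t with P(X < t) <= alpha, i.e. the
# generalized inverse of the distribution function F.
def u_uniform(alpha):
    """X ~ Uniform(0, 1): F(t) = t, so the quantile is alpha itself."""
    return alpha

def u_exponential(alpha, rate=1.0):
    """X ~ Exponential(rate): F(t) = 1 - exp(-rate * t)."""
    return -math.log(1.0 - alpha) / rate

# Monte Carlo sanity check of the exponential quantile.
random.seed(42)
sample = sorted(random.expovariate(1.0) for _ in range(100_000))
alpha = 0.9
empirical = sample[int(alpha * len(sample))]
assert abs(empirical - u_exponential(alpha)) < 0.05

print(u_uniform(0.9), round(u_exponential(0.9), 4))
```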

To come to a prior core for chance-constrained games, one needs to specify when a coalition S is satisfied with the amount it receives, so that it does not threaten to leave the grand coalition N. Charnes and Granot (1973) assumes that a coalition S is satisfied with what it gets if the probability that it can obtain more on its own is small enough. This means that for each coalition there exists a number such that coalition S is willing to participate in the coalition N given the proposed allocation whenever

The number is a measure of assurance for coalition S. Note that the measures of assurance may vary over the coalitions. Furthermore, they reflect the coalitions' attitude towards risk, the willingness to bargain with other coalitions, and so on. The prior core of a chance-constrained game (N, V) is then defined by

Example 1.1 Consider the following three-person chance-constrained game (N, V) defined by if

and Furthermore, let for all and Next, let be a prior allocation. Since

condition (1.1) implies that For the one-person coalitions we have that Furthermore, since for all two-person coalitions S it holds that

it follows that Hence, the prior core of this game is given by

A chance-constrained game (N, V) with a nonempty core is called balanced. Furthermore, if the core of every subgame is nonempty, then (N, V) is called totally balanced. The subgame is given by for all

A necessary and sufficient condition for nonemptiness of the prior core is given by the following theorem, which can be found in Charnes and Granot (1973).

Theorem 1.2 Let (N, V) be a chance-constrained game. Then if and only if with

1.2.2 Stochastic Cooperative Games with Transfer Payments

A stochastic cooperative game with transfer payments is described by a tuple where is the set of agents, a map assigning to each nonempty coalition S a collection of stochastic payoffs, and the preference relation of agent over the set of random payoffs with finite expectation. In particular, it is assumed that the random payoffs are expressed in some infinitely divisible commodity like money. Benefits consisting of several different or indivisible commodities are excluded. The interpretation of is that each random payoff represents the benefit that results from one of the several actions that coalition S has at its disposal when cooperating.

Let and let be a stochastic payoff for coalition S. An allocation of X is represented by a pair with

and for all Given a pair agent then receives the random payoff So, an allocation consists of two parts. The first part represents deterministic transfer payments between the agents in S. Note that the allows the agents to discard some of the money. The second part allocates a fraction of the random payoff to each agent in S. The class of stochastic cooperative games with agent set N is denoted by SG(N) and its elements are denoted by Furthermore, denotes the set of allocations that coalition S can obtain in that is

where and

A core allocation is an allocation such that no coalition has an incentive to part company with the grand coalition because they can do better on their own. The core of a stochastic cooperative game is thus defined by

A stochastic cooperative game with a nonempty core is called balanced. Furthermore, if the core of every subgame is nonempty, then is called totally balanced. The subgame is given by

where for all

The core of a stochastic cooperative game can be empty. Necessary and sufficient conditions for nonemptiness of the core only exist for a specific subclass of stochastic cooperative games that we will discuss below; they are still unknown for the general case.


Let be a stochastic cooperative game with preferences such that for each there exists a function satisfying

(M1) for all if and only if

(M2) for all and all

The interpretation is that equals the amount of money for which agent is indifferent between receiving the amount with certainty and receiving the stochastic payoff The amount is called the certainty equivalent of X. Condition (M1) states that agent weakly prefers one stochastic payoff to another one if and only if the certainty equivalent of the former is greater than or equal to the certainty equivalent of the latter. Condition (M2) states that the certainty equivalent is linearly separable in the deterministic amount of money.
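A quick numerical check of the translation property (M2), using the empirical quantile as certainty equivalent, as in the quantile preferences of Example 1.3 below; the sample distribution here is an arbitrary assumption:

```python
import random

random.seed(1)

def quantile_ce(sample, alpha):
    """Certainty equivalent taken as the empirical alpha-quantile."""
    s = sorted(sample)
    return s[int(alpha * (len(s) - 1))]

alpha = 0.3
X = [random.gauss(5.0, 2.0) for _ in range(10_000)]  # assumed payoff sample
d = 4.0                                              # sure amount of money

m_X = quantile_ce(X, alpha)
m_X_plus_d = quantile_ce([x + d for x in X], alpha)

# (M2): adding a sure amount d shifts the certainty equivalent by d,
# because adding a constant preserves the ordering of the sample.
assert abs(m_X_plus_d - (m_X + d)) < 1e-9
print("m(X) =", round(m_X, 3), " m(X + d) =", round(m_X_plus_d, 3))
```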

Example 1.3 Let the preference with be such that for it holds that With the certainty equivalent of given by the conditions (M1) and (M2) are satisfied. That (M1) is fulfilled is straightforward. For (M2), note that

for all and all Hence,

Let be a stochastic cooperative game satisfying conditions (M1) and (M2). Take An allocation is Pareto optimal for coalition S if there exists no allocation

such that for all Pareto optimal allocations are characterized by the following proposition, which is due to Suijs and Borm (1999).

Proposition 1.4 Let satisfy conditions (M1) and (M2). Then is Pareto optimal if and only if


For interpreting condition (1.5), consider a particular allocation for coalition S and let each member pay the certainty equivalent of the random payoff he receives. Acting in this way, the initial wealth of each member does not change and, instead of the random payoff, the coalition now has to divide the certainty equivalents that have been paid by its members. Since the preferences are strictly increasing in the deterministic amount of money one receives, the more money a coalition can divide, the better it is for all its members. So, the best way to allocate the random benefits is to maximize the sum of the certainty equivalents. Furthermore, we can describe the random benefits of each coalition by the maximum sum of the certainty equivalents they can obtain, provided that this maximum exists, of course. This follows from the fact that for each it holds that

where

Expression (1.6) means that it does not matter for coalition S whether they allocate a random payoff or the deterministic amount

To see that this equality does indeed hold, note that the inclusion follows immediately from the definition of For the reverse inclusion let Next, let be such that and define for each Since and for all it holds that

So, if for a stochastic cooperative game the value is well defined for each coalition we can also describe the game by a TU-game with as in (1.7) for all Let denote the class of stochastic cooperative games for which the conditions (M1) and (M2) are satisfied and the game is well defined. The following theorem is taken from Suijs and Borm (1999).

Theorem 1.5 Let and be such that for all Then if and only if


An immediate consequence of this result is that for each it holds true that if and only if Furthermore, to check the nonemptiness of we can rely on the well-known theorem by Bondareva (1963) and Shapley (1967).

1.2.3 Stochastic Cooperative Games without Transfer Payments

Stochastic cooperative games without transfer payments are introduced in Timmer et al. (2000). We denote the class of these games by SG*(N). Its description is similar to the one that includes transfer payments, except for the following assumptions.

First, it is assumed that cooperation by coalition S generates a single, nonnegative random payoff, i.e. for all So, in this respect, the model follows the structure of chance-constrained games.

1 Timmer et al. (2000) do not require nonnegativity of However, since this paper focuses on the core only, the nonnegativity conditions on impose no additional restrictions as nonnegativity also follows from individual rationality and

Second, deterministic transfer payments between agents are not allowed. As a result, allocations of the random payoff are described by a vector such that The set of feasible allocations for coalition is thus described by

Note that in the absence of deterministic transfer payments insurance is not possible.

Finally, the transitive and complete preferences satisfy the following conditions:

(P1) for any and it holds that if and only if

(P2) for all X, with for some there exists such that

Condition (P1) is implied by first order stochastic dominance, which is generally accepted to be satisfied for a rationally behaving individual. Condition (P2) is a kind of continuity condition.

The core of a stochastic cooperative game without transfer payments is defined in the usual way, that is, it contains all allocations that induce stable cooperation of the grand coalition. Formally, the core of a stochastic cooperative game without transfer payments is given by

We can provide necessary and sufficient conditions for nonemptiness of the core for a specific subclass of stochastic cooperative games without transfer payments. To describe this subclass, let us take a closer look at the individuals' preferences.

Let be a stochastic cooperative game without transfer payments. Define Since the preference relation satisfies conditions (P1) and (P2), there exists a function such that and, if then

if and only if

for any S, The proof is straightforward. Take Let be a strictly increasing continuous function with For each define such that

and for all If such does not exist, define Note that (P1) implies that and Furthermore, note that is strictly increasing in Transitivity implies for any S, that if so that (1.9) follows from the monotonicity condition (P1).

2 The latter condition guarantees that is unique and that

Example 1.6 Consider the quantile preferences as presented in Example 1.3, that is, if and only if These preferences satisfy conditions (P1) and (P2). In addition, can be defined as follows. Let denote the nonzero random payoffs. For all and all define if and otherwise. Since it holds that if then if and only if

The subclass of stochastic cooperative games without transfer payments is defined as follows. For each the preferences are such that for each the function is linear, that is for all This implies the following property for


(P3) if then for any

This property states that in a way only the relative dispersion of the outcomes of a random payoff is taken into account. Note that this property is different from (M2), which states that preferences over random payoffs are independent of one's initial wealth. We will illustrate this difference in the following example.

Example 1.7 For simplicity, let us restrict our attention to nonnegative random payoffs defined on two states of the world that occur with equal probability. So, we can represent a random payoff X by a vector with

First, consider the preference relation based on the utility function that is if and only if The corresponding certainty equivalent is defined by It is a straightforward exercise to show that (M2) is satisfied, i.e. These preferences, however, violate (P3). To see this, consider the random payoff (0, 1) that pays 0 with probability 0.5 and 1 with probability 0.5. The certainty equivalent of this random payoff equals So, this individual is indifferent between receiving the random payoff and receiving 0.38 with certainty. Now, consider a multiple of this lottery, e.g. (0, 3); then the individual should be indifferent between receiving the lottery and receiving 3 · 0.38 = 1.14. However, the certainty equivalent for this lottery only equals

Next, consider the preferences based on the utility function where and are the respective outcomes of X. Then if and only if Note that since it holds that if that is, (P3) is satisfied. The certainty equivalent can be defined by because This certainty equivalent violates condition (M2). To see this, consider the lottery (1, 4). The corresponding certainty equivalent equals 2. So, if (M2) were satisfied, this individual should be indifferent between the lottery (3, 6) (i.e. the lottery (1, 4) increased with 2) and its certainty equivalent 4. However, the certainty equivalent of the lottery (3, 6) equals 4.24.

Finally, note that preferences based on quantiles (see Example 1.3 and Example 1.6) satisfy both (M2) and (P3).

For the class MG*(N) we can derive necessary and sufficient conditions for the core to be nonempty. For this purpose, we introduce some notation. Take and let be such that For the moment, assume that such

exists. So, is that multiple of the random payoff X for which agent is indifferent between and Y. Since

we obtain for that Hence, Furthermore, since is linear it follows that

for all If does not exist, define

Recall that the core of a stochastic cooperative game is given by expression (1.8). Consider an allocation and suppose that coalition does not agree with this allocation because they can do better on their own. This means that there exists an allocation such that for all If then agent prefers to any multiple of Hence, coalition S cannot strictly improve the payoff of agent Consequently, coalition S has no incentive to part company with the grand coalition N. Furthermore, since implies that so that it follows that each coalition will stay in the grand coalition N. If we know that Then (P1) and (P3) imply that

Since is a feasible allocation it holds that Hence, coalition S cannot construct a better allocation if and only if 1. Consequently, we have that

Nonemptiness of the core is characterized by the following theorem. Its proof is stated in the Appendix.

Theorem 1.8 Let Then the core if and only if the following statement is true: for all such that for each it holds true that


Note that for the deterministic case, that is, TU-games, this condition is similar to Bondareva (1963) and Shapley (1967). This follows immediately from the fact that for TU-games and substituting for each

In the next two sections, we will apply the three different theoretical models to two specific situations, namely cost allocation in network trees and bankruptcy situations.

1.3 Cost Allocation in a Network Tree

With the geographical dispersion of customers and service providers, the need arises for a communication network that connects the provider to its clients. Typical examples one can think of in this context are cities that are connected to the local power plant or computer workstations that are connected to a central server. Megiddo (1978) considered this cost allocation problem in a deterministic setting, which we discuss first.

Let N denote the finite set of agents that are connected through a network with a single service provider denoted by It is assumed that the network is a tree. This assumption is derived from Claus and Kleitman (1973), which also considers the construction problem of such networks. Using the total construction costs as the objective, it is shown that a minimum cost network is a tree. The network is represented by

with the interpretation that the link between agents k and l exists if and only if The cost of each link is denoted by Furthermore, let be the path that connects agent to the source Note that since T is a tree this path is unique. The corresponding fixed tree game is defined as for all So, represents the total cost of all the links that the members of S use to reach the source. Megiddo (1978) showed that these games are totally balanced and that the Shapley value belongs to the core.
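For a small tree with assumed link costs, the allocation that splits each link's cost equally among the agents whose path uses it, which for fixed tree games is known to coincide with the Shapley value, can be verified to lie in the core by brute force:

```python
from itertools import combinations

# A small fixed tree rooted at the source 0 (assumed numbers):
# edge -> (cost, set of agents whose path to the source uses it).
edges = {
    (0, 1): (6.0, {1, 2, 3}),
    (1, 2): (3.0, {2}),
    (1, 3): (4.0, {3}),
}
agents = {1, 2, 3}

def cost(S):
    """c(S): total cost of all links the members of S use to reach 0."""
    return sum(c for c, users in edges.values() if users & S)

# Share each link's cost equally among the agents that use it.
x = {i: sum(c / len(users) for c, users in edges.values() if i in users)
     for i in agents}

assert abs(sum(x.values()) - cost(agents)) < 1e-9           # efficiency
for k in range(1, len(agents)):
    for S in combinations(agents, k):
        assert sum(x[i] for i in S) <= cost(set(S)) + 1e-9  # stability
print(x)
```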

Given an existing communication network tree, the cost of each link may be considered to consist of two parts, namely the construction cost and the maintenance cost. Observe that the latter is generally random in nature, as one does not know up front when connections are going to fail and what the resulting cost of repair will be. In this regard, random variables are more appropriate to describe the maintenance cost of each link. So, let denote the random cost of the link and assume that these costs are mutually independent.


In the remainder of this section we model the cost allocation problem as a chance-constrained game, a stochastic cooperative game with transfer payments, and a stochastic cooperative game without transfer payments, respectively.

Recall that a chance-constrained game is denoted by (N, V) with representing the random payoff to coalition S. In this case, V(S) equals the total random costs of the links that the members of S use to reach the source, that is for each As the following example shows, the prior core of such chance-constrained games can be empty.

Example 1.9 Consider the tree illustrated in Figure 1.1. There are only two agents and each agent has a direct connection to the source. The random costs are uniformly distributed on (0, 1) and the random costs are exponentially distributed with mean 1. Let the levels of assurance be and For an allocation to belong to the prior core of the game, it must hold that and with the probability distribution function of This implies that and

Hence, the prior core is empty if As one can see in Figure 1.2, this is the case for
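The flavor of Example 1.9 can be reproduced numerically. The assurance levels below are illustrative assumptions (the chapter's values and Figure 1.2 are not reproduced here), and the reading of the prior core conditions (each singleton caps its payment at a quantile of its own standalone cost, while the total allocated must cover a quantile of the joint cost) is one plausible interpretation:

```python
import math
import random

random.seed(7)

# C1 ~ Uniform(0, 1) and C2 ~ Exp(1) are the random costs of the two
# direct links; the assurance levels are assumptions for illustration.
alpha1, alpha2, alpha12 = 0.8, 0.8, 0.9

q1 = alpha1                       # alpha-quantile of Uniform(0, 1)
q2 = -math.log(1.0 - alpha2)      # alpha-quantile of Exp(1)

# Quantile of the total cost C1 + C2, estimated by Monte Carlo.
total = sorted(random.random() + random.expovariate(1.0)
               for _ in range(200_000))
q12 = total[int(alpha12 * len(total))]

# Under this reading, a prior core allocation exists iff the singleton
# caps are jointly large enough to cover the required total: q1 + q2 >= q12.
print(round(q1 + q2, 3), round(q12, 3), q1 + q2 >= q12)
```

With these assumed levels the individual caps sum to less than the required total, so no prior core allocation exists, mirroring the emptiness phenomenon of the example.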

A stochastic cooperative game with transfer payments is described by a tuple with the total cost of the links that coalition S uses. In addition, we have to specify the preferences of the agents. We assume that the preference relation of agent can be represented by the that is if and only if Note that these preferences satisfy the properties (M1)–(M2). Hence, The corresponding certainty equivalent equals so that

for all Since and 0 it easily follows that Hence, a Pareto optimal allocation allocates the random costs V(S) to the member of S with the highest that is the agent that, relatively speaking, looks at the most optimistic outcomes of V(S).

From Theorem 1.5 we know that the core of the stochastic cooperative game is nonempty if and only if the core of the corresponding TU-game is nonempty. The following example shows that in this case the core can be empty.

Example 1.10 Consider the network allocation problem presented in Example 1.9. Let Since and we have that the core of the game is empty if and only if From Figure 1.2 we know that this holds true for


Finally, let us turn to stochastic cooperative games without transfer payments. Again, let us confine our attention to agents whose preferences can be represented by quantiles. Since these preferences satisfy conditions (P1)–(P3), we know that the core is given by expression (1.11). Recall that is such that if Since the latter holds if and only if it follows from

that Hence,

Notice that since implies for all it follows that

Example 1.11 Consider again the network allocation problem of Example 1.9. Let A core allocation satisfies the condition

for This is equivalent to for Using and we obtain that the core is empty if and only if This is the case if (see Figure 1.2).

Summarizing, stable allocations of the total random network costs need not exist for the three models under consideration. In fact, the example we provided concerned only two agents. This means that even a coalition consisting of two agents may not benefit from cooperation. This seems counterintuitive, for two agents can pretend that they cooperate and allocate the total costs as if they were standing alone. This, however, is not possible because of the allocation structure that we imposed. When cooperating, agents 1 and 2 cannot allocate the random costs in such a way that agent 1 pays and agent 2 pays Both agents must bear a proportional part of the total costs. So, for allocating network costs the allocation structure that is chosen in the models might be too restrictive. Therefore, it may be more appropriate to allow the agents to allocate the random costs of each connection proportionally among each other instead of the total costs


1.4 Bankruptcy Problems with Random Estate

A bankruptcy petition is presented against a firm when it is not able to meet the claims of its creditors. Since the firm's assets have insufficient value to pay off the debts, not every individual creditor can receive his claim back in full. So, what would be a fair allocation of the firm's estate? O'Neill (1982) modeled this allocation problem by means of the following cooperative game. Let denote the estate of the firm and let N denote the finite set of creditors, each creditor having a claim on the firm's estate. Since the firm is in bankruptcy, it must hold that the total of the claims exceeds the estate, that is

The value of a coalition is defined as the remains of the estate E when coalition S fulfills the claims of the noncooperating creditors. Thus, To simplify notation, define Then for all O'Neill (1982) showed that the core of bankruptcy games is nonempty. Furthermore, it was shown that

Thus, any allocation that gives each creditor no more than his claim is a core allocation.
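This core property is easy to verify by brute force for a small bankruptcy game; the estate and claims below are assumed numbers:

```python
from itertools import combinations

# O'Neill (1982) bankruptcy game: estate E, claims d_i with sum > E;
# v(S) is what remains of E after paying the claims outside S.
E = 10.0
claims = {1: 4.0, 2: 5.0, 3: 6.0}
N = set(claims)

def v(S):
    return max(E - sum(claims[j] for j in N - set(S)), 0.0)

def in_core(x):
    """Efficiency plus coalitional rationality for every proper coalition."""
    if abs(sum(x.values()) - v(N)) > 1e-9:
        return False
    return all(sum(x[i] for i in S) >= v(S) - 1e-9
               for k in range(1, len(N))
               for S in combinations(N, k))

# Any allocation with 0 <= x_i <= claim_i that exhausts the estate
# should be a core allocation; check one such allocation.
x = {1: 2.0, 2: 3.0, 3: 5.0}
assert all(0.0 <= x[i] <= claims[i] for i in N) and sum(x.values()) == E
assert in_core(x)
print(v(N), x)
```

The argument behind the assertion: for any coalition S, the members of S jointly receive E minus what the others receive, and since the others receive at most their claims, this is at least v(S).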

Next, let us assume that the exact value of the estate is uncertain. For instance, the value of a firm's assets (e.g. inventory, production facilities, knowledge) is ambiguous in bankruptcy as market values are no longer appropriate. We assume that creditors have a deterministic claim on this random estate and that the total claims exceed the estate for all possible realizations, i.e.

First, we model this bankruptcy problem as a chance-constrained game. Similar to O'Neill (1982), we define the value of a coalition as the remains of the estate after they have paid back the claims of the other creditors, that is for all Notice that The following example shows that the prior core of this game can be empty.

Example 1.12 Let E be uniformly distributed between 0 and 10 and let the claims of the two creditors be equal to 6. Further, take and The value of coalition is given by the random variable with probability distribution function


Note that equals zero with probability 0.6. In that case, the estate is insufficient to pay off the claim of creditor so that nothing remains for creditor Since we have that belongs to the prior core if for and

This implies that and Obviously, such allocations do not exist.

The main reason why the game in Example 1.12 has an empty core is because Given the interpretation of this inequality might be considered counterintuitive. Since individual finds an allocation acceptable if the probability that he cannot do better on his own is at least one might expect that individual finds an allocation for coalition acceptable if the probability that this coalition cannot do better on its own is also at least In other words,

Imposing a monotonicity condition on is sufficient for nonemptiness of the prior core:

Theorem 1.13 A bankruptcy game (N, V) has a nonempty prior core if for all

Second, we model the allocation problem as a stochastic cooperative game with transfer payments. Let be a stochastic bankruptcy game with for all and the preference relation based on the Since the creditors' preferences satisfy conditions (M1) and (M2), we can define the corresponding TU-game by

for all Using that it directly follows that for all Now we can easily prove that the core is nonempty. Define and define the TU-game by for all

Since for all and it follows that Moreover, the game is a bankruptcy game in the sense of O'Neill (1982) with estate Hence, the core and thus In particular it holds that


Applying Theorem 1.5 then yields the following result.

Theorem 1.14 Let be a stochastic bankruptcy game and letThen

Note that if all creditors have the same preferences, that is for all then the game is a bankruptcy game with estate Hence, equality holds in Theorem 1.14.

Next, let us turn to the case without transfer payments. Using that we have that


The following theorem states that the core is nonempty. The proof is provided in the Appendix.

Theorem 1.15 Let be a stochastic bankruptcy game. Then the core is nonempty.
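One way to see why a core allocation exists here: the proportional fractions r_i = claim_i / total dominate every coalition's random payoff realization by realization, since whenever the estate falls short of the total claims, what remains for a coalition after paying the outside claims is at most its proportional share of the estate. A Monte Carlo sketch with assumed numbers (uniform estate, three creditors; the candidate allocation is one natural choice, not a construction taken from the chapter's proof):

```python
import random

random.seed(3)

# Assumed data: estate E ~ Uniform(0, 10), deterministic claims
# whose total (15) exceeds every realization of E.
claims = {1: 4.0, 2: 5.0, 3: 6.0}
total = sum(claims.values())
samples = [random.uniform(0.0, 10.0) for _ in range(50_000)]

def v_S(e, S):
    """Realization of V(S): what remains after paying the claims outside S."""
    return max(e - sum(c for j, c in claims.items() if j not in S), 0.0)

# Candidate core allocation without transfers: fractions r_i = claim_i/total,
# so agent i receives r_i * E.  For every proper coalition S, the whole
# payoff V(S) is dominated realization by realization by what the members
# of S jointly receive under the proportional split of E, so no coalition
# can make all its members strictly better off.
for S in [{1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}]:
    dS = sum(claims[j] for j in S)
    for e in samples:
        assert v_S(e, S) <= e * dS / total + 1e-12
print("pointwise dominance holds on all samples")
```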

Bankruptcy games have a nonempty core independent of whether or not we allow for transfer payments between the agents. So how do the core allocations of these two bankruptcy games compare to each other? We can compare the core allocations by comparing the corresponding certainty equivalents. Therefore, take Since satisfies conditions (M1)–(M2), the certainty equivalent of a random payoff X equals So, does there exist an allocation such that the certainty equivalent coincides with that is

for all The answer is no. In fact, by allowing transfer payments, the agents can strictly improve upon the allocation if for some To see this, let be such that Then and for all is a Pareto optimal risk allocation. Further, take

for and Note that

where


Since

is a feasible allocation. Moreover, for all and

Hence, all agents prefer the allocation to the allocation

Summarizing, core allocations exist for stochastic bankruptcy games whereas they need not exist for chance-constrained bankruptcy games. Furthermore, core allocations without transfer payments are strictly Pareto dominated by allocations with transfer payments. Hence, from the viewpoint of the creditors, transfer payments are preferable.

1.5 Concluding Remarks

This paper surveyed three existing models on cooperative behavior under risk. We discussed the differences and similarities between these models and examined their performance in two applications. The cooperative models under consideration were chance-constrained games, stochastic cooperative games with transfer payments, and stochastic cooperative games without transfer payments. The main difference between the first and the latter two is that stochastic cooperative games explicitly incorporate the preferences of the agents. The main difference between the latter two is, as their names imply, in the deterministic transfer payments. Allowing for deterministic transfer payments allows for insurance, as agents can transfer risk in exchange for a deterministic payment, i.e. an insurance premium. One reason to exclude insurance possibilities from the analysis when examining cooperative behavior is the following. Since the possibility to insure risks provides an incentive to cooperate (see, for instance, Suijs et al., 1998), allowing for insurance may bias the results in the sense that it may not be clear whether the incentive to cooperate arises from the characteristics of the particular setting under consideration or from the insurance opportunities.

In the analysis of the applications, we focused on stability of cooperation. More precisely, for each of the three models we examined whether the core was nonempty. For the cost allocation problem in a communication network tree, all three models need not yield stable cooperation. This 'deficiency' is attributed to the different restrictions that the three models impose on the allocation space. For the bankruptcy case, conditions could be derived such that stable cooperation arises in all three models.


Appendix

Proof of Theorem 1.8. Let The core is nonempty if the following system of linear equations has a nonnegative solution

Using a variant of Farkas' Lemma, a nonnegative solution exists if and only if there exists no satisfying
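For reference, the classical alternative behind this step can be stated as follows (a standard form of Farkas' Lemma; the notation A, b, y is ours, not the authors'):

```latex
% Farkas' Lemma (one standard form): for a matrix A and a vector b,
%   \exists\, x \ge 0 \text{ with } Ax = b
% holds if and only if
%   \nexists\, y \text{ with } y^{\top}A \le 0 \text{ and } y^{\top}b > 0.
\[
\exists\, x \geq 0 :\; Ax = b
\quad\Longleftrightarrow\quad
\nexists\, y :\; y^{\top} A \leq 0 \ \text{ and } \ y^{\top} b > 0 .
\]
```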

Without loss of generality we may assume that µ > 0; otherwise we can consider µ + exc with exc > 0 sufficiently small. Hence, the above is equivalent to: there exists no such that

for all

Rewriting yields the statement: if for all satisfying for each then

Proof of Theorem 1.13. Notice that for all The prior core is nonempty if and only if there exists an allocation such that and

If for all then for all Since the bankruptcy game with estate

for all

for all

and claims has a nonempty core, there exists an allocation such that and

for all Hence, is a core-allocation for the bankruptcy game (N, V).

Proof of Theorem 1.15. Define for each Notice that since

an allocation is a core-allocation if for all

We will show that

Let be such that First, we consider the case that is unique. Take Hence, If then

where the second inequality follows from the third equality from for and the third inequality from

If then

where the second equality follows from and the second inequality follows from

If agent is not unique, then the proof is similar to the case above. Hence, we leave it as an exercise to the reader.

References

Bondareva, O. (1963): “Some applications of linear programming methods to the theory of cooperative games,” Problemy Kibernetiki, 10, 119–139. In Russian.

Charnes, A., and D. Granot (1973): “Prior solutions: extensions of convex nucleolus solutions to chance-constrained games,” Proceedings of the Computer Science and Statistics Seventh Symposium at Iowa State University, 323–332.

Charnes, A., and D. Granot (1976): “Coalitional and chance-constrained solutions to games I,” SIAM Journal on Applied Mathematics, 31, 358–367.

Charnes, A., and D. Granot (1977): “Coalitional and chance-constrained solutions to games II,” Operations Research, 25, 1013–1019.

Claus, A., and D. Kleitman (1973): “Cost allocation for a spanning tree,” Networks, 3, 289–304.

Granot, D. (1977): “Cooperative games in stochastic characteristic function form,” Management Science, 23, 621–630.

Megiddo, N. (1978): “Computational complexity of the game theory approach to cost allocation for a tree,” Mathematics of Operations Research, 3, 189–196.

O’Neill, B. (1982): “A problem of rights arbitration from the Talmud,” Mathematical Social Sciences, 2, 345–371.

Shapley, L. (1967): “On balanced sets and cores,” Naval Research Logistics Quarterly, 14, 453–460.

Suijs, J., and P. Borm (1999): “Stochastic cooperative games: superadditivity, convexity, and certainty equivalents,” Games and Economic Behavior, 27, 331–345.

Suijs, J., P. Borm, A. De Waegenaere, and S. Tijs (1999): “Cooperative games with stochastic payoffs,” European Journal of Operational Research, 113, 193–205.

Suijs, J., A. De Waegenaere, and P. Borm (1998): “Stochastic cooperative games in insurance,” Insurance: Mathematics & Economics, 22, 209–228.

Timmer, J., P. Borm, and S. Tijs (2000): “Convexity in stochastic cooperative situations,” CentER Discussion Paper Series 2000-04, Tilburg University.

von Neumann, J., and O. Morgenstern (1944): Theory of Games and Economic Behavior. Princeton: Princeton University Press.


Chapter 2

Sequencing Games: a Survey

BY IMMA CURIEL, HERBERT HAMERS, AND FLIP KLIJN

2.1 Introduction

During the last three decades there have been many interesting interactions between linear and combinatorial optimization and cooperative game theory. Two problems meet here: on the one hand the problem of minimizing the costs or maximizing the revenues of a project, on the other hand the problem of allocating these costs or revenues among the participants in the project. The first problem is dealt with using techniques from linear and combinatorial optimization theory; the second problem falls in the realm of cooperative game theory. We mention minimum spanning tree games (cf. Granot and Huberman, 1981), linear production games (cf. Owen, 1975), traveling salesman games (cf. Potters et al., 1992), Chinese postman games (cf. Granot et al., 1999) and assignment games (cf. Shapley and Shubik, 1972). An overview of this type of games can be found in Tijs (1991) and Curiel (1997).

Another fruitful topic in this area has been and still is that of sequencing games. This paper gives an overview of the developments of the interaction between sequencing situations and cooperative games.

In operations research, sequencing situations are characterized by a finite number of jobs lined up in front of one (or more) machine(s) that have to be processed on the machine(s). A single decision maker wants to determine a processing order of the jobs that minimizes total costs.

P. Borm and H. Peters (eds.), Chapters in Game Theory, 27–50. © 2002 Kluwer Academic Publishers. Printed in the Netherlands.

This single decision maker problem can be transformed into a multiple decision makers problem by taking into account agents who each own at least one job. In such a model a group of agents (coalition) can save costs by cooperation. For the determination of the maximal cost savings of a coalition one has to solve the combinatorial problem corresponding to this coalition. In particular, the maximal cost savings for each coalition can be modeled by a cooperative transferable utility game, which is an ordered pair (N, v) where N denotes a non-empty, finite set of players (agents) and v is a mapping from the power set of N to the real numbers with v(∅) = 0. The questions that arise are: which coalition(s) will form, and how should the maximal cost savings be allocated among the members of this coalition? One way to answer these questions is to look at solution concepts and properties of the game. One of the most prominent solution concepts in cooperative game theory is the core of a game. It consists of all vectors which distribute v(N), i.e., the revenues incurred when all players in N cooperate, among the players in such a way that no subset of players can be better off by seceding from the rest of the players and acting on their own behalf. That is, a vector x ∈ R^N is in the core of a game (N, v) if ∑_{i∈N} x_i = v(N) and ∑_{i∈S} x_i ≥ v(S) for all S ⊆ N. A cooperative game whose core is not empty is said to be balanced.
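The core conditions just stated can be checked by brute force for a small game. The sketch below is our own illustration (the game numbers are hypothetical, not taken from the chapter): coalitions are represented as frozensets and an allocation is in the core iff it is efficient and no coalition is short-changed.

```python
from itertools import chain, combinations

def subsets(players):
    """All non-empty subsets of the player set."""
    return chain.from_iterable(
        combinations(players, r) for r in range(1, len(players) + 1))

def in_core(v, players, x):
    """Check whether allocation x (dict player -> payoff) lies in the core of (N, v):
    efficiency on N, and sum_{i in S} x_i >= v(S) for every coalition S."""
    if abs(sum(x[i] for i in players) - v[frozenset(players)]) > 1e-9:
        return False  # not efficient
    return all(sum(x[i] for i in S) >= v[frozenset(S)] - 1e-9
               for S in subsets(players))

# A small three-player example with hypothetical worths.
players = (1, 2, 3)
v = {frozenset(S): 0.0 for S in subsets(players)}
v[frozenset({1, 2})] = 4.0
v[frozenset({1, 2, 3})] = 6.0

print(in_core(v, players, {1: 2.0, 2: 2.0, 3: 2.0}))  # → True
```

Here (2, 2, 2) is efficient and gives coalition {1, 2} exactly its worth 4, so no coalition can profitably secede.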

A well-known class of balanced games is the class of convex games. A game (N, v) is called convex if for any i ∈ N and any S ⊆ T ⊆ N \ {i} it holds that v(S ∪ {i}) − v(S) ≤ v(T ∪ {i}) − v(T).

Convex (or supermodular) games are known to have nice properties, in the sense that some solution concepts for these games coincide and others have intuitive descriptions. For example, for convex games the core is equal to the convex hull of all marginal vectors (cf. Shapley, 1971, and Ichiishi, 1981), and, as a consequence, the Shapley value is the barycentre of the core (Shapley, 1971). Moreover, the bargaining set and the core coincide, the kernel coincides with the nucleolus (Maschler et al., 1972), and the τ-value can be easily calculated (Tijs, 1981).
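The convexity condition can likewise be verified exhaustively on small examples. The following sketch (our own, with a hypothetical symmetric game) tests the supermodularity inequality v(S ∪ {i}) − v(S) ≤ v(T ∪ {i}) − v(T) over all admissible pairs S ⊆ T:

```python
from itertools import chain, combinations

def all_subsets(players):
    """All subsets of the player set, including the empty set."""
    return [frozenset(S) for S in chain.from_iterable(
        combinations(players, r) for r in range(len(players) + 1))]

def is_convex(v, players):
    """Check supermodularity: for every i and S subset of T (both avoiding i),
    the marginal contribution of i to S is at most that to T."""
    subs = all_subsets(players)
    for i in players:
        for S in subs:
            if i in S:
                continue
            for T in subs:
                if i in T or not S <= T:
                    continue
                if v[S | {i}] - v[S] > v[T | {i}] - v[T] + 1e-9:
                    return False
    return True

# Hypothetical example: the symmetric game v(S) = |S|^2 has increasing
# marginal contributions, hence is convex.
players = (1, 2, 3)
v = {S: float(len(S) ** 2) for S in all_subsets(players)}
print(is_convex(v, players))  # → True
```

Replacing |S|² by a concave function such as √|S| makes the check fail, since marginal contributions then decrease.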

In this paper we will focus on balancedness and convexity of the several classes of sequencing games that will be discussed.

Sequencing games were introduced in Curiel et al. (1989). They considered the class of one-machine sequencing situations in which no restrictions like due dates and ready times are imposed on the jobs and the weighted completion time criterion was chosen as the cost criterion. It was shown that the corresponding sequencing games are convex and, thus, balanced. Hamers et al. (1995) extended the class of one-machine sequencing situations considered by Curiel et al. (1989) by imposing ready times on the jobs. In this case the corresponding sequencing games are balanced, but not necessarily convex. For a special subclass of sequencing games with ready times, however, convexity could be established. Similar results are obtained in Borm et al. (1999), in which due dates are imposed on the jobs.

Instead of imposing restrictions on the jobs, Hamers et al. (1999) and Calleja et al. (2001) extended the number of machines. Hamers et al. (1999) consider sequencing situations with parallel and identical machines in which no restrictions on the jobs are imposed. Again, the weighted completion time criterion is used. They proved balancedness in case there are two machines, and showed balancedness for special classes in case there are more than two machines. Calleja et al. (2001) established balancedness for a special class of sequencing games that arise from 2-machine sequencing situations in which a maximal weighted cost criterion is considered.

Van Velzen and Hamers (2001) consider some classes of sequencing games that arise from the same sequencing situations as used in Curiel et al. (1989). A difference, however, is that the coalitions in their games have more possibilities to maximize their profit. They show that some of these classes are balanced.

This chapter is organized as follows. We start in Section 2.2 by recalling permutation games and σ-component additive games, two classes of games that are closely related to sequencing games. Section 2.3 deals with the sequencing situations and games studied in Curiel et al. (1989). Section 2.4 discusses the sequencing games that arise if ready times or due dates are imposed on the jobs. Multiple-machine sequencing games are discussed in Section 2.5. Section 2.6 considers sequencing games that arise when the agents have more possibilities to maximize their profit.

2.2 Games Related to Sequencing Games

In this section we consider two classes of games that are closely related to sequencing games: permutation games, introduced by Tijs et al. (1984), and σ-component additive games, introduced by Curiel et al. (1993).


The main reason to start with these games is that they play an important role in the investigation of the balancedness of sequencing games.

Permutation games describe a situation in which persons all have one job to be processed and one machine on which each job can be processed. No machine is allowed to process more than one job. Side-payments between the players are allowed. If player i processes his job on the machine of player j, the processing costs are k_{ij}. Let N be the set of players. The permutation game with costs k is the cooperative game defined by

for all and The class of all S-permutations is denoted by The number v(S) denotes the maximal cost savings a coalition S can obtain by processing its jobs according to an optimal schedule, compared to the situation in which every player processes his job on his own machine.

Theorem 2.1 Permutation games are totally balanced.

Several proofs of Theorem 2.1 are presented in the literature. We mention Tijs et al. (1984), using the Birkhoff-von Neumann theorem on doubly stochastic matrices. Curiel and Tijs (1986) gave another proof of the balancedness of permutation games. They used an equilibrium existence theorem of Gale (1984) for a discrete exchange economy with money, thereby showing a relation between assignment games (cf. Shapley and Shubik, 1972) and permutation games. Klijn et al. (2000) use the existence of envy-free allocations in economies with indivisible objects, quasi-linear utility functions, and an amount of money to prove the balancedness of permutation games.
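For small player sets the worth of a coalition in a permutation game can be computed by enumerating all S-permutations. The sketch below is our own (the cost matrix K is hypothetical): v(S) is the cost of everyone using his own machine minus the cheapest reassignment of the jobs of S among the machines of S.

```python
from itertools import permutations

def permutation_game_value(K, S):
    """Cost savings of coalition S in a permutation game with cost matrix K,
    where K[i][j] is the cost of processing the job of player i on the
    machine of player j. Brute force over all S-permutations."""
    S = list(S)
    own_cost = sum(K[i][i] for i in S)  # everyone on his own machine
    best = min(sum(K[i][j] for i, j in zip(S, perm))
               for perm in permutations(S))
    return own_cost - best

# Hypothetical 3-player cost matrix.
K = [[4, 1, 3],
     [2, 5, 1],
     [3, 2, 6]]
print(permutation_game_value(K, [0, 1, 2]))  # → 10
```

Note that v(S) ≥ 0 for every coalition, since the identity permutation is always available; singletons have worth 0.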

Let σ be an order on the player set. Then a game is called a σ-component additive game if it satisfies the following three conditions:

(i) for all
(ii) is superadditive
(iii) for any where is the set of maximally connected components of S.

Here, a coalition is called connected with respect to σ if for all and such that it holds that

The next result, due to Curiel et al. (1994), shows that σ-component additive games have a non-empty core.


Theorem 2.2 σ-component additive games are balanced.

The proof shows that a specific vector, which is the average of two specific marginal vectors, is in the core.

Moreover, Potters and Reijnierse (1995) showed that for a class of games containing the σ-component additive games, the core is equal to the bargaining set and the nucleolus coincides with the kernel.

2.3 Sequencing Situations and Sequencing Games

In this section we describe the class of one-machine sequencing situations and the corresponding class of sequencing games as introduced in Curiel et al. (1989). Furthermore, we discuss the EGS rule and the split core, solution concepts that generate allocations in the core of a sequencing game. Finally, we show that sequencing games are convex.

In a one-machine sequencing situation there is a queue of agents, each with one job, before a machine (counter). Each agent (player) has to have his job processed on this machine. The finite set of agents is denoted by N. By a bijection we can describe the position of the agents in the queue. Specifically, means that player is in position We assume that there is an initial order of the jobs before the processing of the machine starts. The set of all possible processing orders is denoted by The processing time of the job of agent is the time the machine takes to handle this job.

For each agent the costs of spending time in the system can be described by a linear cost function defined by with So is the cost for agent if he has spent units of time in the system.

A sequencing situation as described above is denoted by where is the set of players, and

The starting time of the job of agent if processed in a semi-active way according to a bijection is

where is such that Here, a processing order is called semi-active if there does not exist a job which could be processed earlier without altering the processing order. In other words, there are no unnecessary delays in the processing order. Note that we may restrict our attention to semi-active processing orders since for each agent the cost function is weakly increasing. Consequently, the completion time of the job of agent with respect to is equal to

By reordering the jobs, the total costs of the agents will change.

Clearly, there exists an ordering for which the total costs are minimized. A processing order that minimizes total costs, and thus maximizes total cost savings, is an order in which the players are processed in decreasing order of the urgency index, defined as the ratio of a player's cost coefficient to his processing time. This result is due to Smith (1956) and is formally presented (without proof) in the following proposition.

Proposition 2.3 Let be a sequencing situation. Then for

if and only if

Note that an optimal order can be obtained from the initial order by consecutive switches of neighbours with directly in front of and
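Smith's rule above is easy to implement. The sketch below uses our own notation (p for processing times, alpha for cost coefficients; the data are hypothetical): jobs are sorted by decreasing urgency alpha_i / p_i, and the total weighted completion time is accumulated along the order.

```python
def smith_order(p, alpha):
    """Order players by decreasing urgency alpha_i / p_i (Smith's rule).
    p, alpha: dicts mapping player -> processing time / cost coefficient."""
    return sorted(p, key=lambda i: alpha[i] / p[i], reverse=True)

def total_cost(order, p, alpha):
    """Total weighted completion time sum_i alpha_i * C_i for a given order."""
    t, cost = 0.0, 0.0
    for i in order:
        t += p[i]            # completion time of job i
        cost += alpha[i] * t
    return cost

# Hypothetical data for three players.
p = {1: 2.0, 2: 1.0, 3: 3.0}
alpha = {1: 2.0, 2: 3.0, 3: 1.0}
opt = smith_order(p, alpha)
print(opt, total_cost(opt, p, alpha))  # → [2, 1, 3] 15.0
```

For these numbers the urgencies are 1, 3 and 1/3, so player 2 goes first; the initial order (1, 2, 3) would cost 19, so the maximal cost savings are 4.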

The problem of allocating the maximal surplus back to the players is tackled using game theory. First, we define a class of cooperative games that arises from the above described sequencing situations. Second, some allocation rules will be discussed and related to sequencing games.

For a sequencing situation the costs of coalition S with respect to a processing order are equal to Let be an optimal order of N. Then the maximal cost savings for coalition N are equal to We want to determine the maximal cost savings of a coalition S that decides to cooperate. For this, we have to define which rearrangements of the coalition S are admissible with respect to the initial order. A bijection is called admissible for S if it satisfies the following condition:

where for any the set of predecessors of a player with respect to is defined as

This condition implies that the starting time of each agent outside the coalition S is equal to his starting time in the initial order, and the agents of S are not allowed to jump over players outside S. The set of admissible rearrangements for a coalition S is denoted by

By defining the worth of a coalition S as the maximum cost savings coalition S can achieve by means of an admissible rearrangement, we obtain a cooperative game called a sequencing game. Formally, for a sequencing situation the corresponding sequencing game is defined by

for all We will refer to the games defined in (2.1), introduced in Curiel et al. (1989), as standard sequencing games or s-sequencing games.

Expression (2.1) can be rewritten in terms of the cost savings attainable by players and when is directly in front of Then for any S that is connected with respect to it holds that

For a coalition T that is not connected with respect to it follows that

where is the set of components of T, a component of T being a maximal, connected subset of T.
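These two expressions translate directly into a small computation. The sketch below is our own (notation p, alpha, sigma0 and the data are hypothetical): the switch gain of a pair i before j is taken as g_ij = max(0, α_j·p_i − α_i·p_j), and v(S) sums these gains within each maximally connected component of S.

```python
def gain(i, j, p, alpha):
    """Gain of a neighbour switch when i is directly in front of j:
    g_ij = max(0, alpha_j * p_i - alpha_i * p_j)."""
    return max(0.0, alpha[j] * p[i] - alpha[i] * p[j])

def components(S, sigma0):
    """Maximally connected parts of S with respect to the initial order sigma0."""
    comps, cur = [], []
    for i in sigma0:
        if i in S:
            cur.append(i)
        elif cur:
            comps.append(cur)
            cur = []
    if cur:
        comps.append(cur)
    return comps

def seq_game_value(S, sigma0, p, alpha):
    """v(S): sum of switch gains g_ij over pairs i before j inside each component."""
    total = 0.0
    for comp in components(set(S), sigma0):
        for a in range(len(comp)):
            for b in range(a + 1, len(comp)):
                total += gain(comp[a], comp[b], p, alpha)
    return total

# Hypothetical data with initial order 1, 2, 3.
p = {1: 2.0, 2: 1.0, 3: 3.0}
alpha = {1: 2.0, 2: 3.0, 3: 1.0}
sigma0 = [1, 2, 3]
print(seq_game_value({1, 2, 3}, sigma0, p, alpha))  # → 4.0
```

Here only the pair (1, 2) yields a positive gain (3·2 − 2·1 = 4); the coalition {1, 3} is not connected, so its worth is 0.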

Example 2.4 Let N = {1, 2, 3}, for all and It follows that and Then

for all and

We can conclude that s-sequencing games are σ-component additive games. Hence, s-sequencing games are balanced. Nevertheless, we show that two context-specific rules, the EGS (Equal Gain Splitting) rule and the split core, provide allocations that are in the core of a sequencing game.

From (2.2) it follows immediately that for an s-sequencing game that arises from a sequencing situation

Recall that the set of predecessors of player with respect to the processing order is given by We define the set of followers of with respect to to be

The Equal Gain Splitting rule, introduced in Curiel et al. (1989), is a map that assigns to each sequencing situation a vector in and is defined by

for all Note that the EGS-rule is independent of the chosen optimal order and that it assigns to each player half of the gains of all neighbour switches he is actually involved in when reaching an optimal order from the initial order.

From (2.3) it readily follows that the EGS-rule allocates the maximal cost savings that coalition N can obtain, i.e.,

Example 2.5 Let N = {1, 2, 3}, for all and From Example 2.4 we have that and

Then and Moreover, we have
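The EGS rule can be sketched as follows (our own notation and hypothetical data, matching the illustration used above for Smith's rule): each player receives half of the gain g_ij of every pair he is involved in along the initial order.

```python
def egs(sigma0, p, alpha):
    """Equal Gain Splitting: each player gets half the gain of every
    neighbour switch he is involved in when moving from the initial
    order sigma0 (a list) to an optimal order."""
    def gain(i, j):  # i directly in front of j
        return max(0.0, alpha[j] * p[i] - alpha[i] * p[j])
    alloc = {i: 0.0 for i in sigma0}
    for a in range(len(sigma0)):
        for b in range(a + 1, len(sigma0)):
            g = gain(sigma0[a], sigma0[b])
            alloc[sigma0[a]] += 0.5 * g
            alloc[sigma0[b]] += 0.5 * g
    return alloc

# Hypothetical data with initial order 1, 2, 3.
p = {1: 2.0, 2: 1.0, 3: 3.0}
alpha = {1: 2.0, 2: 3.0, 3: 1.0}
print(egs([1, 2, 3], p, alpha))  # → {1: 2.0, 2: 2.0, 3: 0.0}
```

By construction the allocation sums to the total of all pairwise gains, i.e., to v(N), here 4.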

The EGS-rule divides the gain of each neighbour switch equally among both players involved. Generalizing the EGS-rule, we consider Gain Splitting (GS) rules in which each player obtains a non-negative part of the gain of all neighbour switches he is actually involved in to reach the optimal order. The total gain of a neighbour switch is divided among both players that are involved. Formally, we define for all and all

where Note that for each we possibly obtain another allocation. Moreover, in case for all

Example 2.6 If we take and in the sequencing situation of Example 2.5, then

The split core of a sequencing situation, introduced in Hamers et al. (1996), is defined by

The following theorem, due to Hamers et al. (1996), shows that the split core of a sequencing situation is a subset of the core of the corresponding s-sequencing game.

Theorem 2.7 Let be a sequencing situation and let be the corresponding game. Then

Proof. It is sufficient to show for all connected coalitions, with equality for S = N. Let and let S be a connected set. Then

In case S = N the inequality becomes an equality. Hence,

Because is the gain splitting rule in which all weights are equal to one half, we have the following corollary.

Corollary 2.8 Let be a sequencing situation and let be the corresponding game. Then


The following theorem, due to Curiel et al. (1989), shows that s-sequencing games are convex games.

Theorem 2.9 Let be a sequencing situation. Then the corresponding game is convex.

Proof. Let and let Then it follows that there exists and with and

such that

Since for all we have that is convex.

2.4 On Sequencing Games with Ready Times or Due Dates

In this section we consider one-machine sequencing games with different types of restrictions on the jobs. First we study the situations in which jobs are available at different moments in time. Put differently, we impose ready times (release dates) on the jobs. We show that these games are balanced, and that a special subclass is convex. Second, we investigate situations in which jobs have a due date.

The description of a one-machine sequencing game in which the jobs have ready times is similar to that of the one-machine sequencing games in the previous section. We only have to include the notion of the ready time of a job and put an extra assumption on the initial order. The ready time of the job of agent is the earliest time the processing of his job can begin. Furthermore, it is assumed that there is an initial order such that

(A1) for all with

A sequencing situation as described above is denoted by where N is the set of players, and


if
if

where is such that Hence, the completion time of the job of agent with respect to is equal to

The total costs of a coalition are given by

The set of admissible rearrangements of a coalition S is identical to the set defined in the previous section, i.e., Consequently, given a sequencing situation the corresponding sequencing game is analogously defined as in the previous section, i.e.,

for all We refer to the games defined in (2.4) as r-sequencing games. Because the set of admissible rearrangements is identical to the one in s-sequencing games, we have again that for any coalition T it holds that

Because r-sequencing games are σ-component additive games, we can conclude that r-sequencing games have a non-empty core.

Theorem 2.10 Let be a sequencing situation. Then the corresponding game is balanced.

The following example shows that convexity need not be satisfied.

Example 2.11 Let N = {1, 2, 3}, and The costs according to the initial order equal 1 · 1 + 3 · 3 + 6 · 12 = 82. The optimal rearrangement equals (1, 3, 2) with corresponding costs of 1 · 1 + 4 · 12 + 6 · 3 = 67. Consequently, we have that Furthermore, since

The starting time of the job of agent if processed according to a bijection (in a semi-active way) in the situation with ready times is


and since From this we can infer that is not a convex game:

However, convexity can be established for a special subclass. More precisely, we restrict our attention to sequencing situations where there are no time gaps in the job processing according to the initial order, i.e.,

(A2) for all with

and

(A3) and for all

Now, we state a convexity result, due to Hamers et al. (1995).

Theorem 2.12 Let be a sequencing situation satisfying (A1)-(A3) and let be the corresponding game. Then is convex.

In the second part of this section we focus on one-machine sequencing games in which due dates are imposed on the jobs.

The description of a one-machine sequencing game in which the jobs have due dates is similar to that of the one-machine sequencing games in which ready times are involved. We only have to replace the ready times by due dates and put an extra assumption on the initial order. The job of agent has to be finished by the due date Furthermore, it is assumed that there is an initial order such that

(B1) for all with

A sequencing situation as described above is denoted by where N is the set of players,

The starting time of a job is defined identically to that in Section 2.3, and consequently the completion time is also defined identically. The set of admissible rearrangements of a coalition is in the same spirit: jobs outside the coalition cannot jump over jobs inside the coalition and their starting time is not changed, i.e., for all

Moreover, we impose the restriction that a rearrangement is admissible only if all jobs are processed before their due dates. Formally, an admissible rearrangement satisfies

We denote the set of admissible rearrangements of a coalition S by The corresponding sequencing game is defined as follows

for all We will refer to the games defined in (2.5) as d-sequencing games.

Because d-sequencing games are σ-component additive games, we have the following theorem.

Theorem 2.13 Let be a sequencing situation satisfying (B1) and let satisfy (B2). Then the corresponding game is balanced.

Convexity can be established for a special subclass. More precisely, we restrict attention to sequencing situations with

Now, we state a convexity result, due to Borm et al. (1999).

Theorem 2.14 Let be a sequencing situation satisfying (B1) and (B3) and let satisfy (B2). Let be the corresponding game. Then is convex.

This result is proved by establishing the equivalence between this class of d-sequencing games and the class of r-sequencing games described in Theorem 2.12.

Other convex classes of sequencing games that arise from sequencing situations in which due dates are involved can also be found in Borm et al. (1999). The related sequencing games arise from the same sequencing situations, but with a different cost criterion.

(B2) for all

(B3) and for all


2.5 On Sequencing Games with Multiple Machines

In this section we consider multiple-machine sequencing games. First, we discuss m-machine sequencing situations in which the weighted completion time criterion is taken into account. Second, we deal with 2-machine sequencing situations with a maximal weighted completion time criterion.

The first model in this section deals with multiple-machine sequencing situations with parallel and identical machines. The weighted completion time criterion is used. Furthermore, each agent has one job that has to be processed on precisely one machine. These sequencing situations, which will be referred to as m-machine sequencing situations, give rise to the class of m-sequencing games. Formally, in an m-machine sequencing situation each agent has one job that has to be processed on precisely one machine. Each job can be processed on any machine. The finite set of machines is denoted by and the finite set of agents is denoted by We assume that each machine starts processing at time 0 and that the processing time of each job is independent of the machine the job is processed on. The processing time of the job of agent is denoted by We assume that every agent has a linear monetary cost function defined by

where is a (positive) cost coefficient. We can use a one-to-one map to describe on which machine and in which position on that machine the job of an agent will be processed. Specifically, means that agent is assigned to machine and that (the job of) agent is in position on machine Such a map will be called a (processing) schedule.

In the following, an m-machine sequencing situation will be described by where is the set of machines, the set of agents, the initial schedule, the processing times, and the cost coefficients.

The starting time of the job of agent if processed in a semi-active way according to a schedule equals

where if and only if the jobs of the agents and are on the same machine (i.e., ) and precedes (i.e., ). Consequently, the completion time of the job of agent with respect to is equal to The total costs of a coalition with respect to the schedule are given by

We will restrict our attention to m-machine sequencing situations that satisfy the following condition: the starting time of a job that is in the last position on a machine with respect to is smaller than or equal to the completion time of each job that is in the last position with respect to on the other machines. Formally, for let be the last agent on machine with respect to then we demand for all that

This condition states that a job that is in the last position on a machine cannot make any profit by joining the end of the queue of any other machine.

The (maximal) cost savings of a coalition S depend on the set of admissible rearrangements of this coalition. We call a schedule admissible for S with respect to if it satisfies the following two conditions:

(i) Two agents that are on the same machine can only switch if all agents in between and on that machine are also members of S;
(ii) Two agents that are on different machines can only switch places if the tail of and the tail of are contained in S. The tail of an agent is the set of agents that follow agent on his machine, i.e., the set of agents with

The set of admissible schedules for a coalition S is denoted by An admissible schedule for coalition N will be called a schedule.

By defining the worth of a coalition as the maximum cost savings the coalition can achieve by means of admissible schedules, we obtain a cooperative game called an m-sequencing game. Formally, for an m-machine sequencing situation the corresponding m-sequencing game is defined by

for all coalitions

for all


Now we focus on the balancedness of m-sequencing games. Because 1-machine sequencing situations coincide with the sequencing situations of Section 2.3, 1-sequencing games are balanced. The following theorem, due to Hamers et al. (1999), shows that 2-sequencing games are balanced.

Theorem 2.15 Let be such that Then the corresponding 2-sequencing game is balanced.

Proof. Let be the jobs on machine 1 such that if and let be the jobs on machine 2 such that if Take such that for all Now, it is easy to see that is a σ-component additive game.

An open problem is the balancedness of m-sequencing games with more than two machines. However, balancedness results, due to Hamers et al. (1999), are obtained for two special classes.

Theorem 2.16 Let be the game that arises from an m-machine sequencing situation in which for all Then is balanced.

In Theorem 2.16 we assumed that all cost coefficients are equal to one. This implies that the class of m-sequencing games generated by the unweighted completion time criterion is a subclass of the class of balanced games. Clearly, the balancedness result also holds true in the case that all cost coefficients are equal to some positive constant. Furthermore, a similar result, due to Hamers et al. (1999), holds for situations with identical processing times instead of identical cost coefficients.

Theorem 2.17 Let be the game that arises from an m-machine sequencing situation in which for all Then is balanced.

The second model discussed in this section considers sequencing situations with two parallel machines. Contrary to the previous models in this paper, it is assumed that each agent owns two jobs to be processed, one on each machine. The costs of an agent depend linearly on the final completion time of his jobs, that is, on the time the agent has to wait until both his jobs have been processed.


Now the formal description of the model is provided. The set of the two machines is denoted by M = {1, 2}. There is a finite set of agents We assume that each agent has two jobs to be processed, one on each machine. Moreover, we assume that each machine starts processing at time 0, and by the vector we denote the processing times of the jobs of every agent: for the job to be processed on machine 1 and for the job to be processed on machine 2. We also assume that there is an initial scheme of the jobs on the machines, where and are the initial orders for the first and the second machine, respectively. Formally, and are bijections from N to where and mean that, initially, player has a job in position on machine 1 and a job in position on machine 2 in the initial queues before the machines. Let be the set of orders of N, i.e., bijections from N to Then denotes the set of possible schemes.

Every agent $i \in N$ has a linear cost function $c_i$ defined by $c_i(t) = \alpha_i t$, where $\alpha_i > 0$ and where $t$ represents the time player $i$ has to wait to have both his jobs processed. A 2 parallel machines sequencing situation is a 5-tuple $(M, N, \sigma_0, p, \alpha)$, and we will refer to it as a 2-PS situation.

Let $\sigma = (\sigma^1, \sigma^2)$ be a scheme. We denote by $C_i^1(\sigma^1)$ the completion time of the job of agent $i$ on the first machine with respect to the order $\sigma^1$. Similarly, $C_i^2(\sigma^2)$ denotes the completion time of the job of agent $i$ on the second machine with respect to $\sigma^2$. For every player $i \in N$ we consider the final completion time with respect to $\sigma$, that is, $C_i(\sigma) = \max\{C_i^1(\sigma^1), C_i^2(\sigma^2)\}$. Then the total costs of the agents with respect to $\sigma$ can be written as $\sum_{i \in N} \alpha_i C_i(\sigma)$. A scheme $\hat{\sigma}$ is called optimal for N if total costs are minimized, i.e., $\sum_{i \in N} \alpha_i C_i(\hat{\sigma}) = \min_{\sigma \in \Pi(N) \times \Pi(N)} \sum_{i \in N} \alpha_i C_i(\sigma)$.

Let $(M, N, \sigma_0, p, \alpha)$ be a 2-PS situation. The maximal cost savings of a set of players $S \subseteq N$ depend on the set of admissible rearrangements of this set of agents S. We call a scheme an


44 CURIEL, HAMERS, AND KLIJN

admissible rearrangement for S with respect to $\sigma_0$ if it satisfies the following two properties: two agents can only switch on one machine if all agents in between them on that machine with respect to the initial order on that machine are also members of S. Formally, given the initial order $\sigma_0^1$, an admissible order for S on machine 1 is a bijection $\sigma^1 \in \Pi(N)$ such that every agent $i \in N \setminus S$ has the same set of predecessors with respect to $\sigma^1$ as with respect to $\sigma_0^1$. Similarly, an admissible order for S on machine 2 is a bijection $\sigma^2 \in \Pi(N)$ such that every agent $i \in N \setminus S$ has the same set of predecessors with respect to $\sigma^2$ as with respect to $\sigma_0^2$.

Let $A_1(S)$ and $A_2(S)$ denote the set of admissible rearrangements of coalition S on machine 1 and machine 2, respectively. The set $A(S) = A_1(S) \times A_2(S)$ is called the set of admissible schemes for S. In other words, we consider a scheme to be admissible for S if each agent outside S has the same completion time on each machine as in the initial scheme. Moreover, the agents of S are not allowed to jump over players outside S.

Then, given a 2-PS situation, the corresponding 2-PS game $(N, v)$ is defined in such a way that the worth of a coalition is equal to the maximal cost savings the coalition can achieve by means of admissible schemes. Formally,

$v(S) = \max_{\sigma \in A(S)} \sum_{i \in S} \alpha_i \bigl( C_i(\sigma_0) - C_i(\sigma) \bigr)$

for all $S \subseteq N$, $S \neq \emptyset$. The following example illustrates that a 2-PS game need not be convex.
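To make these definitions concrete, the optimal scheme and the total-cost criterion can be computed by brute force for a small instance. The data below are illustrative (hypothetical, not taken from the chapter):

```python
from itertools import permutations

# Hypothetical 2-PS data: 3 agents, one job per machine each.
agents = [0, 1, 2]
p1 = {0: 2, 1: 1, 2: 3}      # processing times on machine 1
p2 = {0: 1, 1: 2, 2: 1}      # processing times on machine 2
alpha = {0: 1, 1: 2, 2: 1}   # linear cost coefficients

def completion_times(order, p):
    """Completion time of each agent's job when processed in 'order'."""
    t, C = 0, {}
    for i in order:
        t += p[i]
        C[i] = t
    return C

def total_cost(s1, s2):
    """Total cost sum_i alpha_i * max(C_i^1, C_i^2) of the scheme (s1, s2)."""
    C1, C2 = completion_times(s1, p1), completion_times(s2, p2)
    return sum(alpha[i] * max(C1[i], C2[i]) for i in agents)

# An optimal scheme minimizes total cost over all pairs of orders.
schemes = [(s1, s2) for s1 in permutations(agents) for s2 in permutations(agents)]
best = min(schemes, key=lambda s: total_cost(*s))
print(best, total_cost(*best))
```

For these numbers the agent with the highest cost coefficient is served first on both machines in the optimal scheme; the worth of a coalition would be obtained analogously by restricting the minimization to its admissible schemes.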

Example 2.18 Consider the 2-PS situation with N = {1, 2, 3, 4, 5, 6, 7, 8, 9} and the initial scheme given by:

Take S = {1, 3}, T = {1, 3, 4, 5, 6} and Optimal schemes are:



Let $(N, v)$ be the corresponding 2-PS sequencing game. Then $v(S \cup \{i\}) - v(S) > v(T \cup \{i\}) - v(T)$; hence, $(N, v)$ is not convex.

However, Calleja et al. (2001) show that simple 2-PS games, i.e., games that arise from situations in which all processing times and cost coefficients are equal to one, are balanced.

Theorem 2.19 Let $(M, N, \sigma_0, p, \alpha)$ be a simple 2-PS situation and let $(N, v)$ be the corresponding 2-PS game. Then $(N, v)$ is balanced.

Another class of sequencing games that arises from multiple machine sequencing situations is the class of FD-sequencing games, introduced in van den Nouweland et al. (1992). These games arise from flow shops with a dominant machine. The authors present an algorithm that provides an optimal order for these sequencing situations. If the first machine is the dominant machine, the class of FD-sequencing games coincides with the class of s-sequencing games. In all other cases, FD-sequencing games need not be balanced.

2.6 On Sequencing Games with more Admissible Rearrangements

This section discusses some classes of one-machine sequencing situations in which the set of admissible rearrangements is enlarged. First, we consider one-relaxed sequencing situations, introduced in van Velzen and Hamers (2001), a restriction of relaxed sequencing situations, discussed in Curiel et al. (1993), and the corresponding games. Next, we consider rigid sequencing situations and related games, also introduced in van Velzen and Hamers (2001).

One-relaxed sequencing situations are similar to the one-machine sequencing situations discussed in Section 2.3, i.e., a one-relaxed sequencing situation is described by $(N, r, \sigma_0, p, \alpha)$, where $N = \{1, \ldots, n\}$ is the set of players, $r \in N$ the relaxed player, $\sigma_0 \in \Pi(N)$ the initial order, $p$ the vector of processing times, and $\alpha$ the vector of cost coefficients. Also starting time and completion time are defined identically.

Now, let the player set N and the relaxed player $r$ be fixed. We want to determine the maximal cost savings of a coalition S whose members decide to cooperate. For this, we have to define which



rearrangements of a coalition S are admissible with respect to the initial order. At this point the relaxed player will be used, which creates the difference with s-sequencing games. To define admissible rearrangements we distinguish between two sets of coalitions: coalitions with $r \notin S$, i.e., coalitions that do not contain the relaxed player, and coalitions with $r \in S$, i.e., coalitions that contain the relaxed player. A bijection $\sigma \in \Pi(N)$ is called admissible for S with $r \notin S$ if it satisfies the conditions that the starting time of each agent outside the coalition S is equal to his starting time in the initial order, i.e., $t(\sigma, i) = t(\sigma_0, i)$ for all $i \in N \setminus S$, and that the agents of S are not allowed to jump over players outside S. Hence, a coalition S that does not contain the relaxed player has the same set of admissible rearrangements as a coalition in an s-sequencing game.

A bijection $\sigma \in \Pi(N)$ is called admissible for S with $r \in S$ if it satisfies the following two conditions:
(i) The starting time of each agent outside the coalition S is less than or equal to his starting time in the initial order: $t(\sigma, i) \leq t(\sigma_0, i)$ for all $i \in N \setminus S$.
(ii) If there exists a $j \in S$ with whom the relaxed player $r$ has switched positions, then the agents of $S \setminus \{r, j\}$ are not allowed to jump over players outside S, $j$ has the same predecessors with respect to $\sigma$ as $r$ has in $\sigma_0$, and $r$ has the same predecessors with respect to $\sigma$ as $j$ has in $\sigma_0$.

Hence, a rearrangement is admissible if player $r$ is the only player that can select another player in S and switch with this player (even if these players have to jump over players outside S), as long as the starting times of players outside S do not increase. Moreover, the jobs in S other than the job of $r$ can only switch positions within the connected parts of S, except the player that is selected by $r$. The set of admissible rearrangements of a coalition S is denoted by $A_r(S)$.

By defining the worth of a coalition S as the maximum cost savings coalition S can achieve by means of an admissible rearrangement, we obtain again a sequencing game. Formally, for a 1-relaxed sequencing situation $(N, r, \sigma_0, p, \alpha)$ the corresponding 1-relaxed sequencing game $(N, v)$ is defined by

$v(S) = \max_{\sigma} \sum_{i \in S} \alpha_i \bigl( C_i(\sigma_0) - C_i(\sigma) \bigr),$

where the maximum is taken over the admissible rearrangements $\sigma$ of S,



for all $S \subseteq N$.

Now, it can be shown (see van Velzen and Hamers, 2001) that a specific marginal vector is in the core of a 1-relaxed sequencing game.

Theorem 2.20 Let $(N, r, \sigma_0, p, \alpha)$ be a 1-relaxed sequencing situation and let $(N, v)$ be the corresponding game. Then $(N, v)$ is balanced.
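The kind of core-membership argument behind Theorem 2.20 can be sketched generically: for a small TU game, one checks that a marginal vector is efficient and that no coalition can improve upon it. The characteristic function below is hypothetical (not computed from a 1-relaxed situation):

```python
from itertools import combinations

# Hypothetical 3-player cost-savings game (not derived from a sequencing model).
v = {(): 0, (1,): 0, (2,): 0, (3,): 0,
     (1, 2): 4, (1, 3): 0, (2, 3): 3, (1, 2, 3): 8}
players = (1, 2, 3)

def marginal_vector(order):
    """Give each player his marginal contribution when entering in 'order'."""
    x, entered = {}, ()
    for i in order:
        prev = entered
        entered = tuple(sorted(entered + (i,)))
        x[i] = v[entered] - v[prev]
    return x

def in_core(x):
    """x is in the core iff it is efficient and no coalition can improve."""
    if sum(x[i] for i in players) != v[players]:
        return False
    return all(sum(x[i] for i in S) >= v[S]
               for r in (1, 2) for S in combinations(players, r))

x = marginal_vector((1, 2, 3))   # players enter in the order 1, 2, 3
print(x, in_core(x))
```

Balancedness proofs of the type referred to above exhibit one particular entry order whose marginal vector passes exactly this kind of check.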

However, the following example shows that in general 1-relaxed sequencing games need not be convex.

Example 2.21 Consider the 1-relaxed sequencing situation with

Take S = {4}, T = {3, 4} and Let $(N, v)$ be the corresponding 1-relaxed sequencing game. Then

and

Because $v(S \cup \{i\}) - v(S) > v(T \cup \{i\}) - v(T)$, we conclude that $(N, v)$ is not convex.

Other possible relaxations of the set of admissible rearrangements are discussed in Curiel et al. (1993).

Rigid sequencing situations are described similarly to the one-machine sequencing situations discussed in Section 2.3, i.e., a rigid sequencing situation is described by $(N, \sigma_0, p, \alpha)$, where $N = \{1, \ldots, n\}$ is the set of players, $\sigma_0 \in \Pi(N)$ the initial order, $p$ the vector of processing times, and $\alpha$ the vector of cost coefficients. Also starting time and completion time are defined analogously.

Now, we want to determine the maximal cost savings of a coalition S whose members decide to cooperate. For this, we have to define which rearrangements of coalition S are admissible with respect to the initial order. A bijection $\sigma \in \Pi(N)$ is called admissible for S if it satisfies the following two conditions:
(i) All players outside S have the same starting time: $t(\sigma, i) = t(\sigma_0, i)$ for all $i \in N \setminus S$.
(ii) In both orders the starting times coincide: the job in position $k$ starts at the same time under $\sigma$ as under $\sigma_0$, for all positions $k$.
Hence, players in S can only switch if their processing times are equal. The set of admissible rearrangements of a coalition S is denoted by $A_R(S)$.

By defining the worth of a coalition S as the maximum cost savings coalition S can achieve by means of an admissible rearrangement, we obtain a cooperative game called a rigid sequencing game. Formally, for a



rigid sequencing situation $(N, \sigma_0, p, \alpha)$ the corresponding rigid sequencing game $(N, v)$ is defined by

$v(S) = \max_{\sigma} \sum_{i \in S} \alpha_i \bigl( C_i(\sigma_0) - C_i(\sigma) \bigr)$

for all $S \subseteq N$, where the maximum is taken over the rearrangements $\sigma$ that are admissible for S.

Van Velzen and Hamers (2001) show that rigid sequencing games are balanced.

Theorem 2.22 Let $(N, \sigma_0, p, \alpha)$ be a sequencing situation and let $(N, v)$ be the corresponding rigid sequencing game. Then $(N, v)$ is balanced.

The proof is based on the fact that rigid sequencing games form a subclass of the class of permutation games. The latter class, introduced in Tijs et al. (1984), is a class of (totally) balanced games. In particular, if all processing times are equal, rigid games coincide with the class of permutation games. This immediately implies that rigid games in general are not convex, because not all permutation games are convex.
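The rigid admissibility condition can be illustrated by enumeration: the worth of a coalition is computed over the rearrangements that keep every starting time as in the initial order, so that only players with equal processing times can switch. The data are illustrative, not from the chapter:

```python
from itertools import permutations

# Illustrative one-machine data (hypothetical): identity initial order.
N = (1, 2, 3)
sigma0 = (1, 2, 3)          # initial order
p = {1: 2, 2: 2, 3: 1}      # processing times (players 1 and 2 are equal)
alpha = {1: 1, 2: 3, 3: 1}  # cost coefficients

def start_times(order):
    """Starting time of each player, and of each position, under 'order'."""
    t, by_player, by_position = 0, {}, []
    for i in order:
        by_player[i] = t
        by_position.append(t)
        t += p[i]
    return by_player, by_position

def admissible(order, S):
    s, pos = start_times(order)
    s0, pos0 = start_times(sigma0)
    outside_fixed = all(s[i] == s0[i] for i in N if i not in S)
    same_position_starts = pos == pos0   # forces equal processing times to swap
    return outside_fixed and same_position_starts

def rigid_value(S):
    """Maximal cost savings of S over its rigid-admissible rearrangements."""
    s0, _ = start_times(sigma0)
    cost = lambda s: sum(alpha[i] * (s[i] + p[i]) for i in S)
    savings = [cost(s0) - cost(start_times(o)[0])
               for o in permutations(N) if admissible(o, S)]
    return max(savings)

print(rigid_value((1, 2)))   # players 1 and 2 may swap
```

Here players 1 and 2 have equal processing times, so the coalition {1, 2} can move the player with the higher cost coefficient forward; coalitions mixing unequal processing times gain nothing.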

References

Borm, P., G. Fiestras-Janeiro, H. Hamers, E. Sánchez, and M. Voorneveld (1999): “On the convexity of games corresponding to sequencing situations with due dates,” CentER Discussion Paper 1999-49. To appear in: European Journal of Operational Research.

Calleja, P., P. Borm, H. Hamers, F. Klijn, and M. Slikker (2001): “On a new class of parallel sequencing situations and related games,” CentER Discussion Paper 2001-3.

Curiel, I. (1997): Cooperative game theory and applications. Dordrecht: Kluwer Academic Publishers.

Curiel, I., G. Pederzoli, and S. Tijs (1989): “Sequencing games,” European Journal of Operational Research, 40, 344–351.

Curiel, I., J. Potters, V. Rajendra Prasad, S. Tijs, and B. Veltman (1993): “Cooperation in one machine scheduling,” Methods of Operations Research, 38, 113–131.

Curiel, I., J. Potters, V. Rajendra Prasad, S. Tijs, and B. Veltman (1994): “Sequencing and cooperation,” Operations Research, 42, 566–568.



Curiel, I., and S. Tijs (1986): “Assignment games and permutation games,” Methods of Operations Research, 54, 323–334.

Gale, D. (1984): “Equilibrium in a discrete exchange economy with money,” International Journal of Game Theory, 13, 61–64.

Granot, D., H. Hamers, and S. Tijs (1999): “On some balanced, totally balanced and submodular delivery games,” Mathematical Programming, 86, 355–366.

Granot, D., and G. Huberman (1981): “Minimum cost spanning tree games,” Mathematical Programming, 21, 1–18.

Hamers, H., P. Borm, and S. Tijs (1995): “On games corresponding to sequencing situations with ready times,” Mathematical Programming, 70, 1–13.

Hamers, H., F. Klijn, and J. Suijs (1999): “On the balancedness of multi-machine sequencing games,” European Journal of Operational Research, 119, 678–691.

Hamers, H., J. Suijs, S. Tijs, and P. Borm (1996): “The split core for sequencing games,” Games and Economic Behavior, 15, 165–176.

Ichiishi, T. (1981): “Super-modularity: applications to convex games and the greedy algorithm for LP,” Journal of Economic Theory, 25, 283–286.

Klijn, F., S. Tijs, and H. Hamers (2000): “Balancedness of permutation games and envy-free allocations in indivisible good economies,” Economics Letters, 69, 323–326.

Maschler, M., B. Peleg, and L. Shapley (1972): “The kernel and bargaining set of convex games,” International Journal of Game Theory, 2, 73–93.

Owen, G. (1975): “On the core of linear production games,” Mathematical Programming, 9, 358–370.

Potters, J., I. Curiel, and S. Tijs (1992): “Traveling salesman games,” Mathematical Programming, 53, 199–211.

Potters, J., and H. Reijnierse (1995): “$\Gamma$-component additive games,” International Journal of Game Theory, 24, 49–56.

Shapley, L. (1971): “Cores of convex games,” International Journal of Game Theory, 1, 11–26.

Shapley, L., and M. Shubik (1972): “The assignment game I: The core,” International Journal of Game Theory, 1, 111–130.



Smith, W. (1956): “Various optimizers for single-stage production,” Naval Research Logistics Quarterly, 3, 59–66.

Tijs, S. (1981): “Bounds for the core and the τ-value,” in: O. Moeschlin and D. Pallaschke (eds.), Game Theory and Mathematical Economics. Amsterdam: North-Holland, 123–132.

Tijs, S. (1991): “LP-games and combinatorial optimization games,” Cahiers du Centre d’Etudes de Recherche Operationnelle, 34, 167–186.

Tijs, S., T. Parthasarathy, J. Potters, and V. Rajendra Prasad (1984): “Permutation games: another class of totally balanced games,” OR Spektrum, 6, 119–123.

van den Nouweland, A., M. Krabbenborg, and J. Potters (1992): “Flow-shops with a dominant machine,” European Journal of Operational Research, 62, 38–46.

van Velzen, B., and H. Hamers (2001): “Relaxations on sequencing games,” Working paper, Tilburg University.


Chapter 3

Game Theory and the Market

BY ERIC VAN DAMME AND DAVE FURTH

3.1 Introduction

Based on the assumption that players behave rationally, game theory tries to predict the outcome in interactive decision situations, i.e., situations in which the outcome is determined by the actions of all players and no player has full control. The theory distinguishes between two types of models, cooperative and non-cooperative. In models of the latter type, emphasis is on individual players and their strategy choices, and the main solution concept is that of Nash equilibrium (Nash, 1951). Since the concept as originally proposed by Nash is not completely satisfactory (it does not adequately take into account that certain threats are not credible), many variations have been proposed, see van Damme (2002b), but in their main idea these all remain faithful to Nash's original insight. The cooperative game theory models, instead, focus on coalitions and outcomes, and, for cooperative games, a wide variety of solution concepts have been developed, in which few unifying principles can be distinguished. (See other chapters in this volume for an overview.) The terminology that is used sometimes gives rise to confusion; it is not the case that in non-cooperative games players do not wish to cooperate and that in cooperative games players automatically do so. The difference instead is in the level of detail of the model; non-cooperative models assume that all possibilities for cooperation have


P. Borm and H. Peters (eds.), Chapters in Game Theory, 51–81.© 2002 Kluwer Academic Publishers. Printed in the Netherlands.


been included as formal moves in the game, while cooperative models are “incomplete” and allow players to act outside of the detailed rules that have been specified.

One of us had the privilege and the luck to follow undergraduate courses in game theory with Stef Tijs. There were courses in non-cooperative theory as well as in cooperative theory, and both were fun. When that author had passed his final (oral) exam, he was still puzzled about the relationships between the models and the solution concepts that had been covered, and he asked Stef a practical question: when to use a cooperative model and when to use a non-cooperative one? That author does not recall the answer, but he now considers the question to be a nonsensical one: it all depends on what one wants to achieve and what is feasible to do. Frequently, it will not be possible to write down an explicit non-cooperative game, and even if this is possible, one should be aware that players may attempt to violate the rules that the analyst believes to apply. On the other hand, a cooperative model may be pitched at too high a level of abstraction and may contain too little detail to allow the theorist to come up with a precise prediction about the outcome. In a certain sense, the large variety of solution concepts that one finds in cooperative game theory is a natural consequence of the model that is used being very abstract. It also follows from these considerations that cooperative and non-cooperative models are complements to each other, rather than competitors.

Our aim in this chapter is to demonstrate the complementarity between the two types of game theory models and to illustrate their usefulness for the analysis of actual markets. Section 3.2 provides a historical perspective and briefly discusses the views expressed in Von Neumann and Morgenstern (1953) and Nash (1953). Section 3.3 focuses on bargaining games, while Section 3.4 discusses oligopoly games and markets. Auctions are the topic of Section 3.5. Section 3.6 concludes.

3.2 Von Neumann, Morgenstern and Nash

As von Neumann and Morgenstern (1953) argue, there is not much point in forming a coalition in 2-person zero-sum games. In this case, both the cooperative and the non-cooperative theory predict the same outcome. Furthermore, in 2-person non-zero-sum games, there is only one coalition that can possibly form, and it will form when it is attractive to form it and the rules of the game do not stand in the way. The remaining




question then is how the players will divide the surplus, a question that we will return to in Section 3.3. The really interesting problems start to appear when there are at least three players. Von Neumann and Morgenstern (1953, Chapter V) argue that in this case the game cannot sensibly be analyzed without coalitions and side-payments, for, even if these are not explicitly allowed by the rules of the game, the players will try to form coalitions and make side payments outside of these formal rules.

To illustrate their claim, the founding fathers of game theory start from a simple non-cooperative game. Assume there are three players and each player can point to one of the others if he wants to form a coalition with him. In this case, the coalition $\{i, j\}$ forms if and only if $i$ points to $j$ and $j$ points to $i$. The rules also stipulate that if $\{i, j\}$ forms, the third player, $k$, has to pay 1 money unit to each of $i$ and $j$. Formally, this game of coalition formation can, therefore, be represented by the normal form (non-cooperative) game in Figure 3.1.

The game in Figure 3.1 has several pure Nash equilibria; it also has a mixed Nash equilibrium in which each player chooses each of the others with equal probability. Von Neumann and Morgenstern start their analysis from a non-cooperative point of view, i.e., as if the above matrix tells the whole story:

“Since each player makes his personal move in ignorance of those of the others, no collaboration of the players can be established during the course of play” (p. 223).
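The equilibrium claims about this coalition-formation game are easy to verify by enumeration. The sketch below encodes the payoff rule described above (payoff 0 when no mutual pair forms is our reading of the rules):

```python
from itertools import product
from fractions import Fraction

players = (1, 2, 3)

def payoffs(choice):
    """choice[i] is the player i points to; the pair {i, j} forms iff the
    pointing is mutual, and then the third player pays 1 to each of i and j."""
    u = {i: 0 for i in players}
    for i in players:
        j = choice[i]
        if j > i and choice[j] == i:                      # mutual pair {i, j}
            k = next(m for m in players if m not in (i, j))
            u[i] += 1; u[j] += 1; u[k] -= 2
    return u

# A pure Nash equilibrium: 1 and 2 point at each other (3's move is irrelevant).
profile = {1: 2, 2: 1, 3: 1}
assert all(payoffs(profile)[i] ==
           max(payoffs({**profile, i: j})[i] for j in players if j != i)
           for i in players)

def expected_payoff(i, target):
    """Expected payoff of i from pointing at 'target' while the other two
    players each point at one of their opponents with probability 1/2."""
    others = [m for m in players if m != i]
    opts = {m: [x for x in players if x != m] for m in others}
    return sum(Fraction(1, 4) * payoffs({i: target, others[0]: a, others[1]: b})[i]
               for a, b in product(opts[others[0]], opts[others[1]]))

# In the uniform mixed equilibrium every player is indifferent: payoff 0.
assert all(expected_payoff(i, t) == 0 for i in players for t in players if t != i)
print("pure and mixed equilibrium checks passed")
```

Each pointing choice yields the same expected payoff of zero against uniformly mixing opponents, which confirms the symmetric mixed equilibrium mentioned in the text.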

Nevertheless, von Neumann and Morgenstern argue that the whole point of the game is to form a coalition, and they conclude that, if players are prevented from doing so within the game, they will attempt to do so outside. They realize that this raises the question of why such outside agreements will be kept, and they pose the crucial question: what, if anything, enforces the “sanctity” of such agreements? They answer this question in the following way:


“There may be games which themselves—by virtue of the rules of the game (...) provide the mechanism for agreements and their enforcement. But we cannot base our considerations on this possibility since a game need not provide this mechanism; (...) Thus there seems no escape from the necessity of considering agreements concluded outside the game. If we do not allow for them, then it is hard to see what, if anything, will govern the conduct of a player in a simple majority game” (p. 223).

The reader may judge for himself whether, and in which circumstances, he considers this argument to be convincing. In any case, if one accepts the argument that a convincing theory cannot be formulated without auxiliary concepts such as “agreements” and “coalitions”, then one is also naturally led to the conclusion that side-payments will form an integral part of the theory. This latter argument is easily seen by considering a minor modification of the game of Figure 3.1. Suppose that if the coalition {1, 2} would form, the payoffs would be slightly in favor of player 1, and similarly if {1, 3} would form; what outcome of the game would result in this case? Von Neumann and Morgenstern argue that the advantage of player 1 is quite illusory: if player 1 would insist on getting the larger amount in the coalition {1, 2}, then 2 would prefer to form the coalition with 3, and similarly with the roles of the weaker players reversed. Consequently, in order to prevent the coalition of the two “weaker” players from forming, player 1 will offer a side payment to the one he is negotiating with. Consequently, von Neumann and Morgenstern conclude:

“It seems that what a player can get in a definite coalition depends not only on what the rules of the game provide for that eventuality, but also on the other (competing) possibilities of coalitions for himself and for his partner. Since the rules of the game are absolute and inviolable, this means that under certain conditions compensations must be paid among coalition partners; i.e., that a player must have to pay a well-defined price to a prospective coalition partner. The amount of the compensations will depend on what other alternatives are open to each of the players” (p. 227).

Obviously, if one concludes that coalitions and side-payments have to be considered in the solution, then the natural next step is to see whether the solution can be determined by these aspects alone, and it is that problem that Von Neumann and Morgenstern then set out to solve in the remaining 400 pages of their book.

John Nash refused to accept that it was necessary to include elements outside the formal structure of the game to develop a convincing theory of games. His thesis (Nash, 1950a), of which the mathematical core was published a bit later (Nash, 1951), opens with

“Von Neumann and Morgenstern have developed a very fruitful theory of two-person zero-sum games in their book Theory of Games and Economic Behavior. This book also contains a theory of $n$-person games of a type which we would call cooperative. This theory is based on an analysis of the interrelationships of the various coalitions which can be formed by the players of the game. Our theory, in contradistinction, is based on the absence of coalitions in that it is assumed that each participant acts independently, without collaboration or communication with any of the others. The notion of an equilibrium point is the basic ingredient in our theory.”

Hence, Nash was the first to introduce the formal distinction between the two classes of games. After having given the formal definition of a non-cooperative game, Nash then defines the equilibrium notion, he proves that any finite game has at least one equilibrium, he derives properties of equilibria, he discusses issues of robustness and equilibrium selection, and finally he discusses interpretational issues. Even though the thesis is short, it will be clear that it accomplishes a lot. In the remainder of this section, we give a brief sketch of the mathematical core of Nash's thesis, as it also allows us to introduce some notation.

A non-cooperative game is a tuple $\langle I, (S_i)_{i \in I}, (u_i)_{i \in I} \rangle$, where I is a nonempty set of players, $S_i$ is the strategy set of player $i$, and $u_i$ (where $u_i : \prod_{j \in I} S_j \to \mathbb{R}$) is the payoff function of player $i$. This formal structure had already been introduced by von Neumann and Morgenstern, who had also argued that, for finite $S_i$, it was natural to introduce mixed strategies. A mixed strategy $s_i$ of player $i$ is a probability distribution on $S_i$. In what follows we write $a_i$ to denote a generic pure strategy and we write $s_i(a_i)$ for the probability that $s_i$ assigns to $a_i$. If $s = (s_i)_{i \in I}$ is a combination of mixed strategies, we may write $u_i(s)$ for player $i$'s expected payoff when $s$ is played. Von Neumann and Morgenstern had proved the important result that for rational players it was sufficient to


look at expected payoffs. In other words, it is assumed that payoffs are von Neumann-Morgenstern utilities. Nash now defines an equilibrium point as a mixed strategy combination $s$ such that each player's mixed strategy maximizes his payoff if the strategies of the others (denoted by $s_{-i}$) are held fixed, hence

$u_i(s) = \max_{s_i'} u_i(s_i', s_{-i}) \quad \text{for all } i \in I.$


Nash's main result is that in finite games (i.e., I and all $S_i$ are finite sets) at least one equilibrium exists. The proof is so elegant that it is worthwhile to give it here. For $i \in I$, $a_i \in S_i$ and a mixed strategy profile $s$, write

$\varphi_{i a_i}(s) = \max\{0,\; u_i(a_i, s_{-i}) - u_i(s)\},$

and consider the map $f$ defined (componentwise) by

$f_{i a_i}(s) = \frac{s_i(a_i) + \varphi_{i a_i}(s)}{1 + \sum_{b_i \in S_i} \varphi_{i b_i}(s)};$

then $f$ is a continuous map that maps the convex set of all mixed strategy profiles into itself, so that, by Brouwer's fixed point theorem, a fixed point $s^*$ exists. It is then easily seen that such an $s^*$ is an equilibrium point of the game.
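The fixed-point property of this map is easy to check numerically for a 2×2 example. The sketch below uses matching pennies; the encoding and the names are ours:

```python
from fractions import Fraction

# Matching pennies: row wants to match, column wants to mismatch.
A = {('H', 'H'): 1, ('H', 'T'): -1, ('T', 'H'): -1, ('T', 'T'): 1}
u = [lambda a, b: A[(a, b)], lambda a, b: -A[(a, b)]]   # u[0] row, u[1] column
pures = ('H', 'T')

def expected(i, s1, s2):
    return sum(s1[a] * s2[b] * u[i](a, b) for a in pures for b in pures)

def nash_map(s1, s2):
    """One application of Nash's continuous map, whose fixed points are
    exactly the equilibria of the game."""
    new = []
    for i, (own, other) in enumerate([(s1, s2), (s2, s1)]):
        base = expected(i, s1, s2)
        def dev(a):  # payoff of the pure deviation a against the opponent's mix
            return (sum(other[b] * u[i](a, b) for b in pures) if i == 0
                    else sum(other[b] * u[i](b, a) for b in pures))
        phi = {a: max(Fraction(0), dev(a) - base) for a in pures}
        denom = 1 + sum(phi.values())
        new.append({a: (own[a] + phi[a]) / denom for a in pures})
    return new

half = Fraction(1, 2)
uniform = {'H': half, 'T': half}
assert nash_map(uniform, uniform) == [uniform, uniform]   # equilibrium = fixed point

biased = {'H': Fraction(1), 'T': Fraction(0)}
assert nash_map(biased, uniform) != [biased, uniform]     # non-equilibrium moves
```

At the unique equilibrium all the deviation gains vanish, so the map leaves the profile unchanged; at any non-equilibrium profile some gain is positive and the map shifts probability toward the profitable pure strategies.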

The section “Motivation and Interpretation” from Nash's thesis was not included in the published version (Nash, 1951). In retrospect, this is to be regretted, as it led to misunderstandings and delayed progress in game theory for some time. Nash provided two interpretations. The first, “rationalistic” interpretation argues why equilibrium is relevant when the game is played by fully rational players; the second, “mass action” interpretation argues that equilibrium might be obtained as a result of ignorant players learning to play the game over time when the game is repeated. We refer the reader to van Damme (1995) for further discussion of these interpretations; here we confine ourselves to the remark that the rationalistic interpretation, the view of a solution as a convincing theory of rationality, had already been proposed in von Neumann and Morgenstern, see Section 17.3 of their book. However, the founding fathers had not followed up on their own suggestion. In addition, they had come to the conclusion that it was necessary to consider set-valued solution concepts. Again, Nash was not convinced by their arguments and he found it a weak spot in their theory.


3.3 Bargaining

In this section we illustrate the complementarity between game theory's two approaches for the special case of bargaining problems.

As referred to already at the end of the previous section, the theory that von Neumann and Morgenstern developed generally allows multiple outcomes. Consider the special case of a simple bargaining problem. Assume there is one seller who has one object for sale, who does not value this object himself, and that there is one buyer who attaches value 1 to it, with both players being risk neutral. For what price will the object be sold? Von Neumann and Morgenstern discuss this problem in Section 61 of their book, where they come to the conclusion that “a satisfactory theory of this highly simplified model should leave the entire interval (i.e. in this case [0, 1]) available for” the price (p. 557).

The above is unsatisfactory to Nash. In Nash (1950b), he writes:


“In Theory of Games and Economic Behavior a theory of $n$-person games is developed which includes as a special case the two-person bargaining problem. But the theory developed there makes no attempt to find a value for a given $n$-person game, that is, to determine what it is worth to each player to have the opportunity to engage in the game (...) It is our opinion that these $n$-person games should have values.”

Nash then postulates that a value exists and he sets out to identify it. To do so, he uses the axiomatic method, that is,

“One states as axioms several properties that it would seem natural for the solution to have and then one discovers that the axioms actually determine the solution uniquely” (Nash, 1953, p. 129).

In his 1950b paper, Nash adopts the cooperative approach; hence, he assumes that the solution can be identified by using only information about what outcomes and coalitions are possible. Without loss of generality, let us normalize payoffs such that each player has payoff 0 if the players do not cooperate and that cooperation pays, i.e., there is at least one feasible payoff vector $u$ with $u_1 > 0$ and $u_2 > 0$. In this case, the solution then should just depend on the set of payoffs that are possible when players do cooperate. Let us write $f(S)$ for the solution when the set of feasible payoff vectors is S. This set S will be convex, as players can randomize.


Obviously, such trivialities as $f(S) \in S$ and $f_i(S) \geq 0$ for $i = 1, 2$ should be satisfied. In addition, the solution should be independent of which utility function is used to represent the given players' preferences, and it should be symmetric when the game is symmetric. All these things are undebatable. It is quite remarkable that only one additional axiom is needed to uniquely determine the solution for each bargaining problem. This is the Axiom of Independence of Irrelevant Alternatives (IIA):

If $T \subseteq S$ and $f(S) \in T$, then $f(T) = f(S)$.

Again, the proof of this major result is so elegant that we cannot resist giving it. Define $f(S)$ as that point in S that maximizes the product $u_1 u_2$ in S. Then by rescaling utilities we may assume $f(S) = (1, 1)$, and it follows that the line $u_1 + u_2 = 2$ is a supporting hyperplane for S at (1, 1). (It separates the convex set S from the convex set $\{u : u_1 u_2 > 1\}$.) Now let T be the set $\{u : u_1 + u_2 \leq 2\}$. Then, by symmetry, $f(T) = (1, 1)$; hence, since $S \subseteq T$ and $f(T) \in S$, IIA implies $f(S) = (1, 1)$. We have, therefore, established that there is only one solution satisfying the IIA axiom: it is the point where the product of the players' utilities is maximized. As a corollary we obtain that, in the simple seller-buyer example that we started out with, the solution is a price of $\frac{1}{2}$.

One interpretation of the above solution is that it will result when players can bargain freely. Obviously, if the players would be severely restricted in their bargaining possibilities, then a different outcome may result. For example, in the above buyer-seller game, if the seller can make a take-it-or-leave-it offer, the buyer will be forced to pay a price of (almost) one. Similarly, if the buyer would have all the bargaining power, the price would be (close to) zero. The advantage of non-cooperative modelling is that it allows one to analyze each specific bargaining procedure and to predict the outcome on the basis of detailed modelling of the rules; the drawback (or realism?) of that model is that the outcome may crucially depend on these details. The symmetry assumption in Nash's axiomatic model represents something like players having equal bargaining power, and this is obviously violated in these take-it-or-leave-it games. It is not clear how such asymmetric games could be relevant for players that are otherwise completely symmetric. Nash (1953) contains important modelling advice for non-cooperative game theorists. He writes that in the non-cooperative approach

“the cooperative game is reduced to a non-cooperative game. To do this, one makes the players' steps of negotiation in

Page 77: Borm_Chapters in Game Theory-In Honor of Stef Tijs

the cooperative game become moves in the non-cooperative model. Of course, one cannot represent all possible bargaining devices as moves in the non-cooperative game. The negotiation process must be formalized and restricted, but in such a way that each participant is still able to utilize all the essential strength of his position” (Nash, 1953, p. 129).

Nash also writes that the two approaches to solve a game are complementary and that each helps to justify and clarify the other. To complement his cooperative analysis, Nash studies the following simultaneous demand game: each player $i$ demands a certain utility level $u_i$ that he should get; if the demands are compatible, that is, if $u = (u_1, u_2) \in S$, then each player gets what he demanded; otherwise disagreement (with payoff 0) results. At first it seems that this non-cooperative game does not fulfill our aims; after all, any Pareto optimal outcome of S corresponds to a Nash equilibrium of the game, and so does disagreement. Nash, however, argues that one of these equilibria is distinguished in the sense that it is the only one that is robust against small perturbations in the data. Of course, this unique robust equilibrium is then seen to correspond to the cooperative solution of the game. Specifically, Nash assumes that players are somewhat uncertain about what outcomes are feasible. Let $P(u)$ be the probability that $u$ is feasible, with $P(u) = 1$ if $u \in S$ and $P$ a continuous function that falls rapidly to zero outside of S. With uncertainty given by $P$, player $i$'s payoff function is now given by $u_i P(u)$, and it is easily verified that any maximum of the map $u \mapsto u_1 u_2 P(u)$ is an equilibrium of this slightly perturbed game. Note that all these equilibria converge to the Nash solution (the maximum of $u_1 u_2$ on S) when $P$ tends to the characteristic function of S and that, for nicely behaved $P$, the perturbed game will only have equilibria close to the Nash solution. Consequently, only the Nash solution constitutes a robust equilibrium of the original demand game.

The above coincidence certainly is not an isolated result; the Nash solution also arises in other natural non-cooperative bargaining models. As an example, we discuss Rubinstein's (1981) alternating offer bargaining game. Consider the simple seller-buyer game that we started this section with and assume bargaining proceeds as follows, until agreement is reached or the game has come to an end. In odd-numbered periods the seller proposes a price to the buyer and the buyer responds by accepting or rejecting the offer; in even-numbered periods the roles of the players are reversed and the buyer has the

Page 78: Borm_Chapters in Game Theory-In Honor of Stef Tijs

initiative; after each rejection, the game stops with positive but small probability $\epsilon$. Rubinstein shows that this game has a unique (subgame perfect) equilibrium, and that, in equilibrium, agreement is reached immediately. Let $p_s$ (resp. $p_b$) be the price proposed by the seller (resp. the buyer). The seller realizes that if the buyer rejects his first offer, the buyer’s expected utility will be $(1 - \epsilon)(1 - p_b)$; hence, the seller will not offer a higher utility, nor a lower one. Consequently, in equilibrium we must have

$$1 - p_s = (1 - \epsilon)(1 - p_b),$$

and, by a similar argument,

$$p_b = (1 - \epsilon) p_s.$$

It follows that the equilibrium prices are given by

$$p_s = \frac{1}{2 - \epsilon}, \qquad p_b = \frac{1 - \epsilon}{2 - \epsilon},$$

and as $\epsilon$ tends to zero (when the first mover advantage vanishes and the game becomes symmetric), we obtain the Nash bargaining solution.
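The two equilibrium conditions can be checked mechanically. A minimal sketch (the function name and the use of exact fractions are ours; the model is the alternating-offer game described above, with the object worth 1 to the buyer and 0 to the seller):

```python
from fractions import Fraction

def rubinstein_prices(eps):
    """Solve the two equilibrium conditions of the alternating-offer game:
    1 - p_s = (1 - eps) * (1 - p_b)   (buyer indifferent after seller's offer)
    p_b     = (1 - eps) * p_s         (seller indifferent after buyer's offer)
    Closed form: p_s = 1/(2 - eps), p_b = (1 - eps)/(2 - eps)."""
    p_s = Fraction(1, 1) / (2 - eps)
    p_b = (1 - eps) * p_s
    # verify both indifference conditions exactly
    assert 1 - p_s == (1 - eps) * (1 - p_b)
    assert p_b == (1 - eps) * p_s
    return p_s, p_b

# As the breakdown probability vanishes, both prices tend to 1/2,
# the Nash bargaining outcome.
for eps in [Fraction(1, 2), Fraction(1, 10), Fraction(1, 1000)]:
    print(eps, rubinstein_prices(eps))
```

For instance, with $\epsilon = 1/10$ the seller asks 10/19 and the buyer offers 9/19, both already close to 1/2.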

We conclude this section with the observation that also in Von Neumann and Morgenstern (1953) both cooperative and non-cooperative approaches are mixed. In Section 3.2, we discussed the 3-player zero-sum game and the need to consider coalitions and side-payments. In Section 22.2 of the Theory of Games and Economic Behavior, the general such game is considered: if the coalition $\{i, j\}$ forms, then the third player has to pay to this coalition the amount $v(\{i, j\})$. What coalition will form and how will it split the surplus? To answer this question, von Neumann and Morgenstern consider a demand game. They assume that each player $i$ specifies a price $p_i$ for his participation in each coalition. Obviously, if $p_1$ is too large, players 2 and 3 will prefer to cooperate together rather than to form a coalition with player 1. Given $p_2$, player 1 cannot expect more than $v(\{1, 2\}) - p_2$ in $\{1, 2\}$, while, given $p_3$, he cannot expect more than $v(\{1, 3\}) - p_3$ in $\{1, 3\}$; hence, player 1 will price himself out of the market if

$$p_1 > \max\bigl(v(\{1, 2\}) - p_2,\; v(\{1, 3\}) - p_3\bigr).$$

Consequently, each player $i$ cannot expect more than the price $p_i$ determined by the system

$$p_1 + p_2 = v(\{1, 2\}), \qquad p_1 + p_3 = v(\{1, 3\}), \qquad p_2 + p_3 = v(\{2, 3\}).$$

If the game is essential and it pays to form a coalition, i.e., if $v(\{1, 2\}) + v(\{1, 3\}) + v(\{2, 3\}) > 0$, then the above system of three equations with three unknowns has a unique solution. Each player $i$ can reasonably demand $p_i$, and we can predict how the coalition that will form will split the surplus, but all three possible coalitions are equally likely; hence, we cannot say which coalition will form.
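The system of three linear equations has a simple closed-form solution: adding all three equations and dividing by two gives the total, from which each price follows by subtraction. A small sketch with hypothetical coalition values (the function name and the numbers are ours):

```python
def quotas(v12, v13, v23):
    """Solve p1 + p2 = v12, p1 + p3 = v13, p2 + p3 = v23.
    Adding all three equations gives p1 + p2 + p3 = (v12 + v13 + v23) / 2,
    from which each price follows by subtracting one equation."""
    total = (v12 + v13 + v23) / 2
    p1 = total - v23
    p2 = total - v13
    p3 = total - v12
    return p1, p2, p3

# Example with hypothetical coalition values: each pair of prices
# adds up exactly to that pair's coalition value.
p1, p2, p3 = quotas(v12=4, v13=6, v23=8)
print(p1, p2, p3)
```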

3.4 Markets

In this section, we briefly discuss the application of game theory to oligopolistic markets. In line with the literature, most of the discussion will be based on non-cooperative models, but we will see that cooperative analysis plays a role here as well.

In a non-cooperative oligopoly game, the players are firms, the strategy sets are compact and connected subsets of a Euclidean space, and the payoffs are the profits of the firms. As Nash’s existence theorem only applies to finite games, a first question is whether an equilibrium exists. Here we will confine ourselves to the specific case where the strategy set of player $i$, denoted $X_i$, is a compact interval in $\mathbb{R}$. Hence, in essence we assume that each firm sells just one product, of which it either sets the price or the quantity. We speak of a Cournot game when the strategies are quantities, and of a Bertrand game when the strategies are prices. Write $X$ for the Cartesian product of all $X_i$. For player $i$, his best response correspondence $B_i$ is the map that assigns to each $x \in X$ the set of all strategies in $X_i$ that maximize this player’s payoff against the others’ strategies. Note that in the two-player case, $B_i$ (viewed as a function of the opponent’s strategy) will typically be decreasing in the case of a Cournot game and increasing in the case of a Bertrand game. In the former case, we speak of strategic substitutes, in the latter of strategic complements. We write $B$ for the vector of all $B_i$.

When for each player the profit function is continuous on $X$ and quasi-concave in the player’s own strategy for fixed strategies of the others, then the conditions of the Kakutani fixed point theorem are satisfied ($B$ is an upper hemicontinuous map, for which all image sets are non-empty, compact and convex); hence, the oligopoly game has a Nash equilibrium. When products are differentiated, these conditions will typically be satisfied, but with homogeneous products, they may be violated. For example, in the Bertrand case, without capacity constraints and with no possibility to ration demand, the firm with the lowest price will typically attract all demand; hence, demand functions and profit functions are discontinuous. Dasgupta and Maskin (1986) contains useful existence theorems for cases like these. (See also Furth, 1986.) Of course, the equilibrium


is not necessarily unique.

The first formal analysis of an oligopolistic market was performed by Cournot, who analyzed a duopoly in which two firms sell a homogeneous (consumption) good to the consumers; see Cournot (1838). He writes:

“Let us now imagine two proprietors and two springs of which the qualities are identical, and which, on account of their similar positions, supply the same market in competition. In this case the price is necessarily the same for each proprietor. [...]; and each of them independently will seek to make this income as large as possible.” (Cournot, 1838, cited from Daughety, 1988, p. 63)

In Cournot’s model, a strategy of a firm is the quantity supplied to the market. Cournot argued that if firm $j$ supplies the quantity $x_j$, firm $i$ will have an incentive to supply the quantity that is the best response to $x_j$, and he defined an equilibrium as a situation in which each of the duopolists is at a best response. Hence, the solution that Cournot proposed, the Cournot equilibrium, can be viewed as a Nash equilibrium. Nevertheless, Cournot’s interpretation of the equilibrium seems to have been very different from the modern “rationalistic” interpretation of equilibrium. In retrospect, it seems to be more in line with the “mass action interpretation” of Nash. The following citations are revealing about the relationship between the works of Nash and Cournot:

“After one hundred and fifty years the Cournot model remains the benchmark of price formation under oligopoly. Nash equilibrium has emerged as the central tool to analyze strategic interactions and this is a fundamental methodological contribution which goes back to Cournot’s analysis.” (Vives, 1989, p. 511)

“After the appearance of the Nash equilibrium, what we witness is the gradual injection of a certain ambiguity into Cournot’s account in order to make it interpretable in terms of Nash. Following Nash, Cournot is reread and reinterpreted. This may have several different motivations, of which we here present concrete evidence of two. In one case, it is a way of anchoring, or stabilizing, the new and still floating idea of the Nash equilibrium. By showing that somebody in the past—and all the better if it is an eminent figure—seems to have had ‘the same idea’ in mind, the Nash equilibrium is given a history, it is legitimised, and the case for game theory is strengthened. In the other case, the motivation is to detract from the originality of Nash’s idea, maintaining that ‘it was always there’, i.e., Nash has said nothing new.” (Leonard, 1994, p. 505)

Bertrand (1883) criticized Cournot for taking quantities as strategic variables and suggested to take prices instead. It matters a lot for the outcome what the strategic variables are. In a Cournot game, a player assumes that the opponent’s quantity remains unchanged; hence, this corresponds to assuming that the opponent raises his price if I raise mine. Clearly such a situation is less competitive than one of Bertrand competition, in which a firm assumes that the opponent maintains his price when it raises its own price. Consequently, prices are frequently lower in the Bertrand situation. In fact, when the firms produce identical products, marginal costs are constant and there are no capacity constraints, already with two firms Bertrand price competition results in the competitive price, that is, the price is equal to the marginal cost in this case.

This result, that in a Bertrand game with homogeneous products and constant marginal cost the competitive price is already obtained with two firms, is sometimes called the Bertrand paradox, and it seems to have bothered many economists in the past. Edgeworth (1897) suggested that firms have capacity constraints and that such constraints might resolve the paradox; after all, with capacity constraints, the reaction of the opponent will be less aggressive, hence the market less competitive. However, capacity constraints raise another puzzle. Suppose one firm sets the competitive price, but is not able to supply total demand at that price. After this firm has sold its full capacity, a ‘residual’ market remains, and the other firm makes most profit when it charges the ‘residual monopoly price’ in this market. As Edgeworth observed, given the high price of the second firm, the first firm has an incentive to raise its price to just below this price. Obviously, at these higher prices, there is then a game of each firm trying to undercut the other, which drives prices down again. As a consequence, a pure strategy equilibrium need not exist. We are led to Edgeworth cycles; see also Levitan and Shubik (1972). However, we note here that there always exists an equilibrium in mixed strategies: firms set prices randomly, according to some distribution function. It may be shown, see Levitan and Shubik (1972), Kreps and Scheinkman (1983) and Osborne and Pitchik (1986), that for small capacities a Cournot type outcome results, i.e., supplies are sold against a market clearing price, while for sufficiently large capacities, the Bertrand outcome is the equilibrium, i.e., firms set the competitive price. For the remaining intermediate capacity levels, there is no equilibrium in pure strategies.
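To make the notions of best response and Cournot equilibrium concrete, here is a small sketch for a linear duopoly with inverse demand p = a − x1 − x2 and zero costs (a hypothetical specification chosen for illustration, not taken from the text). Iterating the two reaction maps converges to the unique Cournot equilibrium x1 = x2 = a/3:

```python
def best_response(a, x_other):
    """Maximize x * (a - x - x_other): the first-order condition gives
    x = (a - x_other) / 2, a decreasing (strategic substitutes) reaction."""
    return max(0.0, (a - x_other) / 2)

def cournot_equilibrium(a, rounds=100):
    """Iterate both reaction maps simultaneously; with slope -1/2 the
    iteration contracts to the unique fixed point x1 = x2 = a/3."""
    x1 = x2 = 0.0
    for _ in range(rounds):
        x1, x2 = best_response(a, x2), best_response(a, x1)
    return x1, x2

x1, x2 = cournot_equilibrium(a=3.0)
print(round(x1, 6), round(x2, 6))  # both converge to 1.0 (= a/3)
```

The same skeleton with an increasing reaction map would model strategic complements (Bertrand with differentiated products).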

Kreps and Scheinkman (1983) also analyze the situation where firms can choose their capacity levels. They assume that in the first period firms choose their capacity levels $k_1$ and $k_2$, and that next, knowing these capacities, in the second period firms play the Bertrand-Edgeworth price game. In this situation, high capacity levels are attractive as they allow a firm to sell a lot, but they are at the same time unattractive as they imply a very competitive market; in contrast, low levels imply high prices but low quantities. Kreps and Scheinkman (1983) show that, with efficient rationing in the second period, firms will choose the Cournot quantities in the first period and the corresponding market clearing prices in the second. Hence, the Cournot model can be viewed as a shortcut for the two-stage Bertrand-Edgeworth model. However, it turns out that the solution of the game depends on the rationing scheme, as Davidson and Deneckere (1986) have shown.

All the oligopoly games discussed thus far are games with imperfect information: players take their decisions simultaneously. Oligopoly games with perfect information, in which players take their decisions sequentially and are informed about all previous moves, are nowadays called Stackelberg games, after Stackelberg (1934). Moving sequentially is a way in which too intense competition might be avoided; for example, if players succeed in avoiding simultaneous price setting, prices will typically be higher. Von Stackelberg assumed that one of the players is the ‘first mover’, the leader, and the other is the follower. In Stackelberg’s model, first the leader decides and next, knowing what the leader has done, the follower makes his decision; hence, we have a game with perfect information. We believe that Stackelberg meant ‘leader’ and ‘follower’ more as behavior rules than as an exogenously imposed ordering of the moves; hence, in our view, he assumed asymmetries between different player types. Such an asymmetry results in a different outcome. The best a follower can do is to play a best response against the action of the leader. The leader, knowing this, will therefore play an action that maximizes his profit given the follower’s best-response behavior.

In a Cournot setting, this typically implies that the leader will produce more, and the follower will produce less, than his Cournot quantity; hence, the follower is in a weaker position, and it pays to lead: there is a first-mover advantage. Bagwell (1995), however, has argued that this first-mover advantage is eliminated if the leader’s quantity can only be observed with some noise. Specifically, he considers the situation where, if the leader chooses $x$, the follower observes $x$ with probability $1 - \epsilon$, while the follower sees a randomly drawn signal with the remaining positive probability $\epsilon$, where the noise has full support. As now the signal that the follower receives is completely uninformative, the follower will not condition on it; hence, it follows that in the unique pure equilibrium, the Cournot quantities are played. Hence, there is no longer a first mover advantage. Van Damme and Hurkens (1997), however, show that there is always a mixed equilibrium, that there are good arguments for viewing this equilibrium as the solution of the game, and that this equilibrium converges to the Stackelberg equilibrium when the noise vanishes.

We note that, in this approach to the Stackelberg game with perfect information, leader and follower are determined exogenously. Now it is easy to see that, in Cournot type games, it is most advantageous to be the leader, while in Bertrand type games, the follower position is most advantageous. Hence, the question arises which player will take up which player role. There is a recent literature that addresses this question of endogenous leadership. In this literature, there are two-stage models in which players choose the role they want to play in a timing game. The trade-off is between moving early and enjoying the advantage of commitment, or moving late and having the possibility to best respond to the opponent. Obviously, when firms are ‘identical’ there will be no way to determine an endogenous leader; hence, these models assume some type of asymmetry: endogenous leaders may emerge from different capacities, different efficiency levels, different information, or product differentiation. In cases like these, one could argue that player 1 will become the leader when he profits more from it than player 2 does. Writing $L_i$ and $F_i$ for the profits of player $i$ in the leader and the follower role, respectively, this means that player 1 will lead if

$$L_1 - F_1 > L_2 - F_2,$$

or equivalently, when

$$L_1 + F_2 > L_2 + F_1;$$


in other words, the leadership will be determined as if players had joint profits in mind. Based on such considerations, many papers come to the conclusion that the dominant or most efficient firm will become the leader; see Ono (1982), Deneckere and Kovenock (1992), Furth and Kovenock (1993), and van Cayseele and Furth (1996). To get some intuition for this result, let us consider a simple asymmetric version of the 2-firm Bertrand game. Assume that the product is perfectly divisible, that the demand curve is given by $D(p) = 1 - p$ for $p \leq 1$ and $D(p) = 0$ for $p > 1$, and that firm 2 has a capacity constraint of $k < \frac{1}{2}$. If firm 2 acts as a leader, firm 1 will undercut and firm 2’s profit is zero. Firm 2’s profit is also zero if price setting is simultaneous, and in this case firm 1’s profit is zero as well. If firm 1 commits to be the leader, he will be undercut by firm 2, but given that firm 2 has a capacity constraint, firm 1 is not hurt that much by it. Firm 1 will simply commit to the monopoly price $\frac{1}{2}$, and profits will be $\frac{1}{2}(\frac{1}{2} - k)$ for firm 1 and $\frac{1}{2}k$ for firm 2. Hence, only in the case where firm 1 takes up the leadership position will profits be positive for each firm, and we may expect firm 1 to take up the leadership position.
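The profit comparison in this example can be sketched as follows, assuming (as in the example) zero costs, demand 1 − p, firm 1 committing to the monopoly price 1/2, and firm 2 undercutting slightly and selling its capacity k < 1/2:

```python
def leadership_profits(k):
    """Profits in the asymmetric Bertrand game with demand D(p) = 1 - p,
    zero costs, and a capacity constraint k < 1/2 on firm 2, when firm 1
    leads at the monopoly price 1/2:
    - firm 2 undercuts slightly and sells its full capacity k;
    - firm 1 serves the residual demand 1/2 - k at price 1/2.
    (If firm 2 leads, or prices are set simultaneously, both profits
    are competed down to zero.)"""
    p = 0.5                    # monopoly price for D(p) = 1 - p, zero cost
    firm1 = p * (0.5 - k)      # residual demand at price 1/2
    firm2 = p * k              # capacity sold just below 1/2
    return firm1, firm2

f1, f2 = leadership_profits(0.2)
print(f1, f2)  # both positive: leadership by firm 1 benefits both firms
```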

Van Damme and Hurkens (1996, 1999) argue that the above profitcalculation is not convincing and that the leadership position shouldresult from individual risk considerations. Be that as it may, the inter-esting result that they derive is that these risk considerations do lead toexactly the above inequalities, hence, van Damme and Hurkens obtainthat both in the price and in the quantity game, the efficient firm willlead. Note then, that the efficient firm obtains the most preferred posi-tion in the case of Cournot competition, but not in the case of Bertrandcompetition.

Above, we already briefly referred to the work of Edgeworth onBertrand competition with capacity constraints. Edgeworth was alsothe one who introduced the Core as the concept that models unbridledcompetition. Shubik (1959) rediscovered this concept in the contextof cooperative games, and the close relation between the Core of thecooperative exchange game and the competitive outcome was soon dis-covered. Hence, also here we see the close relation between cooperativeand non-cooperative theory. In fact, it is perhaps most beautiful in thetheory of matching, see Roth and Sotomayor (1990). In the remain-der of this section, we illustrate this relationship for the most simple3-person exchange game, a game that, incidentally, also was analyzedin von Neumann and Morgenstern (1953). The founding fathers indeedalready mention the possibility of applying their theory in the context


of an oligopoly. Specifically, in Sections 62.1 and 62.4 of their book, they calculated their solution, the Stable Set, of a three-person non-constant sum game that arises in a situation with one buyer and two sellers. Shapley (1958) generalized their analysis to a game with $m$ buyers and $n$ sellers; see also Shapley and Shubik (1969). We will confine ourselves to the case with $m = 1$ and $n = 2$. Furthermore, for simplicity, we will assume that the sellers are identical, that they each have one single indivisible object for sale, that they do not value this object, and that the buyer is willing to pay 1 for it. Denoting the consumer by player 3, the situation can be represented by the (cooperative) 3-person characteristic function game given by $v(S) = 1$ if $3 \in S$ and $|S| \geq 2$, and $v(S) = 0$ otherwise. In this game, the Core consists of a single allocation, (0, 0, 1), corresponding to the consumer buying from either producer for a price of 0; hence, the Core coincides with the competitive outcome, illustrating the well-known Core equivalence theorem.


When, in the mid 1970s, one of us took his first courses in game theory with Stef Tijs, he considered the solution prescribed by the Core in the above game to be very natural. As a consequence, he was bothered very much by the fact that the Shapley value of this game is not an element of the Core and that it predicts a positive expected utility for each of the sellers. (As is well known, the Shapley value of this game is $(1, 1, 4)/6$.) Why could the sellers expect a positive utility in this game? The answer is in fact quite simple: the sellers can form a cartel! Obviously, once the sellers realize that their profits will be competed away if they do not form a cartel, they will try to form one. Hence, in this game, coalitions arise quite naturally and, as a consequence, the Core actually provides a misleading picture. If the sellers succeed in forming a stable coalition, they transform the situation into a bilateral monopoly, in which case the negotiated price will be $\frac{1}{2}$. By symmetry, each of the sellers will get $\frac{1}{4}$ in this case. But, anticipating this, the consumer will try to form a coalition with any of the sellers, if only to prevent these sellers from entering into a cartel agreement. As von Neumann and Morgenstern (1953) already realized, and as we discussed in Section 3.2, the game is really one in which players will rush to form a coalition, and the price that the buyer will pay will depend on the ease with which various coalitions can form. But then the outcome will be determined by the coalition formation process; hence, following Nash’s advice, non-cooperative modelling should focus on that process.

Let us here study one such process. Let us assume that the players bump into each other at random and that, if negotiations between two players are not successful (which, of course, will not happen in equilibrium), the match is dissolved and the process starts afresh. The remaining question is what price, $p$, the consumer will pay to the seller if a buyer-seller coalition is formed. (By symmetry, this price does not depend on which seller the buyer is matched with.) The outcome is determined by the players’ outside options, i.e., by what players can expect if the negotiations break down. The next table provides the utilities players can expect, depending on the first coalition that is formed:

    first coalition    seller 1    seller 2    buyer
    {1, 2}             1/4         1/4         1/2
    {1, 3}             p           0           1 - p
    {2, 3}             0           p           1 - p

For the coalition {1, 3}, the outside option of the seller is $\frac{1}{3}(\frac{1}{4} + p)$, while the buyer’s outside option is $\frac{1}{6} + \frac{2}{3}(1 - p)$. (This follows since all three 2-person coalitions are equally likely to form in the next round.) The coalition {1, 3} loses its surplus of 1 if it does not come to an agreement; hence, it will split this surplus evenly, after granting each member his outside option. It follows that the price must satisfy

$$p - \tfrac{1}{3}\bigl(\tfrac{1}{4} + p\bigr) = (1 - p) - \tfrac{1}{6} - \tfrac{2}{3}(1 - p).$$

Hence, $p = \frac{1}{4}$. Since all coalitions are equally likely, the expected payoff of a seller equals $\frac{1}{6}$, while the buyer’s expected payoff equals $\frac{2}{3}$. The conclusion is that expected payoffs are equal to the Shapley value of the game. Furthermore, the outcome, naturally, lies outside of the Core. We refer the reader who thinks that we have skipped over too many details in the above derivation to Montero (2000), where all such details are filled in.
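These numbers can be verified mechanically. The following sketch computes the Shapley value of the game v by enumerating all player orders, and checks it against the expected payoffs of the matching process with p = 1/4 (the function names are ours):

```python
from itertools import permutations
from fractions import Fraction

def v(S):
    """Characteristic function: a coalition is worth 1 iff it contains
    the buyer (player 3) and at least one seller."""
    return 1 if 3 in S and len(S) >= 2 else 0

def shapley():
    """Average each player's marginal contribution over all 6 orders."""
    players = (1, 2, 3)
    value = {i: Fraction(0) for i in players}
    for order in permutations(players):
        coalition = set()
        for i in order:
            before = v(coalition)
            coalition.add(i)
            value[i] += v(coalition) - before
    return tuple(value[i] / 6 for i in players)

print(shapley())  # equals (1/6, 1/6, 2/3), i.e. (1, 1, 4)/6

# Expected payoffs of the random-matching process with price p = 1/4:
p = Fraction(1, 4)
seller = (Fraction(1, 4) + p + 0) / 3            # cartel, own sale, other's sale
buyer = (Fraction(1, 2) + (1 - p) + (1 - p)) / 3
assert (seller, seller, buyer) == shapley()
```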

Of course, the exact price will depend on the details of the matching process, and different processes may give rise to different prices, hence to different cooperative solution concepts. Viewed in this way, also von Neumann and Morgenstern’s solution of this game appears quite natural. As they write (von Neumann and Morgenstern, 1953, pp. 572, 573), the solution consists of two branches: either the sellers compete (and then the buyer gets the surplus), a situation they call the classical solution, or the sellers form a coalition, in which case they will have to agree on a definite rule for how to split the surplus obtained; as different rules may be envisaged, multiple outcomes are possible.


3.5 Auctions

In this section, we illustrate the usefulness of game theory for the understanding of real-life auctions. The section consists of three parts. First, we briefly discuss some auction theory. Next, we discuss an actual auction and provide a non-cooperative analysis to throw light on a policy issue. In the third part, we demonstrate that, even in this non-cooperative domain, insights from cooperative game theory are highly relevant.

Four basic auction forms are typically distinguished. The first type isthe Dutch auction. If there is one object for sale, the auction proceeds bythe seller starting the auction clock and continuously lowering the priceuntil one of the bidders pushes the button, or shouts “mine”; that bidderthen receives the item for the price at which he stopped the clock. Thesecond basic auction form is the English (ascending) auction in whichthe auctioneer continuously increases the price until one bidder is left;this bidder then receives the item at the price where his final competitordropped out. The two basic static auction forms are the sealed bidfirst price auction and the Vickrey auction (Vickrey, 1961). In the firstprice auction, bidders simultaneously and independently enter their bids,typically in sealed envelopes, and the object is awarded to the highestbidder who is required to pay his bid. In the Vickrey auction, playersenter their bids in the same way, and the winner is again the one withthe highest bid, however, the winner “only” pays the second highest bid.
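The two sealed-bid formats differ only in the payment rule; a minimal sketch (function name and numbers are ours, and ties are ignored for simplicity):

```python
def sealed_bid_outcome(bids, rule):
    """bids: dict mapping bidder -> bid. Returns (winner, price).
    'first':  the highest bidder wins and pays his own bid;
    'second': the highest bidder wins and pays the second-highest
              bid (the Vickrey auction)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    if rule == "first":
        price = bids[winner]
    elif rule == "second":
        price = bids[ranked[1]]
    else:
        raise ValueError("unknown rule")
    return winner, price

bids = {"A": 10, "B": 7, "C": 4}
print(sealed_bid_outcome(bids, "first"))   # ('A', 10)
print(sealed_bid_outcome(bids, "second"))  # ('A', 7)
```

Note that the Dutch auction is strategically equivalent to the first-price rule, while the English auction ends at (roughly) the second-highest willingness to pay, as in the Vickrey rule.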

As auctions are conducted by following explicit rules, they can be represented as (non-cooperative) games. Milgrom and Weber (1982) have formulated a fairly general auction model. In this model, there are $n$ bidders, who occupy symmetric positions. The game is one with incomplete information: each bidder $i$ has a certain type $t_i$ that is known only to this bidder himself. In addition, there may be residual uncertainty, represented by $t_0$, where 0 denotes the chance player. If $t = (t_0, t_1, \ldots, t_n)$ is the vector of types (including that of nature), then $t$ is called the state of the world, and $t$ is assumed to be drawn from a commonly known distribution F on a set that is symmetric with respect to the last $n$ arguments. (Symmetry thus means that F is invariant with respect to permutations of the bidders.) In addition to his type, each player has a value function $v_i(t)$, where again the assumption of symmetry is maintained, i.e., if $t_i$ and $t_j$ are interchanged, then $v_i$ and $v_j$ are interchanged as well. Under the additional assumption of affiliation (which roughly states that a higher value of one signal makes higher values of the other signals more likely), Milgrom and Weber derive a symmetric equilibrium for this model. For


the Vickrey auction, the optimal bid $b(t_i)$ is characterized by

$$b(t_i) = E\bigl[v_i \mid t_i,\; \max_{j \neq i} t_j = t_i\bigr],$$

where $\max_{j \neq i} t_j$ denotes the largest component of the vector $(t_1, \ldots, t_n)$ excluding $t_i$. In words, in the Vickrey auction, the player bids the expected value of the object to him, conditional on his value being the highest, and this value also being equal to the second highest value. For the Dutch (first price) auction, the optimal bid is lower, and the formula will not be given here. (See Wilson, 1992.) We also note that, in addition to giving insights into actual auctions, game theory has also contributed to characterizing optimal auctions, where optimality either is defined with respect to seller revenue or with respect to some efficiency criterion (Myerson, 1981; Wilson, 1992; Klemperer, 1999).

In many cases, the seller will have more than one item for sale. In case the objects are identical (such as in the case of shares, or treasury bills), the generalizations of the model and the theory are relatively straightforward: only one price is relevant; players can indicate how much they demand at each possible price, and the seller can adjust the price (either upward, or downward, or in a static sealed bid format) to equate supply and demand. The issue is more complicated in case the objects are heterogeneous. With $k$ objects, the relevant price region would be $\mathbb{R}^k_+$ and, of course, one could imagine bidders expressing their demand for all possible price vectors, but this may get very complicated. Alternatively, each bidder expresses bids for collections of items; hence, if $S \subseteq \{1, \ldots, k\}$, then $b_i(S)$ is the maximum that bidder $i$ is willing to pay if he is awarded set $S$, where the auction rule would be completed by a winner determination rule. At present, there is active research on such combinatorial auctions. In connection with spectrum auctions in the US, game theorists designed the simultaneous multi-round ascending auction, a generalization of the English auction. In this format, the objects are sold simultaneously in a sequence of rounds, with at least one price increasing from one round to the next. In its most elementary form, each bidder can bid on all items and the auction continues to raise prices as long as at least one new bid is made; when the auction ends, the current highest bidders are awarded the objects at the respective prices. To speed up the auction, activity rules may be introduced that force the bidders to bid seriously already early on. We refer to Milgrom (2000) for a more detailed description and analysis.
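The elementary form of this format can be sketched as a loop. The straightforward bidding behavior assumed here (each bidder bids up to his value on every object he is not currently winning) only illustrates the rules; it is not a claim about equilibrium play:

```python
def ascending_auction(values, increment=1):
    """values[bidder]: tuple of that bidder's values for the objects.
    Each round, every bidder raises the price by `increment` on each
    object on which he is not the standing high bidder, as long as the
    new price does not exceed his value. The auction ends in a round
    with no new bids; standing high bidders win at the current prices."""
    n_objects = len(next(iter(values.values())))
    price = [0] * n_objects
    holder = [None] * n_objects
    while True:
        new_bid = False
        for bidder, vals in values.items():
            for j in range(n_objects):
                if holder[j] != bidder and price[j] + increment <= vals[j]:
                    price[j] += increment
                    holder[j] = bidder
                    new_bid = True
        if not new_bid:
            return holder, price

# Two bidders, two objects (hypothetical values): each object ends up
# with the bidder who values it most, at roughly the rival's value.
holder, price = ascending_auction({"A": (10, 2), "B": (6, 8)})
print(holder, price)
```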

Having briefly gone over the theory, our aim in the remainder ofthis section is to show how game theory can contribute to better insight


and to more rational discussion in several policy areas. Our examples are drawn from the Dutch policy context, and our first example relates to electricity. Electricity prices in the Netherlands are high; at least, they are higher than in neighboring Germany. As a result of the price difference, market parties are interested in exporting electricity from Germany to the Netherlands. Such imports into the Netherlands are constrained by the limited capacity of the interconnectors at the border, which in turn implies that the price difference can persist. In 2000, it was decided to allocate this scarce capacity by means of an auction; on the website www.tso-auction.org, the interested reader can find the details about the auction rules and the auction outcomes. We discuss here a simplified (Cournot) model that focuses on some of the aspects involved.

As always in auction design, decisions have to be made about what is to be auctioned, whether the parties are to be treated symmetrically, and what the payment mechanism is going to be. Of course, these decisions have to be made so as to contribute optimally to the ultimate goal. In this specific case, the goal may be taken to be as low a price for electricity in the Netherlands as possible. The simple point now is that adopting this goal implies that players cannot be treated symmetrically. The reason is that they are not in symmetric positions: some of them have electricity generating capacity in the Netherlands, while others do not, and members of the first group may have an incentive to block the interconnector in order to guarantee a higher price for the electricity that is produced domestically. To illustrate this possibility, we consider a simple example.

Suppose there is one domestic producer of electricity, who can produce at constant marginal cost $c$. Furthermore, assume that demand is linear, $D(p) = a - p$. If the domestic producer is shielded from competition, and is not regulated, he will produce the monopoly quantity $x_m$, found by solving

$$\max_x \; (a - x - c)\, x.$$

Hence, the quantity, the price and the profit will be given by

$$x_m = \frac{a - c}{2}, \qquad p_m = \frac{a + c}{2}, \qquad \pi_m = \frac{(a - c)^2}{4}.$$

Assume that the interconnector has capacity $k < a - c$ and that in the neighboring country electricity is also produced at marginal cost $c$. In


contrast to the home country, the foreign country is assumed to have a competitive market, so that the price in the foreign country equals $c$. As a result $p_m > c$, and there is interest in transporting electricity from the foreign to the home country. If all interconnector capacity would be available for competitors of the monopolist, the monopolist would instead solve the following problem:

$$\max_x \; (a - x - k - c)\, x,$$

since, if he produces $x$, the total production is $x + k$ and the price is $a - x - k$. The quantity that the monopolist produces in this competitive situation is

$$x_c = \frac{a - c - k}{2},$$

while the resulting price and the profit for the monopolist are given by

$$p_c = \frac{a + c - k}{2}, \qquad \pi_c = \frac{(a - c - k)^2}{4}.$$

The above calculations allow us to compute how much the capacity is worth for the competing (foreign) generators. If they acquire the capacity, they can produce electricity at price $c$ and sell it at price $p_c$, thus making a margin of $p_c - c = \frac{1}{2}(a - c - k)$ on $k$ units, resulting in a profit of

$$\pi_e = \frac{k\,(a - c - k)}{2}.$$

At the same time, the loss in profit for the monopolist is given by

$$\pi_m - \pi_c = \frac{(a - c)^2 - (a - c - k)^2}{4} = \frac{k\,\bigl(2(a - c) - k\bigr)}{4}.$$

We see that

$$\pi_m - \pi_c > \pi_e,$$

so that the capacity is worth more to the monopolist. The intuition for this result is simple, and is already given in Gilbert and Newbery (1982): competition results in a lower price; this price is relevant for all units that one produces, hence, the more units a player produces, the more he is hurt. It follows that, if the interconnector capacity would be sold in an ordinary auction, with all players being treated equally, then all the capacity would be bought by the home producer, who would


then not use it. Consequently, a simple standard auction would not contribute to the goal of realizing a lower price in the home electricity market.
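The profit comparison can be checked numerically. The sketch below is an illustration we add here, using made-up parameter values for an assumed linear inverse demand p(q) = a − bq, a common marginal cost c, and interconnector capacity k; it confirms that blocking the interconnector is worth more to the incumbent than using it is worth to the entrants:

```python
# Made-up illustrative parameters: inverse demand p(q) = a - b*q,
# marginal cost c (home and abroad), interconnector capacity k.
a, b, c, k = 100.0, 1.0, 20.0, 10.0

def profit(q, imports=0.0):
    # Incumbent's profit when it produces q and competitors import `imports`.
    price = a - b * (q + imports)
    return (price - c) * q

# Monopoly benchmark: q_m maximizes profit(q, 0).
q_m = (a - c) / (2 * b)

# With entrants importing k units, the incumbent's best response shrinks by k/2.
q_c = (a - c) / (2 * b) - k / 2
price_c = a - b * (q_c + k)

entrant_value = (price_c - c) * k              # margin earned on the k units
incumbent_loss = profit(q_m) - profit(q_c, k)  # profit lost to competition

assert incumbent_loss > entrant_value  # capacity is worth more to the incumbent
```

With these numbers the entrants would earn 350 on the capacity while the incumbent loses 375, so the incumbent outbids them.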

The above argument was taken somewhat into account by the designers of the interconnector auction; however, it was not taken to its logical limit. In the actual auction rules, no distinction is made between those players that do have generating capacity at home and those that do not: a uniform cap of 400 MW of capacity is imposed on all players (hence, the rule is that no player can have more than 400 MW of interconnector capacity at its disposal, which corresponds to some 25 percent of all available capacity). This rule has obvious drawbacks. Most importantly, the price difference arises because of the limited interconnector capacity that is available; hence, one would want to increase that capacity. As long as the price difference is positive, and sufficiently large, market parties will have an incentive to build extra interconnector capacity: the price margin will be larger than the investment cost. However, in such a situation, imposing a cap on the amount of capacity that one may hold may actually destroy the incentive to invest. Consequently, it would be better to impose the cap only on those players that do have generating capacity in the home country, and that profit from interconnector capacity being limited.

To prevent players with home generating capacity from buying, but not using, interconnector capacity, the auction rules include a "use it or lose it" clause. Clearly, such clauses are effective in ensuring that the capacity is used; however, they need not be effective in guaranteeing a lower price in the home electricity market. This can easily be seen in the explicit example that was calculated above. Suppose that a "use it or lose it" clause were imposed on the monopolist; how would it change the value of the interconnector capacity for this monopolist? Note that the value is not changed for the foreign competitors, as they will use the capacity anyway. The important insight now is that the clause also does not change the value for the monopolist: if the monopolist is forced to use the interconnector to import a number of units, he will simply adjust by using that many units less of his domestic production capacity. By behaving in this way, he will still produce the monopoly quantity in total and obtain monopoly profits. Hence a "use it or lose it" clause has no effect, neither on the value of the interconnector for the incumbent, nor on the value for the entrants. Therefore, the value is larger for the incumbent, the incumbent will acquire the capacity and the price will remain


unchanged; hence, the benefits of competition will not be realized.
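Under the same made-up parametrization as before (assumed linear inverse demand p(q) = a − bq, common marginal cost c, capacity k), a short check illustrates why the clause is toothless: importing k units while cutting domestic output by k leaves total output, the price, and hence profit at their monopoly levels:

```python
# Made-up illustrative parameters, as in the earlier sketch.
a, b, c, k = 100.0, 1.0, 20.0, 10.0

q_m = (a - c) / (2 * b)                      # monopoly output
monopoly_profit = (a - b * q_m - c) * q_m

# Forced to "use" k units of interconnector capacity (imported at the foreign
# competitive price c), the incumbent produces q_m - k at home; total supply
# to the home market is still q_m.
domestic = q_m - k
total = domestic + k
clause_profit = (a - b * total - c) * total

assert clause_profit == monopoly_profit  # the clause changes nothing
```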

This simple example has shown that the design that was adopted can be improved: it would be better to impose capacity caps asymmetrically, and it should not be expected that "use it or lose it" clauses are very effective in lowering the price. Of course, the actual situation is much richer in detail than our model. However, the actual situation is also very complicated, and one has to simplify in order to come to better grips with the overall situation. We hope it is clear that a simple model like the one that we have discussed in this section provides an appropriate starting point for coming to grips with a rather complicated situation.

Our second example relates to the high stakes telecommunications auctions that recently took place in Europe. During 2000, various European countries auctioned licenses for third generation mobile telephony (UMTS) services. Already a couple of years earlier, some of these countries had auctioned licenses for second generation (DCS-1800) services. In the remainder of this section, we briefly review some aspects of the Dutch auctions. For further detail, see Van Damme (1999, 2001, 2002a).

Van Damme (1999) describes the Dutch DCS-1800 auction and argues that, as a consequence of the time pressure imposed on Dutch officials by the European Commission, that auction was badly designed. The main drawback was that the available spectrum was divided into very unequal lots: 2 large ones of 15 MHz each and 16 small ones of on average 2.5 MHz, which were sold simultaneously by using a variant of the multiround ascending auction that had been pioneered in the US. The rules stipulated that newcomers could bid on all lots, but that incumbents (at the time, KPN and Libertel) could bid only on the small lots. In this situation, new entrants had the choice between bidding on large lots, or trying to assemble a sufficient number of small lots so that enough spectrum would be obtained in total to create a viable national network. The latter strategy was risky. First of all, by bidding on the small lots one was competing with the incumbents. Secondly, one faced the risk of not obtaining enough spectrum. This is what is called in the literature "the exposure problem": if, say, 6 small lots were needed for a viable network, one ran the risk of finding out that one could not obtain all six because of the intensity of competition; one might be left with three lots, which would be essentially worthless. (At the time of the auction, it was not clear whether such blocks could be resold, the auction rules stating that this was up to the Minister to decide.)

The structure of supply that was chosen had an interesting consequence. Most newcomers found it too risky to bid on the small lots; hence, bidding concentrated on the large lots and the price was driven up there. In the end, the winners of the large lots, Dutchtone and Telfort, paid Dfl. 600 mln and Dfl. 545 mln, respectively, for their licenses. Compared to the prices paid on the small lots, these prices are very high: Van Damme (1999) calculates that, on the basis of the prices paid for the small lots, these large lots were worth only Dfl. 246 mln, hence, less than half of what was paid. There was only one newcomer, Ben, that dared to take the risk of trying to assemble a national license from small lots, and it was successful in doing so; it was rewarded by having to pay only a relatively small price for its license. It seems clear that if the available spectrum had been packaged in a different way, say 3 large lots of 15 MHz each and 10 small lots of on average 2.5 MHz each, the price difference would have been smaller, and the situation less attractive for the incumbents. Perhaps one might even argue that the design that was adopted in the Dutch DCS-1800 auction was very favorable for the incumbents.

In any case, the 1998 DCS-1800 auction led to a five player market, at least one player more than in most other European markets. This provides relevant background for the third generation (UMTS) auction that took place in the summer of 2000, and which was really favorable for the incumbents. At that time, the two "old" incumbents (KPN and Libertel) still had large market shares, with the market shares of the newer incumbents (Ben, Dutchtone and Telfort) being between 5 and 10 percent each. In this situation, it was decided to auction five 3G-licenses, two large ones (of 15 MHz each) and three smaller ones (of 10 MHz each). It is also relevant to know that the value of a license is larger for an incumbent than for a newcomer to the market, and this for two reasons. First, an incumbent can use its existing network; hence, it will have lower costs in constructing the necessary infrastructure. Secondly, if an incumbent does not win a 3G-license, it also risks losing its 2G-customers. Finally, it is relevant to know that it was decided to use a simultaneous ascending auction.

The background provided in the previous paragraph makes clear why the Dutch 3G-auction was unfavorable to newcomers. First, the supply of licenses (2 large, 3 small) exactly matches the existing market structure (5 incumbents, of which 2 large ones). Secondly, an ascending auction was used, a format that allows incumbents to react to bids and thus to outbid new entrants. Thirdly, the value of a license being larger


for an incumbent than for an entrant implies that an incumbent will also have an incentive to outbid a newcomer. In a situation like this, an entrant cannot expect to win a license, so why should it bother to participate in this auction? On the basis of these arguments, one should expect only the incumbents to participate and, hence, the revenues to remain small; see Maasland (2000).

The above arguments seem to have been well understood by the players in the market. Even though many potential entrants had at first expressed an interest in participating in the Dutch 3G-auction, all but one subsequently decided not to participate. In the end, only one newcomer, Versatel, participated in the auction. This participant had equally well understood that it could not win; in fact, it had started court cases (both at the European and the Dutch level) to argue that the auction rules were "unfair" and that it was impossible for a newcomer to win. If Versatel knew that it could not win a license in this auction, why did it then participate? A press release that Versatel posted on its website the day before the auction gives the answer to this question:

“We would however not like that we end up with nothingwhilst other players get their licenses for free. Versatel in-vites the incumbent mobile operators to immediately startnegotiations for access to their existing 2G networks as wellas entry to the 3G market either as a part owner of a licenseor as a mobile virtual network operator.”

The press release shows that Versatel realizes, and wants the competitors to realize, that it has power over the incumbents. By participating in the auction, Versatel drives up the price that the winners (the incumbents) will have to pay. (Viewed in this light, the court cases that Versatel had started signal to the incumbents that Versatel knows that it cannot win, hence, that it must be participating in the auction with another objective in mind.) On the other hand, by dropping out, Versatel does the incumbents a favour, since the auction will end as soon as Versatel drops out. The press release signals that Versatel is willing to drop out, provided that the incumbents are willing to let Versatel share in the benefits that they obtain in this way. All in all then, Versatel appears to be following a smarter strategy than the newcomers that did not participate in the auction.

For the reader who has studied von Neumann and Morgenstern (1953), the above may all appear very familiar. Recall the basic three-player non-zero-sum game from that book, with one seller, two buyers,


one indivisible object, and one buyer attaching a higher value to this object than the other. Why would the weaker buyer participate in the game, if he knows right from the start that he will not get the object anyway? The answer that the founding fathers give is that he has power over both other players: by being in the game, he forces the other buyer to pay a higher price and he benefits the seller; by stepping out he benefits the buyer; and by forming a coalition with one of these other players, he can exploit his power. This argument is also contained, and popularized, in Brandenburger and Nalebuff (1996), a book that also clearly demonstrates the value of combining cooperative and competitive analysis. If one knows that Nalebuff was an advisor to Versatel, then it is no longer that surprising that Versatel used this strategy.

One would like to continue this story with a happy ending for game theory, but unfortunately that is not possible in this situation. Even though Versatel's strategy was clever, it was not successful. Versatel stayed in the auction, but it did not succeed in reaching a sharing agreement with one of the incumbents, even though negotiations were conducted with one of them. Perhaps the other parties had not fully realized the cleverness of Versatel and, as Edgar Allan Poe already remarked, it pays to be one level smarter than your opponents, but not more. Eventually, Versatel dropped out and, in the end, only the Dutch government was the beneficiary of Versatel's strategy.

3.6 Conclusion

In this chapter, we have attempted to show that the cooperative and non-cooperative approaches to games are complementary, not only for bargaining games, as Nash had already argued and demonstrated, but also for market games. Specifically, we have demonstrated this for oligopoly games and for auctions. We have shown that each approach may give essential insights into the situation and that, by combining insights from both vantage points, a deeper understanding of the situation may be achieved.

The strength of the non-cooperative approach is that it allows detailed modelling of actual institutions. Hence, many different institutional arrangements may be modelled and analysed, thus allowing an informed, rational debate about institutional reform. Indeed, the non-cooperative models show that outcomes can depend strongly on the rules of the game. The strength of this approach is at the same time its weakness:


why would players play by the rules of the game? Von Neumann and Morgenstern argued that, whenever it is advantageous to do so, players will always seek possibilities to evade constraints; in particular, they will be motivated to form coalitions and make side-payments outside the formal rules. This insight is relevant for actual markets, and even though competition laws attempt to prevent cartels and bribes, one should not expect these laws to be fully successful.

The cooperative approach aims to predict the outcome of the game on the basis of much less detailed information: it takes account only of the coalitions that can form and the payoffs that can be achieved. One lesson that the theory has taught us is that frequently this information is not enough to pin down the outcome. The multiplicity of cooperative solution concepts testifies to this. Hence, in many situations we may need a non-cooperative model to make progress. Such a non-cooperative model may also alert us to the fact that the efficiency assumption that is frequently and routinely made in cooperative models may not be appropriate. On the other hand, when the cooperative approach is really successful, such as in the 2-person bargaining context, it is really powerful and beautiful.

We expect that the tension between the two models will continue to be a powerful engine of innovation in the future.


References

Bagwell, K. (1995): "Commitment and observability in games," Games and Economic Behavior, 8, 271–280.

Bertrand, J. (1883): "Théorie mathématique de la richesse sociale," Journal des Savants, 48, 499–508.

Brandenburger, A., and B. Nalebuff (1996): Co-opetition. New York: Currency/Doubleday.

Cayseele, P. van, and D. Furth (1996a): "Bertrand-Edgeworth duopoly with buyouts or first refusal contracts," Games and Economic Behavior, 16, 153–180.

Cayseele, P. van, and D. Furth (1996b): "Von Stackelberg equilibria for Bertrand-Edgeworth duopoly with buyouts," Journal of Economic Studies, 23, 96–109.

Cayseele, P. van, and D. Furth (2001): "Two is not too many for monopoly," Journal of Economics, 74, 231–258.


Cournot, A. (1838): Recherches sur les Principes Mathématiques de la Théorie des Richesses. Paris: L. Hachette.

Damme, E. van (1995): "On the contributions of John C. Harsanyi, John F. Nash and Reinhard Selten," International Journal of Game Theory, 24, 3–12.

Damme, E. van (1999): "The Dutch DCS-1800 auction," in: F. Patrone, I. García-Jurado and S. Tijs (eds.), Game Practice: Contributions from Applied Game Theory. Boston: Kluwer Academic Publishers, 53–73.

Damme, E. van (2001): "The Dutch UMTS-auction in retrospect," CPB Report 2001/2, 25–30.

Damme, E. van (2002a): "The European UMTS-auctions," European Economic Review, forthcoming.

Damme, E. van (2002b): "Strategic equilibrium," forthcoming in: R.J. Aumann and S. Hart (eds.), Handbook of Game Theory, Vol. III. Amsterdam: North-Holland.

Damme, E. van, and S. Hurkens (1996): "Endogenous price leadership," Discussion Paper no. 96115, CentER, Tilburg University.

Damme, E. van, and S. Hurkens (1997): "Games with imperfectly observable commitment," Games and Economic Behavior, 21, 282–308.

Damme, E. van, and S. Hurkens (1999): "Endogenous Stackelberg leadership," Games and Economic Behavior, 28, 105–129.

Dasgupta, P., and E. Maskin (1986): "The existence of equilibria in discontinuous games. I: Theory and II: Applications," Review of Economic Studies, 53, 1–26 and 27–41.

Daughety, A. (ed.) (1988): Cournot Oligopoly. Cambridge: Cambridge University Press.

Davidson, C., and R. Deneckere (1986): "Long-run competition in capacity, short-run competition in price, and the Cournot model," Rand Journal of Economics, 17, 404–415.

Deneckere, R., and D. Kovenock (1992): "Price leadership," Review of Economic Studies, 59, 143–162.

Edgeworth, F.Y. (1897): "The pure theory of monopoly," reprinted in: W. Baumol and S. Goldfeld (eds.), Precursors in Mathematical Economics: An Anthology. London: London School of Economics, 1968.

Furth, D. (1986): "Stability and instability in oligopoly," Journal of Economic Theory, 40, 197–228.


Furth, D., and D. Kovenock (1993): "Price leadership in a duopoly with capacity constraints and product differentiation," Journal of Economics, 57, 1–35.

Gilbert, R., and D. Newbery (1982): "Preemptive patenting and the persistence of monopoly," American Economic Review, 72, 514–526.

Klemperer, P. (1999): "Auction theory: a guide to the literature," Journal of Economic Surveys, 13, 227–286.

Kreps, D., and J. Scheinkman (1983): "Quantity precommitment and Bertrand competition yield Cournot outcomes," Bell Journal of Economics, 14, 326–337.

Leonard, R. (1994): "Reading Cournot, reading Nash," The Economic Journal, 104, 492–511.

Levitan, R., and M. Shubik (1972): "Price duopoly and capacity constraints," International Economic Review, 13, 111–122.

Maasland, E. (2000): "Veilingmiljarden zijn een fictie" [Auction billions are a fiction], Economisch Statistische Berichten, 9 June 2000, 479.

Milgrom, P. (2000): "Putting auction theory to work: the simultaneous ascending auction," Journal of Political Economy, 108, 245–272.

Milgrom, P., and R. Weber (1982): "A theory of auctions and competitive bidding," Econometrica, 50, 1089–1122.

Montero, M. (2000): Endogenous Coalition Formation and Bargaining. PhD thesis, CentER, Tilburg University.

Myerson, R. (1981): "Optimal auction design," Mathematics of Operations Research, 6, 58–73.

Nash, J.F. (1950a): Non-Cooperative Games. PhD dissertation, Princeton University.

Nash, J.F. (1950b): "The bargaining problem," Econometrica, 18, 155–162.

Nash, J.F. (1951): "Non-cooperative games," Annals of Mathematics, 54, 286–295.

Nash, J.F. (1953): "Two-person cooperative games," Econometrica, 21, 128–140.

Neumann, J. von, and O. Morgenstern (1953): Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press (first edition 1944).

Ono, Y. (1982): "Price leadership: a theoretical analysis," Economica, 49, 11–20.


Osborne, M., and C. Pitchik (1986): "Price competition in a capacity-constrained duopoly," Journal of Economic Theory, 38, 238–260.

Roth, A., and M. Sotomayor (1990): Two-sided Matching: A Study in Game-Theoretic Modelling and Analysis. Cambridge: Cambridge University Press.

Rubinstein, A. (1982): "Perfect equilibrium in a bargaining model," Econometrica, 50, 97–109.

Shapley, L. (1958): "The solution of a symmetric market game," in: A.W. Tucker and R.D. Luce (eds.), Contributions to the Theory of Games IV, Annals of Mathematics Studies 40. Princeton, NJ: Princeton University Press, 145–162.

Shapley, L., and M. Shubik (1969): "On market games," Journal of Economic Theory, 1, 9–25.

Shubik, M. (1959): "Edgeworth market games," in: A.W. Tucker and R.D. Luce (eds.), Contributions to the Theory of Games IV, Annals of Mathematics Studies 40. Princeton, NJ: Princeton University Press.

Stackelberg, H. von (1934): Marktform und Gleichgewicht. Berlin: Julius Springer.

Vickrey, W. (1961): "Counterspeculation, auctions and competitive sealed tenders," Journal of Finance, 16, 8–37.

Vives, X. (1989): "Cournot and the oligopoly problem," European Economic Review, 33, 503–514.

Wilson, R. (1992): "Strategic analysis of auctions," in: R.J. Aumann and S. Hart (eds.), Handbook of Game Theory, Vol. I. Amsterdam: North-Holland, 227–279.


Chapter 4

On the Number of Extreme Points of the Core of a Transferable Utility Game

BY JEAN DERKS AND JEROEN KUIPERS

4.1 Introduction

Stability of an allocation among a group of players is normally considered to refer to the property that there is no incentive among subgroups or coalitions of players to deviate from the given allocation and choose the alternative of cooperating on their own. In a transferable utility game the stable allocations are exactly the elements of the upper core. These allocations always exist but may not be feasible, in the sense that the total payoff may exceed the total earnings of the grand coalition. The core of a game is the set of feasible allocations within the upper core. It is a (possibly empty) face of the upper core.

The core is perhaps the best known solution concept within Cooperative Game Theory. The first contributions within this context are found in Gillies (1953). It is generally believed that the core and core-like structured sets have at most n! extreme points. This is indeed the case, and the main contribution of this note is to provide a proof.

With core-like structured sets we denote those sets that can appear as a core of a game. Examples are the so-called core covers, which are generalizations of the core, and are introduced mainly in order to bypass


P. Borm and H. Peters (eds.), Chapters in Game Theory, 83–97.© 2002 Kluwer Academic Publishers. Printed in the Netherlands.


the unsatisfactory property of the core that it may be empty. The first results in this direction are found in Tijs (1981) and Tijs and Lipperts (1982). Other examples of core structures are the anti-core, the least core and the Selectope. Vasilev (1981) and, recently, Derks, Haller and Peters (2000) are contributions dealing with the core structure of the Selectope.

Although our first concern is the core, the main results and concepts deal with the upper core. The upper core of a game can be described as the feasible region of a suitably chosen linear program, where the matrix is 0,1-valued and the constraint vector coefficients are the values of the coalitions. Actually, we are dealing with polyhedra of the type

P(A, b) = {x ∈ R^n : Ax ≥ b},

where A is an integer valued m × n matrix and b ∈ R^m. In the literature there is a comprehensive study of the upper bound on the number of extreme points of such polyhedra. It is well-known that x is an extreme point of P(A, b) if and only if there is a set of n linearly independent vectors among the rows of A for which the equality ⟨a, x⟩ = b_a holds (with b_a the coefficient of b associated with row a). Hence, a trivial upper bound for the number of extreme points is the binomial coefficient C(m, n). McMullen (1970) showed that this is an overestimate, and he proved that the polyhedron has at most

C(m − ⌈n/2⌉, m − n) + C(m − 1 − ⌊n/2⌋, m − n)

extreme points. Furthermore, Gale (1963) constructed examples of polyhedra having precisely this many extreme points, so that McMullen's bound cannot be further improved for arbitrary matrices A (see also Chvátal, 1983, for these results).

Our main result is that for polyhedra P(A, b) where A is an m × n matrix with 0,1-valued coefficients, an upper bound of n! different extreme points exists. This is an improvement of McMullen's upper bound in case A contains all possible 0,1-valued row vectors, i.e., m = 2^n − 1: by the Stirling approximation of n!, being approximately (n/e)^n √(2πn), we observe that the McMullen upper bound exponentially exceeds the value n!.
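The characterization of extreme points via n tight, linearly independent rows suggests a brute-force enumeration for small instances. The sketch below is our own illustration (the matrix A and vector b are made-up data): it lists the extreme points of P(A, b) in R² for a small 0,1-valued system and finds exactly 2 = 2! of them, in line with the bound discussed here:

```python
from fractions import Fraction
from itertools import combinations

# Made-up 0,1-valued system: P = {x in R^2 : x1 >= 0, x2 >= 0, x1 + x2 >= 1}.
A = [(1, 0), (0, 1), (1, 1)]
b = [Fraction(0), Fraction(0), Fraction(1)]

def intersect(i, j):
    # Solve the 2x2 system a_i x = b_i, a_j x = b_j by Cramer's rule.
    (p, q), (r, s) = A[i], A[j]
    d = p * s - q * r
    if d == 0:
        return None  # rows not linearly independent
    return ((b[i] * s - b[j] * q) / d, (p * b[j] - r * b[i]) / d)

# A point is extreme iff it solves two independent tight rows and is feasible.
candidates = [v for i, j in combinations(range(len(A)), 2)
              for v in [intersect(i, j)] if v is not None]
extreme = sorted(set(v for v in candidates
                     if all(A[k][0] * v[0] + A[k][1] * v[1] >= b[k]
                            for k in range(len(A)))))
print(extreme)
```

The trivial bound C(3, 2) = 3 counts all pairwise intersections; feasibility screening leaves only the two genuine extreme points (0, 1) and (1, 0).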

The proof of our main result is based on a polar version of an argument stated by Imre Bárány (see Ziegler, 1995, p. 25) in the context of the related search for upper bounds for the number of facets of a 0/1-polytope, which is defined to be the convex hull of a set of elements of R^n with 0,1-valued coefficients. This argument yields an upper bound on the number of facets of any 0/1-polytope. The polar translation of a 0/1-polytope is a polyhedron of type P(A, 1), with A an m × n matrix with 0,1-valued coefficients and 1 the all-ones vector (1, . . . , 1), so that for these polyhedra Bárány's upper bound directly implies a maximum number of extreme points. With a bit more effort we will obtain a better upper bound for the larger class of polyhedra P(A, b) where the constraint vector b may admit arbitrary values.

Let us define the polytope P_A as the convex hull of the origin 0 and all row vectors of the matrix A. In Section 4.2 we shall prove that the number of extreme points of P(A, b) is bounded by n! times the volume of the polytope P_A. Since P_A is contained in the unit hypercube, described by the restrictions 0 ≤ x_i ≤ 1 for all i, if A is a 0,1-valued matrix, P_A has a volume of at most 1, and it follows that the polyhedron P(A, b) then has at most n! extreme points.

In Section 4.3 we formally introduce the cooperative game model, and state that the core, being a face of a polyhedron of type P(A, b) with A a (0,1)-matrix, has at most n! different extreme core points. Strict convexity implies that the core actually has n! different extreme points, but we show that there are more games with this property. We further discuss an intuitive and direct approach for listing extreme points of the core (possibly with duplicates), but we show that this approach may fail to list all extreme points, thus showing that it cannot be used for establishing a maximum on the number of extreme points.

Section 4.4 describes some properties that are induced by having n! extreme core points. These are the large core property, a kind of strict exactness, and a non-degeneracy property. We supply an example showing that these properties are not sufficient for obtaining n! extreme core points.

In Section 4.5 we conclude the paper with a summary.

4.2 Main Results

Let X ⊆ R^n and x ∈ R^n. The vector x is said to be an interior point of X if there exists ε > 0 such that for all y with ‖y − x‖ < ε we have y ∈ X. Here ‖·‖ denotes the Euclidean norm. The set of all interior points of X is called the interior of X.

For any two vectors x, y ∈ R^n their inner product is denoted by ⟨x, y⟩. We shall denote the righthand side associated with row a of matrix A by b_a. For x ∈ P(A, b), let T(x) denote the set of rows a of A for which ⟨a, x⟩ = b_a. Further, let Q(x) denote the convex hull of 0 and the vectors of T(x). It is intuitively quite clear that Q(x) ∩ Q(y) has an empty interior for any two distinct extreme points x, y of P(A, b). For the sake of completeness we provide a proof here.

Lemma 4.1 Let x and y be two distinct extreme points of P(A, b). Then Q(x) ∩ Q(y) has an empty interior.

Proof. Let x and y be two distinct extreme points of P(A, b), and suppose that Q(x) ∩ Q(y) has a non-empty interior. Choose z in the interior of Q(x) ∩ Q(y). So, z lies also in the interior of Q(x), and therefore it can be written as a convex combination of the extreme points of the convex set Q(x) with all coefficients strictly positive, i.e. z = λ_0·0 + Σ_a λ_a a with λ_a > 0, the sum running over the rows a ∈ T(x) that are extreme points of Q(x). Since ⟨a, y⟩ ≥ b_a = ⟨a, x⟩ for every a ∈ T(x), we obtain ⟨z, y⟩ ≥ ⟨z, x⟩, with strict inequality as soon as λ_a > 0 for some a with ⟨a, y⟩ > b_a. Such a row exists: otherwise every row of T(x), being a convex combination of 0 and these extreme points, would also be tight at y, so that the n linearly independent equalities determining the extreme point x would also be satisfied by y, contradicting x ≠ y. Hence ⟨z, y⟩ > ⟨z, x⟩. Analogously one proves that ⟨z, x⟩ > ⟨z, y⟩, a contradiction.

As a consequence of Lemma 4.1, the volume of the union of the polytopes Q(x) is simply the sum of their volumes. In the following we shall provide a lower bound on the volume of Q(x). This then gives us an upper bound on the number of polytopes Q(x) that can be contained in P_A, or equivalently, it gives us an upper bound on the number of extreme points of P(A, b).

Let us denote the volume of an n-dimensional body X by vol(X). The following theorem is well-known in linear algebra. For a proof we refer to Birkhoff and MacLane (1963).

Theorem 4.2 Let X ⊆ R^n be a measurable set, and let A be a square matrix of dimension n. Then vol(AX) = |det(A)| vol(X), where det(A) denotes the determinant of the matrix A.

Now we are in a position to provide a lower bound on vol(Q(x)).

Lemma 4.3 Let A be an integer valued m × n matrix and let b ∈ R^m. Furthermore, let x be an extreme point of P(A, b). Then vol(Q(x)) ≥ 1/n!.

Proof. Since x is an extreme point of P(A, b), T(x) contains a set of n linearly independent vectors, say a_1, . . . , a_n. According to Theorem 4.2 the volume of the convex hull of the points a_1, . . . , a_n and 0 equals |det(a_1, . . . , a_n)|/n!, where det(a_1, . . . , a_n) denotes the determinant of the matrix with columns a_1, . . . , a_n. All entries of this matrix are integer, so its determinant is also integer. The independent nature of the columns of the matrix ensures that the determinant is unequal to 0 and therefore |det(a_1, . . . , a_n)| ≥ 1. Consequently, the volume of the convex hull of the points a_1, . . . , a_n and 0 is at least 1/n!, the volume of the standard simplex. Since this convex hull is contained in Q(x), Q(x) also has a volume of at least 1/n!.

Observe that the lower bound of 1/n! can be achieved by Q(x) if and only if there is a set of n independent vectors a_1, . . . , a_n in T(x) with |det(a_1, . . . , a_n)| = 1, and there are no elements of T(x) outside the convex hull of a_1, . . . , a_n and 0.

Theorem 4.4 Let A be an integer valued m × n matrix and let b ∈ R^m. Then P(A, b) has at most n! vol(P_A) extreme points.

Proof. Let E denote the set of extreme points of P(A, b). Clearly, Q(x) ⊆ P_A for all x ∈ E. Hence, ∪_{x∈E} Q(x) ⊆ P_A and

vol(∪_{x∈E} Q(x)) ≤ vol(P_A).

According to Lemma 4.1, the intersection of any two polytopes Q(x) and Q(y) with x ≠ y has an empty interior, and therefore

vol(∪_{x∈E} Q(x)) = Σ_{x∈E} vol(Q(x)).

Furthermore, each polytope Q(x) has a volume of at least 1/n!. Hence,

Σ_{x∈E} vol(Q(x)) ≥ |E|/n!.

Combining these results the theorem follows.

Corollary 4.5 For any (0,1)-matrix A and b ∈ R^m, the polyhedron P(A, b) has at most n! extreme points.

The maximum of n! can only be achieved if every (0,1)-vector except the null vector is a row of A. Clearly, if A has fewer rows, then vol(P_A) is strictly less than 1, and hence the bound in Theorem 4.4 is strictly less than n!. If not every (0,1)-vector is a row of the matrix A we therefore obtain a stronger bound.

The maximum of n! extreme points of P(A, b) can actually be achieved, with A chosen 'maximal' as indicated. Examples are given e.g. in Edmonds (1970), and in Shapley (1971) in the context of transferable utility games.

To obtain n! different extreme points, the sets Q(x), being 0/1-polytopes, should all have volume 1/n! for each extreme point x of P(A, b), and this is only possible if each Q(x) is a simplex, i.e., the convex hull of a set of n + 1 affinely independent vectors (see the observation following the proof of Lemma 4.3). Further, the union of these simplices should coincide with the unit hypercube. This gives rise to a simplicial subdivision of the unit hypercube, also called a triangulation. The main issue in the literature on triangulations is the minimal number of simplices needed to form a triangulation (Mara, 1976; Hughes, 1994). The so-called standard triangulation is the subdivision of the hypercube into the simplices of the form

Δ_π = {x ∈ [0, 1]^n : x_{π(1)} ≥ x_{π(2)} ≥ · · · ≥ x_{π(n)}},

with π running over all permutations of {1, . . . , n}. See Freudenthal (1942) for an early reference (Todd, 1976, 29–30). The standard triangulation pops up in many situations. The next section will provide examples.
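As a quick sanity check of the volume argument, the following sketch (our own illustration, standard library only, for n = 3) computes the volume of each simplex of the standard triangulation via the determinant of its edge vectors and confirms that the n! simplices have volume 1/n! each and together fill the unit cube:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def det(m):
    # Determinant by Laplace expansion along the first row (fine for small n).
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

n = 3
total = Fraction(0)
for pi in permutations(range(n)):
    # Vertices of the simplex x_{pi(1)} >= ... >= x_{pi(n)}: the partial sums
    # of unit vectors taken in the order given by pi.
    verts = [[0] * n]
    for i in pi:
        v = verts[-1][:]
        v[i] = 1
        verts.append(v)
    edges = [[verts[k + 1][j] - verts[0][j] for j in range(n)]
             for k in range(n)]
    vol = Fraction(abs(det(edges)), factorial(n))
    assert vol == Fraction(1, factorial(n))  # each simplex has volume 1/n!
    total += vol
print(total)  # the n! simplices together tile the whole unit cube
```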

4.3 The Core of a Transferable Utility Game

An transferable utility game (or game for short), withis a real valued map on the set of subsets of the player set

the empty set excluded. A non-empty subset S of N isreferred to as a coalition, and its value in the game is interpretedas the net gain of the cooperation of the players in S.

A game is said to be additive if each coalition value is obtainedby summing up the one-person coalition values:for all coalitions S. Given an allocation the correspondingadditive game is the game, also denoted by with coalition values

An allocation is called stable in the game if thecorresponding additive game majorizes for all coalitionsS (or for short).

The upper core of a game $v$ is the set of stable allocations. Its elements are interpreted as those payoffs to the players that are preferred to playing the game. However, not all stable allocations are feasible in the sense that they can be afforded by the players. Here, we assume that an allocation $x$ is feasible in the game $v$ if $x(N) \le v(N)$ holds. The core of a game $v$ is


EXTREME POINTS OF THE CORE 89

defined as the set of stable and feasible allocations of $v$. Consider a fixed sequence $S_1, S_2, \ldots, S_{2^n - 1}$ of the coalitions in N.

Let A denote the matrix whose $j$-th row is the indicator function of coalition $S_j$: the entry in row $j$ and column $i$ equals 1 if player $i$ is a member of coalition $S_j$, and 0 otherwise. For a game $v$ the upper core obviously equals the polyhedral set $\{x \in \mathbb{R}^N : Ax \ge b\}$, with $b$ the constraint vector with $j$-th coefficient $v(S_j)$. For a stable allocation $x$ the corresponding set is the convex hull of the zero vector and the indicator functions, the rows of A, corresponding to coalitions S for which equality $x(S) = v(S)$ holds. We will refer to these coalitions as being tight.

Feasibility of a stable allocation of course implies that the grand coalition N has to be tight. Therefore, the core of $v$ is equal to the face of the upper core of $v$ determined by the constraint $x(N) \ge v(N)$ corresponding to coalition N.

With the help of Corollary 4.5 we conclude that

Corollary 4.6 The core of an $n$-person cooperative game has at most $n!$ extreme points.

We will first show that there is a large class of games for which the number of different core points equals the maximum possible number of $n!$. For this we need the following. A game $v$ is called convex if
$$v(S \cup T) + v(S \cap T) \ge v(S) + v(T) \quad \text{for all coalitions } S, T \tag{4.1}$$
(with the convention that the game value of the empty set equals 0). The game is called strictly convex if the convexity inequalities hold, and none of them with equality whenever $S \not\subseteq T$ and $T \not\subseteq S$.

It is well known that the extreme points of the core of a convex game are among the so-called marginal contribution vectors (see Shapley, 1971; and Ichiishi, 1981, for the converse statement). For a permutation $\pi$ on the player set N the marginal contribution allocation $m^{\pi} = m^{\pi}(v)$ in the game $v$ is defined by
$$m^{\pi}_i = v(P_{\pi}(i) \cup \{i\}) - v(P_{\pi}(i)) \quad \text{for all } i \in N,$$
with $P_{\pi}(i)$ denoting the set of predecessors of player $i$ in $\pi$. The allocation $m^{\pi}$ is the final outcome of the procedure where the players enter a room one by one in the order given by $\pi$, and each player obtains the value of the coalition of players in the room minus what already has been allocated.
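The room-entering procedure can be sketched in a few lines; the sample game ($v(S) = |S|^2$, which is strictly convex) and all names are ours:

```python
from itertools import combinations, permutations

def coalition_values():
    """A strictly convex 3-player game (illustrative values): v(S) = |S|**2."""
    players = (1, 2, 3)
    return players, {frozenset(c): len(c) ** 2
                     for r in range(1, 4) for c in combinations(players, r)}

def marginal_vector(order, v):
    """Players enter a room in the given order; each receives the value of
    the coalition now in the room minus what has been allocated so far."""
    m, room, allocated = {}, set(), 0
    for i in order:
        room.add(i)
        m[i] = v[frozenset(room)] - allocated
        allocated += m[i]
    return m

players, v = coalition_values()
for order in permutations(players):
    m = marginal_vector(order, v)
    # Each marginal vector exhausts v(N) by construction.
    assert sum(m.values()) == v[frozenset(players)]
print(marginal_vector((1, 2, 3), v))  # {1: 1, 2: 3, 3: 5}
```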



Some of the $n!$ marginal contribution allocation vectors may coincide, but if the game is strictly convex then all these allocations are different. To show this, first observe that $m^{\pi}(P_{\pi}(i) \cup \{i\}) = v(P_{\pi}(i) \cup \{i\})$ for all permutations $\pi$ and players $i$, and games $v$ not necessarily (strictly) convex. Now let $v$ be a strictly convex game, $\pi$ a permutation of N, and S a coalition unequal to any predecessor set of $\pi$. We will prove that $m^{\pi}(S) > v(S)$. Let $\rho$ be a permutation such that the players in S are the first $|S|$ players, and such that the relative order of the players within S, and within $N \setminus S$, is the same as under $\pi$.

A permutation with this property is constructed, for example, by interchanging the positions (in $\pi$) of any player outside S with a player of S positioned directly after him, until there are no players left with this property. For each $i \in S$ we have $P_{\rho}(i) \subseteq P_{\pi}(i)$, so that by applying (4.1), with $P_{\rho}(i) \cup \{i\}$ and $P_{\pi}(i)$ in the roles of S and T, we obtain
$$m^{\rho}_i \le m^{\pi}_i. \tag{4.2}$$

Therefore, $m^{\pi}(S) \ge m^{\rho}(S)$, and since S is a predecessor set of $\rho$, it follows that $m^{\pi}(S) \ge m^{\rho}(S) = v(S)$ (thus proving that the marginal contribution allocations are elements of the upper core). There is an $i \in S$ such that $P_{\rho}(i)$ is a proper subset of $P_{\pi}(i)$, and for this player strict inequality holds in (4.2) because of strict convexity of the game. Therefore, $m^{\pi}(S) > v(S)$ for all coalitions S except the predecessor sets of $\pi$. From this, one easily derives that there are no two marginal contribution allocations equal to each other. Consequently, the core has precisely $n!$ extreme points (Shapley, 1971). This shows that the bound in Corollary 4.6 is sharp.

Let us call a collection of coalitions of N regular if the corresponding indicator functions span $\mathbb{R}^N$. It is evident that a stable allocation of a game is extreme in the upper core if and only if its tight coalitions form a regular collection, and a stable allocation is an extreme core point if and only if N is tight and the set of its tight coalitions is regular.

Observe that we actually proved that the tight coalitions of the marginal contribution allocation $m^{\pi}$, with $v$ strictly convex, are precisely its predecessor sets, so that the tight coalitions form a regular collection. The corresponding set, the convex hull of the zero vector and the indicator functions of the predecessor sets of $\pi$, is easily seen to equal
$$\{x \in [0,1]^N : x_{\pi(1)} \ge x_{\pi(2)} \ge \cdots \ge x_{\pi(n)}\},$$
a typical simplex of the standard triangulation of the unit hypercube. On the other hand, if the tight coalitions of a stable allocation give rise



to a simplex of this form, then the allocation has to be a marginal contribution allocation. Therefore, the strictly convex games are exactly the games that give rise to the standard triangulation.

The strictly convex games are not the only games with a core having the maximum number of extreme points, as the following example shows.

Consider the non-convex (symmetric) 4-player game with values 0, 7, 12, and 22 for coalitions with number of players, respectively, 1, 2, 3, and 4.

Consider the allocations (2, 5, 5, 10) and (0, 7, 7, 8). Obviously, both belong to the core of the game, with tight coalition sets, respectively, {N, {1,2}, {1,3}, {1,2,3}} and {N, {1}, {1,2}, {1,3}}. The two collections are regular, so that we may conclude that the two allocations are extreme in the core. Because of the symmetry among the players in the game, any of the 12 allocations with coefficients 2,5,5,10 and the 12 allocations with coefficients 0,7,7,8 are extreme core points. Therefore the game has at least 24 extreme core points. There are no others, since 24 is the maximum number: 4! = 24.

There is an intuitive approach for obtaining the extreme core points. First, take any ordering of the players. Then, take the first player and maximize his payoff among the core allocations. Thereafter, take the next player, and maximize his payoff among the core allocations where the first player gets his maximal payoff. Continue in this way until the last player. Following this procedure we obtain an extreme point of the core. Since there are $n!$ different orderings of the players, we obtain $n!$ extreme points (possibly with duplicates).

The above example, however, shows that we may not obtain all extreme points in this way. Observe that if we maximize the payoff to a player among the core allocations we obtain the value 10, and therefore, we will never end up in an extreme core allocation with coefficients 0,7,7,8. Analogously, if we minimize the payoff instead of maximizing it, we will not terminate in a core allocation with coefficients 2,5,5,10.
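The computations in this example are easy to reproduce. The sketch below encodes the symmetric game with worths 0, 7, 12, 22 for coalition sizes 1 through 4, checks that the 24 announced allocations are core elements, and confirms that a player's payoff over these extreme points ranges from 0 to 10, which is why maximization never reaches the type 0,7,7,8 and minimization never reaches the type 2,5,5,10 (helper names are ours):

```python
from itertools import combinations, permutations

# Symmetric 4-player game from the example: worths 0, 7, 12, 22
# for coalitions of size 1, 2, 3, 4.
worth = {1: 0, 2: 7, 3: 12, 4: 22}
players = range(4)
coalitions = [frozenset(c) for r in range(1, 5) for c in combinations(players, r)]

def is_core(x):
    """Stable (x(S) >= v(S) for all S) and tight at N (x(N) = v(N))."""
    return (sum(x) == worth[4]
            and all(sum(x[i] for i in S) >= worth[len(S)] for S in coalitions))

points = set(permutations((2, 5, 5, 10))) | set(permutations((0, 7, 7, 8)))
assert len(points) == 24 and all(is_core(p) for p in points)

# Maximizing one player's payoff over these extreme points yields 10,
# attained only at points of type (2,5,5,10); minimizing yields 0,
# attained only at points of type (0,7,7,8).
print(max(p[0] for p in points), min(p[0] for p in points))  # 10 0
```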

4.4 Strict Exact Games

It is not only of mathematical interest to provide necessary and sufficient conditions for games having the maximum number of extreme core



points. One may, for example, argue that the extreme points of the core are precisely the outcomes of a game where the players choose their actions in an extreme social way. The number of extreme core points may as such serve as a measure for social complexity (whatever these terms may indicate in an appropriate context). Also, procedures or protocols that construct or give rise to core allocations may endure a complexity that is dependent on the number of extreme core points, especially when, depending on the settings, any core point may occur as outcome.

It is therefore of interest to deduce properties that are implied by the fact that the number of extreme core points is maximal. First, one can easily verify that the number of tight coalitions in an extreme (upper) core allocation should then not exceed the dimension of the allocation space. This means that the indicator functions of the tight coalitions are linearly independent. Collections of coalitions with this property are called non-degenerate, and a game is called non-degenerate if the collection of tight coalitions is non-degenerate for each extreme upper core allocation.

To obtain the maximum number of extreme core points in an $n$-person game, the upper core should not have extreme points outside the face corresponding to the grand coalition. This is equivalent to the upper core being equal to the core and all the points lying above the core. If this holds we say that the game has a large core. A game has a large core if and only if for each stable allocation $x$ there is a core allocation $y$ such that $y \le x$.

For the upper core to have the maximum number of different extreme points it is essential that in its description as a polyhedral set $\{x : Ax \ge b\}$ no rows of A can be deleted (see the remark following Corollary 4.5). This hints at the condition that for each coalition there is a corresponding face in the upper core, which has to be of maximal dimension: a facet. In other words, for each coalition S there is a stable allocation for which S is the only tight coalition. If this is the case then the game is called strict upper exact. Without going into details, it is not hard to prove that strict upper exactness is equivalent to the property that each subgame has a core of maximal dimension.

Proposition 4.7 If the core of a game has the maximum of $n!$ different extreme core points, then the core has to be large and the game has to be non-degenerate and strict upper exact.

The next example shows that the converse does not hold. Consider the symmetric 5-person game with worths 0, 1, 8, 11, and 23 for coalitions with, respectively, 1, 2, 3, 4, and 5 players.



We will show that the extreme stable allocations in the upper core of the game are the following points:
(1) the 20 allocations with coefficients 0,4,4,4,11,
(2) the 20 allocations with coefficients 2,3,3,3,12, and
(3) the 60 allocations with coefficients 0,1,7,7,8.

It is left to the reader to check the stability property of these allocations, 100 in total (and less than the maximum possible of 5! = 120). Also, with the help of these allocations one easily derives that the game is strict upper exact.

The tight coalitions of the stable allocation (0, 4, 4, 4, 11) are the player set N = {1,2,3,4,5}, {1}, {1,2,3}, {1,2,4}, {1,3,4}. The independence of the corresponding indicator functions follows from determining the determinant of the matrix consisting of these indicator functions as rows, say in the given order. The value equals −2, so that we may conclude that (0,4,4,4,11) is extreme in the upper core, and due to the symmetry among the players in the game the other 19 allocations with the same coefficients are also extreme in the upper core. Further, the computed determinant value implies that the volume of the corresponding set, the convex hull of the zero vector and the indicator functions of the tight coalitions, equals 2/5!, so that the 20 allocations of type (1) consume 20 · 2/5! of the available volume of the unit hypercube, implying that the upper core can have at most 20 + 120 − 40 = 100 extreme points.
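The determinant computation can be reproduced directly. The sketch below evaluates, by a plain cofactor expansion (names are ours), the matrix whose rows are the indicator functions of the five tight coalitions in the order stated in the text:

```python
# Tight coalitions of the stable allocation (0,4,4,4,11), as rows of
# indicator vectors over the players 1..5, in the order
# N, {1}, {1,2,3}, {1,2,4}, {1,3,4}.
rows = [
    [1, 1, 1, 1, 1],  # N
    [1, 0, 0, 0, 0],  # {1}
    [1, 1, 1, 0, 0],  # {1,2,3}
    [1, 1, 0, 1, 0],  # {1,2,4}
    [1, 0, 1, 1, 0],  # {1,3,4}
]

def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

d = det(rows)
print(d)  # -2: the indicator functions are linearly independent
# |d|/5! = 2/120 is the volume of the simplex spanned by 0 and the rows,
# so the 20 allocations of this type consume 40/120 of the hypercube,
# leaving room for at most 120 - 40 + 20 = 100 extreme points.
```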

The other two types of allocations can be treated in the same way. The tight coalitions of the stable allocation (2,3,3,3,12) are N, {1,2,3}, {1,2,4}, {1,3,4}, {1,2,3,4}, and form a regular collection, implying extremality in the upper core for (2,3,3,3,12) and the other 19 allocations with the same coefficients.

Finally, the tight coalitions of the stable allocation (0,1,7,7,8) form the regular collection {N, {1}, {1,2}, {1,2,3}, {1,2,4}}, implying extremality in the upper core of (0,1,7,7,8) and the other 59 allocations with the same coefficients.

This shows that the mentioned allocations are the extreme upper core points. All allocations are feasible, implying that the game has a large core. Further, all collections of tight coalitions are non-degenerate, showing that the game is non-degenerate.

A game is called strict exact if for each coalition S a core allocation



exists for which S and N are the only tight coalitions. Strict exactness implies strict upper exactness. To see this, let $v$ be strict exact, and let S be an arbitrary coalition. A core allocation $x$ exists for which S and N are the only tight coalitions. Then the sum of $x$ and the indicator function of the complement of S is a stable allocation with S as the only tight coalition (this argument captures also the case S = N).

One easily derives the strict exactness of the game in the previous example. This is not coincidental, as the following result shows.

Proposition 4.8 If a game is non-degenerate and strict upper exact,and has a large core, then it is strict exact.

Proof. Let $v$ be non-degenerate, strict upper exact, and let its core be large. For an arbitrary coalition S take a stable allocation $x$ for which S is the only tight coalition. There is a core allocation $y$ such that $y \le x$.

Obviously, S and N are tight for $y$. Since $v$ is non-degenerate, the collection of tight coalitions of $y$ has to be non-degenerate, and we may therefore assume that a vector $z$ exists such that $z(S) = z(N) = 0$ and $z(T) > 0$ for the other tight coalitions T of $y$. For sufficiently small $\varepsilon > 0$ the allocation $y + \varepsilon z$ belongs to the core of $v$. Its tight coalitions are S and N, thus showing that $v$ is strict exact.

We cannot leave out the non-degeneracy property or the large core condition. This can be derived from the following two symmetric games on the player set N = {1, 2, 3}, each determined by a common worth for the coalitions S with 1 player, a common worth if S consists of 2 players, and the worth of the grand coalition. It is left to the reader to check that both games are strict upper exact but not strict exact, that the one is non-degenerate but does not have a large core, and that the other has a large core but fails to be non-degenerate.

Combining the previous two propositions we conclude that:

Corollary 4.9 A game is strict exact if its core consists of the maximum of $n!$ extreme points.

4.5 Concluding Remarks

Summarizing the contents of the paper, we proved that polyhedral sets of the form $\{x \in \mathbb{R}^N : Ax \ge b\}$ have a number of extreme points of at most $n!$ times the volume of the convex hull of the zero vector and the rows of the matrix A. We applied this result to 0,1-valued matrices and obtained the upper bound



of $n!$ for the number of extreme points of the upper core and the core of an $n$-person game. The maximum number is attained by the strictly convex games, but other games may have this property as well. These games have to be strict upper exact, must have a large core, and fulfill a kind of non-degeneracy. We showed that not all games with these properties have $n!$ different extreme points. See also Figure 4.1.

Future research is concentrated on the dependence relations between the mentioned properties and on the impact of the non-degeneracy condition, which seems to involve combinatorial techniques for obtaining and analyzing triangulations of the unit hypercube.



References

Birkhoff, G., and S. MacLane (1963): A Survey of Modern Algebra. New York: MacMillan.

Chvátal, V. (1983): Linear Programming. New York: Freeman.

Derks, J., H. Haller and H. Peters (2000): "The selectope for cooperative games," International Journal of Game Theory, 29, 23–38.

Edmonds, J. (1970): "Submodular functions, matroids, and certain polyhedra," in: Richard Guy et al. (eds.), Combinatorial Structures and their Applications. Gordon and Breach, 69–87.

Freudenthal, H. (1942): "Simplizialzerlegungen von beschränkter Flachheit," Annals of Mathematics, 43, 580–582.

Gale, D. (1963): "Neighborly and cyclic polytopes," in: V. Klee (ed.), Convexity, Proceedings of Symposia in Pure Mathematics, 7, American Mathematical Society, 225–232.

Gillies, D.B. (1953): Some theorems on n-person games. Dissertation, Department of Mathematics, Princeton University.

Hughes, R.B. (1994): "Lower bounds on cube simplexity," Discrete Mathematics, 133, 123–138.

Ichiishi, T. (1981): "Super-modularity: application to convex games and to the greedy algorithm for LP," Journal of Economic Theory, 25, 283–286.

Kuipers, J. (1994): Combinatorial Methods in Cooperative Game Theory. Ph.D. thesis, Universiteit Maastricht, The Netherlands.

Mara, P.S. (1976): "Triangulations for the cube," Journal of Combinatorial Theory, Ser. A, 20, 170–177.

McMullen, P. (1970): "The maximum number of faces of a convex polytope," Mathematika, 17, 179–184.

Schmeidler, D. (1972): "Cores of exact games," Journal of Mathematical Analysis and Applications, 40, 214–225.

Shapley, L.S. (1971): "Cores of convex games," International Journal of Game Theory, 1, 11–26.

Tijs, S.H. (1981): "Bounds for the core and the τ-value," in: O. Moeschlin and D. Pallaschke (eds.), Game Theory and Mathematical Economics. Amsterdam: North-Holland Publishing Company, 123–132.

Tijs, S.H., and F.A.S. Lipperts (1982): "The hypercube and the core cover of cooperative games," Cahiers du Centre d'Études de Recherche Opérationnelle, 24, 27–37.

Todd, M.J. (1976): The Computation of Fixed Points and Applications. Lecture Notes in Economics and Mathematical Systems, 124, Springer-Verlag.

Vasilev, V.A. (1981): "On a class of imputations in cooperative games," Soviet Math. Dokl., 23, 53–57.

Ziegler, G.M. (1995): Lectures on Polytopes. Graduate Texts in Mathematics 152, New York: Springer.


BY THEO DRIESSEN

5.1 Introduction

In physics a vector field $\vec{F}$ is said to be conservative if there exists a continuously differentiable function $U$, called potential, the gradient of which agrees with the vector field (notation: $\vec{F} = \nabla U$). There exist several characterizations of conservative vector fields (e.g., $\nabla \times \vec{F} = \vec{0}$, or every contour integral with respect to the vector field is zero). Surprisingly, the successful treatment of the potential in physics turned out to be reproducible, in the late eighties, in the mathematical field called cooperative game theory. Informally, a solution concept $\varphi$ on the universal game space is said to possess a potential representation if it is the discrete gradient of a real-valued function P, called potential (notation: $\varphi = \nabla P$). In other words, if possible, each component of the game-theoretic solution may be interpreted as the incremental return with respect to the potential function. In their innovative paper, Hart and Mas-Colell (1989) showed that the well-known game-theoretic solution called Shapley value is the unique solution that has a potential


P. Borm and H. Peters (eds.), Chapters in Game Theory, 99–120. © 2002 Kluwer Academic Publishers. Printed in the Netherlands.

Chapter 5

Consistency and Potentials in Cooperative TU-Games: Sobolev's Reduced Game Revived


representation and meets the efficiency principle as well. In the second stage (the nineties) of the potential research into the solution part of cooperative game theory, various researchers contributed different, but equivalent, characterizations of (not necessarily efficient) solutions that admit a potential (cf. Ortmann, 1989; Calvo and Santos, 1997; Sánchez, 1997). Almost all of these characterizations of solutions, stated in terms of the potential approach applied in cooperative game theory, resemble similar ones stated in physical terminology. For instance, the characterization $\partial F_i / \partial x_j = \partial F_j / \partial x_i$ of a conservative vector field is analogous to its discrete version with respect to a game-theoretic solution $\varphi$:
$$\varphi_i(N, v) - \varphi_i(N\setminus\{j\}, v) = \varphi_j(N, v) - \varphi_j(N\setminus\{i\}, v),$$
commonly known as the law of preservation of discrete differences (cf. Ortmann, 1998), also called the balanced contributions principle (cf. Calvo and Santos, 1997; Myerson, 1989; Sánchez, 1997).

One characterization with no counterpart in physics states that a game-theoretic solution possesses a potential representation if and only if the solution for any game equals the Shapley value of another game induced by both the initial game and the relevant solution concept (cf. Calvo and Santos, 1997). Our main goal is to exploit this particular characterization whenever one deals with the so-called reduced game property for solutions, also called the consistency property. Informally, the key notions of a reduced game and consistency can be elucidated as follows. A cooperative game is always described by a finite player set as well as a real-valued "characteristic function" on the collection of subsets of the player set. A so-called reduced game is deducible from a given cooperative game by removing one or more players on the understanding that the removed players will be paid according to a specific principle (e.g. a proposed payoff vector). The remaining players form the player set of the reduced game, the characteristic function of which is composed of the original characteristic function, the proposed payoff vector, and/or the solution in question. The consistency property for the solution states that if all the players are supposed to be paid according to a payoff vector in the solution set of the original game, then the players of the reduced game can achieve the corresponding payoff in the solution set of the reduced game. In other words, there is no inconsistency in what the players of the reduced game can achieve, in either the original game or the reduced game.

Generally speaking, the consistency property is a very powerful and widely used tool to axiomatize game-theoretic solutions (cf. the surveys on consistency in Driessen, 1991, and Maschler, 1992). In the

100 DRIESSEN


early seventies Sobolev (1973) established the consistency property for the well-known Shapley value with respect to an appropriately chosen reduced game. With Sobolev's result at hand, we are in a position to establish, under certain circumstances, the consistency property for a solution that has a potential representation. For that purpose the consistency property is formulated with respect to a strongly adapted version of the reduced game used by Sobolev. Section 5.2 is devoted to the whole treatment of the relevant consistency property. The proof of this specific consistency property (see Theorem 5.6) is based on the particular characterization of solutions that admit a potential. In summary, this chapter solves the open problem concerning a suitably chosen consistency property for a wide class of game-theoretic solutions, inclusive of the Shapley value. In addition, for any solution that admits a potential representation, we provide an axiomatization in terms of the new consistency property, together with some kind of standardness for two-person games (see Theorem 5.8).

Our modified reduced game differs from Sobolev's reduced game only in that any game is replaced by its image under a bijective mapping on the universal game space (induced by the solution in question). The particular bijective mapping induced by the Shapley value equals the identity. To be exact, Sobolev's explicit description of the reduced game refers to the initial game itself, whereas our similar, but implicit, definition of the modified reduced game is formulated in terms of the image of both the modified reduced game and the initial game (see Theorem 5.6).

In the general framework concerning an arbitrary solution that admits a potential, there is no way to acquire more information about the associated bijective mapping and consequently, the implicit definition of the modified reduced game cannot be explored any further to strengthen the consistency property for this solution. For a certain type of solutions called semivalues (cf. Dubey et al., 1981), however, the associated bijective mapping and its inverse are computable and hence, under these particular circumstances, one gains an insight into the modified reduced game itself. Section 5.3 is devoted to a thorough study of these semivalues and, in the setting of the consistency property for these semivalues, we provide various elegant interpretations of the modified reduced game (see Theorems 5.11 and 5.12).

CONSISTENCY AND POTENTIALS 101


5.2 Consistency Property for Solutions that Admit a Potential

A cooperative game with transferable utility (TU) is a pair $(N, v)$, where N is a nonempty, finite set and $v : 2^N \to \mathbb{R}$ is a characteristic function, defined on the power set of N, satisfying $v(\emptyset) = 0$. An element of N (notation: $i \in N$) and a nonempty subset S of N (notation: $S \subseteq N$ with $S \neq \emptyset$) is called a player and coalition respectively, and the associated real number $v(S)$ is called the worth of coalition S. The size (cardinality) of coalition S is denoted by |S| or, if no ambiguity is possible, by $s$. Particularly, $n$ denotes the size of the player set N. Given a (transferable utility) game $(N, v)$ and a coalition S, we write $(S, v)$ for the subgame obtained by restricting $v$ to subsets of S only. Let $\mathcal{G}$ denote the set of all cooperative games with an arbitrary player set, whereas $\mathcal{G}^N$ denotes the (vector) space of all games with reference to a player set N which is fixed beforehand.

Concerning the solution theory for cooperative TU-games, the chapter is devoted to single-valued solution concepts. Formally, a solution $\varphi$ on $\mathcal{G}$ (or on a particular subclass of $\mathcal{G}$) associates a single payoff vector $\varphi(N, v) = (\varphi_i(N, v))_{i \in N} \in \mathbb{R}^N$ with every TU-game $(N, v)$. The so-called value $\varphi_i(N, v)$ of player $i$ in the game $(N, v)$ represents an assessment by $i$ of his gains from participating in the game. Until further notice, no constraints are imposed upon a solution $\varphi$ on $\mathcal{G}$. In the next definition (cf. Calvo and Santos, 1997; Dragan, 1996; Hart and Mas-Colell, 1989; Ortmann, 1998; Sánchez, 1997) we present two key notions (out of four).

Definition 5.1 Let $\varphi$ be a solution on $\mathcal{G}$.

(i) We say the solution $\varphi$ admits a potential if there exists a function $P : \mathcal{G} \to \mathbb{R}$ satisfying
$$P(N, v) - P(N\setminus\{i\}, v) = \varphi_i(N, v) \quad \text{for all } (N, v) \text{ and all } i \in N. \tag{5.1}$$

(ii) The mapping $v \mapsto v^{\varphi}$ associates with any game $(N, v)$ its solution game $(N, v^{\varphi})$, the characteristic function of which is defined to be
$$v^{\varphi}(S) := \sum_{i \in S} \varphi_i(S, v) \quad \text{for all coalitions } S \subseteq N. \tag{5.2}$$




In words, the potential function represents a scalar evaluation for cooperative TU-games, of which any player's marginal contribution agrees with the player's value according to the relevant solution (notation: $\varphi = \nabla P$). If the potential exists, it is uniquely determined up to an additive constant by the recursive formula
$$P(N, v) = \frac{1}{n}\Bigl(v^{\varphi}(N) + \sum_{i \in N} P(N\setminus\{i\}, v)\Bigr).$$
Usually, it is tacitly assumed that the potential is zero-normalized (i.e., $P(\emptyset, v) = 0$). In fact, it is well-known that the potential function (if it exists) is given by
$$P(N, v) = \sum_{\emptyset \neq S \subseteq N} \frac{(s - 1)!\,(n - s)!}{n!}\; v^{\varphi}(S).$$

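The recursion for the zero-normalized potential is easy to implement for the Shapley value, the efficient case in which the recursion may be fed with the worths $v(S)$ themselves. The sketch below (game values and all names are illustrative) computes the potential and checks that its discrete gradient reproduces the Shapley value:

```python
from fractions import Fraction
from functools import lru_cache
from itertools import combinations
from math import factorial

# Illustrative 3-player game: v({1,2}) = 30, v(N) = 60, all other worths 0.
players = (1, 2, 3)
v = {frozenset(c): 0 for r in range(1, 4) for c in combinations(players, r)}
v[frozenset((1, 2))] = 30
v[frozenset(players)] = 60

@lru_cache(maxsize=None)
def potential(S):
    """Zero-normalized potential via the recursion
    |S| P(S) = v(S) + sum over i in S of P(S minus {i})."""
    if not S:
        return Fraction(0)
    return (v[S] + sum(potential(S - {i}) for i in S)) / len(S)

def shapley(i):
    """Shapley value as a probabilistic average of marginal contributions."""
    n = len(players)
    rest = [j for j in players if j != i]
    return sum(Fraction(factorial(len(S)) * factorial(n - len(S) - 1), factorial(n))
               * (v[frozenset(S) | {i}] - (v[frozenset(S)] if S else 0))
               for r in range(n) for S in map(set, combinations(rest, r)))

for i in players:
    # The discrete gradient of the potential is the Shapley value.
    assert potential(frozenset(players)) - potential(frozenset(players) - {i}) == shapley(i)
print([int(shapley(i)) for i in players])  # [25, 25, 10]
```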

By (5.2), the worth $v^{\varphi}(S)$ of coalition S in the solution game represents the overall gains (according to the solution $\varphi$) to the members of S from participating in the induced subgame $(S, v)$ (on the understanding that players outside S are not supposed to cooperate). Generally speaking, the solution game differs from the initial game. Notice that both games are the same if and only if the solution $\varphi$ meets the efficiency principle, i.e., $\sum_{i \in N} \varphi_i(N, v) = v(N)$ for all games.

The core topic involves the so-called consistency treatment for solutions that admit a potential. For that purpose, we need to recall one basic theorem from Calvo and Santos (1997), the main result of which refers to the well-known Shapley value. With the help of Sobolev's (1973) pioneering work in the early seventies on the consistency property for the Shapley value, we are able to prove, under certain circumstances, a similar consistency property for (not necessarily efficient) solutions that admit a potential.

Definition 5.2 The Shapley value $Sh_i(N, v)$ of player $i \in N$ in the game $(N, v)$ is defined as follows (cf. Shapley, 1953):
$$Sh_i(N, v) = \sum_{S \subseteq N\setminus\{i\}} \frac{|S|!\,(n - 1 - |S|)!}{n!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr).$$


Theorem 5.3 Consider the setting of Definitions 5.1 and 5.2.

(i) (Cf. Calvo and Santos, 1997, Theorem, page 178.) Let $\varphi$ be a solution on $\mathcal{G}$. Then $\varphi$ admits a potential if and only if
$$\varphi(N, v) = Sh(N, v^{\varphi}) \quad \text{for all } (N, v).$$
In words, any solution that admits a potential equals the Shapley value of the associated solution game.

(ii) (Cf. Hart and Mas-Colell, 1989, Theorem A, page 591.) The Shapley value is the unique solution on $\mathcal{G}$ that admits a potential and is efficient as well.
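Theorem 5.3(i) can be tested numerically on a concrete non-efficient solution. The Banzhaf value, with uniform weights $1/2^{n-1}$ on marginal contributions, is a standard example of a solution admitting a potential; the sketch below (all names ours) builds its solution game as in (5.2) and checks that the Shapley value of that game returns the Banzhaf value itself:

```python
from fractions import Fraction
from itertools import combinations
from math import factorial

def subsets(players, proper_of=None):
    """All subsets (including the empty set) of the players, minus one player."""
    base = [p for p in players if p != proper_of]
    for r in range(len(base) + 1):
        for c in combinations(base, r):
            yield frozenset(c)

def banzhaf(players, v, i):
    """Banzhaf value: uniform weights 1/2^(n-1) on marginal contributions."""
    n = len(players)
    return sum(Fraction(v.get(S | {i}, 0) - v.get(S, 0), 2 ** (n - 1))
               for S in subsets(players, proper_of=i))

def shapley(players, v, i):
    n = len(players)
    return sum(Fraction(factorial(len(S)) * factorial(n - 1 - len(S)), factorial(n))
               * (v.get(S | {i}, 0) - v.get(S, 0))
               for S in subsets(players, proper_of=i))

players = frozenset((1, 2, 3))
v = {frozenset((1, 2)): 1, players: 1}   # unanimity game of {1,2} on 3 players

# Solution game (5.2): w(S) = sum over i in S of the Banzhaf value of (S, v|S).
w = {S: sum(banzhaf(S, {T: val for T, val in v.items() if T <= S}, i) for i in S)
     for S in subsets(players) if S}

for i in players:
    assert shapley(players, w, i) == banzhaf(players, v, i)
print([str(banzhaf(players, v, i)) for i in sorted(players)])  # ['1/2', '1/2', '0']
```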

Definition 5.4 (Cf. Sobolev, 1973; Driessen, 1991.)

(i) With an $n$-person game $(N, v)$, $n \ge 2$, a player $i \in N$, and his payoff $x_i$ (provided by the solution under consideration), there is associated the reduced game $(N\setminus\{i\}, v^{x_i})$ with player set $N\setminus\{i\}$, defined by
$$v^{x_i}(S) = \frac{|S|}{n - 1}\,\bigl(v(S \cup \{i\}) - x_i\bigr) + \frac{n - 1 - |S|}{n - 1}\, v(S) \quad \text{for all nonempty } S \subseteq N\setminus\{i\}. \tag{5.4}$$

In words, there is no inconsistency in what each of the players in the reduced game will get according to the solution $\varphi$ in either the reduced game or the initial game.

Theorem 5.5 (Cf. Sobolev, 1973; Driessen, 1991.) The Shapley value on $\mathcal{G}$ is consistent with respect to the reduced game of the form (5.4).
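Sobolev's reduced game is commonly given in the probabilistic form $v^{x_i}(S) = \frac{s}{n-1}\,(v(S \cup \{i\}) - x_i) + \frac{n-1-s}{n-1}\, v(S)$ for nonempty $S \subseteq N\setminus\{i\}$; taking that form as a working assumption, the consistency of Theorem 5.5 can be checked numerically, as in the sketch below (game values and names are illustrative):

```python
from fractions import Fraction
from itertools import combinations
from math import factorial

def subsets(base):
    for r in range(len(base) + 1):
        for c in combinations(sorted(base), r):
            yield frozenset(c)

def shapley(players, v):
    """Shapley payoff vector of the game (players, v), exact arithmetic."""
    n = len(players)
    out = {}
    for i in players:
        out[i] = sum(
            Fraction(factorial(len(S)) * factorial(n - 1 - len(S)), factorial(n))
            * (v.get(S | {i}, 0) - v.get(S, 0))
            for S in subsets(players - {i}))
    return out

# An arbitrary 3-player game (values illustrative).
N = frozenset((1, 2, 3))
v = {frozenset((1,)): 1, frozenset((2,)): 0, frozenset((3,)): 2,
     frozenset((1, 2)): 4, frozenset((1, 3)): 3, frozenset((2, 3)): 5,
     N: 10}

x = shapley(N, v)
i = 3                      # remove player 3, paid x[3]
M = N - {i}
n = len(N)

# Probabilistic form of Sobolev's reduced game (assumed here):
# v'(S) = (s/(n-1)) (v(S + {i}) - x_i) + ((n-1-s)/(n-1)) v(S).
reduced = {}
for S in subsets(M):
    if S:
        s = len(S)
        reduced[S] = (Fraction(s, n - 1) * (v[S | {i}] - x[i])
                      + Fraction(n - 1 - s, n - 1) * v[S])

# Consistency: the remaining players get the same Shapley payoffs.
assert shapley(M, reduced) == {j: x[j] for j in M}
print(dict(sorted(x.items())))
```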

Now we are in a position to state and prove a similar consistency property for solutions that admit a potential. Actually, for a given solution $\varphi$, the appropriately chosen reduced game resembles Sobolev's reduced game (5.4), but they differ in that the initial game is replaced by the associated solution game. In summary, it turns out that the


(ii) A solution $\varphi$ on $\mathcal{G}$ is said to be consistent with respect to this reduced game if it holds that
$$\varphi_j(N\setminus\{i\}, v^{x_i}) = \varphi_j(N, v) \quad \text{for all } j \in N\setminus\{i\}, \text{ where } x = \varphi(N, v). \tag{5.5}$$


cornerstone of the consistency approach to (not necessarily efficient) solutions is the solution game instead of the game itself. Consequently, we have to define the modified reduced game implicitly by means of its associated solution game, on the understanding that a one-to-one correspondence (bijection) between games and solution games is supposed to be available.¹

¹In Dragan (1996, Definition 11, page 459), the solution game plays an identically prominent role in defining the reduced game; the characteristic function of which is, however, of a different type since it deals with the reduced game in the sense of Hart and Mas-Colell (1989). Our model deals with the reduced game in the sense of Sobolev.

Theorem 5.6 Let $\varphi$ be a solution on $\mathcal{G}$ that admits a potential. Suppose that the induced mapping $v \mapsto v^{\varphi}$, as given by (5.2), is a bijection. With an $n$-person game $(N, v)$, a player $i \in N$, and his payoff $x_i = \varphi_i(N, v)$ (provided $n \ge 2$), there is associated the modified reduced game with player set $N\setminus\{i\}$, which is defined implicitly by its associated solution game, the characteristic function of which is defined to be Sobolev's reduction (5.4) of the solution game $(N, v^{\varphi})$ at the payoff $x_i$. (5.6)

Then the solution $\varphi$ on $\mathcal{G}$ is consistent with respect to this modified reduced game, i.e., the players of $N\setminus\{i\}$ receive in the modified reduced game, according to $\varphi$, the same payoffs as in the initial game $(N, v)$. (5.7)

That is, there is no inconsistency in what each of the players in the reduced game will get according to the solution $\varphi$ in either the reduced game or the initial game.

Proof. Fix both the game $(N, v)$ and a player $i \in N$ (where $n \ge 2$). Write $x$ instead of $\varphi(N, v)$. Since $\varphi$ admits a potential, it holds, by Theorem 5.3(i), that $\varphi(N, v) = Sh(N, v^{\varphi})$. The essential part of the proof concerns the claim that the solution game (5.6) technique applied to the modified reduced game agrees with Sobolev's reduced


game (5.4) technique applied to the initial solution game. Formally, we claim the following:

The solution game of the modified reduced game coincides with Sobolev's reduced game (5.4) of the solution game $(N, v^{\varphi})$ at the payoff $x_i$. (5.8)

Indeed, from both types of reduced games we deduce that this equality of characteristic functions holds for all nonempty coalitions $S \subseteq N\setminus\{i\}$. This proves (5.8). From this we deduce that the following chain of four equalities holds for all $j \in N\setminus\{i\}$: the $\varphi$-payoff of player $j$ in the modified reduced game equals the Shapley value of $j$ in its associated solution game, which by (5.8) equals the Shapley value of $j$ in Sobolev's reduced game of $(N, v^{\varphi})$, which equals $Sh_j(N, v^{\varphi})$, which equals $\varphi_j(N, v)$; here the first and last equality are due to Theorem 5.3(i) and the third equality is due to Theorem 5.5 concerning the consistency property (5.5) for the Shapley value Sh. This completes the full proof of the consistency property for the solution $\varphi$.

Clearly, by (5.1)–(5.2), a solution that admits a (zero-normalized) potential satisfies the standardness for two-person games. We conclude this section with the next axiomatization.

Definition 5.7 Let $\varphi$ and $\psi$ be two solutions on $\mathcal{G}$. We say the solution $\psi$ is standard for two-person games (with reference to $\varphi$) if, for every two-person game $(N, v)$ with $N = \{i, j\}$ and every player $i \in N$, it holds that
$$\psi_i(N, v) = v^{\varphi}(\{i\}) + \tfrac{1}{2}\bigl(v^{\varphi}(N) - v^{\varphi}(\{i\}) - v^{\varphi}(\{j\})\bigr).$$


Theorem 5.8 Let $\varphi$ be a solution on $\mathcal{G}$ that admits a potential. Suppose that the induced mapping $v \mapsto v^{\varphi}$, as given by (5.2), is a bijection. Then $\varphi$ is the unique solution on $\mathcal{G}$ that satisfies the following two properties:

(i) Consistency with respect to the modified reduced game implicitly defined through its associated solution game (5.6) (with reference to the given solution $\varphi$).

(ii) Standardness for two-person games (with reference to $\varphi$).

Proof. We show the uniqueness part of Theorem 5.8. Besides thegiven solution suppose that a solution on satisfies the consistencyproperty and the for two-person games. We prove byinduction on the size of the player set N thatfor every game The case holds trivially because of the

for two-person games applied to both solutions. Fromnow on fix an game with Due to the inductionhypothesis, it holds that for every game with

Note that, for all all it follows immediately from (5.6) that

CONSISTENCY AND POTENTIALS 107

In other words, the two solution games and

are strategically equivalent (with reference to the translation vector and thus, the covariance property for the Shapley value applies in the sense that it holds

For all and all we obtain the following chain of equalities:

by consistency for

by induction hypothesis

by Theorem 5.3(i)

for all


Since we arrive at the conclusion that for all Thus, for every game as was to be shown.

5.3 Consistency Property for Pseudovalues: a Detailed Exposition

In this section we aim to clarify that, if we deal with a particular type of solutions called pseudovalues, then various elegant interpretations arise in the study of the modified reduced game as given by (5.6). Besides these appealing interpretations, we claim that the implicit definition of the modified reduced game can be transformed into an explicit one, although the resulting explicit description becomes rather laborious.

In Dubey et al. (1981) a semivalue on is defined to be a function which satisfies the linearity, symmetry, monotonicity,

and projection axioms. It was shown (Theorem 1, page 123) that every semivalue can be expressed by the following formula, which will be used as our starting point (but we omit certain non-negativity constraints). Throughout this section, lower-case letters and so on are supposed to be non-negative integers because they are meant to refer to


By interchanging the roles of players and the latter result yields

We conclude that

by covariance for

by Theorem

by consistency for

for all

for all


sizes of coalitions. For the sake of notation, let represent an arbitrary collection of real numbers called weights, meant to be read as

Definition 5.9 We say a solution on is a pseudovalue on if there exists a collection of weights such that the following two conditions hold:


(i)

(ii) the collection of weights possesses the upwards triangleproperty, i.e.,

In words, in the setting of populations with a variable size, the “weight” of the formation of a coalition of size in an population equals the sum of the “weights” of the two events which may arise by enlarging the population with one person (namely, two coalitions of consecutive sizes and respectively in an

population).
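The upwards triangle property of Definition 5.9(ii) can be checked mechanically for the Shapley weights. A minimal sketch in Python; the closed form (s-1)!(n-s)!/n! for the weight of a size-s coalition containing a fixed player is the standard Shapley weight, while the function and variable names are my own:

```python
from math import factorial

def shapley_weight(n, s):
    # weight of a size-s coalition containing a fixed player in an n-person game
    return factorial(s - 1) * factorial(n - s) / factorial(n)

# upwards triangle property: the weight of a size-s coalition in an n-person
# population equals the sum of the weights of the two coalition sizes s and
# s + 1 that may arise in an (n+1)-person population
for n in range(1, 9):
    for s in range(1, n + 1):
        lhs = shapley_weight(n, s)
        rhs = shapley_weight(n + 1, s) + shapley_weight(n + 1, s + 1)
        assert abs(lhs - rhs) < 1e-12
```

The assertions pass, which is the numerical counterpart of the algebraic identity (s-1)!(n-s)!(n+1-s+s)/(n+1)! = (s-1)!(n-s)!/n!.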

For reasons that will be explained later on, no further constraints are imposed upon the weights (e.g., they are not necessarily non-negative). A pseudovalue with reference to non-negative weights is known as a semivalue (Dubey et al., 1981). It is straightforward to check that any pseudovalue admits a potential (due to the upwards triangle property for ), where the potential function is given by

To start with, we determine an explicit formula for the associated solution game. As an adjunct, we become engaged with induced collections of weights verifying the upwards triangle property.

for all and all

for all


Proposition 5.10 Let be an arbitrary collection of weights.

(i) If the collection possesses the upwards triangle property, so does the induced collection defined by

(ii) For every given that for all then the weights can be rediscovered as follows:

(iii) Suppose (5.13) holds for all If the collection possesses the upwards triangle property, so does the induced collection

(iv) The special case yields and for all


For expositional convenience, the computational but straightforward proof of Proposition 5.10 is postponed until Section 5.5. By (5.12)–(5.13), there exists a natural one-to-one correspondence between collections of weights that satisfy the upwards triangle property. In particular, any pseudovalue induces another pseudovalue, the weights of which are given by (5.12) (and vice versa, by (5.13)). For instance, by part (iv), the Shapley value induces the pseudovalue that agrees with the marginal contribution principle in the sense that

for every game and all Another well-known pseudovalue, called the Banzhaf value, corresponds to the uniform weights for all while the induced pseudovalue is associated with the weights for all Note that the smallest weights are negative. Because of this observation, we do not want to exclude pseudovalues associated with not necessarily non-negative weights. Throughout the remainder of this section, the induced pseudovalue turns out to be of particular interest in order to provide an appealing explicit and implicit interpretation of the modified reduced game.

all
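For concreteness, the Banzhaf value with its uniform weights 2^{1-n} can be computed directly from the semivalue formula. A sketch; the uniform weight is the one named in the text, whereas the three-person majority game and the helper name `banzhaf` are my own illustration:

```python
from itertools import combinations

def banzhaf(v, players):
    # semivalue with the uniform weight 2**(1 - n) on every marginal contribution
    n = len(players)
    value = {}
    for i in players:
        others = [j for j in players if j != i]
        total = sum(v(set(S) | {i}) - v(set(S))
                    for r in range(n) for S in combinations(others, r))
        value[i] = total / 2 ** (n - 1)
    return value

# three-person majority game: a coalition wins iff it has at least two members
v = lambda S: 1.0 if len(S) >= 2 else 0.0
print(banzhaf(v, [1, 2, 3]))  # {1: 0.5, 2: 0.5, 3: 0.5}
```

Note the total payoff 1.5 exceeds the worth v(N) = 1, which illustrates that pseudovalues need not be efficient.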


In the second stage we claim two preliminary results, each of which is of interest on its own. Firstly, by (5.15), we state that the mapping induced by an initial pseudovalue may be interpreted as the potential function of the induced pseudovalue (in the sense that

for every game ). Secondly, by (5.16), in comparing the two solution games associated with the modified reduced game and the initial game respectively, the increase (decrease) to the worth of any coalition turns out to be coalitionally-size-proportional to the increase (decrease) to the payoff of the removed player, taking into account his initial payoff and his payoff according to the induced pseudovalue (with respect to the subgame the player set of which consists of the partnership between the coalition involved and the removed player).

In the third and final stage we claim, by (5.19), that a specifically chosen weighted sum of the latter increases (decreases) to the payoff of the removed player represents the increase (decrease) to the worth of any coalition, in comparing the modified reduced game and the initial game respectively. The recursively computable coefficients used in the relevant weighted sum are identical to those which appear in the explicit determination of the inverse of the bijective mapping associated with the pseudovalue. This mapping turns out to be bijective under very mild conditions imposed upon the underlying collection of weights that prescribe the pseudovalue.

Theorem 5.11 Let be a pseudovalue on of the form (5.10) associated with the collection of weights Let be the induced mapping as given by (5.2). Further, let be the induced pseudovalue on associated with the induced collection of weights as given by (5.12). Then the following holds:

(i)

(ii)

(iii)

for all all all and all (provided


Proof. (i) Let and By assumption of a pseudovalue of the form (5.10) applied to the subgame and by some straightforward combinatorial computations, we obtain

(ii) Let and By (5.14), the incremental return of player with respect to the coalition in the associated solution game

is determined as follows:

where the third equality is due to the upwards triangle property for (see Proposition 5.10(i)).
(iii) Let and (provided ). From the implicit definition of the modified reduced game as given by (5.6), and (5.15) applied to respectively, we derive the following:

Theorem 5.12 Let be a pseudovalue on of the form (5.10) associated with the collection of weights satisfying


for all


for all Let be the induced mapping as given by (5.2). Further, let be the induced collection of weights as given by (5.12). For every let the induced collection of constants

be defined recursively by

Then the following holds:

(i) Given that for every game and all

(see (5.14)), the data of any game can be rediscovered as follows:

(ii) Let be the induced pseudovalue on associated with Then it holds

for all all all and all (provided ).

(iii) For the special case (5.19) reduces to Sobolev’s reduced game (5.4).

The rather technical proof of Theorem 5.12 is postponed until Section 5.5.


for all and

for all

for all where


Remark 5.13 To conclude, we specify the explicit determination of the worth of one- and two-person coalitions in the modified reduced game (5.6), regardless of the number of players in the initial game.

Let be a pseudovalue on of the form (5.10) associated with the collection of weights satisfying for all Let be the induced pseudovalue on associated with the induced collection of weights as given by (5.12).

Consider an arbitrary game and let By applying (5.19) to one- and two-person coalitions and

respectively, and (5.10) to the pseudovalue we obtain that the worth of one- and two-person coalitions in the modified reduced game of the form (5.6) is determined as follows

(recall that, by (5.17),


In the framework of three-person games, we obtained a complete description of the two-person modified reduced game and, by tedious but straightforward calculations, one may verify that the consistency property holds for the pseudovalue with respect to three-person games. One useful tool is the upwards triangle property for

Remark 5.14 The relationship (5.15) is also useful to provide, in the framework of pseudovalues, an alternative proof of the fundamental equivalence theorem between any pseudovalue and the Shapley value,


that is, for every game Let us outline this alternative proof, which differs from the proofs by Calvo and Santos (1997) and Sánchez (1997) of the equivalence theorem applied to solutions that admit a potential.

Let Recall that, by straightforward combinatorial computations, the solution game is determined by (5.14) and, in turn, the incremental returns of any player in the solution game are determined by (5.15), i.e.,

To justify the second-to-last equality, we need to establish the following claim:


for all and all From this and some additional combinatorial computations, we deduce that, for all the following chain of equalities holds:

or equivalently,


for all The proof of the claim (5.22) proceeds by induction on the size where is fixed. Recall (5.12) and the upwards triangle property of The inductive proof of (5.22) is left to the reader.

5.4 Concluding remarks

Definition 5.1 deals with the existence of the so-called additive potential representation, in the sense that each component of the game-theoretic solution may be interpreted as the incremental return with respect to the potential function. In Ortmann (2000) the multiplicative potential approach to the solution theory for cooperative games is based on the quotient instead of the difference.

Definition 5.15 (Cf. Ortmann, 2000.) Let be a solution on the set of positive cooperative games. We say the solution admits a multiplicative potential if there exists a function satisfying and


As noted in Ortmann (2000), there exists a unique solution on that admits a multiplicative potential and is efficient as well. This unique solution, however, cannot be represented in an explicit manner, in contrast to the explicit formula (5.3) for the Shapley value in the framework of efficient solutions that admit an additive potential. In addition to the pioneering work by Ortmann (2000), a more detailed theory of solutions that admit a multiplicative potential is presented in Driessen and Calvo (2001). It is still an outstanding problem to study the various types of consistency properties for these solutions that admit a multiplicative potential.
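In symbols, the contrast between the additive and the multiplicative potential representation can be sketched as follows. This is a sketch only: the symbol ψ for the solution and P, Q for the two potential functions are my notation, with the additive form following Hart and Mas-Colell (1989) and the multiplicative form following Ortmann (2000):

```latex
% additive potential: each payoff is an incremental return
\psi_i(N,v) \;=\; P(N,v) - P(N \setminus \{i\}, v)
  \qquad \text{for all } i \in N,
% multiplicative potential: the difference is replaced by a quotient
\psi_i(N,v) \;=\; \frac{Q(N,v)}{Q(N \setminus \{i\}, v)}
  \qquad \text{for all } i \in N.
```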

5.5 Two technical proofs

Proof of Proposition 5.10.
(i) Let By (5.12) and the upwards triangle property (5.11) for

for all and all


(ii) Fix The proof of (5.13) proceeds by backwards induction on the size For (5.13) holds because of

For we deduce from (5.12) and the induction hypothesis applied to that it holds

(iii) For every and write Let

and On the one hand, we deduce from the assumption (5.13) that it holds

it holds

Page 134: Borm_Chapters in Game Theory-In Honor of Stef Tijs

On the other hand, we deduce from the upwards triangle property for that it holds

Since both computational methods yield the very same outcome, we conclude that Finally, the statement in part (iv) is a direct consequence of (5.12).

Proof of Theorem 5.12. Let
(i) For every it suffices to prove the next equality:

Fix with and We aim to determine the coefficient of the term in the sum given by the right-hand side of (5.24). The term occurs in any expression as long as provided that Thus, we need only consider those coalitions R satisfying with and each such coalition R, say of size

induces the term

Notice that, for any size there exist coalitions R of size satisfying Hence, for every fixed

the coefficient of the term in the sum given by the right-hand side of (5.24) is determined by the next sum:


for all and all


This proves (5.24).
(ii) (5.19) is a direct consequence of both (5.16) and (5.18), applied to the initial game and the reduced game as well.
(iii) By (5.12), implies and for all

all By (5.17), whenever and thus, whenever Therefore, (5.19) reduces to the next equality:

Obviously, the relevant equality agrees with Sobolev’s reduced game (5.4).

By construction based on (5.17), for all it holds

References

Calvo, E., and J.C. Santos (1997): “Potentials in cooperative TU-games,” Mathematical Social Sciences, 34, 175–190.

Dragan, I. (1996): “New mathematical properties of the Banzhaf value,” European Journal of Operational Research, 95, 451–463.

Driessen, T.S.H. (1988): Cooperative Games, Solutions, and Applications. Dordrecht: Kluwer Academic Publishers.

Driessen, T.S.H. (1991): “A survey of consistency properties in cooperative game theory,” SIAM Review, 33, 43–59.

Driessen, T.S.H., and E. Calvo (2001): “A multiplicative potential approach to solutions for cooperative TU-games,” Memorandum No. 1570,


Faculty of Mathematical Sciences, University of Twente, Enschede, The Netherlands.

Dubey, P., A. Neyman, and R.J. Weber (1981): “Value theory withoutefficiency,” Mathematics of Operations Research, 6, 122–128.

Hart, S., and A. Mas-Colell (1989): “Potential, value, and consistency,”Econometrica, 57, 589–614.

Maschler, M. (1992): “The bargaining set, kernel, and nucleolus,” in: Aumann, R.J., and S. Hart (eds.), Handbook of Game Theory with Economic Applications, Volume 1. Amsterdam: Elsevier Science Publishers, 591–667.

Myerson, R. (1980): “Conference structures and fair allocation rules,”International Journal of Game Theory, 9, 169–182.

Ortmann, K.M. (1998): “Conservation of energy in value theory,” Mathematical Methods of Operations Research, 47, 423–450.

Ortmann, K.M. (2000): “The proportional value for positive cooperativegames,” Mathematical Methods of Operations Research, 51, 235–248.

Sánchez S., F. (1997): “Balanced contributions in the solution of coop-erative games,” Games and Economic Behavior, 20, 161–168.

Shapley, L.S. (1953): “A value for n-person games,” Annals of Mathematics Studies, 28, 307–317.

Sobolev, A.I. (1973): “The functional equations that give the payoffs of the players in an n-person game,” in: Vilkas, E. (ed.), Advances in Game Theory. Vilnius: Izdat. “Mintis”, 151–153.


Chapter 6

On the Set of Equilibria ofa Bimatrix Game: a Survey

BY MATHIJS JANSEN, PETER JURG, AND DRIES VERMEULEN

6.1 Introduction

Any survey on this topic should start with the celebrated results obtained by Nash. First of all, he showed that every non-cooperative game in normal form has an equilibrium in mixed strategies (cf. Nash, 1950). He also established the well-known characterization of the equilibrium condition, stating that a strategy profile is an equilibrium if and only if each player only puts positive weight on those pure strategies that are pure best responses to the strategies currently played by the other players (cf. Nash, 1951).

In the special case of matrix games the existence of equilibria was already established by von Neumann and Morgenstern (1944). Their results, though, show more than just that. They show, for example, that the collection of equilibria is a polytope. Furthermore, they explain how one can use linear programming techniques to actually compute such an equilibrium.
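As a concrete illustration of the matrix-game case: a fully mixed 2×2 zero-sum game can even be solved in closed form by making each player indifferent between the opponent's two pure strategies (the general case is the linear program alluded to above). The formulas below are the standard textbook ones, not taken from this survey, and the function name is my own:

```python
def solve_2x2(a11, a12, a21, a22):
    # optimal strategies and value of a 2x2 zero-sum game without a saddle
    # point, obtained by equalizing each player's expected payoff across the
    # opponent's two pure strategies
    d = a11 - a12 - a21 + a22
    p = (a22 - a21) / d              # row player's probability of row 1
    q = (a22 - a12) / d              # column player's probability of column 1
    value = (a11 * a22 - a12 * a21) / d
    return p, q, value

# matching pennies: both players mix uniformly and the value is 0
print(solve_2x2(1, -1, -1, 1))   # (0.5, 0.5, 0.0)
```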

Once the existence of equilibria was also established for bimatrix games, several authors, e.g. Vorobev, Kuhn, Mangasarian, Mills and Winkels, tried to develop methods based on linear programming to compute equilibria for bimatrix games. Later on, authors like Winkels and Jansen also generalized the structure result and showed that the set of


P. Borm and H. Peters (eds.), Chapters in Game Theory, 121–142.© 2002 Kluwer Academic Publishers. Printed in the Netherlands.


equilibria of a bimatrix game can be written as the union of a finite number of polytopes. Such a representation of the set of equilibria is called a decomposition in this survey.

SEVEN PROOFS During the last few decades several different decompositions have been given. We will discuss seven of them and briefly comment on the differences and similarities between these decompositions. The first three can be seen as variations on the same line of reasoning. In this approach, first the extreme points of the polytopes involved in the decomposition of the equilibrium set are characterized. Subsequently an analysis is given of exactly how groups of extreme points generate one such polytope of the decomposition. We will first discuss these three methods.

(i) In the approach by Vorobev (1958) and Kuhn (1961) (as it is described in this survey) first a description is given of the collection of strategies of player 1 that can be combined to an extreme equilibrium. Then it is shown that

122 EQUILIBRIA OF A BIMATRIX GAME

(1) this collection is finite,

(2) the Cartesian product of the convex hull of a subset of with all strategies of player 2 that combine to an equilibrium with any one strategy of the subset in question is a polytope, and

(3) any equilibrium is an element of such a product set.

(ii) Winkels (1979) basically uses the same steps in his proofs. The improvement over the proof of Vorobev and Kuhn is that the definition of the set is a bit different. This difference has the advantage that the proofs become shorter and more transparent.

(iii) Mangasarian’s (1964) proof is based on a more symmetric treatment of the players. He looks at Cartesian products of subsets of with subsets of and shows that, whenever such a product is included in the equilibrium set, so is the convex hull of this product. Moreover, any one equilibrium is an element of the convex hull of at least one such product.

The latter four proofs take what can be called a dual approach. Based on the characterization of the notion of an equilibrium in terms of carriers and best responses, the defining systems of linear inequalities are given


directly. Subsequently it is shown that any solution of such a system is an equilibrium and that any equilibrium is indeed a solution of at least one of the systems generated by the approach in question.

(iv) The proof in Jansen (1981) is based on the two observations that any convex subset of the equilibrium set is contained in a maximal convex subset of the equilibrium set and that any maximal convex subset is a polytope. Thus, since each equilibrium by itself constitutes a convex subset of the equilibrium set, we again get the result that the equilibrium set is the union of polytopes. The fact that these polytopes are finite in number follows from the characterization of maximal convex subsets of the equilibrium set in terms of the carriers and best responses of the equilibria in such a subset.

(v) Undoubtedly the shortest proof is by Quintas (1989). He shows how to associate with each subset of the collection of pure strategies of player 1 and each subset of the collection of pure strategies of player 2 a polytope of equilibria. Since each equilibrium is evidently contained in such a polytope, we easily get the result of Vorobev.

(vi) The approach of Jurg and Jansen (cf. Jurg, 1993) looks very much like the proof by Quintas. However, their approach yields a straightforward correspondence between the subsets of pure strategies used to generate the polytopes of the decomposition and faces of the equilibrium set.

(vii) The approach of Vermeulen and Jansen (1994) can be seen as a geometrical variation on the same theme. Its advantage, though, is that it can easily be adjusted to a proof of the same result for the collection of perfect and proper equilibria.

NEW ASPECTS Although this chapter is intended to be a survey, we would like to point out that we also used modern insights to get shorter or more transparent proofs of the original results. Further, we used an idea of Winkels in order to show how the Mangasarian approach can be used to obtain the decomposition result. Finally, we prove that the two decompositions of Vorobev and Winkels are in fact identical by showing that their (different) definitions of extreme strategies coincide.

JANSEN, JURG, AND VERMEULEN 123

Notation The unit vectors in are denoted by For we write For we denote by conv(S) the convex hull of S and by cl(S) the closure of S. For a convex set we denote by relint(C) the


relative interior of C and by ext(C) the set of extreme points of C. For a finite set T, the collection of non-empty subsets of T is denoted by

6.2 Bimatrix Games and Equilibria

An game is played by two players, player 1 and player 2. Player 1 has a finite set and player 2 has a finite set

of pure strategies. The payoff matrices of player 1 and of player 2 are denoted by A and B respectively. This game is denoted by (A, B).

Now the game (A, B) is played as follows. Players 1 and 2 choose, independently of each other, a strategy and respectively. Here can be seen as the probability that player 1 (2) chooses his

row The (expected) payoff for player 1 is and the expected payoff to player 2 is

A strategy pair is an equilibrium for the game (A, B) if

and

The set of all equilibria for the game (A, B) is denoted by E(A, B). By a theorem of Nash (1950) this set is non-empty for all bimatrix games.
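Because the expected payoffs are linear in each player's own strategy, membership in E(A, B) only needs to be checked against pure deviations. A minimal sketch in plain Python; the nested-list matrix layout and the helper name `is_equilibrium` are my own conventions:

```python
def is_equilibrium(A, B, x, y, tol=1e-9):
    # (x, y) is an equilibrium iff no pure deviation improves either payoff
    m, n = len(A), len(A[0])
    payoff1 = sum(x[i] * A[i][j] * y[j] for i in range(m) for j in range(n))
    payoff2 = sum(x[i] * B[i][j] * y[j] for i in range(m) for j in range(n))
    rows = [sum(A[i][j] * y[j] for j in range(n)) for i in range(m)]  # pure replies of player 1
    cols = [sum(x[i] * B[i][j] for i in range(m)) for j in range(n)]  # pure replies of player 2
    return max(rows) <= payoff1 + tol and max(cols) <= payoff2 + tol

# matching pennies: the uniform mix is the unique equilibrium
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
print(is_equilibrium(A, B, [0.5, 0.5], [0.5, 0.5]))  # True
print(is_equilibrium(A, B, [1, 0], [1, 0]))          # False
```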

6.3 Some Observations by Nash

In a survey about equilibria it is inevitable to start with a description of concepts and results that can be found in John Nash’s seminal paper ‘Non-cooperative games’ of 1951. Even in this first paper on the existence of equilibria, Nash evidently realized that the key to a polyhedral description of the equilibrium set lies in the characterization of equilibria in terms of what are nowadays called carriers and best responses.

Since it is indeed the key to all known polyhedral descriptions of the Nash equilibrium set, we will first have a look at his characterization of equilibria. It can be found at the bottom of page 287 of Nash’s paper, but we will use the more modern terminology of Heuer and Millham (1976). Following them, we introduce for a strategy the carrier

and the set for all of



pure best replies of player 2 to For a strategy the sets and are defined in the same way.

Lemma 6.1 Let (A, B) be a bimatrix game and let be a strategy pair. Then if and only if and

Proof. (a) If then

So implies that that is: Similarly, one shows that

(b) If and then for all

Similarly, for all Hence,

Next we consider the concepts of interchangeability and sub-solutions introduced by Nash.

A subset S of the set of equilibria of a bimatrix game satisfies the interchangeability condition if for any pair we also have that
If a subset S of the set of equilibria has the interchangeability property, then where

are called the factor sets of S. Since, obviously, a set of equilibria has the interchangeability property, sets of this form are precisely the sets with the interchangeability property. In Heuer and Millham (1976) sets of equilibria of the form were called Nash sets.
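Interchangeability is easy to test mechanically: pair every first coordinate in the set with every second coordinate and verify that each pair is again an equilibrium. A self-contained sketch; the pure coordination game at the end is my own example, not one from the text:

```python
def is_equilibrium(A, B, x, y, tol=1e-9):
    # no pure deviation improves either player's expected payoff
    m, n = len(A), len(A[0])
    p1 = sum(x[i] * A[i][j] * y[j] for i in range(m) for j in range(n))
    p2 = sum(x[i] * B[i][j] * y[j] for i in range(m) for j in range(n))
    return (max(sum(A[i][j] * y[j] for j in range(n)) for i in range(m)) <= p1 + tol
            and max(sum(x[i] * B[i][j] for i in range(m)) for j in range(n)) <= p2 + tol)

def is_nash_set(A, B, S):
    # interchangeability: any mix-and-match of coordinates stays in E(A, B)
    return all(is_equilibrium(A, B, x, y) for (x, _) in S for (_, y) in S)

# pure coordination game: its two pure equilibria are NOT interchangeable,
# so together they do not form a Nash set
A = B = [[1, 0], [0, 1]]
S = [([1, 0], [1, 0]), ([0, 1], [0, 1])]
print(is_nash_set(A, B, S))       # False
print(is_nash_set(A, B, [S[0]]))  # True
```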

Nash used the term sub-solution for a Nash set that is not properly contained in another Nash set. In this chapter we prefer the term maximal Nash set, a term that was introduced by Heuer and Millham as well.


and


Nash gave a proof of the following result. Because it will be generalized in 6.15, the proof is left to the reader for now.

Lemma 6.2 For a bimatrix game, a maximal Nash set is the productof two convex, compact sets.

Finally, Nash proved that for a bimatrix game with only one maximal Nash set (he called such a game solvable) the set of equilibria is the product of two polytopes.

6.4 The Approach of Vorobev and Kuhn

In this section we will describe a result of Vorobev (1958) and its improved version by Kuhn (1961). Their method can be seen as a “one-sided” approach to the decomposition of the Nash equilibrium set into a finite number of (bounded) polyhedral sets.

They place themselves in the position of player 1. First they analyze which strategies of player 1 occur as extreme elements of certain polytopes of strategies that, combined with a finite number of strategies of player 2, are equilibria. This set of extreme elements is indicated by Then they show that, for any subset P of , the collection L(P) of strategies of player 2 that combine to an equilibrium with any element in P is polyhedral. Finally they show that conv(P) × L(P), a polytope, is a subset of the Nash equilibrium set. Hence, since any equilibrium is indeed also an element of such a polytope, we get that the Nash equilibrium set is a, necessarily finite, union of polytopes.

In order to verify these claims, let (A, B) be an game. For and we introduce the sets


and

Since


In words one could say that the set is the collection of pairs for which is a strategy of player 1 and is an upper bound on the payoffs player 2 can obtain given that player 1 plays

Since the sets and are obviously polyhedral, we can easily see that they only have a finite number of extreme points. Thus, the following lemma implies the finiteness of

Lemma 6.3 If for some finite set of strategies of player 2, then for all

Proof. Let Suppose that where We have to prove that First we will show that So let Since, for all


is the intersection of the bounded polyhedral set and a finite number of halfspaces, is a bounded polyhedral set. So is a polytope. Similarly, is a polytope.

For a set P of strategies of player 1 and a set of strategies of player 2, Vorobev introduces the sets

Obviously these sets are convex and compact.
Vorobev calls a strategy of player 1 extreme if for

some finite set of strategies of player 2. Let denote the set of extreme strategies of player 1.

In order to prove that is a finite set, Kuhn introduces the sets

we have for that This implies that So Furthermore, since

and

and


By (1) and (2), Similarly, Hence,

Because this leads to the equality Since, for a this proves that

In a similar way one shows that, for a finite set P of strategies of player 1, the set L(P) has a finite number of extreme points. Therefore the following theorem implies that the set of equilibria of a bimatrix game is the union of a finite number of polytopes.

Theorem 6.4 For any bimatrix game (A, B)


Since

Proof. (a) Let P be a non-empty subset of such that Suppose that and Then and the convexity of implies that So

(b) Suppose that is an element of E(A, B). Then, by definition, the set is a subset of Since and is a polytope, Clearly, for all that is: So

Since, as we already observed, is a finite set, the above theorem immediately implies that the equilibrium set is the finite union of maximal Nash sets.

Another observation we would like to make at this point is that the previous approach also yields a way to index maximal Nash sets. This works as follows.

Lemma 6.5 Let P be a set of strategies of player 1 and be a set of strategies of player 2. Then is a Nash set if and only if P is a subset of and is a subset of L(P). It is a maximal Nash set if and only if P equals and equals L(P).


6.5 The Approach of Mangasarian and Winkels

In his proof Mangasarian (1964) also employs the polyhedral sets and as they were introduced by Kuhn (1961). However, compared with the previous approach, Mangasarian’s method of proof is based on a more symmetric treatment of the players.

Mangasarian proved that each equilibrium of a bimatrix game can be constructed by means of the finite set of extreme points of the two polyhedral sets and corresponding to the game. In this section we will describe Mangasarian’s ideas. Furthermore we will incorporate the concept of a Nash pair due to Winkels (1979) to show that for any bimatrix game the set of equilibria is the finite union of polytopes. The exposition of the proof we present here is a slightly streamlined version of the original proof by Winkels.

Mangasarian’s approach is based on the following result, also proved by Mills (1960): a pair of strategies is an equilibrium of a bimatrix game (A, B) if and only if there exist scalars and such that


Mangasarian calls a quartet extreme if and Obviously, in this case, is an equilibrium. In order to prove that all equilibria can be found with the help of the finite number of extreme quartets, we need the following lemma due to Winkels (1979).
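The scalar condition is easy to check numerically: under the usual reading of Mills' result, one may take the scalars to be the equilibrium payoffs α = x·Ay and β = x·By and verify the componentwise inequalities Ay ≤ α·1 and Bᵀx ≤ β·1. The inequality system as printed in the original is lost here, so the formulation below is my reconstruction of that well-known reading, and the example game is my own:

```python
def mills_check(A, B, x, y, tol=1e-9):
    # with alpha = x.Ay and beta = x.By, (x, y) is an equilibrium iff
    # Ay <= alpha componentwise and B'x <= beta componentwise
    m, n = len(A), len(A[0])
    Ay = [sum(A[i][j] * y[j] for j in range(n)) for i in range(m)]
    Bx = [sum(x[i] * B[i][j] for i in range(m)) for j in range(n)]
    alpha = sum(x[i] * Ay[i] for i in range(m))
    beta = sum(Bx[j] * y[j] for j in range(n))
    return all(v <= alpha + tol for v in Ay) and all(v <= beta + tol for v in Bx)

# a coordination-style game: (row 1, column 1) is a pure equilibrium
A = [[2, 0], [0, 1]]
B = [[1, 0], [0, 2]]
print(mills_check(A, B, [1, 0], [1, 0]))  # True
print(mills_check(A, B, [1, 0], [0, 1]))  # False
```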

Lemma 6.6 Let be an equilibrium of a bimatrix game (A, B), and let be a strict convex combination of pairs

in Then, for all is an equilibrium of the game (A, B) and

Proof. Suppose that where and for all

Consider a strategy Then

and if then


So, the Krein–Milman theorem states that is a convex combination of elements of the set ext(M). Since is a linear function, ext(M) So is a convex combination of extreme points of

(b) According to part (a), we can write as a strict convex combination of pairs in ext By Lemma 6.6, for all and

Similarly, we can write as a strict convex combination of pairs in such that, for all E(A, B) and

The inclusion implies that Similarly, So, for all and is an extreme quartet. Since is a convex combination of the quartets the proof is complete.

Following Winkels we call a strategy of player 1 extreme if there exists a strategy of player 2 such that is an extreme quartet. Extreme strategies for player 2 are defined in a similar way. Let denote the (finite) set of extreme strategies of player Note that we

Hence, This implies that so that

In view of (1) and (2), and

The following result is due to Mangasarian (1964).

Theorem 6.7 Let be an equilibrium of a bimatrix game (A, B). Then the quartet is a convex combination of extreme quartets.

Proof. (a) First we will show that is a convex combination of extreme points of

Consider the linear function on defined by

Then for any pair and is an element of the compact, convex set



will show in Lemma 6.17 that the extreme strategies in the sense of Winkels coincide with the extreme strategies as introduced by Vorobev.

We call a pair (P, ) with and a Nash pair for the game (A, B) if is a Nash set.

Lemma 6.8 If is a Nash set for a bimatrix game (A, B), then conv(P) × conv( ) is a Nash set too.

6.6 The Approach of Winkels

In this section we will describe the result of Vorobev and Kuhn again. This time we will follow the ideas developed by Winkels (1979) by using his definition of an extreme strategy of a player. Winkels came to his definition by combining the ideas of Mangasarian and Kuhn.

JANSEN, JURG, AND VERMEULEN 131

Proof. If × , then and are a convex combination of strategies and respectively. Since for all for all By the convexity of Hence for all which leads to

that is:

Since is a finite set, the number of Nash pairs is finite too. Furthermore, for every Nash pair (P, ), conv(P) × conv( ) and, by Theorem 6.7, each equilibrium is contained in a set conv(P) × conv( ), where (P, ) is a Nash pair. This proves that the set of equilibria of a bimatrix game is the finite union of polytopes.
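The equilibrium conditions behind this finite-union structure can be checked numerically. The following sketch uses my own 2×2 battle-of-the-sexes payoffs (not an example from the text): it verifies the best-reply characterization of an equilibrium and shows that the midpoint of two equilibria need not be an equilibrium, so the equilibrium set is a union of polytopes rather than a single convex set.

```python
from fractions import Fraction as F

def is_equilibrium(p, q, A, B):
    """(p, q) is an equilibrium iff p attains max_i (Aq)_i against q
    and q attains max_j (pB)_j against p."""
    Aq = [sum(A[i][j] * q[j] for j in range(len(q))) for i in range(len(A))]
    pB = [sum(p[i] * B[i][j] for i in range(len(p))) for j in range(len(B[0]))]
    u1 = sum(p[i] * Aq[i] for i in range(len(p)))  # player 1's payoff
    u2 = sum(pB[j] * q[j] for j in range(len(q)))  # player 2's payoff
    return u1 == max(Aq) and u2 == max(pB)

# Battle-of-the-sexes payoffs: two pure equilibria and one mixed equilibrium.
A = [[F(2), F(0)], [F(0), F(1)]]
B = [[F(1), F(0)], [F(0), F(2)]]
e1 = ([F(1), F(0)], [F(1), F(0)])
e2 = ([F(0), F(1)], [F(0), F(1)])
e3 = ([F(2, 3), F(1, 3)], [F(1, 3), F(2, 3)])
# Midpoint of the two pure equilibria -- not an equilibrium.
mid = ([F(1, 2), F(1, 2)], [F(1, 2), F(1, 2)])
```

Exact `Fraction` arithmetic avoids floating-point best-reply ties being missed.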

Theorem 6.9 For any bimatrix game (A,B)

Note that, due to the definition of a Nash pair, not all Nash sets used in this decomposition are necessarily maximal. Thus, some of them may be redundant.

Lemma 6.10 If P is a set of strategies of player 1, then
(a) if
(b) L(conv(P)) = L(P).


Proof. We will give a proof of part (b) only. Because

Now suppose that and that is a convex combination of strategies Then the convexity of implies that That is: is an equilibrium. This proves that

Theorem 6.11 stated below is Winkels’ version of Vorobev’s result. In fact, by Lemma 6.10, this theorem is identical to (Vorobev’s) Theorem 6.4.

Theorem 6.11 For any bimatrix game (A, B)


Proof. (a) Let P be a non-empty subset of such that According to Lemma 6.10(a), whereas the convexity of the right-hand set implies that In combination with Lemma 6.10(b) and Lemma 6.5 this inclusion proves that

(b) In order to prove the converse inclusion, assume that According to Theorem 6.7, the quartet is a strict convex combination of extreme quartets, say Now let Then and By Lemma 6.6, for all which implies that Hence, is an element of conv(P) × L(P), and the proof is complete.

In order to prove that the sets described in the foregoing theorem are in fact polytopes, Winkels introduces for a subset P of the finite set

and he concludes that L(P) is a polytope on the basis of the following result.

Lemma 6.12 If P is a subset of then

Proof. Since and L(P) is convex,

Now let and As in the proof of Theorem 6.11 one shows that a set exists such that

for all Therefore for all Since

Since for all Lemma 6.6 implies that, for any


6.7 The Approach of Jansen

In the approaches described in the foregoing two sections, extreme strategies were the central issue. In the work of Jansen (1981) though the starting point was the notion of a maximal Nash set. In fact the source of inspiration for the research of Jansen was Heuer and Millham (1976), where several properties of (the intersection of) these maximal Nash sets were obtained.

Lemma 6.8 states in fact that any Nash set is contained in a convex Nash set. As a consequence of this result, a maximal Nash set is a convex set. Before we can show that the maximal Nash sets are in fact the maximal convex sets, we first need a lemma.

Lemma 6.13 Any convex subset C of the set of equilibria of a bimatrix game (A, B) is contained in a (convex) Nash set.

Proof. Assume that We will show that and are equilibria.

Consider, for the strategies and Since for close to 1


So, Similarly, for close to 0

and therefore Hence, Similarly, This proves that

there is a with there is a with

is a (convex) Nash set containing C.

Theorem 6.14 Let C be a convex subset of the set of equilibria of a bimatrix game (A, B). Then C is a maximal convex subset if and only if C is a maximal Nash set.

Proof. (a) Suppose that C is a maximal convex subset of E(A, B). Then according to Lemma 6.13, C is contained in, and hence equal to, a convex Nash set. In view of Lemma 6.8, this Nash set must be maximal.


(b) Let C be a maximal Nash set and suppose that C is contained in the convex set T. According to Lemma 6.13, T is contained in a Nash set, say So, by the maximality of C, this is possible only if

Hence, C is a maximal convex set.

If is an equilibrium of a bimatrix game (A, B), then is a convex subset of E(A, B). Hence we can find, applying Zorn’s Lemma, a maximal convex subset of E(A, B) containing In view of Theorem 6.14, each equilibrium of the game (A, B) is contained in a maximal Nash set and E(A, B) is the union of such sets. In order to show that the number of maximal Nash sets is finite, we need the following lemma.

This proves that and, since that

Thus we may conclude that This implies, in combination with the fact that and that

and

So which proves that is a Nash set containing S. Since S is maximal, In a similar manner one shows that

(b) In part (a) it has been proved that the four inclusions mentioned in the theorem hold for a If, on the other hand, the four


Lemma 6.15 Let be a maximal Nash set for a bimatrix game (A, B). Further, let and let be a strategy pair. Then

(a)

(b) if and only if and

Proof. (a) Obviously, In order to show that is a Nash set, suppose that Since relint there exists a such that Then Since for


inclusions hold, then it follows that This implies that and that is:

By Lemma 6.15 a maximal Nash set is completely determined by the quartet where is some equilibrium in its relative interior. Since there is only a finite number of such quartets, we obtain the following result of Jansen (1981).

Theorem 6.16 The set of equilibria of a bimatrix game is a (not necessarily disjoint) union of a finite number of maximal Nash sets.

Finally we will show that the extreme strategies as introduced by Winkels coincide with the extreme strategies in the sense of Vorobev.

Lemma 6.17 For a strategy of player 1 the following statements are equivalent:


(1)

(2)

(3)

there exist a strategy of player 2 and a maximal Nash set S such that

Proof. We will prove the implications.
(a) Suppose that for some strategy of player 2

and some maximal Nash set S. By Lemma 6.15, and where Hence,

(b) Suppose that Let Then finite sets P and of strategies of players 1 and 2 exist such that and

In view of Lemma 6.3, this implies that for some and for some Since

and So is an extreme quartet, that is: So

(c) Suppose that By definition there is a strategy in such that Then for some maximal

Nash set S. If then there exist such that and Let Then so that and are elements of

Since this contradicts the fact that

A similar result holds for strategies of player 2.


6.8 The Approach of Quintas

A very short and straightforward proof is the following one by Quintas (1989). With each set I of pure strategies of player 1 and set J of pure strategies of player 2 he associates the collection of strategy pairs

such that the carrier of is contained in I, all pure strategies in J are best responses to the carrier of is contained in J, and all pure strategies in I are best responses to It is straightforward that such a collection is a polytope, that there is only a finite number of them, and that each equilibrium is contained in such a polytope.

More formally, for an bimatrix game (A, B) and a pair Quintas introduces the subset

and


of E(A, B). Because this set is bounded and determined by finitely many inequalities, it is a polytope.
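The defining conditions of Quintas’ set can be sketched as a membership test. This is my own reconstruction of the four conditions described above (function names and the 2×2 example payoffs are mine, not from the text): carriers contained in the index sets, and every index a pure best reply to the opponent’s strategy.

```python
def carrier(x):
    """Pure strategies used with positive probability."""
    return {i for i, xi in enumerate(x) if xi > 0}

def best_replies_1(q, A):
    """Pure best replies of player 1 against player 2's strategy q."""
    v = [sum(A[i][j] * q[j] for j in range(len(q))) for i in range(len(A))]
    return {i for i, vi in enumerate(v) if vi == max(v)}

def best_replies_2(p, B):
    """Pure best replies of player 2 against player 1's strategy p."""
    v = [sum(p[i] * B[i][j] for i in range(len(p))) for j in range(len(B[0]))]
    return {j for j, vj in enumerate(v) if vj == max(v)}

def in_H(I, J, p, q, A, B):
    """Membership in H(I, J): carrier(p) inside I, every j in J a best reply
    to p, carrier(q) inside J, every i in I a best reply to q."""
    return (carrier(p) <= I and J <= best_replies_2(p, B)
            and carrier(q) <= J and I <= best_replies_1(q, A))

# A 2x2 coordination-style example (my own): (e1, e1) is a pure equilibrium.
A = [[2, 0], [0, 1]]
B = [[1, 0], [0, 2]]
p, q = [1, 0], [1, 0]
```

Taking I and J to be the carriers of an equilibrium puts the pair inside H(I, J), as the text notes; a non-equilibrium pair fails one of the best-reply inclusions.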

If, for an equilibrium we take and then obviously So

relation between elements of and faces of maximal Nash sets, as can be seen by considering the game

Although is an extreme equilibrium of this game (and hence a face of some maximal Nash set), there is no pair such that

Moreover, for this game H({1,2}, {1,2}) = H({1,2}, {1,2,3}). In the next section we will describe an approach not suffering from this drawback.

6.9 The Approach of Jurg and Jansen

In this section we describe the approach of Jurg and Jansen (cf. Jurg, 1993), who adapted the method of Quintas by replacing the pairs he

One can show that for a pair the polytope H(I, J) is a face of a maximal Nash set. However, generally there is not a nice


dealt with by quartets consisting of the two carriers and the two sets of pure best replies of a strategy pair. Their approach reveals more of the structure of the set of equilibria and in particular of maximal Nash sets.

By Lemma 6.1 a strategy pair is an equilibrium of a bimatrix game (A, B) if and only if the (equilibrium) inclusions

and are satisfied. To check this relation we need the quartet


If is an equilibrium of (A, B), then this quartet is called the characteristic quartet of The set of all characteristic quartets for the bimatrix game (A, B) is denoted by Char(A, B). Clearly, as a subset of

this set is finite and it partitions the set of equilibria. For a quartet the set F(I, J, K, L) is the

collection of pairs of strategies for which

and it is called the characteristic set corresponding to this quartet. If is an element of F(I, J, K, L), then

and

which implies that satisfies the equilibrium inclusions. Hence, F(I, J, K, L) is a subset of E(A, B). Clearly an equilibrium is contained in the characteristic set corresponding to the characteristic quartet of so we have

Since there are only finitely many different characteristic quartets, there are also finitely many different characteristic sets. Again, each characteristic set is bounded and described by finitely many linear inequalities, and is therefore a polytope. Hence

Theorem 6.18 The equilibrium set of a bimatrix game is the union of a finite number of polytopes.
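The characteristic quartet of a strategy pair — the two carriers together with the two sets of pure best replies — can be computed directly. The sketch below (my own reconstruction; the helper names and example payoffs are mine) also checks the equilibrium inclusions of Lemma 6.1: carrier of each strategy contained in the corresponding pure best-reply set.

```python
from fractions import Fraction as F

def characteristic_quartet(p, q, A, B):
    """(I, J, K, L): carriers of p and q, plus the pure best-reply sets of
    player 1 against q and of player 2 against p."""
    I = frozenset(i for i, x in enumerate(p) if x > 0)  # carrier of p
    J = frozenset(j for j, x in enumerate(q) if x > 0)  # carrier of q
    v1 = [sum(A[i][j] * q[j] for j in range(len(q))) for i in range(len(A))]
    K = frozenset(i for i, v in enumerate(v1) if v == max(v1))
    v2 = [sum(p[i] * B[i][j] for i in range(len(p))) for j in range(len(B[0]))]
    L = frozenset(j for j, v in enumerate(v2) if v == max(v2))
    return I, J, K, L

def satisfies_equilibrium_inclusions(p, q, A, B):
    """Lemma 6.1 reading: each carrier lies inside the relevant best-reply set."""
    I, J, K, L = characteristic_quartet(p, q, A, B)
    return I <= K and J <= L

# Battle-of-the-sexes payoffs (my own example).
A = [[F(2), F(0)], [F(0), F(1)]]
B = [[F(1), F(0)], [F(0), F(2)]]
```

For the completely mixed equilibrium all four components equal the full pure-strategy set, so its characteristic set is full-dimensional; a mismatched pure pair violates the inclusions.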


Let As in part (a) of the proof of Lemma 6.15, one can show that for a

and Hence is an element of the characteristic set corresponding to the characteristic quartet of By consequence, T is contained in this characteristic set.

Thus Theorem 6.19 settles the existence of maximal Nash sets. Furthermore, this theorem implies Theorem 6.16. Note that in this approach Zorn’s lemma is not used.

Obviously, a characteristic set F(I, J, K, L) is maximal if and only if there is no characteristic quartet different from (I, J, K, L) such that and Hence the following lemma implies that, more generally, each characteristic set is a face of a maximal Nash set and conversely.

Lemma 6.20 Let (I, J, K, L) be a characteristic quartet for a game (A, B). Then F is a face of F(I, J, K, L) if and only if

for some characteristic quartet with and

Proof. (a) First let be a characteristic quartet such that and Then F(I, J, K, L). Let G be the smallest face of F(I, J, K, L) containing

We will prove that is a face of F(I, J, K, L) by showing that

Since we can take a Let Arguments similar to those in the proof of Theorem 6.19 yield that


Because of this finite number, we can assume that in Theorem 6.18 each of the polytopes, or equivalently each of the characteristic sets, is maximal, i.e. not properly contained in another one.

One easily checks that a characteristic set is a Nash set. Moreover

Theorem 6.19 Let (A, B) be a bimatrix game. A maximal characteristic set is a maximal Nash set for (A, B) and vice versa.

Proof. We have proved the theorem if we show that each Nash set is contained in a characteristic set.

Let T be a Nash set. According to Lemma 6.8, S = conv(T) is also a Nash set.


and Since moreover it follows that So

(b) Secondly, let F be a face of F(I, J, K, L). Choose relint(F). As in the foregoing part one can show that where and

The proof is complete if we can show that Therefore we suppose that

By part (a), is a face of F(I, J, K, L). Hence, F is a face of

Choose It is easily shown that is the characteristic quartet of

Let, for


Then for small Since F is a face of there are a pair

and a real number c such that

and

This implies that

which is a contradiction. Hence

In fact, since implies that (I, J, K, L) equals we infer from Lemma 6.20:

Theorem 6.21 For a bimatrix game (A, B) there is a one-to-one correspondence between the elements of Char(A, B) and the set of faces of maximal Nash sets for (A, B).


6.10 The Approach of Vermeulen and Jansen

In this section the method of Vermeulen and Jansen (1994) is described. The advantage of this method is that it can easily be adjusted to obtain the same structure result for perfect (cf. Vermeulen and Jansen, 1994) and proper equilibria (cf. Jansen, 1993).

The key of this approach is the introduction of an equivalence relation for each player, identifying the strategies to which the other player has the same pure best replies. With the help of these relations the strategy spaces of both players are partitioned into a finite number of equivalence classes. The closure of each of these classes appears to be a polytope. By considering the intersection of the set of equilibria with the closure of the product of two equivalence classes (one for each player), Vermeulen and Jansen show that the set of equilibria is in fact the finite union of polytopes.

For a bimatrix game two strategies and are called best-reply equivalent, denoted as if In a similar way an equivalence relation can be defined for the strategies of player 2.
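Best-reply equivalence can be illustrated by grouping a finite sample of player-2 strategies by the pure best-reply set they induce for player 1. The sketch below uses my own payoff matrix and sample points; in the text the relation is of course defined on the whole (infinite) strategy simplex.

```python
from fractions import Fraction as F

def pure_best_replies(q, A):
    """Pure best replies of player 1 against player 2's strategy q."""
    v = [sum(A[i][j] * q[j] for j in range(len(q))) for i in range(len(A))]
    return frozenset(i for i, vi in enumerate(v) if vi == max(v))

def best_reply_classes(strategies, A):
    """Group sampled strategies by induced best-reply set: q and q' are
    best-reply equivalent exactly when these sets coincide."""
    classes = {}
    for q in strategies:
        classes.setdefault(pure_best_replies(q, A), []).append(q)
    return classes

A = [[F(2), F(0)], [F(0), F(1)]]  # my own example payoffs
sample = [[F(1), F(0)], [F(9, 10), F(1, 10)], [F(1, 3), F(2, 3)], [F(0), F(1)]]
classes = best_reply_classes(sample, A)
```

Here the four sampled strategies fall into three classes: two strategies make row 0 the unique best reply, one makes player 1 indifferent, and one makes row 1 the unique best reply — a finite partition, as the text asserts.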

Since for an game, is a subset of N for all the number of equivalence classes in corresponding to the

equivalence relation must be finite. The equivalence classes are denoted as Similarly, is the finite union of equivalence classes, say For later purposes, we choose representatives in and in for all and

Obviously, each equivalence class is a convex set. Furthermore,


Lemma 6.22 For all pairs

and

Proof. We will only give a proof of the first equality. Obviously, for a For a with we consider the strategy


which concludes the proof.

With the help of the representation of the closure of an equivalence class as given in the previous lemma, it is easy to prove that the closure of an equivalence class corresponding to the relation is a polytope.

Next we consider the set of equilibria contained in the closure of the product of two equivalence classes (one for each player). For a pair we consider the Nash set

Obviously a Nash set is a polytope and each equilibrium is contained in some Nash set Further, if is an element of some Nash set then Lemma 6.22 implies that and Hence, by Lemma 6.1, is an equilibrium. So we have the following result.

Theorem 6.23 The set of equilibria of a bimatrix game is the finite union of polytopes.

Since the number of Nash sets is finite, each Nash set is contained in a maximal one and the set of equilibria of a bimatrix game is the finite union of maximal Nash sets.

References

Heuer, G.A., and C.B. Millham (1976): “On Nash subsets and mobilitychains in bimatrix games,” Naval Res. Logist. Quart., 23, 311–319.


where In order to show that for all first we take a Then for all

For a and a

Hence, for all which means that Then, however,


Jansen, M.J.M. (1981): “Maximal Nash subsets for bimatrix games,”Naval Res. Logist. Quart., 28, 147–152.

Jansen, M.J.M. (1993): “On the set of proper equilibria of a bimatrixgame,” Internat. J. of Game Theory, 22, 97–106.

Jurg, A.P. (1993): “Some topics in the theory of bimatrix games,” Dissertation, University of Nijmegen.

Kuhn, H.W. (1961): “An algorithm for equilibrium points in bimatrix games,” Proc. Nat. Acad. Sci. U.S.A., 47, 1656–1662.

Mangasarian, O.L. (1964): “Equilibrium points of bimatrix games,” J. Soc. Indust. Appl. Math., 12, 778–780.

Mills, H. (1960): “Equilibrium points in finite games,” J. Soc. Indust. Appl. Math., 8, 397–402.

Nash, J.F. (1950): “Equilibrium points in n-person games,” Proc. Nat. Acad. Sci. U.S.A., 36, 48–49.

Nash, J.F. (1951): “Noncooperative games,” Ann. of Math., 54, 286–295.

Quintas, L.G. (1989): “A note on polymatrix games,” Internat. J. GameTheory, 18, 261–272.

Vermeulen, A.J., and M.J.M. Jansen (1994): “On the set of perfectequilibria of a bimatrix game,” Naval Res. Logist. Quart., 41, 295–302.

von Neumann, J., and O. Morgenstern (1944): Theory of Games and Economic Behavior. Princeton: Princeton University Press.

Vorobev, N.N. (1958): “Equilibrium points in bimatrix games,” Theor. Probability Appl., 3, 297–309.

Winkels, H.M. (1979): “An algorithm to determine all equilibrium points of a bimatrix game,” in: O. Moeschlin and D. Pallaschke (eds.), Game Theory and Related Topics. Amsterdam: North-Holland, 137–148.



Chapter 7

Concave and Convex Serial Cost Sharing

BY MAURICE KOSTER

7.1 Introduction

A finite set of agents jointly own a production technology for one or more, but finitely many, output goods, to which they have equal access rights. The production technology is fully described by a cost function that assigns to each level of output the minimal necessary units of (monetary) input. Each of the agents has a certain level of demand for the good; given the profile of individual demands the aggregate demand is produced and the corresponding costs have to be allocated. This situation is known as the cooperative production problem. For instance, sharing the overhead cost in a multi-divisional firm is modeled through a cooperative production problem by Shubik (1962). Furthermore, the same model is used by Sharkey (1982) and Baumol et al. (1982) in addressing the problem of natural monopoly. Israelsen (1980) discusses a dual problem, i.e., where each of the agents contributes a certain amount of inputs, and correspondingly the maximal output that can thus be generated is shared by the collective of agents. In this chapter I consider cost sharing rules as possible solutions to cooperative production problems, i.e. devices that assign to each instance of a cooperative production problem a unique distribution of costs. In particular the focus will be on variations of the serial rule of Moulin and Shenker (1992), the cost sharing rule that caught the most attention during the last decade


P. Borm and H. Peters (eds.), Chapters in Game Theory, 143–155. © 2002 Kluwer Academic Publishers. Printed in the Netherlands.


144 KOSTER

by its excellent performance in different strategic environments. Moulin and Shenker (1992) discuss the attractive features of the serial rule in case of technologies exhibiting negative externalities, and Moulin (1996) focuses on the serial rule in the presence of positive externalities. Here two new cost sharing rules are introduced: the concave serial rule and the convex serial rule. Both cost sharing rules calculate the individual cost shares by a compound of two operations: the first is a particular transformation of the cost sharing problem using methodology as in Tijs and Koster (1998), and, secondly, the serial rule is applied to this adaptation. To be more precise, the concave serial rule applies the serial rule to the cost sharing problem with the (concave) pessimistic cost function, whereas the convex serial rule applies the serial rule to the cost sharing problem with the (convex) optimistic cost function. It is shown that these cost sharing rules have diametrically opposed equity features. The concave serial rule is shown to be the unique cost sharing rule that consistently minimizes the range of cost shares (being the difference between the maximal and minimal cost share) among all those cost sharing rules that satisfy the classical equity property ranking (see, e.g., Moulin, 2000) and the property excess lower bound, which considers minimal justifiable differences between agents based on differences in their demands. On the other hand, the convex serial rule maximizes the range of cost shares given ranking and the property excess upper bound, which can be seen as the dual of excess lower bound. In particular, it follows that the serial rule combines diametrically opposed equity properties, since it coincides with the concave serial rule on the class of cost sharing problems with concave cost functions, and it coincides with the convex serial rule on the class of cost sharing problems with convex cost functions.

7.2 The Cost Sharing Model

Consider a fixed and finite set of agents sharing a production technology for the production of some divisible good. Correspondingly, a cost sharing problem consists of an ordered pair where

(i)

(ii)

stands for the profile of individual demands for production; is the demand of agent

is the cost function, i.e. a nondecreasing absolutely

Page 161: Borm_Chapters in Game Theory-In Honor of Stef Tijs

SERIAL COST SHARING 145

continuous function1 with that summarizes the production technology. For any output level denotes the corresponding necessary amount of (monetary) input. The condition indicates the absence of fixed costs.

Denote the class of all cost sharing problems by and the class of all cost functions is denoted

For denote by the function that relates each nonnegative real to the derivative of at if it exists, and to 0 otherwise. We may unambiguously speak of the marginal cost function and is called the marginal cost at production level for any The marginal cost function is integrable and the total costs of production of units can be expressed in terms of the marginal cost function, since by Lebesgue (1904) it holds for all

Given a cost sharing problem we seek to allocate the total costs for producing the aggregate demand, i.e. A systematic device for the allocation of costs for the class of cost sharing problems is modelled through the notion of a cost sharing rule. More formally, a cost sharing rule is a mapping such that for all it holds

Here stands for the cost share of agent is the aggregate demand of the coalition of agents N, i.e. More generally, for any the aggregate demand of the coalition of agents S, is denoted

In the literature many cost sharing rules are discussed, for instance the proportional cost sharing rule and the serial cost sharing rule of Moulin and Shenker (1992). The cost shares according to the proportional cost sharing rule for are given by

1 A function is absolutely continuous if for all intervals and all there is a such that for any finite collection of pairwise disjoint intervals with it holds



The serial cost sharing rule, denoted is defined as follows. Take and let be a permutation of N that orders the demands increasingly, such that if Define intermediate production levels

Then the serial cost shares for are specified by
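The two rules above can be sketched in code. This is my own reconstruction of the standard formulas (the stripped mathematics is not recoverable from the transcript): the proportional rule splits c(x_N) in proportion to demands, while the serial rule orders demands increasingly, forms the intermediate levels q_k = x_(1) + … + x_(k) + (n − k)x_(k), and lets the agent with the k-th smallest demand pay the sum over m ≤ k of (c(q_m) − c(q_{m−1}))/(n − m + 1).

```python
from fractions import Fraction as F

def proportional_shares(x, c):
    """Proportional rule: split c(x_N) in proportion to the demands."""
    total = F(c(sum(x)))
    return [total * xi / sum(x) for xi in x]

def serial_shares(x, c):
    """Serial rule in the spirit of Moulin-Shenker (my own reconstruction)."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])   # demands increasingly
    xs = [x[i] for i in order]
    shares = [F(0)] * n
    q_prev, acc = 0, F(0)
    for k in range(1, n + 1):
        q_k = sum(xs[:k]) + (n - k) * xs[k - 1]    # intermediate level
        acc += (F(c(q_k)) - F(c(q_prev))) / (n - k + 1)
        shares[order[k - 1]] = acc
        q_prev = q_k
    return shares

# Example (my own): convex cost c(t) = t^2, demands 1 and 3.
demands = [1, 3]
cost = lambda t: t * t
```

With these data the serial shares are (2, 14) and the proportional shares (4, 12); both allocate exactly c(4) = 16, and both respect the ordering of demands (ranking).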

7.3 The Convex and the Concave Serial Cost Sharing Rule

In the literature serial ideas in cost sharing are discussed for different types of production situations, and especially results are discussed for the extreme cases in the presence of solely positive or negative externalities. Moulin and Shenker (1992) show that the serial cost sharing rule is, from a strategic point of view, an attractive allocation device in demand games related to production situations with convex cost functions. Moulin (1996) discusses the serial cost sharing rule in case of economies of scale. De Frutos (1998) defines the decreasing serial cost sharing rule, with outstanding strategic properties in demand games in case of economies of scale. Hougaard and Thorlund-Petersen (2001) axiomatically characterize a cost sharing rule that coincides with the serial cost sharing rule if the cost function is convex, and with the decreasing serial cost sharing rule if the cost function is concave. I will show that the serial cost sharing rule has diametrically opposed equity properties in the above extreme settings of either a concave or a convex cost function. Two new cost sharing rules are introduced in this section, i.e. the convex serial cost sharing rule and the concave serial cost sharing rule, which determine cost shares according to the serial cost sharing rule for some adapted cost sharing problem. These adapted cost sharing problems rely on techniques from Tijs and Koster (1998) and Koster (2000). The convex (concave) serial cost sharing rule coincides with the serial cost sharing rule on the class of cost sharing problems with convex (concave) cost functions. As will turn out, the concave serial cost sharing rule minimizes the range of cost shares subject to some constraint, while the



convex serial cost sharing rule can be seen as its dual in the sense that it maximizes the range of cost shares subject to a corresponding dual constraint.

Before defining these cost sharing rules some preparations are needed. For each cost sharing problem Tijs and Koster (1998) study two cooperative games for G as an alternative for the traditional stand-alone game (see e.g. Sharkey, 1982; Young, 1985; Hougaard and Thorlund-Petersen, 2000), using the notion of the pessimistic and optimistic cost functions. Given a particular cost sharing problem the pessimistic cost function relates each partial demanded production level in to the aggregate of highest marginal costs at which this level possibly could have been processed, whereas the optimistic cost function focuses on the lowest marginal costs in this respect.

Definition 7.1 Given the pessimistic cost function is defined by

Here stands for the on and denotes the Lebesgue measure. The optimistic cost function, is defined by2

Calculating the pessimistic and optimistic cost functions can be quite demanding, even for simple cost functions. A useful technique to calculate the pessimistic cost function is discussed in Koster (2000). Note that and indeed define cost functions and that

Moreover, Koster (2000) shows that is concave and is convex on respectively. Due to these observations and

2The original definition in Tijs and Koster (1998) resembles that of the pessimistic cost function where the supremum is interchanged with the infimum. It is shown that the pessimistic and optimistic cost functions are duals in the sense of the first line of the present definition.



are referred to as the pessimistic and optimistic cost for producing the amount The transformations of the cost sharing problem

to and are used to define the concave and convex serial cost sharing rules.

Definition 7.2 The concave serial cost sharing rule, denoted is defined by for all Similarly, the convex serial cost sharing rule, denoted is defined through

for all

Note that both cost sharing rules can be seen as extensions of the serial cost sharing rule: in case of a concave cost function it holds

and thus and if is convex then and hence
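For piecewise-linear cost functions, Definition 7.1 can be read as integrating the marginal costs in decreasing order (pessimistic, concave) or increasing order (optimistic, convex) over the demanded interval. Under that reading — an assumption of mine, since the transcript’s formulas are lost — the two rules of Definition 7.2 can be sketched as follows; the segment representation and function names are my own.

```python
from fractions import Fraction as F

def serial_shares(x, c):
    """Serial rule (reconstruction, see Section 7.2)."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    xs = [x[i] for i in order]
    shares, q_prev, acc = [F(0)] * n, F(0), F(0)
    for k in range(1, n + 1):
        q_k = sum(xs[:k]) + (n - k) * xs[k - 1]
        acc += (F(c(q_k)) - F(c(q_prev))) / (n - k + 1)
        shares[order[k - 1]] = acc
        q_prev = q_k
    return shares

def make_cost(segments):
    """Piecewise-linear cost from (length, marginal-cost) segments on [0, X]."""
    def c(t):
        total, left = F(0), F(t)
        for length, m in segments:
            take = min(left, F(length))
            if take <= 0:
                break
            total += take * m
            left -= take
        return total
    return c

def concave_serial(x, segments):
    # Pessimistic cost: marginals integrated in decreasing order (concave).
    return serial_shares(x, make_cost(sorted(segments, key=lambda s: -s[1])))

def convex_serial(x, segments):
    # Optimistic cost: marginals integrated in increasing order (convex).
    return serial_shares(x, make_cost(sorted(segments, key=lambda s: s[1])))

# My own example: marginal cost 1 on [0, 2], then 3 on [2, 4]; demands 1 and 3.
demands, segments = [1, 3], [(2, 1), (2, 3)]
```

Since this example cost is convex, it equals its own optimistic transform, so the convex serial shares (1, 7) coincide with the plain serial shares, matching the coincidence noted above; the concave serial shares (3, 5) have a markedly smaller range, in line with Theorems 7.5 and 7.6.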

Both cost sharing rules share desirable properties with other eligible cost sharing rules. For instance, one can show that both cost sharing rules are demand monotonic, i.e., an agent who increases his demand will pay more in the new situation. Another feature of the above cost sharing rules is ranking; the natural ordering of the vector of cost shares preserves the natural ordering of the demand profile. Formally,

Axiom A cost sharing rule satisfies ranking if for all it holds that

Thus ranking is the equity principle that requires from the larger demanders a higher contribution to the total costs of producing the aggregate demand. The property is certainly transparent within the actual setting of nondecreasing costs. In particular, ranking implies the classical equal treatment of equals.

Also and satisfy the bounds on cost shares specified by the core of the cooperative pessimistic cost game of Tijs and Koster (1998). Each such bound comprises the pessimistic or optimistic costs for producing the aggregate demand of a coalition of agents as part of the total production. Instead of considering bounds on individual cost shares, the focus is on (minimal) maximal differences between the cost shares of the agents, thereby using information on the (optimistic) pessimistic costs for producing the excess demands.



Axiom Consider a cost sharing rule on and let Then satisfies excess lower bound for agent if

If (7.5) holds for all then satisfies excess lower bounds. Similarly, satisfies excess upper bound for agent if

If (7.6) holds for all then satisfies excess upper bounds.3

The excess lower bound property ascertains that the collective of larger agents is not subsidized, in the sense that they do not pay a lower price per unit of excess production than can be sustained by the production technology. A similar interpretation can be given to the excess upper bound: the larger agents do not subsidize the smaller ones by paying a price that exceeds a level that is supported by the production technology. The properties are not very restrictive. In fact, it is shown that even the combination of the two is to be considered weak. More specifically, the most popular cost sharing rules like and satisfy both properties, as well as the newly proposed and Importantly, the inequalities (7.5) and (7.6) turn out to be tight for and respectively.

Proposition 7.3 satisfies excess lower bounds and excess upper bounds.

Proof. Consider with Then

where Recall that

Then by equality (7.7) and

3The bounds are similar to those discussed in Aadland and Kolpin (1998) in the case of airport situations.



the fact that and are concave and convex on respectively, we get the desired inequalities, since

and

Proposition 7.4 satisfy excess lower bounds and excess upper bounds. In particular, in case of the inequalities (7.5), and in case of the inequalities (7.6) are tight, respectively.

Proof. Take and assume that the demands are ordered such that Let be a cost sharing rule such that

for some cost function with Then it holds for any

Then budget balance implies



Now distinguish between the three cases:
(a) Then by (7.10), the inequalities (7.4), and the duality relation between and

This proves that satisfies excess upper bounds. Excess lower bounds follows directly from (7.10), the inequalities (7.4), and the duality relation between and by flipping the above inequality sign together with interchanging and
(b) Then satisfies excess lower bounds with equalities, since the combination of equality (7.10) and the duality relation between and gives

Excess upper bounds follows by almost the same reasoning as for case (a).
(c) This case resembles case (b). One only needs to interchange and in the proof of case (b) in order to obtain the desired (in)equalities for

Remark It is left to the reader to show that if a cost sharing rule satisfies excess lower bounds and excess upper bounds, then it satisfies a property that is called constant returns. Constant returns is a most compelling answer to solving cost sharing problems in total absence of



externalities: satisfies constant returns when in case is such that there is with for all In other words, each agent pays a fixed price per unit of the good.

Proposition 7.4 is indicative of the special character of the cost sharing rules and As I am about to show, in the universe of all cost sharing rules with the property ranking, these rules can be seen as the extremes of the set of cost sharing rules satisfying the excess lower and upper bounds. Among all cost sharing rules with the properties ranking and excess upper bounds, creates the highest difference between the smallest and the largest cost share in a consistent way. Similarly, among the cost sharing rules with the properties ranking and excess lower bounds, is the unique rule that consistently minimizes the gap between the largest and smallest cost share. So where may be perceived as a constrained egalitarian cost sharing rule, is on the other side of the spectrum.

Define the range of a vector by For a cost sharing rule and cost sharing problem the corresponding range of cost shares is the number

Theorem 7.5 The concave serial cost sharing rule is the unique cost sharing rule which minimizes the range of cost shares for all cost functions among the cost sharing rules satisfying ranking and excess lower bounds.

Proof. By Proposition 7.4 only the proof of the uniqueness part remains. Take and let be a cost sharing rule with the premises as listed above (including range minimization). For notational convenience, put Concerning the uniqueness proof, suppose on the contrary Without loss of generality assume that By ranking it holds that whenever

and thus the range Distinguish two cases. First consider the case that Since there is a maximal such that Then excess lower bound for agent gives

Hence As by the choice of for all and thus the right-hand side of the latter inequality is zero.


So, or and as by assumption we have Suppose that for some

Then by excess lower bound for agent

and This shows that for all But then which contradicts budget balance. So it

must hold that The excess lower bound for 1 gives

Budget balance implies hence. Consequently

contradicting range minimization.

More or less in the same way one can prove the following:

Theorem 7.6 The convex serial rule is the unique cost sharing rule which maximizes the range of cost shares for each cost function among those rules satisfying ranking and excess upper bounds.

Remark Always splitting costs equally among the agents yields a cost sharing rule that is usually referred to as the equal split cost sharing rule. This cost sharing rule minimizes the range of cost shares subject to ranking, but it does not satisfy the excess bounds previously discussed.

A result similar to Theorems 7.5 and 7.6 is the characterization of the constrained egalitarian solution for fixed tree cost sharing problems by Koster et al. (1998). This cost sharing rule uniquely minimizes the range of cost shares among those cost sharing rules satisfying some monotonicity condition. In addition it is also shown that minimization of the range of cost shares under the given monotonicity restrictions is equivalent to minimization of the highest cost share. This idea carries over to the present context.


Theorem 7.7 The concave serial rule is the unique cost sharing rule which minimizes the largest cost share for each cost function among those rules satisfying ranking and excess lower bound.

Proof. The same argument as in Theorem 7.5 works here. If is a cost sharing rule that minimizes the maximal cost share, then it should hold that for all problems with being the largest demand. Then we are exactly in the first case in the proof of Theorem 7.5.

Now the following result should be no surprise:

Theorem 7.8 The convex serial rule is the unique cost sharing rule which maximizes the largest cost share for each cost function among those rules satisfying ranking and excess upper bound.

References

Aadland, D., and V. Kolpin (1998): "Shared irrigation cost: an empirical and axiomatic analysis," Mathematical Social Sciences, 35, 203–218.

Baumol, W., J. Panzar, R. Willig and E. Bailey (1998): Contestable Markets and the Theory of Industry Structure. San Diego, California: Harcourt Brace Jovanovich.

De Frutos, A. (1998): "Decreasing serial cost sharing under economies of scale," Journal of Economic Theory, 79, 245–275.

Hougaard, J., and L. Thorlund-Petersen (2000): "The stand-alone test and decreasing serial cost sharing," Economic Theory, 16, 355–362.

Hougaard, J., and L. Thorlund-Petersen (2001): "Mixed serial cost sharing," Mathematical Social Sciences, 41, 51–68.

Israelsen, D. (1980): "Collectives, communes, and incentives," Journal of Comparative Economics, 4, 99–124.

Koster, M. (2000): Cost Sharing in Production Situations and Network Exploitation. PhD Thesis, Tilburg University.

Koster, M., S. Tijs, Y. Sprumont and E. Molina (1998): "Sharing the cost of a network: core and core allocations," CentER Discussion Paper 9821, Tilburg University.

Lebesgue, H. (1904): Leçons sur l'intégration et la recherche des fonctions primitives. Paris: Gauthier-Villars.


Moulin, H. (1996): "Cost sharing under increasing returns: a comparison of simple mechanisms," Games and Economic Behavior, 13, 225–251.

Moulin, H. (2000): "Axiomatic cost and surplus-sharing," in: Arrow, K., A. Sen and K. Suzumura (eds.), Handbook of Social Choice and Welfare (forthcoming).

Moulin, H., and S. Shenker (1992): “Serial cost sharing,” Econometrica,60, 1009–1037.

Moulin, H., and S. Shenker (1994): "Average cost pricing versus serial cost sharing: an axiomatic comparison," Journal of Economic Theory, 64, 178–201.

Sharkey, W. (1982): The Theory of Natural Monopoly. Cambridge, UK: Cambridge University Press.

Shubik, M. (1962): "Incentives, decentralized control, the assignment of joint cost, and internal pricing," Management Science, 8, 325–343.

Tijs, S.H., and M. Koster (1998): "General aggregation of demand and cost sharing methods," Annals of Operations Research, 84, 137–164.

Young, H.P. (1985): Cost Allocation: Methods, Principles, Applications. Amsterdam: North-Holland.


Chapter 8

Centrality Orderings in Social Networks

BY HERMAN MONSUUR AND TON STORCKEN

P. Borm and H. Peters (eds.), Chapters in Game Theory, 157–181. © 2002 Kluwer Academic Publishers. Printed in the Netherlands.

8.1 Introduction

Social networks describe relationships between agents or actors in a society or community. Examples of such relations are: ‘is able to communicate with’, ‘is in the same club as’, ‘has strategic alliances with’, ‘trades with’, ‘has diplomatic contacts with’, ‘is friend of’, etc. These relations can be formalized by dyadic attributes of pairs of agents. This yields a graph where vertices or nodes play the roles of agents, and edges or arcs those of these attributes. Such a network or graph enables the study of structural characteristics describing the agents’ position in the network. In the literature, a variety of power or status measures have been discussed, see for example Braun (1997) or Bonacich (1987). For measures of proximity, see Chebotarev and Shamis (1998). Also measures for centrality have been discussed, see for instance Faust (1997), Friedkin (1991), and many others. Centrality captures the potential of influencing decision making or group processes in general, of being a focal point of communication, of being strategically located, and the like, see for example Gulatti and Gargiulo (1999) and Freeman (1979). Centrality, therefore, plays an important role in networks on social, inter-organizational, or communicational issues.

Let a centrality ordering be a mapping assigning to a graph G a partial ordering on the set of vertices of that graph. This ordering is a reflexive and transitive relation. A pair of vertices is in that relation whenever is at a position in the graph which is considered to be at least as central as the position of in that graph. So, if the ordering is complete, then constitutes a complete list of the vertices arranged from best to worst with respect to their centralities. In Monsuur and Storcken (2001), centrality positions have been studied, yielding a subset of vertices considered to be the central ones. So, in the latter case this set would consist of all best ordered vertices. In the present chapter, similar to the centrality position approach, the focus is on the conceptual issue of what makes a vertex more central than another one. This leads to an axiomatic study of centrality orderings.

Three centrality orderings, defined for simple undirected connected graphs, are characterized. These are the cover, the degree and the median centrality orderings. The cover relation originates from Miller (1980): in a social network, vertex is said to cover vertex if all neighbours of are also neighbours of If covers then every social link of can be covered by one of So, weakly dominates Degree centrality orders the vertices according to their number of neighbours. Here, the assumption is that the more neighbours a vertex has, the more it is a focal point of communication. In the extreme case that all vertices are neighbours of a vertex this vertex is at a star position and occupies a most central position, see also Freeman (1979). Median centrality orders the vertices of a graph according to their sum of distances to all other vertices. The smaller this sum, the more central a vertex is considered. This refers to network communication, where each agent is to be reached separately, and costs are determined by distances. Another way of looking at this is as follows. A vertex is viewed as central to the extent that it can avoid the control of communication by others. See Freeman (1979), who introduced closeness centrality in this way. Median and closeness centrality yield the same outcome.

For cover as well as median centrality we introduce a set of characterizing conditions. For degree centrality, we provide four such sets. We strove to employ as few conditions as possible in these six characterizations. Of course, the independence of the conditions within each set is proved as well. All the conditions might be appealing from an intuitive point of view. For instance, the star condition: a star position is ordered better than a non-star position. Or neutrality: the names of the vertices are not important. A number of conditions that we use have the following general format. If in going from graph G to graph the change in network environment for vertex is similar in spirit to the change for that of vertex then the ordering between and in G is the same as that in So, a centrality ordering is invariant with respect to these environment changes. In Monsuur and Storcken (2001), in two of the three characterizations, a convergence condition is used. Here such a condition is absent. Although the setting here is different from that of centrality positions, some of the conditions in Monsuur and Storcken (2001) could be adapted to the present situation.

The chapter is organized as follows. In Section 8.2, the model is spelled out and several centrality orderings are defined. Section 8.3 is on the cover relation. That is, a characterization is discussed and furthermore it is shown that many centrality orderings are just refinements of the cover relation. In Section 8.4, four characterizations of the degree centrality ordering are provided. In Section 8.5, median centrality is characterized. Finally, Section 8.6 deals with the independence of the conditions in all these characterizations.

8.2 Examples of Centrality Orderings

Let denote an infinite but countable set of potential vertices. A graph or network G is an ordered pair (V,E), where V is a finite, non-empty set of vertices, and E is a subset of the set of non-ordered pairs of V. Elements of E are called edges or arcs. Vertices represent agents while arcs represent the relation between these agents. If then and are neighbours.

Furthermore, denotes the neighbourhood of and the closed neighbourhood of

The number of neighbours determines the degree of a vertex

The star of a graph G consists of all vertices which are adjacent to all other vertices, i.e.

Let be a non-empty subset of V and let be a subset of Then a graph is a subgraph of G, which we denote by In case it is said to be the subgraph of G induced by

Let P = (W,F) be a graph, where Then P is called a path between and if

P is a path in G if P is a path and a subgraph of G.


The length of the path P is i.e. #W – 1. P is also denoted by

Let G = (V,E) and be graphs. Then the union is the graph determined by the ordered pair

In the sequel we assume that a graph is connected, i.e. between any two vertices there is a path.

To have a connected union, we take V and not disjoint. In view of connectedness, the geodesic distance between two vertices and in a graph G, i.e., the minimal length of the paths between and induces a well-defined function, denoted as Obviously is defined to be zero. Further, the sum of distances between a given vertex and all other vertices is denoted by

Let V be a non-empty and finite subset of vertices. A partial ordering on V is a reflexive and transitive relation on V. For means that is at least as good as which we write as If and then and are indifferent: If and then is better than If, in addition, an ordering is complete, then it is a weak ordering. For denotes the restriction of to i.e.

Given a network G = (V, E), where we may rank the vertices from most central to least central. As we consider orderings with respect to centrality, we call all these possible orderings centrality orderings.

Centrality ordering. A centrality ordering is a function assigning to each undirected, connected graph G = (V,E) a partial ordering on V.

In social network analysis, this ordering is often based on scores assigned to vertices, indicating the centrality of that vertex or point, resulting in a complete ordering. That is, the higher the score, the more central a vertex is considered. First, we introduce measures that are discussed in the existing literature on this subject, see for example Faust (1997), Freeman (1979) or Friedkin (1991).

Let G be a graph. Then or the degree centrality, is equal to

It is clear that, intuitively speaking, the point at the center of a star is the most central one. The measure assigns to this center the highest centrality. With respect to communication, a point with highest degree centrality is visible in this network, or is a focal point of communication, see for example Freeman (1979).
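As a minimal sketch, with a graph given as an adjacency dictionary (a representation assumed here purely for illustration), degree centrality is immediate:

```python
def degree_centrality(adj):
    """Score each vertex by its number of neighbours."""
    return {v: len(nbrs) for v, nbrs in adj.items()}

# A star with center 'a': the center is adjacent to every other vertex
# and therefore receives the highest degree score.
star = {'a': {'b', 'c', 'd'}, 'b': {'a'}, 'c': {'a'}, 'd': {'a'}}
deg = degree_centrality(star)
assert deg['a'] == 3 and max(deg, key=deg.get) == 'a'
```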

Let G be a graph. Then or the betweenness centrality, is

where is the number of geodesics from to containing i.e. paths in G from to of minimal length that contain while is the total number of geodesics from to This view of centrality is based upon the frequency with which a vertex falls on a shortest path in G connecting two other vertices. A vertex linking pairs of other vertices can influence the transmission of information.
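Betweenness can be computed by counting geodesics with a breadth-first search from each vertex. The sketch below follows Freeman's convention of excluding the endpoints s and t themselves; the adjacency-dictionary representation is an assumption for illustration.

```python
from collections import deque

def bfs_counts(adj, s):
    """Distances from s and the number of geodesics to each vertex."""
    dist = {s: 0}
    sigma = {v: 0 for v in adj}
    sigma[s] = 1
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
            if dist[w] == dist[u] + 1:
                sigma[w] += sigma[u]
    return dist, sigma

def betweenness(adj):
    """Sum over pairs {s,t} of the fraction of s-t geodesics through v."""
    score = {v: 0.0 for v in adj}
    verts = sorted(adj)
    info = {s: bfs_counts(adj, s) for s in verts}
    for i, s in enumerate(verts):
        d_s, sig_s = info[s]
        for t in verts[i + 1:]:
            d_t, sig_t = info[t]
            for v in adj:
                if v not in (s, t) and d_s[v] + d_t[v] == d_s[t]:
                    score[v] += sig_s[v] * sig_t[v] / sig_s[t]
    return score

# Path a-b-c: b lies on the unique a-c geodesic.
path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
assert betweenness(path)['b'] == 1.0
```

For the path a–b–c, the middle vertex lies on the unique a–c geodesic and scores 1, while the endpoints score 0.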

Let G be a graph. Then or the closeness centrality, is defined by:

So, is the inverse of the average distance of to other vertices. Interpreting each vertex as a point that controls the flow of information which passes through it, a vertex is viewed as central to the extent that it can avoid this control of communication by others (Freeman, 1979).
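A sketch of closeness centrality via breadth-first-search distances. The reciprocal of the total distance is used here; it differs from the reciprocal of the average distance only by the constant factor #V − 1, so both induce the same ordering, and this is also the ordering of the median centrality defined below.

```python
from collections import deque

def distances(adj, s):
    """BFS geodesic distances from s (the graph is assumed connected)."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def closeness(adj):
    """Inverse of the total distance to all other vertices."""
    return {v: 1.0 / sum(distances(adj, v).values()) for v in adj}

# Path a-b-c: b has total distance 2; a and c have total distance 3.
path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
cl = closeness(path)
assert cl['b'] > cl['a'] == cl['c']
```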

Next, let the adjacency matrix M be defined for an element at position by if otherwise This means that the square matrix M is a nonnegative and symmetric matrix. From the theory of nonnegative matrices (see for example Berman and Plemmons, 1979) we deduce the following results. An nonnegative matrix A is reducible if there exists a permutation of its rows and its columns such that we obtain with B and D square matrices, or and A = 0. Otherwise A is irreducible. It can be shown that a nonnegative matrix A is irreducible if and only if for every there exists a natural number such that where is the element of at position If a nonnegative matrix is irreducible, then the spectral radius of A, is a simple eigenvalue, any eigenvalue of A of the same modulus is also simple, A has a positive eigenvector corresponding to and any nonnegative eigenvector of A is a multiple of If there exists a natural number such that is positive, in which

case A is a primitive matrix, then A is irreducible and is greater in magnitude than any other eigenvalue. If A is nonnegative and primitive with then is a positive matrix whose columns are positive eigenvectors corresponding to

If we assume that the graph G is connected, then is positive, so M is primitive. We deduce that is a simple eigenvalue, M has a positive eigenvector corresponding to and exists and gives positive copies of

Let G be a graph. Then or the eigenvector centrality, is defined as

We give a motivation for this measure. In determining the centrality of a vertex, one may take into account the centrality of its neighbours: being connected to a highly central vertex adds to the centrality of a vertex. Since, in turn, the centrality of the neighbours also depends upon the centralities of other vertices, this process looks circular. The eigenvector approach proves to be useful in solving this problem. Indeed, if we let represent the centralities of the vertices, then contains, for each vertex, the sum of the centralities of its neighbours. Since, for arbitrary vertices and this process of assigning to a vertex the sum of centralities of its neighbours does not change the (relative) centralities. The (iterative) procedure of computing the eigenvector, as described above, is also implemented in a prototype search engine for the Web, see the Clever Project (1999).
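The iterative procedure can be sketched by power iteration. One caveat: the adjacency matrix M itself is primitive only for connected non-bipartite graphs, so this sketch iterates with M + I instead, which is primitive for every connected graph and has the same Perron eigenvector (all eigenvalues are merely shifted by 1). That choice is an assumption made to keep the illustration robust, not a claim about any particular implementation.

```python
def eigenvector_centrality(adj, iters=100):
    """Power iteration on M + I: each vertex receives its own score plus
    the scores of its neighbours, and the result is renormalized.  The
    iterate converges to the Perron eigenvector of the adjacency matrix."""
    x = {v: 1.0 for v in adj}
    for _ in range(iters):
        y = {v: x[v] + sum(x[w] for w in adj[v]) for v in adj}
        norm = max(y.values())
        x = {v: y[v] / norm for v in y}
    return x

# Star with center 'a': the center gets the highest eigenvector score;
# the leaves converge to 1/sqrt(3) of the center's score.
star = {'a': {'b', 'c', 'd'}, 'b': {'a'}, 'c': {'a'}, 'd': {'a'}}
ec = eigenvector_centrality(star)
assert ec['a'] == max(ec.values()) and abs(ec['b'] - 3 ** -0.5) < 1e-6
```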

We next introduce a new measure, which uses the Shapley value of a game, see Shapley (1953). For a graph G, let the cooperative (transferable utility) game be defined by letting be the number of unordered pairs such that all shortest paths from to pass through W, where W is any subset of V. The term 'passes through' refers to the situation that at least one vertex of the path P is also an element of W. Seeing paths as communication links, measures the mediator role of W in these communication links.

If we consider the dual game that is defined by then it is easy to verify that equals the number of unordered pairs and in W such that there is a path from to that is contained entirely in


The Shapley value of a game is defined as

where It is known that the Shapley values of and coincide.

Let G be a graph. Then or the Shapley centrality, is defined by
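For very small graphs, the game and its Shapley value can be computed by brute force. Following the verbal definition above, a pair {s,t} contributes to the worth of a coalition W when s or t lies in W, or when removing W strictly increases the distance between s and t (so that every geodesic meets W). The permutation-based Shapley computation below is a hedged illustration, feasible only for a handful of vertices.

```python
from collections import deque
from itertools import combinations, permutations

def dist(adj, s, t, allowed):
    """Shortest s-t path length whose interior vertices stay in `allowed`."""
    seen, queue = {s}, deque([(s, 0)])
    while queue:
        u, d = queue.popleft()
        for w in adj[u]:
            if w == t:
                return d + 1
            if w in allowed and w not in seen:
                seen.add(w)
                queue.append((w, d + 1))
    return float('inf')

def v(adj, W):
    """Number of pairs {s,t} all of whose geodesics meet the coalition W."""
    verts, count = set(adj), 0
    for s, t in combinations(verts, 2):
        if s in W or t in W:
            count += 1                      # an endpoint itself lies in W
        elif dist(adj, s, t, verts - W) > dist(adj, s, t, verts):
            count += 1                      # avoiding W lengthens the trip
    return count

def shapley(adj):
    """Average marginal contribution of each vertex over all orderings."""
    verts = sorted(adj)
    phi = {u: 0.0 for u in verts}
    orders = list(permutations(verts))
    for order in orders:
        coalition = set()
        for u in order:
            phi[u] += v(adj, coalition | {u}) - v(adj, coalition)
            coalition.add(u)
    return {u: phi[u] / len(orders) for u in phi}

# Path a-b-c: the middle vertex mediates the pair {a,c} and scores highest.
path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
phi = shapley(path)
assert phi['b'] > phi['a'] == phi['c']
```

Efficiency holds by construction: the scores sum to v(V), which is 3 in the path example.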


Further, or the center centrality, is defined by

This measure of centrality is based on the geometrical notion of centrality: minimizing the eccentric distances. Here the score is based on the eccentricity.

The median centrality or is defined by

This measure is based on the sum of distances. This refers to a communication network, where each agent is to be reached individually and costs are determined by the distances. So, the smaller the sum, the more central an agent's position.

Note that and yield coinciding orderings. Of course, the underlying ideas are the same.

All these centrality measures induce corresponding centrality orderings

The centrality orderings introduced so far are based on scores, which can depend on the local structure around a vertex, like degree centrality, or can depend on global structures, like the median, center and eigenvector centrality orderings.

Next, we define the cover centrality ordering, which is not based on scores but on the very local structure around vertices. Actually, we discuss the cover relation, which is not necessarily complete. See also Miller (1980), Moulin (1986), Fishburn (1977), Peris and Subiza (1999) and Dutta and Laslier (1999). In a graph G, a vertex covers vertex if either

and

or

It is straightforward to prove that this cover relation is transitive (and reflexive). Therefore, it is a (partial) ordering.

The cover centrality ordering assigns to each graph G its cover relation:

8.3 Cover Centrality Ordering

In this section, we characterize the cover centrality ordering as the inclusion-minimal centrality ordering satisfying four independent conditions. Furthermore, we show that the degree, closeness (hence median), eigenvector and Shapley centrality orderings are refinements of this cover centrality ordering, while the betweenness and center centrality orderings are not.

A centrality ordering satisfies the star condition if for each graph G = (V, E),

As it is natural that star vertices are the most central positions, this condition is intuitively clear.
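The cover relation can be tested directly from neighbourhoods. Since the formal two-clause definition is not fully legible in this text, the sketch below uses a reconstruction of the verbal description ('all neighbours of y are also neighbours of x'): closed-neighbourhood inclusion when x and y are adjacent, open-neighbourhood inclusion otherwise.

```python
def covers(adj, x, y):
    """x covers y: every neighbour of y is a neighbour of x (or x itself).
    Reconstruction of the two-clause definition: adjacent pairs compare
    closed neighbourhoods, non-adjacent pairs compare open ones."""
    if x == y:
        return True
    if y in adj[x]:
        return adj[y] | {y} <= adj[x] | {x}
    return adj[y] <= adj[x]

def cover_ordering(adj):
    """The cover relation as a set of pairs (x, y): x at least as central
    as y.  It is reflexive and transitive, but in general only partial."""
    return {(x, y) for x in adj for y in adj if covers(adj, x, y)}

# Star with center 'a': the center covers every leaf (but not conversely),
# and the leaves, having identical neighbourhoods, cover one another.
star = {'a': {'b', 'c', 'd'}, 'b': {'a'}, 'c': {'a'}, 'd': {'a'}}
rel = cover_ordering(star)
assert ('a', 'b') in rel and ('b', 'a') not in rel
assert ('b', 'c') in rel and ('c', 'b') in rel
```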

A centrality ordering satisfies partial independence if for every graph G = (V,E) and subgraph such that for some

So, partial independence means that the centrality ordering between and only depends on the local network environment of and in a graph. It is therefore clear that the degree and cover centrality orderings satisfy this condition, while, for example, the median centrality ordering does not.

A centrality ordering is said to be equable of equal distance connected arc additions if for each graph G = (V,E) and subgraph

and vertices

whenever there are such that


for all and for all

To illustrate this condition, take and let where and

So, in going from to G, we add connected arcs and such that the distances between and and between and decrease by 1, and all other distances between and and the other vertices remain unchanged. Furthermore, the added arc has the same distance to as arc has to In this case, equability of equal distance connected arc addition requires that this addition has no effect on the ordering between and Loosely speaking, it means that the preference between and remains unchanged whenever we only decrease the distance between and by 1 and the distance between and by 1, and and have the same distance to respectively and and either or is the added arc. It is straightforward to prove that and satisfy this equability condition.

A centrality ordering is said to be appendix dominating if for graph G = (V,E), with and all vertices

For a graph G = (V,E), with let there is a vertex such that for all is on all paths from to }.

These four conditions characterize the cover centrality ordering:

Theorem 8.1 Let be a centrality ordering that satisfies the star condition, partial independence, equability of equal distance connected arc addition and appendix domination. Then for all connected graphs G.

Proof. It is straightforward to show that the cover centrality ordering satisfies these four conditions. Conversely, let be a centrality ordering satisfying these four conditions. Further, let G = (V, E) be a graph and such that To prove the theorem it is sufficient to prove the following two implications:

and


We consider two cases.

Case 1: and Then by appendix domination, we have proving (8.14) and (8.15).

Case 2: or By partial independence, we obtain a sequence of graphs such that

Note that for any Now, let Applying equability of equal distance connected arc additions times yields a sequence of graphs such that

nected arc additions times, yields a sequence of graphssuch that

and

Now, let with We either have or is obtained from by an equal distance connected arc addition. So, we may conclude that

Note that so that the two implications (8.14) and (8.15) follow evidently from (8.20).

The independence of the four characterizing conditions will be discussed in Section 8.6.

The centrality ordering satisfies the cover principle if for all graphs G = (V, E) and all vertices and in V: if covers then If, in addition, does not cover and we say that satisfies the strict cover principle. If satisfies the strict cover principle, is a refinement of

Proposition 8.2 The centrality orderings induced by and satisfy the cover principle, but do not satisfy the strict cover principle.


Proof. We first consider the measure Suppose covers We consider two cases.

Case 1: Let be a geodesic between and Since covers also is a geodesic between and

Case 2: Consider a path from to of the following type: Because covers we have a shorter path This means that cannot be part of a geodesic from to The conclusion is that for each geodesic path from to containing we also have a geodesic path from to including or

The betweenness centrality measure does not satisfy the strict cover principle. To show this, consider G with Then strictly covers But The proof that satisfies the cover principle is left to the reader. To show that it does not satisfy the strict cover principle, let Then strictly covers while


Proposition 8.3 The centrality orderings induced by and satisfy the strict cover principle.

Proof. Let G = (V, E) be a graph with M its adjacency matrix and let and be distinct vertices. Since the assertion is obvious for we only consider the other measures.

Suppose that covers Since for every the distance from to is larger than or equal to the distance from to If, in addition, does not cover there is an element such that while So the distance between and is strictly smaller than the distance between and resulting in

Next, let be the vector containing the eigenvector centralities. Then

where the inequality is strict whenever the covering is strict. Since the result easily follows.

For the proof for we consider the dual game Let cover in the graph G. We first prove that satisfies the cover principle. We consider two cases.

Case 1: Let F be such that We show that Let P be a path along G from vertex to that is contained in If then P is entirely contained in If


then where and Since covers in graph G, we also have another path from to that is contained in Hence So we have

Case 2: Let F be such that while Then there is a unique with the same cardinality as F and while It is evident that since

Next we proceed by showing that Let P be a path from vertex to contained in F. If then P is contained entirely in Now suppose that where and If then, as before, we may exchange by resulting in a path from to that is contained in If so that then is a path in So, we have Altogether, we have shown that

To verify that does satisfy the strict cover principle, we consider the case that covers while does not cover This means that there exists an element such that while We show that there exists a subset F of the set of vertices such that To this end, take Then if otherwise it equals 3. Furthermore while This proves that completing the proof of the proposition.

8.4 Degree Centrality Ordering

In this section, four characterizations of the degree centrality ordering are presented. The notion of degree centrality for undirected graphs is similar to that of the Copeland score in tournaments. See also Freeman (1979), Moulin (1980), Rubinstein (1980) and Delver et al. (1991). Some of the characterizing conditions stem from this literature on Copeland scores. First we discuss the various conditions used in the characterizations.

The following condition is slightly stronger than equability of equal distance connected arc additions, see equation (8.12), in that we do not require that the arcs are connected.

A centrality ordering is said to be equable of equal distance arc additions if for graph G = (V,E) and subgraph and


vertices

whenever there are such that

for all and for all

Note that adding connected arcs at equal distances in the neighbourhood of hence implies that and either or So, this addition does not affect the cover relation between and If connectedness is dropped, arc additions, even at equal distances, may influence the cover relation. Therefore, does not satisfy this new condition. On the other hand, it is straightforward to see that and do satisfy this condition. In fact, if we substitute this equability condition for the connected version in Theorem 8.1 and drop appendix domination, we obtain a set of characterizing conditions for as is shown in Theorem 8.4(i).

The following condition requires the notion of a lenticular graph. Let be paths from vertex to Then the union is called a lenticular graph between and if for all with Hence the paths only meet at and

A centrality ordering is called invariant at lenticular additions if for graphs G = (V, E), all vertices and lenticular graphs

between and

whenever and for all with

So, if we add a number of disjoint paths from to such that the distances between all pairs of vertices different from are not changed, then this addition does not affect the centrality ordering between x and y. Theorem 8.4(ii) shows that substituting this condition for equability of equal distance arc additions yields a new set of characterizing conditions for the degree centrality ordering. It is easy to see that adding lenticular graphs may ruin the cover relation.

A centrality ordering is said to be complete if for all graphs G, is a complete ordering.


A centrality ordering is said to be neutral if for all graphs G = (V,E) and all permutations on V,

where such that and

Neutrality means that the centrality ordering does not depend on the actual numbering of the vertices. It guarantees a similar ordering of the vertices if these are at similar positions in a graph.

A centrality ordering is monotonic if for all graphs G = (V,E) and subgraphs with for some vertices with we have for every

Going from to G, the arc is added, where is not equal to Then monotonicity implies that if is at least as good as at then the relative position of with respect to improves, meaning that is better than at Clearly satisfies this condition. Replacing the star and equability conditions by the latter three defined conditions yields yet another characterization of in Theorem 8.4(iii).

A centrality ordering is swap invariant if for all graphs G = (V, E) and and all vertices

whenever there are such that E and

Going from G to we swap neighbour of with neighbour Swap invariance means that this type of neighbour swapping has no effect on the ordering between and

Replacing partial independence with swap invariance, we obtain our fourth characterization of

It is straightforward to check that satisfies all the conditions defined in this section.

Theorem 8.4 The degree centrality ordering is the only centrality ordering that satisfies any of the following four sets of conditions: (i) partial independence, equability of equal distance arc additions and the star condition; (ii) partial independence, invariance at lenticular additions and the star condition; (iii) partial independence, neutrality, monotonicity and completeness; (iv) swap invariance, neutrality, monotonicity and completeness.



Proof. It is straightforward to prove that satisfies all these conditions. In order to complete the proof of these four characterizations, let be a centrality ordering satisfying one of these four sets of conditions. We proceed by showing that Let G = (V,E) be a graph and two distinct vertices. It is sufficient to prove that


and

Case 1: Let satisfy partial independence. By adding arcs which neither involve vertex nor vertex we obtain a graph with subgraph G, such that for all By partial independence,

Let f satisfy the set of conditions (i) of the theorem. First, consider the case where

Then, since for all by equability of equal distance arc additions, it is without loss of generality to assume that and that is empty. Now, let and Invoking equability of equal distance arc additions, we may assume that This holds for all such and So, if then By (8.28) and the star property it follows that If then So, by (8.28) and the star property, we find

Now, consider the special case of Since G is connected, we have If we are done with the star condition. Suppose Then by partial independence, it is sufficient to prove where is the path graph

Now, apply the previous case to and This yields Application of the previous case to and yields Then, by transitivity of the ordering we obtain As we proved the implications (8.26) and (8.27), we have shown that if satisfies the set of conditions (i).

Let satisfy the set of conditions (ii). By invariance at lenticularadditions, it is without loss of generality to assume that


172 MONSUUR AND STORCKEN

Hence, for all Now, by invariance at lenticular additions, it is without loss of generality to assume that Let

Let and consider the path Adding P to we have by invariance at lenticular additions that Now, by partial independence, we may delete arc delete arcs where and add the two arcs and yielding graph such that

Now, is a path in and by invariance at lenticular additions, it follows that As the previous reasoning shows that it is without loss of generality to assume that is empty. Next, consider and Now is a path in By partial independence and invariance at lenticular additions, we have This holds for all such and Hence it is without loss of generality to assume that

is empty if and is empty if Now, similarly to condition set (i), the star condition can be used to prove implications (8.26) and (8.27). So, if satisfies the conditions mentioned in (ii).

Let satisfy the conditions in (iii). In order to prove implication (8.26), let Then, the cardinality of

equals that of Hence there is a bijection from Consider a permutation on V such that

and for for and for all or Now, and by neutrality

As and is complete, it follows that and Hence by (8.28) and which proves implication (8.26). In order to prove implication (8.27), let

Let be a subgraph of which is obtained by deleting some of the arcs for such that Then, by implication (8.26), we have and by monotonicity we have So, by (8.28) this yields which proves implication (8.27). So, if satisfies the conditions mentioned in (iii).

Case 2. Let satisfy swap invariance, neutrality and completeness. First, we prove implication (8.26). Let By a sequence of swaps, while maintaining connectedness, we may swap all neighbours of with those of yielding a graph such that

By swap invariance, we have Consider a permutation on V such that and for all other vertices. Then and by neutrality

So by (8.28) we have As is complete, this yields that which proves implication (8.26). Now, similarly to the case of (iii), monotonicity gives implication (8.27). So if satisfies the set of conditions (iv).

In Section 8.6, we discuss the independence of the conditions in Theorem 8.4.


8.5 Median Centrality Ordering

In the existing literature, one may find axiomatic characterizations of locations on networks; see, for example, Foster and Vohra (1998), Holzman (1990) or McMorris et al. (2000). There are, however, a few differences between their work and our approach. Firstly, they consider tree networks (or median graphs), while in our model a network may be arbitrarily cyclic. Secondly, in social networks, the problem is to find the set of central vertices among the set of all vertices. This contrasts with theories concerning location or consensus, where one generally works with profiles, which are of locations (not necessarily vertices). The problem then is to find a compromise location on the network. Thirdly, we assume that the distance between two adjacent vertices equals one, which is not the case in location theory. Altogether, this means that our necessary and sufficient conditions, used in the following axiomatic characterization of the median for arbitrary social networks, do not compare to the axioms used in location theory.

Here the median centrality ordering will be characterized by three conditions: the star property, invariance at lenticular additions and yet another equability condition. This latter condition is a strengthening of equability of equal distance arc additions, see equation (8.21), in that equal distance of the added arcs is not required.

A centrality ordering is said to be equable of arc addition if for all graphs G = (V,E) and subgraphs and all vertices


whenever there are such that

for all for all

Theorem 8.5 The median centrality ordering is the only centrality ordering that satisfies equability of arc addition, invariance at lenticular additions and the star condition.
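As an illustration, and assuming the usual notion of a graph median (vertices minimizing the total distance to all other vertices, with every arc of length one), the induced ordering can be sketched as follows; the path-graph example is illustrative:

```python
from collections import deque

def distances_from(graph, source):
    """BFS distances from source; every arc has length one."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def total_distance(graph, v):
    """Sum of distances from v to all vertices of a connected graph."""
    return sum(distances_from(graph, v).values())

def median_ordering(graph):
    """Vertices sorted from most to least central (smallest total first)."""
    return sorted(graph, key=lambda v: total_distance(graph, v))

# Path graph 0-1-2-3-4: the middle vertex 2 is the median.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(median_ordering(path)[0])  # 2
```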

Proof. It is straightforward to prove that the median centrality ordering satisfies these three conditions. In order to prove that it is the only centrality ordering that does so, let G = (V,E) be a graph and

two vertices. Let be a centrality ordering that satisfies the conditions. It is sufficient to prove the following two implications, (8.30) and (8.31):


The proof of these implications is based on the construction of a special sequence of graphs: is a confined sequence of graphs if, for each is obtained from by a lenticular addition or equable arc additions with respect to and Note that

and by equability of arc additions and invariance at lenticular additions,

Eventually, we construct a confined sequence of graphs, such that Applying the previous two remarks and the star condition then yields implications (8.30) and (8.31). The following Claim prepares the construction of such a confined sequence of graphs.

Claim. Let G = (V, E) be a graph. Let be two distinct vertices and let Then there is a confined sequence of graphs

such that and for all

Proof of Claim. First, we construct a confined sequence of graphs, say such that and for all where If this is true for then this desired sequence consists of only. So, we assume that this is not the case. Let

for all Further, let and be subsets of W defined by for all and for all We proceed by arbitrarily choosing two vertices and

Let and Note that

Next, let be obtained from by lenticular addition of two distinct paths and

Moreover, let It is clear that for any

where the latter inequality is strict for So, if it is the case that is obtained from by equable arc additions, then repeating this procedure yields the desired sequence So, we have to prove that is obtained from by equable arc additions.

Obviously, and Take and In order to show that

and are equable arc additions, it is sufficient to prove that and As we

have and We prove that the assumption for any

yields a contradiction.

Case 1. Suppose that any shortest path from to in contains In view of the construction of and it is without loss of generality to assume Since

and is on a shortest path from to in we have Therefore Next, since

we obtain Because we have But this means that But then, since

is not in a contradiction.

Case 2. Suppose that there is a shortest path from to in not containing So it necessarily contains But then we obtain contradicting

So, we may conclude that yields a contradiction. Similarly, for any yields a contradiction.




Hence, we have a confined sequence with the property that proving the second part of the Claim.

If then the conclusion that for all yields and for all

So, the Claim is proved for the case Therefore, we now consider the case Let such that

and So meaning that we have and furthermore, since we have a path Then add a (lenticular) path

resulting in Next, we have the following equable arc additions giving if then reducing

by one, if then reducing by one, and reducing by one. Now we have By repeating these last path and arc additions, we obtain a sequence as desired in the Claim. This completes the proof of the Claim.


Now, let Then, by the Claim, there exists a sequence of graphs where is obtained from by means of equable arc additions or lenticular additions, such that

and

Consider the case Then where and Now, suppose that Then Add 2t (equable) arcs resulting in where By the star condition, Using Remark (8.33) then also If

then, by similar reasoning, This proves implications (8.30) and (8.31) for this case.

Next, consider the case By virtue of (8.35) and (8.36), the new path with of length

is an allowed lenticular addition, giving Now, by a simple induction argument, we obtain that (8.30) and (8.31) hold for But then Remark (8.33) yields (8.30) and (8.31) for graph G.

The independence of the characterizing conditions will be investigated in the following section.


8.6 Independence of the Characterizing Conditions

In the foregoing sections, three centrality orderings are characterized, involving in total six sets of conditions. In this section, we discuss the independence of the conditions within each set. To prove the independence, and thereby complete the characterizations in the sense that the characterizing condition sets are inclusion minimal, we introduce six centrality orderings. These orderings are constructed merely to fit the independence proofs and most likely have no further practical use.

The stap centrality ordering is a centrality ordering defined for all graphs G = (V,E) by and or and

The degmed centrality ordering is a centrality ordering in which indifferences according to may be resolved according to For a graph G = (V, E) and vertices it is defined by

whenever or whenever and whenever and

The smaller than centrality ordering is based on a numbering of all potential vertices in V. Let whenever for all

Now is defined for all graphs G = (V,E) by

Centrality ordering is defined for all graphs G = (V,E) by if and if

where is the path graph

Centrality ordering is defined for all graphs G = (V, E) and by if or and This centrality ordering refines the degree centrality by tie-breaking indifferences according to the numbering of the vertices.

Centrality ordering is defined for all graphs G by if and is the partial ordering where is the path graph

Consider the characterizing conditions for in Theorem 8.1. The independence of equability of equal distance connected arc additions from the other three conditions is shown by that of partial independence by that of the star condition by and that of appendix domination by

Consider the first characterization of in Theorem 8.4(i). The independence of equability of equal distance arc additions from the other two conditions is shown by that of partial independence by and that of the star condition by

Consider the second characterization of in Theorem 8.4(ii). The independence of partial independence from the other two conditions is shown by that of invariance at lenticular additions by and that of the star condition by

Consider the third characterization of in Theorem 8.4(iii). The independence of partial independence from the other three conditions is shown by that of neutrality by that of monotonicity by and that of completeness by

Consider the fourth characterization of in Theorem 8.4(iv). The independence of neutrality from the other three conditions is shown by that of swap invariance by that of monotonicity by and that of completeness by

Consider the characterization of in Theorem 8.5. The independence of equability of arc additions from the other two conditions is shown by that of invariance at lenticular additions by and that of the star condition by

This shows the independence of all conditions in each characterization. The following tables indicate which centrality ordering satisfies which condition. They are straightforward, although cumbersome, to check.


References

Berman, A., and R.J. Plemmons (1979): Nonnegative Matrices in the Mathematical Sciences. New York: Academic Press.

Bonacich, P. (1987): "Power and centrality: a family of measures," American Journal of Sociology, 92, 1170–1182.

Braun, N. (1997): "A rational choice model of network status," Social Networks, 19, 129–142.

Chebotarev, P.Yu., and E. Shamis (1998): "On proximity measures for graph vertices," Automation and Remote Control, 59, 1443–1459.

Clever Project (1999): "Hypersearching the Web," Scientific American, June, 44–52.

Danilov, V.I. (1994): "The structure of non-manipulable social choice rules on a tree," Mathematical Social Sciences, 27, 123–131.

Delver, R., H. Monsuur, and A.J.A. Storcken (1991): "Ordering pairwise comparison structures," Theory and Decision, 31, 75–94.

Delver, R., and H. Monsuur (2001): "Stable sets and standards of behaviour," Social Choice and Welfare, 18, 555–570.

Dutta, B., and J.-F. Laslier (1999): "Comparison functions and choice correspondences," Social Choice and Welfare, 16, 513–532.

Faust, K. (1997): "Centrality in affiliation networks," Social Networks, 19, 157–191.

Fishburn, P.C. (1977): "Condorcet social choice functions," SIAM Journal on Applied Mathematics, 33, 469–489.

Foster, D., and R. Vohra (1998): "An axiomatic characterization of a class of locations on trees," Operations Research, 46, 347–354.

Freeman, L.C. (1979): "Centrality in social networks: conceptual clarification," Social Networks, 1, 215–239.

Friedkin, N.E. (1991): "Theoretical foundations for centrality measures," American Journal of Sociology, 96, 1478–1504.

Gulati, R., and M. Gargiulo (1999): "Where do interorganizational networks come from?" American Journal of Sociology, 104, 1439–1493.

Haynes, T.W., S.T. Hedetniemi, and P.J. Slater (1998): Fundamentals of Domination in Graphs. New York: Marcel Dekker.

Holzman, R. (1990): "An axiomatic approach to location on networks," Mathematics of Operations Research, 15, 553–563.

Miller, N.R. (1980): "A new solution set for tournaments and majority voting: further graph-theoretical approaches to the theory of voting," American Journal of Political Science, 24, 68–96.

McMorris, F.R., H.M. Mulder, and R.C. Powers (2000): "The median function on median graphs and semilattices," Discrete Applied Mathematics, 101, 221–230.

Moulin, H. (1986): "Choosing from a tournament," Social Choice and Welfare, 3, 271–291.

Papendieck, B., and P. Recht (2000): "On maximal entries in the principal eigenvector of graphs," Linear Algebra and its Applications, 310, 129–138.

Peris, J.E., and B. Subiza (1999): "Condorcet choice correspondences for weak tournaments," Social Choice and Welfare, 16, 217–231.

Rubinstein, A. (1980): "Ranking the participants in a tournament," SIAM Journal on Applied Mathematics, 38, 108–111.

Ruhnau, B. (2000): "Eigenvector centrality, a node centrality?" Social Networks, 22, 357–365.

Seidman, S.B. (1985): "Structural consequences of individual position in nondyadic social networks," Journal of Mathematical Psychology, 29, 367–386.

Shapley, L.S. (1953): "A value for n-person games," in: H.W. Kuhn and A.W. Tucker (eds.), Contributions to the Theory of Games II, Annals of Mathematics Studies, 28, 307–317.

Storcken, T., and H. Monsuur (2001): "An axiomatic theory of centrality in social networks," Meteor Research Memorandum RM/01/009, University of Maastricht.

Vohra, R. (1996): "An axiomatic characterization of some locations on trees," European Journal of Operational Research, 90, 78–84.


Chapter 9

The Shapley Transfer Procedure for NTU-Games

BY GERT-JAN OTTEN AND HANS PETERS

9.1 Introduction

A cooperative game is described by sets of feasible utility vectors, one set for each coalition. Such a game may arise from any situation where the parties involved can achieve gains from cooperation. Examples range from exchange economies to cost allocation between divisions of multinationals or power distribution within political systems. The two central questions are which coalitions will form, and on which payoffs each formed coalition will agree. Since an answer to the latter question seems a prerequisite for studying the former question of coalition formation, most of the literature has concentrated on the question of payoff distribution. Specifically, the usual assumption is that the grand coalition of all players will form, and the question is then which payoff vector(s) this coalition will agree upon.

This question has been studied extensively for two special cases: games with transferable utility, and pure bargaining games.

In a game with transferable utility, what each coalition can do is described by just one number: the total utility or payoff, which that coalition can distribute among its members in any way it wants. The underlying assumption is the presence of a common medium of exchange in which the players' utilities are linear. For instance, the payoff is in monetary units and the players have linear utility for money.


P. Borm and H. Peters (eds.), Chapters in Game Theory, 183–203. © 2002 Kluwer Academic Publishers. Printed in the Netherlands.


184 OTTEN AND PETERS

In a pure bargaining game intermediate coalitions (coalitions other than the grand coalition or individual players) play no role. Because of these simplifying features, both types of games are easier to analyse than general cooperative games, also called games with nontransferable utility or NTU-games.

A solution is a map that assigns to every game within a certain subclass of NTU-games a feasible payoff vector, or set of feasible payoff vectors, for the grand coalition. In an important article, Shapley (1969) proposed a procedure to extend single-valued solutions, defined on the class of games with transferable utility and satisfying a few minimal conditions, to NTU-games. This procedure works as follows. For a given NTU-game consider any vector of nonnegative weights for the players. For every coalition, maximize the correspondingly weighted sum of the utilities of its members over the set of feasible payoffs of that coalition. Regard these coalitional maxima as a game with transferable utility, and apply the given solution for transferable utility games to this game: those payoff vectors of the original NTU-game that, when similarly weighted, belong to the solution of the TU-game are defined to be in the solution of the NTU-game. The complete solution of the NTU-game is then obtained by repeating this procedure for every possible weight vector.

This Shapley transfer procedure has been applied in particular to the Shapley value for games with transferable utility (Shapley, 1953), resulting in the 'nontransferable utility value' (Aumann, 1985) for NTU-games. For pure bargaining games, this solution coincides with the Nash bargaining solution, proposed by Nash (1950) for the case of two players. As Shapley (1969) indicated, however, the procedure can be applied to a variety of solutions; moreover, the existence result established by Shapley holds under quite mild conditions. An explicit example of this is the NTU studied in Borm et al. (1992). Another example is the so-called 'inner core' studied by Qin (1994) but proposed earlier by Shapley (1984).

The main objective of the present contribution is twofold. First we review and extend the Shapley transfer procedure; the extension is to any compact and convex valued, continuous solution. Shapley's existence result will be re-established for this extension. Second, we characterize solutions that are obtained by this procedure. This characterization can be seen as an alternative description of the Shapley transfer procedure. The price for obtaining existence and characterization within the same framework is the occurrence of zero weights.


SHAPLEY TRANSFER PROCEDURE 185

Section 2 introduces the Shapley transfer procedure. Section 3 contains the existence result and a digression on a well-known, earlier procedure proposed by Harsanyi (1959, 1963), to which the existence result applies equally. In Section 4 the announced characterization is presented. Section 5 discusses applications to several TU-solutions: the Shapley value, the core, the nucleolus, and the Section 6 concludes.

Notations. For a finite subset S of the natural numbers let denote the nonnegative orthant of For denote if for every and denote if for every The vector inequalities <, are defined analogously. The . denotes the usual inner product: The product denotes the vector in with coordinate equal to For

for a real number For another finite set of natural numbers M with and let be defined by for every let

Thus, the set is the projection of Y on the

9.2 Main Concepts

Let denote the set of players. A coalition is a nonempty subset of N. A subset is comprehensive if and imply for all The Pareto optimal subset of D is the set

and the weakly Pareto optimal subset of D is the set

A nontransferable utility game or NTU-game is a pair (N, V) where V assigns to every coalition S a feasible set V(S) such that

(N1) for every

(N2) V(S) is a nonempty, compact, convex and comprehensive subset of for every coalition S;

(N3) PO(V(S)) = WPO(V(S)) for every coalition S.
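Condition (N3) requires weak and strong Pareto optimality to coincide. For a finite set of candidate payoff vectors (a simplification used here only for illustration, since the feasible sets of the chapter are continua) the two notions can be compared directly:

```python
def dominates(x, y):
    """x is coordinatewise at least y, with at least one strict gain."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

def strictly_dominates(x, y):
    """x is coordinatewise strictly larger than y."""
    return all(a > b for a, b in zip(x, y))

def pareto_optimal(points):
    return [y for y in points if not any(dominates(x, y) for x in points)]

def weakly_pareto_optimal(points):
    return [y for y in points if not any(strictly_dominates(x, y) for x in points)]

# (2, 1) is weakly but not strongly Pareto optimal here, so a feasible
# set with these extreme points would violate (N3).
pts = [(2, 1), (2, 2), (1, 3), (0, 0)]
print(pareto_optimal(pts))         # [(2, 2), (1, 3)]
print(weakly_pareto_optimal(pts))  # [(2, 1), (2, 2), (1, 3)]
```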



These assumptions, though restrictive, are still quite standard. The normalization in (N1) is mainly for convenience; it is not innocent because, together with (N2), it implies, for instance, that every coalition can do at least as well as singleton coalitions. The convexity assumption in (N2) may arise from the players having von Neumann-Morgenstern utility functions over uncertain outcomes, or concave ordinal utility functions over bundles of goods. It is essential to what follows. Condition (N3) means that, for every coalition, every weakly Pareto optimal point is also Pareto optimal: there are no flat segments in the weakly Pareto optimal boundary of V(S). One consequence is that if with

for some then there is a with and for all note that in that case It follows, in particular, that either V(S) = {0} or there is a with

If for every then (N, V) is called a pure bargaining game. If for every coalition S there is a nonnegative real number such that then (N, V) is called a game with transferable utility or TU-game. Such a TU-game is sometimes also denoted by Our definition deviates from the usual one in that all payoff vectors are restricted to the nonnegative orthant.

Instead of (N, V) or we will usually write V or with the understanding that the player set is N.

The class of NTU-games [TU-games, pure bargaining games] with player set N is denoted by Often the superscript 'N' is omitted. Subclasses are denoted by etc.

Let be a subclass of NTU-games. An NTU-solution is a correspondence that assigns to each NTU-game a set (We use to denote a correspondence, i.e., a set-valued function.) If then is also called a TU-solution. Usually TU-solutions are denoted by lower case letters, e.g.,

A TU-solution defined on a class is regular if it satisfies the following three conditions. In condition (T3), for an NTU-game V and a real number we denote by the NTU-game with

for every coalition S.

(T1) is a nonempty, compact, and convex subset of PO(V(N)) for every

(T2) is continuous on

(T3) is homogeneous, that is, for every and real number



Here, continuity is meant with respect to the restriction to of the Euclidean metric on and the Hausdorff metric for compact sets in Conditions (T1), (T2) and (T3) are not very restrictive. Most known single-valued solutions (e.g., the Shapley value, the nucleolus,

) are continuous and homogeneous on the classes of TU-games on which they are defined. The best known multi-valued concept, the core, satisfies (T1), (T2), and (T3) on the class of balanced games. See Section 5 for some of the details.

For an arbitrary NTU-game V and an arbitrary vector the associated game is the transferable utility game defined by

where

for every coalition S. By (N1) and (N2) these numbers are well defined. For a class of NTU-games denote by

the class of all TU-games that arise as transfer games associated with NTU-games in Let be a regular TU-solution defined on We extend to an NTU-solution as follows. For each and

One way to understand this procedure (cf. Qin, 1994) is to think of the players as countries and of the coordinates of as exchange rates between these countries. Then the transfer game expresses what coalitions of countries can do in real monetary terms, and the vector represents a payoff distribution in real monetary terms. If this payoff distribution is feasible in terms of the original individual currencies, then it is a solution of the game.
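The construction of the transfer game can be sketched numerically when each feasible set is represented by finitely many generating payoff vectors, over which the weighted maximum is taken; the two-player game below is an illustrative choice, not an example from the chapter:

```python
def weighted_max(gens, weights):
    """v_lambda(S): the maximal lambda-weighted sum over the generators
    of V(S); for a polytope this maximum is attained at a vertex."""
    return max(sum(weights[i] * xi for i, xi in g.items()) for g in gens)

def transfer_game(V, weights):
    """The transfer TU-game: each coalition S gets the weighted maximum
    over its feasible set."""
    return {S: weighted_max(gens, weights) for S, gens in V.items()}

# Illustrative two-player NTU-game: singletons can only guarantee 0;
# the grand coalition's frontier is the segment between (4, 0) and (0, 2).
V = {
    frozenset({1}): [{1: 0.0}],
    frozenset({2}): [{2: 0.0}],
    frozenset({1, 2}): [{1: 4.0, 2: 0.0}, {1: 0.0, 2: 2.0}],
}
lam = {1: 1.0, 2: 2.0}
v = transfer_game(V, lam)
print(v[frozenset({1, 2})])  # 4.0
```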

We call the transfer solution associated with Observe that not all of the properties in (T1) and (T2) are trivially inherited by In this contribution we will be mainly concerned with existence, i.e., nonemptiness.




The transfer procedure can also be applied to TU-games, resulting in an extension of a TU-solution to the associated transfer solution on TU-games. We observe:

Lemma 9.1 Let be a regular TU-solution on a class Let with Then

Proof. Let and Then for every coalition S, so Hence,

Observe that we do not actually need regularity for Lemma 9.1 to hold. The inclusion in the lemma, however, can be strict, even for regular solutions, as the following example shows.

Example 9.2 Let N = {1,2,3} and for every let if and otherwise. Define by for every Observe that is a regular TU-solution.

Consider the game with for all coalitions S with more than one player. Then and Let Then and Since it follows that Hence, is strictly larger than

Note that the possibility of the strict inclusion is not due to the possibility of zero coordinates of but to the occurrence of boundary solution points with zero coordinates.

In Section 5 we present examples showing that also for TU-solutions such as the core and the nucleolus the inclusion in Lemma 9.1 can be strict: hence, the transfer procedure applied to TU-games may add additional solution outcomes. This will not be the case for the Shapley value or the

We conclude this section by reviewing a well-known application of this procedure. Let, as above, be the class of pure bargaining games. The class of associated transfer games consists of all TU-games with for all Consider the equal-split solution on

that is,

for every



Let V be a pure bargaining game such that for some (This is without loss of generality since the only alternative case is V(N) = {0}, see above.) Then for every Let

then if, and only if, there is a such that Since for every it follows that both and are positive and hence where and by definition of there is a hyperplane supporting V(N) at with normal Consider the product on V(N). At the supporting hyperplane to the level curve of this product has normal as follows straightforwardly by partial differentiation; hence this hyperplane also supports V(N). It follows that the product is maximized on V(N) at So is the Nash bargaining solution outcome for V (Nash, 1950). Thus, contains exactly one point, which is the Nash bargaining solution outcome.
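This connection can be checked numerically on a simple illustrative bargaining set, here V(N) = {x >= 0 : x1 + 2*x2 <= 4} (an assumption made for the sketch only). For the supporting weights lambda = (1, 2) the transfer game has worth 4, the equal split (2, 2) in weighted terms corresponds to the payoff (2, 1), and this payoff also maximizes the Nash product x1*x2:

```python
# Numerical check of the equal-split / Nash bargaining connection on
# the illustrative set V(N) = {x >= 0 : x1 + 2*x2 <= 4}.

def nash_product_maximizer(step=0.001):
    """Brute-force maximizer of x1*x2 on the frontier x1 + 2*x2 = 4."""
    best, best_x = -1.0, None
    x1 = 0.0
    while x1 <= 4.0:
        x2 = (4.0 - x1) / 2.0
        if x1 * x2 > best:
            best, best_x = x1 * x2, (x1, x2)
        x1 += step
    return best_x

lam = (1.0, 2.0)
v_lam = 4.0                         # max of lam . x over V(N): max(4, 4)
z = (v_lam / 2.0, v_lam / 2.0)      # equal split of the transfer game
x = (z[0] / lam[0], z[1] / lam[1])  # back to original utilities

x1, x2 = nash_product_maximizer()
print(x)                            # (2.0, 1.0)
print(round(x1, 2), round(x2, 2))   # 2.0 1.0
```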

9.3 Nonemptiness of Transfer Solutions

In this section we show that, under the imposed conditions, a transfer solution assigns a nonempty set of payoff vectors to any NTU-game for which it is defined. This result extends Shapley's (1969) existence result to the case where the TU-solution under consideration may be a correspondence. The proof closely follows Shapley's proof.

In order to obtain a compact set we normalize the vectors. Specifically, let denote the simplex in Elements of are also called weight vectors.

Theorem 9.3 Let and let Let be a regular TU-solution. Then

Proof. Define the correspondence by

So P assigns to a weight vector the set of all 'sidepayments' by which solution payoff vectors of the transfer game are carried over to feasible elements of the correspondingly weighted NTU-game. Note that P is nonempty, convex and compact valued. Moreover, it is upper semicontinuous since is continuous in the Hausdorff metric, and



is upper semicontinuous. These properties are inherited by the correspondence defined by

In particular this implies compactness of the set Therefore we can find a compact and convex subset D of such that Extend the correspondence L to D by defining, for every

where for every

Since the projection is also continuous, Kakutani's fixed point theorem implies that there is a with Let

If then hence by definition of P there exists So and the proof is complete.

Suppose We will show that this is not possible. In this case there is an with Since

there exists a with Moreover, by definition we have for all take with

then in particular in contradiction with

By checking the proof of this theorem, one observes that it would be valid for any other transfer procedure as long as the resulting TU-games for a specific NTU-game depend continuously on the weight vector. One example of such a procedure is the one underlying the definition of the Harsanyi NTU-value (Harsanyi, 1959, 1963).

In order to define this procedure, let V be an NTU-game and let be a weight vector. We first assume that has only positive coordinates. We recursively define the dividends as follows:

for every and for S with more than one player:

where for every is given by

Observe that for all so that is the maximal element in the direction reciprocal to on the boundary of For coalitions of three and more players the idea is similar, but the starting point is, generally speaking, no longer the origin. Now define, for each coalition S, the vector and let be defined by

Note that there is an asymmetry in this definition between the grand coalition and the smaller coalitions. We comment on this below.

That is actually a TU-game follows immediately since by definition. For positive the game depends continuously on We still have to define the games for the case where has one or more coordinates equal to zero. The following lemma shows that this can be done by taking limits.

Lemma 9.4 Let and let V be an NTU-game. Then there is a TU-game such that for any sequence in

with

Proof. Let in with Then obviously converges: call the limit For and let

be the vector as defined above associated with Since is in the compact set V(S) for every we have

where Since converges to for some the proof is complete by defining

In view of this lemma we can define a collection of Harsanyi transfer games for every The extension of a TU-solution can be defined completely analogously to the case of the Shapley transfer procedure. Since the Harsanyi transfer games again depend continuously on the existence result, Theorem 9.3, also holds for this case:

Theorem 9.5 Under the assumptions in Theorem 9.3, where is the extension of according to the Harsanyi procedure.

One application is to take the Shapley value as a TU-solution: under the Harsanyi transfer procedure this results in the so-called Harsanyi NTU-value. Specifically, if in the definition of the TU-game above we would take then would be the Shapley value of the resulting TU-game. If, additionally, maximizes on V(N), then is said to be in the Harsanyi solution. Hence, such points result from applying the Harsanyi transfer procedure to the Shapley TU-solution. See Hart (1985) for a characterization.
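For TU-games the dividend recursion reduces to the familiar form Delta(S) = v(S) minus the sum of the dividends of all proper nonempty subcoalitions, and the Shapley value pays each player the equally split dividends of the coalitions containing that player. A self-contained sketch (the three-player game is an illustrative choice):

```python
from itertools import combinations

def proper_nonempty_subsets(S):
    s = sorted(S)
    return [frozenset(c) for r in range(1, len(s))
            for c in combinations(s, r)]

def dividends(v):
    """Harsanyi dividends of a TU-game v, given as {frozenset: worth}."""
    delta = {}
    for S in sorted(v, key=len):  # subsets are processed before supersets
        delta[S] = v[S] - sum(delta[T] for T in proper_nonempty_subsets(S))
    return delta

def shapley_value(v, players):
    """Each player receives the equally split dividend of every coalition
    containing that player."""
    delta = dividends(v)
    return {i: sum(d / len(S) for S, d in delta.items() if i in S)
            for i in players}

players = [1, 2, 3]
v = {frozenset(c): 0.0
     for r in range(1, 4) for c in combinations(players, r)}
v[frozenset({1, 2})] = 1.0
v[frozenset({1, 2, 3})] = 1.0
print(shapley_value(v, players))  # {1: 0.5, 2: 0.5, 3: 0.0}
```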

9.4 A Characterization

In this section we present a general characterization of NTU-solutions that are obtained by extending regular TU-solutions through the Shapley transfer procedure.

Let be an NTU-solution defined on a class of NTU-games. We list the following possible properties of

Property 9.6 is Pareto optimal if for every

This property needs no further explanation. In the following property, for a game V and a positive vector denotes the game defined by for every nonempty

coalition

Property 9.7 is scale covariant if for every every and every with we have: if then

One possible interpretation of this property is that the players have cardinal utility functions that are unique only up to a positive affine transformation.

For the next two properties and the ensuing characterization result we need to introduce some additional notation. For a game V, a nonempty coalition S and a nonnegative vector λ, denote by H(V, S, λ) the halfspace of all points on or below the hyperplane with normal λ supporting V(S) from above, and by its boundary. Thus, for any point of tangency

and

For a vector λ, an NTU-game V is called a λ-hyperplane game if for every nonempty coalition S the set PO(V(S)) coincides with the nonnegative part of a hyperplane with normal λ. Note that every TU-game is a hyperplane game.


SHAPLEY TRANSFER PROCEDURE 193

Property 9.8 is expansion independent if for every and there is a with such that, for and for all satisfying for all S with we have:

This property captures the essence of the Shapley transfer procedure. If a game is extended by allowing sidepayments that preserve the utility comparison ratios between the players at a certain solution outcome, then that outcome should still belong to the solution of the extended game. The property, naturally, requires this to hold also for games in between the original game and the game extended by sidepayments, although this is not needed for the characterization theorem below. If the utility comparison ratios are not uniquely determined (which is the case if there is no unique supporting hyperplane at the solution outcome), then the requirement applies to at least one set of ratios. A proviso is made for the zero components of the vector in the formulation of Property 9.8; by property (N3) of an NTU-game it follows that players outside the set L must have zero payoff, and then Property 9.8 implies that these players will stay at zero in the solution.

The expansion independence property was first introduced by Thomson (1981) in the context of pure bargaining problems.

The fourth property is in a sense the ‘dual’ of Property 9.8.

Property 9.9 is contraction independent if for every hyperplane game every and every with such that, for the set supports from above for every coalition S with we have:

This property is a variant of the well-known ‘independence of irrelevant alternatives’ condition proposed by Nash (1950).

The characterization result is as follows.

Theorem 9.10 Let be an NTU-solution defined on a class of NTU-games containing all hyperplane games, and let be a regular TU-solution on The following two statements are equivalent:

(i) satisfies Properties 9.6–9.9 and for every

(ii) for every


Proof. In this proof we use the following fact, the proof of which is left to the reader.

Fact. Let and let with Then

Proof of (ii) ⇒ (i). Assume that (ii) holds. To prove (i), we have to show that satisfies Properties 9.6–9.9.

Pareto optimality of follows by definition. For scale covariance, let and with and Let and with Define by for every Then so by Fact (i). Hence, This implies scale covariance of

For expansion independence, take and Let with Then Let L and as in the definition of Property 9.8. Then by (N3) and Hence, which proves expansion

independence. Finally, for contraction independence, let be a hyperplane game. Let hence there is a with Observe that for we have otherwise there would be a in V(N) with contradicting Then for as in Property 9.9 it follows that Together with this implies

Proof of (i) ⇒ (ii). Assume that (i) holds. Let and We prove that and, thus, that By Pareto optimality and expansion independence of we can take and L as in Property 9.8. Define by if and if For as in Property 9.8 take a game. Property 9.8, expansion independence, implies By scale covariance, where for every Since is a TU-game, this implies Hence, by scale covariance, and by Property 4:

For the converse implication, let We show that which completes the proof of the theorem. Let with By Lemma 9.1, and since we have Define by if and if By scale covariance and noting that we obtain Now the game is a game and V satisfies the requirements for

with respect to this hyperplane game as in Property 9.9, contraction independence. Hence, this property implies

9.5 Applications

The Shapley transfer procedure and the corresponding results on existence and characterization can be applied to most known solutions for TU-games. Here, we consider applications to the Shapley value, the core, the nucleolus, and the τ-value.

First we state a lemma characterizing the transfer games associated with TU-games. The proof is straightforward and left to the reader. For a vector and a coalition denote

Lemma 9.11 Let and Then for every coalition If is efficient in i.e., and then for every for which

9.5.1 The Shapley Value

For a TU-game (N, v) the Shapley value (Shapley, 1953) is defined by

Φ_i(v) = Σ_{S ⊆ N\{i}} ( |S|! (|N| − |S| − 1)! / |N|! ) (v(S ∪ {i}) − v(S))

for every i ∈ N, where | · | denotes the cardinality of a finite set. The Shapley TU-solution assigns the set {Φ(v)} to a game (N, v).

With some abuse of notation we use Φ for the Shapley solution and omit the set-brackets. An alternative definition using dividends (cf. Section 3) is also possible. Within our framework, the Shapley value is well defined as long as Call an NTU-game V monotonic if whenever Hence, a TU-game is monotonic if whenever Denote the corresponding classes of games by and Then the Shapley value is well defined on

and It is also straightforward to check that is a regular TU-solution on Moreover, the inclusion in Lemma 9.1 turns out to be an equality, as the following lemma shows.
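The coalition-weighting formula above can be evaluated directly by enumerating coalitions. The sketch below is a minimal Python implementation of the standard formula; the three-person glove-type game used to exercise it is a hypothetical illustration, not one of the games in the text.

```python
from itertools import combinations
from math import factorial

def shapley_value(players, v):
    """Shapley value via the formula
    phi_i = sum over S not containing i of
            |S|!(n-|S|-1)!/n! * (v(S + i) - v(S))."""
    n = len(players)
    phi = {}
    for i in players:
        others = [j for j in players if j != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (v(frozenset(S) | {i}) - v(frozenset(S)))
        phi[i] = total
    return phi

# Hypothetical glove-type game: player 1 needs at least one of 2, 3.
def v(S):
    S = frozenset(S)
    if 1 in S and len(S) >= 2:
        return 1.0
    return 0.0

print(shapley_value([1, 2, 3], v))  # player 1 gets 2/3, players 2 and 3 get 1/6
```

By efficiency the three values sum to v(N) = 1, so the output illustrates the Pareto optimality property used throughout the characterization.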

Lemma 9.12 Let Then

Proof. Let and with such that In view of Lemma 9.1 it is sufficient to prove that


Let Then, by Lemma 9.11, for every coalition S with and for all By monotonicity of we have for every coalition S with Suppose for some S, then by definition of the Shapley value there must be an with and hence a contradiction. Hence for every coalition S with

We claim that for every coalition S with Suppose not, then there is a with Take arbitrary. By monotonicity, hence Therefore, a contradiction since by definition. This proves our claim.

Since whenever and whenever we have Also, since for all Hence so by homogeneity of we have

Application of the results in the preceding sections now yields:

Corollary 9.13 for every Moreover, is the unique NTU-solution on that satisfies Pareto optimality, scale covariance, expansion and contraction independence and coincides with the Shapley value on

Proof. Theorem 9.3 implies nonemptiness of on The second part follows from Theorem 9.10 and Lemma 9.12.

On the subclass of pure bargaining games coincides with the Nash bargaining solution: see the last part of Section 2.

An earlier characterization of (also called the Shapley NTU-value) was given by Aumann (1985). This characterization presumes existence and makes use of specific properties of the Shapley value; it also uses the standard concept of unbounded TU-games.

9.5.2 The Core

The core of a TU-game (N, v) is defined by

C(v) = { x : Σ_{i∈N} x_i = v(N) and Σ_{i∈S} x_i ≥ v(S) for every coalition S }.


More generally, the core of an NTU-game V is defined by

where ‘int’ denotes the topological interior. Nonemptiness of the core is closely connected to the idea of balancedness. A collection of nonnegative numbers {λ_S : S a coalition} is called balanced if Σ_{S: i∈S} λ_S = 1 for every player i. An NTU-game V is called balanced if for every balanced collection we have where is constructed from V(S) by adding zeros for players outside S. It is well known (Bondareva, 1963; Shapley, 1967) that a TU-game has a nonempty core if and only if it is balanced. For an NTU-game, balancedness (and even a weaker balancedness condition, cf. Scarf, 1967) implies nonemptiness of the core, but not the other way around.

Let denote the class of balanced TU-games, i.e., TU-games with nonempty cores. The TU-solution c is regular, as is easy to verify. Let denote the class of balanced NTU-games. By slightly adapting an argument of Qin (1994)¹ it can be shown that if and only if In words, an NTU-game is balanced if and only if all associated transfer games are balanced.

Corollary 9.14 for every Moreover, is the unique NTU-solution on that satisfies Pareto optimality, scale covariance, expansion and contraction independence and coincides with on

Proof. Theorem 9.3 implies nonemptiness of on The second part follows from Theorem 9.10.

In this case, applying the transfer procedure on TU-games may add solution outcomes, as the following example shows.

Example 9.15 Consider the four-person TU-game with player set N = {1, 2, 3, 4} and with if |S| = 3 or and otherwise. This is a monotonic game with core equal to Let then is equal to except that Now, for hence but

¹ Attributed to Shapley.


Example 9.15 also implies that the core C(V) of an NTU-game V does not have to contain Also the converse is not true:

Example 9.16 Consider the three-person NTU-game V with player set N = {1, 2, 3} and with

and V(S) = {0} otherwise. Note that The only possible transfer game through which we could obtain would be one corresponding to (or a positive multiple of that vector). For this transfer game we have and so that hence

Example 9.16 still works if we replace the game V by with as the only difference that now In that case, however, the resulting (1, 1, 1)-transfer game has an empty core and therefore The latter fact follows also directly by considering the collection and otherwise. This shows that if an NTU-game has a nonempty core, then this property is not necessarily inherited by the associated transfer games.²

9.5.3 The Nucleolus

The nucleolus (Schmeidler, 1969) for a TU-game (N, v) is defined as follows. For every Pareto optimal payoff vector x arrange the so-called excesses

v(S) − Σ_{i∈S} x_i, S a coalition,

in a nonincreasing order. Then compute the x for which the thus associated vector of excesses is lexicographically minimal: the resulting payoff vector is the nucleolus of the game. If the game has a nonempty core, then the nucleolus is in the core.

The nucleolus on is a regular TU-solution, so Theorems 9.3 and 9.10 apply again. Consequently, denoting the nucleolus by we have:

Corollary 9.17 for every Moreover, is the unique NTU-solution on that satisfies Pareto optimality, scale covariance, expansion and contraction independence and coincides with on

Just as was the case with the Shapley value, also the transfer solution associated with the nucleolus coincides with the Nash bargaining solution

2Qin (1994) also studies an extension of the core to NTU-games by applying theconcept of transfer games. These transfer games are ( ) hyperplane games ratherthan TU-games, and his approach also differs from ours since in the transfer gamethe feasible set of a coalition of players with zero weights is unbounded. In particular,such a game will always have an empty core.


on the subclass of pure bargaining games (see the last part of Section 2). Like in the case of the core, the transfer procedure may add outcomes

to TU-games, as is illustrated by the next example.

Example 9.18 Consider the four-person TU-game with N = {1, 2, 3, 4}, and otherwise. Then as is easily derived by symmetry. Take then is equal to except that now By symmetry and the fact that the nucleolus is in the core, Hence so that

Observe that the game in this example is not balanced. It is an open question to find an example with a balanced TU-game.

9.5.4 The τ-value

The τ-value for TU-games (Tijs, 1981; Borm et al., 1992) is defined as follows. For a TU-game (N, v) define the ‘utopia vector’ M(v) by M_i(v) = v(N) − v(N \ {i}) and the ‘minimal right vector’ m(v) by m_i(v) = max { v(S) − Σ_{j∈S, j≠i} M_j(v) : S a coalition with i ∈ S } for every i ∈ N. Then the τ-value is the unique Pareto optimal point on the line segment with m(v) and M(v) as endpoints, if such a point exists and if m(v) ≤ M(v). Games for which these two conditions are satisfied are called quasi-balanced. It can be shown that every balanced game is quasi-balanced. By we denote the class of quasi-balanced TU-games.

We will show that transfer games associated with quasi-balanced TU-games are again quasi-balanced. First, we derive some inequalities concerning the utopia and minimal right vectors of transfer games.

Lemma 9.19 Let be a TU-game and Then

for all

and

for all

Proof. Let Then, by Lemma 9.11,



and

Here, the penultimate inequality follows from the first part of the proof.

Lemma 9.20 Let and Then

Proof. Let Then by Lemma 9.19 and the fact that

and

hence

We next show that the Shapley transfer procedure does not add solution outcomes to TU-games. Cf. Lemma 9.12, where we prove this for the Shapley value.

Lemma 9.21 Let Then

Proof. Let and with such that In view of Lemma 9.1 it is sufficient to prove that

Let then, by Lemma 9.11, for all Hence,

Let such that Case (a): Then

so


and

where the second inequality follows from Lemma 9.19, the second equality from Lemma 9.11, and the first equality from (9.2). Hence, all inequalities in (9.3) are equalities. In particular, is efficient in so

So by (9.1), and for all so for these For hence Altogether,

Case (b): Then for by (9.1), hence by Lemma 9.11, so that Thus,

Now let First suppose |M| > 1. Then and Hence so that This concludes the proof for |M| > 1. If then by (9.4) and efficiency of the τ-value, Because of (9.4) and efficiency, we have hence This concludes the proof of the lemma.

Denote by the class of NTU-games such that By Lemma 9.20 this class contains Moreover, it contains the class of all balanced NTU-games, since every transfer game associated with a balanced NTU-game is balanced and therefore also quasi-balanced.

We have:

Corollary 9.22 for every Moreover, is the unique NTU-solution on that satisfies Pareto optimality, scale covariance, expansion and contraction independence and coincides with the τ-value on

Proof. Theorem 9.3 implies nonemptiness of on The second part follows from Theorem 9.10 and Lemma 9.21.


Since on where as before is the class of pure bargaining games, the τ-value coincides with the equal-split solution, it follows again that the transfer solution coincides with the Nash bargaining solution on the class of pure bargaining games.

9.6 Concluding Remarks

The main objective of this contribution was to provide existence and characterization of NTU-solutions obtained from TU-solutions by the Shapley transfer procedure within one and the same framework. The price paid for this is the allowance of zero weights and the associated technical problems. The benefit is that the results can be applied to many TU-solutions: see Corollaries 9.13–9.22.

The approach followed above can be modified in many ways. If existence is less of an issue then we may restrict attention to only positive weights and consider other classes of games: this is the approach usually adopted in the literature. Also the transfer procedure may be varied: cf. the Harsanyi procedure as discussed in Section 3, or the procedure used to extend the so-called consistent value (which coincides with the Shapley value on TU-games) to NTU-games (see Maschler and Owen, 1992).

References

Aumann, R.J. (1985): “An axiomatization of the non-transferable utility value,” Econometrica, 53, 599–612.

Bondareva, O.N. (1963): “Some applications of linear programming methods to the theory of cooperative games,” Problemy Kibernetiki, 10, 119–139.

Borm, P., H. Keiding, R.P. McLean, S. Oortwijn, and S.H. Tijs (1992): “The Compromise Value for NTU-Games,” International Journal of Game Theory, 21, 175–189.

Harsanyi, J.C. (1959): “A bargaining model for the cooperative n-person game,” Annals of Mathematics Studies, 40, Princeton University Press, Princeton, 325–355.

Harsanyi, J.C. (1963): “A simplified bargaining model for the n-person cooperative game,” International Economic Review, 4, 194–220.


Hart, S. (1985): “An axiomatization of Harsanyi’s nontransferable utility solution,” Econometrica, 53, 1295–1313.

Maschler, M., and G. Owen (1992): “The consistent value for games without side payments,” in: R. Selten (ed.), Rational Interaction, 5–12. New York: Springer Verlag.

Nash, J.F. (1950): “The bargaining problem,” Econometrica, 18, 155–162.

Qin, C.-Z. (1994): “The inner core of an n-person game,” Games and Economic Behavior, 6, 431–444.

Scarf, H. (1967): “The core of an n-person game,” Econometrica, 35, 50–67.

Schmeidler, D. (1969): “The nucleolus of a characteristic function game,” SIAM Journal on Applied Mathematics, 17, 1163–1170.

Shapley, L.S. (1953): “A value for n-person games,” in: H. Kuhn, A.W. Tucker (eds.), Contributions to the Theory of Games, Princeton University Press, Princeton, 307–317.

Shapley, L.S. (1967): “On balanced sets and cores,” Naval ResearchLogistics Quarterly, 14, 453–460.

Shapley, L.S. (1969): “Utility comparison and the theory of games,” in:G.Th. Guilbaud (ed.), La Décision. Editions du CNRS, Paris.

Shapley, L.S. (1984): “Lecture notes on the inner core,” Department of Mathematics, University of California, Los Angeles.

Thomson, W. (1981): “Independence of irrelevant expansions,” International Journal of Game Theory, 10, 107–114.

Tijs, S.H. (1981): “Bounds for the core and the τ-value,” in: O. Moeschlin and D. Pallaschke (eds.), Game Theory and Mathematical Economics, 123–132. Amsterdam: North-Holland.


Chapter 10

The Nucleolus as Equilibrium Price

BY JOS POTTERS, HANS REIJNIERSE, AND ANITA VAN GELLEKOM

10.1 Introduction

The exchange economies studied in this chapter find their origins in Debreu (1959). They have a finite set of agents and a finite set of indivisible goods. Besides, there is an infinitely divisible good referred to as ‘money’. It can be used to ‘transfer utility’ from one agent to another agent: the marginal utility of money depends neither on the agent nor on his wealth.

We introduce the notions of a stable equilibrium (with respect to a price vector) and a regular price. Stable equilibria are robust in the sense that they are not affected by any increase of the money supply. A price vector is regular if it can be considered to be a shadow-price of the linear program corresponding to the economy. We show that price vectors that support the stability of an equilibrium are regular. Furthermore, conditions on the economy are provided such that reallocations maximizing so-called social welfare can be extended to a stable equilibrium by any regular price.

Economies of the considered type do not necessarily have equilibria, but regular prices can always be found. A particular one will be defined by means of the nucleolus. The existence of algorithms to calculate the nucleolus facilitates the task of finding a price vector.


P. Borm and H. Peters (eds.), Chapters in Game Theory, 205–222. © 2002 Kluwer Academic Publishers. Printed in the Netherlands.


The nucleolus was introduced by Schmeidler (1969), primarily as a tool to prove the nonemptiness of the bargaining set (Aumann and Maschler, 1964). Gradually, the nucleolus became a one-point solution rule in its own right for TU-games with nonempty imputation set. It shares with the Shapley value the properties of efficiency, dummy player, symmetry and anonymity but does not satisfy some other properties of the Shapley value, like additivity (Shapley, 1953) and strong monotonicity (Young, 1985). On the other hand, the nucleolus is a core allocation whenever the core is nonempty, and it satisfies the reduced game property in the sense of Snijders (1992).

As a solution rule the nucleolus is an expression of one-sided egalitarianism at the coalition level. It is an attempt to treat all coalitions equally in the sense that they exceed or fall short of their coalition values by the same amount. When this is not possible, it exhibits the tendency to help the ‘poorer’ coalitions (coalitions with a high excess).

When computing the nucleolus, it turns out that only a few coalition values have an influence on the position of the nucleolus: apart from the grand coalition there is a collection of at most coalitions that determine the nucleolus (Reijnierse and Potters, 1998). Here, represents the number of players.

In the early nineties (cf., among other papers, Potters and Tijs, 1992; Maschler et al., 1992) several other types of (pre-)nucleolus concepts were introduced. One of these is the The difference with the standard pre-nucleolus is that only the excesses of coalitions in are taken into account. A consequence of the restriction to coalitions in is that the can be empty or can consist of more than one point. By applying a result of Derks and Reijnierse (1998), we provide necessary and sufficient conditions for the to be a singleton.

In Potters and Reijnierse (1998) the idea of the is used to simplify the computation of the nucleolus. For certain classes of TU-games one can find relatively small collections such that the coincides with the nucleolus. When the size of is much smaller than (e.g., polynomial in ) the computation of the has a lower complexity than the computation of the nucleolus. E.g., for assignment games the one-coalitions and the mixed two-coalitions ‘determine the nucleolus’.

The present contribution shows another application of the It will be shown that in economies with indivisible goods, money and (what are called) quasi-linear utility functions a (potential) equilibrium price can be computed by computing a in an ‘associated’ TU-game. The collection will be polynomial in the number of agents, but exponential in the number of goods.

With such an economy a partial TU-game is associated with the following features:

the player set equals

the collection of participating coalitions is

is defined by: is the utility of agent for bundle C,

is taken sufficiently large to guarantee the nonemptiness of the of

If is the of the part can be understood as a price vector. If the exchange economy has a (stable) price equilibrium, the vector is one of the possible equilibrium price vectors.

The chapter consists of the following sections. The preliminaries introduce the type of exchange economies to be studied and repeat the basic definitions in this area; they also contain a short recapitulation of the main concepts from the theory of TU-games.

Section 10.3 provides two properties economies can have. One of them, the SW-condition, is necessary for the existence of stable price equilibria; together they are sufficient. This part is a generalization of the results of Bikhchandani and Mamer (1997). It will be shown what regular price vectors, the potential equilibrium price vectors, look like. Section 10.4 gives the proofs of the theorems of Section 10.3.

Section 10.5 considers the partial TU-game and proves that is a regular price vector if is the of If is defined to be the collection of coalitions in with maximal excess, the SW-condition holds if and only if contains a partition of

10.2 Preliminaries

This section consists of two parts. The first part introduces the type of exchange economies that will be considered. The second part recalls the definitions of some concepts of the theory of TU-games.


10.2.1 Economies with Indivisible Goods and Money

The economies we consider in this chapter have the following features:

There is a finite set of agents N, and

There is a finite set of indivisible goods and

Each agent has an initial endowment denotes the set of goods initially held by agent and is the amount of money agent has in the beginning. We assume that is a distribution of i.e., whenever and We allow, however, that for some agents

Each agent has a preference relation on the set of commodity bundles with and We assume that can be represented by a utility function of the form

(separability of money), whenever and (monotonicity),

Because of the last assumption, an economy is determined by N, and

Comment: Separability of money is the most restrictive condition. In fact, it induces four properties, namely:

separability per se, saying that for some function defined on

strict monotonicity in money, saying that is strictly monotonic,

the possibility of interpersonal comparison of utility, which is expressed by and

the property that money can be used as a physical means to transfer utility because the marginal utility for money is constant (and, by scaling, set to be 1).


By trading, a coalition can realize any redistribution of the goods in and any redistribution of the money supply We call such a twofold redistribution a So, a must satisfy:

and

An is a price equilibrium if there exists a price vector with the following properties:

(i) for all (budget constraints);

(ii) if for some and then (maximality conditions).

Here, is an abbreviation of By the strict monotonicity of the utility functions the maximality conditions imply that the budget constraints are, in fact, equalities. Furthermore, by the monotonicity of the reservation prices an equilibrium price is nonnegative.
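The two defining conditions can be checked mechanically for a small economy. The sketch below assumes a hypothetical two-agent, two-good economy with quasi-linear utilities u_i(C) + money; all names and data are ours, not from the chapter:

```python
from itertools import chain, combinations

def bundles(goods):
    """All subsets of the set of indivisible goods."""
    return [frozenset(c) for c in chain.from_iterable(
        combinations(goods, r) for r in range(len(goods) + 1))]

def is_price_equilibrium(goods, agents, u, w, m0, alloc, money, p):
    """Check the defining conditions for price vector p:
    (i) budget: the assigned bundle is affordable from p(w_i) + m0_i;
    (ii) maximality: no affordable bundle gives strictly higher
         quasi-linear utility u_i(C) + money left over."""
    price = lambda C: sum(p[g] for g in C)
    for i in agents:
        budget = price(w[i]) + m0[i]
        if price(alloc[i]) > budget + 1e-9:          # budget constraint
            return False
        attained = u[i][alloc[i]] + money[i]
        for C in bundles(goods):
            if price(C) <= budget + 1e-9 and \
               u[i][C] + budget - price(C) > attained + 1e-9:
                return False                          # maximality violated
    return True

# Hypothetical data: each agent mildly wants the other's good.
u = {1: {frozenset(): 0, frozenset({"a"}): 4,
         frozenset({"b"}): 1, frozenset({"a", "b"}): 5},
     2: {frozenset(): 0, frozenset({"a"}): 1,
         frozenset({"b"}): 4, frozenset({"a", "b"}): 5}}
w = {1: frozenset({"a"}), 2: frozenset({"b"})}
m0 = {1: 5, 2: 5}
alloc, money = w, {1: 5, 2: 5}          # everybody keeps their endowment
p = {"a": 2, "b": 2}
print(is_price_equilibrium(["a", "b"], [1, 2], u, w, m0, alloc, money, p))
```

At prices (2, 2) keeping one's own good is optimal for both agents, so the check succeeds; raising both prices to 10 makes the empty bundle (keeping all money) strictly better and the same allocation fails the maximality condition.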

10.2.2 Preliminaries about TU-Games

A transferable utility game or TU-game is a pair (N, v) consisting of a finite player set N and a map v with v(∅) = 0. In a partial TU-game the map v is only defined on a collection of coalitions containing N. The of a partial game consists of all vectors with and for all For partial TU-games the pre-imputation set consists of all vectors x with x(N) = v(N). The excess of a pre-imputation x with respect to a coalition S and the partial game is e(S, x) = v(S) − x(S).

For we define Let and let be the map that orders the coordinates of each vector in a weakly decreasing order. Let be the lexicographic order¹ on The of consists of all pre-imputations that are lexicographically optimal:

¹ I.e., if or if, being the first coordinate at which and differ.

for all


In Maschler et al. (1992) also the nucleolus is defined in a similar way, with the exception that only allocations in a closed subset of the pre-imputation set are possible candidates. Unlike the regular nucleolus, the can be empty or consist of more than one point. However, by applying a result due to Derks and Reijnierse (1998) we will prove that the consists of one point for all partial games if and only if:

is complete: for every the equation has a solution,

is balanced: the equation has a positive solution.

Here, denotes the indicator vector of coalition S. Maschler et al. (1992) proved that is nonempty when is a compact subset of and that all excess-functions are constant on

10.3 Stable Equilibria

Equilibria can arise by a lack of money. This will be illustrated in Example 10.4. Such equilibria are unstable in the sense that a (sufficiently high) increase of the initial money supply upsets the equilibrium character of the reallocation. We are interested in equilibria for which this does not occur. Let be an exchange economy with indivisible goods and money.

An N-reallocation is a stable price equilibrium of if there exists a price vector such that for every the reallocation is a price equilibrium with equilibrium price if the initial money supply becomes

So, a price equilibrium is stable if it remains a price equilibrium when the initial endowment of money is increased. If an equilibrium is stable, not every equilibrium price necessarily supports its stability, as the following example shows:

Example 10.1 Let N = {1, 2} and Let and The reservation values are given by:


If we set the price to be both agents would like to have both goods (yielding profits of value (10 − 7) − 2 = 1), but their budgets are not sufficient. Therefore, they just keep their own goods. Hence, the initial endowment is an equilibrium. Price fails to be an equilibrium price if we increase the money endowments to (3, 3).

Another price leading to the same equilibrium is This price remains an equilibrium price at any increase of the money supply. Therefore, is a stable equilibrium. We say that supports the stability of the equilibrium (and does not).

This section provides necessary conditions and sufficient conditions for the existence of stable price equilibria. As will be proved, the existence of stable price equilibria requires two conditions:

(1) a condition on the reservation prices, and

(2) a condition on the money supply.

Because of the separability of the utility functions these conditions can to a large extent be handled separately, as we shall see. The first condition does not depend on the initial endowments (we call it the social welfare condition or SW-condition); the second condition (this is called the abundance condition or AB-condition, for short) depends on initial endowments. To formulate these conditions we need the following concepts:

An maximizes social welfare or is efficient if is maximal among all The maximal social welfare is denoted by

A stochastic redistribution consists of a set of numbers one for each agent in N and each subset C of with the property that for all and for every commodity So, a stochastic redistribution is a nonnegative solution of the vector equation:

Here and are the characteristic vectors of and and denotes the direct sum:


The numbers can be understood as a lottery for agent The number is the chance that agent obtains bundle C. The second condition says that the probability that object will be assigned is also one. Note that the integer-valued stochastic redistributions are exactly the N-redistributions: each agent obtains with probability one a bundle and is a redistribution. Expected social welfare realized by the stochastic redistribution

is, by definition,

Now we can formulate the SW-condition:

An economy satisfies the SW-condition if no stochastic redistribution has a higher expected social welfare than

As maximal expected social welfare is determined by the following linear program (LP):

maximize: subject to:

for and for all

for all

the SW-condition says that (LP) has an integer-valued optimal solution. Note that the SW-condition is not dependent on the initial endowments.

The AB-condition depends on initial endowments. If, e.g., the initial distribution of the indivisible goods maximizes social welfare, it is even an empty condition.

An economy satisfies the AB-condition if there is an N-redistribution that maximizes social welfare and satisfies the inequalities

The AB-condition stipulates that each agent has enough money to sell for the price (the lowest price for which he is willing to sell ) and to buy for the price (the highest price he is willing to pay for ). This makes clear that the AB-condition might be too restrictive. If the price for is higher than or the price of is lower than a smaller amount of money is sufficient. We will frequently use the phrase ‘satisfies the AB-condition’.
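Over the integer-valued (deterministic) redistributions, maximal social welfare can be found by brute force; the SW-condition then asks that the LP relaxation attain no more than this value (checking that would require an LP solver and is omitted here). The economy below is a hypothetical illustration:

```python
from itertools import product

def max_social_welfare(agents, goods, u):
    """Brute-force maximum of social welfare over deterministic
    N-redistributions: each indivisible good goes to exactly one
    agent; welfare is the sum of the agents' bundle utilities."""
    best, best_bundles = float("-inf"), None
    for assignment in product(agents, repeat=len(goods)):
        bundles = {i: frozenset(g for g, a in zip(goods, assignment) if a == i)
                   for i in agents}
        welfare = sum(u[i](bundles[i]) for i in agents)
        if welfare > best:
            best, best_bundles = welfare, bundles
    return best, best_bundles

# Hypothetical two-agent, two-good economy; utilities are additive here.
u = {
    1: lambda C: 4 * ("a" in C) + 1 * ("b" in C),
    2: lambda C: 1 * ("a" in C) + 4 * ("b" in C),
}
welfare, assignment = max_social_welfare([1, 2], ["a", "b"], u)
print(welfare)  # → 8: agent 1 gets good "a", agent 2 gets good "b"
```

With additive utilities as in this sketch the LP relaxation is integral, so such an economy satisfies the SW-condition; non-additive utilities can break this.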


Note that the AB-condition is much weaker than the abundance conditions that are found in the literature, namely:

or even:

see Beviá et al. (1999) and Bikhchandani and Mamer (1997). These conditions are, in our opinion, unreasonably restrictive: every agent must be able to buy all indivisible goods for the highest price he is willing to pay.

A vector of prices is called a regular price vector if there is a vector of agent values such that the pair is an optimal solution of the dual linear program (LP)*:

minimize: the total of the agent values and the prices, subject to: the dual feasibility inequalities for all agents and all bundles.
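Dualizing the program (LP) above (a reconstruction under the same assumed notation, with y_i the dual variable of agent i's lottery constraint and p_k that of good k), (LP)* plausibly reads:

```latex
\begin{aligned}
\text{minimize}\quad & \sum_{i \in N} y_i + \sum_{k \in \Omega} p_k\\
\text{subject to}\quad & y_i + \sum_{k \in C} p_k \ \ge\ u_i(C) \quad \text{for all } i \in N \text{ and all } C \subseteq \Omega .
\end{aligned}
```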

Let us formulate the two theorems concerning the existence of price equilibria. The proofs will be postponed until the next section.

Theorem 10.2 [cf. Bikhchandani and Mamer (1997)] An exchange economy with quasi-linear utility functions, indivisible goods and money has a price equilibrium if the SW-condition and the AB-condition are satisfied.

In fact, the proof of Theorem 10.2 shows that every redistribution maximizing (expected) social welfare for which the AB-condition holds can be extended to a stable price equilibrium, and that the set of prices supporting its stability contains all regular price vectors.

In Example 10.4, we shall see that an economy can have equilibria that do not maximize social welfare and that equilibrium prices need not be regular.

In the following theorem we prove that the SW-condition is a necessary condition for the existence of stable price equilibria.

Theorem 10.3 [cf. Bikhchandani and Mamer (1997)] If an economy with quasi-linear utility functions, indivisible goods and money has a price equilibrium, then the SW-condition holds. Every stable equilibrium allocation maximizes (expected) social welfare and every equilibrium price supporting its stability is regular.



Combining the two theorems we see that, if the AB- and SW-conditions are satisfied, the set of prices supporting some stable price equilibrium consists of all regular prices. The following simple examples show what can happen in economies with indivisibilities.

Example 10.4 Let N = {1, 2} and let there be two indivisible goods, each agent initially owning one of them. The reservation values are additive.

It is easy to see that social welfare is optimized if the agents switch their endowments. A price vector supporting this exchange obeys

By solving the linear program (LP)*, one can verify that the set of regular price vectors is given by these inequalities.

To support the redistribution by a regular price vector, player 1 must have enough money left after payment. The resulting amount lies between two bounds, so lack of money may block the existence of regular equilibrium prices altogether or may block only some regular equilibrium prices, depending on the initial amount of money.

Let us consider a case with a small initial amount of money and a suitable price vector. Then the resulting reallocation is a price equilibrium that does not maximize social welfare, and the equilibrium price is not regular. The better assignment cannot be realized because agent 1 does not have enough money to buy the good he values more.

The next example originates from Beviá et al. (1999). They show that, even if the money supply is sufficiently large, the reservation values may exclude the existence of equilibrium prices altogether.

Example 10.5 Let N = {1, 2, 3} and let there be three indivisible goods. The reservation values are given in the table below. The initial endowments are as specified.



The authors show that the unique social optimum is not supported by regular equilibrium prices. The reason is that a stochastic redistribution has a higher value: if each of the three agents obtains one of two bundles with equal chances, the total expected utility is 24.5, higher than the social optimum 24. And, indeed, if we increase one of the reservation values, e.g. to 8.5, a price vector supports the socially optimal redistribution, provided there is enough money. Note that, in the original economy, the socially optimal redistribution is a price equilibrium if the prices are (7, 5, 8) and the money supply is suitably restricted.

These examples show that, if the money supply is sufficiently restrictive, non-stable equilibria or equilibrium prices not supporting stability can exist. We end this section with a scheme giving an overview of these phenomena for an equilibrium with a given equilibrium price.

The equilibrium price supports the stability of the equilibrium (see the next section, Comments (iii)).



10.4 The Existence of Price Equilibria: Necessary and Sufficient Conditions

This section provides the proofs of the theorems in the previous one. We start by giving a proof of the fact that the SW-condition and the AB-condition guarantee the existence of price equilibria (Theorem 10.2).

Proof of Theorem 10.2. Let any redistribution of the indivisible goods be given that maximizes social welfare and has the property demanded by the AB-condition, and let any optimal solution of the dual linear program (LP)* from the previous section be given. Define the adjusted amount of money for every agent. In order to show that this yields a price equilibrium, the nonnegativity of the adjusted money and the maximality conditions have to be checked.

By the SW-condition, the integer-valued stochastic reallocation that puts probability one on every agent's assigned bundle, and zero elsewhere, is an optimal solution of (LP). By complementary slackness, the dual constraints corresponding to the assigned bundles are tight. Rearranging terms, we find an expression that is nonnegative by the AB-condition. So, the adjusted amounts of money are nonnegative.

If some agent could afford a commodity bundle C that he prefers, then a chain of (in)equalities yields a contradiction: the first inequality follows from the feasibility condition of (LP)*; as this inequality is an equality for the assigned bundle, we find the third relation; and the last equality follows from the definition of the adjusted money. Hence, the N-reallocation is a price equilibrium with the constructed equilibrium price.
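The complementary-slackness relation used in this proof can be sketched, in the same assumed notation (x an integer-valued optimal solution of (LP), (y, p) an optimal solution of (LP)*), as:

```latex
x_{iC} > 0 \ \Longrightarrow\ y_i + \sum_{k \in C} p_k = u_i(C),
```

so each agent's dual value equals his reservation value for the assigned bundle minus the price of that bundle.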



Comments. (i) If we reconsider the proof of Theorem 10.2, we see that, if the SW-condition holds, every N-reallocation satisfying the AB-condition and every regular price vector can be matched to a price equilibrium.

(ii) Furthermore, if we increase the initial money supply, the price vector remains regular (since (LP) and (LP)* are independent of the money supply); moreover, the reallocation still maximizes social welfare and still supports the AB-condition. Therefore, in the new situation the proof above again shows that we have an equilibrium. Hence, the original pair is a stable equilibrium of the original economy, supported by the same price vector.

(iii) Finally, the proof above can be used to verify that, if an equilibrium is supported by a regular price vector, then the AB-condition is not needed to prove the stability of that price. In this case we have, by the definition of a price equilibrium:

If the money supply is raised, the price vector remains regular and the (second part of the) proof once more shows that we have an equilibrium in the new situation.

The SW-condition is necessary for the existence of stable price equilibria (Theorem 10.3).

Proof of Theorem 10.3. Let a stable price equilibrium with its equilibrium price be given. Define, for each agent, a dual value from his equilibrium payoff. We prove that the resulting pair is a feasible point of (LP)*, i.e., that the dual inequality holds for each agent and each bundle.

Let any commodity bundle and any agent be given. Consider the relevant real number and any positive number. Then

If this number is nonnegative, the maximality condition and the budget constraint generate the inequality. So,

Substitution then gives the desired bound, and therefore,

If the number is negative, we use the fact that the allocation is also an equilibrium if the initial amount of money is increased. This time, we redefine the relevant quantity, which is nonnegative for a sufficiently large increase. We can proceed as before and find the same conclusion again.

Hence, in both cases, the dual inequality holds for all bundles. Define the integer-valued stochastic redistribution by:



equal to one for the assigned bundle, and zero else.

The two vectors are feasible in the primal and the dual program, respectively, and lead to the same value. Hence, this is the common optimal value of the programs and both vectors are optimal solutions. Because the primal solution is integer-valued, the SW-condition is satisfied. Finally, the redistribution maximizes social welfare.
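The final step is the standard linear-programming duality argument: if a primal-feasible and a dual-feasible point have equal objective values, both are optimal. In the assumed notation:

```latex
\sum_{i,C} x_{iC}\,u_i(C) \ \le\ \mathrm{val}(\mathrm{LP}) \ \le\ \mathrm{val}(\mathrm{LP}^*) \ \le\ \sum_{i \in N} y_i + \sum_{k \in \Omega} p_k ,
```

so equality of the two outer terms forces equality throughout.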

Summarizing the results of Theorems 10.2 and 10.3, we find that, as soon as the money supply satisfies the AB-condition, the SW-condition is a necessary and sufficient condition for the existence of stable price equilibria; moreover, a stable price equilibrium allocation maximizes social welfare and equilibrium prices are regular price vectors. For unstable price equilibria the last two statements need not be true: in Example 10.4 the equilibrium price is not regular and the equilibrium reallocation does not maximize social welfare. Comparing this result with the results of Bikhchandani and Mamer (1997), we find the following difference. Bikhchandani and Mamer (1997) assume the stronger abundance condition, under which every efficient distribution satisfies our AB-condition. Under this assumption they prove the equivalence of the SW-condition and the existence of price equilibria.

10.5 The Nucleolus as Regular Price Vector

For each economy we define the partial TU-game as follows. The 'player'-set consists of the agents together with the indivisible goods. The collection of feasible coalitions consists of the grand coalition and all coalitions formed by one agent together with a bundle of goods; the value of such a coalition is the agent's reservation value for the bundle. The value of the grand coalition is chosen to be so large that the core of the game is nonempty.
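A plausible reading of this construction, suggested by the form of the dual program (LP)* (an assumption; the chapter's formulas were lost in transcription), is:

```latex
\text{`players': } N \cup \Omega, \qquad v(\{i\} \cup C) = u_i(C) \quad (i \in N,\ C \subseteq \Omega).
```

With this reading, dual feasibility of a pair of vectors is exactly the statement that all feasible coalitions have nonpositive excess.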

In order to prove that the nucleolus is a regular price vector, we first have to show that it is a singleton.

Proposition 10.6 Let a partial TU-game be given. Then the following two statements are equivalent:

(i) the collection of feasible coalitions is balanced and complete;

(ii) the nucleolus is a singleton.



Proof. Let us call a vector of which the coordinates sum up to zero a side payment. A side payment is called beneficial if its total over every feasible coalition is nonnegative. Corollary 6 of Derks and Reijnierse (1998) shows that the collection is balanced and complete if and only if the zero vector is the only beneficial side payment.

Since the only beneficial side payment is the zero vector, every nonzero side payment makes some feasible coalition strictly worse off. As the relevant function is continuous, there is a positive number such that every side payment of unit length costs at least this amount to some coalition in the collection. In words, moving from one pre-imputation to another at unit distance will always cost at least this amount to some coalition T in the collection.

Let any pre-imputation be given. How far can we move from it without enlarging the maximal excess? An upper bound can be given as follows. Consider the largest and the smallest excess, with respect to this pre-imputation, over the coalitions in the collection, and let any other pre-imputation be given. If it lies sufficiently far away, then moving to it will cost some coalition T more than the gap between these excesses, which gives:

Define:

Pre-imputations outside this set have a maximal excess that is too large. So we might as well restrict the set of candidates for the nucleolus to this set: if we have a lexicographically best candidate in it, it is a global best candidate. The set is compact and therefore we can apply Theorem 2.3 of Maschler et al. (1992): the nucleolus is nonempty.

Let two elements of the nucleolus be given. Theorem 4.3 of Maschler et al. (1992) gives that the excesses at the nucleolus are constant: both elements give every coalition in the collection the same excess. Hence, the two elements assign the same total payoff to every coalition in the collection. The collection is complete by assumption, so the two elements coincide; the nucleolus is a singleton.

Now let a beneficial side payment be given. Then adding it to the nucleolus leads to a (weakly) lexicographically better pre-imputation.



Hence, it must be the zero vector. We can again apply Corollary 6 of Derks and Reijnierse (1998).

With the help of the previous proposition, it is not difficult to prove that the nucleolus of the partial game is a singleton:

Lemma 10.7 Let a partial TU-game arising from an exchange economy be given. Then the nucleolus of this game consists of one point.

Proof. To show the completeness of the collection, it suffices to show that the characteristic vectors of the single agents and of the single goods are in its span. This is true because of the relations between the feasible coalitions and the grand coalition.

To show the balancedness of the collection, it is sufficient to prove that every coalition S in it is a member of a partition that is a subcollection of the collection. Given such a coalition, the remaining goods can be distributed over the remaining agents (here a cardinality condition is needed), which yields a partition in the collection.

Now the lemma is a direct consequence of the previous Proposition.

Theorem 10.8 Let an exchange economy with indivisible goods and money be given. If a pair of vectors is the nucleolus of the associated partial game, then its goods part is a regular price vector.

Proof. Let the maximal excess with respect to the nucleolus be given. This means that, after shifting by this excess, the inequalities of (LP)* hold for all agents and all bundles. Accordingly, the shifted nucleolus is a feasible point of (LP)*. Suppose some feasible point of (LP)* has a strictly smaller objective value. Then it satisfies the dual inequalities for all pairs of agents and bundles, and there is a number such that shifting it back yields a pre-imputation. All coalitions in the collection have an excess strictly lower than the maximal excess with respect to this imputation. Hence, the first point is not the nucleolus. This contradiction gives that the shifted nucleolus is an optimal point of (LP)*, so its goods part is a regular price vector.



Define, for any pre-imputation, the collection of coalitions attaining the maximal excess as the subcollection of the feasible coalitions with maximal excess. We abbreviate the collection taken at the nucleolus as the maximal-excess collection. It can help to check the SW-condition:

Proposition 10.9 The SW-condition holds if and only if the maximal-excess collection at the nucleolus contains a partition.

Proof. We have seen that the shifted nucleolus is an optimal solution of (LP)* (with the notation of the previous proof). If the maximal-excess collection contains a partition, the (stochastic) reallocation that puts probability one on the coalitions of this partition, and zero elsewhere, satisfies complementary slackness and is therefore optimal in (LP). This implies that the SW-condition holds.

Conversely, if the SW-condition holds, there is an N-reallocation that maximizes expected social welfare. As the shifted nucleolus is an optimal solution of (LP)*, complementary slackness holds. Then, for every agent, the coalition of that agent with his assigned bundle attains the maximal excess, so the maximal-excess collection contains a partition.

So we are left to answer the question: does the maximal-excess collection contain a partition of the 'player'-set? If not, we can combine the results of Theorem 10.3 and Proposition 10.9 and conclude that there is no stable price equilibrium.

If the maximal-excess collection contains a partition, the proof of Proposition 10.9 shows that it induces a distribution of the indivisibilities maximizing expected social welfare. If the regular price vector2 obtained by computing the nucleolus is used, one selects the coalitions for which the agents' money suffices. If this subcollection contains a partition, then the corresponding assignment is a stable price equilibrium. This can be deduced by a reasoning similar to the proof of Theorem 10.2.

2 The regularity follows from Theorem 10.8.



Note that the condition used here is weaker than the AB-condition.

References

Aumann, R.J., and M. Maschler (1964): “The bargaining set for cooperative games,” in: Dresher, M., Shapley, L.S., Tucker, A.W. (eds.), Advances in Game Theory. Princeton: Princeton University Press, 443–476.

Beviá, C., M. Quinzii, and J.A. Silva (1999): “Buying several indivisiblegoods,” Mathematical Social Sciences, 37, 1–23.

Bikhchandani, S., and J.W. Mamer (1997): “Competitive equilibrium in an exchange economy with indivisibilities,” Journal of Economic Theory, 74, 385–413.

Debreu, G. (1959): Theory of Value. New York: John Wiley and Sons, Inc.

Derks, J., and J.H. Reijnierse (1998): “On the core of a collection ofcoalitions,” International Journal of Game Theory, 27, 451–459.

Maschler, M., J.A.M. Potters, and S.H. Tijs (1992): “The general nucleolus and the reduced game property,” International Journal of Game Theory, 21, 85–106.

Potters, J.A.M., and S.H. Tijs (1992): “The nucleolus of matrix games and other nucleoli,” Mathematics of Operations Research, 17, 164–174.

Reijnierse, J.H., and J.A.M. Potters (1998): “The B-nucleolus of TU-games,” Games and Economic Behavior, 24, 77–96.

Schmeidler, D. (1969): “The nucleolus of a characteristic function game,” SIAM Journal on Applied Mathematics, 17, 1163–1170.

Shapley, L.S. (1953): “A value for n-person games,” in: Kuhn, H.W., Tucker, A.W. (eds.), Contributions to the Theory of Games II, Annals of Mathematics Studies, 28. Princeton: Princeton University Press, 307–317.

Snijders, C. (1995): “Axiomatization of the nucleolus,” Mathematics ofOperations Research, 20, 189–196.

Young, H. (1985): “Monotonic solutions of cooperative games,” International Journal of Game Theory, 14, 65–72.


Chapter 11

Network Formation, Costs, and Potential Games

BY MARCO SLIKKER AND ANNE VAN DEN NOUWELAND

11.1 Introduction

We study the endogenous formation of networks in situations where the values obtainable by coalitions of players can be described by a coalitional game. To do so, we model network formation as a strategic-form game in which an exogenous allocation rule is used to determine the payoffs to the players in various networks. We only consider exogenous allocation rules that divide the value of each group of interacting players among these players. Such allocation rules are called component efficient. In the network-formation game, the players have to weigh the possible advantages of forming links, such as occupying a more central position in a network and therefore maybe increasing their payoff, against the costs of forming links. The starting point of this chapter is the strategic-form network-formation game that was introduced in Dutta et al. (1998) and that was extended to include a cost for forming a link by Slikker and van den Nouweland (2000).1 We show that this strategic-form network-formation game is a potential game if and only if the exogenous allocation rule is the cost-extended Myerson value that was introduced in Slikker and van den Nouweland (2000). Potential games,

P. Borm and H. Peters (eds.), Chapters in Game Theory, 223–246. © 2002 Kluwer Academic Publishers. Printed in the Netherlands.


1 This model was actually first mentioned, briefly, in Myerson (1991).


which were introduced by Monderer and Shapley (1996), are easy to analyze because for such a game all the information necessary to compute its Nash equilibria can be captured in a potential function, a function that assigns to each strategy profile a single number. Also, the existence of a potential function gives rise to a refinement of Nash equilibrium, namely the set of strategy profiles that maximize this potential function. We study which networks emerge according to the potential-maximizing strategy profiles. We find for games with three symmetric players that the pattern of networks supported by potential-maximizing strategy profiles as the costs for forming links increase depends on whether the underlying coalitional game is superadditive and/or convex. In all cases, though, higher costs for forming links result in the formation of fewer links. The results that we obtain for 3-player symmetric games are surprisingly similar to those found for coalition-proof Nash equilibrium in Slikker and van den Nouweland (2000). We conclude the current chapter by extending the result that, according to the potential maximizer, higher costs for forming links result in the formation of fewer links, to games with more than three players who are not necessarily symmetric.

The outline of the chapter is as follows. We start with a review of the literature on network formation in Section 11.2. In Section 11.3 we describe cost-extended communication situations and the cost-extended Myerson value as well as the network-formation game in strategic form. In Section 11.4 we describe potential games and we show that the network-formation game in strategic form is a potential game if and only if the cost-extended Myerson value is used to determine the payoffs of the players. In Section 11.5 we then use the potential maximizer as an equilibrium refinement in these games and we study which networks are formed according to the potential maximizer. We obtain the result that higher costs for forming links result in the formation of fewer links.

11.2 Literature Review

In this section we provide a brief review of the literature on network formation.

The game-theoretical literature on the formation of networks was initiated by Aumann and Myerson (1988). They study situations in which the profits obtainable by coalitions of players can be described by a coalitional game. For such situations, they introduce an extensive-form game of network formation in which links are formed sequentially

224 SLIKKER AND VAN DEN NOUWELAND


and in which a link that is formed at some point cannot be broken later in the game. The Myerson value (cf. Myerson, 1977) is used as an exogenous allocation rule to determine the payoffs to the players in various networks. Aumann and Myerson (1988) study which networks are supported by subgame-perfect Nash equilibria of this network-formation game. They show that there exist superadditive games such that only incomplete or even non-connected networks are supported by subgame-perfect Nash equilibria. They also show that in weighted majority games with several small players who each have one vote and who as a group have a majority and one large player who needs at least one small player to form a majority, the subgame-perfect Nash equilibrium predicts the formation of the complete network on a minimal winning coalition of small players. Aumann and Myerson (1988) also provide two examples of weighted majority games with several large players in which complete networks containing one large player and several small players are supported by subgame-perfect Nash equilibria. They pose the question whether there exists a weighted majority game for which a network that is not internally complete can be supported by a subgame-perfect Nash equilibrium. This question is addressed by Feinberg (1988), who provides an example of a weighted majority game and a network that is not internally complete such that no new links will be formed once this network has been formed. Slikker and Norde (2000) use the extensive-form game of Aumann and Myerson (1988) to study network formation in symmetric convex games. They show that for symmetric convex games with up to five players the complete network is always supported by a subgame-perfect Nash equilibrium and that all networks that are supported by a subgame-perfect Nash equilibrium are payoff equivalent to the complete network. Furthermore, Slikker and Norde (2000) show that this result cannot be extended to games with more than five players. They provide an example of a 6-player symmetric convex game for which there are networks supported by subgame-perfect Nash equilibria in which the players have payoffs that are different from those they get in the complete network.

Dutta et al. (1998) study a network-formation game in strategic form in which links are formed simultaneously. Like Aumann and Myerson (1988), they study situations in which the profits obtainable by coalitions of players can be described by a coalitional game. They also use an exogenous allocation rule to determine the payoffs to the players in various networks. However, rather than focusing on the Myerson value only, they consider a class of allocation rules that includes the Myerson value. They restrict their attention to superadditive coalitional games. Their focus is on the identification of networks that are supported by various equilibrium concepts. After showing that every network can be supported by a Nash equilibrium of the strategic-form network-formation game, they proceed by studying refinements of Nash equilibrium. Because strong Nash equilibria might not exist, they focus on less demanding refinements such as Nash equilibria in undominated strategies and coalition-proof Nash equilibria. They show that both of these equilibrium refinements predict the formation of the complete network or of some network in which the players get the same payoffs as in the complete network.

NETWORKS AND POTENTIAL GAMES 225

Qin (1996) studies the relation between potential games and strategic-form network-formation games. He shows that the Myerson value is the unique component efficient allocation rule that results in the network-formation game being a potential game. He then applies the equilibrium refinement called the potential maximizer, which Monderer and Shapley (1996) defined for potential games, to strategic-form network-formation games that use the Myerson value to determine the payoffs to the players in various networks. He shows that the potential maximizer predicts the formation of the complete network or of some network in which the players get the same payoffs as in the complete network.

In both the extensive-form network-formation game of Aumann and Myerson (1988) and the strategic-form network-formation game of Dutta et al. (1998), forming links is free of charge. Slikker and van den Nouweland (2000) introduce costs for establishing links in these two models and study how the level of these costs influences which networks are supported by equilibria. They use the cost-extended Myerson value to determine the payoffs to the players in various networks. For various equilibrium refinements, they identify which networks are supported in equilibrium as the costs for establishing links increase. For the extensive-form network-formation game they obtain the perhaps counterintuitive result that in some cases rising costs for forming links may result in the formation of more links in subgame-perfect equilibrium. In the strategic-form network-formation game, they concentrate on Nash equilibria in undominated strategies and coalition-proof Nash equilibria. They show that generally for very low costs these equilibria predict the formation of the complete network, while the number of links formed in equilibrium decreases as the costs increase.



Slikker and van den Nouweland (2001a) introduce link and claim games, strategic-form network-formation games in which players bargain over the division of payoffs while forming links. This makes their model very different from those described before, where bargaining over payoff division occurs after a network has been formed. Following previous papers, they study situations in which the profits obtainable by coalitions of players can be described by a coalitional game. They find that Nash equilibrium does, in general, not support networks that contain a cycle. The main focus in Slikker and van den Nouweland (2001a) is on the payoffs to the players that can emerge according to various equilibrium refinements. They show that any payoff vector that is in the core of the underlying coalitional game is supported by a Nash equilibrium of the link and claim game but not necessarily by a strong Nash equilibrium, while any strong Nash equilibrium of the link and claim game results in a payoff vector that is in the core of the underlying coalitional game. They also provide an overview of all coalition-proof Nash equilibria for 3-player games that satisfy a mild form of superadditivity.

All the papers described above study situations in which the profits obtainable by coalitions of players can be described by a coalitional game. In recent years, however, a number of papers have been published that study the formation of networks in situations where the profits obtainable by a coalition of players do not depend solely on whether they are connected or not, but also on exactly how they are connected to each other. In this setting, Jackson and Wolinsky (1996) expose a tension between stability and optimality of networks. Dutta and Mutuswami (1997) further study this issue using the strategic-form network-formation game of Dutta et al. (1998). They show that the conflict between stability and optimality of networks can be avoided by taking an implementation approach.

We end this very brief review by pointing the reader to several papers that study dynamic models of network formation in which players are not forward looking. Papers in this area mostly focus on specific parametric models. Without going into any detail, we refer the reader to Bala and Goyal (2000), Goyal and Vega-Redondo (2000), Jackson and Watts (2000), Johnson and Gilles (2000), Watts (2000), and Watts (2001).

For an extensive and up-to-date overview of the game-theoretical literature on networks and network formation we refer the reader to Slikker and van den Nouweland (2001b).



11.3 Network Formation Model in Strategic Form

In this section we describe cost-extended communication situations and the cost-extended Myerson value as defined in Slikker and van den Nouweland (2000). We also describe the network-formation game in strategic form that was studied in Dutta et al. (1998) and Slikker and van den Nouweland (2000) and we describe the results that were obtained in those papers.

Let N be a group of players whose cooperative possibilities are described by the characteristic function that assigns to every coalition of players a value, with the value of the empty coalition equal to zero. The coalitional game describes for every coalition the value that its members can obtain if they cooperate, but it does not address the issue of which players actually cooperate. In communication situations, cooperation is achieved through bilateral relationships that are called (communication) links. The set of all possible links is the set of all unordered pairs of players, and a (communication) network is a graph (N, L) in which the players are the nodes, who are connected via the bilateral links in L. The formation of each link costs a fixed amount c. A tuple in which the first component is a coalitional game, (N, L) is a network, and c is the cost of establishing a link, is called a cost-extended communication situation.

For notational convenience, we omit brackets and denote a link by the juxtaposition of its two players, so that, for instance, 12 denotes the link between players 1 and 2; note that the order of the players does not matter. We will also omit brackets in other expressions.

Let a cost-extended communication situation be given. The cost-extended network-restricted game associated with this situation incorporates three elements, namely the information on the cooperative possibilities of the players as described by the coalitional game, the restrictions on cooperation as described by the network (N, L), and the costs for establishing links. Let T be a coalition of players. These players can use the links in the network between members of T to communicate.2 This induces a natural partition T/L of T into components, in which each component consists of a subgroup of players in T who are either directly connected or indirectly connected through other players in T, and in which two players in different components are not connected in the restricted network. The value of T in the cost-extended network-restricted game is defined as the sum of the values of its components in the coalitional game minus the costs for the links between the players in T.

An allocation rule for cost-extended communication situations assigns to every cost-extended communication situation a vector of payoffs to the players. The cost-extended Myerson value is an allocation rule for cost-extended communication situations that is defined using the Shapley value. The Shapley value (cf. Shapley, 1953) is a well-known solution concept for coalitional games and it is easiest described using unanimity games. For a coalition of players, the associated unanimity game assigns value one to every coalition that contains it and value zero to every other coalition. Shapley (1953) showed that every coalitional game can be written as a linear combination of unanimity games in a unique way. In terms of the unanimity coefficients, the Shapley value of a game gives every player, for each coalition containing him, an equal share of that coalition's unanimity coefficient.

2 ... rather than ... and ... instead of ...
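The unanimity-coefficient description of the Shapley value can be made concrete in a small computation. The sketch below (in Python; the chapter itself contains no code, and the two-player game used here is hypothetical) recovers the coefficients by Möbius inversion and then shares each coefficient equally among the members of its coalition:

```python
from itertools import combinations

def subsets(s):
    """All nonempty subsets of s, as frozensets."""
    items = sorted(s)
    for r in range(1, len(items) + 1):
        for combo in combinations(items, r):
            yield frozenset(combo)

def unanimity_coefficients(v, players):
    """Unanimity (Harsanyi) coefficients by Moebius inversion:
    lambda_S = sum over subsets T of S of (-1)^(|S|-|T|) v(T)."""
    return {S: sum((-1) ** (len(S) - len(T)) * v.get(T, 0.0)
                   for T in subsets(S))
            for S in subsets(players)}

def shapley(v, players):
    """Each player receives an equal share of the coefficient of
    every coalition that contains him."""
    lam = unanimity_coefficients(v, players)
    return {i: sum(lam[S] / len(S) for S in lam if i in S) for i in players}

# Hypothetical two-player game (not taken from the chapter):
v = {frozenset({1}): 1.0, frozenset({2}): 2.0, frozenset({1, 2}): 4.0}
print(shapley(v, [1, 2]))  # → {1: 1.5, 2: 2.5}
```

The same numbers result from averaging marginal contributions over the two player orders, which is a quick consistency check on the decomposition.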

The cost-extended Myerson value of a cost-extended communication situation is the Shapley value of the associated cost-extended network-restricted game, i.e.,

The cost-extended Myerson value can be axiomatically characterized using two of its properties, component efficiency and fairness.

Component Efficiency An allocation rule on a class of cost-extended communication situations is component efficient if, for every cost-extended communication situation and every component of the network, the payoffs to the players in that component add up to the value of the component in the cost-extended network-restricted game.

Fairness An allocation rule on a class of cost-extended communication situations is fair if, for every cost-extended communication situation and every link, it holds that the two players of that link gain or lose equally from its presence.
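Writing γ for the allocation rule and r^{L,c} for the cost-extended network-restricted game (assumed notation; the chapter's symbols were lost in this transcript), the two axioms can plausibly be written as:

```latex
\sum_{i \in C} \gamma_i(N,v,L,c) = r^{L,c}(C) \quad \text{for every component } C \text{ of } (N,L),
\qquad
\gamma_i(N,v,L,c) - \gamma_i(N,v,L\setminus\{ij\},c)
 = \gamma_j(N,v,L,c) - \gamma_j(N,v,L\setminus\{ij\},c).
```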


We now proceed by describing network-formation games in strategic form. In such a game, the players decide with whom they want to form links, taking into account their possible gains from cooperation as described by an underlying coalitional game and the costs of forming links. A link between two players is then formed if and only if both these players indicate that they want to form it. This results in the formation of a specific network and the payoffs to the players in the network are determined using some exogenously given allocation rule for cost-extended communication situations.

Let $(N, v)$ be a coalitional game, $c \ge 0$ the cost for forming a link, and let $\gamma$ be an allocation rule on the class of cost-extended communication situations with underlying coalitional game $(N, v)$ and a cost $c$ for establishing a link. In the network-formation game in strategic form the set of strategies available to player $i$ is $S_i = \{T : T \subseteq N \setminus \{i\}\}$.

By choosing a strategy $s_i \in S_i$ player $i$ indicates that he is willing to form links with the players in $s_i$. Because a link between two players is formed if and only if both players want to form it, a strategy profile $s = (s_1, \ldots, s_n)$ results in the formation of a network $(N, L(s))$ with links $L(s) = \{ij : j \in s_i \text{ and } i \in s_j\}$.
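The mutual-consent rule is easy to make concrete; in this sketch (our own notation) a strategy is simply the set of players a player wants to link with.

```python
def formed_links(strategies):
    """Links formed under mutual consent: ij is formed iff j in s_i and i in s_j.

    `strategies` maps each player to the set of players he wants to link with.
    """
    players = strategies.keys()
    return {frozenset((i, j))
            for i in players for j in strategies[i]
            if j in strategies and i in strategies[j]}

# Player 1 wants links with 2 and 3, player 2 only with 1, player 3 with 2:
s = {1: {2, 3}, 2: {1}, 3: {2}}
print(formed_links(s))  # only link 12 is formed: {frozenset({1, 2})}
```

Note that the link 13 is not formed even though player 1 proposes it, because player 3 does not reciprocate.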

The payoffs to the players are their payoffs in the induced cost-extended communication situation as prescribed by $\gamma$, i.e.,

The network-formation game in strategic form is described by the tuple $(N, (S_i)_{i \in N}, (f_i)_{i \in N})$, where

3 The theorem in Jackson and Wolinsky (1996) is presented in a setting of reward functions. Theorem 8.1 in Slikker and van den Nouweland (2001b) explicitly shows the correspondence between the value of Jackson and Wolinsky (1996) and the cost-extended Myerson value.

230 SLIKKER AND VAN DEN NOUWELAND

For any coalitional game $(N, v)$ and $c \ge 0$ we define the class of cost-extended communication situations with underlying coalitional game $(N, v)$ and a cost $c$ for establishing a link. The cost-extended Myerson value is the unique allocation rule on such a class that satisfies component efficiency and fairness. Theorem 11.1 follows from Theorem 4 in Jackson and Wolinsky (1996) and we omit its proof.3

Theorem 11.1 For any coalitional game $(N, v)$ and $c \ge 0$, the cost-extended Myerson value is the unique allocation rule on the associated class of cost-extended communication situations that satisfies component efficiency and fairness.


for each $i \in N$, and the payoff function $f_i$ is defined as above.

We illustrate the network-formation game in strategic form in the following example.

Example 11.2 Let $(N, v)$ be the 3-player game with $N = \{1, 2, 3\}$ and $v$ given by

Suppose that the cost for establishing a link is $c$. We use the cost-extended Myerson value to determine the payoffs to the players for any given network. In the network-formation game every player has 4 strategies, representing whether he wants to form links with none of the other two players, one of them, or both of them. Link $ij$ will be formed only if both players $i$ and $j$ indicate that they want to form it, and it will not be formed if at least one of these two players indicates that he does not want to form it. For example, if player 1 plays $\{2\}$, player 2 plays $\{1, 3\}$, and player 3 plays $\{2\}$, then only links 12 and 23 are formed, i.e., $L(s) = \{12, 23\}$. Hence, the players find themselves in the cost-extended communication situation with network $\{12, 23\}$, and their payoffs are the cost-extended Myerson value of this situation. To compute this cost-extended Myerson value, we first compute the associated cost-extended network-restricted game.

Because in the network with links 12 and 23 all coalitions but the coalition consisting of players 1 and 3 are connected, we find

NETWORKS AND POTENTIAL GAMES 231

Expressed in unanimity games, we obtain the unanimity decomposition of this network-restricted game, from which its Shapley value is easily computed. Hence, in the network-formation game we now have

Proceeding as described above, we find that the network-formation game in strategic form is as represented in Figure 11.1, where player 1 chooses a row, player 2 chooses a column, and player 3 chooses one of the four payoff matrices.




Dutta et al. (1998) studied the network-formation game in the absence of costs, i.e., $c = 0$. They restrict their attention to superadditive coalitional games, which satisfy $v(S \cup T) \ge v(S) + v(T)$ for all disjoint $S, T \subseteq N$. Also, they require the exogenous allocation rules for communication situations to satisfy three appealing properties that are all satisfied by the Myerson value. Their focus is on the identification of networks that are supported by various equilibrium concepts. After showing that every network can be supported by a Nash equilibrium of the network-formation games, they proceed by studying refinements of Nash equilibrium. Because strong Nash equilibria might not exist, they focus on less demanding refinements such as Nash equilibria in undominated strategies and coalition-proof Nash equilibria. They show that both of these equilibrium refinements predict the formation of the complete network or of some network in which the players get the same payoffs as in the complete network.

Slikker and van den Nouweland (2000) introduce costs for establishing links into communication situations and study how the level of these costs influences which networks are supported by equilibria. They use the cost-extended Myerson value to determine the payoffs to the players in various cost-extended communication situations. For computational reasons, they limit the scope of their analysis to symmetric 3-player games throughout most of their paper. For various equilibrium refinements, they identify which networks are supported in equilibrium as the costs for establishing links increase. For the network-formation game in strategic form that we studied in Example 11.2, their results imply that every network is supported by a Nash equilibrium, while only the complete network is supported by an undominated Nash equilibrium and a coalition-proof Nash equilibrium. As the cost for forming a link increases, the complete network is no longer supported by a Nash equilibrium, and undominated Nash equilibrium and coalition-proof Nash equilibrium support the three networks containing exactly one link. Generally, as the costs increase, networks with fewer links are supported in equilibrium.

11.4 Potential Games

We start out this section by describing potential games and several results obtained for such games by other authors. We then show that the network-formation game in strategic form is a potential



game if and only if the exogenous allocation rule is the cost-extended Myerson value.

Strategic-form potential games were introduced by Monderer and Shapley (1996). A strategic-form game is a potential game if there exists a real-valued function on the set of strategy profiles that captures, for any deviation by a single player, the change in payoff of the deviating player. Such a function is called a potential function, or simply a potential, for the game in strategic form. Formally, a potential for a strategic-form game $(N, (S_i)_{i \in N}, (f_i)_{i \in N})$ is a function $P$ on $S = \prod_{i \in N} S_i$ that satisfies the property that for every strategy profile $s \in S$, every $i \in N$, and every $t_i \in S_i$ it holds that

$$f_i(t_i, s_{-i}) - f_i(s) = P(t_i, s_{-i}) - P(s),$$

where $s_{-i}$ denotes the restriction of $s$ to $N \setminus \{i\}$. A game that admits a potential is called a potential game. Monderer and Shapley (1996) showed that there exist many potential functions for each potential game. Specifically, they showed that if $P$ is a potential function for a strategic-form game, then adding a constant (function) to $P$ results in another potential for the game. Moreover, any two potentials $P$ and $P'$ for a game differ by a constant (function). If a strategic-form game is a potential game, then each of its potential functions contains all the information necessary to determine its Nash equilibria, because the change in payoff of a unilaterally deviating player is captured in the potential. Moreover, the existence of a potential function naturally leads to a refinement of Nash equilibrium by selecting the strategy profiles that maximize the potential function.

A relation between cooperation-structure formation games and potential games was already established in Monderer and Shapley (1996). They considered a two-stage model, called a participation game. In the first stage, each player chooses whether or not he wants to participate. In the second stage, the players who chose not to participate receive some stand-alone value and the participating players are assumed to form a coalition. The players in the coalition receive payoffs that are determined by applying an exogenously given allocation rule. Monderer and Shapley (1996) show that the Shapley value is the unique efficient allocation rule that results in the participation game being a potential game. Qin (1996) studied the relation between potential games and strategic-form network-formation games (as introduced in Section 11.3) in the absence of costs. He showed that the Myerson value is the unique component efficient allocation rule that results in the network-formation game being a potential game.
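For a finite game the defining condition of a potential can be verified by brute force; the sketch below (our own helper, not from the text) checks every unilateral deviation against a candidate function $P$.

```python
from itertools import product

def is_potential(strategy_sets, payoff, P, tol=1e-9):
    """Check that for every profile s, player i, and deviation t:
    f_i(t, s_-i) - f_i(s) = P(t, s_-i) - P(s)."""
    n = len(strategy_sets)
    for s in product(*strategy_sets):
        for i in range(n):
            for t in strategy_sets[i]:
                dev = s[:i] + (t,) + s[i + 1:]
                if abs((payoff(dev)[i] - payoff(s)[i]) - (P(dev) - P(s))) > tol:
                    return False
    return True

# A 2x2 pure coordination game is a potential game; since both players'
# payoffs coincide, the common payoff itself serves as a potential.
payoff = lambda s: (1, 1) if s[0] == s[1] else (0, 0)
P = lambda s: 1 if s[0] == s[1] else 0
print(is_potential([[0, 1], [0, 1]], payoff, P))  # True
```

Replacing $P$ by the zero function makes the check fail, since deviations do change payoffs in this game.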


The work of Monderer and Shapley (1996) and Qin (1996) indicates that there may be a relation between the existence of potential functions for games in strategic form and Shapley values of coalitional games. This relation is studied by Ui (2000). To describe his result, we need some additional notation. Let $N$ be a set of players and $S = \prod_{i \in N} S_i$ a set of strategy profiles for these players. After choosing a strategy profile

$s \in S$, the players play a cooperative game $(N, v_s)$ that depends on the strategy profile chosen. In the cooperative game that is played, the value of a coalition depends only on the strategies of the players in this coalition, i.e., it is independent of the strategies of the players outside this coalition, where $s_T$ denotes the restriction of a strategy profile $s$ to the players in $T$. Hence, with every player set $N$ and set of strategy profiles $S$ we associate an indexed set of coalitional games $\{(N, v_s)\}_{s \in S}$ in

$G^N$ such that for all $T \subseteq N$ and all $s, s' \in S$ with $s_T = s'_T$ it holds that

$$v_s(T) = v_{s'}(T).$$

Here, $G^N$ denotes the set of coalitional games with player set $N$. The following theorem, due to Ui (2000), provides a general relation between Shapley values of coalitional games and strategic-form potential games.

Theorem 11.3 Let $(N, (S_i)_{i \in N}, (f_i)_{i \in N})$ be a game in strategic form. It is a potential game if and only if there exists an indexed set of coalitional games $\{(N, v_s)\}_{s \in S}$ as above such that

$$f_i(s) = \Phi_i(v_s)$$

for each $i \in N$ and each $s \in S$. Furthermore, if the game is a potential game and $\{(N, v_s)\}_{s \in S}$ are as described above, then the function $P$ described by

$$P(s) = \sum_{T \subseteq N,\, T \neq \emptyset} \frac{\lambda_T(v_s)}{|T|}$$

for all $s \in S$ is a potential for the game.

We now turn our attention to network formation in a setting in which there are costs for establishing links. We first show that the strategic-form network-formation game is a potential game if the cost-extended


Myerson value is used to determine the payoffs for the players. We point out that the following lemma extends a result by Qin (1996), who proves a similar result in the absence of costs.

Lemma 11.4 For any coalitional game $(N, v)$ and cost per link $c \ge 0$, it holds that the network-formation game with the cost-extended Myerson value as allocation rule is a potential game.

Proof. Let $(N, v)$ be a coalitional game and let $c$ be the cost for establishing a link. For any strategy profile $s$ in the strategic-form game we consider the cost-extended network-restricted game $v^{L(s),c}$ associated with the cost-extended communication situation $(N, v, L(s), c)$. This defines an indexed set of coalitional games $\{(N, v^{L(s),c})\}_{s \in S}$. We will prove that this set satisfies the condition of Theorem 11.3. Let $T \subseteq N$ and let $s, s'$ be two strategy profiles with $s_T = s'_T$. Since the links among the players in $T$, and hence the costs of these links, do not depend on the strategies of the players outside $T$, it follows that $v^{L(s),c}(T)$ does not depend on those strategies. This implies that $v^{L(s),c}(T) = v^{L(s'),c}(T)$. Also, by the definition of the payoff functions of the network-formation game it holds that

$$f_i(s) = \mu_i(N, v, L(s), c) = \Phi_i(v^{L(s),c})$$

for all $i \in N$ and all $s \in S$, where $\mu$ denotes the cost-extended Myerson value. It now follows from Theorem 11.3 that the network-formation game is a potential game.


Proving that some allocation rule results in the network-formation game being a potential game raises another question, namely whether there exist other allocation rules with this property. We will answer this question in Theorem 11.6, whose proof uses the following lemma. The lemma states that a network-formation game is a potential game only if the exogenous allocation rule used satisfies fairness.

Lemma 11.5 Let $(N, v)$ be a coalitional game and $c \ge 0$. Let $\gamma$ be an allocation rule on the class of cost-extended communication situations with underlying coalitional game $(N, v)$ and cost $c$ for establishing a link. If the associated network-formation game is a potential game, then $\gamma$ satisfies fairness.

Proof. Suppose the network-formation game is a potential game and let $P$ be a potential function for this game. Fix a network $(N, L)$. For each $i \in N$ we define the strategy $s_i = \{j \in N \setminus \{i\} : ij \in L\}$. Then, obviously, $L(s) = L$. Choose a link $ij \in L$ and let $s'_i = s_i \setminus \{j\}$ and $s'_j = s_j \setminus \{i\}$. We use the notation $s_T$ to denote the restriction of $s$ to the players in $T$. Then it holds that

$$P(s'_i, s_{N \setminus \{i\}}) = P(s'_i, s'_j, s_{N \setminus \{i,j\}}) = P(s'_j, s_{N \setminus \{j\}}),$$

because the three strategy tuples $(s'_i, s_{N \setminus \{i\}})$, $(s'_i, s'_j, s_{N \setminus \{i,j\}})$, and $(s'_j, s_{N \setminus \{j\}})$ all result in the formation of the same network, namely $(N, L \setminus \{ij\})$, and, hence, in the same payoffs for the players. Using the definition of the payoff functions of the network-formation game, we now find

$$\gamma_i(N, v, L, c) - \gamma_i(N, v, L \setminus \{ij\}, c) = P(s) - P(s'_i, s_{N \setminus \{i\}}) = P(s) - P(s'_j, s_{N \setminus \{j\}}) = \gamma_j(N, v, L, c) - \gamma_j(N, v, L \setminus \{ij\}, c).$$

Because the link $ij \in L$ was chosen arbitrarily, we may now conclude that $\gamma$ satisfies fairness.

Combining Lemmas 11.4 and 11.5, we derive the following theorem.

Theorem 11.6 Let $(N, v)$ be a coalitional game and $c \ge 0$. Let $\gamma$ be a component efficient allocation rule on the class of cost-extended communication situations with underlying coalitional game $(N, v)$ and cost $c$ for establishing a link. Then the associated network-formation game is a potential game if and only if $\gamma$ coincides with the cost-extended Myerson value on this class.

Proof. The if-part in the theorem follows directly from Lemma 11.4. To prove the only-if-part, suppose that the network-formation game is a potential game. Then it follows from Lemma 11.5 that $\gamma$ satisfies fairness on this class. Because $\gamma$ is component efficient by assumption, it now follows from Theorem 11.1 that $\gamma$ coincides with the cost-extended Myerson value.

In the following theorem, we describe a potential for a strategic-form network-formation game in terms of unanimity coefficients of the associated network-restricted games.


Theorem 11.7 Let $(N, v)$ be a coalitional game and $c \ge 0$ the cost for establishing a link. Then the function $P$ defined by

$$P(s) = \sum_{T \subseteq N,\, T \neq \emptyset} \frac{\lambda_T(v^{L(s),c})}{|T|}$$

for each strategy profile $s$ is a potential for the network-formation game. This expression can be rewritten by separating the unanimity coefficients of the cost-free network-restricted game from the total costs of the links in $L(s)$; this second equality is used below.

Proof. In the proof of Lemma 11.4 we showed that the indexed set of cost-extended network-restricted games satisfies the condition of Theorem 11.3 and that for the payoff functions of the network-formation game it holds that $f_i(s) = \Phi_i(v^{L(s),c})$ for all $i \in N$ and all strategy profiles $s$. We can then conclude from the second part of Theorem 11.3 that the function $P$ given by

$$P(s) = \sum_{T \subseteq N,\, T \neq \emptyset} \frac{\lambda_T(v^{L(s),c})}{|T|}$$

for all $s$ is a potential function for the network-formation game. The second equality in the statement of the theorem now follows easily by noting that4 the unanimity coefficients depend linearly on the game.

4 Note that the cost part of the cost-extended network-restricted game is a linear combination of unanimity games on the two-player coalitions $\{i, j\}$ with $ij \in L(s)$.

11.5 Potential Maximizer

In the previous section we showed that strategic-form network-formation games with costs for establishing links are potential games if the cost-extended Myerson value is used as the exogenous allocation rule. This paves the way for us to use the potential maximizer as an equilibrium refinement in these games. In the current section, we study the networks that are formed according to the potential maximizer.

For a potential game the potential maximizer selects the strategy profiles that maximize a potential function. This equilibrium refinement



was introduced by Monderer and Shapley (1996), who also prove that it is well defined because for every potential game the set of strategy profiles that maximize a potential function is independent of the particular potential function used. As a motivation for this equilibrium refinement, they remark that in the so-called stag-hunt game that was described by Crawford (1991), potential maximization selects strategy profiles that are supported by the experimental results of van Huyck et al. (1990). Additional motivation for the potential maximizer as an equilibrium refinement is provided by Ui (2001), who showed that Nash equilibria that maximize a potential function are generically robust.
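Because any two potentials of the same game differ by a constant, the set of maximizers can be computed as a plain argmax over profiles; the helper below is our own illustration with a toy potential (not a game from the text).

```python
from itertools import product

def potential_maximizers(strategy_sets, P, tol=1e-9):
    """Strategy profiles maximizing a potential P; well defined because any
    two potentials of the same game differ only by a constant."""
    profiles = list(product(*strategy_sets))
    best = max(P(s) for s in profiles)
    return [s for s in profiles if P(s) >= best - tol]

# Toy potential over two binary strategy sets; its unique maximizer is (1, 1).
P = lambda s: 2 if s == (1, 1) else (1 if s == (0, 0) else 0)
print(potential_maximizers([[0, 1], [0, 1]], P))  # [(1, 1)]
```

Since the potential captures every unilateral payoff change, each maximizer is automatically a Nash equilibrium of the underlying potential game.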

In a setting in which establishing links is free, Qin (1996) analyzed strategic-form network-formation games using the Myerson value to determine the players' payoffs. He showed that for any superadditive game the complete network is supported by a potential-maximizing strategy profile. Furthermore, he showed that any potential-maximizing strategy profile gives rise to the formation of a network that results in the same payoffs to the players as the complete network. We extend the work of Qin (1996) and investigate which networks are supported by potential-maximizing strategy profiles in the presence of costs for establishing links.

In the following example, we consider the coalitional game of Example 11.2 and analyze the networks that are supported by potential-maximizing strategy profiles for varying levels of the cost for establishing a link.

Example 11.8 Consider the 3-player coalitional game $(N, v)$ of Example 11.2, with characteristic function $v$ defined by


We established in Lemma 11.4 that the network-formation game is a potential game. To find the potential-maximizing strategy profiles in this game, we start by describing a potential function $P$. It follows from Theorem 11.7 that the value that a potential function assigns to a strategy profile only depends on the network that results. Hence, we can describe a potential function by the values it assigns to strategy profiles resulting in the formation of various networks. Because the players in the game are symmetric, we can restrict



attention to nonisomorphic networks only.5 Consider, for example, a network $(N, L)$ with two links, say $L = \{12, 23\}$. Denoting the cost for establishing a link by $c$, the associated cost-extended network-restricted game is described by

In terms of unanimity games, the network-restricted game is given by

Hence, it follows for the potential $P$ described in Theorem 11.7 that for any strategy profile that results in the formation of links 12 and 23,

It is easily seen that the potential $P$ takes the same value for every strategy profile that results in the formation of a network with two links. The values that $P$ assigns to strategy profiles that result in the formation of networks with 0, 1, or 3 links are determined in a similar manner. We provide the results in Table 11.1. It readily follows using Table 11.1 that in the absence of costs the potential maximizer predicts the formation of the complete network. This is in line with the results of Qin (1996). For positive costs, we derive that the potential maximizer predicts the formation of fewer links as the costs for establishing links rise. The results are represented in Figure 11.2.
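To see how such a table of potential values per network arises, one can compute, for an assumed symmetric 3-player game, the cost-extended network-restricted game of each network and then the sum $\sum_T \lambda_T / |T|$. Everything in the sketch below is our own illustration: we take $v(S) = 0, 60, 72$ for $|S| = 1, 2, 3$, charge the full cost $c$ per link formed inside a coalition, and restrict a coalition's worth to its connected components.

```python
from itertools import chain, combinations

def subsets(players):
    s = sorted(players)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(1, len(s) + 1))]

def components(S, L):
    """Connected components of coalition S under the links L (within S)."""
    S, comps = set(S), []
    while S:
        comp, stack = set(), [next(iter(S))]
        while stack:
            i = stack.pop()
            if i in comp:
                continue
            comp.add(i)
            stack += [j for link in L if i in link for j in link
                      if j in S and j not in comp]
        comps.append(frozenset(comp))
        S -= comp
    return comps

def potential_of_network(N, v, L, c):
    """P = sum over coalitions T of lam_T(restricted game) / |T|."""
    def restricted(S):   # assumed convention: component worths minus c per link in S
        links_in_S = [l for l in L if l <= S]
        return sum(v(C) for C in components(S, links_in_S)) - c * len(links_in_S)
    lam = {}
    for T in subsets(N):
        lam[T] = restricted(T) - sum(lam[S] for S in subsets(T) if S != T)
    return sum(lam[T] / len(T) for T in lam)

N = frozenset({1, 2, 3})
v = lambda S: {1: 0, 2: 60, 3: 72}[len(S)]
for L in [set(), {frozenset({1, 2})}, {frozenset({1, 2}), frozenset({2, 3})},
          {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})}]:
    print(len(L), potential_of_network(N, v, L, c=6))
```

With these made-up numbers and cost $c = 6$, the potential rises with the number of links, so the complete network is the potential maximizer, mirroring the low-cost regime discussed above.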

5 Two networks $(N, L_1)$ and $(N, L_2)$ are isomorphic if there is a one-to-one correspondence between the vertices in $(N, L_1)$ and those in $(N, L_2)$ with the additional property that a link between two vertices in $(N, L_1)$ is included in $L_1$ if and only if the link between the corresponding two vertices in $(N, L_2)$ is included in $L_2$.



Figure 11.2 schematically represents the networks that can result according to the potential maximizer for different levels of the cost $c$. The way to read this figure, as well as the figures to come, is as follows. For the lowest cost range (including $c = 0$) the complete network is the only network that results according to the potential maximizer, for the next range of costs all three networks with two links are supported by the potential maximizer, and so on. On the boundaries between these intervals all the networks that appear on either side of this boundary are supported by the potential maximizer. So, for example, at the boundary between the one-link region and the highest cost region, four networks are supported by the potential maximizer: the empty network and the three networks with one link each.

We conclude this example with the observation that, for the coalitional game in this example, the cost-network pattern in Figure 11.2 also results if we use coalition-proof Nash equilibrium instead of the potential maximizer. That pattern can be found in Slikker and van den Nouweland (2000).

We now turn our attention to the class of symmetric 3-player games. In such a game, the value of a coalition of players does not depend on the identities of its members, but solely on how many players it contains. Hence, a 3-player symmetric game can be described by the values that it assigns to coalitions of various sizes. To keep notations to a minimum, we assume (without loss of generality) that 1-player coalitions have a value of zero, and we denote the values of 2-player coalitions and 3-player coalitions by $v_2$ and $v_3$, respectively. In addition to this, we restrict our analysis to non-negative games and assume that $v_2 \ge 0$ and $v_3 \ge 0$. In the setting of 3-player symmetric games, Slikker and van den Nouweland (2000) find that for various equilibrium refinements,


Page 255: Borm_Chapters in Game Theory-In Honor of Stef Tijs


the cost-network patterns for network-formation games with costs for establishing links depend on whether the underlying coalitional game is superadditive and/or convex. We find that for the structures that are supported by the potential maximizer a similar distinction holds. To derive these patterns, we use the values according to the potential function $P$ described in Theorem 11.7, which we provide in Table 11.2. The cost-network patterns for the classes of games that contain only non-superadditive games, superadditive but non-convex games, and convex games can be found in Figures 11.3, 11.4, and 11.5, respectively.

We notice that the number of links formed if the players play a potential-maximizing strategy profile declines as the cost for forming a link increases. For non-superadditive games, networks with two links are never formed according to the potential maximizer, and for convex games, networks with 1 link are not supported by the potential maximizer for any cost. For any coalitional game, we find that if the cost is very low, then all three links are formed, and if the cost is very high, then no links are formed.

The predictions according to the potential maximizer are remarkably



similar to the predictions according to coalition-proof Nash equilibrium (see Figures 11–13 in Slikker and van den Nouweland, 2000). The only difference is the transition point from networks with 3 links to networks with 1 link for the class with non-superadditive games only, i.e., for games with $v_3 < v_2$. This transition point is different for the potential maximizer (see Figure 11.3) and for coalition-proof Nash equilibrium.

We are able to extend the result that the potential maximizer predicts the formation of fewer links if the costs increase to games with an arbitrary number of players that are not necessarily symmetric.6

Theorem 11.9 Let $(N, v)$ be a coalitional game and let $c_1$ and $c_2$ denote two levels of costs for establishing links such that $c_1 \le c_2$.

6 This result may seem straightforward from an intuitive point of view. However, we stress that a similar result cannot be obtained for subgame-perfect Nash equilibria of extensive-form network-formation games. Slikker and van den Nouweland (2000) show that in such games increasing costs can lead to more links being formed in equilibrium.


References

Aumann, R., and R. Myerson (1988): "Endogenous formation of links between players and coalitions: an application of the Shapley value," in Roth, A. (ed.), The Shapley Value. Cambridge, UK: Cambridge University Press, 175–191.

Bala, V., and S. Goyal (2000): "A noncooperative model of network formation," Econometrica, 68, 1181–1229.

Crawford, V. (1991): "An evolutionary interpretation of van Huyck, Battalio, and Beil's experimental results on coordination," Games and Economic Behavior, 3, 25–59.

Dutta, B., and S. Mutuswami (1997): "Stable networks," Journal of Economic Theory, 76, 322–344.

Dutta, B., A. van den Nouweland, and S. Tijs (1998): "Link formation in cooperative situations," International Journal of Game Theory, 27, 245–256.

Let $(N, L_1)$ be a network that is supported by a potential-maximizing strategy profile in the network-formation game with cost $c_1$, and let $(N, L_2)$ be a network that is supported by a potential-maximizing strategy profile in the game with cost $c_2$. Then $|L_1| \ge |L_2|$.

Proof. Let $s^1$ be a potential-maximizing strategy profile in the network-formation game with cost $c_1$ such that $L(s^1) = L_1$, and let $s^2$ be defined analogously. We denote by $P^1$ (respectively $P^2$) the potential described in Theorem 11.7 for the game with cost $c_1$ (respectively $c_2$). Note that both network-formation games have the same set of strategy profiles, which we denote by $S$. Because the potential function $P^1$ takes a maximum value for strategy profile $s^1$, it holds for all $s \in S$ that $P^1(s^1) \ge P^1(s)$. Now, let $s \in S$ be such that $L(s) = L_2$. Then we find, using the second equality sign in the expression in Theorem 11.7 for $P^1$ and $P^2$, that the potentials of a strategy profile in the two games differ only through the cost terms, which are proportional to the number of links formed. Because $c_1 \le c_2$, for every $s \in S$ with $|L(s)| > |L_1|$ the strategy profile $s^2$, which maximizes the potential $P^2$, results in the formation of at most $|L_1|$ links. Hence, $|L_2| \le |L_1|$.



Feinberg, Y. (1998): "An incomplete cooperation structure for a voting game can be strategically stable," Games and Economic Behavior, 24, 2–9.

Goyal, S., and F. Vega-Redondo (2000): "Learning, network formation and coordination," Mimeo.

Jackson, M., and A. Watts (2000): "On the formation of interaction networks in social coordination games," Mimeo.

Jackson, M., and A. Watts (2001): "The evolution of social and economic networks," Journal of Economic Theory (to appear).

Jackson, M., and A. Wolinsky (1996): "A strategic model of social and economic networks," Journal of Economic Theory, 71, 44–74.

Johnson, C., and R. Gilles (2000): "Spatial social networks," Review of Economic Design, 5, 273–299.

Monderer, D., and L. Shapley (1996): "Potential games," Games and Economic Behavior, 14, 124–143.

Myerson, R. (1977): "Graphs and cooperation in games," Mathematics of Operations Research, 2, 225–229.

Myerson, R. (1991): Game Theory: Analysis of Conflict. Cambridge, Mass.: Harvard University Press.

Qin, C. (1996): "Endogenous formation of cooperation structures," Journal of Economic Theory, 69, 218–226.

Shapley, L. (1953): "A value for n-person games," in: Tucker, A. and Kuhn, H. (eds.), Contributions to the Theory of Games II. Princeton: Princeton University Press, 307–317.

Slikker, M., and H. Norde (2000): "Incomplete stable structures in symmetric convex games," CentER Discussion Paper 2000-97, Tilburg University, Tilburg, The Netherlands.

Slikker, M., and A. van den Nouweland (2000): "Network formation with costs for establishing links," Review of Economic Design, 5, 333–362.

Slikker, M., and A. van den Nouweland (2001a): "A one-stage model of link formation and payoff division," Games and Economic Behavior, 34, 153–175.

Slikker, M., and A. van den Nouweland (2001b): Social and Economic Networks in Cooperative Game Theory. Boston: Kluwer Academic Publishers.

Ui, T. (2000): "A Shapley value representation of potential games," Games and Economic Behavior, 31, 121–135.



Ui, T. (2001): "Robust equilibria of potential games," Econometrica, to appear.

van Huyck, J., R. Battalio, and R. Beil (1990): "Tacit coordination games, strategic uncertainty, and coordination failure," American Economic Review, 80, 234–248.

Watts, A. (2000): “Non-myopic formation of circle networks,” Mimeo.

Watts, A. (2001): "A dynamic model of network formation," Games and Economic Behavior, 34, 331–341.



Chapter 12

Contributions to the Theory of Stochastic Games

BY FRANK THUIJSMAN AND KOOS VRIEZE

12.1 The Stochastic Game Model

In this introductory section we give the necessary definitions and notations for the two-person case of the stochastic game model and we briefly present some basic results. In Section 12.2 we discuss the main existence results for zero-sum stochastic games, while in Section 12.3 we focus on general-sum stochastic games. In each section we discuss several examples to illustrate the most important phenomena. Unless mentioned otherwise, we shall assume the state space and the action spaces to be finite. In our discussion we shall address in particular the contributions by Dutch researchers to the field. For a more general and more detailed discussion we refer to Neyman and Sorin (2001).

It all started with the fundamental paper by von Neumann (1928) in which he proves the minimax theorem, which says that for each finite matrix $A$ of reals there exist probability vectors $p$ and $q$ such that for all probability vectors $p'$ and $q'$ of the appropriate sizes it holds that $p' A q \le p A q \le p A q'$. (Note that we do not distinguish row vectors from column vectors. In the matrix products this should be clear from the context.) In other words: $\max_{p} \min_{q}\, p A q = \min_{q} \max_{p}\, p A q$. This theorem can be interpreted to say that each matrix game has a


P. Borm and H. Peters (eds.), Chapters in Game Theory, 247–265.© 2002 Kluwer Academic Publishers. Printed in the Netherlands.


value. A matrix game $A$ is played as follows. Simultaneously, and independently of each other, player 1 chooses a row and player 2 chooses a column of $A$. Then player 2 has to pay the amount $a_{ij}$ to player 1. Each player is allowed to randomize over his available actions, and we assume that player 1 wants to maximize his expected payoff, while player 2 wants to minimize the expected payoff to player 1. The minimax theorem tells us that, for each matrix $A$, there is a unique amount, the value, denoted by $\mathrm{val}(A)$, which player 1 can guarantee as his minimal expected payoff, while at the same time player 2 can guarantee that the expected payoff to player 1 will be at most this amount.
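In pure strategies the two guarantee levels can differ, which is precisely why randomization is needed; a small check with our own helper (not from the text) illustrates this.

```python
def pure_guarantees(A):
    """Lower value max_i min_j a_ij and upper value min_j max_i a_ij.
    The minimax theorem says these coincide once mixed strategies are allowed."""
    lower = max(min(row) for row in A)
    upper = min(max(row[j] for row in A) for j in range(len(A[0])))
    return lower, upper

print(pure_guarantees([[1, -1], [-1, 1]]))  # matching pennies: (-1, 1)
print(pure_guarantees([[3, 1], [2, 0]]))    # saddle point, value 1: (1, 1)
```

For matching pennies the pure guarantees are $-1$ and $1$, while the mixed value is $0$, attained by both players randomizing fifty-fifty.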

Later Nash (1951) considered the N-person extension of matrix games, in the sense that all N players simultaneously and independently choose actions that determine a payoff for each and every one of them. Nash (1951) showed that in such games there always exists at least one (Nash) equilibrium: a tuple of strategies such that each player is playing a best reply against the joint strategy of his opponents. For the two-player case this boils down to a "bimatrix game" $(A, B)$, where players 1 and 2 receive $a_{ij}$ and $b_{ij}$, respectively, in case their choices determine entry $(i, j)$. The result of Nash says that there exist probability vectors $p$ and $q$ such that for all probability vectors $p'$ and $q'$ it holds that $p A q \ge p' A q$ and $p B q \ge p B q'$, where $A$ and $B$ are finite matrices of the same size.

Shapley (1953) introduced dynamics into game theory by considering the situation that at discrete stages in time the players play one of finitely many matrix games, where the choices of the players determine a payoff to player 1 (by player 2) as well as a stochastic transition to a next matrix game. He called these games "stochastic games", which brings us to the topic of this chapter. Formally, a two-person stochastic game with finite state and action spaces can be represented by a finite set of matrices $\{M_s\}_{s \in S}$ corresponding to the set of states $S$.

For each state $s \in S$, the matrix $M_s$ has size $m_s \times n_s$, and entry $(i, j)$ of $M_s$ contains:

248 THUIJSMAN AND VRIEZE

a) a payoff $r^k(s, i, j)$ for each player $k \in \{1, 2\}$;

b) a transition probability vector $p(s, i, j) = \big(p(s' \mid s, i, j)\big)_{s' \in S}$, where $p(s' \mid s, i, j)$ is the probability of a transition from state $s$ to state $s'$ whenever entry $(i, j)$ of $M_s$ is selected.

Play can start in any state of $S$ and evolves by the players independently choosing actions $i_t$ and $j_t$ in $M_{s_t}$, where $s_t$ denotes the state visited at


stage $t$. If $r^1(s, i, j) + r^2(s, i, j) = 0$ for all $s$, $i$, and $j$, then the game is called zero-sum, otherwise it is called general-sum. In zero-sum games the players have strictly opposite interests, since they are paying each other.

At any stage $t$ each player knows the history $h_t = (s_1, i_1, j_1, \ldots, s_{t-1}, i_{t-1}, j_{t-1}, s_t)$ up to stage $t$, so the players know the sequence of states visited and the actions that were actually chosen in each of these states. The players do not know how their opponents choose those actions, i.e., they do not know their opponent's strategy. A strategy is a plan that tells a player what mixed action to use in state $s_t$ at stage $t$, given the full history $h_t$. Such behavior strategies will be denoted by $\pi$ for player 1 and by $\sigma$ for player 2.

For initial state $s$ and any pair of strategies $(\pi, \sigma)$, the limiting average reward $\gamma^k(s, \pi, \sigma)$ and the $\beta$-discounted reward $\gamma_\beta^k(s, \pi, \sigma)$, with $\beta \in [0, 1)$, to player $k$ are respectively given by

$$\gamma^k(s, \pi, \sigma) = \liminf_{T \to \infty} \mathbb{E}_{s\pi\sigma}\Big[\frac{1}{T} \sum_{t=1}^{T} r^k(S_t, I_t, J_t)\Big], \qquad \gamma_\beta^k(s, \pi, \sigma) = \mathbb{E}_{s\pi\sigma}\Big[(1-\beta) \sum_{t=1}^{\infty} \beta^{t-1} r^k(S_t, I_t, J_t)\Big],$$

where $S_t$, $I_t$, and $J_t$ are random variables for the state and actions at stage $t$. Let $\gamma(\pi, \sigma)$ and $\gamma_\beta(\pi, \sigma)$ denote the vectors of rewards with coordinates corresponding to the initial states.

corresponding to the initial states.A stationary strategy for a player consists of a mixed action for eachstate, to be used whenever that state is being visited, regardless ofthe history. Stationary strategies for player 1 are denoted by

where is the mixed ac-tion used by player 1 in state For player 2’s strategies we write

A pair of stationary strategies $(\pi, \sigma)$ determines a Markov chain (with transition matrix $P(\pi, \sigma)$) on $S$, where entry $(s, s')$ of $P(\pi, \sigma)$ is $\sum_{i} \sum_{j} \pi_s(i)\, \sigma_s(j)\, p(s' \mid s, i, j)$.

If we use the notation $Q(\pi, \sigma) = \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} P^{t-1}(\pi, \sigma)$ for the Cesàro-limit matrix, then

STOCHASTIC GAMES 249

where I is the identity matrix, and

with

Page 263: Borm_Chapters in Game Theory-In Honor of Stef Tijs

Notice that (12.3) and (12.4) imply that row $s$ of $Q(\pi, \sigma)$ is the unique stationary distribution for the Markov chain starting in state $s$. A stationary strategy $\pi$ is called pure if, for every state $s$, the mixed action $\pi_s$ puts probability 1 on a single action.

Pure stationary strategies shall be denoted by and for players 1and 2 respectively. The following lemma is due to Hordijk et al. (1983).It says that, when playing against a fixed stationary strategy, a playeralways has a pure stationary best reply:

Lemma 12.1 For all β ∈ (0, 1) and for all stationary strategies y for player 2, there exist pure stationary strategies x̃ and x̂ for player 1, such that for all strategies π:

γ_β(x̃, y) ≥ γ_β(π, y)   and   γ(x̂, y) ≥ γ(π, y).

A similar result applies to player 2's best replies against stationary strategies of player 1.
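The Cesàro-limit matrix Q(x, y) and the limiting average reward of a fixed stationary pair can be computed numerically. The sketch below is our own illustration, not from the chapter: the two-state chain and the payoffs are made-up numbers, with the stationary pair (x, y) already folded into P and r.

```python
import numpy as np

# Hypothetical two-state example (numbers made up). P is the induced Markov
# chain P(x, y) on S and r the vector of expected one-stage payoffs r(x, y).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
r = np.array([1.0, 0.0])

# Cesaro limit Q(x, y) = lim_N (1/N) * sum_{n=1}^{N} P^{n-1}  (cf. (12.3))
N = 20000
Q = np.zeros_like(P)
M = np.eye(2)               # P^0 = I
for _ in range(N):
    Q += M
    M = M @ P
Q /= N

gamma = Q @ r               # limiting average reward, one entry per initial state
print(gamma)                # ~ [0.8, 0.8]: both initial states give reward 0.8

# Q P = Q, and each row of Q is a stationary distribution of the chain
assert np.allclose(Q @ P, Q, atol=1e-3)
```

Since this chain is irreducible, both rows of Q coincide with the unique stationary distribution (0.8, 0.2), so both initial states yield the same limiting average reward.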

Finally, we wish to mention one more type of strategy, namely Markov strategies. These are strategies that, at any stage of play, prescribe actions that depend only on the current state and stage. Thus, the past actions of the opponent are not taken into account. Strategies for which these choices do depend on those past actions shall be called history dependent.

12.2 Zero-Sum Stochastic Games

In zero-sum stochastic games it is customary to consider only the payoffs to player 1, which player 1 wishes to maximize and which player 2 wants to minimize; we then drop the player index and simply write r_s(i, j), γ_β and γ. In his seminal paper on stochastic games Shapley (1953) shows:

Theorem 12.2 For each stochastic game and for all β ∈ (0, 1) there exists a vector v_β and there exist stationary strategies x_β and y_β such that for all strategies π and σ:

γ_β(x_β, σ) ≥ v_β ≥ γ_β(π, y_β).

The vector v_β is called the β-discounted value and the strategies x_β, y_β are called stationary β-discounted optimal strategies.

Thus we have that v_β is the highest reward that player 1 can guarantee:

v_β = max_π min_σ γ_β(π, σ),

while player 2 can make sure that player 1's reward will not exceed v_β, and each player can do so by some specific stationary strategy. Shapley's proof is based on the observation that v_β is the unique solution of the following system of equations, one for each state s:

v_β(s) = val[ (1 − β) r_s(i, j) + β Σ_{t∈S} p(t | s, i, j) v_β(t) ]_{i,j}   (12.6)

where 'val' denotes the matrix game value operator.

250 THUIJSMAN AND VRIEZE

It is well-known (cf. Blackwell (1962)) that

γ_β(x, y) = (1 − β)(I − βP(x, y))^{−1} r(x, y),

and hence (12.1), (12.2) and (12.5) give

lim_{β↑1} γ_β(x, y) = Q(x, y) r(x, y) = γ(x, y)

for every pair of stationary strategies (x, y).

Everett (1957) and Gillette (1957) were the first to consider undiscounted rewards. Everett (1957) examined recursive games, which can be defined as stochastic games where the only non-zero payoffs can be obtained in absorbing states, i.e. states that have the property that once play gets there, it remains there forever. Although optimal strategies need not exist for such games, Everett (1957) showed the following:

Theorem 12.3 For each recursive game the limiting average value exists, and it can be achieved up to any ε > 0 by using stationary strategies, i.e. there exists v and for each ε > 0 there exist stationary strategies x_ε, y_ε such that for all π and σ:

γ(x_ε, σ) ≥ v − εe   and   γ(π, y_ε) ≤ v + εe.

Here e denotes the vector (1, 1, . . . , 1) in R^S.

Example 12.4 Consider the following recursive game.


To explain this notation: Player 1 chooses rows; player 2 chooses columns. For each entry, the number above the diagonal is the payoff to player 1; in case of a general-sum game the payoff tuple is written in this place. The number below the diagonal is the state at which play is to proceed; in case of a stochastic transition we write the transition probability vector in this place.

States 3 and 4 are absorbing and obviously states 1 and 2 are the only interesting initial states. For this game the limiting average value can be given explicitly, and for player 1 a stationary limiting average ε-optimal strategy exists, prescribing appropriate mixed actions for states 1 and 2 respectively (clearly, in states 3 and 4 he can only choose the one available action). As can be verified using (12.6), the β-discounted value can be computed explicitly, and for player 1 the unique stationary β-discounted optimal strategies are given by playing Top, the first row, with a probability depending on β, in state 1 as well as in state 2.

An elementary proof of Everett's (1957) result is given by Thuijsman and Vrieze (1992), where for the recursive game situation a stationary limiting average ε-optimal strategy is constructed from an arbitrary sequence of stationary β_n-discounted optimal strategies with β_n ↑ 1.
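Shapley's system of equations lends itself directly to value iteration: the 'val' operator is a small linear program, and iterating the system contracts with modulus β. The sketch below is our own illustration, not from the chapter — the two-state game, its payoffs and transitions are made-up numbers, and `val` computes the matrix game value with scipy.

```python
import numpy as np
from scipy.optimize import linprog

def val(A):
    """Value of the zero-sum matrix game A (row player maximizes)."""
    m, n = A.shape
    # variables: row mixture x_1..x_m and the value v; maximize v
    c = np.zeros(m + 1)
    c[-1] = -1.0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])   # v - (x^T A)_j <= 0 for each column j
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0                           # sum_i x_i = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[-1]

# Made-up two-state game (all numbers hypothetical): r[s] is the payoff matrix
# in state s; p[s][i][j] is the transition probability vector out of state s.
beta = 0.9
r = [np.array([[3.0, 1.0], [0.0, 2.0]]),
     np.array([[1.0, 4.0], [2.0, 0.0]])]
p = [[[np.array([0.5, 0.5]), np.array([1.0, 0.0])],
      [np.array([0.0, 1.0]), np.array([0.5, 0.5])]],
     [[np.array([1.0, 0.0]), np.array([0.5, 0.5])],
      [np.array([0.5, 0.5]), np.array([0.0, 1.0])]]]

def shapley_update(v):
    """One application of the Shapley operator: v(s) <- val of the local game."""
    return np.array([val(np.array([[(1 - beta) * r[s][i, j] + beta * p[s][i][j] @ v
                                    for j in range(2)] for i in range(2)]))
                     for s in range(2)])

v = np.zeros(2)
for _ in range(200):        # contraction with modulus beta: geometric convergence
    v = shapley_update(v)
print(v)                    # the beta-discounted value vector v_beta
```

The fixed point is checked by applying the operator once more: the update leaves v essentially unchanged, which is exactly the statement that v solves Shapley's system.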

Example 12.5 This famous game is the so-called big match, introduced by Gillette (1957).

For this game the unique stationary β-discounted optimal strategies are x_β and y_β = (1/2, 1/2) for players 1 and 2 respectively, and v_β(1) = 1/2 for initial state 1. However, it was not clear for a long time whether or not the limiting average value would exist. The problem was that against any Markov strategy for player 1, and for any ε > 0, player 2 has a Markov strategy such that player 1's limiting average reward is less than ε. On the other hand, player 2 can guarantee that he has to pay a limiting average reward of at most 1/2, but he cannot guarantee anything less than 1/2. Hence there is an apparent gap between the amounts the


players can guarantee using only Markov strategies. The matter was settled by Blackwell and Ferguson (1968), who formulated, for arbitrary ε > 0, a history dependent strategy for player 1 which guarantees a limiting average reward of at least 1/2 − ε against any strategy of player 2. This history dependent limiting average ε-optimal strategy is of the following type. At stage n, suppose that play is still in state 1, where player 2 has chosen Left k_n times, while he has chosen Right l_n times. Then player 1 should play Bottom (his second row) with a small probability that depends on the difference k_n − l_n and on a parameter N, where larger values of N correspond to smaller ε.
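This construction can be simulated. Since the exact weighting formula is garbled in this transcription, the sketch below assumes the commonly cited form 1/(k_n − l_n + N + 1)^2, capped at 1, together with the standard big match payoffs (Top: 1 against Left, 0 against Right, play continues; Bottom: absorbing with payoff 0 against Left and 1 against Right) — both are assumptions on our part, not statements from the chapter.

```python
import random

def big_match_reward(opponent, N=10, T=1000, seed=0):
    """Average payoff to player 1 over T stages when player 1 plays a
    Blackwell-Ferguson-type strategy with parameter N (assumed form: play
    Bottom with probability 1/(k - l + N + 1)^2, where k and l count player
    2's past Left and Right choices)."""
    rng = random.Random(seed)
    k = l = 0
    total = 0.0
    for t in range(T):
        left = opponent(t)                       # True = Left, False = Right
        d = k - l + N + 1
        p_bottom = 1.0 if d <= 1 else 1.0 / d ** 2
        if rng.random() < p_bottom:              # absorb: payoff repeats forever
            total += (0.0 if left else 1.0) * (T - t)
            return total / T
        total += 1.0 if left else 0.0            # Top row: 1 vs Left, 0 vs Right
        if left:
            k += 1
        else:
            l += 1
    return total / T

# Against "always Right", the Bottom probability rises to 1 within N + 1
# stages, forcing absorption at payoff 1, so the long-run average is near 1:
print(big_match_reward(lambda t: False))
```

The point of the history dependence is visible in the counter d: a run of Right choices by player 2 drives the Bottom probability up, while a run of Left choices lets player 1 keep harvesting the non-absorbing payoff.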

This result on the big match was generalized by Kohlberg (1974), whoshowed that every repeated game with absorbing states has a limitingaverage value. A repeated game with absorbing states is a stochasticgame in which, just like in the big match, all states but one are absorbing.Finally, by an ingenious proof Mertens and Neyman (1981) showed:

Theorem 12.6 For every stochastic game the limiting average value v exists, i.e., for each ε > 0 there exist strategies π_ε and σ_ε such that for all strategies π and σ:

γ(π_ε, σ) ≥ v − εe   and   γ(π, σ_ε) ≤ v + εe.

Their proof exploits the remarkable observation by Bewley and Kohlberg (1976) that the value v_β, as well as stationary β-discounted optimal strategies, can be expanded as Puiseux series, i.e. series in fractional powers of 1 − β, for β close to 1. For example, for the above big match we have that v_β(1) = 1/2 for all β.
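Assuming the standard big match payoffs (the game diagram is lost in this transcription), Shapley's equation for state 1 reduces to a one-dimensional fixed point, and the β-discounted value comes out as 1/2 for every β — consistent with a constant Puiseux series. A minimal sketch:

```python
def val_2x2(a, b, c, d):
    """Value of the 2x2 zero-sum game [[a, b], [c, d]] for the maximizer."""
    maximin = max(min(a, b), min(c, d))          # best pure guarantee
    minimax = min(max(a, c), max(b, d))
    if maximin == minimax:                       # pure saddle point
        return maximin
    return (a * d - b * c) / (a + d - b - c)     # completely mixed value

def big_match_discounted_value(beta, iters=10000):
    # Standard payoffs assumed: Top row 1 / 0 non-absorbing,
    # Bottom row 0* / 1* absorbing.  Shapley's equation for state 1:
    #   v = val[[(1-beta)*1 + beta*v, (1-beta)*0 + beta*v],
    #           [0,                   1                  ]]
    v = 0.0
    for _ in range(iters):
        v = val_2x2((1 - beta) + beta * v, beta * v, 0.0, 1.0)
    return v

for beta in (0.1, 0.5, 0.9, 0.99):
    print(beta, big_match_discounted_value(beta))   # value is 1/2 for every beta
```

Algebraically the update is v ← (1 − β + βv)/(2 − β), whose unique fixed point is 1/2 independently of β; the iteration merely confirms this numerically.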

Before deriving this breakthrough on the easy initial states (Tijs and Vrieze, 1986), the same authors had studied structural properties of stochastic games in a number of papers. In Tijs and Vrieze (1980) the effect that perturbations in the game parameters have on the value and on the optimal strategies is examined. In Vrieze and Tijs (1980) the results of Bohnenblust et al. (1950) and of Shapley and Snow (1950) have been extended to the case of stochastic games. Besides, it is shown how games with a given solution can be constructed. At about the same time, Tijs and Vrieze (1981) also generalized the results of Vilkas (1963) and Tijs (1980), who characterized the value for matrix games, to the case of stochastic games. Slightly earlier, Tijs (1979) examined N-person stochastic games with finite state spaces and metric action spaces. He showed that, under continuity assumptions with respect to reward and transition functions, as well as some assumptions on the topological size of the action spaces, the existence of ε-equilibria can be established. As far as games with non-finite action spaces are concerned, we should also mention the work by Sinha et al. (1991) who examined semi-infinite stochastic games, extending earlier work by Tijs (1979) on semi-infinite matrix games.

As far as structural properties are concerned, we would like to mention the very important result by Tijs and Vrieze (1986), which says that for every stochastic game there is for each player a non-empty set of initial states for which a stationary limiting average optimal strategy exists. Their proof relies on the Puiseux series work by Bewley and Kohlberg (1976). A new and direct proof of the same result is given in Thuijsman and Vrieze (1991) and Thuijsman (1992). A detailed study of the possibilities for limiting average optimality by means of stationary strategies can be found in Thuijsman and Vrieze (1993), while in Flesch et al. (1998) it is proved that the existence of a limiting average optimal strategy implies the existence of stationary limiting average ε-optimal strategies. The idea of easy initial states also plays a key role in the existence proof for ε-equilibria in general-sum stochastic games by Vieille (2000a,b).

Apart from these general results, specially structured stochastic games have been examined. We already discussed recursive games and repeated games with absorbing states, but we should also mention the following classes: irreducible/unichain stochastic games (cf. Rogers, 1969; Sobel, 1971; or Federgruen, 1978), i.e. stochastic games for which for any pair of stationary strategies the related Markov chain is irreducible/unichain; single controller stochastic games (cf. Parthasarathy and Raghavan, 1981), i.e. games in which the transitions only depend on the actions of one and the same player for all states; switching control stochastic games (cf. Filar, 1981; Vrieze et al., 1983), i.e. games with transitions for each state depending on the action of only one player; perfect information stochastic games (cf. Liggett and Lippman, 1969), where in each state one of the players has only one action available; stochastic games with additive rewards and additive transitions, ARAT (cf. Raghavan et al., 1985), i.e. there are a_s(i), b_s(j) and p_1, p_2 such that r_s(i, j) = a_s(i) + b_s(j) and p(· | s, i, j) = p_1(· | s, i) + p_2(· | s, j) for all s, i, j; and, finally, stochastic games with separable rewards and state independent transitions, SER-SIT (cf. Parthasarathy et al., 1984), i.e. there are c(s), a(i), b(j) and p(· | i, j) such that r_s(i, j) = c(s) + a(i) + b(j) and p(· | s, i, j) = p(· | i, j) for all s, i, j. All these classes admit stationary limiting average optimal strategies. Later, in Thuijsman and Vrieze (1991, 1992) and in Thuijsman (1992), new and simpler proofs were provided for the existence of stationary solutions in several of these classes. Characterizations, in terms of game properties, for the existence of stationary limiting average optimal strategies are provided in Vrieze and Thuijsman (1987), Filar et al. (1991) and Thuijsman (1992).

12.3 General-Sum Stochastic Games

The first to examine general-sum stochastic games were Fink (1964) and Takahashi (1964), who, independently of each other, showed the existence of stationary β-discounted equilibria for stochastic games:

Theorem 12.7 For each stochastic game and for all β ∈ (0, 1) there exist stationary strategies x_β and y_β for players 1 and 2 respectively, such that for all strategies π and σ:

γ_β^1(x_β, y_β) ≥ γ_β^1(π, y_β)   and   γ_β^2(x_β, y_β) ≥ γ_β^2(x_β, σ).

Since, by its definition, for the zero-sum situation an equilibrium can only consist of a pair of optimal strategies, the big match (cf. Example 12.5) immediately shows that limiting average equilibria do not always exist. Where we introduced ε-optimal strategies for the zero-sum case, we now have to introduce ε-equilibria for the general-sum case.

Definition 12.8 A pair of strategies (π_ε, σ_ε) is called a limiting average ε-equilibrium, ε ≥ 0, if neither player 1 nor player 2 can gain more than ε by a unilateral deviation, i.e. for all strategies π and σ:

γ^1(π, σ_ε) ≤ γ^1(π_ε, σ_ε) + ε   and   γ^2(π_ε, σ) ≤ γ^2(π_ε, σ_ε) + ε.
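For intuition, the deviation test in this definition can be made concrete in a one-shot bimatrix game (our own toy numbers, not from the chapter): against a fixed mixed strategy some pure strategy is a best reply, so checking pure deviations suffices.

```python
import numpy as np

def eps_equilibrium(A, B, x, y, eps):
    """Is (x, y) a (one-shot) eps-equilibrium of the bimatrix game (A, B)?
    Against a fixed mixed strategy of the opponent, some pure strategy is a
    best reply, so it is enough to check pure deviations."""
    u1, u2 = x @ A @ y, x @ B @ y
    best1 = max(A @ y)          # best pure-deviation payoff for player 1
    best2 = max(x @ B)          # best pure-deviation payoff for player 2
    return best1 <= u1 + eps and best2 <= u2 + eps

# Toy coordination game (numbers made up):
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])
x = np.array([0.6, 0.4])
y = np.array([0.6, 0.4])

print(eps_equilibrium(A, B, x, y, eps=0.0))    # -> False: player 1 gains 0.32
print(eps_equilibrium(A, B, x, y, eps=0.4))    # -> True: gains stay below 0.4
print(eps_equilibrium(A, B, np.array([1.0, 0.0]),
                      np.array([1.0, 0.0]), eps=0.0))   # -> True: exact equilibrium
```

In the stochastic game setting the same comparison is made on the limiting average rewards, which is what makes existence so much harder than in the one-shot case.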

The existence of limiting average ε-equilibria, for all ε > 0, in arbitrary general-sum two-person stochastic games has recently been established by Vieille (2000a,b):

Theorem 12.9 For each stochastic game and for all ε > 0 there exists a limiting average ε-equilibrium.


Preceding, and leading to, this breakthrough are the following results. First, the result by Tijs and Vrieze (1986) on easy initial states was generalized to the case of general-sum stochastic games, i.e., it was shown that in every stochastic game there is a non-empty set of initial states for which ε-equilibria exist (cf. Thuijsman and Vrieze, 1991; Thuijsman, 1992; or Vieille, 1993). Our proof of this result was based on ergodicity properties of a converging sequence of stationary β_n-discounted equilibria, with β_n ↑ 1 (please note that here n is just a counter and is not related to the stage parameter). However, the equilibrium strategies are of a behavioral type: at all stages players must take into account the history of past moves of their opponent. Nevertheless, a side-result of this approach was a simple and straightforward proof for the existence of stationary limiting average equilibria for irreducible/unichain stochastic games (which was earlier derived by Rogers, 1969; Sobel, 1971; and Federgruen, 1978).

Concerning the existence of limiting average ε-equilibria for all initial states (simultaneously), sufficient conditions have been formulated in Thuijsman (1992), which are based on properties of a converging sequence of stationary β_n-discounted equilibria with β_n ↑ 1, while in Thuijsman and Vrieze (1998) quite general sufficient conditions are formulated in terms of stationary strategies, and of observability and punishability of deviations. This punishability principle is based on the observation that in any equilibrium each player should get at least as much as he can guarantee himself in the worst case. To be more precise, we have seen in the previous section that player 1 can guarantee himself an amount

v^1 = sup_π inf_σ γ^1(π, σ),

and, by its definition, player 1 can not guarantee any higher reward. For player 2 we have, with similar properties,

v^2 = sup_σ inf_π γ^2(π, σ).

Thus player 1 has the power to restrict player 2's reward to be at most v^2, while, at the same time, in any equilibrium player 2 should always get at least v^2, for otherwise he would have a profitable deviation. Therefore we call this approach the threat approach: the players are constantly checking each other, and any "wrong" move of the opponent will immediately trigger a punishment that will push his reward down to this guarantee level. Thus the threats are the stabilizing force in the limiting average ε-equilibria.
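The punishment levels can be computed by linear programming. The sketch below is a one-shot bimatrix analogue with made-up prisoner's-dilemma payoffs (our own illustration, not the chapter's limiting average guarantee levels): it computes each player's guarantee level and checks that a proposed cooperative reward dominates both, which is exactly the punishability requirement.

```python
import numpy as np
from scipy.optimize import linprog

def maximin(A):
    """max_x min_j (x^T A)_j: the most the row player of A can guarantee."""
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # maximize v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])      # v <= (x^T A)_j for each column j
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0                              # x is a probability vector
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[-1]

# Made-up prisoner's-dilemma payoffs: A for player 1, B for player 2.
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])
B = A.T

v1 = maximin(A)      # what player 2 cannot push player 1 below
v2 = maximin(B.T)    # player 2's guarantee level, roles reversed
print(v1, v2)        # both punishment levels equal 1.0 here

# The cooperative reward (3, 3) dominates (v1, v2), so in the repeated game
# it can be sustained by threatening to push a deviator down to his level.
assert 3.0 >= v1 - 1e-9 and 3.0 >= v2 - 1e-9
```

A one-shot deviation gains at most 5 once, after which the punishment holds the deviator at his level 1, so the long-run average of any deviation falls below the cooperative reward 3 — the stabilizing mechanism described above.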


Using this threat approach, the existence of limiting average ε-equilibria is proved for repeated games with absorbing states (cf. Vrieze and Thuijsman, 1989, where a prototype threat approach is used), as well as for stochastic games with state independent transitions (cf. Thuijsman, 1992), for stochastic games with three states (cf. Vieille, 1993), and for stochastic games with switching control (cf. Thuijsman and Raghavan, 1997); existence of pure 0-equilibria has been shown for stochastic games with additive rewards and additive transitions (ARAT, cf. Thuijsman and Raghavan, 1997). The latter class includes the class of perfect information games; a perfect information game has the property that in each state one of the players has only one action available. The use of threats is also indispensable in the existence proof given by Vieille (2000a,b).

We remark that prior to our threat approach the existence of limiting average ε-equilibria was not known for any of these classes, even though the zero-sum solutions had been derived long before. Also note that even for perfect information stochastic games stationary limiting average equilibria generally do not exist, although for the zero-sum case pure stationary limiting average optimal strategies are available (cf. Liggett and Lippman, 1969). Example 12.11 below will illustrate this point.

For recursive repeated games with absorbing states (cf. Flesch et al., 1996) and for ARAT repeated games with absorbing states (cf. Evangelista et al., 1996) stationary limiting average ε-equilibria do exist (without threats).

We conclude this chapter with three very special examples. In Example 12.10 we examine a repeated game with absorbing states for which there is a gap between the β-discounted equilibrium rewards and the limiting average equilibrium rewards. In Example 12.11 we discuss a perfect information stochastic game which does not have stationary limiting average ε-equilibria, and for which the only equilibria known to us are of the threat type. In Example 12.12 we discuss a three-person recursive repeated game with absorbing states for which the only limiting average equilibria consist of cyclic Markov strategies. This is very remarkable since in that game stationary limiting average ε-equilibria do not exist.


Example 12.10 Consider the following example with three states:

This is an example of a repeated game with absorbing states, where play remains in the initial state 1 as long as player 1 chooses Top, but play reaches an absorbing state as soon as player 1 ever chooses Bottom. Sorin (1986) examined this example in great detail. The sup-inf limiting average values for initial state 1 can be computed explicitly. Clearly then, there can be no stationary limiting average ε-equilibrium, because against any stationary strategy of player 1, player 2 can get at least 1, and by doing so player 1 would get less than the amount which he can always achieve by playing limiting average ε-optimally in the zero-sum game based on his own payoffs. However, for each pair of rewards in a certain convex hull of feasible, individually rational reward pairs (Conv stands for convex hull), Sorin (1986) gives history dependent limiting average ε-equilibria that yield this pair as an equilibrium reward. Besides, he shows that any limiting average ε-equilibrium corresponds to a reward in this set, while all β-discounted equilibria yield rewards outside it. Although this observation suggests that the limiting average general-sum case can not be approached from the β-discounted general-sum case, by studying this example Vrieze and Thuijsman (1989) discovered a general principle to construct, starting from any arbitrary sequence of stationary β_n-discounted equilibria with β_n ↑ 1, a limiting average ε-equilibrium.

Example 12.11 The next example has four states:

This game is a recursive perfect information game for which there is no stationary limiting average ε-equilibrium for small ε > 0. One can prove this as follows. Suppose player 2 puts positive weight on Left in state 2; then player 1's only stationary limiting average ε-best replies are those that put only a limited weight on Top in state 1; against any of these strategies, player 2's only stationary limiting average ε-best replies are those that put weight 0 on Left in state 2. So there is no stationary limiting average ε-equilibrium where player 2 puts positive weight on Left in state 2. But neither is there a stationary limiting average ε-equilibrium where player 2 puts weight 0 on Left in state 2, since then player 1 should put only a limited weight on Bottom in state 1, which would in turn contradict player 2's putting weight 0 on Left. Following the construction of Thuijsman and Raghavan (1997), where existence of limiting average 0-equilibria is proved for arbitrary N-person games with perfect information, we can find an equilibrium by the following procedure. Take a pure stationary limiting average optimal strategy x̃ for player 1 (this exists by Liggett and Lippman, 1969); let ỹ be a pure stationary limiting average optimal strategy for player 2 minimizing player 1's reward; and let ŷ be a pure stationary limiting average best reply for player 2 maximizing his own reward against x̃ (which exists by Lemma 12.1). Now define σ for player 2 by: play ŷ unless at some stage player 1 has ever deviated from playing x̃; from then on play ỹ. Now it can be verified that (x̃, σ) is a limiting average equilibrium.

Example 12.12 Our final example is described by the following payoffmatrices:

This is a three-person recursive repeated game with absorbing states, where an asterisk in any particular entry denotes a transition to an absorbing state with the same payoff as in this particular entry. There is only one entry for which play will remain in the non-trivial initial state. One should picture the game as a 2 × 2 × 2 cube, where the layers belonging to the actions of player 3 (Near and Far) are represented separately. As before, player 1 chooses Top or Bottom and player 2 chooses Left or Right. The entry (T, L, N) is the only non-absorbing entry for the initial state. Hence, as long as play is in the initial state, the only possible history is the one where entry (T, L, N) was played at all previous stages. This rules out the use of any non-trivial history dependent strategy for this game. Therefore, the players only have Markov strategies at their disposal. In Flesch et al. (1997) it is shown that, although (cyclic) Markov limiting average 0-equilibria exist for this game, there are no stationary limiting average ε-equilibria in this game. Moreover, the set of all limiting average equilibria is characterized completely. An example of a Markov equilibrium for this game is (f¹, f², f³), where f¹ is defined by: at stages 1, 4, 7, 10, . . . play T with a certain fixed probability less than 1, and at all other stages play T with probability 1. Similarly, f² is defined by: at stages 2, 5, 8, 11, . . . play L with a certain fixed probability less than 1, and at all other stages play L with probability 1. Likewise, f³ is defined by: at stages 3, 6, 9, 12, . . . play N with a certain fixed probability less than 1, and at all other stages play N with probability 1. The limiting average reward corresponding to this equilibrium is (1, 2, 1).

So far, existence of ε-equilibria has been the main issue in the theory of stochastic games, whereas in other areas of non-cooperative game theory refinements of the Nash equilibrium concept have been introduced. Only few of these refinements have been generalized to the area of stochastic games. One such extension is that of (trembling hand) perfect equilibria. A perfect equilibrium is one where the strategies played are not only best replies to each other, but to small perturbations of the opponent's strategy as well. Thuijsman et al. (1991) showed the existence of such perfect stationary β-discounted equilibria for arbitrary stochastic games, and also the existence of perfect stationary limiting average equilibria for irreducible stochastic games.

Finally we would like to mention three recent contributions to the field of stochastic games. The first one is the work of Potters et al. (1999), who examined stochastic games that admit a potential function. For the class of stochastic games with additive rewards and additive transitions (ARAT), as well as for the class of stochastic games with separable rewards and state independent transitions (SER-SIT), the potential function is used to derive the existence of pure stationary optimal strategies in the zero-sum case and pure stationary equilibria in the general-sum case.

The second paper to mention is the one by Herings and Peeters(2000) who introduce an algorithm, a tracing procedure, to computestationary equilibria in discounted stochastic games. Moreover, conver-gence of the algorithm for almost all such games is proved and the issueof equilibrium selection is addressed.

The third one is by Schoenmakers et al. (2001), who introduce a new approach for extending the method of fictitious play, developed by Brown (1951) and Robinson (1951), to the situation of stochastic games. A different approach to applying fictitious play to stochastic games was studied by Vrieze and Tijs (1982).

References

Bewley, T., and E. Kohlberg (1976): "The asymptotic theory of stochastic games," Math. Oper. Res., 1, 197–208.

Blackwell, D. (1962): "Discrete dynamic programming," Ann. Math. Statist., 33, 719–726.

Blackwell, D., and T.S. Ferguson (1968): "The big match," Ann. Math. Statist., 39, 159–163.

Bohnenblust, H.F., S. Karlin, and L.S. Shapley (1950): "Solutions of discrete two-person games," Annals of Mathematics Studies, 24. Princeton: Princeton University Press, 51–72.

Brown, G.W. (1951): "Iterative solution of games by fictitious play," in: Koopmans, T.C. (ed.), Activity Analysis of Production and Allocation. New York: Wiley, 374–376.

Evangelista, F.S., T.E.S. Raghavan, and O.J. Vrieze (1996): "Repeated ARAT games," in: Ferguson, T.S. et al. (eds.), Statistics, Probability and Game Theory: Papers in Honor of David Blackwell, IMS Lecture Notes Monograph Series, 30, 13–28.

Everett, H. (1957): "Recursive games," in: Dresher, M., et al. (eds.), Contributions to the Theory of Games, III, Annals of Mathematics Studies, 39. Princeton: Princeton University Press, 47–78.

Federgruen, A. (1978): "On N-person stochastic games with denumerable state space," Adv. Appl. Prob., 10, 452–471.

Filar, J.A. (1981): "Ordered field property for stochastic games when the player who controls transitions changes from state to state," J. Opt. Theory Appl., 34, 503–515.

Filar, J.A., T.A. Schultz, F. Thuijsman, and O.J. Vrieze (1991): "Nonlinear programming and stationary equilibria in stochastic games," Math. Progr., 50, 227–237.

Fink, A.M. (1964): "Equilibrium in a stochastic n-person game," J. Sci. Hiroshima Univ., Series A-I, 28, 89–93.

Flesch, J., F. Thuijsman, and O.J. Vrieze (1996): “Recursive repeatedgames with absorbing states,” Math. Oper. Res., 21, 1016–1022.

Flesch, J., F. Thuijsman, and O.J. Vrieze (1997): “Cyclic Markov equi-libria in stochastic games,” Int. J. Game Theory, 26, 303–314.

Flesch, J., F. Thuijsman, and O.J. Vrieze (1998): "Simplifying optimal strategies in stochastic games," SIAM Journal on Control and Optimization, 36, 1331–1347.

Gillette, D. (1957): "Stochastic games with zero stop probabilities," in: Dresher, M., et al. (eds.), Contributions to the Theory of Games, III, Annals of Mathematics Studies, 39. Princeton: Princeton University Press, 179–187.

Herings, P.J.J., and R.J.A.P. Peeters (2000): "Stationary equilibria in stochastic games: structure, selection and computation," Report RM/00/031, Meteor, Maastricht University.

Hordijk, A., O.J. Vrieze, and G.L. Wanrooij (1983): “Semi-Markovstrategies in stochastic games,” Int. J. Game Theory, 12, 81–89.

Kohlberg, E. (1974): “Repeated games with absorbing states,” Annalsof Statistics, 2, 724–738.

Liggett, T.M., and S.A. Lippman (1969): “Stochastic games with perfectinformation and time average payoff,” SIAM Review, 11, 604–607.

Mertens, J.F., and A. Neyman (1981): “Stochastic games,” Int. J. GameTheory, 10, 53–66.

Nash, J. (1951): “Non-cooperative games,” Annals of Mathematics, 54,286–295.

Neyman, A., and S. Sorin (2001): Stochastic Games. Proceedings of the 1999 NATO Summer Institute on Stochastic Games held at Stony Brook (forthcoming).

Parthasarathy, T., and T.E.S. Raghavan (1981): "An order field property for stochastic games when one player controls transition probabilities," J. Opt. Theory Appl., 33, 375–392.

Parthasarathy, T., S.H. Tijs, and O.J. Vrieze (1984): "Stochastic games with state independent transitions and separable rewards," in: Hammer, G., and D. Pallaschke (eds.), Selected Topics in Operations Research and Mathematical Economics. Berlin: Springer Verlag, 262–271.

Potters, J.A.M., T.E.S. Raghavan, and S.H. Tijs (1999): "Pure equilibrium strategies for stochastic games via potential functions," Report 9910, Department of Mathematics, University of Nijmegen.


Raghavan, T.E.S., S.H. Tijs, and O.J. Vrieze (1985): “On stochasticgames with additive reward and transition structure,” J. Opt. TheoryAppl., 47, 451–464.

Robinson, J. (1951): "An iterative method of solving a game," Annals of Mathematics, 54, 296–301.

Rogers, P.D. (1969): Non-zerosum stochastic games. PhD thesis, re-port ORC 69-8, Operations Research Center, University of California,Berkeley.

Schoenmakers, G., J. Flesch, and F. Thuijsman (2001): “Fictitiousplay in stochastic games,” Report M01-02, Department of Mathematics,Maastricht University.

Shapley, L.S. (1953): "Stochastic games," Proc. Nat. Acad. Sci. USA, 39, 1095–1100.

Shapley, L.S., and R.N. Snow (1950): “Basic solutions of discrete games,”Annals of Mathematics Studies, 24. Princeton: Princeton UniversityPress, 27–35.

Sinha, S., F. Thuijsman, and S.H. Tijs (1991): “Semi-infinite stochas-tic games,” in: Raghavan, T.E.S., et al. (eds.), Stochastic Games andRelated Topics. Dordrecht: Kluwer Academic Publishers, 71–83.

Sobel, M.J. (1971): “Noncooperative stochastic games,” Ann. Math.Statist., 42, 1930–1935.

Sorin, S. (1986): “Asymptotic properties of a non-zerosum stochasticgame,” Int. J. Game Theory, 15, 101–107.

Takahashi, M. (1964): "Equilibrium points of stochastic noncooperative games," J. Sci. Hiroshima Univ., Series A-I, 28, 95–99.

Thuijsman, F. (1992): Optimality and Equilibria in Stochastic Games.CWI-tract 82, Center for Mathematics and Computer Science, Amster-dam.

Thuijsman, F., and T.E.S. Raghavan (1997): "Perfect information stochastic games and related classes," Int. J. Game Theory, 26, 403–408.

Thuijsman, F., S.H. Tijs, and O.J. Vrieze (1991): “Perfect equilibria instochastic games,” J. Opt. Theory Appl., 69, 311–324.

Thuijsman, F., and O.J. Vrieze (1991): “Easy initial states in stochas-tic games,” in: Raghavan, T.E.S., et al. (eds.), Stochastic Games andRelated Topics. Dordrecht: Kluwer Academic Publishers, 85–100.

Thuijsman, F., and O.J. Vrieze (1992): "Note on recursive games," in: Dutta, B., et al. (eds.), Game Theory and Economic Applications, Lecture Notes in Economics and Mathematical Systems, 389. Berlin: Springer, 133–145.

Thuijsman, F., and O.J. Vrieze (1993): "Stationary ε-optimal strategies in stochastic games," OR Spektrum, 15, 9–15.

Thuijsman, F., and O.J. Vrieze (1998): “The power of threats in stochas-tic games,” in: Bardi et al. (eds.), Stochastic Games and NumericalMethods for Dynamic Games. Boston: Birkhauser, 339–353.

Tijs, S.H. (1979): “Semi-infinite linear programs and semi-infinite ma-trix games,” Nieuw Archief voor de Wiskunde, 27, 197–214.

Tijs, S.H. (1980): “Stochastic games with one big action space in eachstate,” Methods of Operations Research, 38, 161–173.

Tijs, S.H. (1981): "A characterization of the value of zero-sum two-person games," Naval Research Logistics Quarterly, 28, 153–156.

Tijs, S.H., and O.J. Vrieze (1980): "Perturbation theory for games in normal form and stochastic games," J. Opt. Theory Appl., 30, 549–567.

Tijs, S.H., and O.J. Vrieze (1981): “Characterizing properties of thevalue function of stochastic games,” J. Opt. Theory Appl., 33, 145–150.

Tijs, S.H., and O.J. Vrieze (1986): “On the existence of easy initial statesfor undiscounted stochastic games,” Math. Oper. Res., 11, 506–513.

Vieille, N. (1993): “Solvable states in stochastic games,” Int. J. GameTheory, 21, 395–404.

Vieille, N. (2000a): “2-person stochastic games I: a reduction,” IsraelJournal of Mathematics, 119, 55–91.

Vieille, N. (2000b): “2-person stochastic games II: the case of recursivegames,” Israel Journal of Mathematics, 119, 93–126.

Vilkas, E.I. (1963): “Axiomatic definition of the value of a matrix game,”Theory of Probability and its Applications, 8, 304–307.

Von Neumann, J. (1928): "Zur Theorie der Gesellschaftsspiele," Mathematische Annalen, 100, 295–320.

Vrieze, O.J. (1987): Stochastic Games with Finite State and Action Spaces. CWI-tract 33, Center for Mathematics and Computer Science, Amsterdam.

Vrieze, O.J., and F. Thuijsman (1987): "Stochastic games and optimal stationary strategies, a survey," in: Domschke, W., et al. (eds.), Methods of Operations Research, 57, 513–529.

Vrieze, O.J., and F. Thuijsman (1989): “On equilibria in repeated gameswith absorbing states,” Int J Game Theory, 18, 293–310.


Vrieze, O.J., and S.H. Tijs (1980): “Relations between the game pa-rameters, value and optimal strategy spaces in stochastic games andconstruction of games with given solution,” J. Opt. Theory Appl., 31,501–513.

Vrieze, O.J., and S.H. Tijs (1982): “Fictitious play applied to sequencesof games and discounted stochastic games,” Int. J. Game Theory, 11,71–85.

Vrieze, O.J., S.H. Tijs, T.E.S. Raghavan, and J.A. Filar (1983): “A finitealgorithm for the switching control stochastic game,” OR Spektrum, 5,15–24.


Chapter 13

Linear (Semi-)Infinite Programs and Cooperative Games

BY JUDITH TIMMER AND NATIVIDAD LLORCA

13.1 Introduction

In 1975 Stef Tijs defended his Ph.D. thesis, entitled "Semi-infinite and infinite matrix games and bimatrix games". Following this, his paper "Semi-infinite linear programs and semi-infinite matrix games" was published in 1979. Both works deal with programs and noncooperative games in a (semi-)infinite setting. Several decades later these works and Stef Tijs himself inspired some researchers from Italy, Spain and The Netherlands to study cooperative games arising from linear (semi-)infinite programs. These studies were performed under the inspiring supervision of Stef Tijs.

While studying these games it turned out that results from Tijs (1975, 1979) were very useful. For example, the critical number that is introduced in Tijs (1975) shows up again in the study of semi-infinite assignment problems (see Section 13.3.1), and some results about semi-infinite linear programs in Tijs (1979) are useful when studying semi-infinite linear production problems, as in Section 13.2.2. Hence, the early work of Stef provided a basis for studying cooperative games in a semi-infinite setting.

The aim of this work is to provide the reader with an overview of cooperative games arising from linear (semi-)infinite programs. In Section 13.2 semi-infinite programs and their corresponding games are presented, like flow games (Section 13.2.1), linear production games (Section 13.2.2) and games involving the linear transformation of products (Section 13.2.3). Section 13.3 concentrates on games arising from infinite programs like assignment games (Section 13.3.1) and transportation games (Section 13.3.2). For transportation games, a distinction is made between the transportation of an indivisible good and of a divisible good (both in Section 13.3.2).

P. Borm and H. Peters (eds.), Chapters in Game Theory, 267–285. © 2002 Kluwer Academic Publishers. Printed in the Netherlands.

13.2 Semi-infinite Programs and Games

In this section we discuss three types of cooperative games that arise from semi-infinite problems. These are flow games, linear production games and games involving the linear transformation of products. All these games and their underlying problems have in common that they deal with a finite number of agents while another component is available in a countably infinite amount. For example, we consider linear production problems with a countably infinite number of production techniques. The main result is that each of these games has a non-empty core, just like its finite counterpart.

As far as we know, next to these three types a few other problems and corresponding cooperative games have been studied in a semi-infinite setting. Connection problems and games are studied in a semi-infinite setting in Fragnelli et al. (1999), and recently Fragnelli (2001) obtained some results for semi-infinite sequencing problems.

13.2.1 Flow games

Flow games in an infinite setting are introduced in Fragnelli et al. (1999). The authors consider a network with an infinite number of arcs that connect the source to the sink. These arcs are owned by a finite number of players. Each arc has one owner. A group of players can pool their privately owned arcs and thus obtains a subnetwork of the original network. Their goal is to maximize the flow on this subnetwork given the capacities of the arcs.

Formally, a network with privately owned arcs is described by a tuple H = (M, A, k, N, o, s, t), where M is a countable set of nodes and A is an infinite collection of arcs; each arc is an ordered pair (m, m′) of nodes. If there is an arc (m, m′), then there can be a flow from node m to node m′, but not from node m′ to node m. Multiple arcs between two nodes are also allowed. The map k assigns to each arc a its capacity k(a) > 0. N is a finite set of players and the map o assigns to each arc a the player o(a) ∈ N who owns it. Finally, s and t are special nodes in M that are called the source and the sink, respectively.

Given a network H, define for each node m the sets

  in(m) = {a ∈ A : a = (m′, m) for some m′ ∈ M}  and  out(m) = {a ∈ A : a = (m, m′) for some m′ ∈ M}.

The sets in(m) and out(m) denote the set of arcs entering and leaving node m, respectively. A flow on network H is a map f : A → [0, ∞) such that

  f(a) ≤ k(a)  for all arcs a ∈ A,

that is, a flow on an arc is restricted by its capacity, and

  Σ_{a ∈ in(m)} f(a) = Σ_{a ∈ out(m)} f(a)  for all nodes m ∈ M \ {s, t},

that is, at each node the incoming flow is as large as the outgoing flow. The value val(f) of a flow is defined as the outgoing flow at the source, val(f) = Σ_{a ∈ out(s)} f(a). In order to achieve results like those for finite flows, the authors assume that the total capacity of the arcs is finite:

  Σ_{a ∈ A} k(a) < ∞.   (13.1)

Given this assumption they show that each flow has a finite value and that there exists a flow f* that attains the maximal value on this network, that is, val(f*) ≥ val(f) for all flows f. Denote this maximal value by val*(H).

The flow game (N, v) corresponding to the network H is defined as follows. Let S ⊆ N be a coalition of players. Let H(S) be the subnetwork of H obtained by removing all arcs not owned by players in S, that is, all arcs a with o(a) ∉ S. The value of coalition S is the maximal value of a flow on its subnetwork: v(S) = val*(H(S)).

The main result in Fragnelli et al. (1999) for flow games is that the core

  C(v) = {x ∈ R^N : Σ_{i∈N} x_i = v(N) and Σ_{i∈S} x_i ≥ v(S) for all coalitions S}

of the flow game is non-empty. Hence, there exists an allocation of v(N) to the players in N such that no coalition S has an incentive to deviate, because they receive at least as much as they can obtain on their own.

Theorem 13.1 (Fragnelli et al., 1999) Given a network H satisfying (13.1), the game (N, v) has a non-empty core.
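The construction above can be illustrated on a small finite network. The following Python sketch is purely illustrative: the network data, the player set and the candidate core allocation are invented for this example and do not come from Fragnelli et al. (1999).

```python
from collections import defaultdict, deque
from itertools import combinations

# Arcs of a small finite network: (tail, head, capacity, owning player).
ARCS = [("s", "a", 2, 1), ("a", "t", 2, 2), ("s", "t", 1, 3)]
PLAYERS = (1, 2, 3)

def max_flow(arcs, source="s", sink="t"):
    """Value of a maximum flow, via Edmonds-Karp on a capacity dict."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v, c, _ in arcs:
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)  # residual direction
    flow = 0
    while True:
        # BFS for an augmenting path in the residual network.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Bottleneck along the path, then update residual capacities.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= bottleneck
            cap[(v, u)] += bottleneck
        flow += bottleneck

def v(coalition):
    """Worth of a coalition: max flow on the arcs its members own."""
    return max_flow([a for a in ARCS if a[3] in coalition])

values = {S: v(S) for r in range(1, 4) for S in combinations(PLAYERS, r)}
x = {1: 2, 2: 0, 3: 1}  # candidate allocation read off a minimum cut
in_core = all(sum(x[i] for i in S) >= val for S, val in values.items()) \
    and sum(x.values()) == values[PLAYERS]
print(values[PLAYERS], in_core)
```

Here the allocation assigns to each player the capacity of the minimum-cut arcs he owns, which is one standard way of exhibiting a core element of a finite flow game.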

13.2.2 Linear Production Games

A semi-infinite linear production (LP) problem describes a production situation with a countably infinite number of linear production techniques and a finite number of resources. Each producer owns a bundle of resources. He may use these to produce on his own or to cooperate with other producers. In the latter case the cooperating producers pool their resources and act like one large producer. All produced goods can be sold on the market at exogenous market prices. This means that the producers cannot influence the market prices. It is assumed that there are no production costs. The goal of each producer is to maximize the total revenue of the products given the amount of resources that are available.

Such a problem is described by a tuple (N, R, D, A, B, p), where N is the finite set of producers. Let R be the finite set of resources and D = {1, 2, 3, …} the countably infinite set of linear production techniques. The matrix A is the technology matrix, where element a_{rd} describes how much of resource r is needed to produce 1 unit of product d. Because production techniques are linear one needs, for example, 5a_{rd} units of resource r to produce five units of product d, and so on. The resource matrix B tells us that producer i has b_{ri} units of resource r. The vector of market prices of the produced goods is denoted by p.

For the moment assume that for any resource there is at least one producer who owns a positive quantity of it. Furthermore, if product d has a positive market price p_d, then at least one resource is needed to produce d. In other words, there is a resource r such that a_{rd} > 0. Finally, all the producers take the market prices as given and all products can be sold on the market.

The cooperative LP game corresponding to such a semi-infinite LP problem is denoted by the pair (N, v), with the function v defined by

  v(S) = sup { pᵀx : Ax ≤ b(S), x ≥ 0 }

for all coalitions S of producers, where b(S) = Σ_{i∈S} b^i is the vector of resources owned by S, b^i denoting the i-th column of B. Hence, the value of coalition S is equal to the maximal revenue it can achieve from selling the products that are produced from its resources. These LP games are studied by Fragnelli et al. (1999) and Tijs et al. (2001). Specific attention is paid to the question when a core-element can be constructed via a related dual linear program. Owen (1975) shows that this is always possible for LP games corresponding to finite LP problems. His argument goes along the following lines. The linear program that determines the value of coalition N is

  max { pᵀx : Ax ≤ b(N), x ≥ 0 }

and its related dual linear program equals

  min { yᵀb(N) : yᵀA ≥ pᵀ, y ≥ 0 }.

The assumptions on A, B and p and the finiteness of these programs imply that both programs have the same finite value. Owen (1975) shows that for any optimal solution y* of the dual program, the vector z defined by z_i = (b^i)ᵀy* is a core-element of the corresponding LP game. Thus one can find a core-element with little effort, since only one linear program has to be solved instead of determining the values v(S) for all coalitions S in order to calculate the core. For games corresponding to semi-infinite LP problems, this construction of core-elements need not always work, as the example below shows.
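Owen's construction for a finite LP problem can be sketched numerically. In the following Python sketch the two-producer data and the little vertex-enumeration LP solver are illustrative assumptions, not taken from Owen (1975):

```python
from itertools import combinations

def lp2_max(p, G, h, eps=1e-9):
    """Maximize p.x over {x in R^2 : G x <= h, x >= 0} by enumerating
    vertices (intersections of pairs of constraint lines). Assumes the
    optimum is attained at a vertex, which holds for the data below."""
    rows = [(g, b) for g, b in zip(G, h)] + [((-1, 0), 0), ((0, -1), 0)]
    best = None
    for (g1, b1), (g2, b2) in combinations(rows, 2):
        det = g1[0] * g2[1] - g1[1] * g2[0]
        if abs(det) < eps:
            continue
        x = ((b1 * g2[1] - b2 * g1[1]) / det, (g1[0] * b2 - g2[0] * b1) / det)
        if all(g[0] * x[0] + g[1] * x[1] <= b + eps for g, b in rows):
            val = p[0] * x[0] + p[1] * x[1]
            if best is None or val > best[0]:
                best = (val, x)
    return best

# Finite LP game data (invented): technology matrix A (resources x
# products), resource bundles b^1, b^2, market prices p.
A = ((1, 2), (2, 1))
b = {1: (3, 0), 2: (0, 3)}
prices = (3, 3)

def resources(S):
    return tuple(sum(b[i][r] for i in S) for r in range(2))

def v(S):  # worth of coalition S: max revenue from pooled resources
    return lp2_max(prices, A, resources(S))[0]

# Owen's construction: solve the dual for the grand coalition,
#   min y.b(N)  s.t.  A^T y >= p, y >= 0,
# rewritten as a maximization so that lp2_max can be reused.
bN = resources((1, 2))
At = tuple(zip(*A))
dual_val, y = lp2_max((-bN[0], -bN[1]),
                      tuple((-row[0], -row[1]) for row in At),
                      tuple(-pj for pj in prices))
z = {i: sum(b[i][r] * y[r] for r in range(2)) for i in (1, 2)}
print(round(v((1, 2)), 6), round(-dual_val, 6), z)
```

In the finite case the primal and dual values coincide, so z distributes exactly v(N); Example 13.2 below shows how this breaks down in the semi-infinite case.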


Example 13.2 (Tijs, 1979) Consider a semi-infinite LP problem in which the value of the linear program for coalition N is unequal to the value of the dual program for coalition N, which equals 2. We are confronted with a so-called duality gap, that is, the linear program for N and its dual program do not have the same value. Consequently, we cannot construct a core-element in the same fashion as Owen did, because such a core-element would not satisfy Σ_{i∈N} z_i = v(N).

Fragnelli et al. (1999) give two conditions on semi-infinite LP problems such that there is no duality gap for coalition N and a core-element can be constructed via the dual program.

Theorem 13.3 (Fragnelli et al., 1999) Let (N, R, D, A, B, p) be a semi-infinite LP problem such that the market prices are bounded from above and there is a minimal positive amount of resources that is useful for production. Then the corresponding LP game has a non-empty core.

The first condition says that all market prices have a finite upper bound, and according to the second condition there is a minimal amount of resources that is useful for production.

A more general analysis of semi-infinite LP problems and games can be found in Tijs et al. (2001). They study semi-infinite LP problems under minimal assumptions on the nonnegative data A, B and p. Let v′(N) be the value of the dual program for coalition N. Further, let

  Owen = { z : z_i = (b^i)ᵀy for all i ∈ N, where y is optimal for the dual of N }

be the Owen set, which is the set of all vectors that can be constructed along the same lines as Owen did for finite LP problems, and let Core be the core of the corresponding LP game. The relations between the Owen set and the core are summarized in the following theorem.


Theorem 13.4 (Tijs et al., 2001) Let (N, R, D, A, B, p) be a semi-infinite LP problem. If v(N) = v′(N), then Owen ⊆ Core. Otherwise, Owen ∩ Core = ∅.

Hence, if there is no duality gap, that is, if v(N) = v′(N), then an element of the Owen set is a core-element of the game. Otherwise, one cannot use the Owen set to find core-elements. Finally, if coalition N has a finite value in the corresponding game then the core is non-empty, as is shown below.

Theorem 13.5 (Tijs et al., 2001) Let (N, R, D, A, B, p) be a semi-infinite LP problem with corresponding LP game (N, v). If v(N) < ∞, then the game has a non-empty core.

13.2.3 Games Involving Linear Transformation of Products

Problems involving the linear transformation of products (LTP) are introduced by Timmer et al. (2000a) as generalizations of LP problems. Two assumptions for LP problems are that all producers have the same production techniques (represented by the production matrix A) and that any production technique has only one output good. These assumptions do not hold for LTP problems. Hence, in an LTP problem different producers may have different production (transformation) techniques and such a transformation technique may have by-products.

A semi-infinite extension of LTP problems, where the number of transformation techniques is countably infinite, is studied in Timmer et al. (2000b) and Tijs et al. (2001). A semi-infinite LTP problem is denoted by a tuple (N, D, M, A, w, p), where N is the finite set of producers. Let D_i be the set of all transformation techniques of producer i. Then D = ∪_{i∈N} D_i is the infinite set of available techniques. Let M be the finite set of goods. Then A denotes the transformation matrix, where a_d is the column corresponding to transformation technique d ∈ D. Each row in A corresponds to a good in M. The resource bundle of producer i is w^i, and p denotes the exogenous market prices.

Positive elements in A indicate output goods and negative elements input goods, and the resource bundles w^i can be gathered in a resource matrix G whose i-th column is w^i. The resources owned by a coalition S of producers are w(S) = Σ_{i∈S} w^i. Denote by D(S) the set of all techniques available for coalition S; thus D(S) = ∪_{i∈S} D_i. Let x_d ≥ 0 be the activity level, or productivity factor, of technique d ∈ D. For the moment assume that each transformation technique uses at least one input good to produce at least one output good.

In the corresponding LTP game (N, v) the value of coalition S is the smallest upper bound of its profit,

  v(S) = sup { pᵀ(w(S) + Ax) : x_d ≥ 0 for all d ∈ D(S), x_d = 0 for all d ∉ D(S), w(S) + Ax ≥ 0 }.   (13.2)

In Timmer et al. (2000b) the authors are interested in finding a core-element of the LTP game via a related dual program, as is done in the previous subsection. If one considers coalition N, then (13.2) reduces to

  v(N) = sup { pᵀ(w(N) + Ax) : x ≥ 0, w(N) + Ax ≥ 0 },

because all techniques belong to D(N) = D. The dual program related to this problem is an infimum program. If y* is an optimal solution of this program and if there is no duality gap, that is, if v(N) equals the value of the dual program, then z defined by z_i = (w^i)ᵀy* for all i ∈ N is a core-element of the LTP game. Unfortunately, there need not always exist an optimal solution of the dual program, and also the absence of a duality gap is not guaranteed.

Example 13.6 (Timmer et al., 2000b) Consider a semi-infinite LTP problem with a single producer and two goods. For suitable data, the value v(N) is finite and attained by an optimal activity vector, while the value of the dual program is +∞ since there exists no feasible solution. Hence, there is no optimal solution to the dual program and there is a duality gap.


This example indicates that we need conditions on a semi-infinite LTP problem if we want to find a core-element of the corresponding game via the dual program. In Timmer et al. (2000b) two such sets of conditions are presented. For the first set, let 0 be the zero-vector in R^M and e_j the j-th unit vector in R^M, with j-th coordinate equal to 1 and all other coordinates equal to 0. Denote by CC(B) the convex cone generated by an (infinite) set B of vectors in R^M. Define K as the convex cone generated by the columns of the transformation matrix, and let cl(K) be the closure of the set K. Now one can show the following result.

Theorem 13.7 (Timmer et al., 2000b) Let (N, D, M, A, w, p) be a semi-infinite LTP problem. If every good is owned in a positive amount by some producer, and no positive profit can be generated from the zero resource bundle, then the corresponding LTP game has a non-empty core.

The first condition says that for any good there is a producer who owns a positive amount of it, and according to the second condition, the producers cannot earn a positive profit when using no inputs. The proof of this theorem shows that these conditions are sufficient to allow us to construct a core-element via the dual program.

A second set of conditions is similar to the conditions in Theorem 13.3 for semi-infinite LP problems.

Theorem 13.8 (Timmer et al., 2000b) Let (N, D, M, A, w, p) be a semi-infinite LTP problem such that the market prices are bounded from above and there is a minimal positive amount of each good that is useful. Then the corresponding LTP game and all its subgames have a non-empty core.


Tijs et al. (2001) also study semi-infinite LTP problems and related games, but they only require that N is a finite set of agents and that D = {1, 2, 3, …}. These conditions are sufficient to show the following result.

Theorem 13.9 (Tijs et al., 2001) Let (N, D, M, A, w, p) be a semi-infinite LTP problem and (N, v) its corresponding game. If v(N) < ∞, then the game has a non-empty core.

13.3 Infinite Programs and Games

This section is devoted to semi-infinite assignment and transportation problems and corresponding games. These problems involve linear programs with an infinite number of variables and of constraints. Nevertheless the corresponding games are referred to as being semi-infinite because their player sets are partitioned into two disjoint subsets: one set is finite while the other is countably infinite. Also in these problems duality gaps may arise. Hence, in all cases considered below one of the first results will be about the absence of such a gap. Further, the emphasis lies on showing the existence of core-elements.

13.3.1 Assignment Games

In finite assignment problems two finite sets of agents have to be matched in such a way that the reward obtained from these matchings is as large as possible. Shapley and Shubik (1972) study these problems in a game-theoretic setting, resulting in cooperative assignment games. A semi-infinite extension of assignment problems and games is given in Llorca et al. (1999). An example of such a semi-infinite assignment problem is the following. Consider a textile firm whose marketing policy is to produce unique pieces of textile. The firm owns a finite number of printing machines that can be programmed to print a piece of fabric. There are an infinite number of patterns available. The machines can print all of these patterns, but with different (bounded) rewards. The firm wants to maximize the total reward from matching machines with patterns. Therefore, it has to tackle an assignment problem in which there are a finite number of one type (machines) and an infinite number of the other type (possible designs).

A semi-infinite (bounded) assignment problem is denoted by a tuple (M, W, A), where M is a finite set of agents of one type and W = {1, 2, 3, …} is the countably infinite set of agents of the other type. Matching agent i ∈ M to agent j ∈ W results in the nonnegative reward a_{ij}, which is bounded from above. All these rewards are gathered in the matrix A. In the sequel we write P to denote the assignment problem (M, W, A).

An assignment plan is a matrix X with 0,1-entries, where x_{ij} = 1 if i ∈ M is assigned to j ∈ W and x_{ij} = 0 otherwise. Each agent i ∈ M will be assigned to at most one agent j ∈ W and vice versa; therefore Σ_{j∈W} x_{ij} ≤ 1 for all i and Σ_{i∈M} x_{ij} ≤ 1 for all j. Then

  v(P) = sup { Σ_{i∈M} Σ_{j∈W} a_{ij} x_{ij} : X an assignment plan }

is the smallest upper bound of the benefit that the agents in M and W together can achieve.

Given an assignment problem P = (M, W, A), the corresponding semi-infinite bounded assignment game is a cooperative game (N, v) with countably infinite player set N = M ∪ W; that is, each player corresponds to an agent in M or to an agent in W. Let S be a coalition of players in N and define S_M = S ∩ M and S_W = S ∩ W. Then the worth of coalition S is v(S) = 0 if S_M = ∅ or S_W = ∅: if there is only one type of agents present then no matchings can be made. Otherwise, v(S) = v(P_S), where P_S denotes the (semi-infinite) assignment problem (S_M, S_W, A) restricted to the agents in S.

The value v(N) of the grand coalition N is determined by a linear program, the so-called primal program. According to Sánchez-Soriano et al. (2001a) the condition that the entries of X be 0 or 1 may be replaced by x_{ij} ≥ 0. When doing so, the corresponding dual program is

  inf { Σ_{i∈M} u_i + Σ_{j∈W} w_j : u_i + w_j ≥ a_{ij} for all i ∈ M and j ∈ W, u ≥ 0, w ≥ 0 }.

Both the primal and the dual program have an infinite number of variables and an infinite number of constraints. Hence, they are infinite programs, for which a gap between the optimal values can appear. Therefore, one would like to know if the primal and the dual program in semi-infinite assignment problems have the same value and if there exists an optimal solution of the dual problem. If so, then one can construct a core-element like Owen did for LP problems.


Theorem 13.10 (Llorca et al., 1999) Let P be a semi-infinite bounded assignment problem. Then there is no duality gap, and there exists an optimal solution for the dual program.

A corollary of this theorem is that semi-infinite assignment games have a non-empty core.

To continue the analysis of semi-infinite bounded assignment problems, the so-called critical number is introduced. The origin of this number is based on a similar concept introduced by Tijs (1975) for (semi-)infinite matrix games and bimatrix games. For assignment problems it is defined as follows. If there exists a k with v(P_k) = v(P), where P_k denotes the finite assignment problem (M, {1, …, k}, A) restricted to the first k agents in W, then the critical number is the smallest such k. Otherwise, the critical number is infinite. The critical number tells whether there exists a finite subproblem with the same value as P or not. In terms of semi-infinite assignment games, a finite critical number says that there exists an optimal assignment plan. If the critical number is infinite then there exists no optimal assignment plan, but we can use a finite auxiliary matrix H corresponding to the matrix A to approach the value v(P). For this we need a new concept, namely the hard-choice number. The next example explains these new terms.

Example 13.11 Let M = {1, 2}. Agent 1 ∈ M attains a maximal value of 1 if she is assigned to agent 1 or 3 in W; these agents in W are the best two choices for agent 1. However, there is no largest value that agent 2 ∈ M can attain, because the reward reaches the value 2 from below as the index of her partner in W goes to infinity. Hence, agent 2 has no best choice. The hard-choice number is the smallest number in W such that all best choices of the agents in M occur among the agents in W up to this number. Further, the critical number is infinite, which means that there exists no optimal assignment plan. Therefore we construct a finite auxiliary problem to approach the value v(P): the finite assignment


problem in which an artificial agent is added to W and the finite matrix H is derived from A, the rewards of the artificial agent capturing the values that the agents in M can approach but not attain. In this example the vertical line in H separates the artificial agent, agent 4, from the others, and the value of the auxiliary problem equals v(P). An optimal assignment plan in the auxiliary problem assigns each agent in M to a column of H. From this, an assignment plan Y for P can be defined whose total reward comes arbitrarily close to v(P).

In general, if the critical number is infinite, then an almost optimal assignment plan can be obtained with the aid of the corresponding finite assignment problem.

Theorem 13.12 (Llorca et al., 1999) Let P be a semi-infinite bounded assignment problem with infinite critical number and let P_H be the corresponding finite auxiliary problem. Then the values of P and P_H coincide. From each optimal assignment plan for P_H and for each ε > 0, one can determine an assignment plan for P whose total reward is within ε of v(P).
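The role of the critical number can be illustrated with a small computation. In the Python sketch below the reward function only mimics the qualitative features of Example 13.11 (the book's matrix entries are not reproduced here): the truncated values v(P_k) increase towards the supremum 3 without ever attaining it, so no optimal assignment plan exists and the critical number is infinite.

```python
from itertools import permutations

# Invented rewards with the qualitative features of Example 13.11:
# agent 1 of M gets at most 1, attained at columns 1 and 3; agent 2's
# rewards approach 2 from below as the column index grows.
def reward(i, j):  # i in {1, 2} (agents in M), j = 1, 2, 3, ... (agents in W)
    if i == 1:
        return 1.0 if j in (1, 3) else 0.5
    return 2.0 - 1.0 / j

def truncated_value(k):
    """v(P_k): best matching of M = {1, 2} into columns {1, ..., k}."""
    return max(reward(1, j1) + reward(2, j2)
               for j1, j2 in permutations(range(1, k + 1), 2))

vals = [truncated_value(k) for k in (2, 5, 10, 100)]
print(vals)  # strictly increasing, approaching the supremum 3 from below
```

Because the supremum 3 is never attained by any finite truncation, the supremum over the whole problem is approached but not reached, which is exactly the situation in which the auxiliary matrix H is needed.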

13.3.2 Transportation Games

Sánchez-Soriano et al. (2001b) introduce finite transportation games corresponding to transportation problems where a good has to be transported from suppliers to demanders. A semi-infinite extension of these games with indivisible goods is studied by Sánchez-Soriano et al. (2001a). Sánchez-Soriano et al. (2000) deals with semi-infinite transportation situations with divisible goods.

In a semi-infinite transportation problem the demand for a single good at a countably infinite number of places has to be covered from a finite number of supply points. The transportation of one unit of the good from a supply location to a demand point generates a certain (bounded) profit and the goal of the suppliers and demanders is to maximize the total profit derived from transportation.


Let P be the finite set of supply points and Q = {1, 2, 3, …} the countably infinite set of demand points. Supply point p ∈ P has s_p units of the good available for transport and the demand at point q ∈ Q equals d_q units. Both s_p and d_q are positive numbers for all p and q. The profit t_{pq} of transporting one unit of the good from supplier p to demander q is a nonnegative real number which is bounded from above. Thus, a semi-infinite bounded transportation problem can be described by the 5-tuple (P, Q, T, s, d), where T is the matrix of profits, and s and d are the supply and demand vectors, respectively. Denote by TP the transportation problem (P, Q, T, s, d).

Indivisible goods

In this subsection the good to be transported is indivisible. Therefore the supply and demand vectors s and d will only consist of positive integer numbers. A transportation plan is a matrix X with nonnegative integer entries, where x_{pq} is the number of units of the good that will be transported from supply point p to demand point q. Each supply point p cannot supply more than s_p units of the good, Σ_{q∈Q} x_{pq} ≤ s_p. Similarly, each demand point q wants to receive at most d_q units, Σ_{p∈P} x_{pq} ≤ d_q. Thus the maximal profit that the supply and demand points can achieve is

  v(TP) = sup { Σ_{p∈P} Σ_{q∈Q} t_{pq} x_{pq} : X a transportation plan }.

The semi-infinite transportation game corresponding to a semi-infinite transportation problem is a cooperative game (N, v) with countably infinite player set N = P ∪ Q. Let S ⊆ N be a coalition of players and define S_P = S ∩ P and S_Q = S ∩ Q. If S_P = ∅ or S_Q = ∅ then there are no supply or demand points present in coalition S and therefore no transportation plans can be made. In this case, v(S) = 0. Otherwise, the worth of coalition S equals v(S) = v(TP_S), where TP_S is the problem restricted to the agents in coalition S.


It is shown in Sánchez-Soriano et al. (2001a) that

  inf { Σ_{p∈P} s_p u_p + Σ_{q∈Q} d_q w_q : u_p + w_q ≥ t_{pq} for all p ∈ P and q ∈ Q, u ≥ 0, w ≥ 0 }

is the dual program corresponding to the program that determines v(TP). Let v′(TP) be the value and O(TP) the set of optimal solutions of this dual program. As was done for assignment problems, also here one would like to show the existence of an optimal solution of the dual program and the absence of a duality gap. This can be done using the results from semi-infinite assignment problems, because such an assignment problem is a semi-infinite transportation problem and vice versa.

Theorem 13.13 (Sánchez-Soriano et al., 2001a) Let TP be a semi-infinite bounded transportation problem. Then v(TP) = v′(TP) and the set O(TP) of optimal dual solutions is non-empty.

Due to the absence of a duality gap, one can find a core-element of the semi-infinite transportation game via the Owen set, which for semi-infinite transportation problems is defined by

  Owen(TP) = { z : z_p = s_p u_p for all p ∈ P and z_q = d_q w_q for all q ∈ Q, where (u, w) ∈ O(TP) }.

Notice that each element of a vector (u, w) ∈ O(TP) is the mean profit that a player will receive per unit of his supply or demand. Hence, an Owen vector z is a vector of profits that the agents obtain from their supply or demand.

Theorem 13.14 (Sánchez-Soriano et al., 2001a) Let TP be a semi-infinite transportation problem and (N, v) the corresponding game. Then Owen(TP) ⊆ Core(v).

Combining the Theorems 13.13 and 13.14, one can conclude that a transportation game corresponding to a transportation problem with an indivisible good has a non-empty core.
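A small finite instance illustrates the Owen construction for transportation games. In the Python sketch below the data and the hand-solved dual solution are invented for illustration:

```python
from itertools import product as cartesian, combinations, chain

# Tiny indivisible-good instance (invented data): one supplier with 3
# units, two demanders wanting 2 units each, unit profits T[p, q].
SUPPLY = {"p1": 3}
DEMAND = {"q1": 2, "q2": 2}
T = {("p1", "q1"): 5, ("p1", "q2"): 4}

def worth(S):
    """v(S): best integer transportation plan among members of S."""
    sups = [p for p in SUPPLY if p in S]
    dems = [q for q in DEMAND if q in S]
    if not sups or not dems:
        return 0
    best = 0
    # Enumerate shipments x[p, q] within the per-pair bounds.
    pairs = [(p, q) for p in sups for q in dems]
    ranges = [range(min(SUPPLY[p], DEMAND[q]) + 1) for p, q in pairs]
    for x in cartesian(*ranges):
        ship = dict(zip(pairs, x))
        if all(sum(ship[p, q] for q in dems) <= SUPPLY[p] for p in sups) \
           and all(sum(ship[p, q] for p in sups) <= DEMAND[q] for q in dems):
            best = max(best, sum(T[pq] * units for pq, units in ship.items()))
    return best

players = list(SUPPLY) + list(DEMAND)
# Owen vector from a hand-solved dual (u_p1, w_q1, w_q2) = (4, 1, 0):
# z_p = s_p * u_p for suppliers, z_q = d_q * w_q for demanders.
z = {"p1": 3 * 4, "q1": 2 * 1, "q2": 2 * 0}
coalitions = chain.from_iterable(combinations(players, r) for r in range(1, 4))
in_core = all(sum(z[i] for i in S) >= worth(S) for S in coalitions)
print(worth(tuple(players)), in_core)
```

The Owen vector distributes the optimal profit of 14 (ship 2 units to q1 and 1 unit to q2) and no coalition can improve upon it on its own.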

Perfectly Divisible Goods

After having studied semi-infinite transportation problems with indivisible goods, attention will be paid to problems with perfectly divisible goods like, e.g., gas, electricity or sand. These goods need not be supplied in integer units and therefore the elements of the supply and demand vectors s and d are positive real numbers and a transportation plan X is a matrix with entries x_{pq} ≥ 0. A transportation problem with a perfectly divisible good is called a continuous transportation problem to distinguish it from problems with indivisible goods.

Using the absence of a duality gap for semi-infinite transportation problems with indivisible goods, one can establish that transportation problems with perfectly divisible goods have no duality gap either.

Theorem 13.15 (Sánchez-Soriano et al., 2000) Let TP be a semi-infinite continuous transportation problem. Then v(TP) = v′(TP).

Showing the existence of a core-element for these problems turned out to be not that easy. An intermediate result is that so-called ε-core elements exist. Given an arbitrary cooperative game (N, v), a vector x is said to be an ε-core element of this game if

  Σ_{i∈N} x_i = v(N)   and   Σ_{i∈S} x_i ≥ v(S) − ε

for all coalitions S ⊆ N. Thus, an ε-core element shares v(N) among the players in such a way that a coalition S can gain at most ε by splitting off.
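The ε-core conditions are easy to check mechanically for a finite game. A minimal Python sketch follows; the three-player majority game used below is a standard textbook example with an empty core, not taken from Sánchez-Soriano et al. (2000):

```python
from itertools import chain, combinations

def is_eps_core(x, v, players, eps):
    """Check the epsilon-core conditions: x distributes v(N) exactly
    and every coalition S receives at least v(S) - eps."""
    coalitions = chain.from_iterable(
        combinations(players, r) for r in range(1, len(players) + 1))
    total_ok = abs(sum(x[i] for i in players) - v(tuple(players))) < 1e-9
    return total_ok and all(
        sum(x[i] for i in S) >= v(S) - eps - 1e-9 for S in coalitions)

# Three-person majority game: any two players can secure the whole unit.
def v(S):
    return 1.0 if len(S) >= 2 else 0.0

x = {1: 1 / 3, 2: 1 / 3, 3: 1 / 3}
print(is_eps_core(x, v, (1, 2, 3), 0.0),    # core itself is empty
      is_eps_core(x, v, (1, 2, 3), 1 / 3))  # but the 1/3-core is not
```

Every two-player coalition falls short of its worth 1 by exactly 1/3 under the equal split, so the equal split is an ε-core element precisely for ε ≥ 1/3.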

Theorem 13.16 (Sánchez-Soriano et al., 2000) Let ε > 0 and let (N, v) be the cooperative game corresponding to the semi-infinite continuous transportation problem TP. Then there exists an ε-core element of the game (N, v).

Sánchez-Soriano et al. (2000) show for two types of semi-infinite continuous transportation problems that the corresponding games have a non-empty core. The first type are problems with a finite total demand.

Theorem 13.17 (Sánchez-Soriano et al., 2000) Let TP be a semi-infinite continuous transportation problem with Σ_{q∈Q} d_q < ∞. Then the corresponding transportation game has a non-empty core.


If the total demand is not finite, an extra condition is needed to ensure the existence of core-elements. This condition is the following: there exists a positive number δ such that

  d_q ≥ δ for all q ∈ Q.   (13.3)

The number δ may be interpreted as the minimal amount of the good that is useful. Sánchez-Soriano et al. (2000) show that because of (13.3) one can find a specific finite transportation problem with the same value as the original problem TP. Now the following result holds.

Theorem 13.18 (Sánchez-Soriano et al., 2000) Let TP be a semi-infinite continuous transportation problem with Σ_{q∈Q} d_q = ∞ that satisfies (13.3). Then there exists an optimal solution of the dual program.

Consequently, the Owen set Owen(TP) is non-empty. Finally, the absence of a duality gap for semi-infinite continuous transportation problems implies that the Owen set lies in the core of the corresponding game.

Theorem 13.19 (Sánchez-Soriano et al., 2000) Let TP be a semi-infinite continuous transportation problem with Σ_{q∈Q} d_q = ∞ that satisfies (13.3) and let (N, v) be the corresponding game. Then the core of this game is non-empty.

Theorems 13.17 and 13.19 present the two types of semi-infinite continuous transportation problems for which the non-emptiness of the core of the corresponding game has been shown.

13.4 Concluding remarks

In this chapter we presented several cooperative games arising from linear semi-infinite and infinite programs. Starting point was the Ph.D. thesis of Stef Tijs (1975) on (semi-)infinite matrix games and bimatrix games, and a subsequent paper, Tijs (1979). Although both these works deal with noncooperative games, they turned out to be inspiring and useful in the study of semi-infinite cooperative games.

When extending a problem to a (semi-)infinite setting, the existence of core-elements of the corresponding game is not that obvious anymore. One can try to prove the non-emptiness of the core in a direct way, as in Theorems 13.5 and 13.9, or via the dual program and the Owen set. The latter approach requires the absence of a duality gap, another result that had to be shown using tools from linear (semi-)infinite programming.

These two approaches do not always work, as is shown in the previous section. There Sánchez-Soriano et al. (2000) were not able to show that the game corresponding to a semi-infinite continuous transportation problem with infinite total demand and no positive lower bound for the demands has a non-empty core. Future research should try to solve this.

References

Fragnelli, V. (2001): “On the balancedness of semi-infinite sequencing games,” Preprint N. 442, Dipartimento di Matematica dell’Università di Genova.

Fragnelli, V., F. Patrone, E. Sideri, and S.H. Tijs (1999): “Balanced games arising from infinite linear models,” Mathematical Methods of Operations Research, 50, 385–397.

Llorca, N., S. Tijs, and J. Timmer (1999): “Semi-infinite assignment problems and related games,” CentER Discussion Paper 9974, Tilburg University, The Netherlands.

Owen, G. (1975): “On the core of linear production games,” Mathematical Programming, 9, 358–370.

Sánchez-Soriano, J., N. Llorca, S.H. Tijs, and J. Timmer (2000): “On the core of semi-infinite transportation games with divisible goods,” to appear in European Journal of Operational Research.

Sánchez-Soriano, J., N. Llorca, S.H. Tijs, and J. Timmer (2001a): “Semi-infinite assignment and transportation games,” in: M.A. Goberna and M.A. López (eds.), Semi-Infinite Programming: Recent Advances. Dordrecht: Kluwer Academic Publishers, 349–362.

Sánchez-Soriano, J., M.A. López, and I. García-Jurado (2001b): “On the core of transportation games,” Mathematical Social Sciences, 41, 215–225.

Shapley, L.S., and S. Shubik (1972): “The assignment game I: the core,” International Journal of Game Theory, 1, 111–130.


Tijs, S.H. (1975): Semi-Infinite and Infinite Matrix Games and Bimatrix Games. Ph.D. dissertation, University of Nijmegen, The Netherlands.

Tijs, S.H. (1979): “Semi-infinite linear programs and semi-infinite matrix games,” Nieuw Archief voor Wiskunde, XXVII, 197–214.

Tijs, S.H., J. Timmer, N. Llorca, and J. Sánchez-Soriano (2001): “The Owen set and the core of semi-infinite linear production situations,” in: M.A. Goberna and M.A. López (eds.), Semi-Infinite Programming: Recent Advances. Dordrecht: Kluwer Academic Publishers, 365–386.

Timmer, J., P. Borm, and J. Suijs (2000a): “Linear transformation of products: games and economies,” Journal of Optimization Theory and Applications, 105, 677–706.

Timmer, J., N. Llorca, and S. Tijs (2000b): “Games arising from infinite production situations,” International Game Theory Review, 2, 97–106.


Chapter 14

Population Uncertainty and Equilibrium Selection: a Maximum Likelihood Approach

BY MARK VOORNEVELD AND HENK NORDE

14.1 Introduction

In games with incomplete information (Harsanyi, 1967–1968) as usually studied by game theorists, the characteristics or types of the participating players are possibly subject to uncertainty, but the number of players is common knowledge. Recently, however, Myerson (1998a, 1998b, 1998c, 2000) and Milchtaich (1997) proposed models for situations—like elections and auctions—in which it may be inappropriate to assume common knowledge of the player set. In such games with population uncertainty, the set of actual players and their preferences are determined by chance according to a commonly known probability measure (a Poisson distribution in Myerson’s work, a point process in Milchtaich’s paper) and players have to choose their strategies before the player set is revealed.

After the introduction of the maximum likelihood principle by R.A. Fisher in the early 1920's (see Aldrich, 1997, for an interesting historical account), the method of selection on the basis of what is most likely to be right has gained tremendous popularity in the field of science dealing with uncertainty. Gilboa and Schmeidler (1999) recently provided an axiomatic foundation for rankings according to the likelihood function.

P. Borm and H. Peters (eds.), Chapters in Game Theory, 287–314. © 2002 Kluwer Academic Publishers. Printed in the Netherlands.

A first major topic of this chapter is the introduction of a general class of games with population uncertainty, including the Poisson games of Myerson (1998c) and the random-player games of Milchtaich (1997). In line with the maximum likelihood principle, the present chapter stresses those strategy profiles in a game with population uncertainty that are most likely to yield an equilibrium in the game selected by chance. Maximum likelihood equilibria were introduced in Borm et al. (1995a) in a class of Bayesian games.

The σ-algebra underlying the chance event that selects the actual game to be played may be too coarse to make the event in which a specific strategy profile yields an equilibrium measurable. A common mathematical approach (also used in a decision theoretic framework; cf. Fagin and Halpern, 1991) to assign probabilities to such events is to use the inner measure induced by the probability measure. Roughly, the inner measure of an event E is the probability of the largest measurable event included in E.

Under mild topological restrictions, an existence result for maximum likelihood equilibria is derived. Since the result establishes the existence of a maximum of the likelihood function, it differs significantly from standard equilibrium existence results that usually rely on a fixed point argument.

The use of inner measures is intuitively appealing and avoids measurability conditions. Still, the measurability issue is briefly addressed and it is shown that under reasonable restrictions the inner measure can be replaced by the probability measure underlying the chance event that selects the actual game. Moreover, it is shown that games with population uncertainty can be used to model situations in which, in addition to the set of players and their preferences, also the set of feasible actions of each player is subject to uncertainty. This captures a common problem in decision making, namely the situation in which decision makers have to plan or decide on their course of action, while still uncertain about contingencies that may make their choices impossible to implement.

A second major topic of this chapter is the use of maximum likelihood equilibria as an equilibrium selection device for finite strategic games. A finite strategic game can be perturbed by assuming that with a commonly known probability distribution there are trembles in the payoff functions of the players. We take two different approaches to equilibrium selection. In the first approach we search for strategy profiles that are most likely to be a Nash equilibrium if the players take into account that according to a certain probability distribution the real payoffs of the game differ slightly from those provided by the payoff functions. In the second approach, players take into account that according to a certain probability distribution their real actions differ slightly from those actions which they intend to play. We search for strategy profiles that are equilibria of the original game and give rise to nearby equilibria if the game is perturbed.

14.2 Preliminaries

For easy reference, this section summarizes results and definitions from topology, measure theory, and game theory that are used in the rest of the chapter. See Aliprantis and Border (1994) for additional information.

14.2.1 Topology

Let X and Y be topological spaces. A function f: X → Y is sequentially continuous if for every x ∈ X and every sequence (x_n)_{n∈ℕ} in X converging to x it holds that f(x_n) → f(x). Sequential continuity is implied by continuity of functions; the converse is not true (Aliprantis and Border, 1994, Theorem 2.25). A function f: X → ℝ is

sequentially upper semicontinuous if for every x ∈ X and every sequence (x_n)_{n∈ℕ} in X converging to x it holds that limsup_{n→∞} f(x_n) ≤ f(x);

sequentially lower semicontinuous if for every x ∈ X and every sequence (x_n)_{n∈ℕ} in X converging to x it holds that liminf_{n→∞} f(x_n) ≥ f(x).

Sequential upper (lower) semicontinuity is implied by upper (lower) semicontinuity of functions; the converse is not true (Aliprantis and Border, 1994, Lemma 2.40). The topological space X is separable if it includes a countable dense subset. A set A ⊆ X is

sequentially closed if for every x ∈ X and every sequence (x_n)_{n∈ℕ} in A converging to x it holds that x ∈ A. Every closed set is sequentially closed; the converse is not true (Aliprantis and Border, 1994, Example 2.10);

sequentially compact if every sequence in A has a subsequence converging to an element of A.

The closure of a set A is denoted by cl(A); the cardinality of A is denoted by |A|. Weak and strict set inclusion are denoted by ⊆ and ⊂, respectively.

14.2.2 Measure Theory

Let (Ω, F, P) be a probability space, where Ω is a nonempty set, F is a σ-algebra on Ω, and P a probability measure on F. The inner measure P_* of a set E ⊆ Ω is defined as

P_*(E) = sup{P(F) | F ∈ F, F ⊆ E}.

Roughly speaking, the inner measure of an event E is the probability of the largest measurable event contained in E. P_*(E) is well-defined, since the set {P(F) | F ∈ F, F ⊆ E} is nonempty (it contains P(∅) = 0) and bounded above by one (P is a probability measure). Moreover, P_*(E) = P(E) if E ∈ F.

The lower integral of a bounded function f: Ω → ℝ is defined as

∫_* f dP = sup{∫ g dP | g: Ω → ℝ is Lebesgue integrable and g ≤ f}.   (14.1)

Boundedness of f implies that ∫_* f dP < ∞. Inner measures and lower integrals are related via the following equality:

P_*(E) = ∫_* 1_E dP for every E ⊆ Ω,   (14.2)

where 1_E is the indicator function for the set E. Clearly, if f is itself Lebesgue integrable, then ∫_* f dP = ∫ f dP.

Below, a version of Fatou's Lemma is shown to hold for lower integrals. First, a lemma is needed.

Lemma 14.1 Let f: Ω → ℝ be bounded. Then there exists a Lebesgue integrable function g: Ω → ℝ such that g ≤ f and ∫ g dP = ∫_* f dP.

Proof. By (14.1) there is a sequence (g_n)_{n∈ℕ} of Lebesgue integrable functions such that g_n ≤ f and

∫ g_n dP ≥ ∫_* f dP − 1/n

for each n ∈ ℕ. The Lebesgue integrable function g = sup_{n∈ℕ} g_n clearly satisfies g ≤ f. Consequently, ∫ g dP ≤ ∫_* f dP. Moreover, for all n ∈ ℕ:

∫ g dP ≥ ∫ g_n dP ≥ ∫_* f dP − 1/n.

Hence ∫ g dP = ∫_* f dP. □
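On a finite state space whose σ-algebra is generated by the atoms of a partition, the inner measure can be computed directly as the total probability of the atoms entirely contained in the event. A minimal Python sketch (the states, probabilities, and atoms below are hypothetical toy data, not from the chapter):

```python
# Finite probability space: states with probabilities; the sigma-algebra is
# generated by the atoms of a partition, so measurable sets are unions of atoms.
states = {"w1": 0.2, "w2": 0.3, "w3": 0.5}
atoms = [{"w1", "w2"}, {"w3"}]

def inner_measure(event):
    """P_*(E): probability of the largest measurable subset of E, i.e. the
    union of all atoms entirely contained in E."""
    return sum(states[w] for atom in atoms if atom <= event for w in atom)

# E = {w2, w3} is not measurable: it contains the atom {w3}, but only part of
# the atom {w1, w2}, so only {w3} contributes to the inner measure.
print(inner_measure({"w2", "w3"}))        # contribution of atom {w3} only
print(inner_measure({"w1", "w2", "w3"}))  # the whole space is measurable
```

For measurable events the inner measure coincides with P, in line with the remark after the definition above.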

Proposition 14.2 Let (f_n)_{n∈ℕ} be a sequence of uniformly bounded functions f_n: Ω → ℝ. Then

∫_* limsup_{n→∞} f_n dP ≥ limsup_{n→∞} ∫_* f_n dP,

so for every sequence (E_n)_{n∈ℕ} of subsets of Ω:

P_*(limsup_{n→∞} E_n) ≥ limsup_{n→∞} P_*(E_n).

Proof. Lemma 14.1 implies that for each n ∈ ℕ there exists a Lebesgue integrable function g_n with g_n ≤ f_n such that

∫ g_n dP = ∫_* f_n dP.   (14.3)

To this uniformly bounded sequence (g_n)_{n∈ℕ} the classical Fatou Lemma applies:

∫ limsup_{n→∞} g_n dP ≥ limsup_{n→∞} ∫ g_n dP.   (14.4)

Since limsup_n g_n ≤ limsup_n f_n and limsup_n g_n is Lebesgue integrable, it follows from (14.1) and (14.3) that

∫_* limsup_{n→∞} f_n dP ≥ ∫ limsup_{n→∞} g_n dP.   (14.5)

Combining (14.4) and (14.5) yields the desired result. □

14.2.3 Game Theory

A finite strategic game is a tuple G = ⟨N, (A_i)_{i∈N}, (≿_i)_{i∈N}⟩, where N is a finite, nonempty set of players, each player i ∈ N has a finite, nonempty set A_i of pure strategies, and ≿_i is a preference relation over ×_{j∈N} A_j. A strategy profile a ∈ ×_{j∈N} A_j is a Nash equilibrium of G if a ≿_i (b_i, a_{−i}) for each i ∈ N and b_i ∈ A_i. Often the preference relations of the players are represented by payoff functions u_i: ×_{j∈N} A_j → ℝ, which enables us to define the mixed extension of G. The set of mixed strategies of player i ∈ N is denoted by Δ(A_i). Payoffs are extended to mixed strategy profiles in the usual way. A strategy profile σ ∈ ×_{i∈N} Δ(A_i) is a Nash equilibrium of G if u_i(σ) ≥ u_i(τ_i, σ_{−i}) for each i ∈ N and τ_i ∈ Δ(A_i). The strategy profile σ is a strict (Nash) equilibrium of G if σ is a Nash equilibrium of G and for each player i ∈ N the strategy σ_i is the unique best response to the strategy profile σ_{−i} of the other players. This implies that strict equilibria are equilibria in pure strategies.


14.3 Games with Population Uncertainty

In this section, games with population uncertainty are formally defined. Subsequently, games with population uncertainty are briefly compared with the random-player games of Milchtaich (1997) and the Poisson games of Myerson (1998c).

The set of potential players is a nonempty set N. Each potential player i ∈ N has a strategy set A_i. The actual player set is determined by chance according to a probability space (Ω, F, P). To each state ω ∈ Ω is associated a strategic game Γ_ω = ⟨N(ω), (A_i)_{i∈N(ω)}, (≿_{i,ω})_{i∈N(ω)}⟩ with a nonempty set N(ω) ⊆ N of actual players, each having strategy space A_i and each player i ∈ N(ω) having a preference relation ≿_{i,ω} over ×_{j∈N(ω)} A_j. The tuple

Γ = ⟨N, (A_i)_{i∈N}, (Ω, F, P), (Γ_ω)_{ω∈Ω}⟩

is a game with population uncertainty. In a game with population uncertainty there is uncertainty about the exact state of nature ω ∈ Ω and consequently about the game Γ_ω that will be played. The probability measure P, according to which the state of nature is determined, is assumed to be common knowledge among the potential players.

Games with population uncertainty as defined above generalize the Poisson games of Myerson (1998c) and the random-player games of Milchtaich (1997). Milchtaich (1997, p.5) introduces random-player games, consisting of:

a compact metric space X of potential players;

a simple point process (cf. Daley and Vere-Jones, 1988) on X that determines the actual set of players;

strategy sets defined by means of a continuous function from a compact metric space Y to X, the strategy set of a player x ∈ X being the set of strategies mapped to x;

bounded and measurable payoff functions, giving a payoff to an actual player x ∈ X who plays s ∈ Y when the strategies of the other players are given by S.




Every random-player game is easily seen to be a game with population uncertainty: set N equal to X, take the strategy set of each potential player as in the random-player game, identify P with the distribution of the simple point process, and identify the preferences with the utility functions.

Milchtaich (1997, p.6, Example 3) indicates that the Poisson games of Myerson (1998c) are random-player games and consequently games with population uncertainty.

Some additional notation: A = ×_{i∈N} A_i denotes the collection of strategy profiles of the potential players. Assume the potential players have fixed a strategy profile a ∈ A. For notational convenience, denote by a_ω = (a_i)_{i∈N(ω)} the strategy profile of the players engaged in the game Γ_ω that is played if state ω ∈ Ω is realized. The best response correspondence of Γ_ω is denoted by B_ω, i.e., B_ω(a_ω) is the set of strategy profiles in which each player i ∈ N(ω) plays a best response against a_ω.

14.4 Maximum Likelihood Equilibria

The equilibrium concepts introduced by Myerson (1998c) and Milchtaich (1997) for their classes of games with population uncertainty are variants of the Nash equilibrium concept based on a suitably defined expected utility function for the players. This section presents an alternative approach by stressing those strategy profiles that are most likely to yield a Nash equilibrium in the game selected by chance. Maximum likelihood equilibria were introduced in Borm et al. (1995a) for a class of Bayesian games and were considered more recently in Voorneveld (1999, 2000). In this section we define maximum likelihood equilibria for games with population uncertainty and provide an existence result.




Consider a game with population uncertainty Γ = ⟨N, (A_i)_{i∈N}, (Ω, F, P), (Γ_ω)_{ω∈Ω}⟩. The players in N must plan their strategies in ignorance of the stochastic state of nature that is realized. A strategy profile a ∈ A gives rise to a Nash equilibrium if the realized state of nature is an element of the set

E(a) = {ω ∈ Ω | a_ω is a Nash equilibrium of Γ_ω}.

How likely is this event? Although this set need not be measurable (i.e., an element of the σ-algebra F), a common mathematical approach in such cases is to define its likelihood via its inner measure P_*(E(a)),



the probability of the largest measurable set of states of nature in which the strategy profile a gives rise to a Nash equilibrium. See Fagin and Halpern (1991) for another paper using inner measures in a decision theoretic framework. Formally, define the Nash likelihood function L: A → [0, 1] for each a ∈ A as

L(a) = P_*(E(a))   (14.6)

and define a* ∈ A to be a maximum likelihood equilibrium if L(a*) ≥ L(a) for all a ∈ A.
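When the state space is finite, every event is measurable, L is a finite sum, and a maximum likelihood equilibrium over pure profiles can be found by plain enumeration. A small Python sketch with hypothetical two-player payoffs (the states, probabilities, and payoff numbers are illustrative, not from the chapter):

```python
from itertools import product

# Two potential players participate in every state; two equally structured
# 2x2 games are selected by chance (hypothetical toy data).
P = {"w1": 0.6, "w2": 0.4}
A1, A2 = ("T", "B"), ("L", "R")
u = {
    "w1": {("T", "L"): (2, 1), ("T", "R"): (0, 0),
           ("B", "L"): (0, 0), ("B", "R"): (1, 2)},
    "w2": {("T", "L"): (0, 0), ("T", "R"): (1, 1),
           ("B", "L"): (0, 0), ("B", "R"): (2, 2)},
}

def is_nash(state, profile):
    """Check whether a pure profile is a Nash equilibrium of the game in `state`."""
    a1, a2 = profile
    pay = u[state]
    if any(pay[(b1, a2)][0] > pay[profile][0] for b1 in A1):
        return False
    return not any(pay[(a1, b2)][1] > pay[profile][1] for b2 in A2)

def nash_likelihood(profile):
    """L(a): probability of the set of states in which `a` is a Nash equilibrium.
    With a finite state space every event is measurable, so no inner measure is needed."""
    return sum(p for state, p in P.items() if is_nash(state, profile))

# Maximum likelihood equilibrium: a pure profile maximizing L.
mle = max(product(A1, A2), key=nash_likelihood)
print(mle, nash_likelihood(mle))
```

Here (B, R) is a Nash equilibrium in both states and therefore maximizes the likelihood function, whereas (T, L) is an equilibrium only in the first state.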

In a recent paper, Gilboa and Schmeidler (1999) provided an axiomatic foundation for rankings according to the likelihood function. The following theorem provides an existence result for maximum likelihood equilibria.

Theorem 14.3 Consider a game with population uncertainty Γ = ⟨N, (A_i)_{i∈N}, (Ω, F, P), (Γ_ω)_{ω∈Ω}⟩ with nonempty A. If there are topologies on A and on the strategy profile spaces ×_{i∈N(ω)} A_i for each ω ∈ Ω such that

(i) A is sequentially compact;

(ii) for every ω ∈ Ω the graph gph B_ω of the best response correspondence is sequentially closed;

(iii) for every ω ∈ Ω the function from A to ×_{i∈N(ω)} A_i defined by a ↦ a_ω is sequentially continuous,


then the set of maximum likelihood equilibria is nonempty.

Proof. The set {L(a) | a ∈ A} is nonempty and bounded above by one.

Since A is sequentially compact by (i), the sequence (a^k)_{k∈ℕ} has a subsequence converging to an element a ∈ A. Without loss of generality, this subsequence is taken to be (a^k)_{k∈ℕ} itself: a^k → a. This a is shown to be a maximum likelihood equilibrium.

For each ω ∈ Ω and b ∈ A it holds by definition that ω ∈ E(b) if and only if b_ω is a Nash equilibrium of Γ_ω. Hence

1_{E(b)}(ω) = 1 if and only if b_ω is a Nash equilibrium of Γ_ω.   (14.7)

the first equality is (14.6),

the second equality follows from (14.2) and (14.7),

the first inequality follows from sequential upper semicontinuity of b ↦ 1_{E(b)}(ω) and the fact that a^k_ω → a_ω, since a^k → a and the map b ↦ b_ω is sequentially continuous by (iii),

the second inequality follows from Fatou's Lemma for lower integrals (Proposition 14.2),


We show that for every ω ∈ Ω the function from A to {0, 1} defined by b ↦ 1_{E(b)}(ω) is sequentially upper semicontinuous. Fix ω ∈ Ω and a sequence (b^k)_{k∈ℕ} in A converging to b. To show: limsup_{k→∞} 1_{E(b^k)}(ω) ≤ 1_{E(b)}(ω). Since (1_{E(b^k)}(ω))_{k∈ℕ} is a sequence in {0, 1}, the inequality trivially holds if 1_{E(b)}(ω) = 1. So assume that 1_{E(b)}(ω) = 0. It remains to prove that limsup_{k→∞} 1_{E(b^k)}(ω) = 0, i.e., that there exists a K ∈ ℕ such that 1_{E(b^k)}(ω) = 0 for each k ≥ K. Suppose, to the contrary, that such a K does not exist. Then there is a subsequence (b^{k(m)})_{m∈ℕ} such that 1_{E(b^{k(m)})}(ω) = 1 for each m ∈ ℕ, i.e., b^{k(m)}_ω ∈ B_ω(b^{k(m)}_ω) for each m ∈ ℕ. Since gph B_ω is sequentially closed by (ii), this implies that b_ω ∈ B_ω(b_ω), contradicting the assumption that 1_{E(b)}(ω) = 0. This settles the preliminary work.

Hence its supremum sup_{b∈A} L(b) exists. Let (a^k)_{k∈ℕ} be a sequence in A such that L(a^k) → sup_{b∈A} L(b) as k → ∞.


A compactness condition like (i) is standard in equilibrium existence results. The sequential continuity condition (iii) guarantees that a convergent sequence of strategy profiles in A is projected to a convergent sequence of strategy profiles in the games Γ_ω that are realized in the different states of nature. This condition is automatically fulfilled if, for instance, the topologies on A and on ×_{i∈N(ω)} A_i are taken to be the product topologies of those on the strategy spaces of the players. The closedness condition (ii) on the graphs of best response correspondences is closely related to the upper semicontinuity conditions imposed on best response correspondences in equilibrium existence proofs using the Kakutani fixed point theorem. As a consequence, even though the existence proof of maximum likelihood equilibria significantly differs from existence proofs involving a fixed point argument, the basic conditions driving the result are the same.


the third equality follows from (14.2) and (14.7),

the fourth equality follows from (14.6),

the final equality follows from L(a^k) → sup_{b∈A} L(b).

The following (in)equalities hold:

L(a) = P_*(E(a)) = ∫_* 1_{E(a)} dP ≥ ∫_* limsup_{k→∞} 1_{E(a^k)} dP ≥ limsup_{k→∞} ∫_* 1_{E(a^k)} dP = limsup_{k→∞} P_*(E(a^k)) = limsup_{k→∞} L(a^k) = sup_{b∈A} L(b).

But then a is a maximum likelihood equilibrium of Γ. □


14.5 Measurability

Let Γ = ⟨N, (A_i)_{i∈N}, (Ω, F, P), (Γ_ω)_{ω∈Ω}⟩ be a game with population uncertainty and a ∈ A. Theorem 14.3 relies on the use of inner measures in case the set

E(a) = {ω ∈ Ω | a_ω is a Nash equilibrium of Γ_ω}

of states in which a gives rise to an equilibrium turns out to lie outside the σ-algebra F. In this section, assumptions are provided that guarantee measurability of this set. Observe that

E(a) = ∩_{i∈N} ({ω ∈ Ω | i ∉ N(ω)} ∪ {ω ∈ Ω | i ∈ N(ω) and a_i ∈ B_{i,ω}(a_ω)}),


where B_{i,ω} is the best response correspondence of player i in the game Γ_ω. Hence, if

(a) the set N is countable,

(b) for each i ∈ N the set {ω ∈ Ω | i ∉ N(ω)} of states in which player i does not participate in the game is measurable, and

(c) for each i ∈ N the set {ω ∈ Ω | i ∈ N(ω) and a_i ∈ B_{i,ω}(a_ω)} of states in which the present action of participant i is a best response against the profile of his opponents, is measurable,

then E(a) is the countable intersection of finite unions of measurable sets and consequently measurable itself. Condition (a) appears innocent in many practical situations. Condition (b) is equivalent to the natural assumption that each player participates in a measurable set of states. Condition (c) seems conceptually more difficult, but basically amounts to a measurability condition on the preference order of each player as a function of the states.

Assume, for instance, that in state ω ∈ Ω the preferences of each player i ∈ N(ω) are represented by a utility function u_{i,ω}. Let i ∈ N and a ∈ A, and define Ω_i = {ω ∈ Ω | i ∈ N(ω)}, which is measurable by condition (b). The set in condition (c) consists of those ω ∈ Ω_i for which

u_{i,ω}(a_ω) ≥ sup_{b_i∈A_i} u_{i,ω}(b_i, a_{−i,ω}),


so it suffices to impose measurability conditions such that for each i ∈ N the functions ω ↦ u_{i,ω}(a_ω) and ω ↦ sup_{b_i∈A_i} u_{i,ω}(b_i, a_{−i,ω}) are measurable. This is clearly the case if

1. for each b ∈ A the function ω ↦ u_{i,ω}(b_ω) is measurable, and

2. A_i is countable,

since the supremum of countably many measurable functions is measurable; cf. Aliprantis and Border (1994), Theorem 8.17. Below we provide a less obvious result.

Proposition 14.4 Let i ∈ N and a ∈ A. Assume that the following conditions hold:

(a) for each b ∈ A the function ω ↦ u_{i,ω}(b_ω) is measurable,

(b) the function u_{i,ω} is sequentially lower semicontinuous in its i-th coordinate,

(c) A_i is separable.

Then the set {ω ∈ Ω | i ∈ N(ω) and a_i ∈ B_{i,ω}(a_ω)} is measurable.

Proof. It suffices to prove that the function ω ↦ sup_{b_i∈A_i} u_{i,ω}(b_i, a_{−i,ω}) is measurable. Let c ∈ ℝ and assume that C is a countable dense subset of A_i. Then

{ω ∈ Ω_i | sup_{b_i∈A_i} u_{i,ω}(b_i, a_{−i,ω}) ≤ c} = ∩_{b_i∈C} {ω ∈ Ω_i | u_{i,ω}(b_i, a_{−i,ω}) ≤ c},   (14.9)

where the last equality is a consequence of assumptions (b) and (c): Let b_i ∈ A_i be such that u_{i,ω}(b_i, a_{−i,ω}) > c. Since C is dense, there is a sequence in C converging to b_i. Since u_{i,ω} is sequentially lower semicontinuous in its i-th coordinate, it follows that u_{i,ω}(c_i, a_{−i,ω}) > c for some c_i ∈ C. The set in (14.9) is a countable intersection of measurable sets and hence measurable, as was to be shown. □

Separability is only a weak condition. Typical examples of action spaces that come to mind are strategy simplices (probability distributions over finitely many pure strategies), an interval of prices, or a subset of ℝ denoting possible quantities (like production levels). All such sets are separable.


14.6 Random Action Sets

Voorneveld (1999) proposes a model in which, in addition to the set of players and their preferences, also the set of feasible actions of each player is subject to uncertainty. In the set-up of Voorneveld (2000), this would imply state-dependence of the set of actions of each participating player. There are two attitudes to this. On one hand, it is possible to model such games explicitly and to derive equilibrium existence results in this more general context. This is the approach taken in Voorneveld (1999). On the other hand, it is possible to show that a simple mathematical trick translates such a more general problem into a game with population uncertainty. This is explained below.

Suppose that—in addition to the randomness incorporated in a game with population uncertainty—also the action set A_i(ω) of each player i ∈ N(ω) is allowed to depend on the realized state of nature ω ∈ Ω. This captures a common problem in decision making, namely the situation in which players or decision makers have to plan or decide on their course of action, while still uncertain about contingencies that may make their choices impossible to implement: a planned action can turn out to be infeasible in the realized state of nature. One can translate this into a game with population uncertainty by

1. defining every action to be feasible, i.e., taking A_i as the action set of player i in every state, and

2. making sure in every realized game Γ_ω that an action profile in which any of the players i ∈ N(ω) plays an infeasible action a_i ∉ A_i(ω) is not a Nash equilibrium of Γ_ω, for instance by extending the original preferences ≿_{i,ω} of each player i ∈ N(ω) over the feasible action set to the larger action set ×_{j∈N(ω)} A_j as follows.

(a) Any action profile involving feasible action choices is strictly preferred over any action profile in which a nonempty set of players chooses an infeasible action: for each pair of profiles a, b ∈ ×_{j∈N(ω)} A_j, if a_j ∈ A_j(ω) for every j ∈ N(ω), but b_j ∉ A_j(ω) for some j ∈ N(ω), then a is strictly preferred to b.

(b) Any action profile in which a nonempty subset S of players chooses an infeasible action is strictly preferred to any action profile in which a superset of S chooses an infeasible action: for each pair of profiles a, b ∈ ×_{j∈N(ω)} A_j, if {j ∈ N(ω) | a_j ∉ A_j(ω)} = S ≠ ∅ and {j ∈ N(ω) | b_j ∉ A_j(ω)} ⊃ S, then a is strictly preferred to b.


If the preferences of the players over feasible actions are represented by utility functions u_{i,ω}, then we may assume w.l.o.g. that their range is a subset of [−1, 1], by taking a transformation with arctan if necessary. One way to extend u_{i,ω} from ×_{j∈N(ω)} A_j(ω) to ×_{j∈N(ω)} A_j would be to take

û_{i,ω}(a) = u_{i,ω}(a) if a_j ∈ A_j(ω) for all j ∈ N(ω), and û_{i,ω}(a) = −2 − |{j ∈ N(ω) | a_j ∉ A_j(ω)}| otherwise,   (14.10)

for every i ∈ N(ω) and a ∈ ×_{j∈N(ω)} A_j. Notice that (14.10) is well-defined, since N(ω) is finite, and indeed satisfies 2(a) and 2(b) above. A simple example will illustrate this procedure.

Example 14.5 Suppose that there is only one state of nature, in which two players each have one good strategy (G), which is feasible, and one bad strategy (B), which is not. Clearly, (G, G) should be the unique equilibrium recommendation. Suppose that (G, G) gives payoff zero to both players. Following (14.10) means that in (B, G) and (G, B), one of the players makes an infeasible choice, giving rise to payoff −2 − 1 = −3 to both players, while in (B, B) both players make an infeasible choice, giving rise to payoff −2 − 2 = −4 to both players. Hence, the corresponding game would be:

            G            B
    G    0, 0       −3, −3
    B   −3, −3    −4, −4

The unique (maximum likelihood) equilibrium of this game is (G, G). This example illustrates the need for condition 2(b): it is not sufficient to assign, in accordance with condition 2(a), a low payoff, say −3, to any action profile involving infeasible action choices, for this would imply that also (B, B) is an equilibrium recommendation, which is nonsensical. Condition 2(b) takes care of distinctions between action profiles involving infeasible choices: the fewer infeasibilities, the better.
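The extension (14.10) and the payoffs of Example 14.5 can be checked mechanically. A Python sketch (the function name and the encoding of feasible sets are illustrative, not from the chapter):

```python
def extended_payoff(u, profile, feasible):
    """Extend a utility function (range within [-1, 1]) to infeasible profiles:
    keep u on fully feasible profiles, otherwise return -2 minus the number of
    players whose action is infeasible, so fewer infeasibilities are better."""
    infeasible = [i for i, a in enumerate(profile) if a not in feasible[i]]
    if not infeasible:
        return u(profile)
    return -2 - len(infeasible)

# Example 14.5: two players; the good strategy G is feasible, the bad one B is not.
feasible = [{"G"}, {"G"}]
u = lambda profile: 0.0  # (G, G) gives payoff zero to both players

print(extended_payoff(u, ("G", "G"), feasible))  # feasible profile: payoff 0
print(extended_payoff(u, ("B", "G"), feasible))  # one infeasible choice: -3
print(extended_payoff(u, ("B", "B"), feasible))  # two infeasible choices: -4
```

Because every feasible profile yields a payoff of at least −1 while any infeasible one yields at most −3, conditions 2(a) and 2(b) both hold.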

14.7 Random Games

In line with Borm et al. (1995a), we define a random game to be a game with population uncertainty in which the stochastic variable always selects the same player set (one might say a game with population uncertainty without population uncertainty) and preferences are represented by means of payoff functions. Such games are incomplete information games (or Bayesian games) in which the players have no private information. Random games will be of central importance in the remainder of this chapter, which mainly focuses on the likelihood principle as an equilibrium selection device for finite strategic games.

Definition 14.6 A random game is a game with population uncertainty Γ = ⟨N, (A_i)_{i∈N}, (Ω, F, P), (Γ_ω)_{ω∈Ω}⟩ such that

(i) N(ω) = N for all ω ∈ Ω, with N finite, and
(ii) the preferences in each Γ_ω are represented by payoff functions u_{i,ω}: ×_{j∈N} A_j → ℝ.

Borm et al. (1995a) impose a number of conditions on the random game: (i) each A_i is a separable topological space, (ii) ×_{i∈N} A_i is compact, (iii) for each i ∈ N and a ∈ ×_{j∈N} A_j the function ω ↦ u_{i,ω}(a) is measurable, (iv) for each i ∈ N and ω ∈ Ω the function u_{i,ω} is lower semicontinuous. Under these conditions, the authors provide two results (their Lemma 1 and Theorem 1) leading to the existence of maximum likelihood equilibria in random games. Unfortunately, both results are incorrect.

Recall that a real-valued function f on a topological space X is lower semicontinuous if {x ∈ X | f(x) > c} is open for every c ∈ ℝ.

Example 14.7 Suppose there is only one state of nature and one player with action space [0, 1], endowed with its standard topology, and payoff u(x) = 1 for all x ∈ (0, 1) and payoff zero otherwise. Then, for every c ∈ ℝ, the set {x ∈ [0, 1] | u(x) > c} equals [0, 1], (0, 1), or ∅. The sets [0, 1] and ∅ are open by definition in every topology on [0, 1], and (0, 1) is open as well. Hence the payoff function is lower semicontinuous. Measurability is trivial. However, the set of maximizers of u, i.e., the set of Nash equilibria of the one-player game, is the interval (0, 1), which is open. The sequence (1/k)_{k≥2} approaches zero. The sequence (L(1/k))_{k≥2} is the sequence of ones, since 1/k is always a Nash equilibrium. But L(0) = 0, since 0 is not a Nash equilibrium. This provides a counterexample to Lemma 1 of Borm et al. (1995a), which erroneously claims that the likelihood function is upper semicontinuous.
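Under the reading above (payoff 1 on the open interval, 0 at the endpoints), the failure of upper semicontinuity of L can be checked directly. A small Python sketch (illustrative, not from the chapter):

```python
def payoff(x):
    """One-player game on [0, 1]: payoff 1 on the open interval (0, 1), 0 otherwise."""
    return 1.0 if 0 < x < 1 else 0.0

def is_nash(x):
    # In a one-player game, Nash equilibria are exactly the payoff maximizers;
    # the supremum 1 is attained precisely on (0, 1).
    return payoff(x) == 1.0

def likelihood(x):
    # There is a single state of nature, so L(x) is 1 if x is an equilibrium, else 0.
    return 1.0 if is_nash(x) else 0.0

# Along the sequence 1/k -> 0 the likelihood stays 1, but L(0) = 0.
print([likelihood(1 / k) for k in range(2, 6)], likelihood(0.0))
```

So limsup_k L(1/k) = 1 > 0 = L(0), which is the announced counterexample to upper semicontinuity.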


Example 14.8 Suppose there is only one state of nature and one player with action space [0, 1], endowed with its standard topology, and payoff u(x) = x for all x ∈ [0, 1) and payoff zero for x = 1. Then, for every c ∈ ℝ, the set {x ∈ [0, 1] | u(x) > c} equals [0, 1], (c, 1) with c ∈ [0, 1), or ∅. The sets [0, 1] and ∅ are open by definition in every topology on [0, 1], and (c, 1) with c ∈ [0, 1) is open as well. Hence the payoff function is lower semicontinuous. Measurability is trivial. The action set [0, 1] is compact. Still, there is no maximum likelihood equilibrium, contradicting Theorem 1 of Borm et al. (1995a).

Fortunately, our general existence result (Theorem 14.3) applies to random games as well.

14.8 Robustness Against Randomization

A finite strategic game G = ⟨N, (A_i)_{i∈N}, (u_i)_{i∈N}⟩ can be perturbed by assuming that with certain probability there are trembles in the payoff functions.

Definition 14.9 Let G = ⟨N, (A_i)_{i∈N}, (u_i)_{i∈N}⟩ be a finite strategic game and ε > 0. An ε-perturbation of G is a random game Γ = ⟨N, (A_i)_{i∈N}, (Ω, F, P), (Γ_ω)_{ω∈Ω}⟩ with:

(i) the payoffs of each player are perturbed in a set of states of nature with positive measure: P({ω ∈ Ω | u_{i,ω} ≠ u_i}) > 0 for each i ∈ N (where u_{i,ω} is player i's payoff function in Γ_ω);
(ii) the largest perturbation is at most ε: |u_{i,ω}(a) − u_i(a)| ≤ ε for each i ∈ N, ω ∈ Ω, and a ∈ ×_{j∈N} A_j.

An intuitive approach would be to search for strategy profiles which are most likely to be a Nash equilibrium if players take into account that with certain probability the real payoffs of the game may differ slightly from those provided by the payoff function.

Definition 14.10 Let G = ⟨N, (A_i)_{i∈N}, (u_i)_{i∈N}⟩ be a finite strategic game and P a set of perturbations of G. A strategy profile σ ∈ ×_{i∈N} Δ(A_i) is robust against randomization if there exist

1. a sequence (ε_k)_{k∈ℕ} of positive numbers with ε_k → 0,

2. a sequence (Γ_k)_{k∈ℕ} of ε_k-perturbations of G in P, and

3. a sequence (σ^k)_{k∈ℕ} of strategy profiles converging to σ,

such that σ^k is a maximum likelihood equilibrium of Γ_k and L_k(σ^k) > 0 for every k ∈ ℕ, where L_k(·) denotes the likelihood function for the random game Γ_k. The set of strategy profiles in G that are robust against randomization is denoted by RR(G).

A strategy profile is robust against randomization if it is the limit of a sequence of maximum likelihood equilibria in perturbed games, each having a strictly positive likelihood. This last restriction essentially means that even though the actual payoffs are subject to chance, the state spaces are such that at least some strategy profiles are Nash equilibria in a set of realized games with positive measure; otherwise, the MLE concept has no cutting power.

We prove that under some conditions on the set P of permissible perturbations the set of strategy profiles that are robust against randomization is nonempty, and that the concept is a refinement of the Nash equilibrium concept.

Theorem 14.11 Let G = ⟨N, (A_i)_{i∈N}, (u_i)_{i∈N}⟩ be a finite strategic game and P a set of perturbations of G.

(a) If there exist a sequence (ε_k)_{k∈ℕ} of positive real numbers converging to zero and a sequence (Γ_k)_{k∈ℕ} of ε_k-perturbations in P, each with a finite state space Ω_k, then RR(G) ≠ ∅.

(b) RR(G) ⊆ NE(G), where NE(G) denotes the set of (mixed) Nash equilibria of G.

Proof. (a): Let (ε_k)_{k∈ℕ} and (Γ_k)_{k∈ℕ} be as in Theorem 14.11(a). Choose for each k ∈ ℕ a maximum likelihood equilibrium σ^k of Γ_k. The state space Ω_k is finite, so L_k(σ^k) > 0 for each k ∈ ℕ. Since (σ^k)_{k∈ℕ} is a sequence in the compact set ×_{i∈N} Δ(A_i), it has a convergent subsequence; its limit is robust against randomization.
(b): Let (ε_k)_{k∈ℕ}, (Γ_k)_{k∈ℕ}, and (σ^k)_{k∈ℕ} as in Definition 14.10 support σ as a randomization robust strategy profile. Suppose σ ∉ NE(G). Then


there is a player i ∈ N and an action a_i ∈ A_i such that u_i(a_i, σ_{−i}) > u_i(σ). Take δ = u_i(a_i, σ_{−i}) − u_i(σ) > 0. Since σ^k → σ, there is a K ∈ ℕ such that u_i(a_i, σ^k_{−i}) − u_i(σ^k) > δ/2 for each k ≥ K. Since G is a finite strategic game, each payoff function u_i is bounded by a certain M > 0. Definition 14.9 (ii) implies that for each k ∈ ℕ and each ω ∈ Ω_k, u_{i,ω} is bounded by M + ε_k. Moreover, for every mixed strategy profile τ,

|u_{i,ω}(τ) − u_i(τ)| ≤ Σ_{a∈×_{j∈N}A_j} τ(a) |u_{i,ω}(a) − u_i(a)| ≤ ε_k,   (14.11)

where τ(a) denotes the probability that the pure strategy profile a ∈ ×_{j∈N} A_j is played according to the mixed strategy profile τ. Inequality (14.11) and the choice of K imply that for each ω ∈ Ω_k and k ≥ K,

u_{i,ω}(a_i, σ^k_{−i}) − u_{i,ω}(σ^k) ≥ u_i(a_i, σ^k_{−i}) − u_i(σ^k) − 2ε_k > δ/2 − 2ε_k.   (14.12)


As ε_k → 0 and δ > 0, this implies that u_{i,ω}(a_i, σ^k_{−i}) > u_{i,ω}(σ^k) for all ω ∈ Ω_k and k sufficiently large. But then L_k(σ^k) = 0 for large k, in contradiction with Definition 14.10. □

Remark 14.12 In the proof above, the only essential parts of Definitions 14.9 and 14.10 were:

the fact that maximum likelihood equilibria of the perturbed games actually exist;

that payoffs in the perturbed game lie close to those in the original game (part (ii) of Definition 14.9);

that the selected sequence of strategy profiles has a positive likelihood of being a Nash equilibrium in the perturbed games (final part of Definition 14.10).

Consequently, there is still a lot of freedom in redefining the perturbed games without harming the fact that we have a nonempty equilibrium refinement. In particular, considerable strengthenings of condition (i) in Definition 14.9 would be possible and would enhance the cutting power of the selection device.

Among the earliest equilibrium refinements is the notion of an essential equilibrium, due to Wu and Jiang (1962). An equilibrium σ of a game G is said to be essential if every game with payoffs near those of G has a Nash equilibrium close to σ. A game may not have essential equilibria. For instance, in the trivial game where the unique player has a constant payoff function, all strategies are Nash equilibria, but none of them is essential, since for every strategy there exists some slight change in the payoffs that makes no nearby strategy an equilibrium. However, Wu and Jiang (1962) prove that if a game has only finitely many equilibria, at least one of them is essential.

Let G = ⟨N, (A_i)_{i∈N}, (u_i)_{i∈N}⟩ and H = ⟨N, (A_i)_{i∈N}, (v_i)_{i∈N}⟩ be two finite games with the same set of players and the same strategy space. The distance d(G, H) between G and H is defined to be the maximal distance between their payoffs:

d(G, H) = max_{i∈N} max_{a∈×_{j∈N}A_j} |u_i(a) − v_i(a)|.

Definition 14.13 Let G = ⟨N, (A_i)_{i∈N}, (u_i)_{i∈N}⟩ be a finite strategic game. A strategy profile σ is essential if for every ε > 0 there exists a δ > 0 such that every game H within distance δ from the game G has a Nash equilibrium within distance ε from σ.

Proposition 14.14 Every essential equilibrium of a finite strategic game is robust against randomization.

Proof. Trivial: construct a sequence of ε_k-perturbations with only one state, in which the game has distance ε_k to G. □

14.9 Weakly Strict Equilibria

This section describes the notion of weakly strict equilibria, introduced by Borm et al. (1995b). The proofs in this section are mostly simplifications of the original ones. The idea behind weakly strict equilibria is analogous to the principle of robustness against randomization, but attention is restricted to a very simple type of perturbations and to strategies that are (in a sense to be made precise) undominated.

We start by defining the perturbed games that we consider in this section. Let G = ⟨N, (A_i)_{i∈N}, (u_i)_{i∈N}⟩ be a finite strategic game and ε > 0. Define the ε-perturbation Γ(ε) of G as the random game with a finite state space Ω in which each state ω ∈ Ω, occurring with equal probability, corresponds to a payoff perturbed version of G with payoff functions u_{i,ω} satisfying u_{i,ω}(a) ∈ {u_i(a) − ε, u_i(a), u_i(a) + ε} for each i ∈ N and a ∈ ×_{j∈N} A_j, every such combination occurring in exactly one state. This means that there are 3^{|N|·|×_{j∈N}A_j|} different payoff perturbed versions of G (one of which coincides with G), each of which is played with equal probability. These perturbed versions are obtained by adding, for each i ∈ N and a ∈ ×_{j∈N} A_j independently and with probability 1/3 each, the value −ε, zero, or ε to the original payoffs u_i(a).
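Under the ±ε reading of the construction above (an assumption of this sketch), the perturbed payoff tables of a small game can be enumerated in Python (function and variable names are illustrative):

```python
from itertools import product

def perturbations(u, players, profiles, eps):
    """Yield all 3**(|players|*|profiles|) perturbed payoff tables: for every
    player/profile pair, independently add -eps, 0 or +eps to the payoff."""
    keys = [(i, a) for i in players for a in profiles]
    for deltas in product((-eps, 0.0, eps), repeat=len(keys)):
        yield {k: u[k] + d for k, d in zip(keys, deltas)}

# Hypothetical one-player game with 2 profiles -> 3**2 = 9 equally likely
# perturbed versions, one of which coincides with the original game.
u = {(0, "L"): 1.0, (0, "R"): 0.0}
games = list(perturbations(u, [0], ["L", "R"], eps=0.1))
print(len(games), u in games)
```

Each table in `games` is played with probability 1/9, matching the equal-probability description above.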

Let σ_i be a mixed strategy of player i in G and let ω ∈ Ω. Strategy σ_i is dominated in Γ_ω if there exists a strategy τ_i ∈ Δ(A_i) such that u_{i,ω}(τ_i, a_{−i}) ≥ u_{i,ω}(σ_i, a_{−i}) for each a_{−i} ∈ ×_{j≠i} A_j, with strict inequality for some a_{−i}. A strategy profile is undominated in Γ_ω if no player plays a dominated strategy. For each σ ∈ ×_{i∈N} Δ(A_i) and ε > 0, let

U_ε(σ) = P({ω ∈ Ω | σ is undominated in Γ_ω})

denote the probability that σ is undominated in the perturbed game Γ(ε). As before,

L_ε(σ) = P({ω ∈ Ω | σ is a Nash equilibrium of Γ_ω})

denotes the likelihood of σ yielding a Nash equilibrium in the perturbed game. We can use the measure P instead of the inner measure P_*, since every subset of Ω is measurable.

Instead of considering the limit of a sequence of maximum likelihood equilibria, as was done in the previous section, a weighted objective function is taken into account, where positive weight is assigned to both the probability of being undominated and the probability of being a Nash equilibrium.

Definition 14.15 Let G = ⟨N, (A_i)_{i∈N}, (u_i)_{i∈N}⟩ be a finite strategic game and ε > 0. Define the function W_ε on ×_{i∈N} Δ(A_i) as a weighted sum of U_ε and L_ε, with positive weight on each. A strategy profile σ ∈ ×_{i∈N} Δ(A_i) is a weakly strict equilibrium if there exist

1. a sequence (ε_k)_{k∈ℕ} of positive numbers with ε_k → 0 and

2. a sequence (σ^k)_{k∈ℕ} of strategy profiles converging to σ

such that σ^k maximizes W_{ε_k} for each k ∈ ℕ.

Since the state space of each perturbed game is finite, W_ε takes only finitely many values, so the maximum in the definition indeed exists. Notice that σ is a strict Nash equilibrium of G if and only if for sufficiently small ε the strategy profile σ is also a strict Nash equilibrium (and hence undominated) in each perturbed game, which in its turn is equivalent with the statement that U_ε(σ) = L_ε(σ) = 1.

Proposition 14.16 The strategy profile σ is a strict Nash equilibrium of the finite strategic game G = ⟨N, (A_i)_{i∈N}, (u_i)_{i∈N}⟩ if and only if U_ε(σ) = L_ε(σ) = 1 for sufficiently small ε > 0.

This motivates the name ‘weakly’ strict equilibria in Definition 14.15.

Theorem 14.17 Let G = ⟨N, (A_i)_{i∈N}, (u_i)_{i∈N}⟩ be a finite strategic game.

1.

2.

a sequence of positive numbers with and

a sequence of strategy profiles converging to

(a)

(b)

(c)

(d)

G has a weakly strict equilibrium;

Each weakly strict equilibrium of G is an undominated Nash equi-librium of G;

Every strict equilibrium of G is weakly strict;

If G has strict equilibria, these are the only weakly strict ones.

Proof. (a): As in Theorem 14.11(a).

(b): It is well known that the finite strategic game $G$ has an undominated equilibrium. Let $\sigma$ be a weakly strict equilibrium of $G$ and suppose $\sigma$ is dominated. As the set of undominated strategy profiles in $G$ is closed, there is an environment $U$ of $\sigma$ such that each profile in $U$ is dominated. Hence for sufficiently small $\varepsilon$ every profile in $U$ achieves a strictly smaller value of the objective function than the undominated equilibrium, so that no profile in $U$ maximizes the objective function, a contradiction. The proof that $\sigma$ is a Nash equilibrium is analogous to Theorem 14.11(b).

(c),(d): Follow easily from Proposition 14.16.


14.10 Approximate Maximum Likelihood Equilibria

In this section we consider perturbations of a finite strategic game $G$ by assuming that, according to some probability distribution, there are trembles in the actions of the players.

Definition 14.18 Let $G$ be a finite strategic game and let $\varepsilon > 0$. The $\varepsilon$-perturbation of $G$ is the random game in which the players' actions are subject to random trembles of size at most $\varepsilon$.

The intuitive idea behind these perturbed games is that players, who choose to act according to some strategies, make small mistakes while implementing these strategies. The approach now is to look for strategy profiles which have a positive probability of being close to some equilibrium of the perturbed games.
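A generic way to model such mistakes (a sketch under our own assumptions; the chapter's exact tremble distribution is the one in Definition 14.18) replaces an intended mixed strategy by a convex combination with the uniform strategy:

```python
import numpy as np

def trembled(sigma, eps):
    """Intended mixed strategy `sigma`, but with probability `eps`
    the player's hand trembles and a uniformly random pure strategy
    is played; the realized strategy is the convex combination."""
    sigma = np.asarray(sigma, dtype=float)
    uniform = np.full(sigma.shape, 1.0 / sigma.size)
    return (1.0 - eps) * sigma + eps * uniform

print(trembled([1.0, 0.0, 0.0], 0.03))  # [0.98 0.01 0.01]
```

Every pure strategy then receives probability at least eps divided by the number of pure strategies, so trembled strategies are totally mixed.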

Definition 14.19 Let $G$ be a finite game and consider the corresponding collection of $\varepsilon$-perturbations of $G$. For every open set $U$ and every $\varepsilon$, define the probability that $U$ contains a Nash equilibrium of the $\varepsilon$-perturbation, and for every open $U$ define $\mu(U)$ as the infimum of these probabilities over all sufficiently small $\varepsilon$. A profile $\sigma$ for which there is a constant $c > 0$ with $\mu(U) \geq c$ for every open neighbourhood $U$ of $\sigma$ is called an approximate maximum likelihood equilibrium of $G$.

For an open set $U$ and an $\varepsilon$, this number denotes the probability that $U$ contains a Nash equilibrium of the $\varepsilon$-perturbation of $G$. An estimate for these probabilities for small $\varepsilon$ is given by $\mu(U)$.

Here $P$ is the product measure of the uniform distributions on the disturbance intervals.


A profile $\sigma$ is an approximate maximum likelihood equilibrium if there is a positive constant $c$ such that for sufficiently small $\varepsilon$ and any open neighbourhood $U$ of $\sigma$ this probability is at least $c$.

The concept of approximate maximum likelihood equilibrium is a refinement of the Nash equilibrium concept. In fact we show that every approximate maximum likelihood equilibrium is a perfect Nash equilibrium (Selten, 1975). Recall that a profile $\sigma$ is a perfect equilibrium of the finite game $G$ if there exist a sequence $(\eta^n)_{n \in \mathbb{N}}$ of disturbance vectors converging to zero, assigning a positive minimum probability $\eta^n_i(s_i)$ to every pure strategy $s_i$ of every player $i$, and a sequence of profiles $(\sigma^n)_{n \in \mathbb{N}}$ converging to $\sigma$ such that for every $n$ the profile $\sigma^n$ is a Nash equilibrium of $G(\eta^n)$. Here $G(\eta)$ is the strategic game which is obtained by restricting the strategy set of every player $i$ to the mixed strategies that put probability at least $\eta_i(s_i)$ on each pure strategy $s_i$.
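An equivalent, well-known characterization due to Selten (stated here as background, not taken from the text above) is that $\sigma$ is perfect if and only if it is a best reply to some sequence of totally mixed profiles converging to $\sigma$; in particular, a weakly dominated strategy is never played in a perfect equilibrium. A small numerical sketch with hypothetical names:

```python
import numpy as np

def row_payoffs(A, col_mix):
    """Expected payoff of each pure row strategy against the mixed
    column strategy `col_mix`, for row-player payoff matrix A."""
    return A @ np.asarray(col_mix, dtype=float)

# Row 1 is weakly dominated by row 0 (equal against column 0, worse
# against column 1), so against ANY totally mixed column strategy
# row 0 is strictly better: row 1 cannot survive perfectness.
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
print(row_payoffs(A, [0.9, 0.1]).tolist())  # [1.0, 0.9]
```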

Theorem 14.20 Every approximate maximum likelihood equilibrium ofa finite game G is a perfect equilibrium of G.

Proof. First note that membership of the restricted strategy sets of a game $G(\eta)$ can be checked coordinatewise against the minimum probabilities $\eta$.

Now let $\sigma$ be an approximate maximum likelihood equilibrium of $G$. Define the required disturbance vectors via distances to $\sigma$, where $\| \cdot \|$ denotes the standard Euclidean norm. Since $\sigma$ is an approximate maximum likelihood equilibrium, for each sufficiently small $\varepsilon$ some state of nature yields a perturbed game with a Nash equilibrium close to $\sigma$, so we can choose a sequence of such equilibria. Clearly the sequence of disturbance vectors converges to 0 and the sequence of profiles converges to $\sigma$. Moreover, the derived sequence of profiles also converges to $\sigma$ and consists, for every $n$, of Nash equilibria of the corresponding restricted games. We conclude that $\sigma$ is a perfect equilibrium of $G$.

In the following example we illustrate that the converse of Theorem 14.20 is not true.

Example 14.21 Consider the finite strategic game G given by

The profile given is a Nash equilibrium of $G$. Consider the sequence of disturbance vectors given by


and the sequence of profiles defined accordingly for every $n$. One easily verifies the required conditions for every $n$; therefore $\sigma$ is a perfect equilibrium of $G$. However, $\sigma$ is not an approximate maximum likelihood equilibrium of $G$. In order to see this, consider the set $U$, which is an open neighbourhood of $\sigma$. It suffices to show that for every $\varepsilon \in (0, 0.1)$ we have


Since then we get, for every

Consequently and hence So, let $\varepsilon \in (0, 0.1)$ and be such that and let

Then with Moreover and

Consequently where is the pure best response correspondence of player 2. Hence

and However, since we have Division by yields

In the remainder of this section we will show that finite games with two players and finitely many Nash equilibria always possess an approximate maximum likelihood equilibrium. Recall that the carrier of a strategy $\sigma_i$ of player $i$ is the set of pure strategies to which $\sigma_i$ assigns positive probability:

$\mathrm{carr}(\sigma_i) = \{ s_i \in S_i : \sigma_i(s_i) > 0 \}.$
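In code, the carrier is simply the support of the probability vector (illustrative naming, with a tolerance for floating-point input):

```python
def carrier(sigma, tol=1e-12):
    """Carrier (support) of a mixed strategy given as a probability
    vector: the set of pure strategies, identified by index, that
    receive strictly positive probability."""
    return {s for s, p in enumerate(sigma) if p > tol}

print(carrier([0.5, 0.5, 0.0]))  # {0, 1}
```

The tolerance guards against floating-point dust when the strategy comes out of a numerical solver.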

Lemma 14.22 Let $G$ be a finite game with two players and let Define

Let $U$ be the open neighbourhood of $\sigma$ defined by

Finally, let $\eta$ be a disturbance vector with for every and Then we have for every and for every that


Proof. First note that if and then for every and we have By continuity we derive that there is an open neighbourhood $V$ of such that for every Here denotes the pure best response correspondence of player Consequently the set

is an open set and hence is open.

Let and Define

In order to show that it suffices to show that and

We only show the first inclusion. Let be such that Then or If then so If then by definition of and the same argument can be used.


In the following lemma we show that if a finite two-player game has a Nash equilibrium $\sigma$, and if some state of nature yields, for some perturbation level, a perturbed game which has a Nash equilibrium close to $\sigma$, then the same state of nature yields, for all smaller levels of perturbation, a perturbed game with a Nash equilibrium close to $\sigma$.

Lemma 14.23 Let $G$ be a finite game with two players and let $\sigma$ be a Nash equilibrium of $G$. Let $U$ be a convex, open subset with $\sigma \in U$. Let be such that for every Let and be such that

Then for every we have

Proof. Let and let Write for some We have and hence, according to Lemma 14.22, Consequently,

Therefore, by convexity of $U$, we have

Corollary 14.24 Let $G$ be a finite game with two players and let $\sigma$ be a Nash equilibrium of $G$. Let $U$ and $\eta$ be defined as in Lemma 14.22. Then the map assigning to each perturbation level the probability that $U$ contains a Nash equilibrium of the corresponding perturbed game is non-increasing. Consequently, its limit as the perturbation level tends to zero exists.


Remark 14.25 The statements above can be generalized in the following way. Suppose we have a collection of mutually disjoint open, convex sets, every element of which contains a Nash equilibrium of the finite two-player game $G$ and satisfies the conditions of Lemma 14.22. Then one can define the probability that every element of the collection contains a Nash equilibrium of the perturbed game. As a corollary of Lemma 14.23 we again get that the map assigning this probability to each perturbation level is non-increasing. Consequently, its limit exists.

Now we are able to prove the main theorem of this section.

Theorem 14.26 Every two-player finite game with finitely many Nash equilibria has an approximate maximum likelihood equilibrium.

Proof. Let $G$ be a two-player finite game with finitely many Nash equilibria Let be open, convex sets with for every Let For sufficiently small we have For, if this statement is not true, there is a sequence converging to 0, a sequence in and a sequence in with and for every Without loss of generality we may assume that the sequences and have limits and Let and Writing and for every we have

So, However, which yields a contradiction. By the rule of inclusion and exclusion we have, for sufficiently small,


Here all summations are over subsets of the collection. If the limit probability were zero for every set in the collection, then the right-hand side of this equality could be made arbitrarily small by choosing an appropriate collection of (small) open convex sets. This yields a contradiction. So, there is at least one equilibrium whose neighbourhoods have positive limit probability and hence, by letting the perturbation level tend to zero, $G$ has an approximate maximum likelihood equilibrium.
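The inclusion-exclusion identity invoked in the proof can be checked on a toy finite probability space (a generic sketch with our own names, using exact rational arithmetic so the alternating sum incurs no rounding):

```python
from fractions import Fraction
from itertools import combinations

def prob_union(events, n):
    """P(A_1 ∪ ... ∪ A_m) on a uniform space of n outcomes, computed
    by inclusion-exclusion: add single events, subtract pairwise
    intersections, add triple intersections, and so on."""
    total = Fraction(0)
    for r in range(1, len(events) + 1):
        for idx in combinations(range(len(events)), r):
            inter = set.intersection(*(events[k] for k in idx))
            total += (-1) ** (r + 1) * Fraction(len(inter), n)
    return total

events = [{0, 1, 2}, {2, 3}, {3, 4}]
print(prob_union(events, 10))  # 1/2, i.e. |{0, 1, 2, 3, 4}| / 10
```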

References

Aldrich, J. (1997): “R.A. Fisher and the making of maximum likelihood 1912–1922,” Statistical Science, 12, 162–176.

Aliprantis, C.D., and K.C. Border (1994): Infinite Dimensional Analysis: A Hitchhiker’s Guide. New York: Springer Verlag.

Borm, P.E.M., R. Cao, and I. García-Jurado (1995a): “Maximum likelihood equilibria of random games,” Optimization, 35, 77–84.

Borm, P.E.M., R. Cao, I. García-Jurado, and L. Méndez-Naya (1995b): “Weakly strict equilibria in finite normal form games,” OR Spektrum, 17, 235–238.

Daley, D.J., and D.J. Vere-Jones (1988): An Introduction to the Theory of Point Processes. New York: Springer Verlag.

Fagin, R., and J.Y. Halpern (1991): “Uncertainty, belief, and probability,” Computational Intelligence, 7, 160–173.

Gilboa, I., and D. Schmeidler (1999): “Inductive inference: an axiomatic approach,” Tel Aviv University.

Harsanyi, J. (1967–1968): “Games with incomplete information played by Bayesian players,” Management Science, 14, 159–182, 320–334, 486–502.

Milchtaich, I. (1997): “Random-player games,” Northwestern University Math Center, discussion paper 1178.

Myerson, R.B. (1998a): “Comparison of scoring rules in Poisson voting games,” Northwestern University Math Center, discussion paper 1214.

Myerson, R.B. (1998b): “Extended Poisson games and the Condorcet jury theorem,” Games and Economic Behavior, 25, 111–131.



Myerson, R.B. (1998c): “Population uncertainty and Poisson games,” International Journal of Game Theory, 27, 375–392.

Myerson, R.B. (2000): “Large Poisson games,” Journal of Economic Theory, 94, 7–45.

Selten, R. (1975): “Reexamination of the perfectness concept for equilibrium points in extensive games,” International Journal of Game Theory, 4, 25–55.

Voorneveld, M. (1999): Potential Games and Interactive Decisions with Multiple Criteria, Ph.D. thesis, CentER Dissertation Series 61, Tilburg University.

Voorneveld, M. (2000): “Maximum likelihood equilibria of games with population uncertainty,” CentER discussion paper 2000-79.

Wu, W., and J. Jiang (1962): “Essential equilibrium points of non-cooperative games,” Sci. Sinica, 11, 1307–1322.
