
Logic and Artificial Intelligence, Lecture 7

Eric Pacuit

Currently Visiting the Center for Formal Epistemology, CMU

Center for Logic and Philosophy of Science, Tilburg University

ai.stanford.edu/∼epacuit, [email protected]

September 20, 2011


Agreeing to Disagree

Theorem: Suppose that n agents share a common prior and have different private information. If there is common knowledge in the group of the posterior probabilities, then the posteriors must be equal.

Robert Aumann. Agreeing to Disagree. Annals of Statistics 4 (1976).

G. Bonanno and K. Nehring. Agreeing to Disagree: A Survey. Manuscript, 1997.


2 Scientists Perform an Experiment

They agree the true state is one of seven different states: w1, w2, w3, w4, w5, w6, w7.

They agree on a common prior:

P(w1) = 2/32, P(w2) = 4/32, P(w3) = 8/32, P(w4) = 4/32, P(w5) = 5/32, P(w6) = 7/32, P(w7) = 2/32


They agree that Experiment 1 would produce the blue partition Π1 = {{w1, w2, w3}, {w4}, {w5, w6, w7}}.


They agree that Experiment 1 would produce the blue partition and Experiment 2 the red partition Π2 = {{w1, w2, w5}, {w3, w4, w6, w7}}.


They are interested in the truth of E = {w2,w3,w5,w6}.


So, they agree that P(E) = 24/32.


Also, that if the true state is w1, then Experiment 1 will yield P(E | I) = P(E ∩ I)/P(I) = 12/14.


Suppose the true state is w7 and the agents perform the experiments.


Suppose the true state is w7; then Pr1(E) = 12/14.


Then Pr1(E) = 12/14 and Pr2(E) = 15/21.


Suppose they exchange emails with the new subjective probabilities: Pr1(E) = 12/14 and Pr2(E) = 15/21.


Agent 2 learns that w4 is NOT the true state (same for Agent 1).

Agent 1 learns that w5 is NOT the true state (same for Agent 2).


The new probabilities are Pr1(E | I′) = 7/9 and Pr2(E | I′) = 15/17.


After exchanging this information (Pr1(E | I′) = 7/9 and Pr2(E | I′) = 15/17), Agent 2 learns that w3 is NOT the true state.


No more revisions are possible and the agents agree on the posterior probability: both now assign Pr(E) = 7/9.
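
The revision dialogue above can be replayed mechanically. The sketch below is a reconstruction: the slides do not spell out the two experiment partitions, so Π1 = {{w1,w2,w3},{w4},{w5,w6,w7}} and Π2 = {{w1,w2,w5},{w3,w4,w6,w7}} are inferred from the announced posteriors. Exact arithmetic with Python fractions then reproduces 12/14 and 15/21, then 7/9 and 15/17, then agreement at 7/9.

```python
from fractions import Fraction

# Common prior over the seven states (weights out of 32, as in the slides).
prior = {"w1": Fraction(2, 32), "w2": Fraction(4, 32), "w3": Fraction(8, 32),
         "w4": Fraction(4, 32), "w5": Fraction(5, 32), "w6": Fraction(7, 32),
         "w7": Fraction(2, 32)}
E = {"w2", "w3", "w5", "w6"}

# Experiment partitions, reconstructed from the posteriors in the slides.
part1 = [{"w1", "w2", "w3"}, {"w4"}, {"w5", "w6", "w7"}]  # Experiment 1 (blue)
part2 = [{"w1", "w2", "w5"}, {"w3", "w4", "w6", "w7"}]    # Experiment 2 (red)

def cond(event, info):
    """Posterior of `event` given information set `info` under the prior."""
    return sum(prior[w] for w in event & info) / sum(prior[w] for w in info)

def dialogue(true_state, rounds=4):
    """Both agents announce their posteriors, refine, and repeat."""
    cell = lambda part, w: next(c for c in part if w in c)
    info1 = {w: cell(part1, w) for w in prior}
    info2 = {w: cell(part2, w) for w in prior}
    history = []
    for _ in range(rounds):
        history.append((cond(E, info1[true_state]), cond(E, info2[true_state])))
        # After the announcements, each agent keeps, at each state w, only
        # those states v where the other agent's announcement would have
        # been the same as it was at w.
        info1, info2 = (
            {w: {v for v in info1[w] if cond(E, info2[v]) == cond(E, info2[w])}
             for w in prior},
            {w: {v for v in info2[w] if cond(E, info1[v]) == cond(E, info1[w])}
             for w in prior},
        )
    return history

# At w7 the rounds give 12/14 vs 15/21, then 7/9 vs 15/17, then agreement
# at 7/9 (fractions print in lowest terms).
for p1, p2 in dialogue("w7"):
    print(p1, p2)
```

Running the same dialogue from any other true state converges just as quickly, since each partition is finite.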

Dissecting Aumann’s Theorem

• Qualitative versions: like-minded individuals cannot agree to make different decisions.

M. Bacharach. Some Extensions of a Claim of Aumann in an Axiomatic Model of Knowledge. Journal of Economic Theory (1985).

J.A.K. Cave. Learning to Agree. Economics Letters (1983).

D. Samet. The Sure-Thing Principle and Independence of Irrelevant Knowledge. 2008.


The Framework

Knowledge Structure: 〈W, {Πi}i∈A〉 where each Πi is a partition on W (Πi(w) is the cell in Πi containing w).

Decision Function: Let D be a nonempty set of decisions. A decision function for i ∈ A is a function di : W → D. A vector d = (d1, . . . , dn) is a decision function profile. Let [di = d] = {w | di(w) = d}.

(A1) Each agent knows her own decision:

[di = d] ⊆ Ki([di = d])


Comparing Knowledge

[j ⪰ i]: agent j is at least as knowledgeable as agent i.

[j ⪰ i] := ⋂_{E ∈ ℘(W)} (Ki(E) ⇒ Kj(E)) = ⋂_{E ∈ ℘(W)} (¬Ki(E) ∪ Kj(E))

If w ∈ [j ⪰ i], then j knows at w every event that i knows there.

[j ∼ i] = [j ⪰ i] ∩ [i ⪰ j]
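
Since W is finite, the intersection over all events E ∈ ℘(W) can be computed literally. The four-state model below is a made-up toy example, not from the lecture; the check also confirms the handy equivalence that w ∈ [j ⪰ i] exactly when Πj(w) ⊆ Πi(w).

```python
from itertools import chain, combinations

# Hypothetical toy model: four states, and j's partition refines i's everywhere.
W = {1, 2, 3, 4}
part = {"i": [{1, 2}, {3, 4}], "j": [{1}, {2}, {3, 4}]}

def cell(agent, w):
    return next(c for c in part[agent] if w in c)

def K(agent, event):
    """Knowledge operator: states whose cell for `agent` lies inside `event`."""
    return {w for w in W if cell(agent, w) <= set(event)}

def at_least_as_knowledgeable(j, i):
    """[j >= i]: the intersection over all events E of (W \\ Ki(E)) ∪ Kj(E)."""
    events = chain.from_iterable(combinations(W, r) for r in range(len(W) + 1))
    result = set(W)
    for e in events:
        result &= (W - K(i, e)) | K(j, e)
    return result

# The event-by-event definition agrees with the direct cell-containment test.
for a, b in [("j", "i"), ("i", "j")]:
    assert at_least_as_knowledgeable(a, b) == {w for w in W if cell(a, w) <= cell(b, w)}

print(at_least_as_knowledgeable("j", "i"))  # j refines i everywhere -> {1, 2, 3, 4}
print(at_least_as_knowledgeable("i", "j"))  # cells coincide only on -> {3, 4}
```

The cell-containment shortcut is what makes [j ⪰ i] cheap to compute in the later examples: no quantification over events is needed.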


Interpersonal Sure-Thing Principle (ISTP)

For any pair of agents i and j and decision d,

Ki([j ⪰ i] ∩ [dj = d]) ⊆ [di = d]
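
One way to exhibit a profile satisfying the principle: let each agent's "decision" be her posterior for a fixed event, so all agents apply the same rule to their information. The six-state model and its numbers below are hypothetical, chosen only so that one partition refines the other; the check verifies the displayed inclusion for every ordered pair of agents and every decision value.

```python
from fractions import Fraction
from itertools import product

# Hypothetical six-state model with a uniform prior; the "decision" of each
# agent is the posterior of E given her partition cell (a like-minded rule).
W = {1, 2, 3, 4, 5, 6}
prior = {w: Fraction(1, 6) for w in W}
E = {2, 3, 5}
parts = {"i": [{1, 2, 3}, {4, 5, 6}], "j": [{1, 2}, {3}, {4, 5, 6}]}

def cell(a, w):
    return next(c for c in parts[a] if w in c)

def posterior(info):
    return sum(prior[w] for w in E & info) / sum(prior[w] for w in info)

def decision(a, w):                      # d_a(w)
    return posterior(cell(a, w))

def K(a, S):                             # knowledge operator K_a
    return {w for w in W if cell(a, w) <= S}

def geq(j, i):                           # [j >= i], via cell containment
    return {w for w in W if cell(j, w) <= cell(i, w)}

def istp_holds(i, j):
    """K_i([j >= i] ∩ [d_j = d]) ⊆ [d_i = d] for every decision value d."""
    for d in {decision(j, w) for w in W}:
        dj_d = {w for w in W if decision(j, w) == d}
        di_d = {w for w in W if decision(i, w) == d}
        if not K(i, geq(j, i) & dj_d) <= di_d:
            return False
    return True

# Posterior-based decisions satisfy ISTP for every ordered pair of agents.
assert all(istp_holds(a, b) for a, b in product("ij", repeat=2))
print("ISTP holds for all pairs")
```

The reason it works is the sure-thing structure of conditional probability: if i knows that the more knowledgeable j's cells inside Πi(w) all carry posterior d, then Πi(w) is a disjoint union of cells with posterior d, so i's own posterior is d as well.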


Interpersonal Sure-Thing Principle (ISTP): Illustration

Suppose that Alice and Bob, two detectives who graduated from the same police academy, are assigned to investigate a murder case. If they are exposed to different evidence, they may reach different decisions. Yet, being students of the same academy, the method by which they arrive at their conclusions is the same. Suppose now that detective Bob, a father of four who returns home every day at five o'clock, collects all the information about the case at hand together with detective Alice.


However, Alice, single and a workaholic, continues to collect more information every day until the wee hours of the morning, information which she does not necessarily share with Bob. Obviously, Bob knows that Alice is at least as knowledgeable as he is. Suppose that he also knows what Alice's decision is. Since Alice uses the same investigation method as Bob, he knows that had he been in possession of the more extensive knowledge that Alice has collected, he would have made the same decision as she did. Thus, this is indeed his decision.


Implications of ISTP

Proposition. If the decision function profile d satisfies ISTP, then

[i ∼ j] ⊆ ⋃_{d ∈ D} ([di = d] ∩ [dj = d])


ISTP Expandability

Agent i is an epistemic dummy if it is always the case that all the agents are at least as knowledgeable as i. That is, for each agent j,

[j ⪰ i] = W

A decision function profile d on 〈W, Π1, . . . , Πn〉 is ISTP expandable if for any expanded structure 〈W, Π1, . . . , Πn+1〉 where n + 1 is an epistemic dummy, there exists a decision function dn+1 such that (d1, d2, . . . , dn+1) satisfies ISTP.


ISTP Expandability: Illustration

Suppose that after making their decisions, Alice and Bob are told that another detective, one E.P. Dummy, who graduated from the very same police academy, had also been assigned to investigate the same case. In principle, they would need to review their decisions in light of the third detective's knowledge: knowing what they know about the third detective, his usual sources of information, for example, may impinge upon their decision.


But this is not so in the case of detective Dummy. It is commonly known that the only information source of this detective, known among his colleagues as the couch detective, is the TV set. Thus, it is commonly known that every detective is at least as knowledgeable as Dummy. The news that he had been assigned to the same case is completely irrelevant to the conclusions that Alice and Bob have reached. Obviously, based on the information he gets from the media, Dummy also makes a decision. We may assume that the decisions made by the three detectives satisfy the ISTP, for exactly the same reason we assumed it for the two detectives' decisions.


Generalized Agreement Theorem

If d is an ISTP expandable decision function profile on a partition structure 〈W, Π1, . . . , Πn〉, then for any decisions d1, . . . , dn which are not identical, C(⋂_i [di = di]) = ∅.


Analyzing Agreement Theorems in Dynamic Epistemic/Doxastic Logic

C. Dégremont and O. Roy. Agreement Theorems in Dynamic-Epistemic Logic. In A. Heifetz (ed.), Proceedings of TARK XI, 2009, pp. 91-98; forthcoming in the Journal of Philosophical Logic.

L. Demey. Agreeing to Disagree in Probabilistic Dynamic Epistemic Logic. ILLC, Master's Thesis, 2010.


Dissecting Aumann’s Theorem

• “No Trade” Theorems (Milgrom and Stokey); from probabilities of events to aggregates (McKelvey and Page); Common Prior Assumption, etc.

• How do the posteriors become common knowledge?

J. Geanakoplos and H. Polemarchakis. We Can’t Disagree Forever. Journal of Economic Theory (1982).


Geanakoplos and Polemarchakis

Revision Process: Given event A, 1: “My probability of A is q”, 2: “My probability of A is r”, 1: “My probability of A is now q′”, 2: “My probability of A is now r′”, etc.

• Assuming that the information partitions are finite, given an event A, the revision process converges in finitely many steps.

• For each n, there are examples where the process takes n steps.

• An indirect communication equilibrium is not necessarily a direct communication equilibrium.


Parikh and Krasucki

Protocol: a graph on the set of agents specifying the legal pairwise communication (who can talk to whom).

• If the protocol is fair, then the limiting probability of an event A will be the same for all agents in the group.

• Consensus can be reached without common knowledge: “everyone must know the common prices of commodities; however, it does not make sense to demand that everyone knows the details of every commercial transaction.”
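
A minimal sketch of the pairwise setting (the three-agent model below is hypothetical, not from the paper): announcements go one-to-one along a fair round-robin protocol, only the receiver revises, and the limiting posteriors still coincide.

```python
from fractions import Fraction
from itertools import cycle

# Hypothetical three-agent model: uniform prior on four states.
W = {1, 2, 3, 4}
prior = {w: Fraction(1, 4) for w in W}
E = {1, 2}
partitions = [
    [{1, 2}, {3, 4}],   # agent 0
    [{1, 3}, {2, 4}],   # agent 1
    [{1, 4}, {2, 3}],   # agent 2
]

def cond(event, info):
    return sum(prior[w] for w in event & info) / sum(prior[w] for w in info)

def run_protocol(true_state, steps=9):
    cell = lambda part, w: next(c for c in part if w in c)
    info = [{w: cell(p, w) for w in W} for p in partitions]
    # Fair round-robin protocol: 0 -> 1, 1 -> 2, 2 -> 0, repeated.
    pairs = cycle([(0, 1), (1, 2), (2, 0)])
    for _ in range(steps):
        s, r = next(pairs)  # sender s tells only receiver r her posterior
        info[r] = {w: {v for v in info[r][w]
                       if cond(E, info[s][v]) == cond(E, info[s][w])}
                   for w in W}
    return [cond(E, info[a][true_state]) for a in range(3)]

print(run_protocol(1))  # all three limiting posteriors coincide
```

Unlike the simultaneous revision process, no announcement here is public, so the final posteriors need not be common knowledge even though they agree.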
