2006 First International Conference on Communications and Electronics, Hanoi, Vietnam, October 10-11, 2006


A Game-Theoretic Analysis of Trust Management in P2P Systems

Trinh Anh Tuan

High Speed Networks Laboratory, Department of Telecommunications and Media Informatics

Budapest University of Technology and Economics, Hungary

E-mail: [email protected]

Abstract- This paper reports some Braess-like paradoxes in peer-to-peer (P2P) trust management systems. We use tools from game theory to model and analyze the reporting and exclusion processes, and show how uncertainty and belief among peers might lead to surprising and unexpected peer behaviors, which, in turn, could make current P2P trust management systems ineffective. The contributions of the paper are the following. First, we find that if a reputation system is not incentive-compatible, the more peers there are in the system, the less likely it is that anyone will report a malicious peer. Second, we address the issue of voting for the exclusion of a (believed malicious) peer and provide an analysis of the problem. By modeling the decision process as a Bayesian game, we find that the possible application of exclusion in P2P systems might be dangerous. More precisely, our analysis shows that, under certain assumptions, the more voting peers there are, the more likely it is that an innocent peer is excluded from the network. Finally, in light of the investigated paradoxes, we discuss possible solutions to improve the efficiency of current trust management systems in P2P networks.

I. INTRODUCTION

One of the most exciting applications running on top of today's Internet infrastructure is the P2P application. In a P2P overlay network, peers are able to communicate and share information with other peers much more directly than in the traditional client-server architecture, making communication and content sharing much more efficient. However, this does not come without problems. One of them is the problem of trust. In a distributed system where agents are allowed to join, leave, share and communicate in a relatively ad-hoc manner, trust is without doubt an important issue that must be dealt with. Selfish and/or misbehaving agents may have a severe impact on overall system performance, and recent measurement-based reports [1], [2] support our view. Trust management systems [3], [4], [5], [6], [7], [8], [9] have been proposed to increase trust between participating agents in distributed environments and P2P networks in particular. In P2P networks, peers rely on some sort of trust provided by reputation systems. As an example, the EigenTrust [7] mechanism is reported to have performed reasonably well in a wide range of scenarios. Another promising approach to encourage trust and good behavior between peers is to integrate an incentive-compatible reputation system [9] into P2P networks. However, there are a number of issues with reputation systems yet to be resolved in the context of P2P networks.

The first question is how to gather appropriate information from different sources. This process involves the reporting process of peers in the network, voluntary or not. Peers rightly ask themselves why they should report. In an incentive-compatible system the answer is understandable, but the question is not so trivial if the system is not incentive-compatible. In such a system, many issues can have an impact on peers' decisions, including a seemingly irrelevant one, the number of participating peers (i.e. the peer population), as we will address later in the paper. Another question regarding trust management in P2P systems is how to respond. One way to do so is to exclude misbehaving/malicious peers from the network. In [10], a simple exclusion mechanism is proposed, but little is known about its performance.

Game theory (see [13] for a comprehensive introduction) has been successfully applied to model and analyze different aspects of P2P networks, especially problems related to trust. However, many challenging questions are still left to be answered. First and foremost, most of the analysis so far is based on simple and straightforward pure strategy games (single stage and repeated games alike), and as a result, uncertainty and belief are rarely tackled.

Uncertainty and belief among peers are characteristic aspects of a P2P network. Peers are uncertain of other peers' actions, intentions and valuations, and as a result, peers are even uncertain of how to choose an appropriate strategy to react. Peers have some prior belief about other peers' actions, intentions and valuations, and this belief is (or should be) changed in the light of new information. A natural question then arises: how to model these aspects of a P2P network?

Uncertainty is quantifiable by means of probability analysis. From the game-theoretic point of view, mixed strategy games fit well within this context. Game-theoretic analyses in P2P mainly deal with pure strategy games, where a player is fully certain of the action she should take (i.e. with probability 1), and mixed strategies are not considered sufficiently. By modeling a game with mixed strategies, we are able to cope with the uncertainty of peers' decision making. Instead of having a fully certain strategy, peers usually assign a probability to a concrete strategy, or in other words, a distribution over all


the strategies they can possibly have. Another question is how to model the change of belief in a game. A Bayesian game fits well within this context, because in many situations peers are not perfectly informed about other peers' characteristics, or a peer may be well informed about other peers' characteristics but may not know how well other peers are informed about her own characteristics.

The main contributions of the paper are the following. First, by modeling the reporting process of a P2P trust management system as a mixed strategy game, we find that if a reputation system is not incentive-compatible, the more peers there are in the system, the less likely it is that anyone will report a malicious peer. This result contrasts with the result we would obtain if peers were allowed to use only pure strategies, which, we believe, is somewhat unrealistic in practice. Second, we address the issue of voting for the exclusion of a (believed malicious) peer and provide an analysis of the problem. By modelling the decision process as a Bayesian game, we find that the possible application of exclusion in P2P systems might be dangerous. In our model, we assume that the voting super peers have some a priori belief about the type of the peer in question, and in the light of new information this belief can change. Our analysis shows that, under certain assumptions, the more voting peers there are, the more likely it is that an innocent peer is excluded from the network.

The paper is structured as follows. Introduction and problem statements are found in Section I. Section II presents game-theoretic models and analyses of the reporting and exclusion problems in P2P networks. Finally, Section III concludes the paper.

II. THE PARADOXES

A. Trusting a malicious peer

One of the objectives of a reputation system in a P2P network is to filter out malicious peers from the network. Peers report on the past performance of other peers in a distributed manner. However, reporting might be costly: the cost of reporting might be time, resources and/or the risk of possible retaliation. To model the situation we assume that there exists a certain level of incentive for each peer to report a malicious peer. However, each peer would prefer someone else to do it (i.e. she is better off if other peers report). In this paper, we aim to give a qualitative analysis of the problem rather than a quantitative one, because it is hard to model a peer's (a human being's) satisfaction, valuation of time, fear of retaliation, etc. by exact functions.

Assume that each peer is satisfied if a malicious peer is reported and attaches a value s to this. Reporting is costly, and we assume that the cost of reporting is identical for all peers and equal to c. We assume further that s > c > 0. Each peer would like the malicious peer to be reported but would be better off not reporting, hoping that someone else does it. Let us consider the following game.

Players: n reporting peers

Actions: Each peer can decide to report or not to report.

Payoff: Given the assumptions mentioned above, a peer's satisfaction is 0 if the malicious peer is not reported, (s − c) if she reports, and s if at least one peer reports but she does not.

1) Analysis: The reporting game defined above admits n pure Nash equilibria, in each of which exactly one peer reports. (If that peer switches to not reporting, her payoff falls from s − c to 0; if any other peer switches to reporting, her payoff falls from s to s − c.) Note that these are asymmetric Nash equilibria. The game has no symmetric pure Nash equilibrium, because if every peer reports, then any peer is better off switching to not reporting, and if no peer reports, then any peer is better off switching to reporting.
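As a concrete illustration, the short Python sketch below enumerates every pure strategy profile of the reporting game for hypothetical values of s, c and n (the numbers and function names are our own, not taken from the analysis above) and checks which profiles are Nash equilibria; it recovers exactly the n asymmetric equilibria in which a single peer reports.

from itertools import product

s, c, n = 1.0, 0.2, 4          # illustrative satisfaction, cost and peer count (s > c > 0)

def payoff(profile, i):
    """Payoff of peer i for a profile of report (1) / not report (0) decisions."""
    if profile[i] == 1:
        return s - c                    # she reports herself
    return s if any(profile) else 0.0   # someone else reports, or nobody does

def is_nash(profile):
    """True if no single peer gains by unilaterally switching her action."""
    for i in range(n):
        deviation = list(profile)
        deviation[i] = 1 - deviation[i]
        if payoff(tuple(deviation), i) > payoff(profile, i):
            return False
    return True

equilibria = [prof for prof in product([0, 1], repeat=n) if is_nash(prof)]
print(equilibria)   # exactly the n profiles in which one peer reports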

Now let us consider the mixed strategy symmetric Nash equilibrium of the game. Denote by p the probability that each peer reports the malicious peer. Given a peer, the probability that none of the (n − 1) remaining peers reports is (1 − p)^{n−1}. Consequently, the probability that at least one peer (out of the (n − 1) remaining peers) reports is 1 − (1 − p)^{n−1}. By definition, in the equilibrium state the expected payoff of reporting for each peer is equal to the expected payoff of not reporting. In equation form, we have:

s − c = 0 + s(1 − (1 − p)^{n−1})   (1)

Solving the equation we have:

p = 1 − (c/s)^{1/(n−1)}   (2)

Since 0 < c < s, we have 0 < c/s < 1 and thus p is indeed a probability. Notice that the probability p that each peer reports the malicious peer decreases as the number of reporting peers n increases. Put differently, the more reporting peers there are, the less likely it is that the malicious peer is reported.

Fig. 1. Number of reporting peers (n) vs. reporting probability (p), for c/s = 1/2, 1/10 and 1/100.
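To make the trend behind Fig. 1 concrete, the sketch below evaluates Equation (2) for the same c/s ratios as in the figure (the function name and the particular values of n are our own choices). It also prints the induced probability that anyone at all reports, 1 − (1 − p)^n, which likewise shrinks as n grows.

def report_prob(c_over_s, n):
    """Equilibrium probability p = 1 - (c/s)^(1/(n-1)) that a peer reports (n >= 2)."""
    return 1.0 - c_over_s ** (1.0 / (n - 1))

for c_over_s in (1/2, 1/10, 1/100):           # the ratios plotted in Fig. 1
    for n in (2, 10, 100):
        p = report_prob(c_over_s, n)
        at_least_one = 1.0 - (1.0 - p) ** n   # chance that the malicious peer is reported by anyone
        print(f"c/s={c_over_s:<6} n={n:<4} p={p:.3f}  Pr(reported)={at_least_one:.3f}")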

B. Excluding an innocent peer

This part presents two games. The first game considers the case when there is a single juror (modeling the "super peer" in a P2P network). The second game considers the case of multiple jurors.

It is shown that in a "super peer" architecture, the only juror acts according to her signal (convicting the defendant when the signal is guilty and acquitting her when the signal is innocent). On the other hand, it is shown that in the multiple-juror case, the more decision-making peers there are, the more likely it is that an innocent peer is excluded from the network!

In [10], an exclusion mechanism was examined under the assumption of "perfect information about the type of each individual user". However, in real-world systems this is usually not the case: the information about the type of each individual peer is usually imperfect. We might, however, have some belief about the types of the peers, and this belief is changed in the light of new information.

We consider the situation where some peers in the system (such as "super peers" in KaZaA [11] and Gnutella [12]), based on the information they have, can decide to exclude a malicious peer from the network. A peer is considered "guilty" and is excluded from the system if her contribution level is low (e.g. under some threshold) and/or the files she provides have a high probability of being unauthentic. The super peers (we may call them "the jurors") have some belief that a certain peer in the network (we may call her "the defendant") is malicious and should be excluded from the system. Suppose that each decision-making peer (e.g. super peer) has the prior belief that this peer is "guilty" with probability π, a belief modified by the evidence presented. We model the possibility that super peers interpret the evidence differently by assuming that, for each of the suspected peer's true statuses (guilty and innocent), each juror interprets the evidence to point to guilt with positive probability, and that jurors' interpretations are independent. Assume furthermore that these probabilities are the same for all jurors, and denote by p the probability of any given juror's interpreting the evidence to point to guilt when the defendant is guilty, and by q the probability of her interpreting the evidence to point to innocence when the defendant is innocent. We also assume that a juror is more likely than not to interpret the evidence correctly, so that p > 1/2 and q > 1/2, and hence in particular p > 1 − q. Each juror wishes to convict a guilty defendant and acquit an innocent one. She is indifferent between these two outcomes and prefers each of them to one in which an innocent defendant is convicted or a guilty defendant is acquitted. In other words, each juror's Bernoulli payoffs are:

0 if a malicious peer is excluded or an innocent peer is acquitted,
−z if an innocent peer is excluded,
−(1 − z) if a malicious peer is acquitted.   (3)

Let us give an interpretation of the parameter z. Denote by r the probability a juror assigns to the defendant's being guilty, given all her information. Then her expected payoff if the defendant is acquitted is −r(1 − z) + (1 − r) · 0 = −r(1 − z), and her expected payoff if the defendant is excluded is r · 0 − (1 − r)z = −(1 − r)z. Thus she prefers the defendant to be acquitted if −r(1 − z) > −(1 − r)z, or r < z, and convicted if


r > z. That is, z is equal to the probability of guilt required for the juror to want the defendant to be excluded. Put differently, for any juror, acquittal is at least as good as exclusion if and only if Pr(defendant is guilty, given the juror's information) ≤ z. Now let us formulate a Bayesian game that models the situation. The players are the super peers (the jurors), and each player's action is to vote to exclude (E) or to acquit (Q). We define a state to be a list (X, s_1, s_2, ..., s_n), where X denotes the defendant's true status, guilty (G) or innocent (I), and s_i represents player i's interpretation of the evidence, which may point to guilt (g) or innocence (b). In any state in which X = G (i.e. the defendant is guilty), each player assigns the probability p to any other player's receiving the signal g, and the probability 1 − p to her receiving the signal b, independently of all other players' signals. Similarly, in any state in which X = I (i.e. the defendant is innocent), each player assigns the probability q to any other player's receiving the signal b, and the probability 1 − q to her receiving the signal g, independently of all other players' signals. Given the assumption that unanimity is required to exclude the defendant, only the action profile (E, ..., E) leads to exclusion. Thus, player i's payoff function in the Bayesian game is defined as follows:

u_i(a, ω) = 0 if a = (E, ..., E) and ω_1 = G, or a ≠ (E, ..., E) and ω_1 = I,
u_i(a, ω) = −z if a = (E, ..., E) and ω_1 = I,
u_i(a, ω) = −(1 − z) if a ≠ (E, ..., E) and ω_1 = G,   (4)

where ω_1 is the first component of the state, giving the defendant's true status.

Based on what has been discussed so far, the voting-for-exclusion problem can be modeled as a Bayesian game as follows:

Players: A set of n super peers.

States: The set of states is the set of all lists (X, s_1, s_2, ..., s_n) where X ∈ {G, I} and s_j ∈ {g, b} for every super peer j; X = G if the considered peer is malicious (guilty) and X = I if she is innocent, while s_j = g if player j receives the signal that she is malicious and s_j = b if player j receives the signal that she is innocent.

Actions: The set of actions of each player is {E, Q}, where E means vote to exclude and Q means vote to acquit.

Signals: The set of signals that each player may receive is {g, b}, and player j's signal function is defined by τ_j(X, s_1, ..., s_n) = s_j (each juror is informed only of her own signal).

Beliefs: Type g of any player i believes that the state is (G, s_1, ..., s_n) with probability πp^{k−1}(1 − p)^{n−k} and (I, s_1, ..., s_n) with probability (1 − π)(1 − q)^{k−1}q^{n−k}, where k is the number of players j (including i) for whom s_j = g in each case. Type b of any player i believes that the state is (G, s_1, ..., s_n) with probability πp^k(1 − p)^{n−k−1} and (I, s_1, ..., s_n) with probability (1 − π)(1 − q)^k q^{n−k−1}, where k is the number of players j for whom s_j = g in each case.


Payoff: The Bernoulli payoff function of each player i is given in Equation (4).
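For concreteness, the sketch below (our own illustration; π = 0.5, p = q = 0.6 and n = 3 are assumed values) enumerates the state space (X, s_1, ..., s_n) of this Bayesian game, attaches to each state the ex-ante probability implied by the prior π and the signal probabilities p and q, and shows the signal function τ_j that reveals to each super peer only her own component.

from itertools import product

def state_space(n, prior, p, q):
    """Yield ((X, s_1, ..., s_n), probability) for every state of the game."""
    for X in ("G", "I"):
        p_g = p if X == "G" else 1 - q          # Pr(signal g | X)
        p_X = prior if X == "G" else 1 - prior  # Pr(X)
        for signals in product("gb", repeat=n):
            k = signals.count("g")
            yield (X, *signals), p_X * (p_g ** k) * ((1 - p_g) ** (n - k))

def signal(state, j):
    """Signal function tau_j: super peer j observes only her own component."""
    return state[1 + j]

states = list(state_space(n=3, prior=0.5, p=0.6, q=0.6))
assert abs(sum(pr for _, pr in states) - 1.0) < 1e-12   # the probabilities form a distribution
for st, pr in states:
    print(st, round(pr, 4), "super peer 0 sees", signal(st, 0))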

1) One super peer case: We start by analyzing the game in the simplest case, in which there is a single super peer (the only juror). Suppose that her signal is b. To determine whether she prefers exclusion or acquittal, we need to find the probability she assigns to the defendant's being a malicious peer, given her signal. By Bayes' rule we have:

Pr(G | b) = Pr(b | G) Pr(G) / [Pr(b | G) Pr(G) + Pr(b | I) Pr(I)] = (1 − p)π / [(1 − p)π + q(1 − π)]

Thus, acquittal yields an expected payoff at least as high as does exclusion if and only if

z ≥ (1 − p)π / [(1 − p)π + q(1 − π)].   (5)

That is, after receiving the signal that the defendant is innocent, the juror chooses acquittal as long as z is not too small. If her signal is g, a similar calculation leads to the conclusion that exclusion yields an expected payoff at least as high as does acquittal if

z ≤ pπ / [pπ + (1 − q)(1 − π)].

Thus, if

(1 − p)π / [(1 − p)π + q(1 − π)] ≤ z ≤ pπ / [pπ + (1 − q)(1 − π)],

then the super peer optimally acts according to her signal, acquitting the considered peer when her signal is b and excluding her when it is g. This result also means that in the single super peer scenario, under certain conditions, the outcome is reliable and trustworthy (it reflects the true situation). We will see next that this is usually not the case in the many-voting-peers scenario.

Fig. 2. Condition on z in the single super peer case (p = q = 0.6).
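As a quick numerical check of the single super peer condition above, the following sketch computes the two posteriors that bound z; p = q = 0.6 matches Fig. 2, while the prior π = 0.5 and the function names are assumptions of ours.

def posterior_after_b(prior, p, q):
    """Pr(G | signal b) from Bayes' rule."""
    return (1 - p) * prior / ((1 - p) * prior + q * (1 - prior))

def posterior_after_g(prior, p, q):
    """Pr(G | signal g) from Bayes' rule."""
    return p * prior / (p * prior + (1 - q) * (1 - prior))

prior, p, q = 0.5, 0.6, 0.6
lo, hi = posterior_after_b(prior, p, q), posterior_after_g(prior, p, q)
print(f"acquit on b and exclude on g whenever {lo:.2f} <= z <= {hi:.2f}")
# With these numbers the single juror follows her signal for any z in [0.40, 0.60].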

2) Many super peers case: Suppose that every juror other than juror 1 votes to acquit when her signal is b and to exclude when her signal is g, and consider type b of juror 1. For type b of juror 1, acquittal is at least as good as exclusion if the probability that the defendant is guilty, given that juror 1's signal is b and every other juror's signal is g, is at most z. This probability is

Pr(G | b, g, ..., g) = Pr(b, g, ..., g | G) Pr(G) / [Pr(b, g, ..., g | G) Pr(G) + Pr(b, g, ..., g | I) Pr(I)]
= (1 − p)p^{n−1}π / [(1 − p)p^{n−1}π + q(1 − q)^{n−1}(1 − π)].   (6)

Thus type b of juror 1 optimally votes for acquittal if

z ≥ (1 − p)p^{n−1}π / [(1 − p)p^{n−1}π + q(1 − q)^{n−1}(1 − π)].   (7)

Expressed differently,

z ≥ 1 / [1 + q(1 − q)^{n−1}(1 − π) / ((1 − p)p^{n−1}π)].   (8)

Now, given that p > 1 − q, the denominator on the right-hand side of (8) decreases to 1 as n increases. Thus the lower bound on z for which type b of juror 1 votes for acquittal approaches 1 as n increases. We conclude that the model with a large number of super peers who are concerned about acquitting a guilty defendant has no Nash equilibrium in which every juror votes according to her signal.

Fig. 3. Illustration of parameter z as n increases (p = q = 0.6).
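The sketch below evaluates the lower bound on z appearing in Equation (7), using the Fig. 3 values p = q = 0.6 together with an assumed prior π = 0.5. For n = 1 it coincides with the single-juror posterior after a b signal, and it climbs toward 1 as n grows, which is why signal-following cannot survive as an equilibrium for large groups.

def lower_bound_z(n, prior, p, q):
    """Pr(G | own signal b, the other n-1 signals g), the bound in Equation (7)."""
    num = (1 - p) * (p ** (n - 1)) * prior
    return num / (num + q * ((1 - q) ** (n - 1)) * (1 - prior))

for n in (1, 2, 5, 10, 20, 50):
    print(n, round(lower_bound_z(n, prior=0.5, p=0.6, q=0.6), 4))
# The bound rises toward 1, so for any fixed z < 1 a type b juror eventually
# prefers exclusion when all others' signals point to guilt.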

Let us now consider the mixed strategies of the game. Denote by β(b) the mixed strategy of each juror of type b, i.e. the probability with which a type b juror votes for exclusion. Each type b juror must be indifferent between voting for exclusion and voting for acquittal, because she takes each action with positive probability. As a result, β(b) must be such that the probability that the considered peer is malicious, given that all other super peers vote for exclusion, is equal to z. Now, the probability of any given super peer's voting for exclusion is p + (1 − p)β(b) if the considered peer is malicious and 1 − q + qβ(b) if she is innocent. As a result,

β(b) = [pX − (1 − q)] / [q − (1 − p)X],   (9)

where X = [π(1 − p)(1 − z) / ((1 − π)qz)]^{1/(n−1)}.


Fig. 4. Illustration of β(b) as a function of X.

For a range of parameter values, 0 < β(b) < 1, so β(b) is indeed a probability. Notice that when n is large, X is close to 1, and hence β(b) is close to 1: a super peer who interprets the evidence as pointing to innocence very likely nevertheless votes for exclusion. Put differently, in equilibrium, the probability that an innocent peer is excluded increases as n increases; the larger the group of voting super peers, the more likely an innocent peer is to be excluded from the network.
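The sketch below evaluates Equation (9) for assumed parameter values (π = 0.5, p = q = 0.6, z = 0.6; the values are ours and must be chosen so that β(b) lies in (0, 1)). It also combines β(b) with the unanimity rule stated above to print the equilibrium probability that an innocent peer is excluded, which indeed grows with the number of voting super peers.

def beta_b(n, prior, p, q, z):
    """Equation (9): probability that a type b super peer votes to exclude."""
    X = (prior * (1 - p) * (1 - z) / ((1 - prior) * q * z)) ** (1.0 / (n - 1))
    return (p * X - (1 - q)) / (q - (1 - p) * X)

def prob_innocent_excluded(n, prior, p, q, z):
    """Probability that all n super peers vote E when the peer is innocent."""
    b = beta_b(n, prior, p, q, z)
    return ((1 - q) + q * b) ** n      # unanimity: every vote must be E

prior, p, q, z = 0.5, 0.6, 0.6, 0.6
for n in (4, 5, 10, 20, 50):
    print(n, round(beta_b(n, prior, p, q, z), 3),
          round(prob_innocent_excluded(n, prior, p, q, z), 3))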

3) Discussion: The results for one voting super peer vs. many voting super peers reveal a simple yet striking fact about system management. In a single decision-maker architecture (e.g. a single server with many clients), the decision making process is simple and the outcome is more reliable. In a multiple decision-maker architecture (e.g. a distributed system, and a P2P system in particular), in stark contrast to the client-server architecture, the more decision makers there are, the less reliable the outcome is. We can consider this the cost of making a system distributed, and it should be taken into account when designing a trustable distributed system.

III. CONCLUSION

We have modeled the reporting and exclusion problems in a P2P trust management system using game-theoretic tools. Our analysis has shown that, on the one hand, the more peers there are in the system, the less likely it is that anyone will report a malicious peer. On the other hand, by modeling the decision process as a Bayesian game, we find that the possible application of exclusion in P2P systems might be dangerous. More precisely, our analysis shows that, under certain assumptions, the more voting peers there are, the more likely it is that an innocent peer is excluded from the network. Finally, in light of the investigated paradoxes, we discuss possible solutions to improve the efficiency of current trust management systems in P2P networks.

This paper raises the issues, but it leaves many questions unaddressed. We touched on the question of the size of the population in a P2P network only in a static manner. However, the population of a P2P network in practice is characterized by a very high churn rate, i.e., peers come and leave very frequently.

This is not yet addressed in this paper, and we plan to extend our model to incorporate this issue. We also plan to validate the models through real measurements in operating P2P networks to confirm the soundness of our proposed models.

IV. ACKNOWLEDGEMENT

The author wishes to thank Professor Ernst Biersack (Institut Eurecom, France) for sharing his insights on P2P networks and Dr. Eitan Altman (INRIA, France) for fruitful discussions on game theory.

REFERENCES

[1] E. Adar and B. A. Huberman, Free Riding on Gnutella, First Monday, 5(10), October 2000.

[2] S. Saroiu, P. K. Gummadi, and S. D. Gribble, A Measurement Study of Peer-to-Peer File Sharing Systems, MMCN '02, 2002.

[3] S. Marti, H. Garcia-Molina, Taxonomy of Trust: Categorizing P2P Reputation Systems, Journal of Computer Networks, Special Issue on Management in Peer-to-Peer Systems: Trust, Reputation and Security, Elsevier, 2006.

[4] Z. Despotovic and K. Aberer, P2P Reputation Management: Probabilistic Estimation vs. Social Networks, Journal of Computer Networks, Special Issue on Management in Peer-to-Peer Systems: Trust, Reputation and Security, Elsevier, 2006.

[5] P. Resnick, R. Zeckhauser, E. Friedman, K. Kuwabara, Reputation Systems, Communications of the ACM, 43(12), pp. 45-48.

[6] F. Cornelli, E. Damiani, S. D. Capitani, Choosing Reputable Servents in P2P Networks, The 11th International World Wide Web Conference, 2002.

[7] S. D. Kamvar, M. T. Schlosser, H. Garcia-Molina, The EigenTrust Algorithm for Reputation Management in P2P Networks, 12th International World Wide Web Conference, 2003.

[8] S. D. Kamvar, M. T. Schlosser, EigenRep: Reputation Management in P2P Networks, 12th International World Wide Web Conference, 2003.

[9] R. Jurca, B. Faltings, Towards Incentive-Compatible Reputation Management, Workshop on Deception, Fraud and Trust in Agent Societies, 2002.

[10] M. Feldman, C. Papadimitriou, J. Chuang, I. Stoica, Free-Riding and Whitewashing in Peer-to-Peer Systems, ACM SIGCOMM Workshop on Practice and Theory of Incentives and Game Theory in Networked Systems, 2004.

[11] KaZaA Home Page, http://www.kazaa.com/

[12] Gnutella Home Page, http://gnutella.wego.com/

[13] M. J. Osborne and A. Rubinstein, A Course in Game Theory, Cambridge, Massachusetts: The MIT Press, 1994.
