SORT: A Self-ORganizing Trust Model for Peer-to-peer Systems

Ahmet Burak Can and Bharat Bhargava
Department of Computer Science, Purdue University
West Lafayette, IN 47907
{acan, bb}@cs.purdue.edu

This research is supported by NSF grants ANI 0219110, IIS 0209059 and IIS 0242840.

November 3, 2006 DRAFT


Abstract

This paper presents distributed algorithms used by a peer to reason about the trustworthiness of other peers based on the available local information, which includes past interactions and recommendations received from others. Peers collaborate to establish trust among each other without using a priori information or a trusted third party. A peer's trustworthiness in providing services, e.g., uploading files, and in giving recommendations is evaluated in service and recommendation contexts. Three main trust metrics, reputation, service trust, and recommendation trust, are defined to precisely measure trustworthiness in these contexts. An interaction is evaluated based on three parameters: satisfaction, weight, and fading effect. When evaluating a recommendation, in addition to these parameters, the recommender's trustworthiness and confidence about the information provided are considered. A file sharing application is simulated to understand the capabilities of the proposed algorithms in mitigating attacks. For realism, peer and resource parameters are based on several empirical studies. Service and recommendation based attacks are simulated. Nine different behavior models representing individual, collaborative, and identity-changing malicious peers are studied in the experiments. Observations demonstrate that malicious peers are identified by good peers, and the attacks are mitigated even if malicious peers gain high reputation. Collaborative recommendation-based attacks might be successful when malicious peers discriminate among good peers. Identity changing is not a good attack strategy.

I. INTRODUCTION

P2P systems rely on the collaboration of peers to accomplish tasks. Peers need to trust each other for successful operation of the system. A malicious peer can use the trust of others to gain advantage or cause harm. Feedback from peers is needed to detect malicious behavior [1]. Since the feedback might be deceptive, identifying a malicious peer with high confidence is a challenge [2]. Determining trustworthy peers requires a study of how peers can establish trust among each other [3]. Long-term trust information about other peers can reduce the risk and uncertainty in future interactions [1].

Interactions and feedback provide a means to establish trust among peers [4], [5], [6]. Aberer and Despotovic [4] developed a model in which the trustworthiness of a peer is measured based on complaints. A peer is assumed to be trustworthy unless there are complaints about it. P-Grid [7] provides decentralized and efficient access to trust information. Eigentrust [5] uses transitivity of trust, which allows a peer to calculate global trust values of other peers. A distributed hash table (CAN [8]) is used for efficient access to global trust information. Trust of some base peers helps to build trust among all other peers. A recommendation is evaluated according to the credibility of the recommender. The experiments of Eigentrust on a file sharing application show that trust information can mitigate attacks of collaborative malicious peers. PeerTrust [6] defines community and transaction context parameters in order to address application-specific features of interactions. Four different trust calculation methods are discussed and studied in their experiments. An important aspect of trust calculation is to be adaptive to application-dependent factors.

We propose a Self-ORganizing Trust model (SORT) that enables peers to create and manage trust relationships without using a priori information. Since assuming preexistence of trust among peers [4] does not distinguish a newcomer from a trustworthy peer [6], SORT assumes that all peers are strangers to each other at the beginning. Peers must contribute to others in order to build trust relationships. Malicious behavior quickly destroys such a relationship [5], [9]. Thus, the Sybil attack [10], which involves changing pseudonyms to clear a bad interaction history, is costly for malicious peers. In SORT, trusted peers [5] are not needed to leverage trust establishment. A trusted peer


cannot observe all interactions in a P2P system and might be a source of misleading information. A peer becomes an acquaintance of another peer after providing a service to it, e.g., uploading a file. Using a service of a peer is called a service interaction. A recommendation represents an acquaintance's trust information about a stranger. A peer requests recommendations only from its acquaintances.

Measuring trust with numerical metrics is hard [3], [11], [12]. Classifying peers as either trustworthy or untrustworthy [4] is not sufficient. Metrics should have enough precision that peers can be ranked according to their trustworthiness [5], [13]. As in Eigentrust, SORT's trust metrics are normalized to take real values between 0 and 1. Eigentrust considers two peers equally trustworthy if they are assigned the same trust value. In SORT, trust values are considered together with the level of past experience. A peer with more past interactions is preferred among peers assigned the same trust value.

An interaction represents a peer's definite information about another one. A recommendation contains less certain, second-hand information. Combining these two types of information in one metric [4], [5] and using it to measure trustworthiness for different tasks [4], [5], [6], [9] may cause incorrect decisions. SORT defines three important trust metrics: reputation, service trust, and recommendation trust. Reputation is the primary metric when deciding about strangers. Recommendations are used to calculate the reputation of a stranger. Providing services and giving recommendations are different tasks. A peer may be a good service provider and a bad recommender at the same time. SORT therefore defines two contexts of trust, the service and recommendation contexts; the metrics in these contexts are called service trust and recommendation trust, respectively. Service trust represents a peer's trustworthiness in the service context. Recommendation trust is the primary metric when requesting and evaluating recommendations. When a peer gives misleading recommendations, it loses the recommendation trust of others, but its service trust remains the same. Similarly, a failed service interaction decreases the value of the service trust metric.

The quality of an interaction is measured with three parameters: satisfaction, weight, and fading effect. A binary value [4], [5] for satisfaction does not reflect how good a service provider was during the interaction. In SORT, satisfaction can be expressed in detail according to application-dependent parameters [14]. The weight parameter addresses the varying importance of different interactions [6]. Interactions with larger weight values are more important in trust calculation. The effect of an interaction on trust calculation fades as new interactions happen [15], [16]. This forces a peer to behave consistently. A peer with a large number of good interactions cannot disguise failures in future interactions for a long period of time.

A recommendation contains the recommender's own experience with a stranger, information collected from its acquaintances, and a level of confidence about this information. It is evaluated based on recommendation trust [5], [6], [9]. A recommendation from a highly trusted peer is more important. If the level of confidence has a small value, the recommendation is considered weak and has less effect on the reputation calculation. A recommender is liable for its recommendations: after giving a recommendation, the recommender's trust value changes in proportion to its level of confidence. If a weak recommendation is inaccurate, the trust value does not diminish quickly.

When a peer wants to get a service and has no acquaintances, it simply chooses to trust any stranger that provides the service. If the peer has some acquaintances, it may select either an acquaintance or a stranger. An acquaintance is always preferred over a stranger if they are equally trustworthy. Selection of a stranger is based on its reputation. Acquaintances are selected according to the service trust and reputation metrics. If many interactions happened with an


acquaintance, the service trust metric is more important than the reputation metric. Otherwise, the reputation of the acquaintance is more important during the selection process.

Experimental studies are conducted to understand the impact of SORT in mitigating service and recommendation based attacks. A P2P file sharing application has been simulated. Parameters related to peer capabilities (bandwidth, number of shared files), peer behavior (online/offline periods, waiting time for sessions), and resource distribution (file sizes, popularity of files) are approximated from several empirical results [17], [18], [19]. This enables us to make realistic observations about the success of the algorithms and how trust relationships evolve among peers. Experiments were done on nine types of malicious peers. The impact of attacks related to file upload/download operations was always reduced. A malicious peer who performs collaborative attacks rarely but behaves honestly at other times is harder to deal with. Attacks on recommendations were mitigated in all experiments except when a malicious peer performs collaborative attacks against a small percentage of peers and stays honest with the other peers.

The main contributions of this research are outlined as follows:

• Trust metrics are defined in service and recommendation contexts. The two contexts of trust distinguish the capabilities of peers based on the services provided and the recommendations given.

• SORT's distributed algorithms enable peers to make autonomous decisions without requiring any trusted peers. A peer adaptively adjusts the necessary level of trust according to its trust relationships.

• A recommendation evaluation scheme is defined. A recommender's trustworthiness and level of confidence about the recommendation are considered for a more accurate calculation of reputations and a fair evaluation of recommendations.

• Simulation experiments on a file sharing application verify SORT's ability to mitigate attacks. An evaluation scheme is defined for service interactions based on the application parameters. An attacker model is presented, including service and recommendation based attacks and nine types of malicious peers.

The outline of the paper is as follows. Section II discusses the related research. The algorithms and formal definitions of SORT are explained in Section III. The simulation experiments on SORT are presented in Section IV. We outline future research opportunities to extend SORT in Section V and present conclusions in Section VI.

II. RELATED WORK

A formal model of trust based on sociological foundations is defined by Marsh [11]. In this model, an agent uses its own experiences when building trust and does not consider the information of other agents. Abdul-Rahman and Hailes' trust model [3] evaluates trust as an aggregation of direct experience and recommendations of other parties. Trust metrics are defined in a discrete domain. A semantic distance measure is defined to test the accuracy of recommendations. Zhong [13] proposes a dynamic trust concept based on McKnight's social trust model [12]. Uncertain evidence can be used when building trust relationships. Second-order probability and the Dempster-Shafer framework help in evaluating uncertain evidence.

Reputation was first used as a method of building trust in e-commerce communities. Resnick et al. [1] point out the limitations and capabilities of reputation systems. Ensuring long-lived relationships, forcing feedback, and checking the honesty of recommendations are some difficulties in reputation systems. Dellarocas [2] explains two common attacks on reputation systems: unfairly high/low ratings and discriminatory seller behavior. Controlled anonymity and cluster filtering


methods are proposed as countermeasures. Despotovic and Aberer [20] study an online trade scenario among self-interested sellers and buyers. Trust-aware exchanges can increase economic activity since some exchanges may not happen without trust establishment. Terzi et al. [21] introduce an algorithm to classify users and assign them roles based on trust relationships. Yu and Singh's model [22] propagates trust information through referral chains. Referrals are the primary method of developing trust in others. Mui et al. [23] propose a statistical model based on trust, reputation, and reciprocity concepts. Reputation can be propagated through multiple referral chains. Jøsang et al. [24] discuss the transitivity of trust with referrals. Recommendations based on indirect trust relations may cause incorrect trust derivation. Thus, trust topologies should be carefully evaluated before propagating trust information.

Trust models have been applied to P2P systems. Some prominent ones, Aberer and Despotovic [4], Eigentrust [5], and PeerTrust [6], are already mentioned in the introduction. Cornelli et al. [25], [26] describe how to use trust in Gnutella [27] to mitigate inauthentic file downloads. Trust metrics and a computational model are not defined. They define a polling protocol in which a peer floods the network to learn trust information about another peer. An enhanced version of the protocol verifies the identity of replying peers [26]. Selcuk et al. [9] present a vector-based trust metric relying on both interactions and recommendations. If a peer has enough neighbors, it queries them to learn the reputation of unknown peers. Otherwise, the query is flooded to all peers. Five types of malicious peers are simulated on a file-sharing application. However, recommendation-based attacks are not simulated. Ooi et al. [28] propose that each peer store its own reputation using signed certificates. This approach eliminates the need for reputation queries, but it requires a public-key infrastructure. Timely update of the trust information in a certificate is also a problem. NICE [29] uses signed cookies as proof of good behavior. Peers dynamically form trust groups to protect each other. Peers in the same group have a higher trust in each other. Trust-based pricing and trading policies help to protect the integrity of groups. Moreton and Twigg [30] use trust to increase routing security. The same authors define a trust trading protocol to create an incentive mechanism for P2P networks [31]. Gupta and Somani [32] use reputation as a currency to get a better quality of service. Peers form trust groups and watch members of the groups closely. A peer's reputation is proportional to the peer's own reputation and its group's reputation. Another interesting use of trust, in [33], discusses trading privacy to gain more trust in pervasive systems. Wang and Vassileva [14] propose a Bayesian network model which uses different aspects of interactions in a P2P file sharing application.

III. THE COMPUTATIONAL MODEL OF SORT

We make the following assumptions. Peers are indistinguishable in computational power and responsibility. There are no privileged, centralized, or trusted peers to manage trust relationships. The majority of peers are expected to be honest, but some might behave maliciously. Peers occasionally leave and join the network. A peer provides services and uses the services of others. For simplicity of discussion, one operation is considered in the service context, e.g., file download.

A. Notations

pi denotes the ith peer. When pi uses a service of pj, a service interaction for pi occurs. Interactions are unidirectional. For example, if pi downloads a file from pj, no information is stored on pj about this download.


If pi has not had a service interaction with pj, pj is a stranger to pi. An acquaintance of pi is a peer with whom pi has had at least one service interaction. pi's set of acquaintances is denoted by Ai. A peer stores a separate history of service interactions for each acquaintance. SHij denotes pi's service history with pj. Since new interactions are appended to the history, SHij is a time-ordered list.

Parameters for Interactions. After finishing a service interaction, pi evaluates the quality of the service. 0 ≤ e^k_ij ≤ 1 denotes pi's satisfaction with the kth service interaction with pj. If the interaction is cancelled, e^k_ij gets the value 0. k is the sequence number of the interaction in SHij.

A service interaction is associated with a weight to quantify the importance of the interaction. 0 ≤ w^k_ij ≤ 1 denotes the weight of the kth service interaction of pi with pj.

The importance of an interaction fades as new interactions happen. 0 ≤ f^k_ij ≤ 1 denotes the fading effect of the kth service interaction of pi with pj. It is calculated as follows:

    f^k_ij = k / shij,   1 ≤ k ≤ shij    (1)

After adding (or deleting) a service interaction in SHij, pi recalculates the f^k_ij values. f^k_ij could alternatively be defined as a function of time, but then it would have to be recalculated whenever its value is needed.

The semantics used to calculate the e^k_ij and w^k_ij values depend on the application. In a file sharing application, authenticity of the file, average download speed, average delay, retransmission rate of packets, and online/offline periods of the service provider might be parameters for calculating e^k_ij. Size and popularity of the file might be parameters for calculating w^k_ij. In Section IV, we define sample methods to calculate these values for a file sharing application.

Let τ^k_ij = (e^k_ij, w^k_ij, f^k_ij) be a tuple representing the information about the kth interaction. We define SHij = {τ^1_ij, τ^2_ij, ..., τ^shij_ij}, where shij denotes the current size of the history. shmax denotes the upper bound on the service history size, which is assumed to be known by all peers. When adding a new service interaction, τ^1_ij is deleted if shij = shmax. A service interaction is deleted from the history after an expiry period. The expiry period should be determined according to shmax and the rate of interactions among peers.

Trust Metrics. 0 ≤ rij ≤ 1 denotes pi's reputation value about pj. Similarly, 0 ≤ stij, rtij ≤ 1 denote pi's service and recommendation trust values about pj, respectively. The following sections discuss the calculation of these metrics. When pj is a stranger to pi, we define rij = stij = rtij = 0. This is a protection against the Sybil attack [10], since a malicious peer cannot gain advantage by changing its pseudonym. If pi is interested in a service of pj, it starts a reputation query about pj by requesting recommendations from its acquaintances. It calculates rij based on the collected information. If pi cannot get any recommendations, it sets rij = 0.
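The history bookkeeping described above can be sketched in Python. This is an illustrative sketch, not the authors' implementation; the class and constant names (Interaction, ServiceHistory, SH_MAX) are ours.

```python
# Sketch of a bounded service history SH_ij with fading effects
# recomputed per Equation 1: f^k_ij = k / sh_ij.
from dataclasses import dataclass, field

SH_MAX = 10  # sh_max, assumed known by all peers

@dataclass
class Interaction:
    e: float      # satisfaction, 0 <= e <= 1
    w: float      # weight, 0 <= w <= 1
    f: float = 0.0  # fading effect, recomputed on every insertion/deletion

@dataclass
class ServiceHistory:
    interactions: list = field(default_factory=list)

    def add(self, e: float, w: float) -> None:
        if len(self.interactions) == SH_MAX:
            self.interactions.pop(0)  # drop tau^1_ij when the history is full
        self.interactions.append(Interaction(e, w))
        self._refresh_fading()

    def _refresh_fading(self) -> None:
        sh = len(self.interactions)
        for k, tau in enumerate(self.interactions, start=1):
            tau.f = k / sh  # newer interactions get larger fading values

sh = ServiceHistory()
for e in (1.0, 0.8, 0.9):
    sh.add(e, w=0.5)
print([round(t.f, 2) for t in sh.interactions])  # [0.33, 0.67, 1.0]
```

Recomputing every f^k_ij after each insertion keeps the list consistent with Equation 1 without tracking timestamps, matching the paper's choice of an index-based rather than time-based fading effect.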

B. Service Trust Metric (stij)

This section describes the calculation of the service trust metric. A peer first calculates competence and integrity belief values using the information about service interactions. Competence belief is based on how well an acquaintance satisfied the needs of past interactions [12], [13], [34], [35]. cbij denotes the competence belief of pi about pj in the service context. Average behavior in past interactions can be a measure of competence belief. pi calculates cbij as follows:

    cbij = (1/βcb) · Σ_{k=1}^{shij} (e^k_ij · w^k_ij · f^k_ij)    (2)

November 3, 2006 DRAFT

Page 7: SORT: A Self-ORganizing Trust Model for Peer-to …llilien/teaching/fall2006/cs6910...1 SORT: A Self-ORganizing Trust Model for Peer-to-peer Systems Ahmet Burak Can and Bharat Bhargava

7

TABLE I
NOTATIONS RELATED TO SERVICE TRUST METRICS

Notation   Description
pi         a peer with identifier i
e^k_ij     satisfaction of pi's kth interaction with pj
w^k_ij     weight of pi's kth interaction with pj
f^k_ij     fading effect of pi's kth interaction with pj
shij       size of pi's service history with pj
cbij       pi's competence belief about pj
ibij       pi's integrity belief about pj
stij       pi's service trust value about pj
rij        pi's reputation value about pj

βcb = Σ_{k=1}^{shij} (w^k_ij · f^k_ij) is the normalization coefficient. If pj has completed all service interactions perfectly (e^k_ij = 1 for all k), the coefficient βcb ensures that cbij = 1. Since 0 ≤ e^k_ij, w^k_ij, f^k_ij ≤ 1 by definition, cbij takes a value between 0 and 1.

A peer can be competent but may present erratic behavior. Consistency is as important as competence. The level of confidence in the predictability of future interactions is called integrity belief [12], [13], [34], [35], [36]. ibij denotes the integrity belief of pi about pj in the service context. Deviation from the average behavior is a measure of integrity belief. Therefore, ibij is calculated as an approximation to the standard deviation of the interaction parameters:

    ibij = sqrt( (1/shij) · Σ_{k=1}^{shij} (e^k_ij · wµ_ij · fµ_ij − cbij)² )    (3)

A small value of ibij means more predictable behavior of pj in future interactions. wµ_ij and fµ_ij are the means of the w^k_ij and f^k_ij values in SHij, respectively. We can approximate fµ_ij as follows:

    fµ_ij = (1/shij) · Σ_{k=1}^{shij} f^k_ij = (shij + 1) / (2·shij) ≈ 1/2    (4)

Based on past interactions with pj, pi has an expectation about future interactions and wants to maintain a level of satisfaction that matches this expectation. Assume that the satisfaction parameter follows a normal distribution and that cbij and ibij are approximations of its mean (µ) and standard deviation (σ), respectively. According to the cumulative distribution function of the normal distribution, an interaction's satisfaction is less than cbij with probability Φ(0) = 0.5. If pi sets stij = cbij, half of the future interactions are likely to have a satisfaction value less than cbij. Thus, stij = cbij is an over-estimate of pj's trustworthiness. A lower estimate makes pi more confident in future decisions about pj. pi may calculate stij as follows:

    stij = cbij − ibij/2    (5)

In this case, a future interaction's satisfaction is less than stij with probability Φ(−0.5) ≈ 0.31. Adding ibij to the calculation forces pj to behave more consistently.
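Equations 2, 3, and 5 can be sketched as follows. This is an illustrative Python sketch under the stated definitions; the helper names and the sample history are ours, assuming each interaction is an (e, w, f) tuple.

```python
# Sketch of competence belief (Eq. 2), integrity belief (Eq. 3), and the
# lower-estimate service trust of Eq. 5. Not the authors' code.
import math

def competence_belief(history):
    """cb_ij: normalized weighted average satisfaction (Equation 2)."""
    beta_cb = sum(w * f for _, w, f in history)
    return sum(e * w * f for e, w, f in history) / beta_cb

def integrity_belief(history):
    """ib_ij: approximate deviation from average behavior (Equation 3)."""
    sh = len(history)
    w_mu = sum(w for _, w, _ in history) / sh   # mean weight
    f_mu = sum(f for _, _, f in history) / sh   # mean fading, ~1/2 (Eq. 4)
    cb = competence_belief(history)
    return math.sqrt(sum((e * w_mu * f_mu - cb) ** 2 for e, _, _ in history) / sh)

history = [(0.9, 1.0, 1/3), (0.8, 1.0, 2/3), (1.0, 1.0, 1.0)]
cb = competence_belief(history)
ib = integrity_belief(history)
st = cb - ib / 2  # Equation 5: deliberately below the mean
print(round(cb, 3), round(ib, 3), round(st, 3))  # 0.917 0.321 0.756
```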


TABLE II
NOTATIONS RELATED TO REPUTATION AND RECOMMENDATION TRUST METRICS

Notation   Description
erij       pi's estimation of the reputation of pj
ecbij      pi's estimation of the competence belief about pj in the service context
eibij      pi's estimation of the integrity belief about pj in the service context
re^z_ik    satisfaction of pi's zth recommendation from pk
rw^z_ik    weight of pi's zth recommendation from pk
rf^z_ik    fading effect of pi's zth recommendation from pk
rhik       size of pi's recommendation history with pk
rcbik      pi's competence belief about pk in the recommendation context
ribik      pi's integrity belief about pk in the recommendation context
rtik       pi's recommendation trust about pk

Equation 5 is not complete since the reputation of pj is not considered. Reputation is especially important in the early phases of a trust relationship. When there are no (or few) interactions with an acquaintance, a peer needs to rely on reputation. As more interactions happen, the competence and integrity belief values gain importance. Therefore, pi calculates stij as follows:

    stij = (shij/shmax) · (cbij − ibij/2) + ((shmax − shij)/shmax) · rij    (6)

Equation 6 balances the effects of interactions and reputation on the stij value. When pj is a stranger to pi, shij = 0 and stij = rij. As interactions happen with pj, shij grows and rij becomes less important. When shij = shmax, rij has no effect on stij.
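The blending in Equation 6 can be sketched as a one-line helper (illustrative; the function name and constant are ours):

```python
# Sketch of Equation 6: blending direct experience with reputation.
SH_MAX = 10  # sh_max

def service_trust(sh, cb, ib, r):
    """st_ij = (sh/sh_max)(cb - ib/2) + ((sh_max - sh)/sh_max) r."""
    return (sh / SH_MAX) * (cb - ib / 2) + ((SH_MAX - sh) / SH_MAX) * r

print(round(service_trust(0, 0.0, 0.0, 0.6), 3))   # 0.6: stranger, st = r
print(round(service_trust(10, 0.9, 0.2, 0.6), 3))  # 0.8: full history, r ignored
```

The two calls illustrate the boundary cases stated above: with shij = 0 the metric reduces to reputation, and with shij = shmax reputation drops out entirely.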

C. Reputation Metric (rij)

This section describes the calculation of the reputation metric. In the following two sections, we assume that pj is a stranger to pi and pk is an acquaintance of pi. pi wants to calculate the rij value. It starts a reputation query to collect recommendations from its acquaintances.

Algorithm 1 shows how pi selects trustworthy acquaintances and requests their recommendations. ηmax denotes the maximum number of recommendations that can be collected in a reputation query. |·| denotes the size of a set. pi sets a threshold range for recommendation trust values and selects the acquaintances in this range. pi requests recommendations from the selected peers. Then it decreases the range and repeats the same operations. To prevent excessive network traffic, the algorithm stops when ηmax recommendations are collected or the threshold drops below µrt − σrt.

Let Ti = {p1, p2, ..., pti} be the set of trustworthy peers selected by Algorithm 1. ti is the number of peers in this set. If pk ∈ Ti has had at least one service interaction with pj, it replies with the following information as a recommendation:

• cbkj, ibkj: These values are a measure of pk's own experience with pj.


Algorithm 1 GETRECOMMENDATIONS(pj)
 1: µrt ⇐ (1/|Ai|) · Σ_{pk∈Ai} rtik
 2: σrt ⇐ (1/|Ai|) · sqrt( Σ_{pk∈Ai} (rtik − µrt)² )
 3: thhigh ⇐ 1
 4: thlow ⇐ µrt + σrt
 5: rset ⇐ ∅
 6: while µrt − σrt ≤ thlow and |rset| < ηmax do
 7:   for all pk ∈ Ai do
 8:     if thlow ≤ rtik ≤ thhigh then
 9:       rec ⇐ RequestRecommendation(pk, pj)
10:       rset ⇐ rset ∪ {rec}
11:     end if
12:   end for
13:   thhigh ⇐ thlow
14:   thlow ⇐ thlow − σrt/2
15: end while
16: return rset
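Algorithm 1 can be sketched in Python as follows. This is an illustrative sketch: `rt` maps acquaintances to their rtik values, `request_recommendation` is a stub standing in for the real protocol message, and the `queried` set (our addition) avoids re-querying a peer whose rtik falls exactly on a shifting threshold boundary.

```python
# Sketch of Algorithm 1: widen the recommendation-trust window downward,
# half a standard deviation at a time, until eta_max replies are collected
# or the window floor passes mu - sigma.
import math

ETA_MAX = 5  # eta_max

def request_recommendation(pk, pj):
    return {"from": pk, "about": pj}  # stub for the network call

def get_recommendations(pj, rt):
    n = len(rt)
    mu = sum(rt.values()) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in rt.values())) / n
    th_high, th_low = 1.0, mu + sigma
    rset, queried = [], set()
    while mu - sigma <= th_low and len(rset) < ETA_MAX:  # stop conditions
        for pk, rtik in rt.items():
            if pk not in queried and th_low <= rtik <= th_high:
                queried.add(pk)
                rset.append(request_recommendation(pk, pj))
        th_high, th_low = th_low, th_low - sigma / 2  # shift window down
    return rset

rt = {"p1": 0.9, "p2": 0.7, "p3": 0.4, "p4": 0.2}
recs = get_recommendations("pj", rt)
print([r["from"] for r in recs])  # ['p1', 'p2']
```

With the sample values, µrt = 0.55 and σrt ≈ 0.135, so only p1 and p2 ever fall inside the window before the floor µrt − σrt is reached; the low-trust acquaintances p3 and p4 are never queried.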

• shkj: The history size is a measure of pk's confidence in the cbkj and ibkj values. If pk has had many service interactions with pj, the cbkj and ibkj values are more credible.

• rkj: If pk has had some service interactions with pj, it should have already calculated rkj. This value is a summary of the recommendations of pk's acquaintances. pi can make a better approximation of the global reputation of pj by aggregating rkj values from its acquaintances.

• ηkj: ηkj denotes the number of pk's acquaintances which provided a recommendation during the calculation of rkj. This value is a measure of pk's confidence in the rkj value. If the ηkj value is close to ηmax, the rkj value is credible since more peers agree on it.

Including the shkj and ηkj values in the recommendation protects pk's credibility in the view of pi. If pk's knowledge about pj is insufficient, pi figures this out by examining the values of shkj and ηkj. Thus, pi does not judge pk harshly if the cbkj, ibkj, rkj values turn out to be inaccurate.

pi evaluates a recommendation according to the recommendation trust value of the acquaintance. In particular, pi evaluates pk's recommendation based on the rtik value. The calculation of rtik is explained in Section III-D. If pi has never received a recommendation from pk, we set rtik = rik. Note that pk is an acquaintance of pi, so rik should already be computed.

After collecting all recommendations, pi calculates erij, an estimation of the reputation of pj, by aggregating the rkj values in the recommendations. rkj should be considered with respect to ηkj. Equation 7 shows the calculation of erij, where βer = Σ_{pk∈Ti} (rtik · ηkj) is the normalization coefficient:

    erij = (1/βer) · Σ_{pk∈Ti} (rtik · ηkj · rkj)    (7)
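Equation 7 is a trust- and confidence-weighted average, which can be sketched as follows (illustrative helper; the triple layout is our assumption):

```python
# Sketch of Equation 7: er_ij as an average of r_kj values, weighted by
# the recommender's trust rt_ik and its confidence eta_kj.
def estimate_reputation(recs):
    """recs: list of (rt_ik, eta_kj, r_kj) triples from the selected peers."""
    beta_er = sum(rt * eta for rt, eta, _ in recs)
    return sum(rt * eta * r for rt, eta, r in recs) / beta_er

recs = [(0.9, 5, 0.8),   # highly trusted, confident recommender
        (0.6, 2, 0.4)]   # less trusted, less confident recommender
print(round(estimate_reputation(recs), 3))  # 0.716
```

The result (0.716) sits much closer to 0.8 than to 0.4: the trusted, confident recommender dominates, which is the intent of weighting by rtik · ηkj.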

Then, pi calculates estimations of the competence and integrity beliefs about pj, denoted by ecbij and eibij, respectively. These values are calculated by aggregating the cbkj and ibkj values in the recommendations. cbkj and ibkj should be evaluated based on shkj. Equations 8 and 9 show the calculation of ecbij and eibij, where βecb = Σ_{pk∈Ti} (rtik · shkj) is the normalization coefficient:

    ecbij = (1/βecb) · Σ_{pk∈Ti} (rtik · shkj · cbkj)    (8)

    eibij = (1/βecb) · Σ_{pk∈Ti} (rtik · shkj · ibkj)    (9)

Now, pi has two types of information. ecbij and eibij represent the own experiences of pi's acquaintances with pj. erij represents the information collected from the acquaintances of pi's acquaintances. pi calculates µsh = Σ_{pk∈Ti} (shkj) / ti. If µsh is close to the shmax value, pi's acquaintances had a high level of own experience with pj. In this case, the ecbij and eibij values should be given more importance than the erij value. Otherwise, the erij value should be more important. With these observations, rij is calculated in a similar way to stij:

    rij = (⌊µsh⌋/shmax) · (ecbij − eibij/2) + ((shmax − ⌊µsh⌋)/shmax) · erij    (10)
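Equations 8-10 together turn a set of recommendations into a single reputation value. The sketch below is illustrative: each recommendation is assumed to be a (rt_ik, sh_kj, cb_kj, ib_kj, eta_kj, r_kj) tuple, and all peers in the sample had at least one interaction with pj.

```python
# Sketch of Equations 8-10: aggregate recommended beliefs (weighted by
# history size sh_kj) and recommended reputations (weighted by eta_kj),
# then blend them by the floor of the mean history size.
import math

SH_MAX = 10  # sh_max

def reputation(recs):
    beta_ecb = sum(rt * sh for rt, sh, *_ in recs)
    ecb = sum(rt * sh * cb for rt, sh, cb, _, _, _ in recs) / beta_ecb  # Eq. 8
    eib = sum(rt * sh * ib for rt, sh, _, ib, _, _ in recs) / beta_ecb  # Eq. 9
    beta_er = sum(rt * eta for rt, _, _, _, eta, _ in recs)
    er = sum(rt * eta * r for rt, _, _, _, eta, r in recs) / beta_er    # Eq. 7
    mu_sh = math.floor(sum(sh for _, sh, *_ in recs) / len(recs))
    # Equation 10: more first-hand experience -> lean on ecb/eib, else on er.
    return (mu_sh / SH_MAX) * (ecb - eib / 2) + ((SH_MAX - mu_sh) / SH_MAX) * er

recs = [(0.9, 8, 0.85, 0.1, 5, 0.8),
        (0.6, 2, 0.50, 0.3, 2, 0.4)]
print(round(reputation(recs), 3))  # 0.726
```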

D. Recommendation Trust Metric (rtik)

After calculating the rij value, pi updates the recommendation trust of its acquaintances based on the accuracy of their recommendations. In particular, this section explains how pi updates rtik according to pk's recommendation.

Similar to service interactions, three parameters are stored about recommendations. 0 ≤ re^z_ik, rw^z_ik, rf^z_ik ≤ 1 denote the satisfaction, weight, and fading effect of pi's zth recommendation received from pk, respectively. Let γ^z_ik = (re^z_ik, rw^z_ik, rf^z_ik) be a tuple representing the information about the zth recommendation. RHik = {γ^1_ik, γ^2_ik, ..., γ^rhik_ik} denotes the recommendation history of pi with pk. rhik is the size of RHik, and rhmax denotes the upper bound on the size, known by all peers.

To calculate the re^z_ik parameter, pi compares rkj, cbkj, and ibkj with erij, ecbij, and eibij, respectively. If these values are close to each other, pi should assign a high value to re^z_ik. The calculation of re^z_ik is as follows:

    re^z_ik = ( (1 − |rkj − erij|/erij) + (1 − |cbkj − ecbij|/ecbij) + (1 − |ibkj − eibij|/eibij) ) / 3    (11)

From Equations 7-9, we know that pk's recommendation affects rij in proportion to the shkj and ηkj values. The effect of these values on rij is also proportional to ⌊µsh⌋ due to Equation 10. Thus, the weight of a recommendation should be calculated with respect to the shkj, ηkj, and ⌊µsh⌋ values. pi calculates rw^z_ik as follows:

    rw^z_ik = (⌊µsh⌋/shmax) · (shkj/shmax) + ((shmax − ⌊µsh⌋)/shmax) · (ηkj/ηmax)    (12)

If the shkj and ηkj values are small, the rw^z_ik value will be small; otherwise, rw^z_ik will be large. This is the desired result based on our observations about Equations 7-9. The ⌊µsh⌋ value balances the effects of shkj and ηkj on the rw^z_ik value. If acquaintances had many interactions with pj (⌊µsh⌋ is large), shkj has more effect on the result. If ⌊µsh⌋ is small, ηkj will be more important.
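Equations 11 and 12 can be sketched as two helpers (illustrative names and sample values; the aggregates er, ecb, eib, and µsh are assumed to have been computed as in the previous section):

```python
# Sketch of Equations 11-12: scoring a collected recommendation once the
# aggregates er_ij, ecb_ij, eib_ij are known.
import math

SH_MAX, ETA_MAX = 10, 5  # sh_max, eta_max

def rec_satisfaction(r_kj, cb_kj, ib_kj, er, ecb, eib):
    """re^z_ik (Eq. 11): closeness of pk's claims to the aggregates."""
    return ((1 - abs(r_kj - er) / er)
            + (1 - abs(cb_kj - ecb) / ecb)
            + (1 - abs(ib_kj - eib) / eib)) / 3

def rec_weight(sh_kj, eta_kj, mu_sh):
    """rw^z_ik (Eq. 12): weight grows with pk's evidence about pj."""
    mu = math.floor(mu_sh)
    return (mu / SH_MAX) * (sh_kj / SH_MAX) \
        + ((SH_MAX - mu) / SH_MAX) * (eta_kj / ETA_MAX)

re_z = rec_satisfaction(0.8, 0.85, 0.10, er=0.75, ecb=0.80, eib=0.12)
rw_z = rec_weight(sh_kj=8, eta_kj=5, mu_sh=5.0)
print(round(re_z, 3), round(rw_z, 3))  # 0.901 0.9
```

A recommendation whose claims track the aggregates closely (here within a few percent) earns a satisfaction near 1, and a recommender with a large history and many contributing acquaintances earns a large weight, exactly the liability-in-proportion-to-confidence behavior described in the introduction.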


[Figure 1 shows the message flow between pi, pk, and pj: (1) recommendation request for pj, (2) recommendation for pj, (3) service request, (4) service. Updates on pi: after step 2, RHik, rtik, rij; after step 4, SHij, stij.]

Fig. 1. Operations during recommendation and service interactions

rcbik and ribik denote the competence and integrity beliefs of pi about pk in the recommendation context, respectively. With these parameters, pi calculates rtik in the same way as the service trust metric:

rcbik = (1 / βrcb) · Σ_{z=1..rhik} (re^z_ik · rw^z_ik · rf^z_ik)    (13)

ribik = sqrt( (1 / rhik) · Σ_{z=1..rhik} (re^z_ik · rw^µ_ik · rf^µ_ik − rcbik)² )    (14)

rtik = (rhik / rhmax) · (rcbik − ribik/2) + ((rhmax − rhik) / rhmax) · rik    (15)

rw^µ_ik and rf^µ_ik are the means of the rw^z_ik and rf^z_ik values in RHik, respectively. βrcb = Σ_{z=1..rhik} (rw^z_ik · rf^z_ik) is the normalization coefficient. If pi had no recommendation from pk, rtik = rik according to Equation 15.
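Equations 13-15 together can be sketched on a stored recommendation history (a minimal sketch with hypothetical names; the history is a list of (re, rw, rf) tuples):

```python
import math

def recommendation_trust(history, r_ik, rh_max):
    """Equations 13-15: competence (rcb), integrity (rib), and the
    resulting recommendation trust rt_ik for peer pk."""
    rh = len(history)
    if rh == 0:
        return r_ik                         # no history: fall back to reputation
    beta = sum(rw * rf for _, rw, rf in history)         # normalization (Eq. 13)
    rcb = sum(re * rw * rf for re, rw, rf in history) / beta
    rw_mu = sum(rw for _, rw, _ in history) / rh         # mean weight
    rf_mu = sum(rf for _, _, rf in history) / rh         # mean fading effect
    rib = math.sqrt(sum((re * rw_mu * rf_mu - rcb) ** 2
                        for re, _, _ in history) / rh)   # deviation (Eq. 14)
    return (rh / rh_max) * (rcb - rib / 2) + ((rh_max - rh) / rh_max) * r_ik
```

With an empty history the function returns the reputation value, matching the rtik = rik fallback in the text.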

Fig. 1 depicts the whole scenario briefly. Assume that pi wants to get a particular service. pj is a stranger to pi and a probable service provider. pi needs to learn pj's reputation, so it requests recommendations from trustworthy acquaintances such as pk. Assume that pk had some interactions with pj and sends back a recommendation to pi. After collecting all recommendations, pi calculates rij. Then, pi evaluates pk's recommendation, stores the results in RHik, and updates rtik. Assuming pj is trustworthy enough, pi requests and gets the service from pj. Then, pi evaluates this interaction, stores the results in SHij, and updates stij.

E. Selecting Service Providers

When pi searches for a particular service, it gets a list of service providers. It selects one or several service providers according to their service trust values. For the rest of this section, we consider a file sharing application. pi may download a file from either one or multiple uploaders. According to this choice, two methods can be defined.

One service provider. If pi prefers to download a file from only one uploader, it selects the most trustworthy uploader. pi sorts uploaders by using the comparison function in Algorithm 2.


pi might select a stranger due to its high reputation. For example, if pm is a stranger, pi sets stim = rim according to Equation 6 and compares pm with other peers.

Algorithm 2 COMPARE ST(pm, pn)
 1: if stim > stin then
 2:   return GREATER
 3: else if stim < stin then
 4:   return LESS
 5: else if stim = stin then
 6:   if shim > shin then
 7:     return GREATER
 8:   else if shim < shin then
 9:     return LESS
10:   else if shim = shin then
11:     if cbim − ibim/2 > cbin − ibin/2 then
12:       return GREATER
13:     else if cbim − ibim/2 < cbin − ibin/2 then
14:       return LESS
15:     else if cbim − ibim/2 = cbin − ibin/2 then
16:       if cbim > cbin then
17:         return GREATER
18:       else if cbim < cbin then
19:         return LESS
20:       end if
21:     end if
22:   end if
23:   if pm.uploadSpeed ≥ pn.uploadSpeed then
24:     return GREATER
25:   else if pm.uploadSpeed < pn.uploadSpeed then
26:     return LESS
27:   end if
28: end if
29: return EQUAL
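A minimal Python rendering of Algorithm 2, assuming each candidate is summarized by a dict of the metrics pi stores about it (key names are hypothetical):

```python
GREATER, LESS, EQUAL = 1, -1, 0

def compare_st(m, n):
    """Algorithm 2 sketch: order candidates by service trust, breaking
    ties by history size, belief values, then upload speed."""
    keys = [
        (m["st"], n["st"]),                              # service trust
        (m["sh"], n["sh"]),                              # service history size
        (m["cb"] - m["ib"] / 2, n["cb"] - n["ib"] / 2),  # cb minus half ib
        (m["cb"], n["cb"]),                              # competence belief
    ]
    for a, b in keys:
        if a > b:
            return GREATER
        if a < b:
            return LESS
    # final tie-break on upload speed (>= yields GREATER, as in the listing)
    return GREATER if m["speed"] >= n["speed"] else LESS
```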

Multiple service providers. For a faster download, pi may prefer to download a file from multiple peers. This also prevents high loads on reputable peers, since alternative uploaders are utilized. Algorithm 3 shows an adaptive selection method. U is the set of uploaders that provide the requested service, and S is the set of selected uploaders. pi selects all uploaders whose service trust value is larger than a threshold. With a high threshold, pi may not select any uploader and can not build trust relationships; a low threshold may cause the selection of malicious peers. Thus, pi adjusts the threshold based on its set of acquaintances. If pi has low trust in its acquaintances, it sets a low threshold. This helps pi to start interactions with strangers. When pi gains high trust in its acquaintances, it increases the threshold. If no trustworthy uploader is selected (S = ∅), all the strangers in U are selected. In this way, new peers can join the


network and build trust relationships. These decisions may require human intervention. Thus, the user interface of a P2P application should enable user input for critical operations.

Algorithm 3 SELECTMULTIPLEUPLOADERS(U)
 1: µst ⇐ (1/|Ai|) Σ_{pj∈Ai} stij
 2: σst ⇐ (1/|Ai|) · sqrt( Σ_{pj∈Ai} (stij − µst)² )
 3: S ⇐ ∅
 4: th ⇐ µst + σst
 5: while |S| < maxuploader and th > 0 do
 6:   S ⇐ S ∪ {pm ∈ U : stim ≥ th}
 7:   th ⇐ th − σst/2
 8: end while
 9: if S = ∅ then
10:   S ⇐ S ∪ {pm ∈ U : shim = 0}
11: end if
12: return S
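A sketch of Algorithm 3 (hypothetical names; a guard is added for the corner case where all stij values are equal, which the pseudocode leaves open):

```python
import math

def select_multiple_uploaders(uploaders, acquaintances, max_uploader=5):
    """Adaptive threshold selection. `uploaders` maps peer id ->
    (service_trust, history_size); `acquaintances` lists stij values."""
    n = len(acquaintances)
    mu = sum(acquaintances) / n
    sigma = math.sqrt(sum((st - mu) ** 2 for st in acquaintances)) / n
    selected = set()
    th = mu + sigma                       # start above the mean trust level
    while len(selected) < max_uploader and th > 0:
        selected |= {p for p, (st, _) in uploaders.items() if st >= th}
        if sigma == 0:                    # guard: all stij equal, th never moves
            break
        th -= sigma / 2                   # relax the threshold gradually
    if not selected:                      # fall back to strangers (sh = 0)
        selected = {p for p, (_, sh) in uploaders.items() if sh == 0}
    return selected
```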

F. Further Issues

This section explains some design decisions of SORT. An extended version of this discussion can be found in [37].

Reacting to attacks. Assume that pi is downloading a file from pj. If the file is virus-infected, pi takes special measures to protect itself against future attacks. pi sets the satisfaction of pj's past interactions to zero and creates a permanent interaction, which is never deleted from the history kept for pj and has a zero satisfaction value. When pi gives a recommendation about pj, the receiver of the recommendation will understand that pi had some past experience with pj and was attacked.

Repeating reputation query. A peer periodically updates the reputation values of its acquaintances. Knowing how an acquaintance behaves with others helps in understanding possible threats coming from that acquaintance. Updated values may help to increase confidence in good peers and to identify malicious peers in advance.

Pseudonyms. A peer selects an arbitrary pseudonym and associates it with a public/private key pair, so the {pseudonym, public key} pair becomes its identity. Peers exchange these pairs before an interaction and run a challenge-response protocol to verify each other's identity. Thus, a malicious peer can not use the pseudonym of another peer and take advantage of its reputation.

Complaints vs. Reputation Queries. Sending complaints about a peer’s malicious behavioris a way of informing other peers [4]. However, a malicious peer may use complaints toblackmail others. Reputation queries are more resistant to blackmailing. A query may resultin a misleading reputation value if the majority of recommenders maliciously collaborate to givedeceptive information about the queried peer. Probability of such a collaboration is smaller thanthe probability that a peer individually sends a malicious complaint.

Storage space. Assume that shmax = rhmax = 20 and the size of a history tuple is 40 bytes. If a peer has 2000 acquaintances, 2 × 20 × 2000 × 40 bytes = 3200 KB is needed for both service and


recommendation histories. Having 2000 acquaintances is a rare case [17], [18], and the history size with each acquaintance will generally be less than shmax. Therefore, the storage requirements for trust information are tolerable.
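The arithmetic above, as a worked check:

```python
def history_storage_bytes(acquaintances, h_max=20, tuple_bytes=40):
    """Worst-case storage for trust information: two histories (service
    and recommendation) of h_max tuples per acquaintance."""
    return 2 * h_max * acquaintances * tuple_bytes

# 2000 acquaintances with full histories need 3,200,000 bytes (3200 KB).
```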

Integrity checking. Although it is beyond the scope of this research, checking the integrity of services needs further investigation. Downloading a file from only one uploader makes integrity checking easier, since it can be done after the download finishes. When multiple uploaders are selected, checking integrity is an issue. Some uploaders may maliciously provide inauthentic content, but the downloader can not easily identify them. A naive approach might be as follows. If a peer downloads an inauthentic file, it requests hashes of the file segments from all uploaders. If there are conflicting hashes, a majority selection based on the responses of uploaders can reveal the malicious ones. Additionally, some complex methods utilizing Merkle hashes [38], secure hashes, and cryptography [39] can be used for online integrity checking with multiple uploaders.

IV. EXPERIMENTS AND ANALYSIS

Experiments have been conducted on a file sharing application to determine the capabilities of SORT in mitigating attacks. Questions studied include how much recommendations help in correctly identifying malicious peers, how SORT handles attacks, and to what extent attacks can be mitigated. If a malicious peer is successful, the reason is investigated.

A. Method

A simulation program has been implemented in the Java programming language. Simulation parameters are generated based on the findings of several empirical studies [17], [18], [19], so that observations about the proposed algorithms can be more realistic. Some details of the method will be explained in the next section, since they are closely related to the input parameters.

Downloading a file is a service interaction. A peer sharing files is called an uploader; a peer downloading a file is called a downloader. The set of peers which downloaded a file from a peer is called the downloaders of that peer. An ongoing download/upload operation is called a session.

A file search request returns all online uploaders in the network. A peer downloads a file from one uploader to simplify integrity checking. Peers are assumed to have antivirus software, so they can detect infected files. In the experiments, the same scenario is simulated with and without SORT. If SORT is used, a downloader selects an uploader based on its trustworthiness; otherwise, it selects the uploader with the highest bandwidth. If an uploader has reached its maximum number of upload sessions, it rejects a new upload request, and the downloader checks other uploaders until it finds an available one. Time is simulated in cycles, each representing a period of time. At the start of each cycle, the simulator does the following:

• For each ongoing download session, the size of file segments downloaded in the previouscycle is calculated. If the file is completed, the session is finished.

• Each finished session is recorded as an interaction. The downloaded file might be shared; if it is not shared, it is recorded as a past download, which prevents the same file from being downloaded again.

• Online/offline status of each peer is determined for the current cycle. According to newstatus of peers, sessions might be paused or resumed.

• An online peer may create a new download session. It selects a random file and searches for an online uploader of the file. Before starting the session, the downloader makes a bandwidth agreement with the uploader; the uploader tells the amount of bandwidth it can devote.
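The per-cycle steps above can be sketched as a minimal, self-contained skeleton (class and function names are assumptions; the real simulator models bandwidth agreements and uptime tables):

```python
import random
from dataclasses import dataclass

@dataclass
class Session:
    downloader: "Peer"
    size_left: int            # kilobytes still to download
    bandwidth: int            # kilobytes transferred per cycle

@dataclass
class Peer:
    online: bool = True
    interactions: int = 0     # finished downloads recorded as interactions

def run_cycle(sessions, peers, start_session):
    """One simulation cycle, following the four steps in the text."""
    # 1-2. advance sessions; record finished ones as interactions
    for s in list(sessions):
        s.size_left -= s.bandwidth
        if s.size_left <= 0:
            s.downloader.interactions += 1
            sessions.remove(s)
    # 3. refresh online status (coin flip here; the simulator uses uptime tables)
    for p in peers:
        p.online = random.random() < 0.5
    # 4. online peers may open new sessions
    for p in peers:
        if p.online:
            sessions.append(start_session(p))
```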


TABLE III
INPUT PARAMETERS USED IN SIMULATION SETUP

Number of Runs                5
Number of Peers               1000
Number of Resources           10000
Minutes in a Cycle            10
Number of Cycles              50000
Reputation Update (cycles)    5000
Report Period (cycles)        1000
Maximum Uploads/Downloads     5
shmax                         10
rhmax                         20
ηmax                          20

Satisfaction about a service interaction is calculated based on the following parameters:
• Bandwidth. The average bandwidth during the session is compared with the agreed bandwidth to evaluate the reliability of the uploader in terms of the bandwidth provided.
• Online Period. Frequent offline periods of an uploader delay the download operation. The ratio of online to offline periods characterizes the availability of the uploader.

The calculation of e^k_ij may include more parameters such as delay, jitter, and retransmission of dropped packets [14]. These variables are not simulated in our experiments. pi calculates the satisfaction value of its k-th interaction with pj as follows:

e^k_ij = ( AverageBwidth / AgreedBwidth + Online / (Online + Offline) ) / 2    (16)
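A direct transcription of Equation 16 (hypothetical names):

```python
def satisfaction(avg_bw, agreed_bw, online, offline):
    """Equation 16: mean of bandwidth reliability and availability."""
    return (avg_bw / agreed_bw + online / (online + offline)) / 2
```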

Weight of a service interaction is calculated according to the following parameters:
• File Size. Downloading a large file is more important than a small one due to the bandwidth consumed. The importance is not directly proportional to the file size: files over a certain size have the same importance. In the experiments, a 100 MB file size is set as the threshold.
• Popularity. Popular files are requested more often by peers, so they are more important than unpopular files. We assume that the number of uploaders of a file is an indication of its popularity: the most popular file is shared by the largest number of uploaders.

Let Uploadermax be the number of uploaders of the most popular file. fsize and #Uploaders denote the size of a file and its number of uploaders, respectively. pi calculates the weight of its k-th interaction with pj as follows:

w^k_ij = (1 + #Uploaders / Uploadermax) / 2,                 if fsize ≥ 100 MB
w^k_ij = (fsize / 100 MB + #Uploaders / Uploadermax) / 2,    if fsize < 100 MB    (17)
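A sketch of Equation 17 (hypothetical names; file sizes in MB):

```python
def interaction_weight(fsize_mb, uploaders, uploaders_max, threshold_mb=100):
    """Equation 17: weight from capped file size and popularity."""
    size_term = 1.0 if fsize_mb >= threshold_mb else fsize_mb / threshold_mb
    return (size_term + uploaders / uploaders_max) / 2
```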

B. Input Parameters

Table III shows the input parameters used for the simulation setup. Each experiment is run five times; the results of these runs are averaged and stored as final values. An experiment contains 1000 peers and 10000 unique files. A cycle represents a duration of 10 minutes. Each experiment runs for 50000 cycles, so it simulates nearly a year's worth of P2P activity. A peer updates


TABLE IV
INPUT PARAMETERS TO REPRESENT PEER AND RESOURCE CHARACTERISTICS

(a) File Size Distribution
File Size (kb)        Ratio
100 - 1000            0.10
1001 - 10000          0.75
10001 - 100000        0.10
100001 - 1000000      0.05

(b) Initial Uploaders of Files
Initial Uploaders     Ratio
1 - 10                0.60
11 - 30               0.20
31 - 50               0.15
51 - 100              0.05

(c) Shared Files of Peers
Shared Files          Ratio
0                     0.25
1 - 10                0.20
11 - 100              0.30
101 - 300             0.10
301 - 500             0.05

(d) Bandwidth Distribution of Peers
Download-Upload Bandwidth (kbps)    Ratio
128 - 64              0.10
512 - 128             0.10
1024 - 256            0.40
3036 - 768            0.20
10240 - 5120          0.15
102400 - 10240        0.05

(e) Uptimes of Peers
Uptime (min)          Ratio
1 - 60                0.50
61 - 120              0.20
121 - 180             0.10
181 - 240             0.05
241 - 360             0.05
361 - 600             0.05
601 - 720             0.05

(f) Waiting Period for a Download Session
File Size (kb)        Max Waiting Period (cycles)
100 - 1000            5
1000 - 10000          20
10000 - 100000        100
100000 - 1000000      1000

reputation of its acquaintances after every 5000 cycles. Time-dependent statistics are reported after every 1000 cycles. The maximum number of simultaneous upload (or download) sessions for a peer is set to a number between 0 and 5. A peer can start at most two downloads in a day. The maximum sizes of service and recommendation histories are set to 10 and 20, respectively. At most 20 recommendations are collected in a reputation query.

Table IV shows the distribution of parameters used to model characteristics of peers and resources. These distributions are approximations to empirical results, since the experiments do not include as many peers and resources as a real-life scenario. File sizes are assigned according to Table IV(a); for example, 75% of all files have a size between 1001 and 10000 kilobytes. A file is shared by multiple peers. Table IV(b) shows the initial uploader distribution of files: the most popular 5% of files are shared by 51-100 uploaders. Peers are assigned a number of shared files according to the distribution given in Table IV(c). Since a peer downloads and shares new files, its number of shared files changes continuously. 25% of all peers never share a file. Table IV(d) shows the distribution of download and upload bandwidths of peers. A peer stays online for a period of time each day and then goes offline until the next day. Table IV(e) shows the distribution of online periods: 50% of all peers stay online for only 60 minutes a day. Table IV(f) shows the maximum waiting periods for suspended sessions. A download session is suspended if the uploader goes offline; the downloader waits for a while to catch the uploader online again, which prevents unnecessary session cancellations. If the uploader does not come online within this period, the suspended session is deleted. For example, the waiting period for a file size between 100 and 1000 kb is 5 cycles. After 5 cycles, the downloader terminates the session, records a failed interaction, and restarts the session with another uploader. As in real life, a peer tends to wait longer for a large file [17].

Attacker Model. Different types of malicious peers have been simulated in the experiments. Thus, the type of a malicious peer is an input parameter.


Two types of attacks are defined: service-based and recommendation-based. Uploading a virus-infected or inauthentic file is called a service-based attack. Giving a misleading recommendation is called a recommendation-based attack. There are two types of misleading recommendations [2]: (i) unfairly high recommendation: giving a positively-biased trust value about a peer, where the r, cb, ib values are set to 1; (ii) unfairly low recommendation: giving a negatively-biased trust value about a peer, where the r, cb, ib values are set to 0. A fair recommendation is the recommender's unbiased trust information about a peer.

A malicious peer may perform both service and recommendation based attacks. A good peeruploads authentic files and gives fair recommendations. A non-malicious network consists of onlygood peers. A malicious network contains both good and malicious peers. In the experiments, amalicious network consists of 10% malicious and 90% good peers. Malicious peers are assumedto be more powerful. They stay online longer. Thus, the actual ratio of online malicious peersto online good peers comes out to be around 20% during the experiments.

If malicious peers do not know about each other and perform attacks independently, theyare called individual attackers. Individual attackers may attack each other. Based on the attackbehavior, there are three types of individual attackers:

1) Naive. An attacker always uploads infected/inauthentic files and gives unfairly low recommendations to others [2].
2) Discriminatory. An attacker selects a group of victims and always uploads infected/inauthentic files to them [2], [9]. It gives unfairly low recommendations about the victims but treats other peers like a good peer.
3) Hypocritical. An attacker, with x% probability, behaves maliciously by uploading infected/inauthentic files and giving unfairly low recommendations [5], [9]. The rest of the time, it behaves as a good peer.

If malicious peers know each other and coordinate in launching attacks, they are called collaborators. Collaborators always upload authentic files to each other. When a collaborator requests a recommendation from another collaborator, it receives a fair recommendation. Collaborators give unfairly high recommendations about each other when requested by good peers, to convince good peers to download files from any one of them. All types of collaborators behave in the same way in the situations described above; in the other cases, they may behave differently. According to this difference, three types of collaborators are defined as follows:

1) Naive collaborator. A collaborator always uploads infected/inauthentic files to good peersand gives unfairly low recommendations about them.

2) Discriminatory collaborator. Collaborators agree on a group of selected victims. Theyupload infected/inauthentic files to victims and give unfairly low recommendations aboutthem. They upload authentic files to non-victim peers which are the peers other thancollaborators or victims. They also give fair recommendations about non-victim peers.

3) Hypocritical collaborator. A collaborator uploads infected/inauthentic files to good peers or gives unfairly low recommendations about them with x% probability. The rest of the time, it treats them fairly.

A trust model should be resistant to Sybil attacks [10], since changing pseudonyms is easy in a P2P system. A malicious peer which changes its pseudonym periodically to escape identification is called a pseudospoofer. It is hard to achieve collaboration among pseudospoofers, since tight synchronization and coordination would be needed. Thus, we assume that a pseudospoofer adopts one of the individual attacker behaviors and changes its pseudonym periodically.
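The attacker taxonomy can be summarized as a single decision function (a simplified model with assumed names, not the simulator's actual code):

```python
import random

def attacks_service(attacker_type, target_is_victim, target_is_collaborator,
                    x=0.3, rng=random.random):
    """Does a malicious uploader serve an infected/inauthentic file?"""
    if target_is_collaborator:          # collaborators never attack each other
        return False
    if attacker_type in ("naive", "naive_collab"):
        return True                     # always attack
    if attacker_type in ("discriminatory", "disc_collab"):
        return target_is_victim         # attack only the chosen victims
    if attacker_type in ("hypocritical", "hypo_collab"):
        return rng() < x                # attack with x% probability
    return False                        # good peer
```

The same skeleton applies to recommendation-based attacks, with the attack being an unfairly low (or, among collaborators toward good peers, unfairly high) recommendation instead of an inauthentic upload.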


[Figure 2 plots the number of service-based attacks over time (cycles), with and without SORT: (a) naive attackers; (b) hypocritical and discriminatory attackers.]

Fig. 2. Service-based attacks of individual attackers with respect to time

C. Output Parameters

The number of service-based attacks with respect to time is the most important output parameter: it shows whether SORT is successful against this type of attack. The rate of successful downloads has also been observed, to understand how much improvement in download operations can be obtained by mitigating attacks.

The number of recommendation-based attacks with respect to time has been observed to measure SORT's ability to evaluate recommendations correctly. If many recommendation-based attacks are happening, malicious peers can affect the decisions of other peers with misleading recommendations. Mitigating service-based attacks is more important than mitigating recommendation-based ones, since they consume more bandwidth.

The distribution of the reputation and recommendation trust metrics has been observed to understand whether good peers assign fair trust values to each other and low trust values to malicious peers. These distributions give insights about the evolution of trust relationships among peers and the effects of malicious behavior on this evolution. We observed a high correlation between the values of the reputation and service trust metrics; thus, distributions of the service trust metric are not presented in the results.

D. Experiment 1: Analysis of individual attackers

In this section, the effects of individual attackers on trust relationships and SORT's ability to mitigate service and recommendation-based attacks are studied. For each of the three types of individual attackers, a separate network topology is created in which 10% of all peers are malicious. Hypocritical attackers behave maliciously in 30% of all interactions. A discriminatory attacker selects 20% of all peers as victims.

Service-based attacks. Fig. 2(a) shows the number of service-based attacks in every 1000 cycles for naive attackers. SORT mitigates the attacks by 50% at the 10000th cycle and by 80% at the 20000th cycle. The attacks do not stop completely, since not all good peers learn about all malicious peers. Due to the decrease in service-based attacks, the rate of successful downloads increases by nearly 25%.


[Figure 3 plots average reputation versus number of downloaders at the 50000th cycle: (a) naive attackers; (b) hypocritical attackers, each compared with good peers.]

Fig. 3. Reputation values with respect to the number of downloaders for naive and hypocritical attackers

Fig. 2(b) shows the rate of service-based attacks for hypocritical and discriminatory attackers. Good peers start to identify attackers from the beginning, and 80% of the attacks are mitigated by the 20000th cycle. Successful downloads increase by 4-5%, which is not as large as in the naive attacker case. Discriminatory attackers only attack a selected group of victims, and hypocritical ones attack in 30% of all interactions; thus, the total number of their attacks is smaller compared to naive attackers.

Reputation. Fig. 3(a) shows the reputation values of naive attackers at the 50000th cycle. Each point represents a peer's average reputation among its downloaders. Naive attackers have zero reputation. They can not gain the trust of other peers, since they attack in all interactions. Some naive attackers manage to attack 600-700 downloaders by sharing a large number of files.

Fig. 3(b) shows the reputation values for hypocritical attackers. Compared to naive attackers, they are successful in building up some reputation. However, their average reputation is significantly lower than that of most good peers.

Discriminatory attackers have slightly larger reputation values than hypocritical ones. Since they attack only victims, non-victim peers assign them high reputation values. Their average reputation is still lower than that of good peers, due to the low recommendations of victims.

Recommendation Trust. Recommendation trust values need to be analyzed in order to understand whether all recommendations are evaluated correctly. Naive attackers have zero recommendation trust values, since their reputation is zero. Fig. 4 shows that the recommendation trust values of good peers have a larger average than those of discriminatory attackers. We observed that the average for hypocritical attackers is even smaller than for discriminatory attackers. Thus, the recommendations of good peers are more credible than those of individual attackers. Fig. 5 shows that recommendation-based attacks are mitigated due to the high recommendation trust values of good peers.

Fig. 4 also suggests that having a large set of downloaders might have a negative effect on the recommendation trust value. A peer with many downloaders gets more recommendation requests. Giving many recommendations increases the chance of giving inaccurate information, since any recommendation carries some uncertainty. This reduces the peer's average recommendation trust value.


[Figure 4 plots recommendation trust versus number of downloaders for good peers and discriminatory attackers. Figure 5 plots the number of recommendation-based attacks over time (cycles) for naive, hypocritical, and discriminatory attackers.]

Fig. 4. Recommendation trust of discriminatory attackers with respect to number of downloaders.

Fig. 5. Recommendation-based attacks of individual attackers with respect to time.

[Figure 6 plots service-based attacks over time (cycles) for hypocritical and discriminatory collaborators, with and without SORT. Figure 7 plots the successful download rate over time for hypocritical collaborators, with and without SORT.]

Fig. 6. Service-based attacks of hypocritical and discriminatory collaborators with respect to time

Fig. 7. Successful downloads of hypocritical collaborators with respect to time

E. Experiment 2: Analysis of collaborators

The objective of this experiment is to analyze SORT's ability to mitigate collaborative attacks. It is expected that collaboration makes the detection of malicious peers more difficult. For each type of collaborator, a malicious network topology is created. The attack probability of hypocritical collaborators is set to 0.3. All discriminatory collaborators agree on the same group of victims, which includes 20% of all peers.

Service-based attacks. Experiments on naive collaborators produce results similar to individual naive attackers: 80% of attacks are stopped after the 20000th cycle, and the successful download rate increases by 25%. Since naive collaborators upload only infected/inauthentic files, good peers assign them zero reputation values and do not request their recommendations. Therefore, naive collaborators can not amplify each other's reputation and do not benefit from the collaboration.

Fig. 6 shows the service-based attacks of hypocritical and discriminatory collaborators. During the first 15000 cycles, hypocritical collaborators successfully amplify each other's reputation and convince more good peers to get services from them. After the 15000th cycle, good peers start to identify some of them and the rate of attacks falls. Fig. 7 shows that this situation


[Figure 8 plots average reputation versus number of downloaders: (a) hypocritical collaborators; (b) discriminatory collaborators, victims, and good peers.]

Fig. 8. Reputation values of hypocritical and discriminatory collaborators with respect to number of downloaders.

affects the successful download rate. In the first 20000 cycles, collaborators take advantage of misleading recommendations and can launch more attacks. This can be interpreted as a shortcoming of SORT, since the download rate is larger when SORT is not used. After the 20000th cycle, more collaborators are identified, and the rate of successful downloads with SORT surpasses the case without SORT.

Fig. 6 shows that the victims start to identify discriminatory collaborators from the beginning, since they are always attacked by the collaborators. Thus, the attack rate decreases monotonically.

Reputation. Naive collaborators have zero reputation values, due to the reasons explained earlier. Fig. 8(a) shows the reputation values of hypocritical collaborators at the 50000th cycle. Hypocritical collaborators have lower reputation than good peers; collaboration does not help them gain a high reputation. Reputation values at the 25000th cycle present a similar distribution to Fig. 8(a). Considering the results from Figs. 6 and 7, it can be concluded that good peers identify most of the hypocritical collaborators between the 15000th and 25000th cycles.

Fig. 8(b) shows that discriminatory collaborators successfully gain a high reputation. Since they attack only victims, their reputation among non-victim peers remains high. Their unfairly low recommendations decrease the reputation of victims. This situation is not observed with individual discriminatory attackers, because those attackers randomly select different sets of victims, so misleading recommendations are almost evenly distributed over all peers. Discriminatory collaborators select only 20% of peers as victims, so the impact of unfairly low recommendations is concentrated on a small group.

Recommendation trust. Naive collaborators have zero recommendation trust values due to their zero reputation. Fig. 9(a) shows the recommendation trust values of hypocritical collaborators. Their recommendation trust values are even smaller than those of individual hypocritical attackers: they lose the recommendation trust of good peers due to their unfairly high recommendations about each other, which an individual attacker does not give. Additionally, good peers in the collaborator scenario have slightly lower recommendation trust values compared to the individual attacker scenario, because their recommendations get relatively low satisfaction values due to the misleading recommendations of collaborators.

Fig. 9(b) shows the recommendation trust values of discriminatory collaborators. They have


Fig. 9. Recommendation trust values of hypocritical (a) and discriminatory (b) collaborators with respect to the number of downloaders.

Fig. 10. Recommendation-based attacks of collaborators (naive, hypocritical, discriminatory) with respect to time.

Fig. 11. Service-based attacks of naive pseudospoofers with respect to time, with and without SORT.

the highest recommendation trust average since they do not attack non-victim peers, which constitute 70% of all peers, praise each other with unfairly high recommendations, and give misleading recommendations collaboratively. Due to this high trust, a non-victim peer favors their recommendations over those of other peers. Non-victim peers lose recommendation trust in each other since they assume that the recommendations of collaborators are more accurate. Victims give low recommendations about collaborators due to the attacks. Non-victim peers consider these low recommendations misleading since they regard the collaborators as trustworthy. Thus, victims fall into the position of liars and have the lowest recommendation trust average, since all other peers have low trust in them. Fig. 10 shows that SORT can stop the misleading recommendations of naive and hypocritical collaborators but not those of discriminatory ones. This situation does not cause a problem when dealing with service-based attacks: victims can identify collaborators and protect themselves against service-based attacks, as shown in Fig. 6.


Fig. 12. Service-based attacks of hypocritical and discriminatory pseudospoofers with respect to time, with and without SORT.

Fig. 13. Recommendation-based attacks of pseudospoofers (naive, hypocritical, discriminatory) with respect to time.

F. Experiment 3: Analysis of Pseudospoofers

In this section, attacks of pseudospoofers are studied. The decrease in attack rate is a measure of resistance against Sybil attacks. In the experiments, a pseudospoofer changes its pseudonym every 10000 cycles.

Service-based attacks. Fig. 11 shows that SORT reduces the service-based attack rate of naive pseudospoofers. After every pseudonym change, the attack rate peaks briefly and then drops. We observed the number of strangers selected by good peers over time: since good peers gain more acquaintances with time, they have less tendency to select strangers. A pseudospoofer becomes a stranger to other peers after every pseudonym change; therefore, it receives fewer service requests and its attack rate drops with time.

Fig. 12 presents a similar situation for hypocritical and discriminatory pseudospoofers. The attack rate decreases with time since the pseudospoofers become more secluded from other peers.

Recommendation-based attacks. Fig. 13 shows that recommendation-based attacks decrease with time. Since pseudospoofers become strangers after every pseudonym change, they receive fewer recommendation requests.

G. Summary of the Other Results

Due to space limitations, we summarize the results of two more experiments [37]: (i) a non-malicious network is simulated to observe the evolution of trust relationships without the effects of attacks; (ii) the attack probability in hypocritical behavior and the number of victims in discriminatory behavior are varied to observe the effects of attacks on trust relationships.

Non-malicious network. With SORT, an acquaintance with low bandwidth is preferred over a stranger with high bandwidth. When SORT is not used, an uploader with a higher bandwidth is preferred, so the download rate might be higher. SORT's design minimizes this disadvantage: a peer with a high bandwidth is more capable of completing file upload requests and getting high satisfaction values due to Equation 16. Thus, it gains a higher reputation and is selected more often.

The number of shared files affects reputation. A peer sharing many files gets a high volume of download requests and gains more downloaders. Queries about this peer return more


recommendations. Reputation increases with the number of recommenders due to Equation 7. We observed that being known by 10% of all peers might be enough to build a high reputation.

A peer generally performs few interactions with an acquaintance but gets many recommendations from it. Due to Equations 6 and 15, the reputation metric has a strong correlation with the service trust metric but not with the recommendation trust metric. Therefore, we did not observe the service trust metric in our experiments.

The main overhead of SORT comes from reputation queries. In a download session, one uploader is selected, but the reputation values collected about the other uploaders are deleted. These values could instead be cached: if a cache entry is stored for 2000 cycles, half of the query traffic can be prevented. Since a peer obtains more acquaintances with time, the number of cache entries and the cache hit ratio increase with time.
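For illustration, the caching idea above can be sketched as a simple time-to-live cache keyed by peer identity. The class and method names are illustrative, not part of SORT; only the 2000-cycle TTL follows the text.

```python
class ReputationCache:
    """Caches reputation values of unselected uploaders for a fixed
    number of simulation cycles (TTL), as suggested in the text."""

    def __init__(self, ttl=2000):
        self.ttl = ttl
        self.entries = {}  # peer_id -> (reputation, expiry_cycle)

    def put(self, peer_id, reputation, now):
        # Store the value and the cycle at which it expires.
        self.entries[peer_id] = (reputation, now + self.ttl)

    def get(self, peer_id, now):
        """Return a cached reputation, or None on a miss or expiry.
        A miss means a reputation query must actually be sent."""
        item = self.entries.get(peer_id)
        if item is None:
            return None
        reputation, expiry = item
        if now >= expiry:
            del self.entries[peer_id]  # stale entry, evict it
            return None
        return reputation


cache = ReputationCache(ttl=2000)
cache.put("peer42", 0.8, now=100)
assert cache.get("peer42", now=1500) == 0.8   # still fresh, no query needed
assert cache.get("peer42", now=2200) is None  # expired, must re-query
```

Every cache hit replaces one reputation query, which is how the text's estimate of halving the query traffic would arise once most uploaders are recent acquaintances.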

Adapting Hypocritical and Discriminatory Behavior. Detection of hypocritical malicious peers becomes harder when the attack probability decreases. With a high attack probability, more attacks can be performed at the beginning; however, malicious peers are identified faster and the attack rate falls quickly. The attack rate can be maintained longer with a low attack probability. Thus, SORT forces malicious peers to decrease their attack probability. Attacking with a high probability and then changing pseudonym is not a good strategy, as discussed in Section IV-F.

In discriminatory behavior, changing the number of victims did not affect the resistance against service-based attacks. Attacks decline slowly with 100 victims, since the victims cannot identify malicious peers quickly. In the individual case, recommendation-based attacks stay at a high level for a long time with 100 victims. Collaborators can dominate 100 victims and increase misleading recommendations, as in Section IV-E. With 400 victims, the attack rate decreases in both the individual and collaborative cases, since the victims' recommendations can counter the misleading recommendations.

V. DISCUSSION ON FUTURE WORK

Reputation storing/collection method. Collecting trust information from acquaintances is a limiting factor of SORT. Broadcasting reputation queries is not preferred since it causes excessive network traffic. DHTs can be used to access trust information efficiently [4], [5]: a trust holder is assigned to each peer to store its trust information. However, trust holders may behave maliciously. In SORT, a peer develops trust in its acquaintances, so it can evaluate a recommendation with respect to the trustworthiness of the acquaintance. In the DHT approach, the information from trust holders is not necessarily reliable. A method to establish trust between a peer and its trust holders needs to be studied so that the reliability of trust holders can be evaluated.
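A common way to assign trust holders in a DHT, in the spirit of [4], [5], is to hash a peer's identifier onto the key space and store its trust information at the nodes responsible for that key. The sketch below shows the assignment only; the SHA-1 key space, the replication factor, and the function names are our assumptions for illustration, not details of SORT or of the cited systems.

```python
import hashlib
import bisect


def dht_key(peer_id):
    """Map a peer identifier onto a numeric key space via SHA-1,
    as in Chord-like DHT designs."""
    return int(hashlib.sha1(peer_id.encode()).hexdigest(), 16)


def trust_holders(peer_id, node_keys, replicas=3):
    """Return the keys of the nodes responsible for storing peer_id's
    trust information: the first `replicas` successors of its key on
    the ring. Replicating across several holders limits the damage a
    single malicious trust holder can do."""
    key = dht_key(peer_id)
    ring = sorted(node_keys)
    i = bisect.bisect_right(ring, key)  # first node clockwise of the key
    return [ring[(i + r) % len(ring)] for r in range(replicas)]
```

Because the holder is determined by the hash rather than chosen by the peer, a peer cannot pick a colluding holder for its own reputation; the open problem noted above is how the querying peer should weigh answers from holders it has no trust history with.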

P2P system dynamics. Deletion of resources that lose popularity, addition of new peers and resources to an existing topology, multi-uploader sessions, and flash crowds [40] are some of the situations that may affect the evolution of trust relationships. Studying such dynamics may help in designing better trust models.

Privacy. Reputable service providers are good targets of DoS attacks. Protecting the privacy of a service provider is harder than protecting the privacy of a service requester, because promoting the reputation of a service provider and protecting its identity are adversarial tasks [41], [16]. Privacy is also needed in a trust model to protect the identity of recommenders; otherwise, a recommender might become a target of malicious peers. Such a privacy scheme needs an authentication method to prevent forged recommendations.


Incentives. SORT does not force a peer to provide services to other peers. An incentive mechanism could increase the benefits of good peers so that they would be willing to contribute services. Reputation can serve as a currency when exchanging services [29], [31], [32]: peers contribute services to gain more reputation and spend it to obtain services.
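The reputation-as-currency idea can be illustrated with a minimal ledger in which serving earns credit and requesting spends it. This is a toy sketch under our own assumptions (the class, the point values, and the newcomer grant are hypothetical), not the mechanism of any of the cited schemes [29], [31], [32].

```python
class ReputationBank:
    """Minimal reputation-as-currency ledger: peers earn credit by
    providing services and spend it to receive them (hypothetical values)."""

    def __init__(self, initial=1.0):
        self.balance = {}
        self.initial = initial  # small grant so newcomers can bootstrap

    def _bal(self, peer):
        return self.balance.setdefault(peer, self.initial)

    def serve(self, provider, requester, cost=1.0):
        """Transfer `cost` reputation from requester to provider;
        refuse the service if the requester cannot pay."""
        if self._bal(requester) < cost:
            return False  # free riders are denied service
        self.balance[requester] = self._bal(requester) - cost
        self.balance[provider] = self._bal(provider) + cost
        return True


bank = ReputationBank(initial=1.0)
assert bank.serve("alice", "bob")             # bob pays alice 1.0
assert not bank.serve("alice", "bob")         # bob is now out of credit
assert bank.serve("bob", "alice", cost=2.0)   # alice earned 2.0, can pay
```

The design point is that the same quantity used for trustworthiness becomes scarce and transferable, which is what makes contributing services individually rational.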

VI. CONCLUSION

A self-organizing trust model for P2P networks is presented, in which a peer can develop trust relations without using a priori information. Trust metrics defined in the service and recommendation trust contexts help a peer reason more precisely about the capabilities of other peers in providing services and giving recommendations. If all peers behave well, the reputation of a peer is proportional to its capabilities, such as network bandwidth, average online period, and number of shared files. In a malicious network, service- and recommendation-based attacks affect the reputation of a peer.

Three individual attacker, three collaborator, and three pseudospoofer behaviors are studied. SORT mitigates service-based attacks in all scenarios. Among individual attackers, hypocritical ones take more time to detect. Identification of collaborators usually takes longer than identification of individual attackers. Pseudospoofers become more isolated from good peers after every pseudonym change: since good peers gain more acquaintances with time, they prefer not to interact with strangers and leave pseudospoofers isolated. Two types of collaborators exhibit interesting behavior. Hypocritical collaborators use unfairly high recommendations and attract more good peers at the beginning; they can take advantage of SORT for their attacks, but good peers eventually identify them and contain their attacks. Discriminatory collaborators have a better reputation than hypocritical collaborators since they do not attack 80% of the peers. However, their service-based attacks are mitigated faster since the victims quickly identify them. They gain the highest recommendation trust average and cause the victims to have the lowest average. Thus, they can continue to give misleading recommendations, which could be stopped if a trusted third party were used.

Defining a context of trust and its related metrics increases a peer's ability to identify and mitigate attacks in tasks related to that context. Therefore, various contexts of trust can be defined to enhance the security of P2P systems for specific tasks. For example, a peer might use trust metrics to select better peers when routing P2P queries, checking the integrity of resources, and protecting the privacy of peers.

VII. ACKNOWLEDGEMENTS

We would like to thank our colleagues. Omer Ozdil at TUBITAK implemented the first version of the simulation program. Leszek Lilien at Western Michigan University and Sanjay Madria at the University of Missouri-Rolla made useful comments to improve the content of the paper.

REFERENCES

[1] P. Resnick, K. Kuwabara, R. Zeckhauser, and E. Friedman, "Reputation systems," Communications of the ACM, vol. 43, no. 12, pp. 45–48, 2000.
[2] C. Dellarocas, "Immunizing online reputation reporting systems against unfair ratings and discriminatory behavior," in Proceedings of the 2nd ACM Conference on Electronic Commerce (EC), 2000.
[3] A. Abdul-Rahman and S. Hailes, "Supporting trust in virtual communities," in Proceedings of the 33rd Hawaii International Conference on System Sciences (HICSS), 2000.
[4] K. Aberer and Z. Despotovic, "Managing trust in a peer-2-peer information system," in Proceedings of the 10th International Conference on Information and Knowledge Management (CIKM), 2001.
[5] S. Kamvar, M. Schlosser, and H. Garcia-Molina, "The EigenTrust algorithm for reputation management in P2P networks," in Proceedings of the 12th World Wide Web Conference (WWW), 2003.
[6] L. Xiong and L. Liu, "PeerTrust: Supporting reputation-based trust for peer-to-peer e-commerce communities," IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 7, pp. 843–857, 2004.
[7] K. Aberer, A. Datta, and M. Hauswirth, "P-Grid: Dynamics of self-organization processes in structured P2P systems," Lecture Notes in Computer Science: Peer-to-Peer Systems and Applications, vol. 3845, 2005.
[8] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker, "A scalable content addressable network," in Proceedings of ACM SIGCOMM, 2001.
[9] A. A. Selcuk, E. Uzun, and M. R. Pariente, "A reputation-based trust management system for P2P networks," in Proceedings of the 4th IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGRID), 2004.
[10] J. Douceur, "The Sybil attack," in Proceedings of the First International Workshop on Peer-to-Peer Systems (IPTPS), 2002.
[11] S. Marsh, Formalising Trust as a Computational Concept. PhD thesis, Department of Mathematics and Computer Science, University of Stirling, 1994.
[12] D. H. McKnight, "Conceptualizing trust: A typology and e-commerce customer relationships model," in Proceedings of the 34th Annual Hawaii International Conference on System Sciences (HICSS), 2001.
[13] Y. Zhong, Formalization of Dynamic Trust and Uncertain Evidence for User Authorization. PhD thesis, Department of Computer Science, Purdue University, 2004.
[14] Y. Wang and J. Vassileva, "Bayesian network trust model in peer-to-peer networks," in Proceedings of the 2nd International Workshop on Agents and Peer-to-Peer Computing at the Autonomous Agents and Multi Agent Systems Conference (AAMAS), 2003.
[15] M. Srivatsa, L. Xiong, and L. Liu, "TrustGuard: Countering vulnerabilities in reputation management for decentralized overlay networks," in Proceedings of the 14th World Wide Web Conference (WWW), 2005.
[16] Y. Lu, W. Wang, D. Xu, and B. Bhargava, "Trust-based privacy preservation for peer-to-peer data sharing," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 36, no. 3, pp. 498–502, 2006.
[17] S. Saroiu, P. Gummadi, and S. Gribble, "A measurement study of peer-to-peer file sharing systems," in Proceedings of Multimedia Computing and Networking, 2002.
[18] M. Ripeanu, I. Foster, and A. Iamnitchi, "Mapping the Gnutella network: Properties of large-scale peer-to-peer systems and implications for system design," IEEE Internet Computing, vol. 6, no. 1, pp. 50–57, 2002.
[19] S. Saroiu, K. Gummadi, R. Dunn, S. D. Gribble, and H. M. Levy, "An analysis of internet content delivery systems," in Proceedings of the 5th USENIX Symposium on Operating Systems Design & Implementation (OSDI), 2002.
[20] Z. Despotovic and K. Aberer, "Trust-aware delivery of composite goods," in Proceedings of the 2nd International Workshop on Agents and Peer-to-Peer Computing at the Autonomous Agents and Multi Agent Systems Conference (AAMAS), 2002.
[21] E. Terzi, Y. Zhong, B. Bhargava, Pankaj, and S. Madria, "An algorithm for building user-role profiles in a trust environment," in Proceedings of the 4th International Conference on Data Warehousing and Knowledge Discovery (DaWaK), Lecture Notes in Computer Science, vol. 2454, 2002.
[22] B. Yu and M. Singh, "A social mechanism of reputation management in electronic communities," in Proceedings of Cooperative Information Agents (CIA), 2000.
[23] L. Mui, M. Mohtashemi, and A. Halberstadt, "A computational model of trust and reputation for e-businesses," in Proceedings of the 35th Hawaii International Conference on System Sciences (HICSS), 2002.
[24] A. Jøsang, E. Gray, and M. Kinateder, "Analysing topologies of transitive trust," in Proceedings of the First International Workshop on Formal Aspects in Security and Trust (FAST), 2003.
[25] F. Cornelli, E. Damiani, S. D. C. di Vimercati, S. Paraboschi, and P. Samarati, "Implementing a reputation-aware Gnutella servent," in Proceedings of the NETWORKING 2002 Workshops on Web Engineering and Peer-to-Peer Computing, 2002.
[26] F. Cornelli, E. Damiani, S. D. C. di Vimercati, S. Paraboschi, and P. Samarati, "Choosing reputable servents in a P2P network," in Proceedings of the 11th World Wide Web Conference (WWW), 2002.
[27] "Gnutella website," http://www.gnutella.com.
[28] B. Ooi, C. Liau, and K. Tan, "Managing trust in peer-to-peer systems using reputation-based techniques," in Proceedings of the 4th International Conference on Web Age Information Management, 2003.
[29] S. Lee, R. Sherwood, and B. Bhattacharjee, "Cooperative peer groups in NICE," in Proceedings of INFOCOM, 2003.
[30] T. Moreton and A. Twigg, "Enforcing collaboration in peer-to-peer routing services," in Proceedings of the First International Conference on Trust Management (iTrust), 2003.
[31] T. Moreton and A. Twigg, "Trading in trust, tokens, and stamps," in Proceedings of the First Workshop on Economics of Peer-to-Peer Systems, 2003.
[32] R. Gupta and A. Somani, "Reputation management framework and its use as currency in large-scale peer-to-peer networks," in Proceedings of the 4th IEEE Conference on Peer-to-Peer Computing (P2P), 2004.
[33] B. Bhargava, L. Lilien, and M. Winslett, "Pervasive trust," IEEE Intelligent Systems, vol. 19, no. 5, pp. 74–76, 2004.
[34] K. Chopra and W. A. Wallace, "Trust in electronic environments," in Proceedings of the 36th Annual Hawaii International Conference on System Sciences (HICSS), 2003.
[35] S. Xiao and I. Benbasat, "The formation of trust and distrust in recommendation agents in repeated interactions: A process-tracing analysis," in Proceedings of the 5th ACM Conference on Electronic Commerce (EC), 2003.
[36] A. Salam, L. Iyer, P. Palvia, and R. Singh, "Trust in e-commerce," Communications of the ACM, vol. 48, no. 2, pp. 72–77, 2005.
[37] A. B. Can and B. Bhargava, "SORT: A Self-ORganizing Trust model for peer-to-peer systems," Tech. Rep. CSD-TR06-016, Purdue University, 2006.
[38] A. Habib, D. Xu, M. Atallah, B. Bhargava, and J. Chuang, "A tree-based forward digest protocol to verify data integrity in distributed media streaming," IEEE Transactions on Knowledge and Data Engineering (TKDE), vol. 17, no. 7, pp. 1010–1014, 2005.
[39] G. Caronni and M. Waldvogel, "Establishing trust in distributed storage providers," in Proceedings of the 3rd IEEE Conference on Peer-to-Peer Computing (P2P), 2003.
[40] T. Stading, P. Maniatis, and M. Baker, "Peer-to-peer caching schemes to address flash crowds," in Proceedings of the First International Workshop on Peer-to-Peer Systems (IPTPS), 2002.
[41] S. Marti and H. Garcia-Molina, "Identity crisis: Anonymity vs. reputation in P2P systems," in Proceedings of the 3rd IEEE Conference on Peer-to-Peer Computing (P2P), 2003.
