
SORT: A Self-ORganizing Trust Model for Peer-to-Peer Systems

Ahmet Burak Can, Member, IEEE, and Bharat Bhargava, Fellow, IEEE

Abstract—The open nature of peer-to-peer systems exposes them to malicious activity. Building trust relationships among peers can mitigate attacks by malicious peers. This paper presents distributed algorithms that enable a peer to reason about the trustworthiness of other peers based on past interactions and recommendations. Peers create their own trust network in their proximity by using locally available information and do not try to learn global trust information. Two contexts of trust, service and recommendation contexts, are defined to measure trustworthiness in providing services and giving recommendations. Interactions and recommendations are evaluated based on importance, recentness, and peer satisfaction parameters. Additionally, a recommender's trustworthiness and confidence about a recommendation are considered while evaluating recommendations. Simulation experiments on a file sharing application show that the proposed model can mitigate attacks under 16 different malicious behavior models. In the experiments, good peers were able to form trust relationships in their proximity and isolate malicious peers.

    Index Terms—Peer-to-peer systems, trust management, reputation, security


    1 INTRODUCTION

PEER-TO-PEER (P2P) systems rely on the collaboration of peers to accomplish tasks. Ease of performing malicious activity is a threat to the security of P2P systems. Creating long-term trust relationships among peers can provide a more secure environment by reducing risk and uncertainty in future P2P interactions. However, establishing trust in an unknown entity is difficult in such a malicious environment. Furthermore, trust is a social concept and hard to measure with numerical values. Metrics are needed to represent trust in computational models. Classifying peers as either trustworthy or untrustworthy is not sufficient in most cases. Metrics should have precision so peers can be ranked according to trustworthiness. Interactions and feedback of peers provide information to measure trust among peers. Interactions with a peer provide certain information about the peer, but feedback might contain deceptive information. This makes the assessment of trustworthiness a challenge.

In the presence of an authority, a central server is a preferred way to store and manage trust information, e.g., eBay. The central server securely stores trust information and defines trust metrics. Since there is no central server in most P2P systems, peers organize themselves to store and manage trust information about each other [1], [2]. Management of trust information depends on the structure of the P2P network. In distributed hash table (DHT)-based approaches, each peer becomes a trust holder by storing feedback about other peers [1], [3], [4]. Global trust information stored by trust holders can be accessed through the DHT efficiently. In unstructured networks, each peer stores trust information about peers in its neighborhood or peers it interacted with in the past [2], [5], [6]. A peer sends trust queries to learn trust information of other peers. A trust query is either flooded to the network or sent to the neighborhood of the query initiator. Generally, the calculated trust information is not global and does not reflect the opinions of all peers.

We propose a Self-ORganizing Trust model (SORT) that aims to decrease malicious activity in a P2P system by establishing trust relations among peers in their proximity. No a priori information or trusted peer is used to leverage trust establishment. Peers do not try to collect trust information from all peers. Each peer develops its own local view of trust about the peers it has interacted with in the past. In this way, good peers form dynamic trust groups in their proximity and can isolate malicious peers. Since peers generally tend to interact with a small set of peers [7], forming trust relations in the proximity of peers helps to mitigate attacks in a P2P system.

In SORT, peers are assumed to be strangers to each other at the beginning. A peer becomes an acquaintance of another peer after providing a service, e.g., uploading a file. If a peer has no acquaintance, it chooses to trust strangers. An acquaintance is always preferred over a stranger if they are equally trustworthy. Using a service of a peer is an interaction, which is evaluated based on the weight (importance) and recentness of the interaction, and the satisfaction of the requester. An acquaintance's feedback about a peer, a recommendation, is evaluated based on the recommender's trustworthiness. It contains the recommender's own experience about the peer, information collected from the recommender's acquaintances, and the recommender's level of confidence in the recommendation. If the level of confidence is low, the recommendation has a low value in the evaluation and affects the trustworthiness of the recommender less.

A peer may be a good service provider but a bad recommender, or vice versa. Thus, SORT considers providing services and giving recommendations as different tasks and defines two contexts of trust: service and recommendation contexts.


A.B. Can is with the Department of Computer Engineering, Hacettepe University, Beytepe, 06800 Ankara, Turkey. E-mail: [email protected].
B. Bhargava is with the Department of Computer Science, Purdue University, 305 N. University Street, West Lafayette, IN 47907. E-mail: [email protected].

Manuscript received 5 Sept. 2011; revised 6 May 2012; accepted 14 Aug. 2012; published online 22 Aug. 2012. Digital Object Identifier no. 10.1109/TDSC.2012.74.



Information about past interactions and recommendations is stored in separate histories to assess the competence and integrity of acquaintances in these contexts.

SORT defines three trust metrics. The reputation metric is calculated based on recommendations. It is important when deciding about strangers and new acquaintances. Reputation loses its importance as experience with an acquaintance increases. Service trust and recommendation trust are the primary metrics to measure trustworthiness in the service and recommendation contexts, respectively. The service trust metric is used when selecting service providers. The recommendation trust metric is important when requesting recommendations. When calculating the reputation metric, recommendations are evaluated based on the recommendation trust metric.

We implemented a P2P file sharing simulation tool and conducted experiments to understand the impact of SORT in mitigating attacks. Parameters related to peer capabilities (bandwidth, number of shared files), peer behavior (online/offline periods, waiting time for sessions), and resource distribution (file sizes, popularity of files) are approximated from several empirical results [8], [9], [10]. This enabled us to make more realistic observations on the evolution of trust relationships. We studied 16 types of malicious peer behaviors, which perform both service- and recommendation-based attacks. SORT mitigated service-based attacks in all cases. Recommendation-based attacks were contained except when malicious peers are in large numbers, e.g., 50 percent of all peers. Experiments on SORT show that good peers can defend themselves against malicious peers without having global trust information. SORT's trust metrics let a peer assess the trustworthiness of other peers based on local information. Service and recommendation contexts enable better measurement of trustworthiness in providing services and giving recommendations.

The outline of the paper is as follows: Section 2 discusses the related research. Section 3 explains the computational model of SORT. Section 4 presents the simulation experiments and results. Section 5 summarizes the results and possible future work directions.

2 RELATED WORK

Marsh [11] defines a formal trust model based on sociological foundations. An agent uses its own experiences to build trust relations and does not consider information from other agents. Abdul-Rahman and Hailes [12] evaluate trust in a discrete domain as an aggregation of direct experience and recommendations of other parties. They define a semantic distance measure to test the accuracy of recommendations. Yu and Singh's model [13] propagates trust information through referral chains. Referrals are the primary method of developing trust in others. Mui et al. [14] propose a statistical model based on trust, reputation, and reciprocity concepts. Reputation is propagated through multiple referral chains. Jøsang et al. [15] discuss that referrals based on indirect trust relations may cause incorrect trust derivation. Thus, trust topologies should be carefully evaluated before propagating trust information. Terzi et al. [16] introduce an algorithm to classify users and assign them roles based on trust relationships. Zhong [17] proposes a dynamic trust concept based on McKnight's social trust model [18]. When building trust relationships, uncertain evidence is evaluated using second-order probability and a Dempster-Shaferian framework.

In e-commerce platforms, reputation systems are widely used as a method of building trust, e.g., eBay, Amazon, and Epinions. A central authority collects feedback from past customers, which is used by future customers in shopping decisions. Resnick et al. [19] discuss that ensuring long-lived relationships, forcing feedback, and checking the honesty of recommendations are some difficulties in reputation systems. Despotovic and Aberer [20] point out that trust-aware exchanges can increase economic activity since some exchanges may not happen without trust. Jøsang et al. [21] indicate that reputation systems are vulnerable to incorrect and bogus feedback attacks. Thus, feedback ratings must be based on objective criteria to be useful. Dellarocas [22] proposes controlled anonymity and cluster filtering methods as countermeasures to unfairly high/low ratings and discriminatory seller behavior attacks. Yu and Singh [23] present a weighted majority algorithm against three attacks on reputation: complementary and exaggerated positive/negative feedback. Guha et al. [24] use trust and distrust concepts in a discrete domain. Their results on the Epinions website's data show that distrust is helpful to measure trustworthiness accurately. Reputation systems are vulnerable to sybil attacks [25], where a malicious entity can disseminate bogus feedback by creating multiple fake entities. To defend against sybil attacks, Yu et al. [26] and Tran et al. [27] propose techniques based on the observation that fake entities generally have many trust relationships among each other but rarely have relationships with real users.

Trust models on P2P systems have extra challenges compared to e-commerce platforms. Malicious peers have more attack opportunities in P2P trust models due to the lack of a central authority. Hoffman et al. [28] discuss five common attacks in P2P trust models: self-promoting, white-washing, slandering, orchestrated, and denial-of-service attacks. They point out that defense techniques in trust models depend on the P2P system architecture.

On a structured P2P system, a DHT structure can provide decentralized and efficient access to trust information. In Aberer and Despotovic's trust model [1], peers report their complaints by using P-Grid [29]. A peer is assumed to be trustworthy unless there are complaints about it. However, this assumed preexistence of trust does not distinguish a newcomer from an untrustworthy peer. Eigentrust [3] uses transitivity of trust to calculate global trust values stored on CAN [30]. Trusted peers are used to leverage building trust among regular peers and mitigate some collaborative attacks. PeerTrust [4] defines transaction and community context parameters to make trust calculation adaptive on P-Grid. While the transaction context parameter addresses application-dependent factors, the community context parameter addresses P2P community related issues such as creating incentives to force feedback. Both Eigentrust and PeerTrust evaluate a recommendation based on the trustworthiness of the recommender. Song et al. [31] propose a fuzzy-logic-based trust model on Chord [32],


which achieves results similar to Eigentrust with lower message overhead. PowerTrust [33] constructs an overlay network based on the power-law distribution of peer feedback. By using a random-walk strategy and utilizing power nodes, feedback aggregation speed and global reputation accuracy are improved.

Solutions on a structured network rely on a DHT structure to store trust information. Each peer becomes a trust holder of another peer, which is assumed to provide authentic global trust information. However, a trust holder might be malicious and provide inauthentic information. In SORT, instead of considering a particular trust holder's feedback as authentic, the public opinion from all acquaintances is considered as more credible information. Instead of considering global trust information, local trust information is enough to make decisions as peers develop their own trust networks.

On unstructured P2P systems, trust queries are generally flooded to the whole network. Cornelli et al. [34] flood trust queries in the Gnutella network [9]. A detailed computational trust model is not defined. Peers make decisions based on collected feedback to mitigate inauthentic file downloads. Selcuk et al. [5] present a vector-based trust metric relying on both interactions and recommendations. A reputation query is sent to neighbors if there are enough neighbors. Otherwise, the query is flooded to the network. Although five types of malicious peers are studied, recommendation-based attacks are not considered in the experiments. Yu et al. [35] store a history of interactions and consider ratings and recentness of interactions when evaluating trust. The number of interactions with a peer is a measure of confidence about the peer. GossipTrust [6] defines a randomized gossiping [36] protocol for efficient aggregation of trust values. A query is randomly forwarded to some neighbors instead of all neighbors. Compared to the flooding approach, gossiping reduces reputation query traffic. In SORT, peers send reputation queries only to peers they interacted with in the past, which reduces network traffic compared to flooding-based approaches. Furthermore, each peer expands its trust network with time and can obtain more credible recommendations from acquaintances.

Some trust models use signed credentials to store trust information. Ooi et al. [37] propose that each peer stores its own reputation using signed certificates. When a peer needs to know about a stranger, it requests the certificates of the stranger. NICE [38] uses signed cookies as a proof of good behavior. Peers dynamically form trust groups to protect each other. Peers in the same group have a higher trust in each other. Trust-based pricing and trading policies help to protect the integrity of groups. Using signed credentials eliminates the need for reputation queries, but ensuring the validity of the trust information in credentials is a problem. If a peer misbehaves after collecting good credentials, it is hard to revoke the credentials without using a central authority. Furthermore, a public-key infrastructure is generally needed.

How to evaluate interactions and how to define trust metrics are important problems in trust models. Wang and Vassileva [39] propose a Bayesian network model which uses different aspects of interactions in a P2P file sharing application. Victor et al. [40] define trust and distrust metrics. A nonzero distrust value lets an agent distinguish an untrusted user from a new user. A lattice structure with trust and knowledge axes is used to model various trusting conditions. Swamynathan et al. [41] decouple trust metrics on service and recommendation contexts to assess trustworthiness better. Creating contexts of trust can be helpful to address issues in various domains. Gupta et al. [42] use reputation as a currency. A central agent issues money to peers in return for their services to others. This money can be used to get a better quality of service. Bhargava et al. [43] discuss trading privacy to gain more trust in pervasive systems. In another interesting study, Virendra et al. [44] use the trust concept in mobile ad-hoc networks to establish keys among nodes and group nodes into domains. Trustworthiness is measured according to lost and misrouted packets. Trust establishment phases are defined for starting up new nodes, maintaining trust of old peers, and reestablishing trust in malicious nodes.

In SORT, importance, recentness, and peer satisfaction parameters are considered to evaluate interactions and recommendations better. The recommender's trustworthiness and confidence about a recommendation are considered when evaluating recommendations. Additionally, service and recommendation contexts are separated. This enabled us to measure trustworthiness in a wide variety of attack scenarios. Most trust models do not consider how interactions are rated and assume that a rating mechanism exists. In this study, we suggest an interaction rating mechanism on a file sharing application and consider many real-life parameters to make the simulations more realistic.

3 THE COMPUTATIONAL MODEL OF SORT

We make the following assumptions. Peers are equal in computational power and responsibility. There are no privileged, centralized, or trusted peers to manage trust relationships. Peers occasionally leave and join the network. A peer provides services and uses services of others. For simplicity of discussion, one type of interaction is considered in the service context, i.e., file download.

    3.1 Preliminary Notations

$p_i$ denotes the $i$th peer. When $p_i$ uses a service of another peer, it is an interaction for $p_i$. Interactions are unidirectional. For example, if $p_i$ downloads a file from $p_j$, it is an interaction for $p_i$ and no information is stored on $p_j$.

If $p_i$ had at least one interaction with $p_j$, $p_j$ is an acquaintance of $p_i$. Otherwise, $p_j$ is a stranger to $p_i$. $A_i$ denotes $p_i$'s set of acquaintances. A peer stores a separate history of interactions for each acquaintance. $SH_{ij}$ denotes $p_i$'s service history with $p_j$, where $sh_{ij}$ denotes the current size of the history. $sh_{max}$ denotes the upper bound for the service history size. Since new interactions are appended to the history, $SH_{ij}$ is a time-ordered list.

Parameters of an interaction. After finishing an interaction, $p_i$ evaluates the quality of service and assigns a satisfaction value for the interaction. Let $0 \le s^k_{ij} \le 1$ denote $p_i$'s satisfaction about the $k$th interaction with $p_j$. If an interaction is not completed, $s^k_{ij} = 0$. An interaction's importance is measured with a weight value. Let $0 \le w^k_{ij} \le 1$ denote the weight of the $k$th interaction of $p_i$ with $p_j$.

Semantics to calculate $s^k_{ij}$ and $w^k_{ij}$ values depend on the application.


In a file sharing application, the authenticity of a file, average download speed, average delay, retransmission rate of packets, and online/offline periods of the service provider might be some parameters to calculate $s^k_{ij}$. Size and popularity of a file might be some parameters to calculate $w^k_{ij}$. In Section 4, we suggest equations to calculate these values in a file sharing application.

The importance of an old interaction should decrease as new interactions happen. The fading effect parameter addresses this issue and forces peers to stay consistent in future interactions. Old interactions lose their importance so a peer cannot easily misbehave by relying on its good history. Let $0 \le f^k_{ij} \le 1$ denote the fading effect of the $k$th interaction of $p_i$ with $p_j$. It is calculated as follows:

$$f^k_{ij} = \frac{k}{sh_{ij}}, \qquad 1 \le k \le sh_{ij}. \tag{1}$$

After adding (or deleting) an interaction to $SH_{ij}$, $p_i$ recalculates the $f^k_{ij}$ values. The fading effect could be defined as a function of time, but then it would have to be recalculated whenever its value is needed. Furthermore, interactions would continually lose value with time, causing all peers to lose reputation even though no bad interaction happens.

Let $SH_{ij} = \{\xi^1_{ij}, \xi^2_{ij}, \ldots, \xi^{sh_{ij}}_{ij}\}$, where $\xi^k_{ij} = (s^k_{ij}, w^k_{ij})$ is a tuple representing the information about the $k$th interaction. When adding a new interaction, $\xi^1_{ij}$ is deleted if $sh_{ij} = sh_{max}$. An interaction is deleted from the history after an expiration period, which should be determined according to $sh_{max}$ and the rate at which interactions happen.
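To make the bookkeeping concrete, the sketch below shows one way the bounded, time-ordered service history $SH_{ij}$ and the fading effect of (1) could be represented in Java (the language of the simulator in Section 4). This is a minimal sketch under our own naming, not the authors' implementation.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Minimal sketch of p_i's service history SH_ij for one acquaintance p_j. */
class ServiceHistory {
    /** One interaction tuple (s, w) from the paper's notation. */
    record Interaction(double satisfaction, double weight) {}

    private final int shMax;                                      // upper bound sh_max
    private final Deque<Interaction> items = new ArrayDeque<>();  // time-ordered SH_ij

    ServiceHistory(int shMax) { this.shMax = shMax; }

    /** Appends a new interaction; the oldest one is dropped when sh_ij = sh_max. */
    void add(double satisfaction, double weight) {
        if (items.size() == shMax) items.removeFirst();
        items.addLast(new Interaction(satisfaction, weight));
    }

    int size() { return items.size(); }

    /** Fading effect f^k_ij = k / sh_ij from (1); k is 1-based, newest has k = sh_ij. */
    double fading(int k) { return (double) k / items.size(); }
}
```

Because the fading effect depends only on an interaction's position in the history, it is implicitly "recalculated" here on every call rather than stored, matching the remark above.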

Trust metrics. Let $0 \le r_{ij} \le 1$ denote $p_i$'s reputation value about $p_j$. Similarly, $0 \le st_{ij}, rt_{ij} \le 1$ denote $p_i$'s service and recommendation trust values about $p_j$, respectively. When $p_j$ is a stranger to $p_i$, we define $r_{ij} = st_{ij} = rt_{ij} = 0$. This is a protection against pseudonym changing of malicious peers [45], since such peers lose reputation and cannot gain an advantage by appearing with new identities. The following sections explain the calculation of these metrics. For easy reading of these sections, Table 1 lists some notations related to the trust metrics.

3.2 Service Trust Metric ($st_{ij}$)

When evaluating an acquaintance's trustworthiness in the service context, a peer first calculates competence and integrity belief values using the information in its service history. Competence belief represents how well an acquaintance satisfied the needs of past interactions [18], [17], [46]. Let $cb_{ij}$ denote the competence belief of $p_i$ about $p_j$ in the service context. Average behavior in the past interactions is a measure of the competence belief. When evaluating competence, interactions should be considered in proportion to their weight and recentness. Then, $p_i$ calculates $cb_{ij}$ as follows:

$$cb_{ij} = \frac{1}{\beta_{cb}} \sum_{k=1}^{sh_{ij}} \big( s^k_{ij} \cdot w^k_{ij} \cdot f^k_{ij} \big). \tag{2}$$

$\beta_{cb} = \sum_{k=1}^{sh_{ij}} \big( w^k_{ij} \cdot f^k_{ij} \big)$ is the normalization coefficient. If $p_j$ completes all interactions perfectly ($s^k_{ij} = 1$ for all $k$), the coefficient $\beta_{cb}$ ensures that $cb_{ij} = 1$. Since $0 \le s^k_{ij}, w^k_{ij}, f^k_{ij} \le 1$ by definition, $cb_{ij}$ always takes a value between 0 and 1.

A peer can be competent but may present erratic behavior. Consistency is as important as competence. The level of confidence in the predictability of future interactions is called integrity belief [18], [17], [46]. Let $ib_{ij}$ denote the integrity belief of $p_i$ about $p_j$ in the service context. Deviation from average behavior ($cb_{ij}$) is a measure of the integrity belief. Therefore, $ib_{ij}$ is calculated as an approximation to the standard deviation of the interaction parameters:

$$ib_{ij} = \sqrt{\frac{1}{sh_{ij}} \sum_{k=1}^{sh_{ij}} \big( s^k_{ij} \cdot \overline{w}_{ij} \cdot \overline{f}_{ij} - cb_{ij} \big)^2}. \tag{3}$$

A small value of $ib_{ij}$ means more predictable behavior of $p_j$ in future interactions. $\overline{w}_{ij}$ and $\overline{f}_{ij}$ are the means of the $w^k_{ij}$ and $f^k_{ij}$ values in $SH_{ij}$, respectively. Since the weight and fading effect parameters are independent from the satisfaction parameter and we are interested in the average behavior (satisfaction), these parameters are eliminated in the calculation by using the $\overline{w}_{ij}$ and $\overline{f}_{ij}$ values for all interactions. We can approximate $\overline{f}_{ij}$ as follows:

$$\overline{f}_{ij} = \frac{1}{sh_{ij}} \sum_{k=1}^{sh_{ij}} f^k_{ij} = \frac{sh_{ij} + 1}{2\, sh_{ij}} \approx \frac{1}{2}. \tag{4}$$

Based on the past interactions with $p_j$, $p_i$ has an expectation about future interactions. $p_i$ wants to maintain a level of satisfaction according to this expectation. If the satisfaction parameter is assumed to follow a normal distribution, $cb_{ij}$ and $ib_{ij}$ can be considered as approximations of the mean ($\mu$) and standard deviation ($\sigma$) of the satisfaction parameter, respectively. According to the cumulative distribution function of the normal distribution, an interaction's satisfaction is less than $cb_{ij}$ with probability $\Phi(0) = 0.5$. If $p_i$ sets $st_{ij} = cb_{ij}$, half of the future interactions are likely to have a satisfaction value less than $cb_{ij}$. Thus, $st_{ij} = cb_{ij}$ is an overestimate of $p_j$'s trustworthiness. A lower estimate makes $p_i$ more confident about $p_j$ since fewer future interactions will have a satisfaction value lower than the $st_{ij}$ value. $p_i$ may calculate $st_{ij}$ as follows:

$$st_{ij} = cb_{ij} - \frac{ib_{ij}}{2}. \tag{5}$$

In this case, a future interaction's satisfaction is less than $st_{ij}$ with probability $\Phi(-0.5) = 0.3085$. Adding $ib_{ij}$ into the calculation forces $p_j$ to behave more consistently since erratic behavior increases the $ib_{ij}$ value. The selection of $\Phi(-0.5)$ comes from our experimental results. In real life, the satisfaction parameter may follow a different distribution. Each peer can use statistical analysis to determine a more precise distribution based on its past interactions and change (5) accordingly. This analysis can be extended so that a peer determines a specialized distribution for each acquaintance and customizes its trust calculation accordingly.


TABLE 1. Notations on the Trust Metrics



Equation (5) is not complete since the reputation of $p_j$ is not considered. Reputation is especially important in the early phases of a trust relationship. When there are no (or few) interactions with an acquaintance, a peer needs to rely on the reputation metric. As more interactions happen, the competence and integrity belief values gain more importance. Therefore, $p_i$ calculates $st_{ij}$ as follows:

$$st_{ij} = \frac{sh_{ij}}{sh_{max}} \left( cb_{ij} - \frac{ib_{ij}}{2} \right) + \left( 1 - \frac{sh_{ij}}{sh_{max}} \right) r_{ij}. \tag{6}$$

Equation (6) balances the effects of interactions and the reputation value on the $st_{ij}$ value. When $p_j$ is a stranger to $p_i$, $sh_{ij} = 0$ and $st_{ij} = r_{ij}$. As more interactions happen with $p_j$, $sh_{ij}$ gets larger and $r_{ij}$ becomes less important. When $sh_{ij} = sh_{max}$, $r_{ij}$ has no effect on $st_{ij}$.
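Putting (2)-(6) together, the service trust calculation reduces to two passes over the history. The following sketch assumes the satisfaction and weight arrays are ordered oldest first; the method and parameter names are illustrative, not the authors' code.

```java
/** Sketch of the service trust calculation of (2)-(6); names are illustrative. */
final class ServiceTrust {
    /**
     * @param s     satisfaction values s^k_ij, oldest first (k = 1..sh_ij)
     * @param w     weight values w^k_ij in the same order
     * @param shMax upper bound sh_max of the history size
     * @param r     current reputation value r_ij
     */
    static double serviceTrust(double[] s, double[] w, int shMax, double r) {
        int sh = s.length;
        if (sh == 0) return r;               // stranger: st_ij = r_ij by (6)

        // Competence belief cb_ij, equation (2).
        double num = 0, norm = 0;
        for (int k = 1; k <= sh; k++) {
            double f = (double) k / sh;      // fading effect, equation (1)
            num  += s[k - 1] * w[k - 1] * f;
            norm += w[k - 1] * f;            // normalization coefficient beta_cb
        }
        double cb = num / norm;

        // Integrity belief ib_ij, equation (3), using mean weight and mean fading.
        double wBar = 0;
        for (double wk : w) wBar += wk;
        wBar /= sh;
        double fBar = (sh + 1) / (2.0 * sh); // equation (4)
        double sq = 0;
        for (double sk : s) sq += Math.pow(sk * wBar * fBar - cb, 2);
        double ib = Math.sqrt(sq / sh);

        // Service trust st_ij, equation (6): weigh direct experience vs. reputation.
        double ratio = (double) sh / shMax;
        return ratio * (cb - ib / 2) + (1 - ratio) * r;
    }
}
```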

3.3 Reputation Metric ($r_{ij}$)

The reputation metric measures a stranger's trustworthiness based on recommendations. In the following two sections, we assume that $p_j$ is a stranger to $p_i$ and $p_k$ is an acquaintance of $p_i$. If $p_i$ wants to calculate the $r_{ij}$ value, it starts a reputation query to collect recommendations from its acquaintances.

Algorithm 1 shows how $p_i$ selects trustworthy acquaintances and requests their recommendations. Let $\eta_{max}$ denote the maximum number of recommendations that can be collected in a reputation query and $|S|$ denote the size of a set $S$. In the algorithm, $p_i$ sets a high threshold for recommendation trust values and requests recommendations from highly trusted acquaintances first. Then, it decreases the threshold and repeats the same operations. To prevent excessive network traffic, the algorithm stops when $\eta_{max}$ recommendations are collected or the threshold drops below the $(\overline{rt} - \sigma_{rt})$ value.

Algorithm 1. GetRecommendations($p_j$)
1:  $\overline{rt} \leftarrow \frac{1}{|A_i|} \sum_{p_k \in A_i} rt_{ik}$
2:  $\sigma_{rt} \leftarrow \sqrt{\frac{1}{|A_i|} \sum_{p_k \in A_i} (rt_{ik} - \overline{rt})^2}$
3:  $th_{high} \leftarrow 1$
4:  $th_{low} \leftarrow \overline{rt} + \sigma_{rt}$
5:  $rset \leftarrow \emptyset$
6:  while $\overline{rt} - \sigma_{rt} \le th_{low}$ and $|rset| < \eta_{max}$ do
7:    for all $p_k \in A_i$ do
8:      if $th_{low} \le rt_{ik} \le th_{high}$ then
9:        $rec \leftarrow$ RequestRecommendation($p_k$, $p_j$)
10:       $rset \leftarrow rset \cup \{rec\}$
11:     end if
12:   end for
13:   $th_{high} \leftarrow th_{low}$
14:   $th_{low} \leftarrow th_{low} - \sigma_{rt}/2$
15: end while
16: return $rset$
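A direct Java translation of Algorithm 1 might look as follows. The Network interface and the use of a generic Object recommendation type are stand-ins for whatever messaging layer is used; the early break once $\eta_{max}$ recommendations are collected is a minor addition to the pseudocode above.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** Sketch of Algorithm 1: threshold-based recommendation collection. */
class ReputationQuery {
    interface Network { Object requestRecommendation(String peer, String subject); }

    static List<Object> getRecommendations(Map<String, Double> rt,  // rt_ik per acquaintance
                                           String subject, int max, Network net) {
        // Mean and standard deviation of recommendation trust values (lines 1-2).
        double mean = rt.values().stream().mapToDouble(Double::doubleValue).average().orElse(0);
        double var = rt.values().stream().mapToDouble(v -> (v - mean) * (v - mean)).average().orElse(0);
        double sigma = Math.sqrt(var);

        double thHigh = 1.0, thLow = mean + sigma;            // lines 3-4
        List<Object> rset = new ArrayList<>();
        while (mean - sigma <= thLow && rset.size() < max) {  // line 6
            for (var e : rt.entrySet()) {
                if (thLow <= e.getValue() && e.getValue() <= thHigh) {
                    rset.add(net.requestRecommendation(e.getKey(), subject)); // line 9
                    if (rset.size() == max) break;            // stop once eta_max collected
                }
            }
            thHigh = thLow;                                   // line 13
            thLow -= sigma / 2;                               // line 14
        }
        return rset;
    }
}
```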

Let $T_i = \{p_1, p_2, \ldots, p_{t_i}\}$ be the set of trustworthy peers selected by Algorithm 1 and $t_i$ be the number of peers in this set. If $p_k \in A_i$ had at least one interaction with $p_j$, it replies with the following information as a recommendation:

- $cb_{kj}, ib_{kj}$. These values are a summary of $p_k$'s interaction history with $p_j$.
- $sh_{kj}$. The history size with $p_j$ is a measure of $p_k$'s confidence in the $cb_{kj}$ and $ib_{kj}$ values. If $p_k$ had many interactions, the $cb_{kj}$ and $ib_{kj}$ values are more credible.
- $r_{kj}$. If $p_k$ had any interaction with $p_j$, it should already have calculated the $r_{kj}$ value, which is a summary of the recommendations of $p_k$'s acquaintances. $p_i$ can make a better approximation to the global reputation of $p_j$ by aggregating $r_{kj}$ values.
- $\eta_{kj}$. $\eta_{kj}$ denotes the number of $p_k$'s acquaintances which provided a recommendation during the calculation of $r_{kj}$. This value is a measure of $p_k$'s confidence in the $r_{kj}$ value. If the $\eta_{kj}$ value is close to $\eta_{max}$, the $r_{kj}$ value is credible since more peers agree on it.

Including the $sh_{kj}$ and $\eta_{kj}$ values in the recommendation protects $p_k$'s credibility in the view of $p_i$. If $p_k$'s knowledge about $p_j$ is insufficient, $p_i$ figures this out by examining the $sh_{kj}$ and $\eta_{kj}$ values. Thus, $p_i$ does not judge $p_k$ harshly if the $cb_{kj}, ib_{kj}, r_{kj}$ values are inaccurate compared to the recommendations of other peers.

A recommendation is evaluated according to the recommendation trust value of the recommender. In particular, $p_i$ evaluates $p_k$'s recommendation based on the $rt_{ik}$ value. The calculation of the $rt_{ik}$ value is explained in Section 3.4. If $p_i$ has never received a recommendation from $p_k$, we set $rt_{ik} = r_{ik}$.

After collecting all recommendations, $p_i$ calculates an estimation about the reputation of $p_j$ by aggregating the $r_{kj}$ values in all recommendations. Let $er_{ij}$ denote $p_i$'s estimation about the reputation of $p_j$. In this calculation, $r_{kj}$ values are considered with respect to $\eta_{kj}$ as shown:

$$er_{ij} = \frac{1}{\beta_{er}} \sum_{p_k \in T_i} \big( rt_{ik} \cdot \eta_{kj} \cdot r_{kj} \big). \tag{7}$$

If $p_k$ is trustworthy and collected many recommendations when calculating the $r_{kj}$ value, the $rt_{ik}$ and $\eta_{kj}$ values will be larger and $r_{kj}$ will affect the result more. $\beta_{er} = \sum_{p_k \in T_i} (rt_{ik} \cdot \eta_{kj})$ is the normalization coefficient.

Then, $p_i$ calculates estimations about the competence and integrity beliefs of $p_j$, denoted by $ecb_{ij}$ and $eib_{ij}$, respectively. These values are calculated using the $cb_{kj}$ and $ib_{kj}$ values in all recommendations. Additionally, $cb_{kj}$ and $ib_{kj}$ are evaluated based on $sh_{kj}$ values. Equations (8) and (9) show the calculation of $ecb_{ij}$ and $eib_{ij}$, where $\beta_{ecb} = \sum_{p_k \in T_i} (rt_{ik} \cdot sh_{kj})$ is the normalization coefficient:

$$ecb_{ij} = \frac{1}{\beta_{ecb}} \sum_{p_k \in T_i} \big( rt_{ik} \cdot sh_{kj} \cdot cb_{kj} \big), \tag{8}$$

$$eib_{ij} = \frac{1}{\beta_{ecb}} \sum_{p_k \in T_i} \big( rt_{ik} \cdot sh_{kj} \cdot ib_{kj} \big). \tag{9}$$

While $er_{ij}$ represents the information collected from the acquaintances of $p_i$'s acquaintances, $ecb_{ij}$ and $eib_{ij}$ represent the own experiences of $p_i$'s acquaintances with $p_j$.

$p_i$ calculates the average of the history sizes in all recommendations, $\overline{sh} = \sum_{p_k \in T_i} sh_{kj} / t_i$. If $\overline{sh}$ is close to the $sh_{max}$ value, $p_i$'s acquaintances had a high level of their own experience with $p_j$, and the $ecb_{ij}, eib_{ij}$ values should be given more importance than the $er_{ij}$ value. Otherwise, the $er_{ij}$ value is more


important. With these observations, $r_{ij}$ is calculated as follows:

$$r_{ij} = \frac{\lfloor \overline{sh} \rfloor}{sh_{max}} \left( ecb_{ij} - \frac{eib_{ij}}{2} \right) + \left( 1 - \frac{\lfloor \overline{sh} \rfloor}{sh_{max}} \right) er_{ij}. \tag{10}$$

In (10), if $\overline{sh}$ is close to $sh_{max}$, the own experiences of acquaintances (the $ecb_{ij}$ and $eib_{ij}$ values) will be more important than the estimation of the reputation ($er_{ij}$).
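Equations (7)-(10) then reduce to a single pass over the collected recommendations. In the sketch below, the Recommendation record and its field names are illustrative; it assumes a nonempty recommendation set with positive normalization sums.

```java
import java.util.List;

/** Sketch of the reputation aggregation in (7)-(10); field names are illustrative. */
class Reputation {
    /** Fields a recommender p_k returns about p_j, plus p_i's trust rt_ik in p_k. */
    record Recommendation(double rt, double r, double cb, double ib, int sh, int eta) {}

    static double reputation(List<Recommendation> recs, int shMax) {
        double er = 0, erNorm = 0;           // estimated reputation, equation (7)
        double ecb = 0, eib = 0, ebNorm = 0; // estimated beliefs, equations (8)-(9)
        double shSum = 0;
        for (Recommendation rec : recs) {
            er     += rec.rt() * rec.eta() * rec.r();
            erNorm += rec.rt() * rec.eta();
            ecb    += rec.rt() * rec.sh() * rec.cb();
            eib    += rec.rt() * rec.sh() * rec.ib();
            ebNorm += rec.rt() * rec.sh();
            shSum  += rec.sh();
        }
        er /= erNorm; ecb /= ebNorm; eib /= ebNorm;
        double shBar = Math.floor(shSum / recs.size()); // floor of sh-bar, as in (10)
        // Equation (10): weigh first-hand beliefs against second-hand reputation.
        return (shBar / shMax) * (ecb - eib / 2) + (1 - shBar / shMax) * er;
    }
}
```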

3.4 Recommendation Trust Metric ($rt_{ik}$)

After calculating the $r_{ij}$ value, $p_i$ updates the recommendation trust values of recommenders based on the accuracy of their recommendations. This section explains how $p_i$ updates $rt_{ik}$ according to $p_k$'s recommendation.

Similar to interactions, three parameters are calculated about recommendations. Let $0 \le rs^z_{ik}, rw^z_{ik}, rf^z_{ik} \le 1$ denote the satisfaction, weight, and fading effect of $p_i$'s $z$th recommendation from $p_k$, respectively. Let $\phi^z_{ik} = (rs^z_{ik}, rw^z_{ik})$ be a tuple representing the information about the $z$th recommendation and $RH_{ik} = \{\phi^1_{ik}, \phi^2_{ik}, \ldots, \phi^{rh_{ik}}_{ik}\}$ be the recommendation history of $p_i$ with $p_k$. Then, $rh_{ik}$ is the size of $RH_{ik}$, and $rh_{max}$ denotes the upper bound of the recommendation history size.

To calculate the satisfaction parameter, $p_i$ compares the $r_{kj}, cb_{kj}, ib_{kj}$ values with the $er_{ij}, ecb_{ij}, eib_{ij}$ values, respectively. If these values are close, $p_k$'s recommendation is good, and a high value should be assigned to the satisfaction parameter. The calculation of $rs^z_{ik}$ is as follows:

$$rs^z_{ik} = \frac{\left( 1 - \frac{|r_{kj} - er_{ij}|}{er_{ij}} \right) + \left( 1 - \frac{|cb_{kj} - ecb_{ij}|}{ecb_{ij}} \right) + \left( 1 - \frac{|ib_{kj} - eib_{ij}|}{eib_{ij}} \right)}{3}. \tag{11}$$

From (7)-(9), we know that $p_k$'s recommendation affects $r_{ij}$ in proportion to the $sh_{kj}$ and $\eta_{kj}$ values. The effect of these values on $r_{ij}$ is also proportional to $\lfloor \overline{sh} \rfloor$ due to (10). Thus, the weight of a recommendation should be calculated with respect to the $sh_{kj}, \eta_{kj}, \lfloor \overline{sh} \rfloor$ values. $p_i$ calculates $rw^z_{ik}$ as follows:

$$rw^z_{ik} = \frac{\lfloor \overline{sh} \rfloor}{sh_{max}} \cdot \frac{sh_{kj}}{sh_{max}} + \left( 1 - \frac{\lfloor \overline{sh} \rfloor}{sh_{max}} \right) \frac{\eta_{kj}}{\eta_{max}}. \tag{12}$$

If the $sh_{kj}$ and $\eta_{kj}$ values are large, $rw^z_{ik}$ will have a value close to 1. This is the desired result based on our observations in (7)-(9). The $\lfloor \overline{sh} \rfloor$ value balances the effects of the $sh_{kj}$ and $\eta_{kj}$ values on the $rw^z_{ik}$ value. If $p_i$'s acquaintances had many interactions with $p_j$, the value of $\lfloor \overline{sh} \rfloor$ will be large and $sh_{kj}$ will have more effect on $rw^z_{ik}$. When $\lfloor \overline{sh} \rfloor$ is small, $\eta_{kj}$ is more important.

Let $rcb_{ik}$ and $rib_{ik}$ denote $p_i$'s competence and integrity beliefs about $p_k$ in the recommendation context, respectively. $p_i$ calculates $rt_{ik}$ in a way similar to $st_{ij}$:

$$rcb_{ik} = \frac{1}{\beta_{rcb}} \sum_{z=1}^{rh_{ik}} \big( rs^z_{ik} \cdot rw^z_{ik} \cdot rf^z_{ik} \big), \tag{13}$$

$$rib_{ik} = \sqrt{\frac{1}{rh_{ik}} \sum_{z=1}^{rh_{ik}} \big( rs^z_{ik} \cdot \overline{rw}_{ik} \cdot \overline{rf}_{ik} - rcb_{ik} \big)^2}, \tag{14}$$

$$rt_{ik} = \frac{rh_{ik}}{rh_{max}} \left( rcb_{ik} - \frac{rib_{ik}}{2} \right) + \frac{rh_{max} - rh_{ik}}{rh_{max}}\, r_{ik}. \tag{15}$$

$\beta_{rcb} = \sum_{z=1}^{rh_{ik}} \big( rw^z_{ik} \cdot rf^z_{ik} \big)$ is the normalization coefficient. $\overline{rw}_{ik}$ and $\overline{rf}_{ik}$ are the means of the $rw^z_{ik}$ and $rf^z_{ik}$ values in $RH_{ik}$, respectively. If $p_i$ had no recommendation from $p_k$, we set $rt_{ik} = r_{ik}$ according to (15).
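For illustration, (11) and (12) can be computed as below; (13)-(15) then mirror the service trust calculation of Section 3.2. Parameter names follow the paper's symbols, but the methods themselves are our own sketch.

```java
/** Sketch of equations (11)-(12); parameter names mirror the paper's symbols. */
final class RecommendationEval {
    /** Satisfaction rs^z_ik: how close p_k's values are to the aggregated estimates. */
    static double satisfaction(double rkj, double cbkj, double ibkj,
                               double er, double ecb, double eib) {
        return ((1 - Math.abs(rkj - er) / er)
              + (1 - Math.abs(cbkj - ecb) / ecb)
              + (1 - Math.abs(ibkj - eib) / eib)) / 3;
    }

    /** Weight rw^z_ik: balances sh_kj and eta_kj by floor(sh-bar), equation (12). */
    static double weight(int shkj, int etakj, double shBar, int shMax, int etaMax) {
        return (shBar / shMax) * ((double) shkj / shMax)
             + (1 - shBar / shMax) * ((double) etakj / etaMax);
    }
}
```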

Fig. 1 explains the whole scenario briefly. Assume that $p_i$ wants to get a particular service. $p_j$ is a stranger to $p_i$ and a probable service provider. To learn $p_j$'s reputation, $p_i$ requests recommendations from its acquaintances. Assume that $p_k$ sends back a recommendation to $p_i$. After collecting all recommendations, $p_i$ calculates $r_{ij}$. Then, $p_i$ evaluates $p_k$'s recommendation, stores the results in $RH_{ik}$, and updates $rt_{ik}$. Assuming $p_j$ is trustworthy enough, $p_i$ gets the service from $p_j$. Then, $p_i$ evaluates this interaction, stores the results in $SH_{ij}$, and updates $st_{ij}$.

    3.5 Selecting Service Providers

When $p_i$ searches for a particular service, it gets a list of service providers. Considering a file sharing application, $p_i$ may download a file from either one or multiple uploaders. With multiple uploaders, checking integrity is a problem since any file part downloaded from an uploader might be inauthentic. Some complex methods utilizing Merkle hashes, secure hashes, and cryptography [47], [48] can be used to do online integrity checking with multiple uploaders. Since this issue is beyond the scope of this paper, the next sections assume a one-uploader scenario.

Service provider selection is done based on the service trust metric, service history size, competence belief, and integrity belief values. When $p_i$ wants to download a file, it selects the uploader with the highest service trust value. If service trust values are equal, the peer with a larger service history size ($sh$) is selected to prioritize the one with more direct experience. If these values are equal, the one with a larger $cb - ib/2$ value is chosen. If the $cb - ib/2$ values are equal, the one with the larger competence belief value is selected. If these values are equal, upload bandwidths are compared. If the tie cannot be broken, one of the equal peers is randomly selected, as sketched below.
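This tie-breaking order maps naturally onto a chained comparator. The sketch below uses an illustrative Candidate record; the final random tie-break among fully equal peers is left out.

```java
import java.util.Comparator;

/** Sketch of the uploader selection order in Section 3.5; fields are illustrative. */
class UploaderSelection {
    record Candidate(double st, int sh, double cb, double ib, double bandwidth) {}

    /** Higher values are better at every step, so the whole chain is reversed. */
    static final Comparator<Candidate> ORDER =
        Comparator.comparingDouble(Candidate::st)                 // service trust st_ij
                  .thenComparingInt(Candidate::sh)                // service history size
                  .thenComparingDouble(c -> c.cb() - c.ib() / 2)  // cb - ib/2
                  .thenComparingDouble(Candidate::cb)             // competence belief
                  .thenComparingDouble(Candidate::bandwidth)      // upload bandwidth
                  .reversed();
}
```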

$p_i$ might select a stranger due to its high reputation. For example, if $p_m$ is a stranger, $p_i$ sets $st_{im} = r_{im}$ according to (6). If $p_m$ is more trustworthy than all acquaintances, $p_i$ selects $p_m$ as the service provider.


Fig. 1. Operations when receiving a recommendation and having an interaction.


Selecting the best service providers may overload some peers while others are idle. A selection algorithm should consider load balancing to utilize all resources of peers. In our simulations, a peer's number of simultaneous download/upload operations is limited to a maximum. If a peer reaches its maximum number of uploads, it rejects incoming requests so the requester can get services from others. This simple load balancing mechanism does not consider the whole system state to balance loads. Thus, this issue needs more investigation in future work.

4 EXPERIMENTS AND ANALYSIS

A file sharing simulation program was implemented in Java to observe the results of using SORT in a P2P environment. Some questions studied in the experiments are as follows: how SORT handles attacks, how much attacks can be mitigated, how much recommendations are (not) helpful in correctly identifying malicious peers, and what type of attackers are the most harmful.

    4.1 Method

The simulation runs in cycles. Each cycle represents a period of time. Downloading a file is an interaction. A peer sharing files is called an uploader. A peer downloading a file is called a downloader. The set of peers who downloaded a file from a peer are called the downloaders of the peer. An ongoing download/upload operation is called a session. Simulation parameters are generated based on the results of several empirical studies [8], [9], [10] to make observations realistic. More details can be found in Appendix A, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TDSC.2012.74, and in [49].

A file search request reaches up to 40 percent of the network and returns online uploaders only. A file is downloaded from one uploader to simplify integrity checking. All peers are assumed to have antivirus software so they can detect infected files. Four different cases are studied to understand the effects of trust calculation methods under attack conditions:

- No trust. Trust information is not used for uploader selection. An uploader is selected according to its bandwidth. This method is the base case to understand if trust is helpful to mitigate attacks.
- No reputation query. An uploader is selected based on trust information, but peers do not request recommendations from other peers. Trust calculation is done based on the SORT equations, but the reputation ($r$) value is always zero for a peer. This method will help us to assess if recommendations are helpful.
- SORT. All SORT equations are used as in Section 3.
- Flood reputation query. SORT equations are used, but a reputation query is flooded to the whole network. This method will help us to understand if getting more recommendations is helpful to mitigate attacks. A peer may request a recommendation from strangers. In this case, a peer assigns a recommendation trust value to the stranger as $rt_{stranger} = \overline{rt} - \sigma_{rt}$, where $\overline{rt}$ and $\sigma_{rt}$ are the mean and standard deviation of the recommendation trust values of all acquaintances. If a peer does not have any acquaintances, $rt_{stranger} = 0.1$.

Before starting a session, a downloader makes a bandwidth agreement with the uploader, which declares the amount of bandwidth it can devote. The parameters of each finished session are recorded as an interaction. The satisfaction parameter is calculated based on the following variables:

- Bandwidth. The ratio of average bandwidth (AveBw) and agreed bandwidth (AgrBw) is a measure of the reliability of an uploader in terms of bandwidth.
- Online period. The ratio of online (OnP) and offline (OffP) periods represents the availability of an uploader.

$p_i$ calculates the satisfaction of the $k$th interaction with $p_j$ as follows:

$$s^k_{ij} = \begin{cases} \left( \dfrac{AveBw}{AgrBw} + \dfrac{OnP}{OnP + OffP} \right) \Big/ 2 & \text{if } AveBw < AgrBw, \\[2ex] \left( 1 + \dfrac{OnP}{OnP + OffP} \right) \Big/ 2 & \text{otherwise.} \end{cases} \tag{16}$$

The calculation of the satisfaction parameter may include more variables, such as delay, jitter, and retransmission of dropped packets [39], [44]. These variables are not included in the equation since they are not simulated.
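A minimal sketch of (16), keeping the same variable names:

```java
/** Sketch of the satisfaction calculation in (16). */
final class SessionRating {
    static double satisfaction(double aveBw, double agrBw, double onP, double offP) {
        double availability = onP / (onP + offP);
        // No bonus for providing more than the agreed bandwidth.
        double bwTerm = (aveBw < agrBw) ? aveBw / agrBw : 1.0;
        return (bwTerm + availability) / 2;
    }
}
```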

The weight of an interaction is calculated based on two variables:

- File size. A large file is more important than a small one due to bandwidth consumption. However, importance is not directly related to the file size. We assume that files over 100 MB have the same importance.
- Popularity. Popular files are more important than unpopular ones. We assume that the number of uploaders of a file is an indication of its popularity.

Let $Uploader_{max}$ be the number of uploaders of the most popular file. $size$ and $\#Uploaders$ denote the file size and the number of uploaders, respectively. $p_i$ calculates the weight parameter of the $k$th interaction with $p_j$ as follows:

$$w^k_{ij} = \begin{cases} \left( \dfrac{size}{100\,\mathrm{MB}} + \dfrac{\#Uploaders}{Uploader_{max}} \right) \Big/ 2 & \text{if } size < 100\,\mathrm{MB}, \\[2ex] \left( 1 + \dfrac{\#Uploaders}{Uploader_{max}} \right) \Big/ 2 & \text{otherwise.} \end{cases} \tag{17}$$

Sometimes rarely shared files might be important, but this case is discarded for simplicity.
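Similarly, a minimal sketch of (17) as reconstructed above, with the second branch capping the size term for files over 100 MB:

```java
/** Sketch of the interaction weight in (17); file sizes are in MB. */
final class InteractionWeight {
    static double weight(double sizeMB, int uploaders, int uploaderMax) {
        // Files of 100 MB or more all get the maximum size term.
        double sizeTerm = (sizeMB < 100) ? sizeMB / 100 : 1.0;
        return (sizeTerm + (double) uploaders / uploaderMax) / 2;
    }
}
```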

    4.2 Attacker Model

Attackers can perform service-based and recommendation-based attacks. Uploading a virus-infected or an inauthentic file is a service-based attack. Giving a misleading recommendation intentionally is a recommendation-based attack. There are two types of misleading recommendations [22]:

- Unfairly high. Giving a positively-biased trust value about a peer, where $r = cb = ib = 1$ and $sh = sh_{max}$.
- Unfairly low. Giving a negatively-biased trust value about a peer, where $r = cb = ib = 0$ and $sh = sh_{max}$.


Setting $sh = sh_{max}$ maximizes the effect of a misleading recommendation. A fair recommendation is the recommender's unbiased trust information about a peer. A service-based attack can be detected immediately since a virus-infected or an inauthentic file can be recognized after the download. However, it is hard for a peer to determine a recommendation-based attack if the peer's own experience conflicts with a recommendation. Since a recommender might be cheated by attackers, there is no evidence to prove that a recommendation is intentionally given as misleading.

A good peer uploads authentic files and gives fair recommendations. A malicious peer (attacker) performs both service- and recommendation-based attacks. Four different attack behaviors are studied for malicious peers: naive, discriminatory, hypocritical, and oscillatory behaviors. A nonmalicious network consists of only good peers. A malicious network contains both good and malicious peers. If malicious peers do not know about each other and perform attacks independently, they are called individual attackers. Individual attackers may attack each other. For individual attackers, the attack behaviors are as follows:

1. Naive. The attacker always uploads infected/inauthentic files and gives unfairly low recommendations about others [22].
2. Discriminatory. The attacker selects a group of victims and always uploads infected/inauthentic files to them [22], [5]. It gives unfairly low recommendations about victims. For other peers, it behaves as a good peer.
3. Hypocritical. The attacker uploads infected/inauthentic files and gives unfairly low recommendations with x percent probability [3], [5]. At other times, it behaves as a good peer.
4. Oscillatory. The attacker builds a high reputation by being good for a long time period. Then, it behaves as a naive attacker for a short period of time. After the malicious period, it becomes a good peer again.

If malicious peers know each other and coordinate when launching attacks, they are called collaborators. Collaborators always upload authentic files and provide fair recommendations to each other. Collaborators give unfairly high recommendations about each other when recommendations are requested by good peers (to promote the reputation of collaborators). All collaborators behave the same in these situations. In other situations, they behave according to the attack behaviors described below.

1. Naive. Collaborators always upload infected/inauthentic files to good peers and give unfairly low recommendations about them.
2. Discriminatory. Collaborators select a group of peers as victims. They upload infected/inauthentic files to the victims and give unfairly low recommendations about them. They upload authentic files to nonvictim peers. They also give fair recommendations about nonvictim peers.
3. Hypocritical. Collaborators upload infected or inauthentic files to good peers or give unfairly low recommendations about them with x percent probability. In the other cases, they behave as good peers.
4. Oscillatory. Collaborators behave as good peers for a long time period. Then, they perform service-based and recommendation-based attacks in the naive collaborator mode for a short period of time. Good/malicious periods are synchronized among all collaborators.

Another type of attacker is the pseudospoofer, who changes its pseudonym to escape identification [45], [5]. In addition to the above attacker behaviors, we studied individual attackers and collaborators in the pseudonym changing mode. In the experiments, an individual attacker changes its pseudonym periodically. Collaborators are assumed to use a special protocol to achieve synchronization during pseudonym changes so they can maintain coordination.

    4.3 Analysis on Individual Attackers

This section explains the results of the experiments on individual attackers. For each type of individual attacker, two separate network topologies are created: one 10 percent malicious and one 50 percent malicious. Each network topology is tested with the four trust calculation methods. In the experiments, a hypocritical attacker behaves maliciously in 20 percent of all interactions. A discriminatory attacker selects 10 percent of all peers as victims. An oscillatory attacker behaves well for 1,000 cycles and maliciously for 100 cycles.

Service-based attacks. Table 2 shows the percentage of service-based attacks prevented by each trust calculation method. When a malicious peer uploads an infected/inauthentic file, it is recorded as a service-based attack. The number of attacks in the No Trust method is considered the base case to understand how many attacks can happen without using trust information. Then, the number of attacks observed for each trust calculation method is compared with the base case to determine the percentage of attacks prevented. In the table, NoRQ and FloodRQ denote the "No reputation query" and "Flood reputation query" methods, respectively.

In a 10 percent malicious network, all methods can prevent more than 60 percent of the attacks of naive attackers. The NoRQ method's performance is close to the other methods since a good peer identifies a naive attacker after having the first interaction. Thus, recommendations are not very helpful in the naive case. For discriminatory attackers, the situation is similar since their naive attacks easily reveal their identity to victims. For hypocritical and oscillatory attackers, a good peer may not identify an attacker in the first interaction. Therefore, recommendations in the SORT and FloodRQ methods can be helpful to identify some attackers before an attack happens.

TABLE 2. Percentage of Service-Based Attacks Prevented for Individual Attackers

In a 50 percent malicious network, the prevention ratio can be maintained over 60 percent for naive and discriminatory behaviors since attackers are identified quickly. The NoRQ method can perform close to the SORT and FloodRQ methods. For hypocritical and oscillatory behaviors, SORT can prevent nearly 40 percent of all attacks, which is still good considering the extreme number of attackers. Although the attack prevention ratio is higher for the naive behavior, the number of attacks is 4-8 times higher than for the other attacker types. Thus, naive attackers can be considered more successful than the other types of individual attackers.

In SORT, a peer interacts less with strangers as its set of acquaintances grows. Therefore, the rate of service-based attacks decreases with time. In all cases, SORT's prevention ratio for service-based attacks is close to the FloodRQ method. However, the FloodRQ method causes 7-10 times more recommendation traffic than SORT. The difference in misleading recommendations is much higher, as explained below. Thus, SORT has a better performance tradeoff than the FloodRQ method.

Recommendation-based attacks. In the simulations, when a malicious peer gives a misleading recommendation, it is recorded as a recommendation-based attack. Fig. 2 shows the rate of recommendation-based attacks in the 10 percent malicious network. When SORT is used, peers form their own trust network with time and do not request recommendations from untrustworthy peers. Therefore, SORT can effectively mitigate recommendation-based attacks with time. In the FloodRQ method, peers collect more recommendations from both acquaintances and strangers. Therefore, attackers find the opportunity to disseminate more misleading recommendations as strangers. In the FloodRQ method, the number of recommendation-based attacks is roughly 10 to 30 times more than SORT for the discriminatory, hypocritical, and oscillatory behaviors. Naive attackers cannot disseminate misleading recommendations with SORT since they are identified after the first interaction. In the FloodRQ method, if a peer has not interacted with a naive attacker before, it can request recommendations from the attacker as a stranger. Therefore, naive attackers can disseminate more misleading recommendations than the other attacker types in the FloodRQ method. This observation shows that instead of considering public opinion, collecting recommendations from acquaintances provides more reliable information.

In a 50 percent malicious network, the recommendation trust values of attackers are close to those of good peers, so attackers can disseminate more misleading recommendations. However, SORT still mitigates misleading recommendations 5-10 times better than the FloodRQ method.

Distribution of trust metrics. Peers with higher capabilities (network bandwidth, online period, and number of shared files) can finish more interactions successfully. Thus, they generally have better reputation and service trust values. Recommendation trust values are not directly related to peer capabilities since giving a recommendation does not require high capabilities.

Fig. 3 shows the average reputation values of peers at the end of the simulation in the 10 percent malicious network with SORT. In our experiments, some peers had 500-600 downloaders. Naive attackers are not shown in the figure. They have zero reputation values since they attack in all interactions. Discriminatory attackers only attack victims, so they can build up some reputation among nonvictim peers.


Fig. 2. Recommendation-based attacks of individual attackers with respect to time.

Fig. 3. Reputation values in individual attacker experiments (with SORT).


Oscillatory attackers gain the highest reputation since their attack period is 10 percent of their good period. Hypocritical attackers have a lower reputation value than oscillatory ones since they attack in 20 percent of all interactions. In the 50 percent malicious network, the average reputation values of good peers drop due to the large number of misleading recommendations. However, attackers have lower reputation values than good peers in all experiments.

Generally, a peer performs few interactions with an acquaintance but may request many recommendations from the acquaintance before each download operation. Thus, reputation values have a strong correlation with service trust values but not with recommendation trust values. Giving many recommendations increases the chance of giving inaccurate information, especially for hypocritical and oscillatory attackers. Furthermore, having a large set of downloaders might have a negative effect on recommendation trust values. A peer with many downloaders gets more recommendation requests, and its average recommendation trust value drops. Naive attackers have zero recommendation trust values since their reputation is zero. For the other behaviors, attackers have lower recommendation trust values than good peers, so the recommendations of good peers are more credible.

    4.4 Analysis on Individual Pseudospoofers

This section explains the results of the experiments on individual pseudospoofers. Pseudospoofers change their pseudonyms every 1,000 cycles. For the other parameters, attackers behave as described in Section 4.3.

Service-based attacks. Table 3 shows the attack prevention ratio for individual pseudospoofers. The values are obtained by comparing the base case with each trust calculation method. After every pseudonym change, attackers become strangers to others. This behavior has two effects: 1) Pseudospoofers clear their bad history. Hence, a good peer may interact with them when it cannot find more reliable uploaders, which increases attacks. 2) Pseudospoofers become more isolated from good peers. They lose their ability to attract good peers with time, which decreases attacks.

In all experiments, the NoRQ method performs 20-40 percent worse than the other trust calculation methods. Since no recommendations are collected in the NoRQ method, the chance of selecting an attacker again is higher after every pseudonym change. Therefore, the SORT and FloodRQ methods have better results in the experiments. Recommendations increase the chance of finding good peers among strangers.

When the SORT or FloodRQ methods are used, the prevention ratio for naive pseudospoofers is less than for the other types of attackers. This is a result of the first effect. Naive pseudospoofers attack in all interactions and clean their bad history after every pseudonym change. Thus, naive pseudospoofers can perform 40 percent more attacks than naive individual attackers. For the other types of pseudospoofers, the second effect gains importance and attackers become more isolated. Therefore, the attack prevention ratio does not decrease as much as for naive pseudospoofers, and even increases in some cases.

Recommendation-based attacks. Compared to individual attackers that do not change pseudonyms, recommendation-based attacks decrease due to the second effect. Attackers become more isolated from good peers after every pseudonym change and receive fewer recommendation requests. Therefore, their recommendation-based attacks drop sharply after every pseudonym change in a 10 percent malicious network. The situation is slightly different in a 50 percent malicious network: good peers need to interact with more strangers since they can hardly find each other, so attackers can distribute more misleading recommendations. However, recommendation-based attack rates of individual pseudospoofers are still 70-80 times lower than those of the individual attackers discussed in Section 4.3.

Distribution of trust metrics. Since pseudospoofers clear their history every 1,000 cycles, they cannot gain a high reputation among good peers. The same holds for the service trust and recommendation trust metrics: average reputation, service trust, and recommendation trust values of pseudospoofers remain under 0.1 in most simulations.

    4.5 Analysis on Collaborators

Collaboration of attackers generally makes attack prevention harder. This section presents experimental results on collaborators. Collaborators form teams of size 50 and launch attacks as teams. We first tried teams of size 10, but this was not enough to benefit from collaboration and the results were close to those of individual attackers. Hence, the team size was increased to better observe the effects of collaboration. The attack probability of hypocritical collaborators is set to 0.2. Oscillatory collaborators behave well for 1,000 cycles and maliciously for 100 cycles. Discriminatory collaborators attack the same group of victims, which is 10 percent of all peers. In other words, different teams attack the same victims and stay honest with all others.
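For reference, the collaborator parameters above can be collected into a single hypothetical simulation configuration (the values come from the text; the field names are ours):

    # Collaborator settings described in this section (names assumed).
    COLLABORATOR_CONFIG = {
        "team_size": 50,                  # teams of 10 behaved like individuals
        "hypocritical_attack_prob": 0.2,  # attack in 20 percent of interactions
        "oscillatory_good_cycles": 1000,  # behave well for 1,000 cycles...
        "oscillatory_bad_cycles": 100,    # ...then attack for 100 cycles
        "victim_fraction": 0.10,          # all teams share one victim set
    }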

Service-based attacks. Table 4 shows the percentage of attacks prevented by each method. Attacks of naive collaborators can be prevented by 60 percent or more. Naive collaborators are identified by good peers after the first interaction, so they are not asked for recommendations.


TABLE 3. Percentage of Service-Based Attacks Prevented for Individual Pseudospoofers

TABLE 4. Percentage of Service-Based Attacks Prevented for Collaborators


Thus, they cannot praise each other with unfairly high recommendations and cannot take advantage of collaboration. Discriminatory collaborators naively attack their victims, so they are quickly identified by the victims. Their collaboration does not help them launch more attacks than individual discriminatory attackers. Hypocritical and oscillatory collaborators can take advantage of collaboration. They attract more good peers than individual attackers by praising each other. They are not quickly identified since they perform attacks only occasionally. Especially in a 50 percent malicious network, SORT performs worse than the NoRQ method for hypocritical and oscillatory behaviors. In such an extreme environment, misleading recommendations of collaborators pollute the recommendation pool and negatively affect the decisions of peers. In such extremely malicious environments, some trusted peers might be needed to help good peers find each other.

Recommendation-based attacks. Fig. 4 shows the recommendation-based attack rate with SORT. In a 10 percent malicious network, attacks can be contained. However, collaboration enables attackers to disseminate more misleading recommendations than in the individual attack scenarios.

In a 50 percent malicious network, attacks of hypocritical and oscillatory collaborators can be contained to a degree but cannot be decreased to an acceptable level. The collaborators can continue to disseminate misleading recommendations due to their large numbers. Discriminatory collaborators can disseminate more misleading recommendations than the others since they are trusted by 90 percent of all peers: discriminatory collaborators constitute 50 percent of the peer population while victims are 10 percent of the population. Therefore, they can deceive the other good peers with their misleading recommendations. Although they can continue to propagate misleading recommendations about victims, they cannot launch more service-based attacks since they are identified by the victims.

Distribution of trust metrics. Fig. 5 shows reputation values of collaborators. Naive collaborators are not shown in the figure since they are quickly identified by good peers and have zero reputation values.


Fig. 4. Recommendation-based attacks of collaborators with respect to time.

    Fig. 5. Reputation values in collaborative attack experiments (with SORT).


In a 10 percent malicious network, misleading recommendations of discriminatory collaborators slightly decrease the reputation of victims, as shown in Fig. 5a. This is not observed in the individual discriminatory behavior since each attacker selects a different set of victims and misleading recommendations are evenly distributed among all peers.

Discriminatory collaborators attack only 10 percent of peers, so the impact of misleading recommendations is concentrated on a small group. Since they do not attack nonvictim peers and give unfairly high recommendations about each other, they can maintain a high reputation. This situation is clearer in the 50 percent malicious network, as shown in Fig. 5d. Victims have a very low reputation since 50 percent of all peers give misleading recommendations about them. The reputation of nonvictim good peers does not change much since they are not attacked.

As shown in Fig. 5b, good peers can maintain a higher reputation than hypocritical collaborators in the 10 percent malicious network. In the 50 percent malicious network setup, collaborators gain higher reputation values and decrease the reputation of good peers, as shown in Fig. 5e. However, they still have lower reputation values than good peers. Oscillatory collaborators have a distribution similar to hypocritical collaborators. They have a higher average reputation since their attack frequency is lower than that of hypocritical collaborators.

Distributions of service trust and recommendation trust values are similar to reputation values. Thus, we will not discuss distributions of these metrics further.

4.6 Analysis on Collaborating Pseudospoofers

This section presents the results of experiments on collaborating pseudospoofers. Collaborating pseudospoofers are assumed to change their pseudonyms every 1,000 cycles using an external synchronization method. For the other parameters, they behave like collaborators.

Service-based attacks. Table 5 shows the percentage of attacks prevented by each trust calculation method. The results verify our observations in Section 4.4. For the naive and discriminatory behaviors, changing pseudonyms causes the first effect: collaborators clear their bad history and get more attack opportunities. Therefore, the attack prevention ratio drops for these collaborators. For the hypocritical and oscillatory behaviors, the second effect becomes more important. After every pseudonym change, collaborators become more isolated from good peers and lose their attack opportunities. Therefore, the attack rate slightly drops and the attack prevention ratio increases.

SORT’s performance is the best in all test cases. SORT enables peers to establish stronger trust relationships than the NoRQ and FloodRQ methods. In NoRQ, a good peer cannot learn about the experience of others through recommendations, and pseudonym changing lets attackers launch more attacks. In FloodRQ, collecting recommendations from strangers enables collaborators to disseminate more misleading recommendations. Since SORT collects recommendations only from acquaintances, reputation queries return more reliable information than in the FloodRQ method.

Recommendation-based attacks. As with individual pseudospoofers, collaborating pseudospoofers become more isolated from good peers after every pseudonym change. They receive fewer recommendation requests and thus can perform nearly zero recommendation-based attacks in a 10 percent malicious network. In a 50 percent malicious network, collaborating pseudospoofers can distribute more misleading recommendations since good peers need to interact with more strangers to find each other. However, these misleading recommendations remain at a negligible level.

Trust metrics. Like individual pseudospoofers, collaborating pseudospoofers cannot gain high reputation, service trust, or recommendation trust values since they lose their reputation after every pseudonym change. Due to their low recommendation trust values, collaborators are not asked for recommendations by good peers. Therefore, they can distribute only a small number of misleading recommendations.

    5 CONCLUSION

A trust model for P2P networks is presented, in which a peer can develop a trust network in its proximity. A peer can isolate malicious peers around itself as it develops trust relationships with good peers. Two contexts of trust, service and recommendation contexts, are defined to measure capabilities of peers in providing services and giving recommendations. Interactions and recommendations are evaluated with satisfaction, weight, and fading effect parameters. A recommendation contains the recommender’s own experience, information from its acquaintances, and the level of confidence in the recommendation. These parameters provided a better assessment of trustworthiness.
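As a schematic illustration of how such parameters can combine (our notation; the model's exact formulas are defined in the earlier sections of the paper and are not reproduced here), a trust value can be written as a weighted average of satisfaction values in which a fading factor discounts older interactions:

    st_{ij} = \frac{\sum_{k=1}^{n} s_k \, w_k \, f_k}{\sum_{k=1}^{n} w_k \, f_k},
    \qquad f_k = \rho^{\,n-k}, \quad 0 < \rho \le 1,

where s_k is the satisfaction of the k-th interaction, w_k its weight (importance), and f_k its fading factor, so that recent interactions influence the trust value more than older ones.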

Individual, collaborative, and pseudonym-changing attackers are studied in the experiments. The damage of collaboration and pseudospoofing depends on the attack behavior. Although recommendations are important against hypocritical and oscillatory attackers, pseudospoofers, and collaborators, they are less useful against naive and discriminatory attackers. SORT mitigated both service-based and recommendation-based attacks in most experiments. However, in extremely malicious environments such as a 50 percent malicious network, collaborators can continue to disseminate a large amount of misleading recommendations. Another issue for SORT is maintaining trust across the whole network: if a peer changes its point of attachment to the network, it might lose a part of its trust network. These issues might be studied as future work to extend the trust model.

Using trust information does not solve all security problems in P2P systems but can enhance the security and effectiveness of such systems. If interactions are modeled correctly, SORT can be adapted to various P2P applications, e.g., CPU sharing, storage networks, and P2P gaming. Defining application-specific contexts of trust and related metrics can help to assess trustworthiness in various tasks.


TABLE 5. Percentage of Service-Based Attacks Prevented for Collaborating Pseudospoofers



Ahmet Burak Can received the BS and MS degrees in computer science and engineering from Hacettepe University and the PhD degree in computer science from Purdue University, West Lafayette. Currently, he is affiliated with the Department of Computer Science and Engineering at Hacettepe University, Turkey. His main research areas include computer networks, distributed systems, and network security. His current research activities focus on trust and reputation management, anonymity protection, and incentive mechanisms in peer-to-peer systems. He is a member of the IEEE.

Bharat Bhargava is a professor in the Department of Computer Science at Purdue University. He conducts research on security and privacy issues in distributed systems. He serves on seven editorial boards of international journals. He also serves the IEEE Computer Society on the technical achievement award and fellow committees. He is the founder of the IEEE Symposium on Reliable and Distributed Systems, the IEEE conference in the Digital Library, and the ACM Conference on Information and Knowledge Management. He has been awarded the charter Gold Core Member distinction by the IEEE Computer Society for his distinguished service. He is a fellow of the IEEE and IETE.

