The Cognitive Neuroscience of Social Reasoning


The New Cognitive Neurosciences, Second Edition

Michael S. Gazzaniga, Editor-in-Chief

A Bradford Book, The MIT Press, Cambridge, Massachusetts; London, England


87  The Cognitive Neuroscience of Social Reasoning

LEDA COSMIDES AND JOHN TOOBY

LEDA COSMIDES: Center for Evolutionary Psychology and Department of Psychology, University of California, Santa Barbara, Calif.
JOHN TOOBY: Center for Evolutionary Psychology and Department of Anthropology, University of California, Santa Barbara, Calif.

ABSTRACT  Cognitive scientists need theoretical guidance that is grounded in something beyond intuition. They need evolutionary biology's "adaptationist program": a research strategy in which theories of adaptive function are key inferential tools, used to identify and investigate the design of evolved systems. Using research on how humans reason about social exchange, the authors will (1) illustrate how theories of adaptive function can generate detailed and highly testable hypotheses about the design of computational machines in the human mind and (2) review research that tests for the presence of these machines. This research suggests that the human computational architecture contains an expert system designed for reasoning about cooperation for mutual benefit, with a subroutine specialized for cheater detection.

Natural competences

Scientists have been dissecting the neural architecture of the human mind for several centuries. Dissecting its computational architecture has proven more difficult, however. Our natural competences - our abilities to see, to speak, to find someone beautiful, to reciprocate a favor, to fear disease, to fall in love, to initiate an attack, to experience moral outrage, to navigate a landscape, and myriad others - are possible only because there is a vast and heterogeneous array of complex computational machinery supporting and regulating these activities. But this machinery works so well that we do not even realize that it exists. Our intuitions blur our scientific vision. As a result, we have neglected to study some of the most interesting machinery in the human mind.

Theories of adaptive function are powerful lenses that allow one to see beyond one's intuitions. Aside from those properties acquired by chance or imposed by engineering constraint, the mind consists of a set of information-processing circuits that were designed by natural selection to solve adaptive problems that our ancestors faced, generation after generation. If we know what these problems were, we can seek mechanisms that are well engineered for solving them.

The exploration and definition of adaptive problems is a major activity in evolutionary biology. By combining results derived from mathematical modeling, comparative studies, behavioral ecology, paleoanthropology, and other fields, evolutionary biologists try to identify (1) what problems the mind was designed to solve, (2) why it was designed to solve those problems rather than other ones, and (3) what information was available in ancestral environments that a problem-solving mechanism could have used. These are the components of what Marr (1982) called a "computational theory" of an information-processing problem: a task analysis defining what a computational device does and why it does it.

Because there are multiple ways of achieving any solution, experiments are always needed to determine which algorithms and representations actually evolved to solve a particular problem. But the more precisely you can define the goal of processing - the more tightly you can constrain what would count as a solution - the more clearly you can see what a program capable of producing that solution would have to look like. The more constraints you can discover, the more the field of possible solutions is narrowed, and the more you can concentrate your experimental efforts on discriminating between viable hypotheses.

In this way, theories of adaptive problems can guide the search for the cognitive programs that solve them. Knowing what cognitive programs exist can, in turn, guide the search for their neural basis. To illustrate this approach, we will show how it guided a research program of our own on how people reason about social interactions.

Some of the most important adaptive problems our ancestors had to solve involved navigating the social world, and some of the best work in evolutionary biology is devoted to analyzing constraints on the evolution of mechanisms that solve these problems. Constructing computational theories from these constraints led us to suspect that the human cognitive architecture contains expert systems specialized for reasoning about the social world. If these exist, then their inference procedures, representational primitives, and default assumptions should reflect the structure of adaptive problems that arose when our hominid ancestors interacted with one another. Our first task analysis was of the adaptive information-processing problems entailed by the human ability to engage in social exchange.

Social exchange and conditional reasoning

In categorizing social interactions, there are two basic consequences that humans can have on each other: helping or hurting, bestowing benefits or inflicting costs. Some social behavior is unconditional: One nurses an infant without asking it for a favor in return, for example. But most social acts are delivered conditionally. This creates a selection pressure for cognitive designs that can detect and understand social conditionals reliably, precisely, and economically (Cosmides, 1985, 1989; Cosmides and Tooby, 1989, 1992). Two major categories of social conditionals are social exchange and threat - conditional helping and conditional hurting - carried out by individuals or groups on individuals or groups. We initially focused on social exchange (for review, see Cosmides and Tooby, 1992). A social exchange involves a conditional of the approximate form: If person A provides the requested benefit to or meets the requirement of person or group B, then B will provide the rationed benefit to A. (Herein, a rule expressing this kind of agreement to cooperate will be referred to as a social contract.)

We elected to study reasoning about social exchange for several reasons:

1. Many aspects of the evolutionary theory of social exchange (sometimes called cooperation, reciprocal altruism, or reciprocation) are relatively well developed and unambiguous. Consequently, certain features of the functional logic of social exchange could be confidently relied on in constructing a priori hypotheses about the structure of the information-processing procedures that this activity requires.

2. Complex adaptations are constructed in response to evolutionarily long-enduring problems. Situations involving social exchange have constituted a long-enduring selection pressure on the hominid line: Evidence from primatology and paleoanthropology suggests that our ancestors have engaged in social exchange for at least several million years.

3. Social exchange appears to be an ancient, pervasive, and central part of human social life. The universality of a behavioral phenotype is not a sufficient condition for claiming that it was produced by a cognitive adaptation, but it is suggestive. As a behavioral phenotype, social exchange is as ubiquitous as the human heartbeat. The heartbeat is universal because the organ that generates it is everywhere the same. This is a parsimonious explanation for the universality of social exchange as well: the cognitive phenotype of the organ that generates it is everywhere the same. Like the heart, its development does not seem to require environmental conditions (social or otherwise) that are idiosyncratic or culturally contingent.

4. Social exchange is relatively rare across species, however. Many species have the ability to recognize patterns (as connectionist systems do) or change their behavior in response to rewards and punishments (see chapter 81 of this volume). Yet these abilities alone are insufficient for social exchange to emerge, despite the rewards it can produce. This suggests that social exchange behavior is generated by cognitive machinery specialized for that task.¹

5. Finding procedures specialized for reasoning about social exchange would challenge a central assumption of the behavioral sciences: that the evolved architecture of the mind consists solely or predominantly of a small number of content-free, general-purpose mechanisms (Tooby and Cosmides, 1992).

Reasoning is among the most poorly understood areas in the cognitive sciences. Its study has been dominated by a pre-Darwinian view, championed by the British Empiricists and imported into the modern behavioral sciences in the form of the Standard Social Science Model (SSSM). According to this view, reasoning is accomplished by circuits designed to operate uniformly over every class of content (see chapter 81), and the mind has no content that was not derived from the perceptual data these circuits take as input. These circuits were thought to be few in number, content free, and general purpose, part of a hypothetical faculty that generates solutions to all problems: "general intelligence." Experiments were designed to reveal what computational procedures these circuits embodied; prime candidates were all-purpose heuristics and "rational" algorithms - ones that implement formal methods for inductive and deductive reasoning, such as Bayes's rule or the propositional calculus. These algorithms are jacks of all trades: Because they are content free, they can operate on information from any domain (their strength). They are also masters of none: To be content independent means that they lack any domain-specialized information that would lead to correct inferences in one domain but would not apply to others (their weakness).

This view of reasoning as a unitary faculty composed of content-free procedures is intuitively compelling to many people. But the discipline of asking what adaptive information-processing problems our minds evolved to solve changes one's scientific intuitions and sensibilities. One begins to appreciate (1) the complexity of most adaptive information-processing problems; (2) that the evolved solution to these problems is usually machinery that is well engineered for the task; (3) that this machinery is usually specialized to fit the particular nature of the problem; and (4) that its evolved design must embody "knowledge" about problem-relevant aspects of the world.

The human computational architecture can be thought of as a collection of evolved problem-solvers. Some of these may indeed embody content-free formalisms from mathematics or logic, which can act on any domain and acquire all their specific content from perceptual data alone (Gigerenzer, 1991; Brase, Cosmides, and Tooby, 1998). But many evolved problem-solvers are expert systems, equipped with "crib sheets": inference procedures and assumptions that embody knowledge specific to a given problem domain. These generate correct (or, at least, adaptive) inferences that would not be warranted on the basis of perceptual data alone. For example, there currently is at least some evidence for the existence of inference systems that are specialized for reasoning about objects (Baillargeon, 1986; Spelke, 1990), physical causality (Brown, 1990; Leslie, 1994), number (Gallistel and Gelman, 1992; Wynn, 1992, 1995), the biological world (Atran, 1990; Hatano and Inagaki, 1994; Keil, 1994; Springer, 1992), the beliefs and motivations of other individuals (Baron-Cohen, 1995; Leslie, 1987; see also chapters 85 and 86), and social interactions (Cosmides and Tooby, 1992; Fiske, 1991). These domain-specific inference systems have a distinct advantage over domain-independent ones, akin to the difference between experts and novices: Experts can solve problems faster and more efficiently than novices because they already know a lot about the problem domain.

So what design features might one expect an expert system that is well engineered for reasoning about social exchange to have?

Design features predicted by the computational theory

The evolutionary analysis of social exchange parallels the economist's concept of trade. Sometimes known as "reciprocal altruism," social exchange is an "I'll scratch your back if you scratch mine" principle. Economists and evolutionary biologists had already explored constraints on the emergence or evolution of social exchange using game theory, modeling it as a repeated Prisoner's Dilemma. Based on these analyses, and on data from paleoanthropology and primatology, we developed a computational theory (see Marr, 1982) specifying design features that algorithms capable of satisfying these constraints would have to have. For example:

1. To discriminate social contracts from threats and other kinds of conditionals, the algorithms involved would have to be sensitive to the presence of benefits and costs and be able to recognize a well-formed social contract (for a grammar of social exchange, see Cosmides and Tooby, 1989). Social exchange is cooperation for mutual benefit. The presence of a benefit is crucial for a situation to be recognized as involving social exchange. The presence of a cost is not a necessary condition: providing a benefit may cause one to incur a cost, but it need not. There must, however, be algorithms that can assess relative benefits and costs, to provide input to decision rules that cause one to accept a social contract only when the benefits outweigh the costs.

2. The game theoretic analyses indicated that social exchange cannot evolve in a species or be sustained stably in a social group unless the cognitive machinery of the participants allows a potential cooperator to detect individuals who cheat, so that they can be excluded from future interactions in which they would exploit cooperators (Axelrod, 1984; Axelrod and Hamilton, 1981; Boyd, 1988; Trivers, 1971; Williams, 1966). In this context, a cheater is an individual who accepts a benefit without satisfying the requirements that provision of that benefit was made contingent upon. This definition does not map onto content-free definitions of violation found in the propositional calculus and in most other reasoning theories (e.g., Rips, 1994; Johnson-Laird and Byrne, 1991). A system capable of detecting cheaters would need to define the concept using contentful representational primitives, referring to illicitly taken benefits. The definition also is perspective dependent because the item or action that one party views as a benefit, the other views as a requirement. Given "If you give me your watch, I'll give you $10," you would have cheated me if you took my $10 but did not give me your watch; I would have cheated you if I had taken your watch without giving you the $10. This means that the system needs to be able to compute a cost-benefit representation from the perspective of each participant, and define cheating with respect to that perspective-relative representation (a schematic sketch of such a representation appears below).

In short, what counts as cheating is so content dependent that a detection mechanism equipped with a domain-general definition of violation would not be able to solve the problem of cheater detection. Hence, an expert system designed for conditional reasoning about social exchange should have a subroutine specialized for detecting cheaters.


TABLE 87.1
Computational machinery that governs reasoning about social contracts

Design features predicted (and established)
1. It includes inference procedures specialized for detecting cheaters.
2. The cheater detection procedures cannot detect violations that do not correspond to cheating (e.g., mistakes when no one profits from the violation).
3. The machinery operates even in situations that are unfamiliar and culturally alien.
4. The definition of cheating varies lawfully as a function of one's perspective.
5. The machinery is just as good at computing the cost-benefit representation of a social contract from the perspective of one party as from the perspective of another.
6. It cannot detect cheaters unless the rule has been assigned the cost-benefit representation of a social contract.
7. It translates the surface content of situations involving the contingent provision of benefits into representational primitives, such as "benefit," "cost," "obligation," "entitlement," "intentional," and "agent."
8. It imports these conceptual primitives, even when they are absent from the surface content.
9. It derives the implications specified by the computational theory, even when these are not valid inferences of the propositional calculus (e.g., "If you take the benefit, then you are obligated to pay the cost" implies "If you paid the cost, then you are entitled to take the benefit").
10. It does not include procedures specialized for detecting altruists (individuals who have paid costs but refused to accept the benefits to which they are therefore entitled).
11. It cannot solve problems drawn from other domains (e.g., it will not allow one to detect bluffs and double-crosses in situations of threat).
12. It appears to be neurologically isolable from more general reasoning abilities (e.g., it is unimpaired in schizophrenic patients who show other reasoning deficits; Maljkovic, 1987).
13. It appears to operate across a wide variety of cultures (including an indigenous population of hunter-horticulturists in the Ecuadorian Amazon; Sugiyama, Tooby, and Cosmides, 1995).

Alternative (by-product) hypotheses eliminated
1. That familiarity can explain the social contract effect.
2. That social contract content merely activates the rules of inference of the propositional calculus.
3. That social contract content merely promotes (for whatever reason) "clear thinking."
4. That permission schema theory can explain the social contract effect.
5. That any problem involving payoffs will elicit the detection of violations.
6. That a content-independent deontic logic can explain the effect.

Based on evidence reviewed in Cosmides and Tooby, 1992.

3. Algorithms regulating social exchange should be able to operate even over unfamiliar contents and situations. Unlike other primates, who exchange only a limited array of favors (e.g., grooming, food, protection), humans trade an almost unlimited variety of goods and services. Moreover, one needs to be able to interpret each new situation that arises - not merely ones that have occurred in the past. Thus, the algorithms should be able to operate properly even in unfamiliar situations, as long as they can be interpreted as involving the conditional provision of benefits. This means the representational format of the algorithms cannot be tied to specific items with a fixed exchange rate (e.g., one could imagine the reciprocation algorithms of vampire bats, who share regurgitated blood, to specify an exchange rate in blood volume). From the surface content of a situation, the algorithms should compute an abstract level of representation, with representational primitives such as benefit (agent 1), requirement (agent 1), agent 1, agent 2, cost (agent 2), and so forth.

4. In the context of social exchange, modals such as "must" and "may" should be interpreted deontically, as referring to obligation and entitlement (rather than to necessity and possibility). As a result, cheating is taking a benefit one is not entitled to. It does not matter where terms such as "benefit taken" or "requirement not met" fall in the logical structure of a rule. In addition, there are constraints on when one should punish cheating, and by how much (Cosmides, 1985; Cosmides and Tooby, 1989).

Such analyses provided a principled basis for generating detailed hypotheses about reasoning procedures that, because of their domain-specialized structure, would be well designed for detecting social conditionals involving exchange, interpreting their meaning, and successfully solving the inference problems they pose (table 87.1). These hypotheses were tested using standard methods from cognitive psychology, as described below.
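To make the hypothesized representation concrete, here is a minimal sketch, in Python, of the kind of perspective-relative cost-benefit structure that design features 2-4 presuppose. The class, field names, and functions are illustrative assumptions of ours, not the authors' implementation; the only commitments carried over from the text are that a social contract pairs a benefit with a requirement for each party, and that cheating is accepting one's benefit without meeting the requirement it was contingent upon, judged from a particular party's perspective.

```python
from dataclasses import dataclass

# Sketch (ours) of a perspective-relative representation of the exchange
# "If you give me your watch, I'll give you $10" from design feature 2.

@dataclass
class Exchange:
    benefit_to_me: str    # what I was to receive: "your watch" (your requirement)
    benefit_to_you: str   # what you were to receive: "my $10" (my requirement)
    you_provided: bool    # did you hand over the watch?
    i_provided: bool      # did I hand over the $10?

def you_cheated_me(x: Exchange) -> bool:
    # From my perspective: you accepted the benefit owed to you (my $10)
    # without satisfying the requirement you owed me (the watch).
    return x.i_provided and not x.you_provided

def i_cheated_you(x: Exchange) -> bool:
    # From your perspective: I accepted the benefit owed to me (your watch)
    # without satisfying the requirement I owed you (the $10).
    return x.you_provided and not x.i_provided

# You took my $10 but never gave me the watch: cheating, but only from my perspective.
deal = Exchange("your watch", "my $10", you_provided=False, i_provided=True)
assert you_cheated_me(deal) and not i_cheated_you(deal)
```

The same pair of events yields different verdicts depending on whose benefit was illicitly taken, which is exactly what a single content-free definition of "violation" cannot express.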


Part of your new job for the City of Cambridge is to study the demographics of transportation. You read a previously done report on the habits of Cambridge residents that says: "If a person goes into Boston, then that person takes the subway."

The cards below have information about four Cambridge residents. Each card represents one person. One side of a card tells where a person went, and the other side of the card tells how that person got there. Indicate only those card(s) you definitely need to turn over to see if any of these people violate this rule.

FIGURE 87.1  The Wason selection task (descriptive rule, familiar content). In a Wason selection task, there is always a rule of the form If P then Q, and four cards showing the values P, not-P, Q, and not-Q (respectively) on the side that the subject can see. From a logical point of view, only the combination of P and not-Q can violate this rule, so the correct answer is to check the P card (to see whether it has a not-Q on the back), the not-Q card (to see whether it has a P on the back), and no others. Few subjects answer correctly, however, when given problems with descriptive rules, such as the problem in figure 87.1.

Computational theories (or task analyses) are important because they specify a mechanism's adaptive function: the problem it was designed by natural selection to solve. They are central to any evolutionary investigation of the mind, for a simple reason. To show that an aspect of the phenotype is an adaptation, one needs to demonstrate a fit between form and function: One needs design evidence. There are now a number of experiments on human reasoning comparing performance on tasks in which a conditional rule either did or did not express a social contract. These experiments - some of which are described below - have provided evidence for a series of domain-specific effects predicted by our analysis of the adaptive problems that arise in social exchange. Social contracts activate content-dependent rules of inference that appear to be complexly specialized for processing information about this domain. These rules include subroutines that are specialized for solving a particular problem within that domain: cheater detection. The programs involved do not operate so as to detect potential altruists (individuals who pay costs but do not take benefits), nor are they activated in social contract situations in which errors would correspond to innocent mistakes rather than intentional cheating. Nor are they designed to solve problems drawn from domains other than social exchange; for example, they do not allow one to detect bluffs and double-crosses in situations of threat, nor do they allow one to detect when a safety rule has been violated. The pattern of results elicited by social exchange content is so distinctive that we believe reasoning in this domain is governed by computational units that are domain specific and functionally distinct: what we have called social contract algorithms (Cosmides, 1985, 1989; Cosmides and Tooby, 1992).

To help readers track which hypotheses are tested by each experiment, we will refer to the list in table 87.1. For example, D1 = design feature #1 (inference procedures specialized for detecting cheaters); B1 = by-product hypothesis #1 (that familiarity can explain the social contract effect).

Tests with the Wason selection task

To test for the presence of the design features predicted, we used an experimental paradigm called the Wason selection task (Wason, 1966; Wason and Johnson-Laird, 1972). For more than 30 years, psychologists have been using this paradigm (which was originally developed as a test of logical reasoning) to probe the structure of human reasoning mechanisms. In this task, the subject is asked to look for violations of a conditional rule of the form If P then Q. Consider the Wason selection task presented in figure 87.1.

From a logical point of view, the rule has been violated whenever someone goes to Boston without taking the subway. Hence, the logically correct answer is to turn over the Boston card (to see if this person took the subway) and the cab card (to see if the person taking the cab went to Boston). More generally, for a rule of the form If P then Q, one should turn over the cards that represent the values P (a true antecedent) and not-Q (a false consequent).
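As a concrete illustration of this content-free analysis, here is a minimal sketch of ours (not the authors' code). The card values follow the transportation problem; the non-Boston destination is an illustrative stand-in for the not-P card, which the transcript does not name.

```python
# Wason selection task, descriptive rule: "If a person goes into Boston (P),
# then that person takes the subway (Q)."  Each card shows one fact about a
# person; the complementary fact is hidden on the other side.
CARDS = [
    ("destination", "Boston"),      # P card
    ("destination", "Somerville"),  # not-P card (illustrative value)
    ("transport", "subway"),        # Q card
    ("transport", "cab"),           # not-Q card
]

def worth_turning(card) -> bool:
    """Content-free check: a card matters only if its hidden side could
    complete the falsifying combination P and not-Q."""
    kind, value = card
    if kind == "destination":
        return value == "Boston"    # the back might say "cab"
    return value != "subway"        # the back might say "Boston"

print([value for kind, value in CARDS if worth_turning((kind, value))])
# -> ['Boston', 'cab']
```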

If the human mind develops reasoning procedures specialized for detecting logical violations of conditional rules, this would be intuitively obvious. But it is not. In general, fewer than 25% of subjects spontaneously make this response. Moreover, even formal training in logical reasoning does little to boost performance on descriptive rules of this kind (Cheng et al., 1986; Wason and Johnson-Laird, 1972). Indeed, a large literature exists that shows that people are not very good at detecting logical violations of if-then rules in Wason selection tasks, even when these rules deal with familiar content drawn from everyday life (Manktelow and Evans, 1979; Wason, 1983).

The Wason selection task provided an ideal tool for testing hypotheses about reasoning specializations designed to operate on social conditionals, such as social exchanges, threats, permissions, obligations, and so on, because (1) it tests reasoning about conditional rules, (2) the task structure remains constant while the content of the rule is changed, (3) content effects are easily elicited, and (4) there was already a body of existing experimental results against which performance on new content domains could be compared.

For example, to show that people who ordinarily cannot detect violations of conditional rules can do so when that violation represents cheating on a social contract would constitute initial support for the view that people have cognitive adaptations specialized for detecting cheaters in situations of social exchange. To find that violations of conditional rules are spontaneously detected when they represent bluffing on a threat would, for similar reasons, support the view that people have reasoning procedures specialized for analyzing threats. Our general research plan has been to use subjects' inability to spontaneously detect violations of conditionals expressing a wide variety of contents as a comparative baseline against which to detect the presence of performance-boosting reasoning specializations. By seeing which content manipulations switch on or off high performance, the boundaries of the domains within which reasoning specializations successfully operate can be mapped.

The results of these investigations were striking. People who ordinarily cannot detect violations of if-then rules can do so easily and accurately when that violation represents cheating in a situation of social exchange (Cosmides, 1985, 1989; Cosmides and Tooby, 1989, 1992). Given a rule of the general form, "If you take benefit B, then you must satisfy requirement R," subjects choose the benefit accepted card and the requirement not met card - the cards that represent potential cheaters. The adaptively correct answer is immediately obvious to almost all subjects, who commonly experience a "pop out" effect. No formal training is needed. Whenever the content of a problem asks one to look for cheaters in a social exchange, subjects experience the problem as simple to solve, and their performance jumps dramatically. In general, 65% to 80% of subjects get it right, the highest performance found for a task of this kind (supports D1).

This is true for familiar social contracts, such as, "If a person drinks beer, then that person must be over 19 years old" (Griggs and Cox, 1982; Cosmides, 1985). According to our computational theory, however, performance also should be high for unfamiliar ones - such as, "If a man eats cassava root, then he must have a tattoo on his face," where cassava root is portrayed as a highly desirable aphrodisiac and having a facial tattoo is a sign of being married. As figure 87.2A shows, this is true. Subjects choose the benefit accepted card (e.g., "ate cassava root") and the requirement not met card (e.g., "no tattoo") for any social conditional that can be interpreted as a social contract and in which looking for violations can be interpreted as looking for cheaters (supports D1, D3, D7, D8; disconfirms B1; Cosmides, 1985, 1989; Gigerenzer and Hug, 1992; Platt and Griggs, 1993). Indeed, familiarity did not help at all. Cosmides (1985) found that performance was just as high on unfamiliar as on familiar social contracts - an uncomfortable result for any explanation that invokes domain-general learning, which depends on familiarity and repetition.

FIGURE 87.2  Detecting violations of unfamiliar conditional rules: social contracts versus descriptive rules. In these experiments, the same, unfamiliar rule was embedded either in a story that caused it to be interpreted as a social contract or in a story that caused it to be interpreted as a rule describing some state of the world. For social contracts, the correct answer is always to pick the benefit accepted card and the requirement not met card. (A) For standard social contracts, these correspond to the logical categories P and not-Q; P and not-Q also happens to be the logically correct answer. More than 70% of subjects chose these cards for the social contracts, but fewer than 25% chose them for the matching descriptive rules. (B) For switched social contracts, the benefit accepted and requirement not met cards correspond to the logical categories Q and not-P. This is not a logically correct response. Nevertheless, approximately 70% of subjects chose it for the social contracts; virtually no one chose it for the matching descriptive rule. (Experiments 1 and 3: social contract presented as a social rule; experiments 2 and 4: social contract presented as a personal exchange.)

From a domain-general, formal view, investigating men eating cassava root and men without tattoos is logically equivalent to investigating people going to Boston and people taking cabs. But everywhere it has been tested (adults in the United States, United Kingdom, Germany, Italy, France, Hong Kong, and Japan; schoolchildren in Ecuador; Shiwiar hunter-horticulturists in the Ecuadorian Amazon; Sugiyama, Tooby, and Cosmides, 1995), people do not treat social exchange problems as equivalent to other kinds of reasoning problems. Their minds distinguish social exchange contents and reason as if they were translating these situations into representational primitives such as "benefit," "cost," "obligation," "entitlement," "intentional," and "agent" (Cheng and Holyoak, 1985; Cosmides, 1989; Platt and Griggs, 1993; supports D13). Indeed, the relevant inference procedures are not activated unless the subject has represented the situation as one in which one is entitled to a benefit only if one has satisfied a requirement.

Do social contracts simply activate content-free logical rules?

The procedures activated by social contract rules do not behave as if they were designed to detect logical violations per se; instead, they prompt choices that track what would be useful for detecting cheaters, regardless of whether this happens to correspond to the logically correct selections. For example, by switching the order of requirement and benefit within the if-then structure of the rule (figure 87.3), one can elicit responses that are functionally correct from the point of view of cheater detection, but logically incorrect. Subjects choose the benefit accepted card and the requirement not met card - the adaptively correct response if one is looking for cheaters - no matter what logical category these cards fall into (figure 87.2B; supports D9, D8; disconfirms B2, B3, B1). This means that cheater detection is not accomplished by procedures embodying the content-free rules of logical inference. If it were, then subjects would choose the logically correct response - P and not-Q - even on switched rules, where these represent the requirement met card (P) and the benefit not accepted card (not-Q). Yet these represent people who cannot possibly have cheated. A person who has met a requirement without accepting the benefit this entitles one to is either an altruist or a fool, but not a cheater.
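The dissociation between the two predictions can be made concrete with a short sketch (ours, for exposition; the card wording and the two "detector" functions are illustrative). It uses the $10/watch exchange analyzed in figure 87.3: the same four cards occupy different logical categories depending on which phrasing of the same contract is used, so a content-free P-and-not-Q rule and a benefit-accepted/requirement-not-met rule agree only on the standard phrasing.

```python
# Cheater detection vs. logical violation detection on a social contract,
# following the $10/watch example of figure 87.3.  Sketch only.

STANDARD = "If I give you $10 (P), then you give me your watch (Q)"
SWITCHED = "If you give me your watch (P), then I'll give you $10 (Q)"

# Each card records an event plus the logical category it occupies under
# each phrasing of the (same) exchange.
CARDS = [
    # event                           standard   switched
    ("you took my $10",               "P",       "Q"),      # benefit (to you) accepted
    ("you did not take my $10",       "not-P",   "not-Q"),
    ("you gave me your watch",        "Q",       "P"),      # requirement (on you) met
    ("you did not give me the watch", "not-Q",   "not-P"),  # requirement not met
]

def logical_detector(version: str):
    """Content-free check: pick the P and not-Q cards under the given phrasing."""
    col = 1 if version == "standard" else 2
    return [c[0] for c in CARDS if c[col] in ("P", "not-Q")]

def cheater_detector():
    """Content-sensitive check: pick 'benefit accepted' and 'requirement not met',
    whatever logical categories those cards happen to occupy."""
    return [CARDS[0][0], CARDS[3][0]]

print(logical_detector("standard"))  # ['you took my $10', 'you did not give me the watch']
print(logical_detector("switched"))  # ['you did not take my $10', 'you gave me your watch']
print(cheater_detector())            # ['you took my $10', 'you did not give me the watch']
```

On the switched phrasing, the content-free check selects exactly the two people who cannot possibly have cheated, which is why switched rules separate the two hypotheses so cleanly.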

That content-free rules of logic are not responsible for cheater detection was demonstrated in a different way by Gigerenzer and Hug (1992) in experiments designed to test the prediction that what counts as cheating should depend on one's perspective (D4, D5, B2, B4). Subjects were asked to look for violations of rules such as, "If a previous employee gets a pension from a firm, then that person must have worked for the firm for at least 10 years." For half of them, the surrounding story cued the subjects into the role of the employer, whereas for the other half, it cued them into the role of an employee.

If social contracts activate rules for detecting logical violations, then this manipulation should make no difference. Perspectives, such as that of employer versus employee, play no role in formal logic: To detect a logical violation, the P card ("employee got the pension") and the not-Q card ("employee worked for less than 10 years") should be chosen, no matter whose perspective you have taken. But in social contract theory, perspective matters. Because these two cards represent instances in which the employer may have been cheated, a cheater detection subroutine should choose them in the employer condition. But in the employee condition, the same cheater detection procedures ought to draw attention to situations in which an employee might have been cheated, that is, the "employee worked for more than 10 years" card (Q) and the "employee got no pension" card (not-P). In the employee condition, Q and not-P is logically incorrect, but adaptively correct.

The results confirmed the social contract prediction. In the employer condition, 73% of subjects chose P and not-Q (which is both logically and adaptively correct), and less than 1% chose Q and not-P (which is both logically and adaptively incorrect). But in the employee condition, 66% chose Q and not-P - which is logically incorrect but adaptively correct - and only 8% chose P and not-Q, which is logically correct, but adaptively incorrect (supports D4, D7, D9; disconfirms B2, B4). Furthermore, the percent of subjects choosing the correct cheater detection response did not differ significantly in these two conditions (73% vs. 66%), indicating that it was just as easy for subjects to compute a cost-benefit representation from the perspective of the employer as from that of the employee (supports D5).

Consider the following rule:
Standard version: If you take the benefit, then you meet the requirement (e.g., "If I give you $10, then you give me your watch"). [If P then Q]
Switched version: If you meet the requirement, then you take the benefit (e.g., "If you give me your watch, then I'll give you $10"). [If P then Q]
Logical categories of the same four cards - Standard: P, not-P, Q, not-Q; Switched: Q, not-Q, P, not-P.

FIGURE 87.3  Generic structure of a social contract. A cheater detection subroutine always would cause one to look at instances in which the potential violator has taken the benefit and not met the requirement. But whether this corresponds to the logically correct answer (P and not-Q) depends on exactly how the exchange was expressed. The same social contract has been offered to you, whether the offerer says, "If I give you $10, then you give me your watch" (standard) or "If you give me your watch, I'll give you $10" (switched). But the benefit to you (getting the $10) is in the P clause in the standard version and in the Q clause in the switched version. Likewise, your not meeting the requirement (not giving me the watch) would correspond to the logical category not-Q in the standard version and not-P in the switched version. By always choosing the benefit accepted and requirement not met cards, a cheater detection procedure would cause one to choose P and not-Q in the standard version - a logically correct response - and Q and not-P in the switched version - a logically incorrect response. By testing switched social contracts, one can see that the reasoning procedures activated cause one to detect cheaters, not logical violations.

Are cheater detection procedures part of "general intelligence"?

Data from Maljkovic (1987) show that the ability to detect cheaters can remain intact even in individuals suffering large impairments in their more general intellectual functioning. Maljkovic tested the reasoning of patients suffering from positive symptoms of schizophrenia, comparing their performance to that of hospitalized controls. The schizophrenic patients were indeed impaired on more general tests of logical reasoning, in a way typical of individuals with frontal lobe dysfunction. But their ability to detect cheaters on Wason selection tasks was unimpaired (supports D1, D12; disconfirms B1, B3). This is all the more remarkable given that schizophrenics usually show impairments on virtually any test of intellectual functioning that they are given (McKenna, Clare, and Baddeley, 1995). Maljkovic argues that a dissociation of this kind is what one would expect if social exchange, which is a long-enduring adaptive problem, is generated by mechanisms in more evolutionarily ancient parts of the brain than the frontal lobes. Whether this conjecture is true or not, her results indicate that the algorithms responsible for cheater detection are different from those responsible for performance on the more general logical reasoning tasks these patients were given.

Are these effects produced by permission schemas?

The human cognitive phenotype has many features that appear to be complexly specialized for solving the adaptive problems that arise in social exchange. But demonstrating this is not sufficient for claiming that these features are cognitive adaptations for social exchange. One also needs to show that these features are not more parsimoniously explained as the by-product of mechanisms designed to solve some other adaptive problem or class of problems.

TABLE 87.2
The permission schema is composed of four production rules

Rule 1: If the action is to be taken, then the precondition must be satisfied.
Rule 2: If the action is not to be taken, then the precondition need not be satisfied.
Rule 3: If the precondition is satisfied, then the action may be taken.
Rule 4: If the precondition is not satisfied, then the action must not be taken.

From Cheng and Holyoak, 1985.

For example, Cheng and Holyoak (1985, 1989) also invoke content-dependent computational mechanisms to explain reasoning performance that varies across domains. But they attribute performance on social contract rules to the operation of a permission schema (and/or an obligation schema; these do not lead to different predictions on the kinds of rules usually tested; see Cosmides, 1989), which operates over a larger class of problems. They propose that this schema consists of four production rules (table 87.2) and that their scope is any permission rule, that is, any conditional rule to which the subject assigns the following abstract representation: "If action A is to be taken, then precondition P must be satisfied." All social contracts are permission rules, but not all permission rules are social contracts.

The conceptual primitives of a permission schema have a larger scope than those of social contract algorithms. For example, a "benefit taken" is a kind of "action taken," and a "cost paid" (i.e., a benefit offered in exchange) is a kind of "precondition satisfied." They take evidence that people are good at detecting violations of precaution rules - rules of the form, "If hazardous action H is taken, then precaution P must be met" - as evidence for their hypothesis (on precautions, see Manktelow and Over, 1988, 1990). After all, a precaution rule is a kind of permission rule, but it is not a kind of social contract. We, however, have hypothesized that reasoning about precaution rules is governed by a functionally specialized inference system that differs from social contract algorithms and operates independently of them (Cosmides and Tooby, 1992, 1997; Fiddick, 1998; Fiddick, Cosmides, and Tooby, 1995; figure 87.4).

FIGURE 87.4  All social contracts are permission rules; all precaution rules are permission rules; but not all permission rules are social contracts or precautions. Many permission rules that have been tested fall into the white area (neither social contracts nor precautions): these do not elicit the high levels of performance that social contracts and precaution rules do. This argues against permission schema theory. By testing patients with focal brain damage and through priming studies, one can see whether reasoning about social contracts and precautions can be dissociated. These methods allow one to determine whether reasoning about social contracts and precaution rules is generated by one permission schema, or by two separate domain-specific mechanisms.

In other words, there are two competing proposals for how the computational architecture that causes reasoning in these domains should be dissected. Several lines of evidence speak to these competing claims.

ARE BENEFITS NECESSARY?  According to the grammar of social exchange, a rule is not a social contract unless it contains a benefit to be taken. Transformations of input should not matter, as long as the subject continues to represent an action or state of affairs as beneficial to the potential violator and the violator as illicitly obtaining this benefit. The corresponding argument of the permission schema - an action to be taken - has a larger scope: Not all "actions taken" are "benefits taken." If this construal of the rule's representational structure is correct, then the behavior of the reasoning system should be invariant over transformations of input that preserve it. But it is not. For example, consider two rules: (1) "If one goes out at night, then one must tie a small piece of red volcanic rock around one's ankle" and (2) "If one takes out the garbage at night, then one must tie a small piece of red volcanic rock around one's ankle." Most undergraduate subjects perceive the action to be taken in (1) - going out at night - as a benefit, and 80% of them answered correctly. But when one substitutes a different action - taking out the garbage - into the same place in the argument structure, then performance drops to 44% (supports D6, D7; disconfirms B4, B6; Cosmides and Tooby, 1992). This transformation of input preserves the action to be taken representational structure, but it does not preserve the benefit to be taken representational structure - most people think of taking out the garbage as a chore, not a benefit. If the syntax of the permission schema were correct, then performance should be invariant over this transformation. But a drop in performance is expected if the syntax of the social contract algorithms is correct.

We have been doing similar experiments with precaution rules (e.g., "If you make poison darts, then you must wear rubber gloves"). All precaution rules are permission rules (but not all permission rules are precaution rules). We have been finding that the degree of hazard does not affect performance, but the nature of the precaution does - even though all the precautions taken are instances of preconditions satisfied. Performance drops when the precaution is not perceived as a good safeguard given the hazard specified (Rutherford, Tooby, and Cosmides, 1996). This is what one would expect if the syntax of the rules governing reasoning in this domain takes representations such as facing a hazard and precaution taken; it is not what one would expect if the representations were action taken and precondition satisfied.

DOES THE VIOLATION HAVE TO BE CHEATING?  By hypothesis, social contract algorithms contain certain conceptual primitives that the permission schema lacks. For example, cheating is taking a benefit that one is not entitled to; we have proposed that social contract algorithms have procedures that are specialized for detecting cheaters. This conceptual primitive plays no role in the operation of the permission schema. For this schema, whenever the action has been taken but the precondition has not been satisfied, a violation has occurred. People should be good at detecting violations, whether that violation counts as cheating (the benefit has been illicitly taken by the violator) or a mistake (the violator does not get the benefit stipulated in the rule).


Given the same social contract rule, one can manipulate contextual factors to change the nature of the violation from cheating to a mistake. When we did this, performance changed radically, from 68% correct in the cheating condition to 27% correct in the mistake condition (supports D2; disconfirms B1-B6). Gigerenzer and Hug (1992) found the same drop in response to a similar context manipulation.

In bargaining games, experimental economists have found that subjects are twice as likely to punish defections when it is clear that the defector intended to cheat as when the defector is a novice who might have simply made a mistake (Hoffman, McCabe, and Smith, 1997). This provides interesting convergent evidence, using entirely different methods, for a conceptual distinction between mistakes and cheating, where intentionality also plays a role.

IS REASONING ABOUT SOCIAL CONTRACTS AND PRECAUTIONS GENERATED BY ONE MECHANISM OR TWO?  If reasoning about social contracts and precautions is caused by one and the same mechanism - a permission schema - then neurological damage to this schema should lower performance on both rules equally. But if reasoning about these two domains is caused by two, functionally distinct mechanisms, then one could imagine neurological damage to the social contract algorithms that leaves the precaution mechanisms unimpaired, and vice versa. Stone and colleagues (Stone, Cosmides, and Tooby, 1996; Stone, Cosmides, and Tooby, forthcoming; Stone et al., 1997) tested R. M., a patient with bilateral damage to his orbitofrontal and anterior temporal cortex, as well as to the left amygdala, on social contract and precaution problems that had been matched for difficulty on normal controls (who got 71% and 73% correct, respectively). R. M.'s performance on the precaution problems was 70% correct: equivalent to that of the normal controls. In contrast, his performance on the social contract problems was only 39% correct. This is a marked impairment, whether compared to the normal controls or to his own performance on the precaution problems (figure 87.5). R. M.'s difference score - his percent correct for precautions minus his percent correct for social contracts - was 31 percentage points. In contrast, the difference scores for individual normal subjects were all close to zero (mean = 0.14 percentage points). If reasoning on both social contracts and precautions were caused by a single mechanism - whether a permission schema or anything else - then one would not be able to find individuals who perform well on one class of content but not on the other. This pattern of results is best explained by the hypothesis that reasoning about these two types of content is governed by two separate mechanisms.

FIGURE 87.5  Performance of R. M. versus normal controls: precautions versus social contracts. Selective impairment of social contract reasoning in R. M., a patient with bilateral damage to the orbitofrontal and anterior temporal lobes and damage to the left amygdala. R. M. reasons normally when asked to look for violations of precaution rules but has difficulty when asked to look for (logically isomorphic) violations of social contracts (i.e., detecting cheaters). The y-axis plots the difference between R. M.'s percent correct and the mean performance of 37 normal controls. (Data from Stone, Cosmides, and Tooby, 1996.)
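The statistic behind this dissociation argument is simple; here is a minimal sketch using the figures reported above (the function is ours, for exposition).

```python
def difference_score(precaution_pct: float, social_contract_pct: float) -> float:
    """Percent correct on precaution problems minus percent correct on
    matched social contract problems."""
    return precaution_pct - social_contract_pct

print(difference_score(70, 39))   # R. M.: 31 percentage points
# Individual normal controls all scored near zero (mean = 0.14 points); a single
# permission schema predicts that R. M. should look like the controls.
```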

Although tests of this kind cannot conclusively establish the anatomical location of a mechanism, tests with other patients suggest that amygdalar damage was important in creating this selective deficit. Stone and associates tested two other patients who had no damage to the amygdala. One had bilateral orbitofrontal and anterior temporal damage; the other had bilateral anterior temporal damage, but no orbitofrontal damage. Neither patient exhibited a selective deficit; indeed, both scored extremely high on both classes of problems.

Convergent evidence for the single dissociation found by Stone and colleagues comes from a study by Fiddick and associates (Fiddick, Cosmides, and Tooby, 1995; Fiddick, 1998). Using a priming paradigm, they produced a functional dissociation in reasoning in normal, brain-intact subjects. Indeed, they were able to produce a double dissociation between social contract and precaution reasoning. More specifically, they found that: (1) when the problem used as a prime involved a clear social contract rule, performance on the target problem - an ambiguous social contract - increased. Moreover, this was due to the activation of social contract categories, not logical ones: when the prime was a switched social contract, in which the correct cheater detection answer is not the logically correct answer (see figure 87.3), subjects matched their answers on the target to the prime's benefit/requirement categories, not its logical categories. (2) When the prime was a clear precaution rule, performance on an ambiguous precaution target increased. Most importantly, (3) these effects were caused by the operation of two mechanisms, rather than one: the precaution prime produced little or no increase in performance on an ambiguous social contract target; similarly, the social contract prime produced little or no increase in performance on an ambiguous precaution target.

This should not happen if permission schema theory were correct. In that view, it should not matter which rule is used as a prime because the only way in which social contracts and precautions can affect the interpretation of ambiguous rules is through activating the more general permission schema. Because both types of rules strongly activate this schema, an ambiguous target should be primed equally by either one.

Conclusion

Many cognitive scientists believe that theories of adaptive function are an explanatory luxury - fanciful, unfalsifiable post-hoc speculations that one indulges in at the end of a project, after the hard work of experimentation has been done. Nothing could be farther from the truth. By using a computational theory specifying the adaptive problems entailed by social exchange, we and our colleagues were able to predict, in advance, that certain very subtle changes in content and context would produce dramatic differences in how people reason. Without this theory, it is unlikely that this very precise series of effects would have been found. Even if someone stumbled upon a few of them, it is unlikely that their significance would have been recognized. Indeed, the situation would be similar to that in 1982, when we started this work: cognitive scientists could not understand why some familiar content - such as the drinking-age problem - produced "logical" reasoning whereas other familiar content - such as the transportation problem - did not (Griggs and Cox, 1982). By applying the adaptationist program, we were able to explain what was already known and to discover design features that no one had thought to test for before.

To isolate a functionally organized mechanism within a complex system, one needs a theory of what function that mechanism was designed to perform. The goal of cognitive neuroscience is to dissect the computational architecture of the human mind into functional units. The adaptationist program is cognitive neuroscience's best hope for achieving this goal.

ACKNOWLEDGMENTS  The authors gratefully acknowledge the financial support of the James S. McDonnell Foundation, the National Science Foundation (NSF Grant BNS9157-449 to John Tooby), and a Research Across Disciplines grant (Evolution and the Social Mind) from the UCSB Office of Research.

NOTE

1. Its relative rarity also suggests that this machinery is unlikely to evolve unless certain other cognitive capacities - components of a theory of mind, perhaps - are already in place.

REFERENCES

ATRAN, S., 1990. The Cognitive Foundations of Natural History. New York: Cambridge University Press.
AXELROD, R., 1984. The Evolution of Cooperation. New York: Basic Books.
AXELROD, R., and W. D. HAMILTON, 1981. The evolution of cooperation. Science 211:1390-1396.
BAILLARGEON, R., 1986. Representing the existence and the location of hidden objects: Object permanence in 6- and 8-month-old infants. Cognition 23:21-41.
BARON-COHEN, S., 1995. Mindblindness: An Essay on Autism and Theory of Mind. Cambridge, Mass.: MIT Press.
BOYD, R., 1988. Is the repeated prisoner's dilemma a good model of reciprocal altruism? Ethol. Sociobiol. 9:211-222.
BRASE, G., L. COSMIDES, and J. TOOBY, 1998. Individuation, counting, and statistical inference: The role of frequency and whole object representations in judgment under uncertainty. J. Exp. Psychol. Gen. 127:1-19.
BROWN, A., 1990. Domain-specific principles affect learning and transfer in children. Cogn. Sci. 14:107-133.
CHENG, P., and K. HOLYOAK, 1985. Pragmatic reasoning schemas. Cogn. Psychol. 17:391-416.
CHENG, P., and K. HOLYOAK, 1989. On the natural selection of reasoning theories. Cognition 33:285-313.
CHENG, P., K. HOLYOAK, R. NISBETT, and L. OLIVER, 1986. Pragmatic versus syntactic approaches to training deductive reasoning. Cogn. Psychol. 18:293-328.
COSMIDES, L., 1985. Deduction or Darwinian Algorithms? An Explanation of the "Elusive" Content Effect on the Wason Selection Task. Doctoral dissertation, Department of Psychology, Harvard University, Cambridge, Mass., University Microfilms #86-02206.
COSMIDES, L., 1989. The logic of social exchange: Has natural selection shaped how humans reason? Studies with the Wason selection task. Cognition 31:187-276.
COSMIDES, L., and J. TOOBY, 1989. Evolutionary psychology and the generation of culture, part II. Case study: A computational theory of social exchange. Ethol. Sociobiol. 10:51-97.
COSMIDES, L., and J. TOOBY, 1992. Cognitive adaptations for social exchange. In The Adapted Mind, J. Barkow, L. Cosmides, and J. Tooby, eds. New York: Oxford University Press, pp. 163-228.
COSMIDES, L., and J. TOOBY, 1997. Dissecting the computational architecture of social inference mechanisms. In Characterizing Human Psychological Adaptations (Ciba Foundation Symposium #208). Chichester: Wiley, pp. 132-156.
FIDDICK, L., 1998. The Deal and the Danger: An Evolutionary Analysis of Deontic Reasoning. Doctoral dissertation, University of California, Santa Barbara, Calif.
FIDDICK, L., L. COSMIDES, and J. TOOBY, 1995. Priming Darwinian Algorithms: Converging Lines of Evidence for Domain-Specific Inference Modules. Presented at the Seventh Annual Meeting of the Human Behavior and Evolution Society, Santa Barbara, Calif., June 28-July 2, 1995.
FISKE, A., 1991. Structures of Social Life: The Four Elementary Forms of Human Relations. New York: Free Press.
GALLISTEL, C. R., and R. GELMAN, 1992. Preverbal and verbal counting and computation. Cognition 44:43-74.
GIGERENZER, G., 1991. How to make cognitive illusions disappear: Beyond heuristics and biases. Eur. Rev. Soc. Psychol. 2:83-115.
GIGERENZER, G., and K. HUG, 1992. Domain-specific reasoning: Social contracts, cheating and perspective change. Cognition 43:127-171.
GRIGGS, R., and J. COX, 1982. The elusive thematic-materials effect in Wason's selection task. Br. J. Psychol. 73:407-420.
HATANO, G., and K. INAGAKI, 1994. Young children's naive theory of biology. Cognition 50:171-188.
HOFFMAN, E., K. MCCABE, and V. SMITH, 1997. Behavioral foundations of reciprocity: Experimental economics and evolutionary psychology. Economic Science Laboratory, University of Arizona. To appear in Economic Inquiry.
JOHNSON-LAIRD, P., and R. BYRNE, 1991. Deduction. Hillsdale, N.J.: Erlbaum.
KEIL, F., 1994. The birth and nurturance of concepts by domain: The origins of concepts of living things. In Mapping the Mind: Domain Specificity in Cognition and Culture, L. Hirschfeld and S. Gelman, eds. New York: Cambridge University Press, pp. 234-254.
LESLIE, A., 1987. Pretense and representation: The origins of "theory of mind." Psychol. Rev. 94:412-426.
LESLIE, A., 1994. ToMM, ToBY, and agency: Core architecture and domain specificity. In Mapping the Mind: Domain Specificity in Cognition and Culture, L. Hirschfeld and S. Gelman, eds. New York: Cambridge University Press, pp. 119-148.
MALJKOVIC, V., 1987. Reasoning in Evolutionarily Important Domains and Schizophrenia: Dissociation Between Content-Dependent and Content-Independent Reasoning. Unpublished undergraduate honors thesis, Department of Psychology, Harvard University, Cambridge, Mass.
MANKTELOW, K., and D. OVER, 1988. Sentences, Stories, Scenarios, and the Selection Task. Presented at the First International Conference on Thinking, Plymouth, U.K., July 1988.
MANKTELOW, K., and D. OVER, 1990. Deontic thought and the selection task. In Lines of Thinking, Vol. 1, K. J. Gilhooly, M. T. G. Keane, R. H. Logie, and G. Erdos, eds. London: Wiley.
MANKTELOW, K. I., and J. ST. B. T. EVANS, 1979. Facilitation of reasoning by realism: Effect or non-effect? Br. J. Psychol. 70:477-488.
MARR, D., 1982. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco: Freeman.
MCKENNA, P., L. CLARE, and A. BADDELEY, 1995. Schizophrenia. In Handbook of Memory Disorders, A. D. Baddeley, B. A. Wilson, and F. N. Watts, eds. New York: Wiley.
PLATT, R., and R. GRIGGS, 1993. Darwinian algorithms and the Wason selection task: A factorial analysis of social contract selection task problems. Cognition 48:163-192.
RIPS, L., 1994. The Psychology of Proof. Cambridge, Mass.: MIT Press.
RUTHERFORD, M., J. TOOBY, and L. COSMIDES, 1996. Adaptive Sex Differences in Reasoning About Self-Defense. Presented at the Eighth Annual Meeting of the Human Behavior and Evolution Society, Northwestern University, Ill., June 26-30, 1996.
SPELKE, E., 1990. Principles of object perception. Cogn. Sci. 14:29-56.
SPRINGER, K., 1992. Children's awareness of the implications of biological kinship. Child Dev. 63:950-959.
STONE, V. E., S. BARON-COHEN, L. COSMIDES, J. TOOBY, and R. T. KNIGHT, 1997. Selective impairment of social inference abilities following orbitofrontal cortex damage. In Proceedings of the Nineteenth Annual Conference of the Cognitive Science Society, M. G. Shafto and P. Langley, eds. London: Lawrence Erlbaum, p. 1062.
STONE, V., L. COSMIDES, and J. TOOBY, 1996. Selective Impairment of Cheater Detection: Neurological Evidence for Adaptive Specialization. Presented at the Eighth Annual Meeting of the Human Behavior and Evolution Society, Northwestern University, Ill., June 26-30, 1996.
STONE, V., L. COSMIDES, and J. TOOBY, forthcoming. Selective impairment of social reasoning in an orbitofrontal and anterior temporal patient.
SUGIYAMA, L., J. TOOBY, and L. COSMIDES, 1995. Cross-Cultural Evidence of Cognitive Adaptations for Social Exchange among the Shiwiar of Ecuadorian Amazonia. Presented at the Seventh Annual Meeting of the Human Behavior and Evolution Society, University of California, Santa Barbara, Calif., June 28-July 2, 1995.
TOOBY, J., and L. COSMIDES, 1992. The psychological foundations of culture. In The Adapted Mind, J. Barkow, L. Cosmides, and J. Tooby, eds. New York: Oxford University Press, pp. 19-136.
TRIVERS, R., 1971. The evolution of reciprocal altruism. Q. Rev. Biol. 46:35-57.
WASON, P., 1966. Reasoning. In New Horizons in Psychology, B. M. Foss, ed. Harmondsworth: Penguin.
WASON, P., 1983. Realism and rationality in the selection task. In Thinking and Reasoning: Psychological Approaches, J. St. B. T. Evans, ed. London: Routledge, pp. 44-75.
WASON, P., and P. JOHNSON-LAIRD, 1972. The Psychology of Reasoning: Structure and Content. Cambridge, Mass.: Harvard University Press.
WILLIAMS, G., 1966. Adaptation and Natural Selection. Princeton: Princeton University Press.
WYNN, K., 1992. Addition and subtraction by human infants. Nature 358:749-750.
WYNN, K., 1995. Origins of numerical knowledge. Math. Cogn. 1:35-60.

    U?T,., and R. GR IGGS, 1993. Darw inian algo rithms andthe Wason selection task: A factorial analysis of social con-tract selection task pro blems. Cognition48: 163-192.RIPS,L., 1994. The Psychology of Proof: Cambridge, Mass.: M ITPress.RUTHERFOR D, .,J. TOOBY, nd L. CO SMIDE S, 996. AdaptiveSex Differences in Reasoning About Sel f-D efm e. Presented at theEighth Annual Meeting of the Human Behavior and Evolu-tion S ociety, Northwestern U niversity, Ill., June 26-30,1996.SPELKE, ., 1990. Principles of object perception. Cogn. Sci.14:29-56.SPRINGER,K., 1992. Child ren's awa rene ss of the implicationsof biological kinship. Child Dev. 63:950-959.STONE,V. E., S. BARON-COHEN,. COSMIDES,. ~ Q o B Y ,ndR. T. KN IGH T, 997. Selective impairment of social inferen ceabilities following orbitofrontal cortex damag e. I n Proceedingsof the Nineteenth Annual Conference of the Cognitive Science Soci-ety, M. G. Shafto and P. Langley, eds. London: LawrenceErlbaum, p. 1062.STONE,V., L. COSMIDES,nd J. TOOBY, 996. Selective Impair-ment of Cheater Detection: Neurological Evidence for Adaptive Spe-cializiztion Presented at the Eighth Annual Meeting of theHuman Behavior and Evolution Society, Northwestern Uni-versity, Ill., Jun e 26-30, 1996.STONE,V., L. COSMIDES,nd J. TOOBY, orthcoming. S electiveimpairm ent of social reasoning in an orbitofrontal and an te-rior tem poral patient.SUGIY AMA , ., J. TOOBY, and L. COSM IDES,1995. Cross-Cultural Evidence o f Cogni tive Adaptations for Social Exchangeamong the Shiwiar ofEcuadorian Ama~onia. resented at theSeventh Annual Meetings of the Human Behavior and Evo-lution Society, University of California, Santa Barbara.Calif.,Ju ne 28-July 2, 1995.TOOBY, ., and L. COSMIDES, 1992. The psychological foun-dations of culture. In The Adapted Mind, J. Barkow, L.Cosmides, and J. Tooby, eds. New York: Oxford UniversityPress, pp. 19-36.T~VERS , ., 1971. The evolution of reciprocal altruism. Q:Rev.Biol. 46 33-57.WASON , ., 1983. Realism and rationality in the selection task.In Thinking and Reasoning: Psychological Approaches,J. St. B. T.Evans, ed. London : Routledge, pp. 44-75.WASON,P., 1966. Reason ing. In New Horizons in Psychology,B . M. Foss, ed. Harmondsw orth: Penguin.WASON , ., and P. JOHNS ON-LA IRD,972. The Psychology of Rea-soning: Structure and content. Cambridge, Mass.: Harvard Uni-versity Press.WILLIAMS, ., 1966. Adaptation and Natural Selection Prince-ton: P rinceton University Press.WYNN,K., 1992. Addition and subtraction by huma n infants.Nature 358:749-750.WYNN,K., 1995. Origins of num erical knowledg e. Math. Cop.1:35-60.