Judgment under Uncertainty: Heuristics and Biases

Biases in judgments reveal some heuristics of thinking under uncertainty.

Amos Tversky and Daniel Kahneman

SCIENCE, VOL. 185, 27 SEPTEMBER 1974

The authors are members of the department of psychology at the Hebrew University, Jerusalem, Israel.

Many decisions are based on beliefs concerning the likelihood of uncertain events such as the outcome of an election, the guilt of a defendant, or the future value of the dollar. These beliefs are usually expressed in statements such as "I think that . . . ," "chances are . . . ," "it is unlikely that . . . ," and so forth. Occasionally, beliefs concerning uncertain events are expressed in numerical form as odds or subjective probabilities. What determines such beliefs? How do people assess the probability of an uncertain event or the value of an uncertain quantity? This article shows that people rely on a limited number of heuristic principles which reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations. In general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors.

The subjective assessment of probability resembles the subjective assessment of physical quantities such as distance or size. These judgments are all based on data of limited validity, which are processed according to heuristic rules. For example, the apparent distance of an object is determined in part by its clarity. The more sharply the object is seen, the closer it appears to be. This rule has some validity, because in any given scene the more distant objects are seen less sharply than nearer objects. However, the reliance on this rule leads to systematic errors in the estimation of distance. Specifically, distances are often overestimated when visibility is poor because the contours of objects are blurred. On the other hand, distances are often underestimated when visibility is good because the objects are seen sharply. Thus, the reliance on clarity as an indication of distance leads to common biases. Such biases are also found in the intuitive judgment of probability. This article describes three heuristics that are employed to assess probabilities and to predict values. Biases to which these heuristics lead are enumerated, and the applied and theoretical implications of these observations are discussed.

Representativeness

Many of the probabilistic questions with which people are concerned belong to one of the following types: What is the probability that object A belongs to class B? What is the probability that event A originates from process B? What is the probability that process B will generate event A? In answering such questions, people typically rely on the representativeness heuristic, in which probabilities are evaluated by the degree to which A is representative of B, that is, by the degree to which A resembles B. For example, when A is highly representative of B, the probability that A originates from B is judged to be high. On the other hand, if A is not similar to B, the probability that A originates from B is judged to be low.

For an illustration of judgment by representativeness, consider an individual who has been described by a former neighbor as follows: "Steve is very shy and withdrawn, invariably helpful, but with little interest in people, or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail." How do people assess the probability that Steve is engaged in a particular occupation from a list of possibilities (for example, farmer, salesman, airline pilot, librarian, or physician)? How do people order these occupations from most to least likely? In the representativeness heuristic, the probability that Steve is a librarian, for example, is assessed by the degree to which he is representative of, or similar to, the stereotype of a librarian. Indeed, research with problems of this type has shown that people order the occupations by probability and by similarity in exactly the same way (1). This approach to the judgment of probability leads to serious errors, because similarity, or representativeness, is not influenced by several factors that should affect judgments of probability.

Insensitivity to prior probability of outcomes. One of the factors that have no effect on representativeness but should have a major effect on probability is the prior probability, or base-rate frequency, of the outcomes. In the case of Steve, for example, the fact that there are many more farmers than librarians in the population should enter into any reasonable estimate of the probability that Steve is a librarian rather than a farmer. Considerations of base-rate frequency, however, do not affect the similarity of Steve to the stereotypes of librarians and farmers. If people evaluate probability by representativeness, therefore, prior probabilities will be neglected. This hypothesis was tested in an experiment where prior probabilities were manipulated (1). Subjects were shown brief personality descriptions of several individuals, allegedly sampled at random from a group of 100 professionals, engineers and lawyers. The subjects were asked to assess, for each description, the probability that it belonged to an engineer rather than to a lawyer. In one experimental condition, subjects were told that the group from which the descriptions had been drawn consisted of 70 engineers and 30 lawyers. In another condition, subjects were told that the group consisted of 30 engineers and 70 lawyers. The odds that any particular description belongs to an engineer rather than to a lawyer should be higher in the first condition, where there is a majority of engineers, than in the second condition, where there is a majority of lawyers. Specifically, it can be shown by applying Bayes' rule that the ratio of these odds should be (.7/.3)², or 5.44, for each description. In a sharp violation of Bayes' rule, the subjects in the two conditions produced essentially the same probability judgments. Apparently, subjects evaluated the likelihood that a particular description belonged to an engineer rather than to a lawyer by the degree to which this description was representative of the two stereotypes, with little or no regard for the prior probabilities of the categories.

The subjects used prior probabilities correctly when they had no other information. In the absence of a personality sketch, they judged the probability that an unknown individual is an engineer to be .7 and .3, respectively, in the two base-rate conditions. However, prior probabilities were effectively ignored when a description was introduced, even when this description was totally uninformative. The responses to the following description illustrate this phenomenon:

Dick is a 30 year old man. He is married with no children. A man of high ability and high motivation, he promises to be quite successful in his field. He is well liked by his colleagues.

This description was intended to convey no information relevant to the question of whether Dick is an engineer or a lawyer. Consequently, the probability that Dick is an engineer should equal the proportion of engineers in the group, as if no description had been given. The subjects, however, judged the probability of Dick being an engineer to be .5 regardless of whether the stated proportion of engineers in the group was .7 or .3. Evidently, people respond differently when given no evidence and when given worthless evidence. When no specific evidence is given, prior probabilities are properly utilized; when worthless evidence is given, prior probabilities are ignored (1).
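The odds calculation behind the engineer-lawyer experiment can be sketched in a few lines of Python. The function name and the particular likelihood-ratio value below are mine, for illustration only; the point is that the ratio of posterior odds across the two base-rate conditions is (.7/.3)², or about 5.44, no matter how representative the description is.

```python
# Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio.
# The likelihood ratio stands for how representative a description is of an
# engineer versus a lawyer (the value chosen here is arbitrary).

def posterior_odds(prior_engineer, prior_lawyer, likelihood_ratio):
    """Posterior odds that a description belongs to an engineer."""
    return (prior_engineer / prior_lawyer) * likelihood_ratio

lr = 2.0  # any fixed value; it cancels in the comparison below
odds_70_30 = posterior_odds(0.70, 0.30, lr)  # 70 engineers, 30 lawyers
odds_30_70 = posterior_odds(0.30, 0.70, lr)  # 30 engineers, 70 lawyers

# The ratio between conditions is (.7/.3)^2, independent of the description.
print(round(odds_70_30 / odds_30_70, 2))  # 5.44
```

Subjects behaved as if this ratio were 1: the same description drew essentially the same judgment under both base rates.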

Insensitivity to sample size. To evaluate the probability of obtaining a particular result in a sample drawn from a specified population, people typically apply the representativeness heuristic. That is, they assess the likelihood of a sample result, for example, that the average height in a random sample of ten men will be 6 feet (180 centimeters), by the similarity of this result to the corresponding parameter (that is, to the average height in the population of men). The similarity of a sample statistic to a population parameter does not depend on the size of the sample. Consequently, if probabilities are assessed by representativeness, then the judged probability of a sample statistic will be essentially independent of

sample size. Indeed, when subjects assessed the distributions of average height for samples of various sizes, they produced identical distributions. For example, the probability of obtaining an average height greater than 6 feet was assigned the same value for samples of 1000, 100, and 10 men (2). Moreover, subjects failed to appreciate the role of sample size even when it was emphasized in the formulation of the problem. Consider the following question:

A certain town is served by two hospitals. In the larger hospital about 45 babies are born each day, and in the smaller hospital about 15 babies are born each day. As you know, about 50 percent of all babies are boys. However, the exact percentage varies from day to day. Sometimes it may be higher than 50 percent, sometimes lower. For a period of 1 year, each hospital

recorded the days on which more than 60 percent of the babies born were boys. Which hospital do you think recorded more such days?

- The larger hospital (21)
- The smaller hospital (21)
- About the same (that is, within 5 percent of each other) (53)

The values in parentheses are the number of undergraduate students who chose each answer.
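The exact answer can be checked with a short binomial computation. The sketch below assumes each birth is an independent fair coin (the article's "about 50 percent"); the function name is mine.

```python
from math import comb

def prob_more_than_60_percent_boys(n):
    """Exact probability that more than 60 percent of n births are boys,
    assuming each birth is independently a boy with probability 1/2."""
    # Integer comparison 10*k > 6*n avoids floating-point threshold issues.
    return sum(comb(n, k) for k in range(n + 1) if 10 * k > 6 * n) / 2 ** n

p_large = prob_more_than_60_percent_boys(45)  # larger hospital, ~45 births/day
p_small = prob_more_than_60_percent_boys(15)  # smaller hospital, ~15 births/day

# The smaller hospital records noticeably more such days over a year.
print(round(365 * p_large), round(365 * p_small))
```

The small sample strays past 60 percent far more often, which is the sampling-theory point the subjects missed.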

Most subjects judged the probability of obtaining more than 60 percent boys to be the same in the small and in the large hospital, presumably because these events are described by the same statistic and are therefore equally representative of the general population. In contrast, sampling theory entails that the expected number of days on which more than 60 percent of the babies are boys is much greater in the small hospital than in the large one, because a large sample is less likely to stray from 50 percent. This fundamental notion of statistics is evidently not part of people's repertoire of intuitions. A similar insensitivity to sample size

has been reported in judgments of posterior probability, that is, of the probability that a sample has been drawn from one population rather than from another. Consider the following example:

Imagine an urn filled with balls, of which 2/3 are of one color and 1/3 of another. One individual has drawn 5 balls from the urn, and found that 4 were red and 1 was white. Another individual has drawn 20 balls and found that 12 were red and 8 were white. Which of the two individuals should feel more confident that the urn contains 2/3 red balls and 1/3 white balls, rather than the opposite? What odds should each individual give?

In this problem, the correct posterior odds are 8 to 1 for the 4:1 sample and 16 to 1 for the 12:8 sample, assuming equal prior probabilities. However, most people feel that the first sample provides much stronger evidence for the hypothesis that the urn is predominantly red, because the proportion of red balls is larger in the first than in the second sample. Here again, intuitive judgments are dominated by the sample proportion and are essentially unaffected by the size of the sample, which plays a crucial role in the determination of the actual posterior odds (2). In addition, intuitive estimates of posterior odds are far less extreme than the correct values. The underestimation of the impact of evidence has been observed repeatedly in problems of this type (3, 4). It has been labeled "conservatism."
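The stated odds follow directly from Bayes' rule: with equal priors, each red ball is twice as likely under the mostly-red hypothesis and each white ball half as likely, so the posterior odds work out to 2 raised to the power (red minus white). A minimal check (function name mine):

```python
def urn_posterior_odds(red, white):
    """Posterior odds that the urn is 2/3 red rather than 2/3 white,
    given a sample, assuming equal prior odds for the two hypotheses."""
    # Likelihood ratio: (2/3)^red (1/3)^white over (1/3)^red (2/3)^white,
    # which simplifies to 2**(red - white).
    return ((2 / 3) ** red * (1 / 3) ** white) / ((1 / 3) ** red * (2 / 3) ** white)

print(round(urn_posterior_odds(4, 1)), round(urn_posterior_odds(12, 8)))  # 8 16
```

Only the difference between red and white counts matters, not the sample proportion, which is why the larger sample carries the stronger evidence.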

Misconceptions of chance. People expect that a sequence of events generated by a random process will represent the essential characteristics of that process even when the sequence is short. In considering tosses of a coin for heads or tails, for example, people regard the sequence H-T-H-T-T-H to be more likely than the sequence H-H-H-T-T-T, which does not appear random, and also more likely than the sequence H-H-H-H-T-H, which does not represent the fairness of the coin (2). Thus, people expect that the essential characteristics of the process will be represented, not only globally in the entire sequence, but also locally in each of its parts. A locally representative sequence, however, deviates systematically from chance expectation: it contains too many alternations and too few runs. Another consequence of the belief in local representativeness is the well-known gambler's fallacy. After observing a long run of red on the roulette wheel, for example, most people erroneously believe that black is now due, presumably because the occurrence of black will result in a more representative sequence than the occurrence of an additional red. Chance is commonly viewed as a self-correcting process in which a deviation in one direction induces a deviation in the opposite direction to restore the equilibrium. In fact, deviations are not "corrected" as a chance process unfolds, they are merely diluted.
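The "diluted, not corrected" point can be made with simple arithmetic. The toss counts below are illustrative, not from the article: a chance deficit of 10 heads never gets paid back in expectation; it just comes to weigh less in a growing denominator.

```python
# Suppose the first 100 tosses of a fair coin happen to yield only 40 heads,
# a deficit of 10. Later fair tosses do not favor heads: in expectation they
# split evenly, so the expected deficit in the count stays fixed while the
# proportion drifts back toward 0.5.

heads, tosses = 40, 100
for extra in (0, 900, 9900):
    expected_heads = heads + extra / 2      # further tosses are fair: half heads
    total = tosses + extra
    print(total, total / 2 - expected_heads, expected_heads / total)
# deficit stays 10.0 throughout; proportion moves 0.4 -> 0.49 -> 0.499
```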

Misconceptions of chance are not limited to naive subjects. A study of the statistical intuitions of experienced research psychologists (5) revealed a lingering belief in what may be called the "law of small numbers," according to which even small samples are highly


representative of the populations from which they are drawn. The responses of these investigators reflected the expectation that a valid hypothesis about a population will be represented by a statistically significant result in a sample, with little regard for its size. As a consequence, the researchers put too much faith in the results of small samples and grossly overestimated the replicability of such results. In the actual conduct of research, this bias leads to the selection of samples of inadequate size and to overinterpretation of findings.

Insensitivity to predictability. People are sometimes called upon to make such numerical predictions as the future value of a stock, the demand for a commodity, or the outcome of a football game. Such predictions are often made by representativeness. For example, suppose one is given a description of a company and is asked to predict its future profit. If the description of the company is very favorable, a very high profit will appear most representative of that description; if the description is mediocre, a mediocre performance will appear most representative. The degree to which the description is favorable is unaffected by the reliability of that description or by the degree to which it permits accurate prediction. Hence, if people predict solely in terms of the favorableness of the description, their predictions will be insensitive to the reliability of the evidence and to the expected accuracy of the prediction.

This mode of judgment violates the normative statistical theory in which the extremeness and the range of predictions are controlled by considerations of predictability. When predictability is nil, the same prediction should be made in all cases. For example, if the descriptions of companies provide no information relevant to profit, then the same value (such as average profit) should be predicted for all companies. If predictability is perfect, of course, the values predicted will match the actual values and the range of predictions will equal the range of outcomes. In general, the higher the predictability, the wider the range of predicted values.

Several studies of numerical prediction have demonstrated that intuitive predictions violate this rule, and that subjects show little or no regard for considerations of predictability (1). In one of these studies, subjects were presented with several paragraphs, each describing the performance of a student teacher during a particular practice lesson. Some subjects were asked to evaluate the quality of the lesson described in the paragraph in percentile scores, relative to a specified population. Other subjects were asked to predict, also in percentile scores, the standing of each student teacher 5 years after the practice lesson. The judgments made under the two conditions were identical. That is, the prediction of a remote criterion (success of a teacher after 5 years) was identical to the evaluation of the information on which the prediction was based (the quality of the practice lesson). The students who made these predictions were undoubtedly aware of the limited predictability of teaching competence on the basis of a single trial lesson 5 years earlier; nevertheless, their predictions were as extreme as their evaluations.

The illusion of validity. As we have seen, people often predict by selecting the outcome (for example, an occupation) that is most representative of the input (for example, the description of a person). The confidence they have in their prediction depends primarily on the degree of representativeness (that is, on the quality of the match between the selected outcome and the input) with little or no regard for the factors that limit predictive accuracy. Thus, people express great confidence in the prediction that a person is a librarian when given a description of his personality which matches the stereotype of librarians, even if the description is scanty, unreliable, or outdated. The unwarranted confidence which is produced by a good fit between the predicted outcome and the input information may be called the illusion of validity. This illusion persists even when the judge is aware of the factors that limit the accuracy of his predictions. It is a common observation that psychologists who conduct selection interviews often experience considerable confidence in their predictions, even when they know of the vast literature that shows selection interviews to be highly fallible. The continued reliance on the clinical interview for selection, despite repeated demonstrations of its inadequacy, amply attests to the strength of this effect. The internal consistency of a pattern

of inputs is a major determinant of one's confidence in predictions based on these inputs. For example, people express more confidence in predicting the final grade-point average of a student

whose first-year record consists entirely of B's than in predicting the grade-point average of a student whose first-year record includes many A's and C's. Highly consistent patterns are most often observed when the input variables are highly redundant or correlated. Hence, people tend to have great confidence in predictions based on redundant input variables. However, an elementary result in the statistics of correlation asserts that, given input variables of stated validity, a prediction based on several such inputs can achieve higher accuracy when they are independent of each other than when they are redundant or correlated. Thus, redundancy among inputs decreases accuracy even as it increases confidence, and people are often confident in predictions that are quite likely to be off the mark (1).
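The elementary result alluded to can be stated concretely. For two predictors that each correlate rho with the criterion and r with each other, a standard formula of correlation statistics (not quoted in the article; supplied here for illustration) gives the squared multiple correlation of the best linear prediction as 2·rho²/(1 + r):

```python
# R^2 for the optimal linear prediction from two equally valid predictors:
#     R^2 = 2 * rho**2 / (1 + r)
# where rho is each predictor's correlation with the criterion and r is the
# intercorrelation of the two predictors.

def multiple_r_squared(rho, r):
    """Predictive accuracy (R^2) from two predictors of validity rho
    with intercorrelation r."""
    return 2 * rho ** 2 / (1 + r)

# Independent inputs beat redundant ones of the same individual validity:
print(round(multiple_r_squared(0.5, 0.0), 4))  # 0.5
print(round(multiple_r_squared(0.5, 0.6), 4))  # 0.3125
```

Accuracy falls as r rises, even though a consistent (correlated) pattern of inputs feels more trustworthy.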

Misconceptions of regression. Suppose a large group of children has been examined on two equivalent versions of an aptitude test. If one selects ten children from among those who did best on one of the two versions, he will usually find their performance on the second version to be somewhat disappointing. Conversely, if one selects ten children from among those who did worst on one version, they will be found, on the average, to do somewhat better on the other version. More generally, consider two variables X and Y which have the same distribution. If one selects individuals whose average X score deviates from the mean of X by k units, then the average of their Y scores will usually deviate from the mean of Y by less than k units. These observations illustrate a general phenomenon known as regression toward the mean, which was first documented by Galton more than 100 years ago.
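The aptitude-test thought experiment is easy to simulate. The model and parameters below are my own illustrative choices: each child has a stable true score, and each test version adds independent noise.

```python
import random

random.seed(0)

# True ability plus independent measurement noise on each test version.
n = 1000
true_ability = [random.gauss(100, 15) for _ in range(n)]
version_a = [t + random.gauss(0, 10) for t in true_ability]
version_b = [t + random.gauss(0, 10) for t in true_ability]

# Select the ten children who did best on version A ...
top_ten = sorted(range(n), key=lambda i: version_a[i], reverse=True)[:10]

mean_a = sum(version_a[i] for i in top_ten) / 10
mean_b = sum(version_b[i] for i in top_ten) / 10

# ... their version B average is pulled back toward the population mean,
# because part of their version A standing was favorable noise.
print(mean_a, mean_b)
```

No causal mechanism is needed: selecting on one noisy measurement guarantees the regression on the other.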

In the normal course of life, one encounters many instances of regression toward the mean, in the comparison of the height of fathers and sons, of the intelligence of husbands and wives, or of the performance of individuals on consecutive examinations. Nevertheless, people do not develop correct intuitions about this phenomenon. First, they do not expect regression in many contexts where it is bound to occur. Second, when they recognize the occurrence of regression, they often invent spurious causal explanations for it (1). We suggest that the phenomenon of regression remains elusive because it is incompatible with the belief that the predicted outcome should be maximally


representative of the input, and, hence, that the value of the outcome variable should be as extreme as the value of the input variable. The failure to recognize the import

of regression can have pernicious consequences, as illustrated by the following observation (1). In a discussion of flight training, experienced instructors noted that praise for an exceptionally smooth landing is typically followed by a poorer landing on the next try, while harsh criticism after a rough landing is usually followed by an improvement on the next try. The instructors concluded that verbal rewards are detrimental to learning, while verbal punishments are beneficial, contrary to accepted psychological doctrine. This conclusion is unwarranted because of the presence of regression toward the mean. As in other cases of repeated examination, an improvement will usually follow a poor performance and a deterioration will usually follow an outstanding performance, even if the instructor does not respond to the trainee's achievement on the first attempt. Because the instructors had praised their trainees after good landings and admonished them after poor ones, they reached the erroneous and potentially harmful conclusion that punishment is more effective than reward.

Thus, the failure to understand the effect of regression leads one to overestimate the effectiveness of punishment and to underestimate the effectiveness of reward. In social interaction, as well as in training, rewards are typically administered when performance is good, and punishments are typically administered when performance is poor. By regression alone, therefore, behavior is most likely to improve after punishment and most likely to deteriorate after reward. Consequently, the human condition is such that, by chance alone, one is most often rewarded for punishing others and most often punished for rewarding them. People are generally not aware of this contingency. In fact, the elusive role of regression in determining the apparent consequences of reward and punishment seems to have escaped the notice of students of this area.
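The flight-training observation can be reproduced in a model where feedback has, by construction, no effect at all. The scores and thresholds below are my own illustrative choices: landing quality is pure chance, independent across attempts.

```python
import random

random.seed(1)

# Two consecutive landing scores per trainee, independent standard normals;
# the instructor's feedback does nothing in this model.
first = [random.gauss(0, 1) for _ in range(10000)]
second = [random.gauss(0, 1) for _ in range(10000)]

praised = [s for f, s in zip(first, second) if f > 1.0]      # after a smooth landing
criticized = [s for f, s in zip(first, second) if f < -1.0]  # after a rough landing

# By regression alone, the group that was praised "gets worse" (back toward 0)
# and the group that was criticized "improves" (also back toward 0).
print(sum(praised) / len(praised), sum(criticized) / len(criticized))
```

Both averages sit near zero: the apparent effect of praise and criticism is entirely an artifact of selecting on an extreme first attempt.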

Availability

There are situations in which people assess the frequency of a class or the probability of an event by the ease with


which instances or occurrences can be brought to mind. For example, one may assess the risk of heart attack among middle-aged people by recalling such occurrences among one's acquaintances. Similarly, one may evaluate the probability that a given business venture will fail by imagining various difficulties it could encounter. This judgmental heuristic is called availability. Availability is a useful clue for assessing frequency or probability, because instances of large classes are usually recalled better and faster than instances of less frequent classes. However, availability is affected by factors other than frequency and probability. Consequently, the reliance on availability leads to predictable biases, some of which are illustrated below.

Biases due to the retrievability of instances. When the size of a class is judged by the availability of its instances, a class whose instances are easily retrieved will appear more numerous than a class of equal frequency whose instances are less retrievable. In an elementary demonstration of this effect, subjects heard a list of well-known personalities of both sexes and were subsequently asked to judge whether the list contained more names of men than of women. Different lists were presented to different groups of subjects. In some of the lists the men were relatively more famous than the women, and in others the women were relatively more famous than the men. In each of the lists, the subjects erroneously judged that the class (sex) that had the more famous personalities was the more numerous (6).

In addition to familiarity, there are other factors, such as salience, which affect the retrievability of instances. For example, the impact of seeing a house burning on the subjective probability of such accidents is probably greater than the impact of reading about a fire in the local paper. Furthermore, recent occurrences are likely to be relatively more available than earlier occurrences. It is a common experience that the subjective probability of traffic accidents rises temporarily when one sees a car overturned by the side of the road.

Biases due to the effectiveness of a search set. Suppose one samples a word (of three letters or more) at random from an English text. Is it more likely that the word starts with r or that r is the third letter? People approach this problem by recalling words that

begin with r (road) and words thathave r in the third position (car) andassess the relative frequency by theease with which words of the two typescome to mind. Because it is much easierto search for words by their first letterthan by their third letter, most peoplejudge words that begin with a givenconsonant to be more numerous thanwords in which the same consonant ap-pears in the third position. They do soeven for consonants, such as r or k,that are more frequent in the thirdposition than in the first (6).
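The search-set asymmetry is easy to illustrate with a small counting sketch. The sample sentence and the tokenization rule below are illustrative assumptions, not data from the study:

```python
import re

# Count words of three letters or more that start with "r" versus those
# with "r" in the third position, in a small sample text (illustrative only).
text = ("The road ran past the barn where a red cart had been parked. "
        "Over the years the rain wore ruts in the dirt, "
        "and birds perched on the rail.")

words = [w for w in re.findall(r"[a-z]+", text.lower()) if len(w) >= 3]
first = sum(w[0] == "r" for w in words)
third = sum(w[2] == "r" for w in words)
print(f"r first: {first}, r third: {third}")  # even in this tiny sample, r-third wins
```

Searching memory by initial letter is effortless; searching by third letter is not, so the ease of recall misrepresents the relative frequencies that a mechanical count reveals.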

Different tasks elicit different search sets. For example, suppose you are asked to rate the frequency with which abstract words (thought, love) and concrete words (door, water) appear in written English. A natural way to answer this question is to search for contexts in which the word could appear. It seems easier to think of contexts in which an abstract concept is mentioned (love in love stories) than to think of contexts in which a concrete word (such as door) is mentioned. If the frequency of words is judged by the availability of the contexts in which they appear, abstract words will be judged as relatively more numerous than concrete words. This bias has been observed in a recent study (7) which showed that the judged frequency of occurrence of abstract words was much higher than that of concrete words, equated in objective frequency. Abstract words were also judged to appear in a much greater variety of contexts than concrete words.

Biases of imaginability. Sometimes one has to assess the frequency of a class whose instances are not stored in memory but can be generated according to a given rule. In such situations, one typically generates several instances and evaluates frequency or probability by the ease with which the relevant instances can be constructed. However, the ease of constructing instances does not always reflect their actual frequency, and this mode of evaluation is prone to biases. To illustrate, consider a group of 10 people who form committees of k members, 2 ≤ k ≤ 8. How many different committees of k members can be formed? The correct answer to this problem is given by the binomial coefficient (10 choose k), which reaches a maximum of 252 for k = 5. Clearly, the number of committees of k members equals the number of committees of (10 - k) members, because any committee of k members defines a unique group of (10 - k) nonmembers.

One way to answer this question without computation is to mentally construct committees of k members and to evaluate their number by the ease with which they come to mind. Committees of few members, say 2, are more available than committees of many members, say 8. The simplest scheme for the construction of committees is a partition of the group into disjoint sets. One readily sees that it is easy to construct five disjoint committees of 2 members, while it is impossible to generate even two disjoint committees of 8 members. Consequently, if frequency is assessed by imaginability, or by availability for construction, the small committees will appear more numerous than larger committees, in contrast to the correct bell-shaped function. Indeed, when naive subjects were asked to estimate the number of distinct committees of various sizes, their estimates were a decreasing monotonic function of committee size (6). For example, the median estimate of the number of committees of 2 members was 70, while the estimate for committees of 8 members was 20 (the correct answer is 45 in both cases).
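The committee arithmetic above can be checked directly; a brief sketch using the standard library, purely to verify the counts quoted in the text:

```python
from math import comb  # binomial coefficient, available since Python 3.8

# Number of distinct committees of k members drawn from a group of 10 people.
counts = {k: comb(10, k) for k in range(2, 9)}
print(counts)

# The counts are symmetric (choosing k members fixes 10 - k nonmembers)
# and peak at k = 5 -- the bell-shaped function that intuition misses.
assert counts[2] == counts[8] == 45
assert max(counts.values()) == counts[5] == 252
```

The symmetry makes the monotonically decreasing estimates reported above impossible to reconcile with the true counts: committees of 2 and of 8 are exactly equally numerous.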

Imaginability plays an important role in the evaluation of probabilities in real-life situations. The risk involved in an adventurous expedition, for example, is evaluated by imagining contingencies with which the expedition is not equipped to cope. If many such difficulties are vividly portrayed, the expedition can be made to appear exceedingly dangerous, although the ease with which disasters are imagined need not reflect their actual likelihood. Conversely, the risk involved in an undertaking may be grossly underestimated if some possible dangers are either difficult to conceive of, or simply do not come to mind.

Illusory correlation. Chapman and Chapman (8) have described an interesting bias in the judgment of the frequency with which two events co-occur. They presented naive judges with information concerning several hypothetical mental patients. The data for each patient consisted of a clinical diagnosis and a drawing of a person made by the patient. Later the judges estimated the frequency with which each diagnosis (such as paranoia or suspiciousness) had been accompanied by various features of the drawing (such as peculiar eyes). The subjects markedly overestimated the frequency of co-occurrence of natural associates, such as suspiciousness and peculiar eyes. This effect was labeled illusory correlation. In their erroneous judgments of the data to which they had been exposed, naive subjects "rediscovered" much of the common, but unfounded, clinical lore concerning the interpretation of the draw-a-person test. The illusory correlation effect was extremely resistant to contradictory data. It persisted even when the correlation between symptom and diagnosis was actually negative, and it prevented the judges from detecting relationships that were in fact present.

Availability provides a natural account for the illusory-correlation effect. The judgment of how frequently two events co-occur could be based on the strength of the associative bond between them. When the association is strong, one is likely to conclude that the events have been frequently paired. Consequently, strong associates will be judged to have occurred together frequently. According to this view, the illusory correlation between suspiciousness and peculiar drawing of the eyes, for example, is due to the fact that suspiciousness is more readily associated with the eyes than with any other part of the body.

Lifelong experience has taught us that, in general, instances of large classes are recalled better and faster than instances of less frequent classes; that likely occurrences are easier to imagine than unlikely ones; and that the associative connections between events are strengthened when the events frequently co-occur. As a result, man has at his disposal a procedure (the availability heuristic) for estimating the numerosity of a class, the likelihood of an event, or the frequency of co-occurrences, by the ease with which the relevant mental operations of retrieval, construction, or association can be performed. However, as the preceding examples have demonstrated, this valuable estimation procedure results in systematic errors.

Adjustment and Anchoring

In many situations, people make estimates by starting from an initial value that is adjusted to yield the final answer. The initial value, or starting point, may be suggested by the formulation of the problem, or it may be the result of a partial computation. In either case, adjustments are typically insufficient (4). That is, different starting points yield different estimates, which are biased toward the initial values. We call this phenomenon anchoring.

Insufficient adjustment. In a demonstration of the anchoring effect, subjects were asked to estimate various quantities, stated in percentages (for example, the percentage of African countries in the United Nations). For each quantity, a number between 0 and 100 was determined by spinning a wheel of fortune in the subjects' presence. The subjects were instructed to indicate first whether that number was higher or lower than the value of the quantity, and then to estimate the value of the quantity by moving upward or downward from the given number. Different groups were given different numbers for each quantity, and these arbitrary numbers had a marked effect on estimates. For example, the median estimates of the percentage of African countries in the United Nations were 25 and 45 for groups that received 10 and 65, respectively, as starting points. Payoffs for accuracy did not reduce the anchoring effect.

Anchoring occurs not only when the starting point is given to the subject, but also when the subject bases his estimate on the result of some incomplete computation. A study of intuitive numerical estimation illustrates this effect. Two groups of high school students estimated, within 5 seconds, a numerical expression that was written on the blackboard. One group estimated the product

8 × 7 × 6 × 5 × 4 × 3 × 2 × 1

while another group estimated the product

1 × 2 × 3 × 4 × 5 × 6 × 7 × 8

To rapidly answer such questions, people may perform a few steps of computation and estimate the product by extrapolation or adjustment. Because adjustments are typically insufficient, this procedure should lead to underestimation. Furthermore, because the result of the first few steps of multiplication (performed from left to right) is higher in the descending sequence than in the ascending sequence, the former expression should be judged larger than the latter. Both predictions were confirmed. The median estimate for the ascending sequence was 512, while the median estimate for the descending sequence was 2,250. The correct answer is 40,320.
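A quick check of the arithmetic in this demonstration (nothing here beyond the numbers given in the text):

```python
from math import prod

descending = [8, 7, 6, 5, 4, 3, 2, 1]
ascending = descending[::-1]

# Both sequences have the same product, far above either median estimate.
assert prod(descending) == prod(ascending) == 40320

# The first few left-to-right steps -- the proposed anchor -- differ sharply.
print(prod(descending[:3]), prod(ascending[:3]))  # 336 versus 6
```

Both median estimates (512 and 2,250) fall well below 40,320, as insufficient upward adjustment from a small partial product predicts, and the group anchored on 336 ended far above the group anchored on 6.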

Biases in the evaluation of conjunctive and disjunctive events. In a recent study by Bar-Hillel (9) subjects were given the opportunity to bet on one of two events. Three types of events were used: (i) simple events, such as drawing a red marble from a bag containing 50 percent red marbles and 50 percent white marbles; (ii) conjunctive events, such as drawing a red marble seven times in succession, with replacement, from a bag containing 90 percent red marbles and 10 percent white marbles; and (iii) disjunctive events, such as drawing a red marble at least once in seven successive tries, with replacement, from a bag containing 10 percent red marbles and 90 percent white marbles. In this problem, a significant majority of subjects preferred to bet on the conjunctive event (the probability of which is .48) rather than on the simple event (the probability of which is .50). Subjects also preferred to bet on the simple event rather than on the disjunctive event, which has a probability of .52. Thus, most subjects bet on the less likely event in both comparisons. This pattern of choices illustrates a general finding. Studies of choice among gambles and of judgments of probability indicate that people tend to overestimate the probability of conjunctive events (10) and to underestimate the probability of disjunctive events. These biases are readily explained as effects of anchoring. The stated probability of the elementary event (success at any one stage) provides a natural starting point for the estimation of the probabilities of both conjunctive and disjunctive events. Since adjustment from the starting point is typically insufficient, the final estimates remain too close to the probabilities of the elementary events in both cases.
Note that the overall probability of a conjunctive event is lower than the probability of each elementary event, whereas the overall probability of a disjunctive event is higher than the probability of each elementary event. As a consequence of anchoring, the overall probability will be overestimated in conjunctive problems and underestimated in disjunctive problems.
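The three probabilities quoted above follow directly from the independence of successive draws with replacement; a short check:

```python
# Simple event: one draw from a 50/50 bag.
p_simple = 0.50

# Conjunctive: seven reds in a row, with replacement, from a 90%-red bag.
p_conjunctive = 0.9 ** 7

# Disjunctive: at least one red in seven tries from a 10%-red bag,
# i.e. the complement of drawing seven whites in a row.
p_disjunctive = 1 - 0.9 ** 7

print(round(p_conjunctive, 2), round(p_disjunctive, 2))  # 0.48 0.52
assert p_conjunctive < p_simple < p_disjunctive
```

The elementary probabilities (.90 and .10) sit on opposite sides of the compound probabilities (.48 and .52), so insufficient adjustment from the elementary anchor produces overestimation in one case and underestimation in the other.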

Biases in the evaluation of compound events are particularly significant in the context of planning. The successful completion of an undertaking, such as the development of a new product, typically has a conjunctive character: for the undertaking to succeed, each of a series of events must occur. Even when each of these events is very likely, the overall probability of success can be quite low if the number of events is large. The general tendency to overestimate the probability of conjunctive events leads to unwarranted optimism in the evaluation of the likelihood that a plan will succeed or that a project will be completed on time. Conversely, disjunctive structures are typically encountered in the evaluation of risks. A complex system, such as a nuclear reactor or a human body, will malfunction if any of its essential components fails. Even when the likelihood of failure in each component is slight, the probability of an overall failure can be high if many components are involved. Because of anchoring, people will tend to underestimate the probabilities of failure in complex systems. Thus, the direction of the anchoring bias can sometimes be inferred from the structure of the event. The chain-like structure of conjunctions leads to overestimation; the funnel-like structure of disjunctions leads to underestimation.

Anchoring in the assessment of subjective probability distributions. In decision analysis, experts are often required to express their beliefs about a quantity, such as the value of the Dow-Jones average on a particular day, in the form of a probability distribution. Such a distribution is usually constructed by asking the person to select values of the quantity that correspond to specified percentiles of his subjective probability distribution. For example, the judge may be asked to select a number, X90, such that his subjective probability that this number will be higher than the value of the Dow-Jones average is .90. That is, he should select the value X90 so that he is just willing to accept 9 to 1 odds that the Dow-Jones average will not exceed it. A subjective probability distribution for the value of the Dow-Jones average can be constructed from several such judgments corresponding to different percentiles.

By collecting subjective probability distributions for many different quantities, it is possible to test the judge for proper calibration. A judge is properly (or externally) calibrated in a set of problems if exactly Π percent of the true values of the assessed quantities falls below his stated values of XΠ. For example, the true values should fall below X01 for 1 percent of the quantities and above X99 for 1 percent of the quantities. Thus, the true values should fall in the confidence interval between X01 and X99 on 98 percent of the problems.
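A calibration check of this kind is straightforward to mechanize. The sketch below uses made-up numbers (the stated X01 to X99 intervals and the true values are hypothetical, not data from any study) to show the bookkeeping:

```python
# Each tuple: (stated X01, stated X99, true value later revealed).
# All numbers are invented for illustration only.
judgments = [
    (800, 1200, 950),
    (10, 60, 75),        # truth escapes above X99
    (3000, 9000, 2500),  # truth escapes below X01
    (40, 90, 55),
]

# A well-calibrated judge should be surprised on about 2 percent of problems.
misses = sum(not (x01 <= true <= x99) for x01, x99, true in judgments)
print(f"surprise rate: {misses / len(judgments):.0%}")  # 50% for this toy set
```

On real elicitation data, a surprise rate far above 2 percent diagnoses overly tight distributions, which is exactly the pattern reported in the studies discussed next.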

Several investigators (11) have obtained probability distributions for many quantities from a large number of judges. These distributions indicated large and systematic departures from proper calibration. In most studies, the actual values of the assessed quantities are either smaller than X01 or greater than X99 for about 30 percent of the problems. That is, the subjects state overly narrow confidence intervals which reflect more certainty than is justified by their knowledge about the assessed quantities. This bias is common to naive and to sophisticated subjects, and it is not eliminated by introducing proper scoring rules, which provide incentives for external calibration. This effect is attributable, in part at least, to anchoring.

To select X90 for the value of the Dow-Jones average, for example, it is natural to begin by thinking about one's best estimate of the Dow-Jones and to adjust this value upward. If this adjustment, like most others, is insufficient, then X90 will not be sufficiently extreme. A similar anchoring effect will occur in the selection of X10, which is presumably obtained by adjusting one's best estimate downward. Consequently, the confidence interval between X10 and X90 will be too narrow, and the assessed probability distribution will be too tight. In support of this interpretation it can be shown that subjective probabilities are systematically altered by a procedure in which one's best estimate does not serve as an anchor.

Subjective probability distributions for a given quantity (the Dow-Jones average) can be obtained in two different ways: (i) by asking the subject to select values of the Dow-Jones that correspond to specified percentiles of his probability distribution and (ii) by asking the subject to assess the probabilities that the true value of the Dow-Jones will exceed some specified values. The two procedures are formally equivalent and should yield identical distributions. However, they suggest different modes of adjustment from different anchors. In procedure (i), the natural starting point is one's best estimate of the quantity. In procedure (ii), on the other hand, the subject may be anchored on the value stated in the question. Alternatively, he may be anchored on even odds, or 50-50 chances, which is a natural starting point in the estimation of likelihood. In either case, procedure (ii) should yield less extreme odds than procedure (i).

To contrast the two procedures, a set of 24 quantities (such as the air distance from New Delhi to Peking) was presented to a group of subjects who assessed either X10 or X90 for each problem. Another group of subjects received the median judgment of the first group for each of the 24 quantities. They were asked to assess the odds that each of the given values exceeded the true value of the relevant quantity. In the absence of any bias, the second group should retrieve the odds specified to the first group, that is, 9:1. However, if even odds or the stated value serve as anchors, the odds of the second group should be less extreme, that is, closer to 1:1. Indeed, the median odds stated by this group, across all problems, were 3:1. When the judgments of the two groups were tested for external calibration, it was found that subjects in the first group were too extreme, in accord with earlier studies. The events that they defined as having a probability of .10 actually obtained in 24 percent of the cases. In contrast, subjects in the second group were too conservative. Events to which they assigned an average probability of .34 actually obtained in 26 percent of the cases. These results illustrate the manner in which the degree of calibration depends on the procedure of elicitation.

Discussion

This article has been concerned with cognitive biases that stem from the reliance on judgmental heuristics. These biases are not attributable to motivational effects such as wishful thinking or the distortion of judgments by payoffs and penalties. Indeed, several of the severe errors of judgment reported earlier occurred despite the fact that subjects were encouraged to be accurate and were rewarded for the correct answers (2, 6).

The reliance on heuristics and the prevalence of biases are not restricted to laymen. Experienced researchers are also prone to the same biases when they think intuitively. For example, the tendency to predict the outcome that best represents the data, with insufficient regard for prior probability, has been observed in the intuitive judgments of individuals who have had extensive training in statistics (1, 5). Although the statistically sophisticated avoid elementary errors, such as the gambler's fallacy, their intuitive judgments are liable to similar fallacies in more intricate and less transparent problems.


It is not surprising that useful heuristics such as representativeness and availability are retained, even though they occasionally lead to errors in prediction or estimation. What is perhaps surprising is the failure of people to infer from lifelong experience such fundamental statistical rules as regression toward the mean, or the effect of sample size on sampling variability. Although everyone is exposed, in the normal course of life, to numerous examples from which these rules could have been induced, very few people discover the principles of sampling and regression on their own. Statistical principles are not learned from everyday experience because the relevant instances are not coded appropriately. For example, people do not discover that successive lines in a text differ more in average word length than do successive pages, because they simply do not attend to the average word length of individual lines or pages. Thus, people do not learn the relation between sample size and sampling variability, although the data for such learning are abundant.

The lack of an appropriate code also explains why people usually do not detect the biases in their judgments of probability. A person could conceivably learn whether his judgments are externally calibrated by keeping a tally of the proportion of events that actually occur among those to which he assigns the same probability. However, it is not natural to group events by their judged probability. In the absence of such grouping it is impossible for an individual to discover, for example, that only 50 percent of the predictions to which he has assigned a probability of .9 or higher actually came true.

The empirical analysis of cognitive biases has implications for the theoretical and applied role of judged probabilities. Modern decision theory (12, 13) regards subjective probability as the quantified opinion of an idealized person. Specifically, the subjective probability of a given event is defined by the set of bets about this event that such a person is willing to accept. An internally consistent, or coherent, subjective probability measure can be derived for an individual if his choices among bets satisfy certain principles, that is, the axioms of the theory. The derived probability is subjective in the sense that different individuals are allowed to have different probabilities for the same event. The major contribution of this approach is that it provides a rigorous subjective interpretation of probability that is applicable to unique events and is embedded in a general theory of rational decision.

It should perhaps be noted that, while subjective probabilities can sometimes be inferred from preferences among bets, they are normally not formed in this fashion. A person bets on team A rather than on team B because he believes that team A is more likely to win; he does not infer this belief from his betting preferences. Thus, in reality, subjective probabilities determine preferences among bets and are not derived from them, as in the axiomatic theory of rational decision (12).

The inherently subjective nature of probability has led many students to the belief that coherence, or internal consistency, is the only valid criterion by which judged probabilities should be evaluated. From the standpoint of the formal theory of subjective probability, any set of internally consistent probability judgments is as good as any other. This criterion is not entirely satisfactory, because an internally consistent set of subjective probabilities can be incompatible with other beliefs held by the individual. Consider a person whose subjective probabilities for all possible outcomes of a coin-tossing game reflect the gambler's fallacy. That is, his estimate of the probability of tails on a particular toss increases with the number of consecutive heads that preceded that toss. The judgments of such a person could be internally consistent and therefore acceptable as adequate subjective probabilities according to the criterion of the formal theory. These probabilities, however, are incompatible with the generally held belief that a coin has no memory and is therefore incapable of generating sequential dependencies. For judged probabilities to be considered adequate, or rational, internal consistency is not enough. The judgments must be compatible with the entire web of beliefs held by the individual. Unfortunately, there can be no simple formal procedure for assessing the compatibility of a set of probability judgments with the judge's total system of beliefs. The rational judge will nevertheless strive for compatibility, even though internal consistency is more easily achieved and assessed. In particular, he will attempt to make his probability judgments compatible with his knowledge about the subject matter, the laws of probability, and his own judgmental heuristics and biases.

SCIENCE, VOL. 185


Summary

This article described three heuristics that are employed in making judgments under uncertainty: (i) representativeness, which is usually employed when people are asked to judge the probability that an object or event A belongs to class or process B; (ii) availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development; and (iii) adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available. These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors. A better understanding of these heuristics and of the biases to which they lead could improve judgments and decisions in situations of uncertainty.

References and Notes

1. D. Kahneman and A. Tversky, Psychol. Rev. 80, 237 (1973).
2. ———, Cognitive Psychol. 3, 430 (1972).
3. W. Edwards, in Formal Representation of Human Judgment, B. Kleinmuntz, Ed. (Wiley, New York, 1968), pp. 17-52.
4. P. Slovic and S. Lichtenstein, Organ. Behav. Hum. Performance 6, 649 (1971).
5. A. Tversky and D. Kahneman, Psychol. Bull. 76, 105 (1971).
6. ———, Cognitive Psychol. 5, 207 (1973).
7. R. C. Galbraith and B. J. Underwood, Mem. Cognition 1, 56 (1973).
8. L. J. Chapman and J. P. Chapman, J. Abnorm. Psychol. 73, 193 (1967); ibid. 74, 271 (1969).
9. M. Bar-Hillel, Organ. Behav. Hum. Performance 9, 396 (1973).
10. J. Cohen, E. I. Chesnick, D. Haran, Br. J. Psychol. 63, 41 (1972).
11. M. Alpert and H. Raiffa, unpublished manuscript; C. A. S. von Holstein, Acta Psychol. 35, 478 (1971); R. L. Winkler, J. Am. Stat. Assoc. 62, 776 (1967).
12. L. J. Savage, The Foundations of Statistics (Wiley, New York, 1954).
13. B. De Finetti, in International Encyclopedia of the Social Sciences, D. E. Sills, Ed. (Macmillan, New York, 1968), vol. 12, pp. 496-504.
14. This research was supported by the Advanced Research Projects Agency of the Department of Defense and was monitored by the Office of Naval Research under contract N00014-73-C-0438 to the Oregon Research Institute, Eugene. Additional support for this research was provided by the Research and Development Authority of the Hebrew University, Jerusalem, Israel.

Rural Health Care in Mexico?

Present educational and administrative structures must be changed in order to improve health care in rural areas.

Luis Cañedo

The present health care structure in Mexico focuses attention on the urban population, leaving the rural communities practically unattended. There are two main factors contributing to this situation. One is the lack of coordination among the different institutions responsible for the health of the community and among the educational institutions. The other is the lack of information concerning the nature of the problems in rural areas. In an attempt to provide a solution to these problems, a program has been designed that takes into consideration the environmental conditions, malnutrition, poverty, and negative cultural factors that are responsible for the high incidences of certain diseases among rural populations. It is based on the development of a national information system for the collection and dissemination of information related to general, as well as rural, health care, that will provide the basis for a national health care system, and depends on the establishment of a training program for professionals in community medicine.

The continental and insular area of Mexico, including interior waters, is 2,022,058 square kilometers (1, 2). In 1970 the population of Mexico was 48,377,363, of which 24,055,305 persons (49.7 percent) were under 15 years of age. The Indian population made up 7.9 percent of the total (2, 3). As indicated in Table 1, 42.3 percent of the total population live in communities of less than 2,500 inhabitants, and in such communities public services as well as means of communication are very scarce or nonexistent. A large percentage (39.5 percent) of the economically active population is engaged in agriculture (4).

The country's population growth rate is high, 3.5 percent annually, and it seems to depend on income, being higher among the 50 percent of the population earning less than 675 pesos ($50) per family per month (5). The majority of this population lives in the rural areas. The most frequent causes of mortality in rural areas are malnutrition, infectious and parasitic diseases (6, 7), pregnancy complications, and accidents (2). In 1970 there were 34,107 doctors in Mexico (2). The ratio of inhabitants to doctors, which is 1,423.7, is not a representative index of the actual distribution of resources because there is a great scarcity of health professionals in rural areas and a high concentration in urban areas (Fig. 1) (7, 8).

In order to improve health at a national level, this situation must be changed. The errors made in previous attempts to improve health care must be avoided, and use must be made of the available manpower and resources of modern science to produce feasible answers at the community level. Although the main objective of a specialist in community medicine is to control disease, such control cannot be achieved unless action is taken against the underlying causes of disease; it has already been observed that partial solutions are inefficient (9). As a background to this new program that has been designed to provide health care in rural communities, I shall first give a summary of the previous attempts that have been made to provide such care, describing the various medical institutions and other organizations that are responsible for the training of medical personnel and for constructing the facilities required for health care.

The author is an investigator in the department of molecular biology at the Instituto de Investigaciones Biomédicas, Universidad Nacional Autónoma de México, Ciudad Universitaria, México 20, D.F. This article is adapted from a paper presented at the meeting on Science and Man in the Americas, jointly organized by the Consejo Nacional de Ciencia y Tecnología de México and the American Association for the Advancement of Science and held in Mexico City, 20 June to 4 July 1973.

