Biases april 2012

Transcript
Page 1: Biases april 2012

Jeran Binning DAU San Diego

Page 2: Biases april 2012

Some cognitive biases, of course, are flagrantly exhibited even in the most natural of settings. Take what Kahneman calls the “planning fallacy”: our tendency to overestimate benefits and underestimate costs, and hence foolishly to take on risky projects.

In 2002, Americans remodeling their kitchens, for example, expected the job to cost $18,658 on average, but they ended up paying $38,769.

Page 3: Biases april 2012

The planning fallacy is “only one of the manifestations of a pervasive optimistic bias,” Kahneman writes, which “may well be the most significant of the cognitive biases.”

Now, in one sense, a bias toward optimism is obviously bad, since it generates false beliefs — like the belief that we are in control, and not the playthings of luck.

But without this “illusion of control,” would we even be able to get out of bed in the morning?

Optimists are more psychologically resilient, have stronger immune systems, and live longer on average than their more reality-based counterparts.

Page 4: Biases april 2012

Biases in the evaluation of compound events are particularly significant in the context of planning.

The successful completion of an undertaking, such as the development of a new product, typically has a conjunctive character: for the undertaking to succeed, each of a series of events must occur.

Even when each of these events is very likely, the overall probability of success can be quite low if the number of events is large.
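The arithmetic behind this point can be made concrete with a short sketch (the 95%-per-step figure and the step counts below are illustrative assumptions, not from the slides):

```python
# Probability that a project succeeds when success requires every one of a
# series of independent steps to succeed (conjunctive structure).
def overall_success(p_step: float, n_steps: int) -> float:
    """Probability that all n_steps independent steps succeed."""
    return p_step ** n_steps

for n in (5, 10, 20, 50):
    print(f"{n:2d} steps at 95% each -> {overall_success(0.95, n):5.1%} overall")
# Output: 5 steps -> 77.4%, 10 steps -> 59.9%, 20 steps -> 35.8%, 50 steps -> 7.7%
```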

Page 5: Biases april 2012

In a series of experiments, Ellen Langer (1975) demonstrated first the prevalence of the illusion of control and second, that people were more likely to behave as if they could exercise control in a chance situation where “skill cues” were present.

By skill cues, Langer meant properties of the situation more normally associated with the exercise of skill, in particular the exercise of choice, competition, familiarity with the stimulus and involvement in decisions.

One simple form of this fallacy is found in casinos: when rolling dice in craps, it has been shown that people tend to throw harder for high numbers and softer for low numbers.

Under some circumstances, experimental subjects have been induced to believe that they could affect the outcome of a purely random coin toss. Subjects who guessed a series of coin tosses more successfully began to believe that they were actually better guessers, and believed that their guessing performance would be less accurate if they were distracted.

Page 6: Biases april 2012

Drawing on a vast body of research, Lears ranges through the entire sweep of American history as he uncovers the hidden influence of risk taking, conjuring, soothsaying, and sheer dumb luck on our culture, politics, social lives, and economy.

T.J. Jackson Lears “Something for Nothing” (2003)

Page 7: Biases april 2012

Moreover, as Kahneman notes, exaggerated optimism serves to protect both individuals and organizations from the paralyzing effects of another bias, “loss aversion”: our tendency to fear losses more than we value gains.

It was exaggerated optimism that John Maynard Keynes had in mind when he talked of the “animal spirits” that drive capitalism.

Page 8: Biases april 2012

"losses loom larger than corresponding gains” "In prospect theory, loss aversion refers to the tendency for people to strongly

prefer avoiding losses than acquiring gains. Some studies suggest that losses are as much as twice as psychologically powerful as gains.

Loss aversion was first convincingly demonstrated by Amos Tversky and Daniel Kahneman.”

Tversky and Kahneman (1991) "The central assumption of the theory is that

losses and disadvantages have greater impact on preferences than gains and advantages.”

"Numerous studies have shown that people feel losses more deeply than gains of the same value (Kahneman and Tversky 1979, Tversky and Kahneman 1991)."Goldberg and von Nitzsch (1999) pages 97-98

"Both the status quo bias and the endowment effect are part of a more general issue known as loss aversion." (Montier 2007, p. 32)

Page 9: Biases april 2012

Status quo bias: the tendency to prefer that things stay the same.

Page 10: Biases april 2012

Endowment effect: people demand more to give something up than they would be willing to pay to acquire it.

Page 11: Biases april 2012

Anchoring: the tendency to rely too heavily, or "anchor," on a past reference or on one trait or piece of information when making decisions.

Page 12: Biases april 2012

The framing of alternatives also affects decisions.

For example, when people (including doctors) who are considering a risky medical procedure are told that 90 percent survive five years, they are far more likely to accept the procedure than when they are told that 10 percent do not survive five years.

Because framing affects people's behavior, providing more information cannot remedy matters, unless the information is presented in a fully neutral fashion.

In some cases, additional information only increases people's anxiety and confusion, thereby reducing their welfare.

Page 13: Biases april 2012

In social psychology, the fundamental attribution error (also known as correspondence bias or attribution effect) describes the tendency to over-value dispositional or personality-based explanations for the observed behaviors of others while under-valuing situational explanations for those behaviors.

The fundamental attribution error is most visible when people explain the behavior of others. It does not explain interpretations of one's own behavior—where situational factors are often taken into consideration. This discrepancy is called the actor–observer bias.

As a simple example, if Alice saw Bob trip over a rock and fall, Alice might consider Bob to be clumsy or careless (dispositional). If Alice tripped over the same rock herself, she would be more likely to blame the placement of the rock (situational).

Page 14: Biases april 2012

Illusory correlation – inaccurately perceiving a relationship between two unrelated events

Page 15: Biases april 2012

Confirmation bias: the tendency to search for or interpret information in a way that confirms one's preconceptions.

Page 16: Biases april 2012

The inclination to see events that have occurred as more predictable than they in fact were before they took place. Hindsight bias has been demonstrated experimentally in a variety of settings, including politics, games and medicine.

In psychological experiments of hindsight bias, subjects also tend to remember their predictions of future events as having been stronger than they actually were, in those cases where those predictions turn out correct.

Prophecy that is recorded after the fact is an example of hindsight bias and has its own rubric: vaticinium ex eventu, "foretelling after the event."

One explanation of the bias is the availability heuristic: the event that did occur is more salient in one's mind than the possible outcomes that did not.

It has been shown that examining possible alternatives may reduce the effects of this bias.

Page 17: Biases april 2012

It's what we call in some circles the observation bias, or the related data mining problem. When you look at anything, say the stock market, you see the survivors, the winners; you don't see the losers because you don't observe the cemetery, and you will be likely to misattribute the causes that led to the winning.

Nassim Taleb

Page 18: Biases april 2012

Nassim Taleb: We have vital research in risk-bearing. The availability heuristic tells you that your perception of a risk is going to be proportional to how salient the event comes to your mind. It can come in two ways: either because it is compressed into a vivid image, or because it's going to elicit an emotional reaction in you. The latter is called the affect heuristic, recently developed as the "risk as feelings" theory. We observe it in trading all the time. Basically you only worry about what you know, and typically once you know about something the damage is done.

Page 19: Biases april 2012

Virtually all current theories of choice under risk or uncertainty are cognitive and consequentialist. They assume that people assess the desirability and likelihood of possible outcomes of choice alternatives and integrate this information through some type of expectation-based calculus to arrive at a decision.

The authors propose an alternative theoretical perspective, the risk-as-feelings hypothesis, that highlights the role of affect experienced at the moment of decision making. Drawing on research from clinical, physiological, and other subfields of psychology, they show that emotional reactions to risky situations often diverge from cognitive assessments of those risks.

When such divergence occurs, emotional reactions often drive behavior. The risk-as-feelings hypothesis is shown to explain a wide range of phenomena that have resisted interpretation in cognitive-consequentialist terms.

Loewenstein, G. F., Weber, E. U., Hsee, C. K., & Welch, N. (2001). Risk as feelings. Psychological Bulletin, 127(2), 267-286. Department of Social and Decision Sciences, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA.

Page 20: Biases april 2012

Keywords: rationality, risk analysis, risk perception, the affect heuristic

Modern theories in cognitive psychology and neuroscience indicate that there are two fundamental ways in which human beings comprehend risk. The "analytic system" uses algorithms and normative rules, such as probability calculus, formal logic, and risk assessment. It is relatively slow, effortful, and requires conscious control. The "experiential system" is intuitive, fast, mostly automatic, and not very accessible to conscious awareness.

The experiential system enabled human beings to survive during their long period of evolution and remains today the most natural and most common way to respond to risk. It relies on images and associations, linked by experience to emotion and affect (a feeling that something is good or bad). This system represents risk as a feeling that tells us whether it is safe to walk down this dark street or drink this strange-smelling water.

Proponents of formal risk analysis tend to view affective responses to risk as irrational.

Current wisdom disputes this view. The rational and the experiential systems operate in parallel and each seems to depend on the other for guidance. Studies have demonstrated that analytic reasoning cannot be effective unless it is guided by emotion and affect.

Rational decision making requires proper integration of both modes of thought. Both systems have their advantages, biases, and limitations. Now that we are beginning to understand the complex interplay between emotion and reason that is essential to rational behavior, the challenge before us is to think creatively about what this means for managing risk.

On the one hand, how do we apply reason to temper the strong emotions engendered by some risk events? On the other hand, how do we infuse needed "doses of feeling" into circumstances where lack of experience may otherwise leave us too "coldly rational"? This article addresses these important questions.

DOI: 10.1111/j.0272-4332.2004.00433.x

Paul Slovic, Melissa L. Finucane, Ellen Peters, and Donald G. MacGregor. Address correspondence to Paul Slovic, Decision Research, 1201 Oak Street, Eugene, OR 97401.

Page 21: Biases april 2012

"Affect", in this context, is simply a feeling—fear, pleasure, humorousness, etc. It is shorter in duration than a mood, occurring rapidly and involuntarily in response to a stimulus. Reading the words "lung cancer" usually generates an affect of dread, while reading the words "mother's love" usually generates an affect of affection and comfort. For the purposes of the psychological heuristic, affect is often judged on a simple diametric scale of "good" or "bad".

The theory of the affect heuristic is that a human being's affect can influence their decision-making. The affect heuristic received recent attention when it was used to explain the unexpected negative correlation between perceived benefit and perceived risk. Finucane and others theorized in 2000 that a good feeling toward a situation (i.e., positive affect) would lead to a lower risk perception and a higher benefit perception, even when this is logically not warranted for that situation. This implies that a strong emotional response to a word or other stimulus might alter a person's judgment: he or she might make different decisions based on the same set of facts and might thus make an illogical decision.

For example, in a blind taste test, a man might like Mirelli Beer better than Saddle Sweat Beer; however, if he has a strong gender identification, an advertisement touting Saddle Sweat as "a real man's brew" might cause him to prefer Saddle Sweat. Positive affect related to gender pride biases his decision sufficiently to overcome his cognitive judgment.

Page 22: Biases april 2012

A heuristic (pronounced hyu-RIS-tik) is a method to help solve a problem, commonly an informal one. The term is particularly used for a method that often rapidly leads to a solution that is usually reasonably close to the best possible answer.

Heuristics are "rules of thumb": educated guesses, intuitive judgments, or simply common sense.

In more precise terms, heuristics stand for strategies using readily accessible, though loosely applicable, information to control problem-solving in human beings and machines.

Page 23: Biases april 2012

“There is always a well-known solution to every human problem – neat, plausible, and wrong.”

H. L. Mencken, Prejudices: Second Series, 1920

Tailors' Rule of Thumb. This is the fictional rule described by Jonathan Swift in his satirical novel Gulliver's Travels:

"Then they measured my right Thumb, and desired no more; for by a mathematical Computation, that twice round the Thumb is once around the Wrist, and so on to the Neck and Waist, and by the help of my old Shirt, which I displayed on the Ground before them for a Pattern, they fitted me exactly."

Page 24: Biases april 2012

People rely on a limited number of heuristic principles (essentially rules of thumb) which reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations.

In general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors.

Daniel Kahneman

Rules of Thumb

Page 25: Biases april 2012

Financial - Rule of 72. A rule of thumb for exponential growth at a constant rate, and an approximation of the "doubling time" formula used in population growth, which says to divide 70 by the percentage growth rate (the exact figure is 69.3147181, i.e. 100 times the natural logarithm of 2, and the approximation is best when the growth rate per period is small). In terms of money, it is frequently easier to use 72 (rather than 70) because it works better in the 4%-10% range where interest rates often lie. Therefore, divide 72 by the percentage interest rate to determine the approximate time needed to double your money in an investment. For example, at 8% interest, your money will double in approximately 9 years (72/8 = 9).
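A short sketch comparing the rule against the exact doubling-time formula t = ln 2 / ln(1 + r), which follows directly from compound growth (the 8% case reproduces the example above):

```python
import math

def doubling_time_exact(rate_percent: float) -> float:
    """Exact number of periods to double at rate_percent compound growth per period."""
    return math.log(2) / math.log(1 + rate_percent / 100)

def doubling_time_rule_of_72(rate_percent: float) -> float:
    """Rule-of-thumb approximation: 72 divided by the percentage rate."""
    return 72 / rate_percent

for r in (4, 6, 8, 10):
    print(f"{r:2d}%: rule of 72 = {doubling_time_rule_of_72(r):4.1f} periods, "
          f"exact = {doubling_time_exact(r):4.1f} periods")
# At 8%, the rule gives 9.0 periods and the exact formula also gives about 9.0 periods.
```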

Tailors' Rule of Thumb A simple approximation that was used by tailors to determine the wrist, neck, and waist circumferences of a person through one single measurement of the circumference of that person's thumb. The rule states, typically, that twice the circumference of a person's thumb is the circumference of their wrist, twice the circumference of the wrist is the circumference of the neck, and twice around the neck is the person's waist. For example, if the circumference of the thumb is 4 inches, then the wrist circumference is 8 inches, the neck is 16 and the waist is 32. An interesting consequence of this is that — for those to whom the rule applies — this simple method can be used to determine if trousers will fit: the trousers are wrapped around the neck, and if the two ends barely touch, then they will fit. Any overlap or lack thereof corresponds to the trousers being too loose or tight, respectively.

Marine Navigation. A ship's captain should navigate to keep the ship more than a thumb's width from the shore, as shown on the nautical chart being used. Thus, with a coarse-scale chart, which provides few details of near-shore hazards such as rocks, a thumb's width represents a great distance and the ship would be steered far from shore; whereas on a fine-scale chart, in which more detail is provided, the ship could be brought closer to shore.

Statistics. The statistical rule of thumb says that for data that are approximately normally distributed, about 68% of data points fall within one standard deviation of the mean, and about 95% fall within two standard deviations. Chebyshev's inequality is a more general rule along these same lines and applies to all data sets.
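A quick simulation (illustrative only) of the 68%/95% rule on normally distributed data, next to Chebyshev's distribution-free lower bound of 1 - 1/k^2 within k standard deviations:

```python
import random
import statistics

random.seed(0)
data = [random.gauss(0, 1) for _ in range(100_000)]  # simulated normal data
mu, sigma = statistics.fmean(data), statistics.pstdev(data)

for k in (1, 2):
    within = sum(abs(x - mu) <= k * sigma for x in data) / len(data)
    chebyshev = 1 - 1 / k ** 2  # holds for any data set, not just normal ones
    print(f"within {k} sd: observed {within:.1%}, Chebyshev lower bound {chebyshev:.0%}")
# Expect roughly 68% within 1 sd and 95% within 2 sd; Chebyshev guarantees only 0% and 75%.
```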

Page 26: Biases april 2012

In an effort that spanned several years, we attempted to answer one basic question:

Under what conditions are the intuitions of professionals worthy of trust?

We do not claim that the conclusions we reached are surprising (many were anticipated by Shanteau, 1992, Hogarth, 2001, and Myers, 2002, among others), but we believe that they add up to a coherent view of expert intuition, which is more than we expected to achieve when we began.

Page 27: Biases april 2012

The NDM approach, which focuses on the successes of expert intuition, grew out of early research on master chess players conducted by deGroot (1946/1978) and later by Chase and Simon (1973). DeGroot showed that chess grand masters were generally able to identify the most promising moves rapidly, while mediocre chess players often did not even consider the best moves.

The chess grand masters mainly differed from weaker players in their unusual ability to appreciate the dynamics of complex positions and quickly judge a line of play as promising or fruitless.

Chase and Simon (1973) described the performance of chess experts as a form of perceptual skill in which complex patterns are recognized. They estimated that chess masters acquire a repertoire of 50,000 to 100,000 immediately recognizable patterns, and that this repertoire enables them to identify a good move without having to calculate all possible contingencies.

Strong players need a decade of serious play to assemble this large collection of basic patterns, but of course they achieve impressive levels of skill even earlier.

On the basis of this work, Simon defined intuition as the recognition of patterns stored in memory.

Page 28: Biases april 2012

Kahneman read Meehl’s book in 1955 while serving in the Psychological Research Unit of the Israel Defense Forces, and the book helped him make sense of his own encounters with the difficulties of clinical judgment.

One of Kahneman’s duties was to assess candidates for officer training, using field tests and other observations as well as a personal interview.

Kahneman (2003) described the powerful sense of getting to know each candidate and the accompanying conviction that he could foretell how well the candidate would do in further training and eventually in combat.

The subjective conviction of understanding each case in isolation was not diminished by the statistical feedback from officer training school, which indicated that the validity of the assessments was negligible.

Kahneman coined the term illusion of validity for the unjustified sense of confidence that often comes with clinical judgment. His early experience with the fallibility of intuitive impressions could hardly be more different from Klein’s formative encounter with the successful decision making of fire-fighting ground commanders.

Page 29: Biases april 2012

Our starting point is that intuitive judgments can arise from genuine skill—the focus of the Naturalistic Decision Making (NDM) approach—but that they can also arise from inappropriate application of the heuristic processes on which students of the heuristics-and-biases tradition have focused.

Page 30: Biases april 2012

Skilled judges are often unaware of the cues that guide them, and individuals whose intuitions are not skilled are even less likely to know where their judgments come from.

Page 31: Biases april 2012

True experts, it is said, know when they don’t know.

However, non-experts (whether or not they think they are) certainly do not know when they don’t know.

Subjective confidence is therefore an unreliable indication of the validity of intuitive judgments and decisions.

Page 32: Biases april 2012

The determination of whether intuitive judgments can be trusted requires an examination of the environment in which the judgment is made and of the opportunity that the judge has had to learn the regularities of that environment.

Page 33: Biases april 2012

We describe task environments as “high-validity” if there are stable relationships between objectively identifiable cues and subsequent events or between cues and the outcomes of possible actions.

Medicine and firefighting are practiced in environments of fairly high validity.

In contrast, outcomes are effectively unpredictable in zero-validity environments.

To a good approximation, predictions of the future value of individual stocks and long-term forecasts of political events are made in a zero-validity environment.

Page 34: Biases april 2012

Validity and uncertainty are not incompatible. Some environments are both highly valid and substantially uncertain.

Poker and warfare are examples. The best moves in such situations reliably increase the potential for success.

Page 35: Biases april 2012

An environment of high validity is a necessary condition for the development of skilled intuitions.

Other necessary conditions include adequate opportunities for learning the environment (prolonged practice and feedback that is both rapid and unequivocal).

If an environment provides valid cues and good feedback, skill and expert intuition will eventually develop in individuals of sufficient talent.

Page 36: Biases april 2012

Although true skill cannot develop in irregular or unpredictable environments, individuals will sometimes make judgments and decisions that are successful by chance.

These “lucky” individuals will be susceptible to an illusion of skill and to overconfidence (Arkes, 2001).

The financial industry is a rich source of examples.

Page 37: Biases april 2012

The situation that we have labeled fractionation of skill is another source of overconfidence. Professionals who have expertise in some tasks are sometimes called upon to make judgments in areas in which they have no real skill.

(For example, financial analysts may be skilled at evaluating the likely commercial success of a firm, but this skill does not extend to the judgment of whether the stock of that firm is underpriced.)

It is difficult both for the professionals and for those who observe them to determine the boundaries of their true expertise.

Page 38: Biases april 2012

We agree that the weak regularities available in low-validity situations can sometimes support the development of algorithms that do better than chance. These algorithms only achieve limited accuracy, but they outperform humans because of their advantage of consistency.

However, the introduction of algorithms to replace human judgment is likely to evoke substantial resistance and sometimes has undesirable side effects.
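A toy simulation of the consistency advantage noted above (the cue weights and noise levels are assumptions made up for illustration): the "algorithm" and the "human judge" use identical cue weights, but the judge applies them with trial-to-trial noise, and that inconsistency alone lowers predictive accuracy in a low-validity environment.

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

random.seed(1)
n = 10_000
cue1 = [random.gauss(0, 1) for _ in range(n)]
cue2 = [random.gauss(0, 1) for _ in range(n)]
# Low-validity environment: the outcome is mostly noise.
outcome = [0.4 * a + 0.3 * b + random.gauss(0, 1) for a, b in zip(cue1, cue2)]

algorithm = [0.4 * a + 0.3 * b for a, b in zip(cue1, cue2)]                     # consistent rule
judge = [0.4 * a + 0.3 * b + random.gauss(0, 0.7) for a, b in zip(cue1, cue2)]  # same rule, noisy application

print("algorithm vs outcome:", round(pearson(algorithm, outcome), 2))  # ~0.45
print("judge vs outcome:    ", round(pearson(judge, outcome), 2))      # ~0.26
```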

Page 39: Biases april 2012

The Drunkard’s Walk

• Functional magnetic resonance imaging, for example, shows that risk and reward are assessed by parts of the dopaminergic system, a brain-reward circuit important for motivational and emotional processes.

• The images show, too, that the amygdala, which is also linked to our emotional state, especially fear, is activated when we make decisions couched in uncertainty.

Page 40: Biases april 2012

The Drunkard’s Walk

• Fortune is fair in potentialities; she is not fair in outcomes. (p. 13)

• When we look at extraordinary accomplishments in sports, or elsewhere, we should keep in mind that extraordinary events can happen without extraordinary causes.

• Random events often look like nonrandom events, and in interpreting human affairs we must take care not to confuse the two. (p. 20)

Page 41: Biases april 2012

Selection Bias

• This bias makes us miscompute the odds and wrongly ascribe skills. If you funded 1,000,000 unemployed people endowed with no more than the ability to say "buy" or "sell", odds are that you will break even in the aggregate, minus transaction costs, but a few will hit the jackpot, simply because the base cohort is very large. It will be almost impossible not to have small Warren Buffetts by luck alone. After the fact they will be very visible and will derive precise and well-sounding explanations about why they made it. It is difficult to argue with them; "nothing succeeds like success". All these retrospective explanations are pervasive, but there are scientific methods to correct for the bias. This has not filtered through to the business world or the news media; researchers have evidence that professional fund managers are just no better than random and cost money to society (the total revenues from these transaction costs are in the hundreds of billions of dollars), but the public will remain convinced that "some" of these investors have skills.
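The arithmetic of this thought experiment is easy to check with a small simulation (the cohort size and horizon below are scaled-down assumptions, not Taleb's figures):

```python
import random

# Traders with zero skill make one random "buy"/"sell" call per year.
# After enough years, a few will have been right every single year purely by chance.
random.seed(2)
n_traders, n_years = 100_000, 10
perfect_records = sum(
    all(random.random() < 0.5 for _ in range(n_years)) for _ in range(n_traders)
)
expected = n_traders / 2 ** n_years
print(f"{perfect_records} of {n_traders:,} skill-free traders were right {n_years} years "
      f"in a row (about {expected:.0f} expected by chance alone)")
```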

Page 42: Biases april 2012

False Discovery Rate

July 13, 2008 • STRATEGIES • The Prescient Are Few • By MARK HULBERT

• HOW many mutual fund managers can consistently pick stocks that outperform the broad stock market averages, as opposed to just being lucky now and then?

• Countless studies have addressed this question, and have concluded that very few managers have the ability to beat the market over the long term. Nevertheless, researchers have been unable to agree on how small that minority really is, and on whether it makes sense for investors to try to beat the market by buying shares of actively managed mutual funds.

• A new study builds on this research by applying a sensitive statistical test borrowed from outside the investment world. It comes to a rather sad conclusion: There was once a small number of fund managers with genuine market-beating abilities, as judged by having past performance so good that their records could not be attributed to luck alone. But virtually none remain today. Index funds are the only rational alternative for almost all mutual fund investors, according to the study’s findings.

• The study, “False Discoveries in Mutual Fund Performance: Measuring Luck in Estimating Alphas,” has been circulating for over a year in academic circles. Its authors are Laurent Barras, a visiting researcher at Imperial College’s Tanaka Business School in London; Olivier Scaillet, a professor of financial econometrics at the University of Geneva and the Swiss Finance Institute; and Russ Wermers, a finance professor at the University of Maryland.

• The statistical test featured in the study is known as the “False Discovery Rate,” and is used in fields as diverse as computational biology and astronomy. In effect, the method is designed to simultaneously avoid false positives and false negatives — in other words, conclusions that something is statistically significant when it is entirely random, and the reverse.

• Both of those problems have plagued previous studies of mutual funds, Professor Wermers said. The researchers applied the method to a database of actively managed domestic equity mutual funds from the beginning of 1975 through 2006. To ensure that their results were not biased by excluding funds that have gone out of business over the years, they included both active and defunct funds. They excluded any fund with less than five years of performance history. All told, the database contained almost 2,100 funds.

• The researchers found a marked decline over the last two decades in the number of fund managers able to pass the False Discovery Rate test. If they had focused only on managers running funds in 1990 and their records through that year, for example, the researchers would have concluded that 14.4 percent of managers had genuine stock-picking ability. But when analyzing their entire fund sample, with records through 2006, this proportion was just 0.6 percent — statistically indistinguishable from zero, according to the researchers.

• This doesn’t mean that no mutual funds have beaten the market in recent years, Professor Wermers said. Some have done so repeatedly over periods as short as a year or two. But, he added, “the number of funds that have beaten the market over their entire histories is so small that the False Discovery Rate test can’t eliminate the possibility that the few that did were merely false positives” — just lucky, in other words.

• Professor Wermers says he was surprised by how rare stock-picking skill has become. He had “generally been positive about the existence of fund manager ability,” he said, but these new results have been a “real shocker.”

• WHY the decline? Professor Wermers says he and his co-authors suspect various causes. One is high fees and expenses. The researchers’ tests found that, on a pre-expense basis, 9.6 percent of mutual fund managers in 2006 showed genuine market-beating ability — far higher than the 0.6 percent after expenses were taken into account.

This suggests that one in 10 managers may still have market-beating ability. It’s just that they can’t come out ahead after all their funds’ fees and expenses are paid.

• Another possible factor is that many skilled managers have gone to the hedge fund world. Yet a third potential reason is that the market has become more efficient, so it’s harder to identify undervalued or overvalued stocks. Whatever the causes, the investment implications of the study are the same: buy and hold an index fund benchmarked to the broad stock market.

• Professor Wermers says his advice has evolved significantly as a result of this study. Until now, he says, he wouldn’t have tried to discourage a sophisticated investor from trying to pick a mutual fund that would outperform the market. Now, he says, “it seems almost hopeless.”

• Mark Hulbert is editor of The Hulbert Financial Digest, a service of MarketWatch. E-mail: [email protected].

• Copyright 2008 The New York Times Company
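The "False Discovery Rate" test featured in the article above can be sketched in its textbook form, the Benjamini-Hochberg step-up procedure (the study's actual estimator is more elaborate, and the p-values below are hypothetical):

```python
def benjamini_hochberg(p_values, q=0.05):
    """Return the indices of hypotheses declared significant while controlling the
    false discovery rate at level q (textbook Benjamini-Hochberg step-up procedure)."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # ranks by ascending p-value
    cutoff = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * q:
            cutoff = rank  # largest rank whose p-value clears its threshold
    return sorted(order[:cutoff])

# Hypothetical p-values for five managers' "no market-beating skill" tests.
p_vals = [0.001, 0.008, 0.039, 0.041, 0.27]
print(benjamini_hochberg(p_vals))  # -> [0, 1]: only the first two records survive FDR control
```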