Just a paradigm: evidence-based medicine in epistemological context

Miriam Solomon

Received: 8 October 2010 / Accepted: 28 July 2011 / Published online: 6 September 2011
© Springer Science+Business Media B.V. 2011

Abstract Evidence-Based Medicine (EBM) developed from the work of clinical epidemiologists at McMaster University and Oxford University in the 1970s and 1980s and self-consciously presented itself as a "new paradigm" called "evidence-based medicine" in the early 1990s. The techniques of the randomized controlled trial, systematic review and meta-analysis have produced an extensive and powerful body of research. They have also generated a critical literature that raises general concerns about its methods. This paper is a systematic review of the critical literature. It finds the description of EBM as a Kuhnian paradigm helpful and worth taking further. Three kinds of criticism are evaluated in detail: criticisms of procedural aspects of EBM (especially from Cartwright, Worrall and Howick), data showing the greater than expected fallibility of EBM (Ioannidis and others), and concerns that EBM is incomplete as a philosophy of science (Ashcroft and others). The paper recommends a more instrumental or pragmatic approach to EBM, in which any ranking of evidence is done by reference to the actual, rather than the theoretically expected, reliability of results. Emphasis on EBM has eclipsed other necessary research methods in medicine. With the recent emphasis on translational medicine, we are seeing a restoration of the recognition that clinical research requires an engagement with basic theory (e.g. physiological, genetic, biochemical) and a range of empirical techniques such as bedside observation, laboratory and animal studies. EBM works best when used in this context.

Keywords Evidence-based medicine . Philosophy of medicine . Kuhnian paradigm . Nancy Cartwright . Randomized controlled trial . Translational medicine . Evidence hierarchy . Meta-analysis . John Ioannidis . John Worrall

Euro Jnl Phil Sci (2011) 1:451–466
DOI 10.1007/s13194-011-0034-6

I am grateful to two anonymous reviewers for the European Journal for Philosophy of Science for helpful corrections and comments on this paper.

M. Solomon (*)
Department of Philosophy, Temple University, Philadelphia, PA 19122, USA
e-mail: [email protected]

ORIGINAL PAPER IN PHILOSOPHY OF MEDICINE

1 Introduction

Evidence-Based Medicine (EBM)1 is the application of methods of clinical epidemiology to the practice of medicine more generally. It was inspired by the post-World War II work of Archibald Cochrane, developed from the work of clinical epidemiologists at McMaster University and Oxford University in the 1970s and 1980s, and self-consciously presented as a “new paradigm” called “evidence-based medicine” in the early 1990s (Evidence-Based Medicine Working Group 1992). EBM was embraced in Canada and the UK in the 1990s, received with some ambivalence in the United States, and adopted in many other countries, both developed and developing (Daly 2005). Its techniques of population-based studies and systematic review have produced an extensive and powerful body of knowledge about medical diagnosis and treatment. A canonical and helpful definition of EBM2 is that of Davidoff et al. (1995) in an editorial in the British Medical Journal:

“In essence, evidence based medicine is rooted in five linked ideas: firstly, clinical decisions should be based on the best available scientific evidence; secondly, the clinical problem - rather than habits or protocols - should determine the type of evidence to be sought; thirdly, identifying the best evidence means using epidemiological and biostatistical ways of thinking; fourthly, conclusions derived from identifying and critically appraising evidence are useful only if put into action in managing patients or making health care decisions; and, finally, performance should be constantly evaluated.”

EBM regards its own epistemic techniques as superior to other more traditional methods such as clinical experience, expert opinion, and physiological reasoning. This is because the more traditional techniques are viewed as more fallible. There is not one new technique, but several. The following are typically regarded as part of EBM:

1. Rigorous design of clinical trials, especially the randomized controlled trial (RCT). The RCT is to be used wherever physically and ethically feasible. The trial should be double-masked (traditionally, “double-blinded”3) wherever possible.

2. Systematic evidence review and meta-analysis, including grading of the evidence in “evidence hierarchies.”

3. Outcome measures (leading to suggestions for improvement)

1 The term “evidence-based practice” may be replacing EBM, acknowledging the fact that the practice of medicine requires not only physicians but other health care professionals.
2 Some canonical definitions are unhelpful for a general understanding of EBM, for example, that given in (Sackett et al. 1996): “Evidence-based medicine is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.” This particular definition is probably used widely because it is brief, and because it has a rhetorical purpose—to address the frequent criticism of EBM that it applies to populations, not individuals.
3 “Double-masked” has largely replaced “double-blinded,” in order to avoid inappropriate use of terms relating to disability.

The RCT has often been described as the “gold standard” of evidence for effectiveness of medical interventions. It is a powerful technique, originally developed by the geneticist R.A. Fisher and applied for the first time in a medical context by A. Bradford Hill’s 1948 evaluation of streptomycin for tuberculosis (Doll et al. 1999). The double-masked RCT is designed to control for the placebo effect, for selection and other confounding biases, and for confirmation biases.

EBM also includes systematic and formal techniques for combining the results of different clinical trials. A systematic review does a thorough search of the literature and an evaluation and grading of clinical trials. An evidence hierarchy is typically used to structure the judgments of quality and strength of evidence. Meta-analysis integrates the actual data from different but similar high-quality trials to give an overall single statistical result.

Often, EBM is supplemented with formal techniques from Medical Decision Making (MDM) such as risk/benefit calculations. The risk/benefit calculations can be made for individual patients, making use of patient judgments of utility, or they can be made in the context of health care economics, for populations. MDM seeks to avoid common errors of judgment, such as availability and salience biases, in medical decision making.4
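
Such risk/benefit calculations are, at their core, expected-utility computations. The following sketch is purely illustrative (the probabilities and utilities are invented, not drawn from the MDM literature):

```python
def expected_utility(outcomes):
    """Expected utility of an option: sum of probability x utility
    over its mutually exclusive outcomes (probabilities sum to 1)."""
    return sum(p * u for p, u in outcomes)

# Hypothetical numbers: compare an intervention with watchful waiting.
# Each pair is (probability of outcome, patient-judged utility in 0..1).
intervention = expected_utility([(0.90, 1.0), (0.08, 0.5), (0.02, 0.0)])
waiting = expected_utility([(0.70, 0.9), (0.30, 0.4)])
```

On these invented numbers the intervention has the higher expected utility (0.94 versus 0.75); in actual MDM practice the utilities would come from patient judgments and the probabilities from trial data.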

The overall project is to use the techniques of EBM (and sometimes also MDM) to construct practice guidelines and to take care of individual patients. Each technique—the RCT, other high quality clinical trials, meta-analysis and systematic review—is based on its own core technical successes. The techniques fit together, and share a reliance on statistics, probability theory and utility theory. Journals, centers, clearinghouses, collaborations, educational programs, textbooks, committees and governments all produce and disseminate EBM.

EBM rose to dominance right after the prominence of consensus conferences for assessment of complex and sometimes conflicting evidence and may have been partly responsible for the decline of traditional consensus conferences. As late as 1990, an Institute of Medicine report evaluating the international uses of medical consensus conferences said “Group judgment methods are perhaps the most widely used means of assessment of medical technologies in many countries” (Baratz et al. 1990). Just a few years later, expert consensus is viewed in the same medical circles as the lowest level of evidence, when it is included in the evidence at all. For example, the Canadian Task Force on Preventive Health Care, which began in 1979 as a consensus conference program, now explicitly declares “Evidence takes precedence over consensus” (Canadian Task Force on Preventive Health Care).

EBM has not completely replaced group judgment, however. Consensus conferences (or something similar) are often still used for producing evidence-based guidelines or policy, that is, for translating a systematic review into a practical recommendation. And group judgment may be needed to set the standards to be used in systematic review. I will comment on this continued reliance on consensus methods later in the paper.

There is some indication that EBM is now past its peak, and being overshadowed in part by a new approach, that of “translational medicine” (Woolf 2008). Considerable resources from the NIH, from the European Commission and from the National Institute for Health Research in the UK have been redirected to “bench to bedside and back” research, which is typically the research that takes place before the clinical trials that are core to EBM. Donald Berwick, the founder of the Institute for Healthcare Improvement (which is the leading organization for quality improvement in healthcare), now claims that “we have overshot the mark” with EBM and created an “intellectual hegemony” that excludes important research methods from recognition (Berwick 2005). Berwick calls the overlooked methods “pragmatic science” and sees them as crucial for scientific discovery. He mentions the same sorts of approaches (use of local knowledge, exploration of hypotheses) that “translational medicine” advocates describe. After the discussions in this paper, some reasons for the recent turn to translational medicine will become clearer.

4 A useful essay discussing the differences and interactions between EBM and MDM is (Elstein 2004).

There is a vast literature on evidence-based medicine, most consisting of systematic evidence reviews for particular health care questions. A substantial portion of the literature, however, is a critical engagement with EBM as a whole, pointing out both difficulties and limitations. These discussions come from outsiders as well as insiders to the field of EBM. My goal in this paper is to do something like a systematic review of this literature, discerning the kinds of criticisms that seem cogent and presenting them in a structured manner. EBM, like all methodologies in medicine, has both core strengths and limitations. I will begin with an overview of some general social and philosophical characteristics of EBM, and then turn to the criticisms.

2 EBM as a “Kuhnian paradigm”

When the Evidence-Based Medicine Working Group described themselves as having a “new paradigm” of medical knowledge (Evidence-Based Medicine Working Group 1992), they particularly had in mind Kuhn’s (1962) characterization of a paradigm as setting the standards for what is to count as admissible evidence.5 EBM assessments make use of an “evidence hierarchy” (often called “levels of evidence”) in which higher levels of evidence are regarded as of higher quality than lower levels of evidence. A typical evidence hierarchy6 puts double-masked (or “double-blinded”) RCTs at the top, or perhaps right after meta-analyses or systematic reviews of RCTs. Unmasked RCTs come next, followed by well designed case controlled or cohort studies and then observational studies and case reports. Expert opinion, expert consensus, clinical experience and physiological rationale are at the bottom. The rationale for the evidence hierarchy is that higher levels of evidence are thought to avoid biases that are present in the lower levels of evidence. Specifically, randomization avoids selection and other confounding biases (but see (Worrall 2007b)) and masking helps to distinguish real from placebo effects (but see (Howick 2008; Howick 2011)) and avoid confirmation bias. Powering the trial with sufficient numbers of participants and using statistical tools avoids the salience and availability biases that can skew informal assessments and unsystematic clinical experience. Ultimately, trial results are graded for both quality and strength of evidence.

5 They were not, however, claiming along with many Kuhnians that there is subjectivity or relativity involved in what gets to count as evidence.
6 There are many such hierarchies in use, but all put double-masked RCTs at the top, or right after meta-analyses of RCTs, and clinical experience, expert consensus and physiological rationale at the bottom.

The language of Kuhnian paradigms has been overused and become somewhat clichéd, meaning something like “transformational new theory” in the typical quote from the EBM Working Group cited in the previous paragraph. In fact, EBM has many characteristics of a traditional Kuhnian paradigm,7 having all three of the characteristics of a Kuhnian paradigm discerned by Margaret Masterman (Lakatos and Musgrave 1970) and agreed to by Kuhn (Kuhn et al. 2000). These characteristics8 are helpful for understanding the import of EBM. First, EBM is a social movement with associated institutions such as Evidence-Based Practice Centers, official collaborations, textbooks, courses and journals. It is also, secondly, a general philosophy of medicine, defining both the questions of interest and the appropriate evidence. It is seen as the central methodology of medicine by its practitioners and as an unwelcome politically dominant movement by its detractors (e.g. (Charlton and Miles 1998; Denny 1999)). And third (sometimes overlooked by those who use Kuhn’s term “paradigm”), it is characterized by a core of technical results and successful exemplars that have been extended over time. Kuhn referred to such exemplars as “concrete puzzle solutions…employed as models or examples” (Kuhn 1970) and later as a “disciplinary matrix” including “symbolic generalization, models and exemplars” (Kuhn 1977). He regards this third meaning as the original and fundamental meaning of the term “paradigm” (Kuhn 1977).

Contrary to appearances and self-presentation, this core of technical results is not produced by a general algorithm or set of precise methodological rules. One of the things that Kuhn emphasized about paradigms is that they are driven primarily by exemplars, and not by rules. He writes (Kuhn 1970) that exemplars are “one sort of element…the concrete puzzle-solutions employed as exemplars which can replace explicit rules as a basis for the solution of the remaining puzzles of normal science.” Kuhn argued that this is significant because the rules are not the basis for the development of the science. Rather, Kuhn argues, less precise judgments about similarity of examples are used (Kuhn 1977).

The medical RCT traces its beginning to A. Bradford Hill’s 1948 evaluation of streptomycin for tuberculosis (Doll et al. 1999). It was initially resisted by many physicians used to treating each patient individually, therapeutically and with confidence in treatment choice (Marks 1997). Nevertheless, important trials such as the polio vaccine field trial of 1954 and the 1955 evaluation of treatments for rheumatic fever helped bring the RCT into routine use (Meldrum 1998; Meldrum 2000). In 1970 the RCT achieved official status in the USA with inclusion in the new FDA requirements for pharmaceutical testing (Meldrum 2000). One of the most well-known early successful uses of RCTs was the 1980s international study of aspirin and streptokinase for the prevention of myocardial infarction. However, not all early use of the RCT was straightforward or successful: the 1960s-1970s NCI randomized controlled trial of lumpectomy versus mastectomy for early stage breast cancer was not masked,9 yet it was highly regarded, and the attempt to conduct a Diet-Heart study in the 1960s was hampered and finally frustrated by the difficulties in implementing major changes in diet in one arm of the study (Marks 1997). This is an example of the finding that the methodology of the RCT does not readily apply to all the situations in which we might wish to use it. As Kuhn might put it, normal science is not a matter of simple repetition of the paradigm case; it requires minor or major tinkering, and sometimes ends in frustration (or what Kuhn would call an “anomaly”).

7 Interestingly, Sehon and Stanley (2003) argue that EBM should not be thought of as a Kuhnian paradigm, but, instead, as part of a Quinean holistic network of beliefs. My focus here is on the historic, rather than the popular meaning of “paradigm” and on what EBM is, not on what it should be.
8 There are many other characteristics of Kuhnian paradigms, such as the narrative of paradigm change, the emphasis on high-level theory, and the idea that paradigms replace one another, that do not apply to this case. I am not trying to apply Kuhn exhaustively, just to use some of his concepts where they may be explanatorily useful.

There are also variations in the design and analyses of RCTs. For example, some trials do an “intention to treat” analysis, dropping no experimental subjects from the trial, even if they fail to go through the course of treatment, and some trials do a “per-protocol” analysis in which only patients who complete the trial are included in the final results. Some trials have a placebo in the control arm and some trials have an established treatment in the control arm. It is often said that design and evaluation of an RCT requires “judgment” (see for example (Rawlins 2008)); by this what is meant is that trials cannot be designed by a universal set of rules and that the design and evaluation of trials requires domain expertise, not only statistical expertise. For example, domain expertise is needed in order to design both the dosage and the intervention in the control arm, and domain expertise is needed in order to specify appropriate trial selection criteria.
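
The difference between the two analyses can be made concrete with a small sketch (illustrative only; the record format and numbers are invented):

```python
def mean_outcome_by_arm(records, per_protocol=False):
    """Mean outcome per trial arm. Intention-to-treat keeps every
    randomized subject; per-protocol keeps only those who completed
    the assigned course of treatment."""
    results = {}
    for arm in sorted({r["arm"] for r in records}):
        group = [r for r in records
                 if r["arm"] == arm and (r["completed"] or not per_protocol)]
        results[arm] = sum(r["outcome"] for r in group) / len(group)
    return results

# Toy data: one dropout in the treatment arm with a poor outcome.
trial = [
    {"arm": "treatment", "completed": True,  "outcome": 0.8},
    {"arm": "treatment", "completed": False, "outcome": 0.2},
    {"arm": "control",   "completed": True,  "outcome": 0.4},
    {"arm": "control",   "completed": True,  "outcome": 0.6},
]
itt = mean_outcome_by_arm(trial)                    # treatment mean 0.5
pp = mean_outcome_by_arm(trial, per_protocol=True)  # treatment mean 0.8
```

The per-protocol analysis looks more favorable to the treatment here precisely because it drops the non-completer, which is why the choice between the two analyses is itself a judgment about what the trial is measuring.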

The same insights apply to systematic reviews and meta-analyses. The first systematic review is often identified as the Oxford Database of Perinatal Trials 1989 study of corticosteroids for fetal lung development; this study was the basis for the development of the Cochrane Collaboration in 1993, which has since then done over 3,000 systematic reviews. Other organizations producing systematic reviews include the Agency for Health Care Research and Quality (AHRQ) and its fourteen Evidence-Based Practice Centers and the American College of Physicians (ACP) Journal Club. All systematic reviews use evidence hierarchies, but there is some variation in the hierarchies in use. The RCT is always at the top or just below meta-analyses of RCTs, but there are variations in where other kinds of studies are ranked, and in whether or not animal trials, basic science and expert opinion are included. Hierarchical rank is just one measure of the quality of a trial, which needs to be considered together with other measures of quality such as how well the trial handles withdrawals and how well it is randomized and masked. In 2002 the AHRQ reported forty systems of rating in use, six of them within its own network of evidence-based practice centers (AHRQ 2002). The GRADE Working Group, established in 2000, is attempting to reach consensus on one system of rating the quality and strength of evidence (Guyatt et al. 2008). This is an ironic development, given that EBM intends to replace group judgment methods!

Meta-analysis combines the results of several high-quality trials to get an overall measure of strength of evidence. It requires judgments about the similarity of trials for combination and the quality of evidence in each trial, as well as about the possibility of systematic bias in the evidence overall, for example due to publication bias and pharmaceutical company support. Meta-analysis is a formal technique, but not an algorithmic one: judgments need to be made about trial quality (as with systematic reviews, use of an evidence hierarchy is part of the process) and similarity of trial endpoints or other aspects of studies. Different meta-analyses of the same data have produced different conclusions (Juni et al. 1999; Yank et al. 2007). Steven Goodman (2002) is concerned that the disagreement between meta-analyses, specifically in the case of mammography screening, represents a “crisis for EBM.” I think it is not so much a crisis as a reminder of the limits of EBM.

9 Double-masking is the standard for high quality RCTs.
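
The arithmetic at the heart of combining trials can be illustrated with a minimal sketch of fixed-effect, inverse-variance pooling (one standard pooling method among several; the trial numbers are invented):

```python
def fixed_effect_meta(estimates, std_errors):
    """Inverse-variance (fixed-effect) pooling: each trial's effect
    estimate is weighted by the reciprocal of its variance, so larger,
    more precise trials dominate the combined result."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Two hypothetical trials: a precise one (SE 0.1) and a noisy one (SE 0.2).
pooled, pooled_se = fixed_effect_meta([0.5, 0.3], [0.1, 0.2])
```

The pooled estimate (0.46) sits closer to the precise trial's result, and the pooled standard error is smaller than either trial's alone. The formal step is mechanical; the judgments emphasized above (which trials are similar enough to pool, how to grade their quality) happen before these numbers are entered.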

The identification of EBM with a Kuhnian paradigm, useful though it is, should not be taken too scrupulously. Exemplars and judgments of similarity are important, but rules also play a role. Kuhnian claims about incommensurability between paradigms and the social constitution of objectivity are controversial here and would certainly be denied by practitioners of EBM.10 We have moved on from Kuhn’s ideas, revolutionary in the 1960s, but now built upon and transformed in more sophisticated ways.

3 Critical discussions of EBM

Critical discussions of EBM tend to focus on questioning the procedural necessity and sufficiency of the technical requirements (especially for the RCT), the reliability of EBM in practice, or on EBM’s explicit or implicit claims to be a general philosophy of medicine. I’ll examine these three areas in turn.

3.1 Criticisms of procedural aspects of EBM

Many of these criticisms of EBM procedures have come from British philosophers of science associated with the London School of Economics. Their main approach is to argue that the “gold standard” (the double-masked RCT) is neither necessary (Howick 2011; Worrall 2007b) nor sufficient (Cartwright 2010) for clinical research. They argue that RCTs do not always control for the biases they are intended to control, they do not produce reliably generalizable knowledge, or they can be unnecessary constraints on clinical testing. These arguments are theoretical and abstract in character, although they are sometimes illustrated by examples. I distinguish them from arguments that RCTs have difficulties in practice, that is, from evaluation of RCTs based on the actual outcomes of such studies, which will be discussed in the next subsection (3B).

John Worrall (2002, 2007a, 2007b) argues that randomization is just one way, and an imperfect way, of controlling for confounding factors that might produce bias. The problem is that randomization can control only for most but not for all confounding factors. When there are indefinitely many factors, both known and unknown, which may lead to bias, chances are that any one randomization will not randomize with respect to all these factors. Under these circumstances, Worrall concludes, chances are that any particular clinical trial will have at least one kind of bias, making the experimental group relevantly different from the control group, just by accident. The only way to avoid this is to re-randomize and do another clinical trial, which may, again by chance, eliminate the first trial’s confounding bias but introduce another. Worrall concludes that the RCT does not yield reliable results unless it is repeated time and again, re-randomizing each time, and the results are aggregated and analyzed overall. This is practically speaking impossible. In context, Worrall is less worried about the reliability of RCTs than he is about the assumption that they are much more reliable—in a different epistemic class—than e.g. well-designed observational (“historically controlled”) studies in which there is no randomization. He is arguing that the RCT should be taken off its pedestal and that all trials can have inadvertent bias due to differences between the control and the experimental group.

10 Critics of EBM have occasionally presented EBM as a political and rhetorical movement, e.g. (Charlton and Miles 1998), emphasizing the ways in which it appears to lack rationality.

In a series of articles, Nancy Cartwright (2007a, b, 2009, 2010) points out that RCTs may have internal validity, but their external validity and hence their applicability to real world questions is dependent on the similarity of the test population and context to the population and context targeted by the intervention. For example, she cites the failure of the California class-size reduction program, which was based on the success of a RCT in Tennessee, as due to failure of external validity (Cartwright 2009). She does not give a medical example of actual failure of external validity (hence my classification of her criticisms of EBM as theoretical in character), although she gives one of possible failure: prophylactic antibiotic treatment of children with HIV in developing countries. UNAIDS and UNICEF 2005 treatment recommendations were based on the results of a 2004 RCT in Zaire. Cartwright is concerned that the Zaire results will not generalize to resource-poor settings across other countries in sub-Saharan Africa (Cartwright 2007b). Concern about external validity is reasonable and there are classic medical examples of lack of external validity. For example, some recommendations for the treatment of heart disease, developed in trials of men only, do not apply to women. There is a history of challenges to RCTs on the grounds that they have excluded certain groups from participation (e.g. women, the elderly, children) yet are used for general health recommendations. The exclusions are made on epistemic and/or ethical grounds. These days, women are less likely to be excluded because the NIH and other granting organizations require their participation in almost all clinical trials, but other exclusions, such as those based on age, remain. Cartwright expresses the concern about external validity in its most general form. In her most recent work (Cartwright 2010) she describes four conditions that need to be met for external validity.11 These four conditions demand considerable domain knowledge, i.e. knowledge of the particular causal interactions that the intervention relies upon.

Jeremy Howick (2008, 2011) argues that masking is not useful outside of contexts in which outcomes are measured subjectively and that masking is both impossible and unnecessary when dealing with large effect size. It is impossible when dealing with large effect size because the effects of the drug unmask the assignment. He also argues that masking is in practice inadequate for placebo controlled trials, since participants can usually tell through the presence of side effects whether or not they are receiving an active intervention.

11 Cartwright’s (2010) four conditions are knowledge of “Roman laws” (laws that are general enough), “the right support team” (all necessary conditions), “straight sturdy ladders” (for climbing up and down levels of abstraction) and “unbroken bridges” (no interfering conditions).

These three sets of criticisms by Worrall, Cartwright and Howick are significant, and I evaluate them next, beginning with Worrall’s papers. Randomization is unlikely to control for confounding factors only in the event that there are many unrelated population variables that influence outcome, because only in that complex case is one of those variables likely to be accidentally unbalanced by the randomization. Worrall considers only the abstract possibility of multiple unknown variables; he does not consider the likely relationship (correlation) of those variables with one another and he does not give us reason to think that, in practice, randomization generally (rather than rarely) leaves some causally relevant population variables accidentally selected for and thereby able to bias the outcome.12 In addition, successful replication adds evidence that any confounder inadvertently introduced is not causally responsible for the outcome.
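
The probabilistic point at issue can be illustrated with a small simulation (the parameters are invented for illustration, not taken from Worrall's texts): with many independent covariates, a single randomization quite often leaves at least one of them nominally imbalanced, even though any particular covariate is rarely imbalanced.

```python
import random

def imbalance_rate(n_per_arm=50, n_covariates=20, n_trials=500, seed=0):
    """Fraction of simulated randomizations in which at least one binary
    covariate (prevalence 0.5) differs between the two arms by more than
    roughly two standard errors of the difference in proportions."""
    rng = random.Random(seed)
    # SE of a difference of two proportions, each with p = 0.5
    threshold = 1.96 * (0.25 / n_per_arm + 0.25 / n_per_arm) ** 0.5
    hits = 0
    for _ in range(n_trials):
        for _ in range(n_covariates):
            arm_a = sum(rng.random() < 0.5 for _ in range(n_per_arm))
            arm_b = sum(rng.random() < 0.5 for _ in range(n_per_arm))
            if abs(arm_a - arm_b) / n_per_arm > threshold:
                hits += 1
                break
    return hits / n_trials
```

Run with the defaults, well over half of the simulated randomizations show at least one imbalanced covariate, while any single covariate is imbalanced only about one time in twenty; this asymmetry is what Worrall's argument exploits. The reply above is that real covariates are correlated (so the effective number of independent variables is smaller) and that replication dilutes the effect of any one accidental imbalance.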

Cartwright uses examples from education, economics and international development to show lack of external validity. In general, her examples show failures of interventions to generalize, often because of cultural differences between populations. External validity in medical trials is more explored territory. We typically already know, from some of the trial selection criteria, where the controversies about generalization lie (see also Table 2 in Rawlins (2008), which sets out the problems with generalization). This does not mean that we can figure out the domain of application of trial results in a simple or formulaic manner. Domain expertise is essential for projection, as of course Nelson Goodman argued long ago (1955). Cartwright’s four conditions (2010) have a role here as non-formal criteria for assessing external validity. It should also be noted that Cartwright’s discussion applies broadly to experimental and evidential reasoning and not specifically to trial methodology. It is not a specific criticism of RCTs, although because of what she calls the “vanity of rigor” of EBM (Cartwright 2007a), the criticism is especially pertinent to RCTs.

Howick is correct that masking is important only for detecting small effects with subjectively measured outcomes, but this is (unfortunately) true of many recent advances in medicine and therefore widely applicable. It is not often that we have a new intervention with the dramatic success of e.g. insulin for diabetes or surgery for acute appendicitis. Howick is right to see that the methodology of the RCT is suited for some interventions and not suited for others, but the methodology is, in fact, suited for many if not most of the health care interventions currently in development.

The approaches used by Worrall, Cartwright and Howick argue that the RCT is neither a necessary nor a sufficient method for getting knowledge from clinical trials. They argue that other methods can be equally or more effective in specific circumstances and that knowledge from trials always involves projection to untested domains. EBM enthusiasts are beginning to acknowledge this sensible moderation of their views, as the recent Harveian Oration by Sir Michael Rawlins13 shows (Rawlins 2008). In this paper, Rawlins argues that “the notion that evidence can be

12 Worrall might respond that he is only criticizing those methodologists who make abstract and general claims about freedom from “all possible” biases. This is fine, so long as no conclusions are drawn for RCTs in practice.
13 Michael Rawlins is the head of NICE (National Institute for Health and Clinical Excellence) in the UK, which bases its policies and guidelines on the results of EBM.



reliably placed in hierarchies is illusory,” that “striking effects can be discerned without the need for RCTs” and that the findings of RCTs should be extrapolated with caution.

Granting these qualifications puts EBM in the same category as other successful scientific methodologies. They are useful tools in the domains in which they work, but they do not work everywhere or always.

3.2 Effectiveness of EBM methods in practice

How reliable is EBM in practice? RCTs and meta-analyses generate claims with stated confidence levels. Typically, RCTs give 95% confidence levels and meta-analyses much higher confidence levels. It follows that each RCT has a 5% chance of producing a false positive (and each meta-analysis much less). Yet, in practice, RCTs and meta-analyses are much more fallible.
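The 5% figure is a theoretical expectation, which a minimal simulation can make concrete. The following sketch (my illustration, not drawn from the EBM literature; the normal-approximation test statistic, sample sizes and trial count are arbitrary choices) repeatedly runs a two-arm “trial” in which both arms are sampled from the same distribution, so that every statistically significant result is, by construction, a false positive:

```python
import math
import random

def two_sample_z_pvalue(a, b):
    """Two-sided p-value for a difference in means, using a normal
    approximation (adequate for the sample sizes simulated here)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    # convert |z| to a two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
trials, alpha, n = 2000, 0.05, 100
false_positives = 0
for _ in range(trials):
    # both arms drawn from the same distribution: the null is true
    control = [random.gauss(0, 1) for _ in range(n)]
    treatment = [random.gauss(0, 1) for _ in range(n)]
    if two_sample_z_pvalue(control, treatment) < alpha:
        false_positives += 1

print(false_positives / trials)  # approximately 0.05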

Ioannidis (2005) did a study of 49 highly cited original research studies. Less than half (44%) were replicated; 16% were contradicted by subsequent studies and 16% found the effect to be smaller than in the original study; the rest were not repeated or challenged. Another, better known statistic is that studies funded by pharmaceutical companies—even when properly masked and of highest quality—have an astonishingly higher chance (three or four times the probability) of showing effectiveness of an intervention than studies not funded by pharmaceutical companies (Als-Nielsen et al. 2003; Bekelman et al. 2003; Bero et al. 2007; Lexchin et al. 2003). And LeLorier et al. (1997) found that 35% of the time, the outcome of RCTs is not predicted accurately by previous meta-analysis.

This is a large and partly unexplained failure rate. Some suggest that factors such as publication bias, time to publication bias and pharmaceutical funding bias (which subtly affects trial design and evaluation) are responsible for the worse-than-expected track record of RCTs, systematic reviews and meta-analyses. Publication bias occurs when studies with null or negative results14 are not written up or not accepted for publication because they are wrongly thought to be of less scientific significance. Steps to address this bias have been taken in many areas of medical research by creating trial registries and making the results of all trials public. Time to publication bias is a more recently discovered phenomenon: trials with null or negative results, even when they are published, take much longer to appear than trials with positive results (6–8 years for null or negative results compared with 4–5 years for positive results) (Hopewell et al. 2007). It is possible that steps taken to correct for publication bias will also help correct for time to publication bias.

The additional bias created by pharmaceutical funding is not fully understood, especially since many of these trials are properly randomized and double-masked and satisfy rigorous methodological criteria. Some suggest that pharmaceutical companies deliberately select a weak control arm, for example by selecting a low dose of the standard treatment, giving the new drug a greater chance of relative

14 In this context, a positive trial is one in which the experimental arm of the trial is more effective, a null result is one in which both arms are equally effective, and a negative trial is one in which the control arm is more effective.



success. There can also be biases that enter into the analysis of data, particularly when endpoints are not specified in advance. It is hoped that a clinical trials registry will help correct for publication bias and ex post facto manipulation of endpoints. However, at present we are a long way from correcting for bias created by pharmaceutical funding.15 Disclosure of funding source is helpful for evaluation, but often this information is lost in systematic review and meta-analysis.

Since the performance of RCTs is so flawed, it is worth asking whether other kinds of clinical trials, further down the evidence hierarchy, are even less reliable. This would be expected in the abstract, since the further down the evidence hierarchy, the more possible sources of bias. Studies by Benson and Hartz (2000) and Concato et al. (2000) find that many well-designed observational studies produce the same results as RCTs.16 The matter is controversial, but a recent article by Ian Shrier et al. strongly argues for the inclusion of observational studies in systematic reviews and meta-analyses (Shrier et al. 2007).

This result corroborates the intervention of early AIDS activists, who argued against the imposition of RCTs for AZT on both ethical and epistemic grounds (Epstein 1996). They argued that such trials are morally objectionable in that they deprive the individuals in the placebo arm of the only hope for a cure (at that time). And they argued that such trials are epistemically unnecessary because an RCT is not the only way to discern the effectiveness of anti-retroviral drugs—a claim that, in hindsight, has proved correct, as a combination of historical controlled trials and laboratory studies has provided the knowledge of dramatically effective anti-retrovirals in clinical use today. These days, of course, no-one needs to get a placebo, and RCTs can continue to detect small improvements of protocol without such strenuous moral objections.

Finally, EBM has been asked to evaluate itself using its own standards of evaluation. This would involve showing not merely that a specific EBM intervention improves outcomes, but that more general use of systematic evidence reviews and so forth in clinical decision making results in improved outcomes for patients. In theory, we would of course expect improved outcomes. But what matters here is not theory but practice, and no-one has yet designed or carried out a study to test this (Charlton and Miles 1998; Cohen et al. 2004; Straus and McAlister 2000).

3.3 Criticisms of EBM as a general philosophy of medicine

As with most paradigms (new ways of knowing), the light shone on the paradigm flatters it and puts everything else into the shadows. Critics have protested, variously, that EBM overlooks the role of clinical experience, expert judgment, intuition, medical authority, patient goals and values, local health care constraints and the basic medical sciences, including the structure of theory and the relations of causation. This is a long and complex list of intertwined scientific, hermeneutic, political and ethical considerations. Perhaps the most common criticism of EBM is that it deals with

15 I recommend that in the meantime we correct for funding bias by asking for a higher level of significance from the results of trials funded by pharmaceutical companies.
16 An editorial in the same issue of NEJM (Pocock and Elbourne 2000) strongly protests these conclusions, partly in the name of EBM orthodoxy, but partly also on the basis of some well known RCTs which contradicted the results of observational trials.



statistical results, and application of those results to particular cases is said to require a different set of skills (e.g. Cohen et al. 2004; Feinstein and Horwitz 1997; Hampton 2002; Straus and McAlister 2000; Tonelli 1998). EBM advocates dispute this, and this is the reason for the early redefinition of the enterprise: “Evidence-based medicine is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients” (Sackett et al. 1996). Sorting out this complicated dispute is beyond the scope of this paper; I do so elsewhere (Solomon, in progress).

Another common complaint, which covers several specific complaints, is that EBM is a scientific approach that overlooks the “art” of medicine (e.g. Montgomery 2006; Tonelli 1998). Elsewhere I have argued that the traditional dichotomy between the “art” and the “science” of medicine is no longer helpful (Solomon 2008). This paper focuses on what EBM leaves out, rather than on whether to count that as art or as science.

In this subsection I will focus on the persistent criticism that EBM ignores the basic sciences that guide both research and clinical practice (Ashcroft 2004; Bluhm 2005; Charlton and Miles 1998; Cohen et al. 2004; Harari 2001; Tonelli 1998). Basic sciences guide research in suggesting hypotheses about disease processes and mechanisms for action of interventions. Basic sciences guide clinical practice in helping physicians tailor the results of epidemiological studies to the needs of particular patients, who may have unique physiological and pathological conditions. EBM is scientifically superficial: it measures correlations. EBM does not model or theorize about the complete organism, still less the complete organism in its social and environmental context. In terms of scientific theory, it is thin; what some have called “empiricistic” (Harari 2001).17 Charlton and Miles (1998) claim that it is “statistical rather than scientific.” Ashcroft writes that EBM is “autonomous of the basic sciences” and “blind to mechanisms of explanation and causation” (2004, p. 134). Ashcroft regards this as an advantage, rather than a disadvantage, because it means that EBM does not have to worry that our basic theories may be incorrect. Ashcroft allies himself with Nancy Cartwright’s realism about phenomenal laws and antirealism about deeper laws18 at this point. Others, however, see the eschewing of scientific theorizing in favor of discovery of robust statistical correlations as problematic (Harari 2001; Tonelli 2006). They consider theorizing as important to medicine as it is to the pure sciences.

Whatever one’s views about scientific realism, EBM typically depends upon a background of basic science research that develops the interventions and suggests the appropriate protocols. It is rare for an intervention without physiological rationale to be tested (although this does happen, especially in the areas of complementary and alternative medicine, but in these cases there is typically an alternative rationale, perhaps in the frameworks of Asian metaphysics). Moreover, as discussed above, scientific judgment enters into the design of appropriate randomized controlled trials (choice of control, test population etc.) and into the interpretation of the applicability of results (external validity). Of course, many interventions with excellent physiological rationales and good in vitro and in vivo performance fail when tested in human beings or fail when tested for external

17 In fact, EBM is the successor to ancient “empiric” approaches to medicine.
18 Cartwright is, of course, a realist about underlying causal processes.



validity. That does not mean that there is anything wrong with physiological reasoning or that we can use a more reliable method. The basic science work is fallible, but it is not dispensable. Even Nancy Cartwright (1989) would agree that we cannot replace physical theory with phenomenal laws alone.

In the past 5 years a new approach to medical research has risen to prominence internationally: what is called “translational medicine,” to be achieved by creating research centers, as well as journals, conferences, training programs and so forth. The NIH made it a priority in its “Roadmap” in 2004 and started offering Clinical and Translational Science Awards in 2006. Fifty-five institutes have been created (as of May 2011), mostly in universities and medical centers, and the NIH hopes to fund 60, at a total cost of $500 million annually. The European Commission plans to use most of its billion-euro-a-year budget for the next few years for translational research. In the UK, the National Institute for Health Research has established 11 centers at a total cost of about 100 million pounds annually. The idea behind translational medicine is to facilitate greater interaction between basic science research and research in clinical medicine.19 The buzzwords are “synergize,” “catalyze” and “interdisciplinary.” The idea is to bring the different researchers and their laboratories into greater physical proximity. This is an interesting retro-intervention in these days of global electronic communication and global travel.

From the perspective of the discussion in this section, the development of translational medicine or something like it was only a matter of time. EBM has such high claims to scientific objectivity that it attracted much talent and effort from clinical researchers. Perhaps the increased focus on formal epidemiological work eventually made apparent what was left out, namely engagement with substantial physiological and biomolecular theories. The model of basic science doing the research and clinical researchers testing the products is now perceived as limited; actually, it leaves all the fun and the creativity to the basic researchers, and deprives them of the input of clinical knowledge and observations from the clinical researchers.

4 Conclusions

EBM gives a set of formal techniques for evaluating the effectiveness of clinical interventions. The techniques are powerful, especially when evaluating interventions that offer incremental advantages over current standards of care, and especially when the determination of success has subjective elements. EBM techniques do not deliver the reliability that is theoretically and statistically expected from them. Results are compromised by publication bias, time to publication bias, interests of funding organizations and other unknown factors. Maintaining a strict evidence hierarchy makes little sense when the actual reliability of “gold standard” evidence is so much less than the expected reliability. I recommend a more instrumental or pragmatic approach to EBM, in which any ranking of evidence is done by reference to the

19 Technically, “translational research” includes both the bench-to-bedside-and-back (T1) and the clinical-research-to-everyday-practice (T2) “translational blocks.” See Woolf (2008). But most of the resources and rhetoric favor the former.



actual, rather than the theoretically expected, reliability of results. So, for example, RCTs and observational trials might be at the same level in the hierarchy (based on their comparable reliability in practice) and trials designed and funded by pharmaceutical companies a level below independent trials (irrespective of apparent trial rigor, based on their track record of biased outcomes).

Emphasis on EBM has eclipsed other necessary research methods in medicine, even those methods necessary for its own development and application. With the recent emphasis on translational medicine, we are seeing a restoration of the recognition that clinical research requires an engagement with basic theory (e.g. physiological, genetic, biochemical) and a range of empirical techniques such as bedside observation, laboratory and animal studies. EBM works best when used in this pluralistic methodological context.

References

AHRQ. (2002). Systems to rate the strength of scientific evidence. Evidence Report/Technology Assessment 47.

Als-Nielsen, B., Chen, W., Gluud, C., & Kjaergard, L. L. (2003). Association of funding and conclusions in randomized drug trials: A reflection of treatment effect or adverse events? JAMA: The Journal of the American Medical Association, 290(7), 921–928.

Ashcroft, R. E. (2004). Current epistemological problems in evidence based medicine. Journal of Medical Ethics, 30(2), 131–135.

Baratz, S. R., Goodman, C., & Council on Health Care Technology. (1990). Improving consensus development for health technology assessment: An international perspective. Washington: National Academy Press.

Bekelman, J. E., Li, Y., & Gross, C. P. (2003). Scope and impact of financial conflicts of interest in biomedical research: A systematic review. JAMA: The Journal of the American Medical Association, 289(4), 454–465.

Benson, K., & Hartz, A. J. (2000). A comparison of observational studies and randomized, controlled trials. The New England Journal of Medicine, 342(25), 1878–1886.

Bero, L., Oostvogel, F., Bacchetti, P., & Lee, K. (2007). Factors associated with findings of published trials of drug-drug comparisons: Why some statins appear more efficacious than others. PLoS Medicine, 4(6), e184.

Berwick, D. M. (2005). Broadening the view of evidence-based medicine. Quality and Safety in Health Care, 14(5), 315–316.

Bluhm, R. (2005). From hierarchy to network: A richer view of evidence for evidence-based medicine. Perspectives in Biology and Medicine, 48(4), 535–547.

Canadian Task Force on Preventive Health Care. CTFPHC History/Methodology. Retrieved 07/07, 2009, from http://www.ctfphc.org

Cartwright, N. (1989). Nature's capacities and their measurement. Oxford; New York: Clarendon Press; Oxford University Press.

Cartwright, N. (2007a). Are RCTs the gold standard? Biosocieties, 2(2), 11–20.

Cartwright, N. (2007b). Evidence-based policy: Where is our theory of evidence? Center for Philosophy of Natural and Social Science, London School of Economics, Technical Report 07/07.

Cartwright, N. (2009). Evidence-based policy: What's to be done about relevance. Philosophical Studies, 143(1), 127–136.

Cartwright, N. (2010). Will this policy work for you? Predicting effectiveness better: How philosophy helps. Unpublished manuscript.

Charlton, B. G., & Miles, A. (1998). The rise and fall of EBM. Quarterly Journal of Medicine, 91(5), 371–374.

Cohen, A. M., Stavri, P. S., & Hersh, W. R. (2004). A categorization and analysis of the criticisms of evidence-based medicine. International Journal of Medical Informatics, 73(1), 35–43.

Concato, J., Shah, N., & Horwitz, R. I. (2000). Randomized, controlled trials, observational studies, and the hierarchy of research designs. The New England Journal of Medicine, 342(25), 1887–1892.



Daly, J. (2005). Evidence-based medicine and the search for a science of clinical care. Berkeley; New York: University of California Press; Milbank Memorial Fund.

Davidoff, F., Haynes, B., Sackett, D., & Smith, R. (1995). Evidence based medicine. BMJ (Clinical Research Ed.), 310(6987), 1085–1086.

Denny, K. (1999). Evidence-based medicine and medical authority. Journal of Medical Humanities, 20(4), 247–263.

Doll, R., Peto, R., & Clarke, M. (1999). First publication of an individually randomized trial. Controlled Clinical Trials, 20(4), 367–368.

Elstein, A. S. (2004). On the origins and development of evidence-based medicine and medical decision making. Inflammation Research, 53(Supplement 2), S184–S189.

Epstein, S. (1996). Impure science: AIDS, activism, and the politics of knowledge. Berkeley: University of California Press.

Evidence-Based Medicine Working Group. (1992). Evidence-based medicine. A new approach to teaching the practice of medicine. JAMA: The Journal of the American Medical Association, 268(17), 2420–2425.

Feinstein, A. R., & Horwitz, R. I. (1997). Problems in the “evidence” of “evidence-based medicine”. The American Journal of Medicine, 103(6), 529–535.

Goodman, N. (1955). Fact, fiction and forecast. Cambridge: Harvard University Press.

Goodman, S. N. (2002). The mammography dilemma: A crisis for evidence-based medicine? Annals of Internal Medicine, 137(5 Part 1), 363–365.

Guyatt, G. H., Oxman, A. D., Vist, G. E., Kunz, R., Falck-Ytter, Y., Alonso-Coello, P., et al. (2008). GRADE: An emerging consensus on rating quality of evidence and strength of recommendations. BMJ (Clinical Research Ed.), 336(7650), 924–926.

Hampton, J. R. (2002). Evidence-based medicine, opinion-based medicine, and real-world medicine. Perspectives in Biology and Medicine, 45(4), 549–568.

Harari, E. (2001). Whose evidence? Lessons from the philosophy of science and the epistemology of medicine. Australian and New Zealand Journal of Psychiatry, 35(6), 724–730.

Hopewell, S., Clarke, M. J., Stewart, L., & Tierney, J. (2007). Time to publication for results of clinical trials. Cochrane Database of Systematic Reviews, (1). doi:10.1002/14651858.MR000011.pub2

Howick, J. (2008). Against a priori judgments of bad methodology: Questioning double-blinding as a universal methodological virtue of clinical trials. PhilSci Archive.

Howick, J. (2011). The philosophy of evidence-based medicine. Wiley-Blackwell.

Ioannidis, J. P. (2005). Contradicted and initially stronger effects in highly cited clinical research. JAMA: The Journal of the American Medical Association, 294(2), 218–228.

Juni, P., Witschi, A., Bloch, R., & Egger, M. (1999). The hazards of scoring the quality of clinical trials for meta-analysis. JAMA: The Journal of the American Medical Association, 282(11), 1054–1060.

Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago: University of Chicago Press.

Kuhn, T. S. (1970). The structure of scientific revolutions (2nd ed.). Chicago: University of Chicago Press.

Kuhn, T. S. (1977). The essential tension: Selected studies in scientific tradition and change. Chicago: University of Chicago Press.

Kuhn, T. S., Conant, J., & Haugeland, J. (2000). The road since structure: Philosophical essays, 1970–1993, with an autobiographical interview. Chicago: University of Chicago Press.

Lakatos, I., & Musgrave, A. (1970). Criticism and the growth of knowledge. Cambridge: Cambridge University Press.

LeLorier, J., Gregoire, G., Benhaddad, A., Lapierre, J., & Derderian, F. (1997). Discrepancies between meta-analyses and subsequent large randomized, controlled trials. The New England Journal of Medicine, 337(8), 536–542.

Lexchin, J., Bero, L. A., Djulbegovic, B., & Clark, O. (2003). Pharmaceutical industry sponsorship and research outcome and quality: Systematic review. BMJ (Clinical Research Ed.), 326(7400), 1167–1170.

Marks, H. M. (1997). The progress of experiment: Science and therapeutic reform in the United States, 1900–1990. Cambridge: Cambridge University Press.

Meldrum, M. L. (1998). A calculated risk: The Salk polio vaccine field trials of 1954. British Medical Journal, 317, 1233–1236.

Meldrum, M. L. (2000). A brief history of the randomized controlled trial: From oranges and lemons to the gold standard. Hematology/Oncology Clinics of North America, 14(4), 745–760.

Montgomery, K. (2006). How doctors think: Clinical judgment and the practice of medicine. Oxford: Oxford University Press.

Pocock, S. J., & Elbourne, D. R. (2000). Randomized trials or observational tribulations? The New England Journal of Medicine, 342(25), 1907–1909.



Rawlins, M. (2008). De testimonio: On the evidence for decisions about the use of therapeutic interventions. Lancet, 372(9656), 2152–2161.

Sackett, D. L., Rosenberg, W. M., Gray, J. A., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't. BMJ (Clinical Research Ed.), 312(7023), 71–72.

Sehon, S. R., & Stanley, D. E. (2003). A philosophical analysis of the evidence-based medicine debate. BMC Health Services Research, 3(14).

Shrier, I., Boivin, J. F., Steele, R. J., Platt, R. W., Furlan, A., Kakuma, R., et al. (2007). Should meta-analyses of interventions include observational studies in addition to randomized controlled trials? A critical examination of underlying principles. American Journal of Epidemiology, 166(10), 1203–1209.

Solomon, M. (2008). Epistemological reflections on the art of medicine and narrative medicine. Perspectives in Biology and Medicine, 51(3), 406–417.

Solomon, M. (in progress). Beyond the art and science of medicine. Unpublished manuscript.

Straus, S. E., & McAlister, F. A. (2000). Evidence-based medicine: A commentary on common criticisms. Canadian Medical Association Journal, 163(7), 837–841.

Tonelli, M. R. (1998). The philosophical limits of evidence-based medicine. Academic Medicine, 73(12), 1234–1240.

Tonelli, M. R. (2006). Integrating evidence into clinical practice: An alternative to evidence-based approaches. Journal of Evaluation in Clinical Practice, 12(3), 248–256.

Woolf, S. H. (2008). The meaning of translational research and why it matters. JAMA: The Journal of the American Medical Association, 299(2), 211–213.

Worrall, J. (2002). What evidence in evidence-based medicine? Philosophy of Science, 69(S3), 316–330.

Worrall, J. (2007a). Evidence in medicine and evidence-based medicine. Philosophy Compass, 2(6), 981–1022.

Worrall, J. (2007b). Why there's no cause to randomize. The British Journal for the Philosophy of Science, 58(3), 451–488.

Yank, V., Rennie, D., & Bero, L. A. (2007). Financial ties and concordance between results and conclusions in meta-analyses: Retrospective cohort study. BMJ (Clinical Research Ed.), 335(7631), 1202–1205.



Evidence-Based Medicine: A New Approach to Teaching the Practice of Medicine. Evidence-Based Medicine Working Group

A NEW paradigm for medical practice is emerging. Evidence-based medicine de-emphasizes intuition, unsystematic clinical experience, and pathophysiologic rationale as sufficient grounds for clinical decision making and stresses the examination of evidence from clinical research. Evidence-based medicine requires new skills of the physician, including efficient literature searching and the application of formal rules of evidence in evaluating the clinical literature. An important goal of our medical residency program is to educate physicians in the practice of evidence-based medicine. Strategies include a weekly, formal academic half-day for residents, devoted to learning the necessary skills; recruitment into teaching roles of physicians who practice evidence-based medicine; sharing among faculty of approaches to teaching evidence-based medicine; and providing faculty with feedback on their performance as role models and teachers of evidence-based medicine. The influence of evidence-based medicine on clinical practice and medical education is increasing.

A complete list of members of the Evidence-Based Medicine Working Group appears at the end of this article.

Reprint requests to McMaster University Health Sciences Centre, Room 3W10, 1200 Main St W, Hamilton, Ontario, Canada L8N 3Z5 (Gordon Guyatt, MD).

CLINICAL SCENARIO

A junior medical resident working in a teaching hospital admits a 43-year-old previously well man who experienced a witnessed grand mal seizure. He had never had a seizure before and had not had any recent head trauma. He drank alcohol once or twice a week and had not had alcohol on the day of the seizure. Findings on physical examination are normal. The patient is given a loading dose of phenytoin intravenously and the drug is continued orally. A computed tomographic head scan is completely normal, and an electroencephalogram shows only nonspecific findings. The patient is very concerned about his risk of seizure recurrence. How might the resident proceed?

The Way of the Past

Faced with this situation as a clinical clerk, the resident was told by her senior resident (who was supported in his view by the attending physician) that the risk of seizure recurrence is high (though he could not put an exact number on it) and that was the information that should be conveyed to the patient. She now follows this path, emphasizing to the patient not to drive, to continue his medication, and to see his family physician in follow-up. The patient leaves in a state of vague trepidation about his risk of subsequent seizure.

The Way of the Future

The resident asks herself whether she knows the prognosis of a first seizure and realizes she does not. She proceeds to the library and, using the Grateful Med program,1 conducts a computerized literature search. She enters the Medical Subject Headings terms epilepsy, prognosis, and recurrence, and the program retrieves 25 relevant articles. Surveying the titles, one2 appears directly relevant. She reviews the paper, finds that it meets criteria she has previously learned for a valid investigation of prognosis,3 and determines that the results are applicable to her patient. The search costs the resident $2.68, and the entire process (including the trip to the library and the time to make a photocopy of the article) took half an hour.

The results of the relevant study show that the patient's risk of recurrence at 1 year is between 43% and 51%, and at 3 years the risk is between 51% and 60%. After a seizure-free period of 18 months his risk of recurrence would likely be less than 20%. She conveys this information to the patient, along with a recommendation that he take his medication, see his family doctor regularly, and have a review of his need for medication if he remains seizure-free for 18 months. The patient leaves with a clear idea of his likely prognosis.

A PARADIGM SHIFT

Thomas Kuhn has described scientific

paradigms as ways of looking at the world that define both the problems that can legitimately be addressed and the range of admissible evidence that may bear on their solution.4 When defects in an existing paradigm accumulate to the extent that the paradigm is no longer tenable, the paradigm is challenged and replaced by a new way of looking at the world. Medical practice is changing, and the change, which involves using the medical literature more effectively in guiding medical practice, is profound enough that it can appropriately be called a paradigm shift.

The foundations of the paradigm shift lie in developments in clinical research over the last 30 years. In 1960, the randomized clinical trial was an oddity. It is now accepted that virtually no drug can enter clinical practice without a demonstration of its efficacy in clinical trials. Moreover, the same randomized trial method increasingly is being applied to surgical therapies5 and diagnostic tests.6 Meta-analysis is gaining increasing acceptance as a method of summarizing the results of a number of randomized trials, and ultimately may have as profound an effect on setting treatment policy as have randomized trials themselves.7 While less dramatic, crucial methodological advances have also been made in other areas, such as the assessment of diagnostic tests8,9 and prognosis.2

A new philosophy of medical practice and teaching has followed these methodological advances. This paradigm shift is manifested in a number of ways. A profusion of articles has been published instructing clinicians on how to access,10 evaluate,11 and interpret12 the medical literature. Proposals to apply the principles of clinical epidemiology to day-to-day clinical practice have been put forward.3 A number of major medical journals have adopted a more informative structured abstract format, which incorporates issues of methods and design into the portion of an article the reader sees first.13 The American College of Physicians has launched a journal, ACP Journal Club, that summarizes new publications of high relevance and methodological rigor.14 Textbooks that provide a rigorous review of available evidence, including a methods section describing both the methodological criteria used to systematically evaluate the validity of the clinical evidence and the quantitative techniques used for summarizing the evidence, have begun to appear.15,16 Practice guidelines based on rigorous methodological review of the available evidence are increasingly common.17 A final manifestation is the growing demand for courses and seminars that instruct physicians on how to make more effective use of the medical literature in their day-to-day patient care.3

We call the new paradigm "evidence-based medicine."18 In this article, we describe how this approach differs from prior practice and briefly outline how we are building a residency program in which a key goal is to practice, act as a role model, teach, and help residents become highly adept in evidence-based medicine. We also describe some of the problems educators and medical practitioners face in implementing the new paradigm.

The Former Paradigm

The former paradigm was based on the following assumptions about the knowledge required to guide clinical practice.

1. Unsystematic observations from clinical experience are a valid way of building and maintaining one's knowledge about patient prognosis, the value of diagnostic tests, and the efficacy of treatment.

2. The study and understanding of basic mechanisms of disease and pathophysiologic principles are a sufficient guide for clinical practice.

3. A combination of thorough traditional medical training and common sense is sufficient to allow one to evaluate new tests and treatments.

4. Content expertise and clinical experience are a sufficient base from which to generate valid guidelines for clinical practice.

According to this paradigm, clinicians have a number of options for sorting out the clinical problems they face. They can reflect on their own clinical experience, reflect on the underlying biology, go to a textbook, or ask a local expert. Reading the introduction and discussion sections of a paper could be considered an appropriate way of gaining the relevant information from a current journal.

This paradigm puts a high value on traditional scientific authority and adherence to standard approaches, and answers are frequently sought from direct contact with local experts or reference to the writings of international experts.19

The New Paradigm

The assumptions of the new paradigm

are as follows:

1. Clinical experience and the development of clinical instincts (particularly with respect to diagnosis) are a crucial and necessary part of becoming a competent physician. Many aspects of clinical practice cannot, or will not, ever be adequately tested. Clinical experience and its lessons are particularly important in these situations. At the same time, systematic attempts to record observations in a reproducible and unbiased fashion markedly increase the confidence one can have in knowledge about patient prognosis, the value of diagnostic tests, and the efficacy of treatment. In the absence of systematic observation one must be cautious in the interpretation of information derived from clinical experience and intuition, for it may at times be misleading.

2. The study and understanding of basic mechanisms of disease are necessary but insufficient guides for clinical practice. The rationales for diagnosis and treatment, which follow from basic pathophysiologic principles, may in fact be incorrect, leading to inaccurate predictions about the performance of diagnostic tests and the efficacy of treatments.

3. Understanding certain rules of evidence is necessary to correctly interpret literature on causation, prognosis, diagnostic tests, and treatment strategy.

It follows that clinicians should regularly consult the original literature (and be able to critically appraise the methods and results sections) in solving clinical problems and providing optimal patient care. It also follows that clinicians must be ready to accept and live with uncertainty and to acknowledge that management decisions are often made in the face of relative ignorance of their true impact.

The new paradigm puts a much lower value on authority.20 The underlying belief is that physicians can gain the skills to make independent assessments of evidence and thus evaluate the credibility of opinions being offered by experts. The decreased emphasis on authority does not imply a rejection of what one can learn from colleagues and teachers, whose years of experience have provided them with insight into methods of history taking, physical examination, and diagnostic strategies. This knowledge can never be gained from formal scientific investigation. A final assumption of the new paradigm is that physicians whose practice is based on an understanding of the underlying evidence will provide superior patient care.

REQUIREMENTS FOR THE PRACTICE OF EVIDENCE-BASED MEDICINE

The role modeling, practice, and teaching of evidence-based medicine requires skills that are not traditionally part of medical training. These include precisely defining a patient problem, and what information is required to resolve the problem; conducting an efficient search of the literature; selecting the best of the relevant studies and applying rules of evidence to determine their validity3; being able to present to colleagues in a succinct fashion the content of the article and its strengths and weaknesses; and extracting the clinical message and applying it to the patient problem. We will refer to this process as the critical appraisal exercise.

Evidence-based medicine also involves applying traditional skills of medical training. A sound understanding of pathophysiology is necessary to interpret and apply the results of clinical research. For instance, most patients to whom we would like to generalize the results of randomized trials would, for one reason or another, not have been enrolled in the most relevant study. The patient may be too old, be too sick, have other underlying illnesses, or be uncooperative. Understanding the underlying pathophysiology allows the clinician to better judge whether the results are applicable to the patient at hand and also has a crucial role as a conceptual and memory aid.

Another traditional skill required of the evidence-based physician is a sensitivity to patients' emotional needs. Understanding patients' suffering21 and how that suffering can be ameliorated by the caring and compassionate physician are



Table. Evaluation Form for Clinical Teaching Unit Attending Physicians

Domain: Role model of practice of evidence-based medicine
- Unsatisfactory: Seldom cites evidence to support clinical decisions
- Needs Improvement: Often fails to substantiate decisions with evidence
- Satisfactory: Usually substantiates decisions with evidence
- Good: Substantiates decisions; is aware of methodological issues
- Excellent: Always substantiates decisions or acknowledges limitations of evidence

Domain: Leads practice of evidence-based medicine
- Unsatisfactory: Never assigns problems to be resolved through literature
- Needs Improvement: Produces suboptimal volume or follow-through of problem resolution through literature
- Satisfactory: Assigns problems and follows through with discussion, including methodology
- Good: Discusses literature retrieval, methodology of papers, application to individual patient
- Excellent: Same as "Good" rating, and makes it exciting and fun

fundamental requirements for medical practice. These skills can be acquired through careful observation of patients and of physician role models. Here too, though, the need for systematic study and the limitations of the present evidence must be considered. The new paradigm would call for using the techniques of behavioral science to determine what patients are really looking for from their physicians22 and how physician and patient behavior affects the outcome of care.23 Ultimately, randomized trials using different strategies for interacting with patients (such as the randomized trial conducted by Greenfield and colleagues24 that demonstrated the positive effects of increasing patients' involvement with their care) may be appropriate.

Since evidence-based medicine involves skills of problem defining, searching, evaluating, and applying original medical literature, it is incumbent on residency programs to teach these skills. Understanding the barriers to educating physicians-in-training in evidence-based medicine can lead to more effective teaching strategies.

EVIDENCE-BASED MEDICINE IN A MEDICAL RESIDENCY

The Internal Medicine Residency Program at McMaster University has an explicit commitment to producing practitioners of evidence-based medicine. While other clinical departments at McMaster have devoted themselves to teaching evidence-based medicine, the commitment is strongest in the Department of Medicine. We will therefore focus on the Internal Medicine Residency in our discussion and briefly outline some of the strategies we are using in implementing the paradigm shift.

1. The residents spend each Wednesday afternoon at an academic half-day. At the beginning of each new academic year, the rules of evidence that relate to articles concerning therapy, diagnosis, prognosis, and overviews are reviewed. In subsequent sessions, the discussion is built around a clinical case, and two original articles that bear on the problem are presented. The residents are responsible for critically appraising the articles and arriving at bottom lines regarding the strength of evidence and how it bears on the clinical problem. They learn to present the methods and results in a succinct fashion, emphasizing only the key points. A wide-ranging discussion, including issues of underlying pathophysiology and related questions of diagnosis and management, follows presentation of the articles.

The second part of the half-day is devoted to the physical examination. Clinical teachers present optimal techniques of examination with attention to what is known about their reproducibility and accuracy.

2. Facilities for computerized literature searching are available on the teaching medical ward in each of the four teaching hospitals. Costs of searching are absorbed by the residency program. Residents not familiar with computer searching, or the Grateful Med program we use, are instructed at the beginning of the rotation. Research in our institution has shown that MEDLINE searching from clinical settings is feasible with brief training.25 A subsequent investigation demonstrated that internal medicine house staff who have computer access on the ward and feedback concerning their searching do an average of more than 3.6 searches per month.26 House staff believe that more than 90% of their searches that are stimulated by a patient problem lead to some improvement in patient care.25

3. Assessment of searching and critical appraisal skills is being incorporated into the evaluation of residents.

4. We believe that the new paradigm will remain an academic mirage with little relation to the world of day-to-day clinical practice unless physicians-in-training are exposed to role models who practice evidence-based medicine. As a result, the residency program has placed major emphasis on ensuring this exposure.

First, a focus of recruitment for our Department of Medicine faculty has been internists with training in clinical epidemiology. These individuals have the skills and commitment to practice evidence-based medicine. The residency program works to ensure they have clinical teaching roles available to them.

Second, a program of more rigorous

evaluation of attending physicians has been instituted. One of the areas evaluated is the extent to which attending physicians are effective in teaching evidence-based medicine. The relevant items from the evaluation form are reproduced in the Table.

Third, because it is new to both teachers and learners, and because most clinical teachers have observed few role models and have not received formal training, teaching evidence-based medicine is not easy. To help attending physicians improve their skills in this area, we have encouraged them to form partnerships, which involve attending the partner's clinical rounds, making observations, and providing formal feedback. One learns through observation and through criticism of one's performance. A number of faculty members have participated in this program.

To further facilitate attending physicians' improving their skills, the Department of Medicine held a retreat devoted to sharing strategies for effective clinical teaching. Part of the workshop, attended by more than 30 faculty members, was devoted to teaching evidence-based medicine. Some of the strategies that were adduced are briefly summarized in the next section.
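The computerized literature searching described in this section (MeSH-term queries run through Grateful Med against MEDLINE) has a direct modern analogue in PubMed's E-utilities interface. The sketch below, which assumes the publicly documented NCBI ESearch endpoint and uses the scenario's MeSH terms as an illustration, shows how such a query might be constructed; it is an illustration, not part of the original program:

```python
# Sketch of a computerized literature search in the spirit of the residency
# program's Grateful Med/MEDLINE searching, recast for PubMed's E-utilities.
# The MeSH terms mirror the clinical scenario (epilepsy, prognosis, recurrence).
# Only URL construction is shown; performing the request is left to the reader.
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_query(mesh_terms, retmax=25):
    """Return an ESearch URL that ANDs together a list of MeSH terms."""
    term = " AND ".join(f"{t}[MeSH Terms]" for t in mesh_terms)
    params = urlencode({"db": "pubmed", "term": term,
                        "retmax": retmax, "retmode": "json"})
    return f"{EUTILS}?{params}"

url = build_pubmed_query(["epilepsy", "prognosis", "recurrence"])
# Fetching this URL (e.g. with urllib.request.urlopen) returns JSON listing
# matching PubMed IDs, which can then be screened for critical appraisal.
```

The design mirrors the resident's workflow in the opening scenario: a small number of controlled-vocabulary terms, a capped result set, and a shortlist of candidate titles to survey.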

EFFECTIVE TEACHING OF EVIDENCE-BASED MEDICINE

Role Modeling

Attending physicians must be enthusiastic, effective role models for the practice of evidence-based medicine (even in high-pressure clinical settings, such as intensive care units). Providing a model goes a long way toward inculcating attitudes that lead learners to develop skills in critical appraisal. Acting as a role model involves specifying the strength of evidence that supports clinical decisions. In one case, the teacher can point to a number of large randomized trials, rigorously reviewed and included in a meta-analysis, which allows one to say how many patients one must treat to prevent a death. In other cases, the best evidence may come from accepted practice or one's clinical experience and instincts. The clinical teacher should make it clear to learners on what basis decisions are being made. This can be done efficiently. For instance:

Prospective studies suggest that Mr Jones' risk of a major vascular event in the first year after his infarct is 4%; a meta-analysis of randomized trials of aspirin in this situation suggests a risk reduction of 25%; we would have to treat 100 such patients to prevent an event27; given the minimal expense and toxicity of low-dose, enteric-coated aspirin, treating Mr Jones is clearly warranted.

Or:

How long to treat a patient with antibiotics following pneumonia has not been systematically studied; so, my recommendation that we give Mrs Smith 3 days of intravenous antibiotics and treat her for a total of 10 days is arbitrary; somewhat shorter or longer courses of treatment would be equally reasonable.

In the latter type of situation, dogmatic or rigid insistence on following a particular course of action would not be appropriate.

Critical Appraisal

It is crucial that critical appraisal issues arise from patient problems that the learner is currently confronting, demonstrating that critical appraisal is a pragmatic and central aspect of optimal patient care, not an academic or tangential element. The problem selected for critical appraisal must be one that the learners recognize as important, feel uncertain about, and do not fully trust expert opinion on; in other words, they must feel it is worth the effort to find out what the literature says on the topic. The likeliest candidate topics are common problems about which learners have been exposed to divergent opinions (and thus there is disagreement and/or uncertainty among the learners). The clinical teacher should keep these requirements in mind when considering questions to encourage the learners to address. It can be useful to ask all members of the group their opinion about the clinical problem at hand. One can then ensure that the problem is appropriate for a critical appraisal exercise by asking the group the following questions:

1. It seems the group is uncertain about the optimal approach. Is that right?

2. Do you feel it is important for us to sort out this question by going to the original literature?

Methodological Criteria

Criteria for methodological rigor must

be few and simple. Most published criteria can be overwhelming for the novice. Suggested criteria for studies of diagnosis, treatment, and review articles follow:

Diagnosis.—Has the diagnostic test been evaluated in a patient sample that included an appropriate spectrum of mild and severe, treated and untreated disease, plus individuals with different but commonly confused disorders?28 Was there an independent, blind comparison with a "gold standard" of diagnosis?28

Treatment.—Was the assignment of patients to treatments randomized?29 Were all patients who entered the study accounted for at its conclusion?29

Review Articles.—Were explicit methods used to determine which articles to include in the review?30

As learners become more sophisticated, additional criteria can be introduced. The criteria should not be presented in a way that fosters nihilism (if the study is not randomized, it is useless and provides no valuable information), but as a way of helping to arrive at the strength of inference associated with a clinical decision. Teachers can point out instances in which criteria can be violated without reducing the strength of inference.
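The arithmetic behind the Mr Jones aspirin example in the Role Modeling section (a 4% baseline risk and a 25% relative risk reduction implying that 100 patients must be treated to prevent one event) can be checked with a short calculation. The sketch below is illustrative; the function name and inputs are ours, not part of the original article:

```python
# Number needed to treat (NNT): the reciprocal of the absolute risk
# reduction. With a 4% one-year event risk and a 25% relative risk
# reduction from aspirin, the absolute risk reduction is 1%, so roughly
# 100 patients must be treated to prevent one event.

def number_needed_to_treat(baseline_risk, relative_risk_reduction):
    """NNT = 1 / (baseline risk x relative risk reduction)."""
    absolute_risk_reduction = baseline_risk * relative_risk_reduction
    return 1.0 / absolute_risk_reduction

nnt = number_needed_to_treat(baseline_risk=0.04, relative_risk_reduction=0.25)
# 0.04 * 0.25 = 0.01, so nnt is approximately 100
```

This is exactly the kind of "strength of evidence" statement the criteria above are meant to support: a randomized comparison yields a risk reduction, and the NNT translates it into a bedside-sized number.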

METHODS FOR SCALING THE BARRIERS TO THE DISSEMINATION OF EVIDENCE-BASED MEDICINE

Misapprehensions About Evidence-Based Medicine

In developing the practice and teaching of evidence-based medicine at our institution, we have found that the nature of the new paradigm is sometimes misinterpreted. Recognizing the limitations of intuition, experience, and understanding of pathophysiology in permitting strong inferences may be misinterpreted as rejecting these routes to knowledge. Specific misinterpretations of evidence-based medicine and their corrections follow:

Misinterpretation 1.—Evidence-based medicine ignores clinical experience and clinical intuition.

Correction.—On the contrary, it is important to expose learners to exceptional clinicians who have a gift for intuitive diagnosis, a talent for precise observation, and excellent judgment in making difficult management decisions. Untested signs and symptoms should not be rejected out of hand. They may prove extremely useful and ultimately be proved valid through rigorous testing. The more the experienced clinicians can dissect the process they use in diagnosis,31 and clearly present it to learners, the greater the benefit. Similarly, the gain for students will be greatest when clues to optimal diagnosis and treatment are culled from the barrage of clinical information in a systematic and reproducible fashion.

Institutional experience can also provide important insights. Diagnostic tests may differ in their accuracy depending on the skill of the practitioner. A local expert in, for instance, diagnostic ultrasound may produce far better results than the average from the published literature. The effectiveness and complications associated with therapeutic interventions, particularly surgical procedures, may also differ among institutions. When optimal care is taken both to record observations reproducibly and to avoid bias, clinical and institutional experience evolves into the systematic search for knowledge that forms the core of evidence-based medicine.32

Misinterpretation 2.—Understanding of basic investigation and pathophysiology plays no part in evidence-based medicine.

Correction.—The dearth of adequate evidence demands that clinical problem solving must rely on an understanding of underlying pathophysiology. Moreover, a good understanding of pathophysiology is necessary for interpreting clinical observations and for appropriate interpretation of evidence (especially in deciding on its generalizability).

Misinterpretation 3.—Evidence-based medicine ignores standard aspects of clinical training, such as the physical examination.

Correction.—Careful history taking and physical examination provide much, and often the best, evidence for diagnosis and direct treatment decisions. The clinical teacher of evidence-based medicine must give considerable attention to teaching the methods of history taking and clinical examination, with particular attention to which items have demonstrated validity and to strategies that enhance observer agreement.

Barriers to Teaching Evidence-Based Medicine

Difficulties we have encountered in

teaching evidence-based medicine include the following:

1. Many house staff start with rudimentary critical appraisal skills, and the topic may be threatening for them.

2. People like quick and easy answers. Cookbook medicine has its appeal. Critical appraisal involves additional time and effort and may be perceived as inefficient and distracting from the real goal (to provide optimal care for patients).

3. For many clinical questions, high-quality evidence is lacking. If such questions predominate in attempts to introduce critical appraisal, a sense of futility can result.

4. The concepts of evidence-based medicine are met with skepticism by many faculty members, who are therefore unenthusiastic about modifying their teaching and practice in accordance with its dictates.

These problems can be ameliorated

by use of the strategies described in the previous section on effective teaching of evidence-based medicine. Threat can be reduced by making a contract with the residents, which sets out modest and achievable goals, and further reduced by the attending physician role modeling the practice of evidence-based medicine. Inefficiency can be reduced by teaching effective searching skills and simple guidelines for assessing the validity of the papers. In addition, one can emphasize that critical appraisal as a strategy for solving clinical problems is most appropriate when the problems are common in one's own practice. Futility can be reduced by, particularly initially, targeting critical appraisal exercises to areas in which there is likely to be high-quality evidence that will affect clinical decisions. Skepticism of faculty members can be reduced by the availability of "quick and dirty" (as well as more sophisticated) courses on critical appraisal of evidence and by the teaching partnerships and teaching workshops described earlier.

Many problems in the practice and teaching of evidence-based medicine remain. Many physicians, including both residents and faculty members, are still skeptical about the tenets of the new paradigm. A medical residency is full of competing demands, and the appropriate balance between goals is not always evident. At the same time, we are buoyed by the number of residents and faculty who have enthusiastically adopted the new approach and found ways to integrate it into their learning and practice.

Barriers to Practicing Evidence-Based Medicine

Even if our residency program is successful in producing graduates who enter the world of clinical practice enthusiastic to apply what they have learned about evidence-based medicine, they will face difficult challenges. Economic constraints and counterproductive incentives may compete with the dictates of evidence as determinants of clinical decisions; the relevant literature may not be readily accessible; and the time available may be insufficient to carefully review the evidence (which may be voluminous) relevant to a pressing clinical problem.

Some solutions to these problems arealready available. Optimal integrationofcomputer technology into clinical prac¬tice facilitates finding and accessing ev¬idence. Reference to literature over-

views meeting scientific principles30,33and collections ofmethodologically soundand highly relevant articles14 can mark¬edly increase efficiency. Other solutionswill emerge over time. Health educa¬tors will continue to find better ways ofrole modeling and teaching evidence-based medicine. Standards in writingreviews and texts are likely to change,with a greater focus on methodologicalrigor.15·16 Evidence-based summaries willtherefore become increasingly available.Practical approaches to making evi¬dence-based summaries easier to applyin clinical practice, many based on com¬

puter technology, will be developed andexpanded. As described earlier, we are

already using computer searching on theward. In the future, the results of di¬agnostic tests may be providedwith theassociated sensitivity, specificity, andlikelihood ratios. Health policymakersmay find that the structure of medicalpracticemust be shifted in basic ways tofacilitate the practice of evidence-basedmedicine. Increasingly, scientific over¬views will be systematically integratedwith information regarding toxicity andside effects, cost, and the consequencesof alternative courses of action to de¬velop clinical policy guidelines.34 Theprospects for these developments areboth bright and exciting.DOES TEACHING AND LEARNINGEVIDENCE-BASED MEDICINEIMPROVE PATIENT OUTCOMES?The proof of the pudding of evidence-

based medicine lies in whether patients cared for in this fashion enjoy better health. This proof is no more achievable for the new paradigm than it is for the old, for no long-term randomized trials of traditional and evidence-based medical education are likely to be carried out. What we do have are a number of short-term studies which confirm that the skills of evidence-based medicine can be taught to medical students35 and medical residents.36 In addition, a study compared the graduates of a medical school that operates under the new paradigm (McMaster) with the graduates of a traditional school. A random sample of McMaster graduates who had chosen careers in family medicine were more knowledgeable with respect to current therapeutic guidelines in the treatment of hypertension than were the graduates of the traditional school.37 These results suggest that the teaching of evidence-based medicine may help graduates stay up-to-date. Further evaluation of the evidence-based medicine approach is necessary.

Our advocating evidence-based medicine in the absence of definitive evidence of its superiority in improving patient outcomes may appear to be an internal contradiction. As has been pointed out, however, evidence-based medicine does not advocate a rejection of all innovations in the absence of definitive evidence. When definitive evidence is not available, one must fall back on weaker evidence (such as the comparison of graduates of two medical schools that use different approaches cited above) and on biologic rationale. The rationale in this case is that physicians who are up-to-date as a function of their ability to read the current literature critically, and are able to distinguish strong from weaker evidence, are likely to be more judicious in the therapy they recommend. Physicians who understand the properties of diagnostic tests and are able to use a quantitative approach to those tests are likely to make more accurate diagnoses. While this rationale appears compelling to us, compelling rationale has often proved misleading. Until more definitive evidence is adduced, adoption of evidence-based medicine should appropriately be restricted to two groups. One group comprises those who find the rationale compelling, and thus believe that use of the evidence-based medicine approach is likely to improve clinical care. A second group comprises those who, while skeptical of improvements in patient outcome, believe it is very unlikely that deterioration in care results from the evidence-based approach and who find that the practice of medicine in the new paradigm is more exciting and fun.
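[Editor's illustration.] The "quantitative approach" to diagnostic tests invoked in the rationale above can be made concrete with Bayes' theorem in odds form, the standard tool of clinical epidemiology: a test result with a known likelihood ratio converts a pre-test probability of disease into a post-test probability. The sketch below is an illustration added here, not part of the original article, and the numbers in it are hypothetical.

```python
def post_test_probability(pre_test_p, likelihood_ratio):
    """Convert a pre-test probability of disease into a post-test
    probability, given the likelihood ratio of a diagnostic test
    result (Bayes' theorem in odds form)."""
    pre_odds = pre_test_p / (1 - pre_test_p)   # probability -> odds
    post_odds = pre_odds * likelihood_ratio    # update odds by the likelihood ratio
    return post_odds / (1 + post_odds)         # odds -> probability

# Hypothetical test characteristics, not from the article: a test with
# sensitivity 0.90 and specificity 0.80 has a positive likelihood
# ratio of 0.90 / (1 - 0.80) = 4.5.
lr_positive = 0.90 / (1 - 0.80)
print(post_test_probability(0.25, lr_positive))  # pre-test 25% becomes roughly 60%
```

Working in odds rather than probabilities is what makes the update a single multiplication, which is why likelihood ratios are the natural currency for interpreting test results at the bedside.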

CONCLUSION

Based on an awareness of the limitations of traditional determinants of clinical decisions, a new paradigm for medical practice has arisen. Evidence-based medicine deals directly with the uncertainties of clinical medicine and has the potential for transforming the education and practice of the next generation of physicians. These physicians will continue to face an exploding volume of literature, rapid introduction of new technologies, deepening concern about burgeoning medical costs, and increasing attention to the quality and outcomes of medical care. The likelihood that evidence-based medicine can help ameliorate these problems should encourage its dissemination.

Evidence-based medicine will require new skills for the physician, skills that residency programs should be equipped to teach. While strategies for inculcating the principles of evidence-based medicine remain to be refined, initial experience has revealed a number of effective approaches. Incorporating these practices into postgraduate medical education and continuing to work on their further development will result in more rapid dissemination and integration of the new paradigm into medical practice.

The Evidence-Based Medicine Working Group

comprised the following: Gordon Guyatt (chair), MD, MSc, John Cairns, MD, David Churchill, MD, MSc, Deborah Cook, MD, MSc, Brian Haynes, MD, MSc, PhD, Jack Hirsh, MD, Jan Irvine, MD, MSc, Mark Levine, MD, MSc, Mitchell Levine, MD, MSc, Jim Nishikawa, MD, and David Sackett, MD, MSc, Departments of Medicine and Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario; Patrick Brill-Edwards, MD, Hertzel Gerstein, MD, MSc, Jim Gibson, MD, Roman Jaeschke, MD, MSc, Anthony Kerigan, MD, MSc, Alan Neville, MD, and Akbar Panju, MD, Department of Medicine, McMaster University; Allan Detsky, MD, PhD, Department of Clinical Epidemiology and Biostatistics, McMaster University and Departments of Health Administration and Medicine, University of Toronto (Ontario); Murray Enkin, MD, Departments of Clinical Epidemiology and Biostatistics and Obstetrics and Gynaecology, McMaster University; Pamela Frid, MD, Department of Pediatrics, Queen's University, Kingston, Ontario; Martha Gerrity, MD, Department of Medicine, University of North Carolina, Chapel Hill; Andreas Laupacis, MD, MSc, Department of Clinical Epidemiology and Biostatistics, McMaster University and Department of Medicine, University of Ottawa (Ontario); Valerie Lawrence, MD, Department of Medicine, University of Texas Health Science Center at San Antonio and Audie L. Murphy Memorial Veterans Hospital, San Antonio, Tex; Joel Menard, MD, Centre de Medecine Preventive Cardio-Vasculaires, Paris, France; Virginia Moyer, MD, Department of Pediatrics, University of Texas, Houston; Cynthia Mulrow, MD, Department of Medicine, University of Texas, San Antonio; Paul Links, MD, MSc, Department of Psychiatry, McMaster University; Andrew Oxman, MD, MSc, Departments of Clinical Epidemiology and Biostatistics and Family Medicine, McMaster University; Jack Sinclair, MD, Departments of Clinical Epidemiology and Biostatistics and Pediatrics, McMaster University; and Peter Tugwell, MD, MSc, Department of Medicine, University of Ottawa (Ontario).

Drs Cook and Guyatt are Career Scientists of the Ontario Ministry of Health. Dr Haynes is a National Health Scientist, National Health Research and Development Program, Canada. Drs Jaeschke and Cook are Scholars of the St Joseph's Hospital Foundation, Hamilton, Ontario.

References
1. Lindberg DA. Information systems to support medical practice and scientific discovery. Methods Inform Med. 1989;28:202-206.
2. Hart YM, Sander JW, Johnson AL, Shorvon SD. National general practice study of epilepsy: recurrence after a first seizure. Lancet. 1990;336:1271-1274.
3. Sackett DL, Haynes RB, Guyatt GH, Tugwell P. Clinical Epidemiology: A Basic Science for Clinical Medicine. 2nd ed. Boston, Mass: Little Brown & Co Inc; 1991:173-186.
4. Kuhn TS. The Structure of Scientific Revolutions. Chicago, Ill: University of Chicago Press; 1970.
5. European Carotid Surgery Trialists' Collaborative Group. MRC European Carotid Surgery Trial: interim results for symptomatic patients with severe (70-99%) or with mild (0-29%) carotid stenosis. Lancet. 1991;337:1235-1243.
6. Larsen ML, Horder M, Mogensen EF. Effect of long-term monitoring of glycosylated hemoglobin levels in insulin-dependent diabetes mellitus. N Engl J Med. 1990;323:1021-1025.
7. L'Abbe KA, Detsky AS, O'Rourke K. Meta-analysis in clinical research. Ann Intern Med. 1987;107:224-233.
8. Ransohoff DF, Feinstein AR. Problems of spectrum and bias in evaluating the efficacy of diagnostic tests. N Engl J Med. 1978;299:926-930.
9. Nierenberg AA, Feinstein AR. How to evaluate a diagnostic marker test: lessons from the rise and fall of dexamethasone suppression test. JAMA. 1988;259:1699-1702.
10. Haynes RB, McKibbon KA, Fitzgerald D, Guyatt GH, Walker CJ, Sackett DL. How to keep up with the medical literature, V: access by personal computer to the medical literature. Ann Intern Med. 1986;105:810-816.
11. Bulpitt CJ. Confidence intervals. Lancet. 1987;1:494-497.
12. Godfrey K. Simple linear regression in medical research. N Engl J Med. 1985;313:1629-1636.
13. Haynes RB, Mulrow CD, Huth EJ, Altman DG, Gardner MJ. More informative abstracts revisited. Ann Intern Med. 1990;113:69-76.
14. Haynes RB. The origins and aspirations of ACP Journal Club. Ann Intern Med. 1991;114(ACP J Club. suppl 1):A18.
15. Chalmers I, Enkin M, Keirse MJNC, eds. Effective Care in Pregnancy and Childbirth. New York, NY: Oxford University Press Inc; 1989.
16. Sinclair JC, Bracken MB, eds. Effective Care of the Newborn Infant. New York, NY: Oxford University Press Inc; 1992.
17. Audet AM, Greenfield S, Field M. Medical practice guidelines: current activities and future directions. Ann Intern Med. 1990;113:709-714.
18. Guyatt GH. Evidence-based medicine. Ann Intern Med. 1991;114(ACP J Club. suppl 2):A-16.
19. Light DW. Uncertainty and control in professional training. J Health Soc Behav. 1979;20:310-322.
20. Chalmers I. Scientific inquiry and authoritarianism in perinatal care and education. Birth. 1983;10:151-164.
21. Cassell EJ. The nature of suffering and the goals of medicine. N Engl J Med. 1982;306:639-645.
22. Ende J, Kazis L, Ash A, Moskowitz MA. Measuring patients' desire for autonomy: decision-making and information-seeking preferences among medical patients. J Gen Intern Med. 1989;4:23-30.
23. Carter WB, Inui TS, Kukull WA, Haigh VH. Outcome-based doctor-patient interaction analysis. Med Care. 1982;20:550-556.
24. Greenfield S, Kaplan S, Ware JE Jr. Expanding patient involvement in care: effects on patient outcomes. Ann Intern Med. 1985;102:520-528.
25. Haynes RB, McKibbon A, Walker CJ, Ryan N, Fitzgerald D, Ramsden ME. Online access to MEDLINE in clinical settings: a study of use and usefulness. Ann Intern Med. 1990;112:78-84.
26. McKibbon KA, Haynes RB, Johnston ME, Walker CJ. A study to enhance clinical end-user MEDLINE search skills: design and baseline findings. In: Clayton PD, ed. Proceedings of the Fifth Annual Symposium on Computer Applications in Medical Care, November 1991; Washington, DC. New York, NY: McGraw-Hill International Book Co; 1992:73-77.
27. Laupacis A, Sackett DL, Roberts RS. An assessment of clinically useful measures of the consequences of treatment. N Engl J Med. 1988;318:1728-1733.
28. Department of Clinical Epidemiology and Biostatistics, McMaster University. How to read clinical journals, II: to learn about a diagnostic test. Can Med Assoc J. 1981;124:703-710.
29. Department of Clinical Epidemiology and Biostatistics, McMaster University. How to read clinical journals, V: to distinguish useful from useless or even harmful therapy. Can Med Assoc J. 1981;124:1156-1162.
30. Oxman AD, Guyatt GH. Guidelines for reading literature reviews. Can Med Assoc J. 1988;138:697-703.
31. Campbell EJM. The diagnosing mind. Lancet. 1987;1:849-851.
32. Feinstein AR. What kind of basic science for clinical medicine? N Engl J Med. 1970;283:847-853.
33. Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106:485-488.
34. Eddy DM. Guidelines for policy statements: the explicit approach. JAMA. 1990;263:2239-2243.
35. Bennett KJ, Sackett DL, Haynes RB, et al. A controlled trial of teaching critical appraisal of the clinical literature to medical students. JAMA. 1987;257:2451-2454.
36. Kitchens JM, Pfeifer MP. Teaching residents to read the medical literature: a controlled trial of a curriculum in critical appraisal/clinical epidemiology. J Gen Intern Med. 1989;4:384-387.
37. Shin J, Haynes RB. Does a problem-based, self-directed undergraduate medical curriculum promote continuing clinical competence? Clin Res. 1991;39:143A.
