Applying Randomized Controlled Trials and
Systematic Reviews in Social Work Research
Haluk Soydan
University of Southern California
This article elaborates on the centrality of interventions for social work practice and the importance of understanding the effects of interventions for a more efficient, harmless, transparent, and ethical social work practice.
Low-bias research designs and meta-analyses are important means of generating the best possible evidence on
what works in social work practice. An evidence-based practice model is promising in terms of translating and
implementing scientific evidence that is uncertain, volatile, and incomplete and might be difficult to access.
Keywords: social work practice; interventions; systematic reviews; evidence-based practice
Originally this article was written for an international con-
ference, titled What Works? Modernizing the Knowledge
Base of Social Work, that took place in Bielefeld,
Germany, in November 2005. The conference was struc-
tured as a platform to debate the merits and deficits of pro-
duction of scientific evidence and related research designs
for the advancement of evidence-based social work prac-
tice and the controversies in relation to other approaches
to scientific knowledge growth in social work.
The article defines social work and argues that the
purpose of social work practice is to infuse change in
the lives of individuals and in the community to reduce or eradicate social problems and enhance betterment.
Thus, interventions are at the core of social work prac-
tice. To understand whether social work interventions
work or might be harmful is imperative to the profession
for a variety of reasons, including ethical aspects. Then,
the utility of high-quality randomized controlled stud-
ies, whenever they are possible to conduct and accessi-
ble, is discussed, and real-life examples are given to
illustrate the practical and policy importance of know-
ing what works and what is harmful in social work.
Also, limitations of randomized controlled studies are dis-
cussed and related to the issue of uncertainty of knowing
in general. Finally, evidence-based practice is described
and presented as a model of practice to implement gener-
alized scientific estimates of effectiveness in contextual
social work practice situations and to handle the lack and
scarcity of pertinent knowledge as well as the uncertainty
of knowing as related to social work practice.
SOCIAL WORK DEFINED
As we well know, the profession’s understanding and
definition of social work as a professional and practical
activity has, over time, changed frequently and substan-
tially. On one extreme, a Swedish researcher has, for
instance, suggested that the history of social work is as old
as human history, and the profession is defined as taking
care of fellow human beings based on a sense of sociality
and mutuality (Swedner, 1985). For example, he has noted,
As early as in the oldest well documented societies—the
Sumerian Empire in the Iraq of today, the Egypt of the Pharaohs,
ancient China, ancient Greece and the Roman Empire—there is a division of labor that points towards the development of professions specializing in care and social welfare, principally doctors
and midwives, but also people responsible for the material well-
being of the population. (Swedner, 1985, pp. 10-11)
Similarly, but in a less exaggerated fashion, a more
recent account of the history of the definition of social
work suggests that the profession of social work has
existed for some 400 years (Holosko, 2003, p. 271).
Elsewhere, I have argued that a more reasonable and
productive definition of social work practice must be
Author’s Note: Portions of this article were previously presented at the con-
ference What Works? Modernizing the Knowledge Base of Social Work,
sponsored by the Center for Social Service Studies at the University of
Bielefeld, Germany, November 10-12, 2005. Correspondence concerning this
article may be addressed to Haluk Soydan, PhD, School of Social Work,
Hamovitch Center for Science in the Human Services, University of Southern
California, Los Angeles, CA 90089-0411; e-mail: Soydan@usc.edu. This
article was accepted by the editor.
Research on Social Work Practice, Vol. 18 No. 4, July 2008 311-318
DOI: 10.1177/1049731507307788
© 2008 Sage Publications
understood in the context of the genesis and develop-
ment of modern and empirical social scientific research
methods followed by the development of the idea that
empirical research results could and should be used to
infuse social change and betterment (Soydan, 1999).
Several articles in a special issue of Research on
Social Work Practice (“Evaluating,” 2003) give a comprehensive account of how definitions have shifted over
the years. The definition developed and adopted by the
International Federation of Social Workers (IFSW) is
very useful for the purposes of this article. It reads,
The social work profession promotes social change, problem
solving in human relationships and the empowerment and lib-
eration of people to enhance well-being. Utilizing theories of
human behavior and social systems, social work intervenes at
the points where people interact with their environments.
Principles of human rights and social justice are fundamental
to social work. (IFSW, 2000, p. 1)
Furthermore,
Social work bases its methodology on a systematic body of
evidence-based knowledge derived from research and practice
evaluation, including local and indigenous knowledge specific
to its context. (IFSW, 2000, p. 1)
Similarly, the code of ethics of the National
Association of Social Workers (NASW) in the United
States prescribes the following with regard to social
workers’ professional skills:
Social workers should provide services in substantive areas or
use intervention techniques or approaches that are new to
them only after engaging in appropriate study, training, con-
sultation, and supervision from people who are competent in
those techniques. (NASW, 2007, p. 6)
So, what do we learn from these definitions?
1. We learn that the social work profession promotes social change, problem solving in human relationships, and the empowerment and liberation of people from hardship to enhance well-being.
2. We learn that the principles of human rights and social justice are fundamental to the social work profession.
3. We learn that it is a universal imperative that social workers must use high-quality knowledge and skills to learn and understand whether social work interventions work, if they may cause harm to the client, and, most desirably, are effective in the betterment of the client's situation.
SOCIAL WORK AND INTERVENTIONS
The scope of research on social work practice is often
understood in very broad terms because social work
practice involves several core factors, such as values,
relationships, legislation, clinical experience, and orga-
nizations. However, there is an emphasis on social work
practice as intervention. In the core of social work prac-
tice are interventions for the betterment of clients,
whether they be individuals, groups, or communities. In
other words, it is sensible to suggest that the essence of
social work practice is intervention.

In general terms, an intervention is any interference
that would modify a process or situation. In social work,
the purpose of an intervention is to induce change to
slow down or eradicate risk factors, stimulate and acti-
vate protective factors, reduce or eliminate harm, or if
possible, introduce betterment beyond harm control. So
understanding the effects of social work interventions is
of essential importance to the social work profession.
However, the profession’s history of conducting and
using results from high-quality effectiveness studies is
shaky. One major and persistent problem of the social work profession is that interventions are made without regard to, or access to, rigorous evidence on
whether an intervention harms, works, or does not have
any effect at all. There are cases in which services are
provided in spite of existing evidence on the ineffective-
ness or harmfulness of the intervention. There are other
cases where evidence exists for the effectiveness of an
intervention but the services are either not provided or
insufficiently provided. This problem leads to little or
no progress and potential harm in social work. Let us
see a few real-life examples.
The Drug Abuse Resistance Education (DARE) program is a well-known example of an intervention
program that does not work but is widely used by school
districts in the United States. The program was devel-
oped by the Los Angeles Police Department and the Los
Angeles Unified School District in 1983 and was imple-
mented by more than 80% of school districts in the
United States by 2001 (www.dare.org). The intervention
program aims to prevent the use of drugs, alcohol, and
tobacco among youth and is implemented in the class-
room. However, repeated evaluations and meta-analyses
of effectiveness studies all indicated its ineffectiveness.
In 1994, a meta-analysis of eight effectiveness studies showed that the program had small effects on self-
reported drug use when compared with control groups
(Ennett, Tobler, Ringwalt, & Flewelling, 1994). The U.S.
General Accounting Office (GAO) reviewed the long-term effects of the program and concluded that the program was ineffective in preventing long-term drug use as youth became adolescents (U.S. GAO,
2003). A meta-analysis conducted 10 years later showed even
smaller effects of DARE (West & O’Neal, 2004). Yet the
program remains widely used and prevalent.
Poverty is a major problem. In the United States,
poverty rates are high: In 2004, an average of 12.7% of the
population was living in poverty. The rates for children and
for single female-headed families were even higher: 17.8%
and 30.5% in 2004, respectively (McKernan & Ratcliffe,
2006). There are a few U.S. programs for low-income
families that are shown to be effective, but they are not widely implemented, and the nation's poverty rates remain
high (Blank, 2002; McKernan & Ratcliffe, 2006).
A promising intervention program is the Nurse–Family Partnership Home Visiting Program (NFP; www.nursefamilypartnership.org). Effectiveness studies show
that this program improves maternal and child health for
mothers and children visited by NFP-trained nurses. The
improvements include reduction in prenatal cigarette
smoking and in prenatal hypertensive disorders, reduction
in children’s health care visits for injuries, fewer unin-
tended subsequent pregnancies, increase in father involve-
ment and women's employment, reduction in families' use of welfare and food stamps, and increase in children's
school readiness (Olds et al., 1998, 2004).
Violence is an important public health problem among
adolescents in the United States. One particular type of
intervention that is widely used in the United States and
elsewhere is Scared Straight and similar awareness
programs. However, a Campbell Collaboration systematic
review on Scared Straight and other juvenile awareness
programs for preventing juvenile delinquency (Petrosino,
Petrosino-Turpin, & Buehler, 2002) showed that these
intervention programs do not work. These interventionprograms involve organized visits to prisons by juvenile
delinquents or children at risk for criminal behavior and are
designed to deter juveniles from future offending. These programs remain in worldwide use despite studies that show that they do not work and are likely to have a harmful
effect. These programs should not be used! On the more
positive side, Life Skills Training programs seem to
have promising results (Botvin, Griffin, & Nichols,
2006; Fraguela, Martin, & Trinanes, 2003; Griffin,
Botvin, & Nichols, 2006). Life Skills Training programs
are school based and instruct students in the necessary
skills to resist social (peer) pressure to smoke, drink, and use drugs. They help students to develop greater
self-esteem and self-confidence, enable them to effec-
tively cope with anxiety, increase their knowledge of the
immediate consequences of substance abuse, and
enhance cognitive and behavioral competency to reduce
and prevent a variety of health risk behaviors. These
programs should be used!
These examples illustrate the utility of effectiveness
studies in understanding effects of social work interven-
tions. Rigorous evidence can bring sustained progress to
social work and social policy and thus enhance well-
being and client safety. Despite the fact that intervention
studies (or effectiveness studies) are not abundant in
social work and related human service areas, their ability to detect what works and what is harmful represents a great opportunity.
RESEARCH IMPLICATIONS
Obviously, research on social work practice has been
impacted by the methodological controversies of the
social sciences. The history of social sciences is very
much a history of paradigm wars. Elsewhere, I have
argued that it is time to abandon the dichotomous lan-
guage of qualitative and quantitative methods and recog-
nize that all research designs are good for the types of
scientific questions for which they are tailored (Soydan,
2007). It might be correct that phenomenological research designs are the best fit for understanding events such as the
social or cultural meaning of a conversation between a
social worker and a client. Or ethnographic and partici-
pant observations might be the best methods of depicting
behavior patterns of populations of social institutions
such as prisons and hospitals, networks of street gangs, or
victims of commercial sexual exploitation.
However, when it comes to measuring the effects of social work interventions, experimental studies, especially when randomized, conducted very carefully, and large enough to generate statistical power, are the designs that are best fit for the purpose. From a scien-
tific point of view, the study of social work interventions
is the measurement of the efficacy and effectiveness of
the intervention. Efficacy measurement involves effects of
a social work intervention in well-controlled environments
where the intervention is delivered with high fidelity and
outcomes are measured to compare results of the experi-
mental group with those of the control group(s). Results of
efficacy studies are assumed to be strongly related to the
circumstances of the specific testing environment. If effi-
cacy studies generate positive results, the intervention is
considered promising and, it is assumed, sensible to test for effectiveness. Effectiveness studies take place in sites
where the clients are under less controlled but realistic
conditions. Repeated effectiveness testing generates infor-
mation to understand whether the intervention works
under real-life conditions, to what extent, and under what
diverse conditions.
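To make the phrase "large enough to generate statistical power" concrete, the sketch below shows one conventional way of estimating the required sample size for a two-arm trial. It is a hypothetical illustration only: the effect size (Cohen's d = 0.3), significance level, and target power are assumed values rather than figures drawn from this article, and the statsmodels library is used simply because it offers a standard power calculation.

```python
# Hypothetical sample-size calculation for a two-arm randomized trial.
# The effect size, alpha, and power below are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=0.3,   # assumed small-to-moderate effect (Cohen's d)
                                 alpha=0.05,        # two-sided significance level
                                 power=0.80,        # desired probability of detecting the effect
                                 ratio=1.0,
                                 alternative='two-sided')
print(f"Approximately {n_per_arm:.0f} clients are needed in each arm.")  # roughly 175 per arm
```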
In experimental studies of social work interventions,
eligible clients or entities such as mental health clinics,
dormitories, neighborhoods, and villages are randomly
allocated to each of the two or more treatment conditions.
The treatment conditions may be a social work inter-
vention program and a nontreatment control group. One
or more social work intervention programs may be
tested in one and the same study. The groups that are
constructed by random allocation do not differ system-
atically. However, they may still differ by chance.
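As a rough sketch of what such an allocation might look like in code, the hypothetical example below assigns a list of eligible clients at random to an intervention or a control condition; the client identifiers, the 1:1 two-arm design, and the fixed random seed are illustrative assumptions.

```python
# Illustrative random allocation of eligible clients to two conditions.
# Client IDs, the two-arm design, and the seed are assumptions for the example.
import random

clients = [f"client_{i:03d}" for i in range(1, 41)]  # 40 hypothetical eligible clients
random.seed(42)                                      # fixed seed so the example is reproducible
random.shuffle(clients)

half = len(clients) // 2
allocation = {c: "intervention" for c in clients[:half]}
allocation.update({c: "control" for c in clients[half:]})

print(sum(v == "intervention" for v in allocation.values()),
      "clients allocated to the intervention arm")
```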
Alternative (nonrandomized) designs to randomized controlled studies are used for a number of reasons,
including practical problems of implementation, ethical
concerns, budget and time restrictions, and unwilling-
ness to use randomized controlled studies. Alternative
designs often include before-and-after comparisons and
time-series analysis but also designs highly dependent
on mathematical (re)constructions.
The best way of empirically understanding the rela-
tively biased estimates produced by alternative designs in
comparison to the estimates of randomized controlled
studies is to compare results of different designs. To
understand the differences in bias between randomized controlled experiments and nonexperimen-
tal or “quasiexperimental” designs, researchers usually
make “between-study” and “within-study” comparisons.
In between-study comparisons of experimental and non-
experimental studies, researchers include multiple studies
conducted with different research designs. The bias in esti-
mations is calculated by looking at the relationship
between the design and the estimates of effect. Reynolds
and Temple (1995) compared three studies, Shadish and
Ragsdale (1996) compared dozens of studies, and Lipsey
and Wilson (1993) compared 74 randomized and nonrandomized studies. All of these studies show mixed results.
One major problem with these types of studies is that they
are not capable of distinguishing whether the difference
between estimates is because of design or some other
factor (Glazerman, Levy, & Myers, 2003).
Within-study comparisons estimate an intervention
program’s effect by using a randomized control group and
one or several nonrandomized comparison groups. These
studies use design replication as a method, which is a rees-
timation of the effect by using one or several comparison
groups. This type of study is capable of showing whether the estimated differences between randomized and nonexperimental study designs are because of the differences in
design and not other factors such as investigator bias, dif-
ferences in treatment environments, or implementation
itself. Glazerman et al. (2003) conducted a within-study
comparison of 12 labor market–related studies. They found
that nonexperimentally measured estimates sometimes
came close to replicating experimentally generated results
but often produced estimates that differed by margins of
importance for policy making. This is considered an esti-
mate of bias. The researchers concluded that “although the
empirical evidence from this literature can be used in
the context of training and welfare programs to improve
non-experimental research designs, it cannot on its own,
justify the use of such designs” (Glazerman et al., 2003,
p. 63). For a thoughtful account of this problem, see
Boruch (2007).
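The arithmetic behind a within-study (design replication) comparison is simple: the randomized estimate serves as the benchmark, and the bias of each nonexperimental design is read off as the difference between its estimate and that benchmark. The sketch below illustrates this with invented numbers; it is not a reanalysis of Glazerman et al. (2003).

```python
# Hypothetical design-replication comparison; all numbers are invented
# solely to show how a bias estimate is formed.
experimental_benchmark = 1200.0          # impact estimate from the randomized design
nonexperimental_estimates = {            # impacts re-estimated with nonrandomized comparison groups
    "before-and-after comparison": 2100.0,
    "matched comparison group": 1450.0,
}

for design, estimate in nonexperimental_estimates.items():
    bias = estimate - experimental_benchmark   # bias relative to the experimental benchmark
    print(f"{design}: estimate = {estimate:.0f}, estimated bias = {bias:+.0f}")
```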
Certainly, randomized controlled effect studies, when conducted properly, generate the best possible, or least biased,
estimates of effects of social work interventions. However,
it is not always possible to conduct controlled experi-
ments. At times, it can be difficult to randomize individu-
als but possible to randomize social entities, such as
school classes or schools, hospital clinics or hospitals,
neighborhoods, and villages. This type of randomization
is called “place-based” or “cluster” randomization and cir-
cumvents some of the ethical and practical problems asso-
ciated with random allocation of individuals.
Other times, all types of randomization might be
impossible because of ethical concerns, budget constraints, research practicalities, and several other reasons.
Then, we will have to use nonrandomized controlled
research designs. In a later section, I will come back to
practice and policy implications of depending on less rig-
orous effect studies.
BEYOND SINGLE EXPERIMENTAL
STUDIES: SYSTEMATIC REVIEWS
From the early 1980s, social scientists began to develop research synthesis and meta-analytic methods to synthesize
results (effect sizes) of multiple effect studies. The primary
driving force of this development was perhaps the acceler-
ating number of scientific publications. This made it diffi-
cult and complicated to access most or all publications in
one specific specialty, to control the scientific quality of the
accessed publication, and to have a reasonable overview of
the literature (state of the art) in a specific specialty.
The development of systematic reviews took off very
strongly from the mid-1990s, fueled by an increasing
awareness among professionals and decision makers, and
subsequently among the general public, of the importance of high-quality evidence in professional practice and policy making. Later, the inception and development of the Cochrane
Collaboration (www.cochrane.org) of the health-related
sciences and practices by the mid-1990s and the Campbell
Collaboration (www.campbellcollaboration.org) of the
behavioral and social sciences from early 2000 established
the science and technology of systematic research reviews
and meta-analysis.
The aim of systematic reviews is to generate scien-
tific generalizations by integrating empirical research.
The systematic research review is a broader concept
than the meta-analysis. Systematic research reviews
may focus on outcomes of effect studies as well as the-
ories, methods, or applications. Meta-analysis, on the
other hand, is used to integrate quantitative estimates of
effects of interventions. Thus, a systematic research
review may or may not include a meta-analysis.

Already by the mid-1970s, Glass (1976) launched the
term meta-analysis and defined it as “the statistical
analysis of a large collection of analysis results from
individual studies for the purpose of integrating the find-
ings” (p. 3). Later, important publications on meta-
analysis and systematic research reviews include Glass,
McGraw, and Smith (1981); Rosenthal (1984); Hunter
and Schmidt (1990); Cooper and Hedges (1994); and
Lipsey and Wilson (2001).
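As a concrete, if simplified, illustration of what integrating effect sizes involves, the sketch below pools a handful of invented standardized mean differences with fixed-effect, inverse-variance weighting. Real meta-analyses of the kind described by Lipsey and Wilson (2001) involve many further steps (study retrieval, quality assessment, heterogeneity analysis), and all numbers here are assumptions.

```python
# Fixed-effect, inverse-variance pooling of hypothetical standardized mean
# differences (d). Effect sizes and variances are invented for illustration.
import math

studies = [          # (effect size d, variance of d)
    (0.25, 0.010),
    (0.10, 0.020),
    (0.40, 0.015),
]

weights = [1.0 / var for _, var in studies]            # inverse-variance weights
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))                     # standard error of the pooled effect
print(f"Pooled d = {pooled:.2f}, "
      f"95% CI [{pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f}]")
```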
With the progress of the Cochrane Collaboration,
which develops, maintains, and distributes systematic
research reviews of health-related interventions, the term systematic was coined to designate the systematic
nature of the research reviews. The Cochrane Handbook
for Systematic Reviews of Interventions (www.cochrane.org/resources/handbook/) describes and prescribes
procedures that make a research review systematic. This
includes planning and formatting of a review, transparency
and ethical requirements, peer control, problem formula-
tion, locating and selecting of studies for reviews (e.g., use
of electronic databases, hand-searching, and unpublished
studies), study quality assessment, study inclusion and
exclusion procedures, data coding, analysis and meta-analytic standards, result interpretation, and updating of
reviews. Because the Cochrane systematic reviews
include primarily and exclusively randomized con-
trolled studies (if not indicated otherwise) and use
advanced methods to control all known biases, these
reviews are considered the highest standards of research
reviews.
With the inception of the Campbell Collaboration (aka
C2) in social and behavioral sciences in 2000, social work
acquired an abode for itself. The international Campbell
Collaboration produces, maintains, and disseminates sys-
tematic reviews of the studies of effects of behavioral and social interventions (including social work interventions).
These reviews are primarily based on randomized con-
trolled studies and on nonrandomized studies if there are
no randomized controlled studies of a specific interven-
tion of interest. The potential value of the Campbell
Collaboration—and that of its older sibling in health care,
the Cochrane Collaboration—is partly in its dedication to
“gold-standard” research reviews. The Campbell Collabo-
ration is also unique in social and behavioral sciences at
the international level because it adopts transparent and
uniform standards of evidence, specifies rigorous pro-
cedures to avoid bias in the screening of studies and
producing of synthesis, employs advanced statistical
methods, continuously updates reviews, collaborates with
end-user networks, and is multidisciplinary. Currently,
respected knowledge clearinghouses and knowledge data-
bases benchmark themselves against Campbell Collaboration standards, even if they are not able to limit themselves to those standards for the time being.
As David Sackett, one of the foremost pioneers of evidence-based practice, puts it, the value of high-quality systematic reviews is indisputable:
Because the randomized trial, and especially the systematic
review of several randomized trials, is so much more likely to
inform us and so much less likely to mislead us, it has become
the “gold standard” for judging whether a treatment does
more good than harm. (Sackett, Richardson, Rosenberg, &
Haynes, 1997, pp. 4-5)
INSECURITY OF KNOWING
Nevertheless, it is good science to recognize the lim-
its of any gold standard, because a major problem in
science is that it is impossible to know with 100% cer-
tainty what the truth is in any given research question.
This assumption has two consequences of importance
for the purpose of this article. The first one is a scientific
perspective urging more empirical evidence to under-
stand the merits of various research designs, a question that I will look at in this section. The second consequence is the implications of this uncertainty for professional
practice, a question that I will look at in the next section.
Simply put, scientific research designs are devices that
the human mind uses to filter and organize experience,
an idea that was powerfully argued for and established
by the Austrian-British philosopher Karl Popper. The merit
(or deficit) of any scientific research design is thus
related to the ability of this design to test the falsifica-
tion of a hypothesis that we have about a specific phe-
nomenon. Popper first published his The Logic of
Scientific Discovery (Logik der Forschung) in 1934 and basically revolutionized the entire idea of the nature of
growth in science. Popper famously emphasized that
“no matter how many instances of white swans we may
have observed, this does not justify the conclusion that
all swans are white” (Popper, 1934/1972, p. 27).
Popper (1934/1972) concluded that scientific theories
are only hypotheses and may be falsified and replaced
any day. Consequently, what is important for the growth
of science (evidence) is not the confirmation but the
attempted falsification of theories. For the practitioner
scientist, this means that she or he should always con-
tinue to test complex ramifications of theories and test
them in as many different types of situations as possible.
In Karl Popper’s own words,
I shall certainly admit a system as empirical or scientific only
if it is capable of being tested by experience. These consider-
ations suggest that not the verifiability but the falsifiability of
a system is to be taken as a criterion of demarcation. In other
words: I shall not require of a scientific system that it shall be
capable of being singled out, once and for all, in a positive
sense; but I shall require that its logical form shall be such that
it can be singled out, by means of empirical tests, in a nega-
tive sense: it must be possible for an empirical scientific
system to be refuted by experience. (pp. 40-41)
In August 2005, John Ioannidis published a remark-
able study with the provocative title “Why Most
Published Research Findings Are False.” He summa-
rized his finding as follows:
There is increasing concern that most current published
research findings are false. The probability that a research
claim is true may depend on study power and bias, the number
of other studies on the same question, and, importantly, the
ratio of true to no relationships among the relationships
probed in each scientific field. In this framework, a research
finding is less likely to be true when the studies conducted in
a field are smaller; when effect sizes are smaller; when there
is greater number and lesser pre-selection of tested relation-
ships; where there is greater flexibility in designs, definitions,
outcomes, and analytic modes; when there is greater financial
and other interests and prejudice; and when more teams are
involved in a scientific field in chase of statistical signifi-cance. Simulations show that for most study designs and set-
tings, it is more likely for a research claim to be false than
true. Moreover, for many scientific fields, claimed research
findings may often be simply accurate measures of the pre-
vailing bias. (Ioannidis, 2005, p. 696)
Given that the scientific community is confronted
with this major problem, what can be done to improve
the situation? Ioannidis suggests that scientists should
try to obtain better powered evidence from large studies
and low-bias meta-analyses; be aware of the fact that it
is misleading to emphasize the statistically significant
findings of any single research team because what
matters is the totality of evidence; and instead of chas-
ing statistical significance, scientists should improve our
understanding of the range of R values, that is, the ratio
of the number of “true relationships” to “no relation-
ships” among those tested in the field.
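To illustrate the role of the R value, the sketch below computes the post-study probability that a claimed finding is true (the positive predictive value) in the simplest form of the Ioannidis (2005) framework, ignoring bias and multiple competing teams; the numerical inputs plugged in are illustrative assumptions.

```python
# Positive predictive value of a claimed finding in the simplest Ioannidis (2005)
# case: PPV = (1 - beta) * R / (R - beta * R + alpha), where R is the pre-study
# ratio of true to null relationships in the field. Inputs below are assumptions.
def ppv(power: float, alpha: float, r: float) -> float:
    beta = 1.0 - power
    return (1.0 - beta) * r / (r - beta * r + alpha)

print(f"Well-powered field, R = 1:1  -> PPV = {ppv(power=0.80, alpha=0.05, r=1.0):.2f}")   # about 0.94
print(f"Underpowered field, R = 1:10 -> PPV = {ppv(power=0.20, alpha=0.05, r=0.1):.2f}")   # about 0.29
```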
If this is the state of the art of scientific evidence at
present, it is most likely that we cannot attain the gold
standard of evidence and will have to live with the
awareness of insecurity of knowing. So what does this
mean for a profession such as social work? If we cannot
know with 100% certainty whether a social work inter-
vention works or is harmful, should we then stop inter-
vening at all? Or how can this uncertainty of knowing be
handled within the framework of social work practice?
This will be the issue I address in the next section.
EVIDENCE-BASED SOCIAL
WORK PRACTICE
Seen in a historical perspective, contemporary mod-
els of evidence-based practice and policy making were
developed largely to manage concerns related to the
uncertainty of knowing how an intervention would work
in individual cases in real-life situations. Yes,
well-conducted and robust randomized controlled stud-
ies and well-conducted meta-analyses of a set of high-
quality effectiveness studies say a lot as to whether an
intervention works or is harmful on a generalized level by providing us with impact estimates. But will an inter-
vention work in individual cases? Or will an interven-
tion work in a social context for which it was not tested?
Here comes the uncertainty! Originally, evidence-based
practice was conceived as a model for medical practice.
It was defined, not incidentally but purposefully, as “the
conscientious, explicit and judicious use of current best evi-
dence in making decisions about the care of individual
patients” (Sackett et al., 1997, p. 2). The term current best
evidence is the most explicit expression of the recognition
that we cannot know with 100% certainty and thus the gold
standard is unattainable in this sense. Therefore, the original
developers of evidence-based medicine built their model of
intervention on a platform where three fundamental factors
intersect: current best evidence; physicians’ professional
expertise; and patients’ predicaments, rights, and prefer-
ences. This basic model was later transported to other fields
of human services, including social work practice.
It is instructive to read more about how the idea of evidence-based medicine was formulated by its originators:
The practice of evidence-based medicine means integrating indi-
vidual clinical expertise with the best available external clinical
evidence from systematic research. By individual clinical exper-
tise, we mean the proficiency and judgment that individual clini-
cians acquire through clinical experience and clinical practice.
Increased expertise is reflected in many ways, but especially in
more effective and efficient diagnosis and in the more thoughtful
identification and compassionate use of individual patients’
predicaments, rights and preferences in making clinical decisions
about their care. By best available external clinical evidence,
we mean clinically relevant research, often from the basic
sciences of medicine, but especially from patient-centered clini-
cal research into the accuracy and precision of diagnostic tests
(including the clinical examination), the power of prognostic
markers and the efficacy and safety of therapeutic, rehabilitative
and preventive regimens. External clinical evidence both invali-
dates previously accepted diagnostic tests and treatments and
replaces them with ones that are more powerful, more accurate,
more efficacious and safer. (Sackett et al., 1997, p. 2)
The language of Sackett and his colleagues was of
course developed for the purposes of the medical profes-
sions. Later, other human services scientists and practitioners developed a language better fit for their own
profession and professional context. In social work, early and prominent examples of such translators include
Gambrill (1999, 2001), Gibbs (2003), Gibbs and Gambrill
(2002), Macdonald (1999), and Sheldon (2003).
The process of evidence-based practice was trans-
lated to the context of social work practice as a seven-
step process model (Gibbs, 2003), summarized below:
Step 1: Become motivated to apply evidence-based practice.
Step 2: Convert information need into an answerable question.
Step 3: Track down best available evidence to answer the question.
Step 4: Appraise the evidence critically.
Step 5: Integrate evidence with practice experience and characteristics of client or situation.
Step 6: Evaluate effectiveness and efficiency in exercising the steps.
Step 7: Teach others to do the same.
A very useful model was also presented by Haynes,
Devereaux, and Guyatt (2002). In the social work lan-
guage, this model shows the interplay between research
evidence, a social work agency’s state and circumstance,
and clients’ preferences and actions. In the intersection of
these three fields, professional expertise operates and facilitates the interplay among them (Figure 1).
Evidence-based practice operates in a world that is not
perfect. The evidence needed to know whether an inter-
vention works or is harmful might be lacking, incomplete,
or uncertain. Even though an intervention was proven to work, it might not work in specific client contexts. Even
though an intervention has the best prognosis, it might not
be possible to implement because of client refusal, cost
issues, or agency organizational deficiencies. In all such
situations, and others, the golden rule of evidence-based
practice is to create transparency and open communica-
tions with the client and other stakeholders related to the
client. Especially important are the instances in which a
higher degree of uncertainty about intervention outcomes
is present; this requires a high degree of professional skills
to try to take the best possible measures for the good of the
patient, in particular to avoid harm.

In sum, evidence-based practice was developed as a
solution to practical problems related to the implementa-
tion of interventions in real-life situations. Later, it was also
made explicit that evidence-based practice has other very
important advantages. By using best current evidence to
understand what is harmful and what works and then inte-
grating this knowledge with clients' preferences and values
as well as with agency realities, evidence-based social
work practice is ethical, democratic, sensitive to profes-
sional experience, faithful to client values and acceptance,
and open to a reasonable assessment of the economic feasibility of an intervention. Arguments and the empirical
basis for these aspects of evidence-based practice in social
work are presented, for example, by Gambrill (2004,
2006) and by Mullen and Streiner (2004).
CONCLUSIONS
1. Interventions are central to social work practice, and the profession needs to understand whether social work interventions work, are harmful, or are promising.
2. Randomized controlled research designs generate the least-biased estimates of the effectiveness of social work interventions.
3. Although randomized controlled effectiveness studies and low-bias meta-analyses produce the best possible estimates, these are still estimates. Thus, knowing what works and what is harmful with 100% certainty is not possible.
4. There is also the problem of knowing whether interventions that work on a statistical estimate level will work in every single intervention case.
5. Evidence-based social work practice offers a sensible alternative in terms of translating and implementing research evidence in real-life social work situations.
6. Evidence-based practice is ethical, democratic, faithful to client acceptance, and sensitive to economic feasibility, and it makes use of clinical experience.
Figure 1: An Updated Model for Evidence-Based Clinical Decisions. SOURCE: Adapted from Haynes, Devereaux, & Guyatt (2002). Clinical expertise in the era of evidence-based medicine and patient choice. Evidence-Based Medicine, 7, 36-38.
REFERENCES
Blank, R. (2002). Evaluating welfare reform in the United States.
Journal of Economic Literature, 40, 1105-1166.
Boruch, R. (2007). Encouraging the flight of error: Ethical stan-
dards, evidence standards, and randomized trials. New Directions
in Evaluation, 113, 55-73.
Botvin, G. J., Griffin, K. W., & Nichols, T. R. (2006). Preventingyouth violence and delinquency through a universal school-based
prevention approach. Prevention Science, 7 , 403-408.
Cooper, H., & Hedges, L. V. (Eds.). (1994). The handbook of
research synthesis. New York: Russell Sage.
Ennett, S. T., Tobler, N. S., Ringwalt, C. L., & Flewelling, R. L.
(1994). How effective is drug abuse resistance education? A
meta-analysis of Project DARE outcome evaluations. American
Journal of Public Health, 84, 1394-1401.
Evaluating the definition of social work practice [Special issue].
(2003). Research on Social Work Practice, 13(3).
Fraguela, J. A., Martin, A. L., & Trinanes, E. A. (2003). Drug-abuse
prevention in the school: Four-year follow-up of a program.
Psychology in Spain, 7 , 29-38.
Gambrill, E. (1999). Evidence-based practice: An alternative toauthority-based practice. Families in Society, 80, 341-350.
Gambrill, E. (2001). Social work: An authority-based profession.
Research on Social Work Practice, 11, 166-175.
Gambrill, E. (2004). Contributions of critical thinking and evidence-
based practice to the fulfillment of the ethical obligations of pro-
fessionals. In H. Briggs & T. L. Rzepnicki (Eds.), Using evidence
in social work practice (pp. 3-19). Chicago: Lyceum.
Gambrill, E. (2006). Evidence-based practice and policy: Choices
ahead. Research on Social Work Practice, 16 , 338-357.
Gibbs, L. E. (2003). Evidence-based practice for helping profes-
sions: A practical guide with integrated multimedia. Pacific
Grove, CA: Brooks/Cole–Thompson Learning.
Gibbs, L. E., & Gambrill, E. (2002). Evidence-based practice:
Counterarguments to objections. Research on Social Work
Practice, 12, 452-476.
Glass, G. V. (1976). Primary, secondary, and meta-analysis of research. Educational Researcher, 5, 3-8.
Glass, G. V., McGraw, B., & Smith, M. L. (1981). Meta-analysis in
social research. Beverly Hills, CA: Sage.
Glazerman, S., Levy, D. M., & Myers, D. (2003). Non-experimental
versus experimental estimates of earnings impact. Annals of the
American Academy of Political and Social Sciences, 589, 63-93.
Griffin, K. W., Botvin, G. J., & Nichols, T. R. (2006). Effects of a
school-based drug abuse prevention program for adolescents on HIV
risk behaviors in young adulthood. Prevention Science, 7 , 103-112.
Haynes, R. B., Devereaux, P. J., & Guyatt, G. H. (2002). Clinical
expertise in the era of evidence-based medicine and patient
choice. Evidence-based Medicine, 7 , 36-38.
Holosko, M. J. (2003). The history of the working definition of
practice. Research on Social Work Practice, 13(3), 271-283.
Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis.
Correcting error and bias in research findings. Newbury Park:Sage.
International Federation of Social Workers. (2000). Definition of
social work. Retrieved August 13, 2007, from http://www.ifsw.org/en/p38000208.html
Ioannidis, J. P. A. (2005). Why most published research findings are
false. PLoS Medicine, 2(8), 696-701. Retrieved August 13, 2007,
from http://medicine.plosjournals.org/perlserv/?request=get-document
&doi=10.1371/journal.pmed.0020124
Lipsey, M. W., & Wilson, D. B. (1993). The efficacy of psychologi-
cal, educational, and behavioral treatment: Confirmation from
meta-analysis. American Psychologist , 48, 1181-1209.
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis.
Thousand Oaks, CA: Sage.
Macdonald, G. (1999). Evidence-based social care: Wheels off the
runway? Public Money and Management , 19, 25-32.
McKernan, S.-M., & Ratcliffe, C. (2006). The effect of specific wel- fare policies on poverty. Washington, DC: Urban Institute.
Mullen, E. J., & Streiner, D. L. (2004). The evidence for and against
evidence-based practice. Brief Treatment and Crisis Intervention,
4, 111-121.
National Association of Social Workers. (2007). Code of ethics. Retrieved
August 13, 2007, from http://www.socialworkers.org/pubs/code/
default.asp
Olds, D., Henderson, C. R., Cole, R., Eckenrode, J., Kitzman, H.,
Luckey, D., et al. (1998). Long-term effects of nurse home visitation
on children’s criminal and antisocial behavior: 15-year follow-up
of a randomized controlled trial. Journal of the American
Medical Association, 280(14), 1238-1244.
Olds, D., Robinson, J., Pettitt, L., Luckey, D., Holmberg, J., Ng, R. K.,
et al. (2004). Effects of home visits by paraprofessionals and bynurses: Age 4 follow-up results of a randomized trial. Pediatrics,
114, 1560-1568.
Petrosino, A., Petrosino-Turpin, C., & Buehler, J. (2002). Scared
Straight and other juvenile awareness programs for preventing
juvenile delinquency. Available from the Campbell Library,
http://www.campbellcollaboration.org/doc-pdf/ssr.pdf
Popper, K. R. (1972). The logic of scientific discovery. London:
Hutchinson. (Original work published 1934)
Reynolds, A. J., & Temple, J. A. (1995). Quasi-experimental esti-
mates of the effects of a preschool intervention: Psychometric
and econometric comparisons. Evaluation Review, 19, 347-373.
Rosenthal, R. (1984). Meta-analytic procedures for social research.
Beverly Hills, CA: Sage.
Sackett, D. L., Richardson, W. S., Rosenberg, W., & Haynes, R. B.
(1997). Evidence-based medicine: How to practice and teach
EBM . New York: Churchill Livingstone.
Shadish, W. R., & Ragsdale, K. (1996). Random versus nonrandom
assignment in psychotherapy experiments: Do you get the same
answer? Journal of Consulting and Clinical Psychology, 64,
1290-1305.
Sheldon, B. (2003). Brief summary of the ideas behind the Centre for
Evidence-based Social Services. Retrieved August 16, 2006,
from http://www.ex.ac.uk/cebss/introduction.html
Soydan, H. (1999). The history of ideas in social work . Birmingham,
UK: Venture.
Soydan, H. (2007). Improving the teaching of evidence-based prac-
tice: Challenges and priorities. Research on Social Work Practice, 17(5), 612-618.
Swedner, H. (1985). Forskning i socialt arbete: Dess historiska bakgrund och utvecklingsmöjligheter [Research in social work: Its his-
torical background and growth potential]. Gothenburg, Sweden:
Göteborg University.
U.S. General Accounting Office. (2003). Youth illicit drug use pre-
vention: DARE long-term evaluations and federal efforts to iden-
tify effective programs (Report GAO-03-172R). Washington, DC:
U.S. Government Printing Office.
West, S. L., & O’Neal, K. K. (2004). Project DARE outcome
effectiveness revisited. American Journal of Public Health, 94,
1027-1029.