Experimentation in Counseling and Psychotherapy


Part I: New Myths About Old Realities

JOHN J. HORAN
The Pennsylvania State University

In his 1976 presidential address to the American Educational Research Association, Gene Glass suggested that we have found ourselves in "the mildly embarrassing position of knowing less than we have proven." He coined the term "meta-analysis" to refer to a particular method of extracting information from a large accumulation of individual studies. In this talk and in subsequent publications, Glass and his colleague Mary Lee Smith (Glass, 1978; Smith & Glass, 1977) applied meta-analysis to a large population of counseling and psychotherapy outcome studies. They concluded that aggregated psychotherapies do indeed work and that the various individual psychotherapies are equally effective.
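The metric at the heart of this procedure is worth making explicit. Glass's method converts each study's outcome to a standardized mean difference and then averages these values across studies; the formula below is the standard textbook rendering, not one printed in this article:

\[
\Delta \;=\; \frac{\bar{X}_{\text{treatment}} - \bar{X}_{\text{control}}}{s_{\text{control}}}
\]

where \(\bar{X}\) denotes a group's mean on the outcome measure and \(s_{\text{control}}\) the control group's standard deviation. The Smith and Glass (1977) conclusions rest on averages of many such \(\Delta\) values.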

The adequacy of these conclusions, of course, depends on the imperviousness of the 1977 psychotherapy meta-analysis to logical assault and also on the degree of confidence one has in the individual studies that the meta-analysis attempted to collate. The former topic will be explored later in Part II of this paper; the sorry state of our counseling and psychotherapy literature is of more immediate concern.

In contrast to Glass, it is my belief that we find ourselves in the terribly embarrassing position of having proven far less than we purport to know. There is a quantum leap between our experimental literature and our methodological sophistication. We now know what's wrong with our data, but too many of us pretend to our students and to our public that there is solid empirical evidence behind our varied proclamations. Like the seers of ancient Greece we perpetuate our own Olympian myths with the most specious of arguments rather than admit our ignorance of the natural phenomena in question.

The page allocations of this journal prevent me from reciting the entire anthology of fairy tales which pervade our profession. Instead I will focus on three myths of relatively recent vintage that portend to be classic examples of self-deceit. Briefly, we mistakenly believe and profess that the subjects in our experiments receive treatments appropriate to their clinical problems, that the treatments are, in fact, deployed as purported, and that our customary control groups allow us to determine the existence of a treatment effect. I have labeled these myths, respectively, as The Appropriate Treatment Myth, The Treatment Deployment Myth, and The Control Group Myth; I will discuss each in turn.

The Appropriate Treatment Myth

Publication of Campbell and Stanley's (1966) classic little book on experimental design had an enormous impact on the field of counseling and psychotherapy. Even today, dissertation proposals which do not quite fit the experimental mode are viewed with a jaundiced eye. It is regrettable that we do not have a similar Campbell and Stanley "bible" to guide our empirical conduct in the area of clinical problem definition. Most counseling outcome studies rest on subject screening criteria or pretest measures that may give the illusion of rigor, but in fact provide insufficient information on which to make an appropriate treatment decision.

This article is based on a Vice Presidential state-of-the-art address entitled "Experimentation in Counseling and Psychotherapy: New and Renewed Mythologies" delivered at the Annual Meeting of the American Educational Research Association, Boston, April 1980. Part II of this article, now in preparation, was originally presented under the subtitle "The Renewed Myths of Meta-analysis." These remarks will be expanded to include a review of a new psychotherapy meta-analysis not yet published.

John J. Horan is Professor of Education, The Pennsylvania State University, College of Education, Division of Counseling and Educational Psychology, Carpenter Building, University Park, PA 16802.

    December, 1980 5


Consequently, a substantial percentage of any subject pool invariably receives a counseling intervention that is theoretically irrelevant to their actual clinical problem. Let me cite several examples.

Many subjects are operationally labeled "phobic" because they refuse to approach or handle a snake, spider, rat, or whatever, and their verbal reports indicate a similar reluctance. Although such avoidance behavior is typical of truly phobic subjects, it is also displayed by subjects who are adaptively skeptical. (Behavioral-approach-test animals may be nonpoisonous, but they are perfectly capable of biting.) Furthermore, this particular operational umbrella covers a goodly number of nonphobic people who essentially have erroneous (and not so erroneous) beliefs about the animal, such as its being slimy, dirty, or a carrier of disease. Still other subjects will have widely differing degrees of fear, skepticism, and mistaken belief in combination. A treatment such as desensitization would be theoretically appropriate only to the truly phobic characteristics of a subject pool, and these characteristics may be minor or possibly even nonexistent.

In the counseling and psychotherapy literature, the only thing more common than small animal phobia studies are complaints about such studies (see, e.g., Barrios, 1977; Bernstein & Paul, 1971; Cooper, Furst, & Bridger, 1969). Perhaps the Appropriate Treatment Myth would be better illustrated with the clinical problem of test anxiety. The classic 1908 Yerkes-Dodson law posits a curvilinear relationship between anxiety and performance. This law suggests that a moderate amount of test anxiety may be helpful to students desiring higher grades. Strictly speaking then, we cannot assume that a clinical problem of maladaptive test anxiety exists unless we can document that the anxiety produced by "testing stimuli" in turn yields lowered performance levels. I have yet to encounter a counseling outcome study that clearly established the existence of such maladaptive test anxiety in its subject pool.

Be that as it may. Even if we assume verbal reports or the act of volunteering for treatment to be sufficient grounds for the establishment of maladaptive test anxiety¹, there are still many different clinical problems falling under this generic label and no single treatment is appropriate to all of them. Cue-controlled relaxation, for example, might be theoretically relevant to the acute anxiety experienced by a previously unanxious high achiever who now faces an entrance examination for a professional school. It would probably be inappropriate, however, for students with deficient reading or study skills whose self-reported test anxiety is a consequence rather than a cause of chronic poor performance. Moreover, any treatment other than cognitive restructuring would have highly debatable relevance to the anxious perfectionist who believes that a less than "curve-setting" performance would be absolutely catastrophic. Finally, at least one form of test anxiety is essentially untreatable, namely the natural consequence of a decision to play instead of to study.

In the counseling and psychotherapy literature, other examples of inappropriate treatments applied to crudely defined clinical problems abound. Many instances of "unassertiveness," for example, are really decision-making concerns rather than skill deficits (see Fiedler & Beach, 1978). Thus, the frequently deployed procedure called "behavioral rehearsal" would be irrelevant to subjects who can already act in an assertive, or extinguishing, or polite, or empathic manner, but who adaptively wonder which response pattern will maximize the probability of getting promoted, making a sale, salvaging a familial relationship, or acquiring some other utility. To paraphrase the words of my good friend and colleague George Hudson, even in university settings supposedly characterized by higher levels of rationality and receptivity to honest communication, it sometimes shows a fine command of the language to say nothing!

The Appropriate Treatment Myth owes its existence to two common lapses of thought. The first involves the erroneous assumption that because we have a baptismal name for our screening criteria or dependent measures, we therefore must be assessing a homogeneous clinical concern. From this precarious cognitive precipice it is but a short hop to the equally mistaken belief that because our favorite counseling intervention may be theoretically linked to a particular form of that problem, it must consequently be relevant to the entire subject pool. In point of fact, virtually all clinical problems mentioned in the titles of our journal articles are essentially crude general descriptions of specific client concerns that probably require differential treatment.

Failure to recognize the Appropriate Treatment Myth has three serious consequences. In the first place, inclusion of subjects whose actual clinical problem is irrelevant to the experimental treatment will lower or indeed wash out the average impact of that treatment. Even if the study is fortunate enough to escape this particular type II error, the emerging "significant" gain will inevitably be trivial by clinical standards. Though science does indeed advance by small steps, our counseling and psychotherapy literature is plagued by artifacts that are as frequent and as powerful as our most effective treatments (see Badia, Haber, & Runyon, 1970; Barber, 1976; Rosenthal & Rosnow, 1969). I for one would derive considerable comfort from the knowledge that at least one counseling treatment can consistently make a whopping big difference on one particular kind of clinical problem, however narrowly defined.
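The washout argument can be made concrete with a bit of hypothetical arithmetic (the figures below are illustrative assumptions, not data from any study). If only a proportion \(p\) of the subject pool actually has the clinical problem the treatment targets, and the treatment produces a true standardized effect of \(\delta\) for those subjects and no effect for the rest, the effect observed for the pool as a whole shrinks to

\[
\Delta_{\text{observed}} \;=\; p\,\delta \;+\; (1 - p)\cdot 0 \;=\; p\,\delta .
\]

Thus a clinically substantial \(\delta = 1.00\) delivered to a pool in which only a quarter of the subjects have the targeted problem yields an observed effect of .25 — small enough to be dismissed as trivial or missed altogether.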

In addition to obscuring the effects of a potentially powerful treatment, failure to respond to the Appropriate Treatment Myth can erode our understanding of why a particular effect did indeed emerge. To illustrate this point, let us consider the treatment of phobias.

6 Educational Researcher

When desensitized subjects move a foot closer to test animals than alternatively treated controls, researchers commonly conclude that their theoretically relevant treatment has caused a reduction in fear. It is possible, however, that actual phobic characteristics of the subject pool may remain unchanged; the gain in fact may be due to certain contextual variables of the treatment which inadvertently altered skepticism levels or mistaken beliefs. For example, certain scenes in the desensitization hierarchy might underscore the notion that the animal is absolutely passive and harmless, or the scenes might contain new information such as the snake skin being cool and dry as opposed to wet and slimy. In this instance the researcher erroneously credits the theoretical framework of desensitization for producing a significant but trivial effect which might have been enormously magnified had an alternative treatment specifically addressed adaptive skepticism and/or mistaken belief.

The previous two consequences of failure to recognize the Appropriate Treatment Myth chronically occur. Linda Craighead has suggested to me the possibility of a third consequence. Perhaps the situation exemplified by "three studies reporting superiority of treatment A over B vis-à-vis four studies claiming victory for B over A" is really a function of idiosyncratic subject pool characteristics. In other words, the magnitude of effect varies with the relevance of the treatment to the particular majority of the subjects. As the constellation of actual clinical problems changes from study to study, so might the outcome.

The Treatment Deployment Myth

The Treatment Deployment Myth is really a generic name for a number of interrelated delusions about how our counseling and psychotherapy treatments are implemented in the context of an experimental study. We are vastly mistaken if we think that our treatments are standardized, that they necessarily correspond to the theoretical principles on which they are supposed to be based, and that they are in fact received by the subjects in a given study. Let me briefly address each of these delusions.

(1) The Standardized Treatment Delusion. Our literature suggests that we have few if any standardized treatments. Unlike the pharmacologist whose independent variables are capable of being held constant across time, geography, and publication outlet, counseling and psychotherapy interventions routinely vary on all conceivable dimensions. Even the originators of our treatment strategies rarely replicate the identical procedure from study to study, so it should hardly come as a surprise to find their students and peers in the research community making further alterations.

To illustrate, consider the rapid-smoking treatment for cigarette addiction. Studies purporting to test this seemingly circumscribed procedure have in fact varied on (a) the numbers and nicotine ratings of cigarettes consumed, (b) the amount of time smoking per trial and the number of trials per session, (c) the number and spacing of treatment sessions, (d) treatment group size, and (e) the presence or absence of therapeutic relationship qualities, homework assignments, booster sessions, and so forth (see Danaher, 1977). As one might expect, the outcomes of these endeavors have also been quite variable. Similar diversity of course exists in the literatures of desensitization and modeling. If I were to ask for a full description of the procedure commonly known as "behavioral rehearsal," I'm sure dozens of differing operational definitions would emerge.

The effects of the Standardized Treatment Delusion are not entirely disadvantageous. For example, one might argue rather convincingly on the need to avoid prematurely freezing our treatment programs. In so doing we might shut off the opportunity for conceptual and pragmatic improvements, not to mention the possibility of serendipitous findings. On the other hand, capturing the consensus of our literature on the efficacy of a fluidly defined technique is a bit like trying to pick up mercury with one's fingers. It is hard to get a hold of something to say.

The irony here is that our current methodological sophistication allows us the opportunity to enjoy the best of both worlds, consistency and diversity. Component, parametric, constructive, and dismantling analyses, for example, permit the replication of important treatment effects while at the same time allowing the investigator the opportunity to explore whatever other variables are of interest. Regrettably, these roads remain relatively untraveled.

(2) The Theory-Practice Congruence Delusion. Several philosophers of science have fully discussed the logical error of believing that the emergence of a particular hypothesized treatment effect confirms the underlying theory (e.g., Cook & Campbell, 1979; Mahoney, 1976; Popper, 1959; Weimer, 1976). There is a more fundamental delusion, however, that undergirds our literature, namely, the belief that our counseling interventions necessarily correspond to the theoretical principles on which they are supposed to be based. Let me cite several glaring examples of theory-practice incongruity.

We are all aware of the tenets of classical client-centered therapy (Rogers, 1959, 1961). Unconditional positive regard, for example, by definition, precludes the faintest hint of therapist-imposed values. Many of us are also familiar with Truax's (1966) illuminating analysis of Carl Rogers in practice; Truax conclusively showed that Rogers differentially reinforced, via verbal conditioning, those kinds of client statements seen by Rogers as desirable. What then is client-centered counseling? Is it what Rogers says he does (i.e., his theory)? Or is it what he in fact does (i.e., his practice)? From an empirical standpoint we can clean up the situation by either revising the theory of client-centered therapy or by excluding all data produced by erratically behaving counselors including Rogers himself. Our literature suggests we have done neither.

In the foregoing example, we find the proponent of a technique in violation of his theoretical principles. Can we thus seriously expect antagonistic individuals to provide adequate representation of a given theory or practice in the context of their experiments? We all know of behaviorists who arrogantly label their placebo treatments as "client-centered therapy" on the basis of superficial similarities while ignoring fundamental differences. Perhaps less well known or acknowledged is the large number of so-called "behavioral" projects conducted by individuals who seemingly haven't the foggiest understanding of the principles and practices they purport to examine. Walt Disney's skunk named "Flower" was still a skunk. Simply because a study claims to examine a given intervention does not mean that the intervention was, in fact, adequately examined.

My final example of theory-practice incongruity concerns those theoretical principles that seem to defy implementation in counseling practice even by the most well-versed and dispassionate experimenters. The theory underlying negative reinforcement, for example, demands that the escape response (e.g., an adaptive target behavior) produces a cessation of the noxious stimulus. Yet in the counseling strategy labeled "covert negative reinforcement," the noxious stimulus (an unpleasant image) is terminated before the adaptive behavior is begun. Similar implementation difficulties exist with other interventions such as coverant control, time out, and response cost (see Horan, 1979; Mahoney, 1974).

I wish there were a simple cognitive restructuring remedy for the Theory-Practice Congruence Delusion which pervades our professional literature. There is not. I take little comfort in the atheoretical cop-out offered by others: "Forget the theory," they say; "let the operations and emergent data speak for themselves." True enough in the short run, but eventually we must present to our consumer audience and to our contemporaries in other professions a set of coherent (albeit evolving) theoretical principles supported by data gathered in practice. It seems to me the time has come for counseling and psychotherapy editorial reviewers to pay less attention to issues such as the comparative merits of ANCOVA versus Repeated Measures ANOVA in a particular manuscript and focus more on the oftentimes missing link between the conceptual basis of a study and its implementation.

(3) The Subject Receptivity Delusion. The first two delusions supporting the Treatment Deployment Myth concern matters that are to some degree under the control of the experimenter. The author of a study decides which version of a "standard" treatment he or she wishes to evaluate, and moreover determines whether or not the treatment corresponds to the principles on which it is supposed to be based. Authors do not necessarily control, however, what their subjects do with the treatment. In pharmacological research, this problem is called "cheeking the pill" (instead of swallowing), and there are simple ways to deal with it. The field of counseling and psychotherapy, however, is not so fortunate.

To illustrate, much has been written about the stimulus control approach to the treatment of obesity. The logic of stimulus control rests on the assumption that the eating behavior of obese subjects is essentially "out of control"; that is, they purportedly take large bites, eat rapidly, and let extraneous factors such as time of day and the availability of food determine how much they eat. Apart from the fact that these propositions have come under some empirical assault (e.g., Mahoney, 1975), we have remarkably little evidence to support our further assumption that obese individuals who are given stimulus control training actually alter their eating style upon leaving the counseling cubicle. The stimulus control treatment of obesity typifies the perplexing situation in which a powerful treatment effect can be expected to occur in spite of the fact that subjects may routinely "cheek the pill."

The converse of this situation exists, of course, when a potentially powerful treatment is for some undetermined reason ignored by the subjects and a null effect ensues. In a recent component analysis of stress inoculation, for example, we found that self-instructions training was conspicuously ineffective on all outcome measures (Hackett & Horan, 1980). In contrast, two other categories of coping-skill training definitely proved their worth. A check on the independent variable manipulation, however, revealed that only half of the subjects who received self-instructions training actually put that training into practice. For the other two coping-skill categories, adherence to the treatment was nearly universal.

Independent variable manipulation analyses are routinely conducted in certain areas of education and psychology, but they are surprisingly rare in the counseling and psychotherapy literature. One would think that experimenters themselves might wonder if high percentages of subjects in the various treatment conditions were, in fact, doing what they were supposed to be doing (and not doing what they shouldn't be doing). Certainly this sort of information would greatly enhance our understanding of both null and positive effects.

Failure to rectify the three delusions supporting the Treatment Deployment Myth exacerbates the consequences of The Appropriate Treatment Myth; namely, we increase the risk of washing out treatment effects which might otherwise occur and we thoroughly obfuscate the meaning of those that do emerge. The final myth that I wish to address here, however, is perhaps the most problematic of all.



    The Control Group Myth

In the counseling and psychotherapy literature, authors invariably write as if their control groups had received "everything but" the experimental treatment. In point of fact, "anything but" would be a more apt descriptor. This distinction is extremely important because the nature of the control condition has profound implications for the proper interpretation of what might appear to be a treatment effect. By The Control Group Myth I mean the common but erroneous belief that the inclusion of a randomly assigned control condition allows one to determine whether or not the experimental treatment made a difference. Possibly so, but usually not. To place this issue in perspective, a brief survey of counseling and psychotherapy control groups might be helpful.

Control group variations are legion. We have no-treatment controls and delayed-treatment controls, each of which exist under varying levels of therapist contact, attention, concern, and hope for the future. We also have what are called "placebo controls." Placebo controls are supposed to be theoretically inert alternative treatments; however, when placebos are found to "work" (as is frequently the case), we rename them and build our careers on subsequent theory development.

Then there are minimal treatment controls, which involve the deployment of active counseling interventions in quantities judged too small to make a difference, alternative treatment controls in which no amount of treatment is expected to make much of a difference, and standard treatment controls which pit our experimental interventions against the best, or at least modal, practices in the field. And the list goes on. We have counterdemand phases which allow the measurement of improvement in spite of posited subject expectations to the contrary, and what might be called "countertreatment controls" which theoretically produce deterioration in spite of posited subject expectations for improvement.

In the midst of all these variations and permutations, it is easy to lose sight of why we bother with control groups in the first place. Investigators who use no-treatment controls or delayed-treatment controls are essentially asking, "Did anything happen at all?" They view placebo influences as either nonexistent or trivial, or at least not important to distinguish from the effects of treatment per se. The problem here, of course, is that the treatment itself may be nothing more than a placebo.

In contrast, investigators who employ alternative activity control treatments would like us to believe that they have controlled the placebo problem. Aye, but here's the rub: The placebo phenomenon is not necessarily a function of what we in fact do to our subjects, but rather what they believe we are doing to them. Thus researchers who compare humdrum bibliotherapy with fancy experimental treatments involving lab coats, lights, whistles, and buzzers routinely fail to realize that the emergent significant differences on outcome measures might well be a function of differential subject expectations for improvement.

Equalizing "minutes-of-therapist-contact-time" by adding "verbal filler" does not resolve the problem and may even compound it. Such "psychobabble" could conceivably alienate subjects and erode whatever placebo influences the control treatment would otherwise muster.

A basic question which goes unanswered in most experimental studies of counseling and psychotherapy is essentially this: Did the subjects in the experimental and control conditions expect equivalent amounts of benefit? Kazdin and Wilcoxon's (1976) timely review of the desensitization literature, for example, found only 5 out of 98 projects that provided such assurances. Incidentally, only one of these projects unequivocally supported the efficacy of desensitization, and desensitization is often considered to be the most empirically validated treatment strategy in the field of counseling and psychotherapy.

Recent breakthroughs in our understanding of the biology and psychology of pain dramatically illustrate the need for counseling and psychotherapy researchers to ensure that their control treatments generate equivalent expectations for improvement. There is evidence, for example, suggesting that the mere belief that one is receiving a pain-killing drug actually causes one's body to produce and secrete endorphin, a form of opium (e.g., Levine, Gordon, & Fields, 1978). The placebo phenomenon thus may have biochemical reference points!

The problem of differential subject expectations for improvement is known by a variety of names in the methodological literature. Some authors speak of differential demand characteristics; others refer to differential credibility or believability; still others list "rival hypotheses" which exist in spite of random assignment (see Cook & Campbell, 1979; Jacobson & Baucom, 1977; Kazdin, 1979; Lieberman & Dunlap, 1979; Loney & Milich, 1978; and O'Leary & Borkovec, 1978). Fine lines of distinction might be drawn between each of these concepts, but it is not important to do so now. Generally speaking, in the counseling and psychotherapy literature we do not need a new name for the placebo phenomenon, just a more widespread realization that any so-called control treatment which does not generate equivalent subject expectations for improvement does not, in fact, control for the placebo phenomenon. And unless, of course, we contain this widespread and powerful artifact, we cannot speak pridefully of a treatment effect regardless of the altitude of the obtained significance level.

Essentially then, before we can answer global questions concerning the overall effectiveness of counseling and psychotherapy, we need to establish a data base that inspires confidence. Unfortunately, however, our experimental literature constitutes, in many respects, a contemporary Grimm mythology. Contrary to


popular opinion, our experimental subjects often do not receive treatments appropriate to their clinical problems. Moreover, our treatments are frequently not deployed as purported, and finally our so-called control groups rarely address one of the most powerful artifacts of all. In spite of these faulty beliefs and customs, we now have the methodological sophistication to lay a firm conceptual and empirical basis for our field. But unless we choose to purge these myths from our midst, the practice of counseling and psychotherapy will remain just that.

    Notes

¹ In so doing, we are in effect saying that it does not matter that the subjects may be performing better because of the anxiety; the fact that they do not like the anxiety is reason enough to try and reduce it. This concession can cause a curious logical contradiction in those studies using Grade Point Average as an ancillary dependent measure!

    References

Badia, P., Haber, A., & Runyon, R. P. Research problems in psychology. Reading, Mass.: Addison-Wesley, 1970.

Barber, T. X. Pitfalls in human research: Ten pivotal points. Elmsford, N.Y.: Pergamon, 1976.

Barrios, M. S. Repeating the mistakes of the past: A note on subject recruitment and selection procedures in analogue research on small animal phobias. AABT Newsletter, 1977, 4(6), 19.

Bernstein, D. A., & Paul, G. L. Some comments on therapy analogue research with small animal "phobias." Journal of Behavior Therapy and Experimental Psychiatry, 1971, 2, 225-237.

Campbell, D. T., & Stanley, J. C. Experimental and quasi-experimental designs for research. Chicago: Rand McNally, 1966.

Cook, T. D., & Campbell, D. T. Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally, 1979.

Cooper, A., Furst, J., & Bridger, W. A brief commentary on the usefulness of studying fear of snakes. Journal of Abnormal Psychology, 1969, 74, 413-414.

Danaher, B. G. Research on rapid smoking: Interim summary and recommendations. Addictive Behaviors, 1977, 2, 151-166.

Fiedler, D., & Beach, L. R. On the decision to be assertive. Journal of Consulting and Clinical Psychology, 1978, 46, 537-546.

Glass, G. V Primary, secondary, and meta-analysis of research. Educational Researcher, 1978, 5(10), 3-8.

Glass, G. V Reply to Mansfield and Busse. Educational Researcher, 1978, 7(1), 3.

Hackett, G., & Horan, J. J. Stress inoculation for pain: What's really going on? Journal of Counseling Psychology, 1980, 27, 107-116.

Horan, J. J. Counseling for effective decision making: A cognitive-behavioral perspective. North Scituate, Mass.: Duxbury, 1979.

Jacobson, N., & Baucom, D. Design and assessment of nonspecific control groups in behavior modification research. Behavior Therapy, 1977, 8, 709-720.

Kazdin, A. E. Therapy outcome questions requiring control of credibility and treatment-generated expectancies. Behavior Therapy, 1979, 10, 81-93.

Kazdin, A. E., & Wilcoxon, L. A. Systematic desensitization and nonspecific treatment effects: A methodological evaluation. Psychological Bulletin, 1976, 83, 729-758.

Levine, J. D., Gordon, N. C., & Fields, H. L. The mechanism of placebo analgesia. Lancet, September 23, 1978, 654-657.

Lieberman, L. R., & Dunlap, J. O'Leary and Borkovec's conceptualization of placebo: The placebo paradox. American Psychologist, 1979, 34, 553-554.

Loney, J., & Milich, R. Development and evaluation of a placebo for studies of operant behavioral intervention. Journal of Behavior Therapy and Experimental Psychiatry, 1978, 9, 327-333.

Mahoney, M. J. Cognition and behavior modification. Cambridge, Mass.: Ballinger, 1974.

Mahoney, M. J. Fat fiction. Behavior Therapy, 1975, 6, 416-418.

Mahoney, M. J. Scientist as subject: The psychological imperative. Cambridge, Mass.: Ballinger, 1976.

O'Leary, K. D., & Borkovec, T. D. Conceptual, methodological, and ethical problems of placebo groups in psychotherapy research. American Psychologist, 1978, 33, 821-830.

Popper, K. R. The logic of scientific discovery. New York: Harper & Row, 1959.

Rogers, C. R. A theory of therapy, personality, and interpersonal relationships as developed in the client-centered framework. In S. Koch (Ed.), Psychology: A study of a science. Vol. III. Formulations of the person and the social context. New York: McGraw-Hill, 1959.

Rogers, C. R. On becoming a person. Boston: Houghton Mifflin, 1961.

Rosenthal, R., & Rosnow, R. L. Artifact in behavioral research. New York: Academic Press, 1969.

Smith, M. L., & Glass, G. V Meta-analysis of psychotherapy outcome studies. American Psychologist, 1977, 32, 752-760.

Truax, C. B. Reinforcement and nonreinforcement in Rogerian psychotherapy. Journal of Abnormal Psychology, 1966, 71, 1-9.

Weimer, W. B. Psychology and the conceptual foundations of science. Hillsdale, N.J.: Lawrence Erlbaum Associates, 1976.
