The Psychological Record, 2010, 60, 151–158

SCHEDULES OF REINFORCEMENT AT 50:

A RETROSPECTIVE APPRECIATION

David L. Morgan

Spalding University 

Published just over a half century ago, Schedules of Reinforcement (SOR) (Ferster & Skinner, 1957) marked the seminal empirical contribution of the book’s second author and ushered in an era of research on behavior–environment relationships. This article traces the origins of both the methods and the data presented in SOR, and its legacy within experimental psychology. The operant methodology outlined in SOR revolutionized the study of behavioral pharmacology and early investigations of choice behavior. More recently, schedules have been used to examine microeconomic behavior, gambling, memory, and contingency detection and causal reasoning in infants, children, and adults. Implications of schedules research for the psychology curriculum are also discussed.

Key words: schedules of reinforcement, choice, behavioral pharmacology, behavioral economics

A half century has come and gone since the publication of Schedules of Reinforcement (Ferster & Skinner, 1957), a landmark work in experimental psychology, and inarguably Skinner’s most substantive empirical contribution to a science of behavior. The book introduced psychologists to an innovative methodology for studying the moment-to-moment dynamics of behavior–environment interactions, presented volumes of real-time data exhibiting an orderliness nearly unprecedented in psychology, and established the research agenda of the science Skinner termed the experimental analysis of behavior. Although he drew sharp and abundant criticism for his forays into social theorizing, Skinner’s experimental work was frequently recognized, even by those not sharing his epistemology, as a rigorous and important development in behavioral science, as attested to by numerous awards, including the American Psychological Association’s (APA’s) Distinguished Scientific Contribution Award (APA, 1958) and the National Medal of Science (APA, 1969).

Skinner has frequently been acknowledged as among psychology’s most eminent figures (Haggbloom et al., 2002). Unfortunately, much of his reputation, among both psychologists and laypersons, stems largely from his writings on freedom, dignity, autonomous man, the limitations of cognitive theorizing, and social engineering. Although this work received considerable attention and provoked much dialogue both within and outside of psychology (see, e.g., Wheeler, 1973), a focus on Skinner’s social views may serve to shortchange an appreciation of the important experimental work for which he initially became known. Consequently, it may be fitting to revisit Schedules of Reinforcement (SOR) and to consider its ramifications for a science of behavior more than 50 years since the publication of this influential work.

The purpose of the present article is to discuss the psychological climate in which SOR was conceived, the many research programs within behavioral science that benefitted from both the methodology and the conceptualization provoked by SOR, and contemporary basic and applied research utilizing schedules to illuminate many dimensions of human and nonhuman behavior, and to consider some ways in which instructors of psychology might update and extend their coverage of schedules of reinforcement in the classroom.

Correspondence concerning this article should be addressed to David L. Morgan, PhD, School of Professional Psychology, Spalding University, 845 South Third St., Louisville, KY 40203 (e-mail: [email protected]).

This article is dedicated to the memory of Peter Harzem, mentor and scholar, for his substantial empirical and conceptual contributions to a science of behavior.

History and Development of Schedules Research

As is common in science, work on schedules of reinforcement was prompted by serendipitous and practical matters having little to do with formal empirical or theoretical questions. In a very personal account of his development as a scientist, Skinner (1956) discussed how the idea of delivering reinforcers intermittently was prompted by the exigencies of laboratory supply and demand. Because the food pellets used in the inaugural work on operant behavior were fashioned by Skinner just prior to experimentation, their rapid depletion during the course of experimentation was nearly axiomatic:

Eight rats eating a hundred pellets each day could easily keep up with production. One pleasant Saturday afternoon I surveyed my supply of dry pellets, and, appealing to certain elemental theorems in arithmetic, deduced that unless I spent the rest of that afternoon and evening at the pill machine, the supply would be exhausted by ten-thirty Monday morning. . . . It led me to apply our second principle of unformalized scientific method and to ask myself why every press of the lever had to be reinforced. (Skinner, 1956, p. 226)

This insight would lead to a fruitful program of research, carried out by Ferster and Skinner, over the course of nearly 5 years, in which the patterning and rate of response of nonhuman animal subjects were examined under several fundamental schedules. The research itself, programmatic and unabashedly inductive, was carried out in a style that would scarcely be recognizable to today’s behavioral researchers. In many cases, experiments were designed and conducted in a single day, often to answer a specific question provoked by surveying cumulative records resulting from the previous day’s research. The research did not follow the prescriptions of the hypothetico-deductive method, because the data from any given experiment pointed to the logical subsequent experimental manipulations:

There was no overriding preconception that ruled where research should or should not go. All that new facts needed for admission to scientific respectability was that they meet minimal operational requirements. New concepts had to be publicly replicable to be verified and accepted. (Green, 2002, p. 379)

Somewhat of an anomaly among monographs, Schedules of Reinforcement offered the reader an unusually large ratio of graphed data to actual text. This configuration, although unfamiliar to many behavioral scientists, was no mistake. That the book is dominated not by verbiage but by volumes of cumulative records, portraying response characteristics across a wide array of schedule types and parameters, is testament to the spirit of the natural science of behavior that was unfolding in the Harvard pigeon lab. Ferster described the tenor of the schedules research with enthusiasm:

Much of the early research in the various areas of schedules of reinforcement was playful, in the sense that the behavior of the experimenter was maintained and shaped by the interaction with the phenomena that were emerging in the experiment. (Ferster, 1978, p. 349)

Because both the experimental methods and the data being generated broke new ground, there was the potential for nearly continuous discovery, as each distinct schedule produced noticeable, and sometimes dramatic, effects on the patterning and rate of behavior. The classic fixed-interval “scallop,” evidenced by a gradual increase in response rate near the end of the interval, the postreinforcement pauses characteristic of both fixed-ratio and fixed-interval schedules, and the high response rates generated by ratio schedules compared to interval schedules were distinct reminders that not all behavior–consequence relationships are created equal. Moreover, these dynamic changes in behavior were not hidden in statistical models or summarized descriptive measures; they were made evident by the cumulative recorder, an apparatus nearly perfectly adapted to the research program itself. For Skinner, the method and corresponding apparatus had more than proved themselves on pragmatic grounds:

The resolving power of the microscope has been increased manyfold, and we can see fundamental processes of behavior in sharper and sharper detail. In choosing rate of responding as a basic datum and in recording conveniently in a cumulative curve, we make important temporal aspects of behavior visible. (Skinner, 1956, p. 229)

These “molecular” changes in probability of responding are most immediately relevant to our own daily lives. . . . There is also much to be said for an experiment that lasts half an hour—and for a quick and comprehensive look at the fine grain of the behavior in the cumulative curve. (Skinner, 1976, p. 218)

There was clearly a great deal of excitement in the air among the small cadre of researchers, many of them colleagues or students of Skinner’s, who pioneered research on schedules of reinforcement. The enthusiasm was not universally shared, however, among experimental psychologists, and the novelty of both the research methodology and data generated in operant laboratories proved a sizable obstacle to dissemination of the research. Many journal editors, themselves practitioners and advocates of the large-group, null-hypothesis-testing designs of Ronald Fisher, were reluctant to publish studies whose principal data were from individual organisms, neither aggregated across participants nor analyzed via acceptable tests of statistical inference:

Operant conditioners were finding it hard to publish their papers. The editors of the standard journals, accustomed to a different kind of research, wanted detailed descriptions of apparatus and procedures which were not needed by informed readers. They were uneasy about the small number of subjects and about cumulative records. (The cumulative record was, in fact, attacked as a “subtle curve-smoothing technique,” which concealed differences between cases and should be abandoned in favor of “objective statistics.”) (Skinner, 1987, p. 448)

In response to the unfavorable climate offered by the traditional experimental journals, those involved in the early schedules research took it upon themselves to establish a separate journal friendly to the innovative methods and data of operant research. Ferster, SOR’s first author, spearheaded an effort that led, in 1958, to the establishment of the Journal of the Experimental Analysis of Behavior (JEAB), the primary outlet for operant research for the past half-century. Not surprisingly, JEAB was dominated by research on schedules of reinforcement, although perusal of the journal’s earliest volumes reveals that there was much more on the minds of scientists submitting their work to the journal (Nevin, 2008). Indeed, the journal’s first volume contained studies utilizing an impressive array of species (rats, pigeons, rhesus monkeys, chimpanzees, and human adults and children), response classes (lever pressing, key pecking, wheel running, grooming, and stuttering), and consequences (food, water, shock, tones, tokens, money, and colored lights correlated with reinforcement).

Among the most noteworthy aspects of this pioneering work was the usefulness of operant methodology to many scientists from disciplines seldom associated with behavior analysis. The early volumes of JEAB read like a Who’s Who of scientists whose names would eventually be synonymous with such disciplines as behavioral medicine (Brady, Porter, Conrad, & Mason, 1958), behavioral pharmacology (Dews, 1958, 1970; Dews & Morse, 1958), animal psychophysics (Blough, 1958, 1959, 1971), and animal cognition (Premack & Bahwell, 1959; Premack & Schaeffer, 1963; Shettleworth & Nevin, 1965; Terrace, 1963, 1969). For many of these researchers, the free-operant methodology introduced by Skinner allowed for an unprecedented degree of experimental control, and this paid especially high dividends in work with nonhuman animals. The response rates and patterns emitted by animals under common schedules of reinforcement exhibited remarkable stability, even over prolonged experimental sessions. This stability, or steady-state behavior, thus became a reliable baseline for evaluating the effects of other variables of interest, including drugs, acoustic and visual properties of antecedent (discriminative) stimuli, and characteristics of reinforcers, such as magnitude, delay, and quality. In particular, the programming of concurrent schedules not only allowed for these kinds of comparisons but also set the stage for the experimental analysis of choice behavior, a topic that dominated the experimental analysis of behavior for decades. The well-known matching law, first articulated by Herrnstein (1961, 1970), may represent the single most important empirical contribution of behavior analysis to a science of behavior.

Naturally, schedules of reinforcement served frequently as the principal independent variable, and this was a logical consequence of the work done previously by Ferster and Skinner (1957). What might surprise many psychologists, however, is the frequent attention given, in both SOR and the pages of JEAB, to a much larger collection of behavioral contingencies than depicted in introductory textbooks (continuous reinforcement [CRF], fixed-ratio [FR], variable-ratio [VR], fixed-interval [FI], and variable-interval [VI]). Indeed, Ferster and Skinner devoted a substantial portion of SOR to the study of a variety of both simple and complex schedules, as it became apparent early on that consequences could be delivered contingent on behavior along a sizable continuum of dimensions. In addition to studying response patterns across various ratio and interval parameters, it was possible to arrange compound schedules, in which two or more separate schedules were operating either successively or concurrently. Sometimes schedule alterations were signaled by discriminative stimuli, a procedure that allowed for the analysis of stimulus control, conditioned reinforcement, and the development of behavioral chains. In addition, Ferster and Skinner explored the possibility of producing very low or very high rates of behavior by arranging reinforcement contingent only on specified interresponse times (IRTs). These differential reinforcement schedules would become of theoretical interest because of their implications for how response classes are defined (Catania, 1970; Malott & Cumming, 1964), and several variations of differential reinforcement have become standard alternatives to punishment in applied behavior analysis (Cooper, Heron, & Heward, 2007).
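For readers who want the basic contingencies stated precisely, the decision rules of the four textbook schedules can be written out in a few lines of code. The sketch below is illustrative only: the parameter values, the uniform one-response-per-second responder, and the particular randomization rules are assumptions made for the example, not procedures taken from SOR.

```python
import random

class FixedRatio:
    """FR n: reinforce every nth response (t is unused; ratio schedules are time-independent)."""
    def __init__(self, n):
        self.n, self.count = n, 0
    def response(self, t):
        self.count += 1
        if self.count == self.n:
            self.count = 0
            return True   # reinforcer delivered
        return False

class VariableRatio:
    """VR n: reinforce after a variable response count averaging n."""
    def __init__(self, n):
        self.n = n
        self.required = random.randint(1, 2 * n - 1)  # mean requirement is approximately n
        self.count = 0
    def response(self, t):
        self.count += 1
        if self.count >= self.required:
            self.count = 0
            self.required = random.randint(1, 2 * self.n - 1)
            return True
        return False

class FixedInterval:
    """FI t: reinforce the first response after the interval elapses."""
    def __init__(self, interval):
        self.interval = interval
        self.ready_at = interval
    def response(self, t):
        if t >= self.ready_at:
            self.ready_at = t + self.interval
            return True
        return False

class VariableInterval:
    """VI t: like FI, but the required interval varies around a mean of t."""
    def __init__(self, mean):
        self.mean = mean
        self.ready_at = random.expovariate(1.0 / mean)
    def response(self, t):
        if t >= self.ready_at:
            self.ready_at = t + random.expovariate(1.0 / self.mean)
            return True
        return False

# A steady responder (one response per simulated second, for 600 seconds)
# earns sharply different reinforcer totals across the four schedules.
for schedule in (FixedRatio(20), VariableRatio(20),
                 FixedInterval(30), VariableInterval(30)):
    reinforcers = sum(schedule.response(t) for t in range(600))
    print(type(schedule).__name__, reinforcers)
```

Run as written, the sketch makes one of SOR’s central points tangible: identical responding earns very different reinforcement totals, and in the live animal produces very different response patterning, depending solely on the schedule in force.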

In sum, the work begun by Ferster and Skinner (1957), and embraced by an enthusiastic group of scientists, established a strong precedent for researchers interested in the systematic study of behavior maintained by its consequences. The free-operant method would eventually find widespread application, not only in the laboratories of many behavioral and natural scientists but, increasingly, in applied endeavors as well. Featured prominently in much of this research, however, is the primacy of schedules of reinforcement. The pervasiveness of schedule effects was suggested by Zeiler (1977):

It is impossible to study behavior either in or outside the laboratory without encountering a schedule of reinforcement: Whenever behavior is maintained by a reinforcing stimulus, some schedule is in effect and is exerting its characteristic influences. Only when there is a clear understanding of how schedules operate will it be possible to understand the effects of reinforcing stimuli on behavior. (p. 229)

Naturally, not all who have made use of schedule methodology endorse the claim that behavior is principally controlled by schedules of reinforcement. Several prominent researchers in behavior analysis have suggested that schedules are, in fact, artificially imposed contingencies that neither mimic real-world functional relationships nor cast explanatory light on the role of the environment in the organization of behavior. Shimp (1976, 1984), for instance, has long argued that the molar behavior patterns observed under conventional reinforcement schedules are merely the by-product of moment-to-moment adjustments of behavior to more localized contingencies. There is, Shimp suggested, no reason to view molar response rates as fundamental units of behavior when more molecular analyses reveal similar order and offer a more nuanced view of the temporal organization of behavior. Similarly, Schwartz has claimed that the contingencies of reinforcement manipulated by both laboratory and applied behavior analysts have no counterpart in nature and do not reflect fundamental properties of behavior. He has suggested that research has merely shown that reinforcement may influence behavior when other causal factors are eliminated or suppressed, and this often occurs only under contrived and artificial conditions (Schwartz, 1986; Schwartz & Lacey, 1982). Finally, there is a substantive research literature on human operant behavior that questions the generality of nonhuman animal schedule performance. Differences that emerge in response patterning between humans and nonhumans, especially on temporal schedules, have provoked numerous dialogues on the interaction between verbal behavior and schedule performance, contingency-governed versus rule-governed behavior, and the sensitivity of behavior to scheduled contingencies (e.g., Catania, Shimoff, & Matthews, 1989; Hayes, Brownstein, Zettle, Rosenfarb, & Korn, 1986; Hayes & Ju, 1998; Matthews, Shimoff, Catania, & Sagvolden, 1977; Schlinger & Blakely, 1987). Thus, the applicability of reinforcement schedules in describing and accounting for human behavior remains a contentious issue for many.

Prominent Areas of Research on Schedules of Reinforcement

Today’s behavior analysts, whether working in laboratories or applied settings, continue to pursue the implications of Ferster and Skinner’s early work, and in doing so, they have extended considerably the scope of behavioral phenomena to which reinforcement schedules seem clearly relevant. As suggested by Zeiler (1977), the importance of the schedules research lay in its generic implications for behavior–consequence relationships, rather than in the detailed analysis of pigeons pecking keys for grain. In a sense, Ferster and Skinner (1957) had offered both a powerful methodology for studying behavior and a promissory note for those willing to mine further its implications for a science of behavior, especially with respect to human affairs.

Choice Behavior 

As mentioned earlier, an especially important development in schedules research was the use of concurrent schedules as a vehicle for studying behavioral choice. The programming of two separate schedules of reinforcement (often two variable-interval schedules), operating simultaneously, made it possible to examine how organisms allocate their behavior to several available options. Among the most salient and reliable discoveries was that relative response rates (or allocation of time to a schedule) closely track relative reinforcement rates—the so-called matching law (Baum, 1973; Herrnstein, 1961, 1970). Once behavior stabilized on concurrent schedules, response rates and patterns served as reliable and powerful benchmarks for evaluating qualitative and quantitative differences in reinforcement. Thus, the use of concurrent schedules became invaluable as a tool for exploring other issues in behavioral allocation.
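Stated compactly, the matching relation takes two standard forms. In the equations below, B denotes response rate and R obtained reinforcement rate on alternatives 1 and 2; the bias and sensitivity parameters of the generalized form are estimated from data:

```latex
% Strict matching (Herrnstein, 1961): relative responding equals
% relative reinforcement obtained on two concurrent schedules.
\frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}

% Generalized matching, log-ratio form: a = sensitivity to the
% reinforcement ratio, b = bias toward one alternative; strict
% matching is the special case a = 1, b = 1.
\log\left(\frac{B_1}{B_2}\right) = a \log\left(\frac{R_1}{R_2}\right) + \log b
```

Systematic deviations of the sensitivity parameter a from 1 (undermatching and overmatching) have themselves become important dependent variables in this literature.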


The study of choice behavior became a highly sophisticated business within the operant laboratory, and significant effort has been expended on identifying the local and extended characteristics of responding on concurrent schedules. This research has led to both significant quantification, as seen in the growth of mathematical models of operant behavior (Killeen, 1994; Mazur, 2001; Shull, 1991), and substantial theoretical discussion of whether matching behavior is best conceptualized at the molar or molecular level (Baum, 2004; Hineline, 2001). But such developments would remain of interest to a fairly small group of laboratory scientists were it not for the general applicability of the choice paradigm itself. In fact, the relevance of the concurrent schedules preparation is its conspicuous analogy to any circumstance in which an organism faces a choice between two or more behavioral options. The matching law, initially demonstrated by pigeons pecking keys for grain reinforcement, is now recognized as a remarkably robust behavioral phenomenon, characterizing the choice behavior of a large number of species, emitting various topographically diverse responses, for an array of reinforcers (Anderson, Velkey, & Woolverton, 2002; deVilliers, 1977; Houston, 1986; Pierce & Epling, 1983; Spiga, Maxwell, Meisch, & Grabowski, 2005).

Perhaps the most relevant and interesting examples of the matching law are its applications to human behavior. Initial laboratory studies in which (a) responses typically included button or lever presses and (b) reinforcers included points on a computer console or snacks (e.g., peanuts) often produced data very similar to those acquired in the animal laboratory; that is, participants tended to allocate behavior in a manner consistent with the matching law (Baum, 1975; Bradshaw & Szabadi, 1988; Buskist & Miller, 1981). Applications of the matching law to behavior in applied settings have largely confirmed findings from the more controlled environment of the laboratory. Matching, for instance, has described the allocation of self-injury in an 11-year-old child (McDowell, 1982) and the choice of sexually provocative slides by a convicted pedophile (Cliffe & Parry, 1980). More recently, the matching relation has been demonstrated in archival analyses of sports performance. Both college and professional basketball players have been shown to match the relative rate of 2-point and 3-point shots to their respective reinforcement rates, once the differences in reinforcer magnitude are adjusted (Romanowich, Bourret, & Vollmer, 2007; Vollmer & Bourret, 2000). Reed, Critchfield, and Martens (2006) demonstrated that professional football teams allocated running and passing plays in a manner predicted by the matching law as well. Other studies have shown that spontaneous comments during informal conversations were directed toward confederates who reinforced responses with verbal feedback, and that the relative rates of comments occurred in a manner consistent with the generalized matching law (Borrero et al., 2007; Conger & Killeen, 1974).

The general choice paradigm has become an enormously generative platform for studying many aspects of behavior that involve choosing between alternative activities or reinforcers, and there would seem to be few socially significant behaviors that could not be meaningfully viewed as instances of choice activity. For instance, both classic and contemporary research on self-control, which typically pits an immediate, small reinforcer against a delayed, large reinforcer, has shed considerable light on the relative importance of reinforcer quality, magnitude, and delay in determining individual choice behavior (Epstein, Leddy, Temple, & Faith, 2007; Logue, 1998; Mischel & Grusec, 1967; Mischel, Shoda, & Rodriguez, 1989). A sizable literature attests to the extent to which reinforcer value decreases monotonically as a function of time delay, and this finding has potential implications for an array of behaviors said to reflect impulsivity, or lack of self-control. Indeed, the phenomenon known as delay discounting has informed many recent analyses of behavior problems characterized by impulsivity, including smoking (Bickel, Odum, & Madden, 1999), substance abuse (Bickel & Marsch, 2001; Vuchinich & Tucker, 1996), and pathological gambling (Dixon, Marley, & Jacobs, 2003; Petry & Casarella, 1999).
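The value-by-delay relationship referred to here is most commonly modeled with a hyperbolic discounting function. The form below is the one standard in this literature (a formulation often attributed to Mazur, whose paper is not among the works cited above, so it is offered only as a familiar illustration):

```latex
% Hyperbolic delay discounting: V = present (discounted) value of a
% reinforcer of amount A delivered after delay D; k is a free
% parameter indexing how steeply value decays with delay.
V = \frac{A}{1 + kD}
% Steeper discounting (larger k) is the quantitative signature of the
% impulsivity discussed above: a small immediate reinforcer outweighs
% a larger delayed one whenever its discounted value is higher.
```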

Behavioral Pharmacology 

Articles that appeared in the first volume of JEAB portended a significant role for schedules of reinforcement in the examination of the relationship between drugs and behavior. By 1958, the year of the journal’s inaugural issue, schedule performance, including response rate and patterning, was well known for several species and across several parameters of many different schedule types. Consequently, predictable behavior patterns were the norm, and these served as very sensitive baselines against which to judge the effects of other independent variables, including chemicals. In reminiscing about the early laboratory work on behavioral pharmacology, some of the discipline’s pioneers reflected on the advantages of schedules research for scientists interested in drug–behavior relationships:

When Charlie [Ferster] showed me the pigeon laboratory in early 1953, it was immediately apparent from the counters and cumulative records that behavioral phenomena were being studied in a way that was well suited for application to pharmacology. In spite of my reservations about pigeons as subjects for drugs, in a short time we had planned a joint project, on effects of pentobarbital on fixed-interval responding, and immediately started experiments. . . . A sort of natural selection based on results then determined what lines seemed interesting and would be continued. . . . (Dews, 1987, p. 460)

The marriage of operant methodology, particularly schedules research, with pharmacology paid immediate dividends, and much of the agenda for the young science of behavioral pharmacology was established by the work being done by Peter Dews, Roger Kelleher, and William Morse. Among the more important observations coming out of this early research was the concept of rate dependence—that the effects of any particular drug were substantially modulated by the current rate of the behavior, and this was largely determined by the schedule maintaining the behavior:

Under conditions where different food-maintained performances alternated throughout an experimental session, pentobarbital was shown to affect behavior occurring under a fixed-interval (FI) schedule differently from behavior controlled by a fixed-ratio (FR) schedule. . . . This simple but powerful series of experiments lifted the evaluation and interpretation of drug effects on behavior from that of speculation about ephemeral emotional states to quantitative and observable aspects of behavior that could be manipulated directly and evaluated experimentally. (Barrett, 2002, p. 471)

Although the general notion of rate dependency appeared early in behavioral pharmacology, its explanatory scope was eventually questioned, and the discipline turned ultimately to other behavior principles expected to interact with drug ingestion, including qualitative properties of both reinforcement and punishment (Branch, 1984). In addition, considerable research focused on the function of drugs as discriminative stimuli, as this work dovetailed well with traditional pharmacological efforts at identifying physiological mechanisms of drug action (Colpaert, 1978). More recently, attention has been directed to self-administration of drugs and the identification of drugs as reinforcing stimuli (Branch, 2006).

Even in contemporary research on behavioral pharmacology, schedules of reinforcement play an integral role, particularly when drugs are used as reinforcers for operant responses. Indeed, one method for evaluating the reinforcing strength of a drug is to vary its “cost,” and this can be done in operant studies through the use of a progressive ratio schedule, in which response requirements are increased with the delivery of each reinforcer (Hodos, 1961). In such a study, a participant may receive an injection of a drug initially based on a single response, but the required ratio of responses to reinforcers increments with each reinforcer delivery. Under such schedules, a single drug injection may eventually require hundreds or thousands of responses. The point at which responding terminates is known as the “break point,” and this behavioral procedure allows for a categorization of drugs in a quantitative manner that somewhat parallels the abuse potential of different drugs (Young & Herling, 1986). The use of progressive ratio schedules in evaluating drug preference and/or choice in nonhumans and humans alike has become a standard procedure in behavioral pharmacological research (Hoffmeister, 1979; Katz, 1990; Loh & Roberts, 1990; Spear & Katz, 1991; Stoops, Lile, Robbins, Martin, Rush, & Kelly, 2007; Thompson, 1972).
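Because the escalating “cost” logic of the progressive ratio schedule is easily misread, a minimal computational sketch may help. The doubling step and the notion of a fixed tolerance for responding are hypothetical simplifications introduced for the example, not parameters from the studies cited:

```python
def progressive_ratio_breakpoint(max_responses_tolerated, step=2):
    """Simulate a progressive ratio (PR) schedule: the response
    requirement grows after each reinforcer, and the break point
    is the last ratio the subject completes before quitting.

    max_responses_tolerated is a hypothetical stand-in for reinforcer
    strength: the largest single ratio the subject will emit.
    """
    requirement = 1          # the first reinforcer costs one response
    break_point = 0
    while requirement <= max_responses_tolerated:
        break_point = requirement   # ratio completed; reinforcer delivered
        requirement *= step         # the next reinforcer costs more
    return break_point

# A reinforcer supporting at most 500 responses per delivery yields a
# higher break point than one supporting at most 50 -- the ordering
# used to rank the reinforcing strength (and, roughly, the abuse
# potential) of different drugs.
print(progressive_ratio_breakpoint(500))   # 256
print(progressive_ratio_breakpoint(50))    # 32
```

Any monotone escalation rule can substitute for doubling; what matters is that the break point yields a single quantitative index of how much responding a given reinforcer will sustain.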

Behavioral Economics 

In operant experiments with both nonhumans and humans, participants respond to produce consequences according to specific reinforcement schedules. The “reinforcers,” of course, differ from study to study and include such diverse consequences as food, tokens, points on a computer monitor, money, social praise or attention, music, toys, and so on. It is not terribly difficult, however, to see how these consequences could be viewed as “commodities,” with participants functioning as “consumers” in standard operant experiments. In fact, the analogy between behavioral experiments and the phenomena of economics has been recognized by scientists for several decades (Lea, 1978). For those interested in the study of microeconomic behavior, the operant laboratory afforded a controlled environment and established experimental methods well suited to this purpose. In addition to viewing participants as consumers and reinforcers as commodities, operant analyses of economic behavior conceptualize reinforcement schedules as representing constraints on choice. Thus, different schedules of reinforcement can be seen as alterations in the availability, or price, of a commodity, and response allocation can be observed as a function of these changes in availability.


The use of reinforcement schedules, then, allows for an experimental investigation of such economic principles as elasticity—the change in consumption of a good as a function of its price. The consumption of some reinforcers, for example, may be affected more than others by increases in price, usually represented by increases in ratio schedule parameters. Similarly, by making different commodities available concurrently and according to the same price (schedule of reinforcement), the relative substitutability of the commodities can be ascertained. In essence, researchers involved in behavioral economics view the operant laboratory as a very useful environment for pursuing the classic demand curves of economic theory, but with the added benefit of precise manipulation of independent variables and objective measurement of dependent variables (Allison, 1983; Hursh, 1984; Hursh & Silberberg, 2008).

Fortunately, the benefits of this cross-fertilization have been realized on both sides. Research in behavioral economics has had important, though unforeseen, consequences for the experimental analysis of behavior as well. Studies of the elasticity and substitutability of reinforcers conventionally used in the operant laboratory (e.g., food, water) demonstrated that demand curves for these reinforcers depend substantially on how they are made available. An especially important variable, though historically ignored by behavior analysts, was whether the reinforcer used in experimentation was available only within the context of experimental sessions, or available in supplementary amounts outside the experimental space, as is common with nonhuman animal research. This distinction between “open” (supplemental food made available outside of experimental sessions) and “closed” (no supplemental food available outside experimental sessions) economies proved immensely important to understanding response allocation in studies of choice, and was a direct result of viewing operant experiments as investigations in microeconomics (Hursh, 1978, 1984). In general, the synthesis of microeconomic theory with the experimental methodology of behavior analysis has moved the latter field toward a more systematic assessment of reinforcer value, as well as a greater appreciation of the delicate nature of reinforcer value, given the variables of price and alternative sources of reinforcement (Hursh & Silberberg, 2008).
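Hursh and Silberberg (2008), cited above, formalized this demand-curve approach in an exponential demand equation. The sketch below reproduces its general form as commonly presented, with symbols defined in the comments:

```latex
% Exponential demand (Hursh & Silberberg, 2008):
% Q = consumption of the reinforcer at price C (e.g., responses
% required per reinforcer), Q_0 = consumption at zero price,
% k = the range of consumption in log units (a constant), and
% \alpha = the rate of decline in consumption with price -- the
% proposed measure of a reinforcer's "essential value."
\log Q = \log Q_0 + k\left(e^{-\alpha Q_0 C} - 1\right)
% Demand is inelastic at low prices and becomes elastic once price
% exceeds a value determined by \alpha: the smaller \alpha, the more
% the commodity's consumption is defended against price increases.
```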

The Pervasive Nature of Schedules of Reinforcement in Human Behavior

Any psychology professor teaching about schedules of reinforcement has the somewhat dubious challenge of describing to students what experiments on key-pecking pigeons might have to do with the lives of human beings as we move about our very busy environments. Ordinarily, this assignment entails a description of the scientific logic behind using simple, well-controlled analogue strategies to examine fundamental principles before proceeding with larger scale generalizations. A few students “get” that there are important functional equivalencies between birds pecking keys and humans pressing buttons on an ATM. For those who are not so convinced, descriptions of behavior under schedule control must exceed some higher threshold of ecological validity to be meaningful. These descriptions inevitably must go beyond the conventional description of basic contingencies (e.g., CRF, FR, VR, FI, VI) and their putative relevance to piecework, checking the mail, and gambling. Fortunately, this task is not especially daunting.

The ubiquity of schedule effects on behavior in the natural environment can be readily appreciated once one acknowledges that all behavior operates on the environment. Of course, the consequences of behavior are not always as dramatic or programmed as the delivery of a food pellet or a winning slot machine configuration. Subtle behavior–environment relationships, common in the “real world” we all inhabit, offer both numerous and authentic examples. Turning the ignition key starts the engine of a vehicle, and turning one’s head in the direction of a novel sound brings a potentially important stimulus into one’s visual field. Entering one’s correct password on a keyboard allows access to an electronic environment of one’s choice. In none of these scenarios would we be likely to speak of the consequences described as “feeling good” or as “rewards” in the vernacular sense, but their function in maintaining these common repertoires can hardly be ignored.

Also, real-world consequences are often of an ethereal nature in the everyday environment, seldom delivered in a manner well described by any of the conventionally studied reinforcement schedules. Social reinforcement, for instance, can be especially fickle, and consequently quite difficult to simulate in the operant chamber. Parents do not always pick up crying babies, married couples respond to each other’s verbal statements with fitful consistency, and teachers attend too infrequently to students who are academically on task. Moreover, these “natural contingencies” are of a reciprocal nature, with each actor serving both antecedent and consequential functions in an ongoing, dynamic interplay. This reciprocity was well appreciated by Skinner (1956), who often credited his nonhuman animal subjects with maintaining his interest in the learning process: “The organism whose behavior is most extensively modified and most completely controlled in research of the sort I have described is the experimenter himself. . . . The subjects we study reinforce us much more effectively than we reinforce them” (p. 232).

More recently, complex interpersonal contingencies have been pursued by a number of researchers, most notably child development researchers interested in the interplay between infants and caregivers. In much of this work, the focus is on the child’s detection of social contingencies. Numerous studies have demonstrated that infants respond very differently to social stimulation (e.g., caregiver facial expressions, verbalizations, physical contact) delivered contingent on infant behavior as compared with responses to the same stimulation delivered noncontingently (Fagen, Morrongiello, Rovee-Collier, & Gekoski, 1984; Millar & Weir, 1992; Tarabulsy, Tessier, & Kappas, 1996). Caregiver behavior that is unpredictable or unrelated to infant behavior is seen as a violation of expected social behavior and is correlated with negative affective reactions in infants (Fagen & Ohr, 1985; Lewis, Allesandri, & Sullivan, 1990). This contingency detection research suggests that human infants are highly prepared to identify social reinforcement contingencies, particularly those connected to their own attempts to engage caregivers. Tarabulsy et al. (1996) have argued that disruptions to, or asynchrony in, infant–caregiver interactions, known to correlate with insecure attachment (Isabella & Belsky, 1991), are fundamentally manifestations of social contingency violations.

Similar to contingency-detection research in developmental psychology, reinforcement schedules have also been used to evaluate causal reasoning in adult participants. Reed (1999, 2001, 2003) has demonstrated that participants render postexperiment judgments of causality as a function of the contingency described by the schedule. In these experiments, the individuals’ responses on a computer keyboard resulted in brief illumination of an outlined triangle on the monitor according to programmed schedules. Participants moved through several schedules sequentially, and experiments pitted schedules that ordinarily produce high response rates (ratio and differential reinforcement of high rate [DRH]) against those usually associated with lower response rates (interval and differential reinforcement of low rate [DRL]). Although reinforcement rates were held constant across schedules by way of a yoking procedure (in which the reinforcement rate obtained under one schedule sets the rate made available under its comparison schedule), ratio and DRH schedules still produced greater response rates as well as higher causality judgments. Participants viewed their behavior as being more responsible for the illumination of the triangle when emitting higher rather than lower response rates. These results were interpreted as evidence that the frequent bursts of responding that produce reinforcement in ratio schedules create a greater sense of behavior–consequence contingency than the pauses, or longer interresponse times (IRTs), common to interval schedules (Reed, 2001). This research, utilizing conventional operant methodology, is important because it brings a functional analysis to bear on response topographies seldom considered within the purview of behavior analysis.

Of course, being creatures of habit, scientists conducting contemporary research on schedules tend to remain closely tethered to the traditional dimensions of count and time (ratio and interval schedules) developed by Ferster and Skinner (1957). There are, however, many other ways in which behavior–environment relationships can be configured. Williams and Johnston (1992), for instance, described a number of reinforcement schedules in which both the response and the consequence could assume either discrete (count-based) or continuous (duration) properties, and observed response rates to vary as a function of the specific contingency. Broadening the properties of reinforcement schedules in this way helps bring research on schedules closer to the dynamic behavior–environment interchanges characteristic of human behavior in the natural environment.

As an example, in a conjugate reinforcement schedule, the duration or intensity of a reinforcing stimulus is proportional to the duration or intensity of the response. Conjugate reinforcement schedules were utilized early in the development of applied behavior analysis, particularly in the pioneering work of Ogden Lindsley, who discovered that arranging a continuous relationship between behavior and its consequences allowed for remarkably sensitive assessment of attention in patients recovering from surgery (Lindsley, 1957; Lindsley, Hobika, & Etsten, 1961). The procedure was also used by advertising researchers to evaluate both attention and preference for television programming, interest in commercials, and preference for music type (monophonic vs. stereophonic) and volume. In most of this research, participants pushed buttons or stepped on pedals, and stimulation was presented as long as the response occurred, but terminated or decreased in duration with cessation of responding (Morgan & Lindsley, 1966; Nathan & Wallace, 1965; Winters & Wallace, 1970). By requiring the participants to maintain stimulation through a continuous response, the method allowed for a direct assessment of interest or attention, rather than unreliable measures such as direction of gaze or verbal reports.
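The defining property of the conjugate schedule, moment-to-moment proportionality between responding and stimulation, can be captured in a few lines. The gain and decay constants below are invented for illustration and stand in for whatever physical coupling a given apparatus provides:

```python
def conjugate_schedule(responses, gain=1.0, decay=0.8):
    """Conjugate reinforcement: stimulus intensity at each moment is
    proportional to current response intensity, and decays toward
    zero when responding stops.

    responses: a sequence of momentary response intensities (e.g.,
    pedal pressure sampled once per second); values are hypothetical.
    """
    stimulus = 0.0
    trace = []
    for r in responses:
        # Stimulation tracks responding; with r == 0 it simply decays.
        stimulus = decay * stimulus + gain * r
        trace.append(stimulus)
    return trace

# Vigorous responding followed by cessation: stimulation rises, then
# fades, mirroring the proportional contingency described above.
print(conjugate_schedule([1, 1, 1, 0, 0, 0]))
```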


More recently, conjugate reinforcement schedules have been used to study memory in infants. Rovee-Collier developed a procedure in which infants in cribs developed a kicking response when their foot, attached via a string to a mobile suspended over the crib, caused movement in the mobile. Because both the intensity and duration of mobile movement are proportional to the intensity and duration of leg movement, the contingency represents a conjugate reinforcement schedule. The procedure has become a standard method for evaluating not only contingency awareness in infants but also memory for the contingency over time, and the effects of contextual stimuli on durability of memory. Using the method and distinct contextual cues, Rovee-Collier and colleagues have established impressive long-term memory for infants as young as 3 months of age (Butler & Rovee-Collier, 1989; Rovee-Collier, Griesler, & Earley, 1985; Rovee-Collier & Shyi, 1992; Rovee-Collier & Sullivan, 1980).

The conjugate reinforcement schedule may very well represent one of the most natural and common contingencies affecting humans and nonhumans alike. The movement of a car at a fixed speed requires a constant pressure on the accelerator (assuming one is not using cruise control); an angler controls the retrieval speed of a lure by the rate at which line is reeled in; and the rate at which a manuscript scrolls on a computer monitor is a direct function of the speed of operation of the mouse’s roller. There are, undoubtedly, many more examples of behavior–environment interplay in which contingent stimulation tracks response intensity or duration, especially amid the many machine–human interfaces characterizing the modern world. Ferster and Skinner (1957) were understandably constrained in their ability to study many of these configurations, due to the limited technology at their disposal, much of it fashioned by Skinner himself. But today’s researchers, beneficiaries of more dynamic and powerful electronic programming equipment, face fewer barriers in arranging reinforcement schedules along multiple dimensions. Such research may serve as an important reminder, especially to those outside of behavior analysis, of the generality and applicability of schedules of reinforcement to naturally occurring human behavior.

Schedules of Reinforcement in the “Decade of Behavior”

The American Psychological Association has declared 2000–2010 the “Decade of Behavior.” As we near the end of this decade, reflecting on the legacy of a landmark work like Schedules of Reinforcement (Ferster & Skinner, 1957) seems appropriate, especially if one considers the somewhat subservient role behavior has come to play in contemporary psychological science. Baumeister, Vohs, and Funder (2007), having surveyed contemporary social and personality psychology, reached a sobering conclusion: In a field at one time celebrated for its methodological ingenuity, classic “field” experiments, and direct assessment of social influence effects, self-report measures have become the predominant dependent variable in contemporary research. There are, of course, numerous advantages to utilizing self-reports as the primary mechanism for collecting psychological data, among them convenience and potential ethical considerations, as pointed out by Baumeister and colleagues (Baumeister et al., 2007). The behavior of filling out a scale, however, is not isomorphic to the behavior to which the scale’s items are said to refer, and the limitations in drawing viable inferences from self-report data are well known (Nisbett & Wilson, 1977). In fact, behavior analysts have contributed an important piece to this scientific dialogue in the form of analyses of the conditions under which verbal and nonverbal behavior exhibit correspondence (Baer, Osnes, & Stokes, 1983; Critchfield, 1993; Matthews, Shimoff, & Catania, 1987; Wulfert, Dougher, & Greenway, 1991), though this research literature is not likely to be familiar to those outside the field.

Behavior analysts may be comforted by the observation that other behavioral scientists have increasingly come to lament a science dependent on self-reports for its primary data. For instance, a recent series of articles explored both the scientific and clinical repercussions of psychology’s reliance on arbitrary metrics (Blanton & Jaccard, 2006; Embretson, 2006; Greenwald, Nosek, & Sriram, 2006; Kazdin, 2006). An arbitrary metric is a measure of a construct, such as depression or marital satisfaction, whose quantitative dimensions cannot be interpreted with confidence as reflecting fundamental properties of the underlying construct. Because such constructs are ordinarily conceptualized as multidimensional, no discrete numerical index could be taken as an independent and comprehensive measure of the construct itself. Nor can such measures be usefully assessed as indicators of change in the construct in response to treatment regimens or independent variable manipulations. Such measures, however, are the bread and butter of assessment practice throughout much of the behavioral sciences, especially in clinical and applied settings. As Baumeister et al. (2007) noted, employing arbitrary metrics has become standard practice in empirical research in social psychology, a discipline once devoted to direct observation of ecologically valid behaviors.

This state of affairs is particularly lamentable given the substantial advances made recently in scientific instrumentation. In fact, many of our more mundane daily routines are now readily monitored and measured by a host of small, powerful digital devices. This technology has often come from other disciplines, such as animal behavior, physical and occupational therapy, and the medical sciences, but its utility for behavioral science is easily acknowledged. Precision-based instrumentation is now available for the continuous measurement of a variety of behavioral dimensions. Increasingly powerful and portable digital cameras can capture the details of sequential interactions between participants, eye gaze and attention, facial expressions, and temporal relationships between behavior and environmental events, and laboratory models are often capable of integrating these data with corresponding physiological indices. In addition, accelerometers, telemetric devices, and actigraphs can be used to assess numerous behavior patterns, such as movement during sleep and gait and balance during ambulation, and large-scale movement, such as migration patterns. The enhanced mobility and durability of many such instruments make their exportation outside the laboratory and remote use more accessible than ever before.

In sum, there would seem to be no legitimate reason a science of behavior would not make more use of a growing technology that renders direct measurement of behavior more accessible than ever before.

Of course, technological advances do not, in and of themselves, make for better science. As in other pursuits, it is the tool-to-task fit that seems most relevant. By today’s standards, the operant equipment fashioned and used by Ferster and Skinner (1957) seems quite pedestrian. It was, however, more than adequate to the task of revealing the intricate and sensitive dance between an operant class and its environmental determinants. Mere demonstrations that consequences influenced behavior were not original; Thorndike (1911) had demonstrated as much. The advent of different kinds of behavior–consequence arrangements, though, allowed for a more sensitive evaluation of these parameters, although little would have come from this early experimentation if response patterns and rates had been uniform across different schedules. The fact that this did not happen lent immediate credibility to the research program begun by Ferster and Skinner and advanced by other operant researchers for the past half century.

Conclusion

The research that led to the publication of Schedules of Reinforcement (Ferster & Skinner, 1957) served to both justify and inspire an empirical science of behavior whose functional units consisted of behavior–environment relationships. The myriad ways in which behavior could be shown to operate on the environment offered a large tapestry for those wishing to explore the many shades and colors of the law of effect. In following this behavior–consequence interplay over long experimental sessions and at the level of individual organisms, the experimental analysis of behavior distanced itself from the large-group designs and statistical inference that had come to dominate psychological research. A perusal of Haggbloom et al.’s (2002) list of eminent figures in psychology reveals that Skinner is joined by Piaget (2nd on the list behind Skinner) and Pavlov (24th), both of whose fundamental contributions are well known even by nonpsychologists. What is perhaps unappreciated by behavioral scientists, however, is that the groundbreaking work of both Piaget and Pavlov resulted from data collected and analyzed at the individual level, with little regard for sample size and statistical inference (Morgan, 1998). That Skinner, Piaget, and Pavlov, clearly not kindred souls with respect to much of their scientific epistemology, found common ground in the study of the individual calls into question conventional methods of psychological science. Meehl (1978) may have expressed it most cogently 30 years ago:

I suggest to you that Sir Ronald has befuddled us, mesmerized us, and led us down the primrose path. I believe that the almost universal reliance on merely refuting the null hypothesis as the standard method for corroborating substantive theories in the soft areas is a terrible mistake, is basically unsound, poor scientific strategy, and one of the worst things that ever happened in the history of psychology. (p. 817)

As mentioned previously, Schedules of Reinforcement was an unorthodox publication in part because the primary data presented were real-time cumulative records of response patterns, often emitted over prolonged experimental sessions. These displays were, in many respects, antithetical to the manner in which behavioral data were being treated within psychology by the 1950s. Consequently, some justification for this deviation in method may have seemed necessary, and Skinner (1958) offered the following rationale shortly after SOR’s publication:


Most of what we know about the effects of complex schedules of reinforcement has been learned in a series of discoveries no one of which could have been proved to the satisfaction of a student in Statistics A. ... The curves we get cannot be averaged or otherwise smoothed without destroying properties which we know to be of first importance. These points are hard to make. The seasoned experimenter can shrug off the protests of statisticians, but the young psychologist should be prepared to feel guilty, or at least stripped of the prestige conferred upon him by statistical practices, in embarking upon research of this sort. ... Certain people—among them psychologists who should know better—have claimed to be able to say how the scientific mind works. They have set up normative rules of scientific conduct. The first step for anyone interested in studying reinforcement is to challenge that claim. Until a great deal more is known about thinking, scientific or otherwise, a sensible man will not abandon common sense. Ferster and I were impressed by the wisdom of this course of action when, in writing our book, we reconstructed our own scientific behavior. At one time we intended—though, alas, we changed our minds—to express the point in this dedication: “To the mathematicians, statisticians, and scientific methodologists with whose help this book would never have been written.” (p. 99)
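The “curves” Skinner refers to are cumulative records: elapsed time on the abscissa, cumulative responses on the ordinate, so that the local slope of the record at any moment is the response rate. A minimal sketch of how such a record can be redrawn from response timestamps (the data, labels, and function name here are hypothetical, purely for illustration):

```python
import matplotlib.pyplot as plt

def plot_cumulative_record(times_s, label):
    # Each response steps the record up one unit; slope = response rate.
    plt.step(times_s, list(range(1, len(times_s) + 1)), where="post", label=label)

# Hypothetical timestamps: steady responding vs. pause-then-burst responding
steady = [0.8 * i for i in range(1, 101)]
pause_and_run = [20 * (i // 10) + 0.3 * (i % 10) for i in range(1, 101)]

plot_cumulative_record(steady, "steady (VI-like)")
plot_cumulative_record(pause_and_run, "break-and-run (FR-like)")
plt.xlabel("Time (s)")
plt.ylabel("Cumulative responses")
plt.legend()
plt.show()
```

Averaging such records across subjects, as Skinner notes, would smooth away exactly the local features (pauses, bursts, accelerations) that the display exists to reveal.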

References

ALLISON, J. (1983). Behavioral economics. New York: Praeger.
AMERICAN PSYCHOLOGICAL ASSOCIATION. (1958). American Psychological Association: Distinguished scientific contribution awards. American Psychologist, 13, 729–738.
AMERICAN PSYCHOLOGICAL ASSOCIATION. (1969). National medal of science award. American Psychologist, 24, 468.
ANDERSON, K. G., VELKEY, A. J., & WOOLVERTON, W. L. (2002). The generalized matching law as a predictor of choice between cocaine and food in rhesus monkeys. Psychopharmacology, 163, 319–326.
BAER, D. M., OSNES, P. G., & STOKES, T. F. (1983). Training generalized correspondence between verbal behavior at school and nonverbal behavior at home. Education and Treatment of Children, 6, 379–388.
BARRETT, J. E. (2002). The emergence of behavioral pharmacology. Molecular Interventions, 2, 470–475.
BAUM, W. M. (1973). The correlation-based law of effect. Journal of the Experimental Analysis of Behavior, 20, 137–153.
BAUM, W. M. (1975). Time allocation and human vigilance. Journal of the Experimental Analysis of Behavior, 23, 45–53.
BAUM, W. M. (2004). Molar and molecular views of choice. Behavioural Processes, 66, 349–359.
BAUMEISTER, R. F., VOHS, K. D., & FUNDER, D. C. (2007). Psychology as the science of self-reports and finger movements: Whatever happened to actual behavior? Perspectives on Psychological Science, 2, 396–403.
BICKEL, W. K., & MARSCH, L. A. (2001). Toward a behavioral economic understanding of drug dependence: Delay discounting processes. Addiction, 96, 73–86.
BICKEL, W. K., ODUM, A. L., & MADDEN, G. J. (1999). Impulsivity and cigarette smoking: Delay discounting in current, never, and ex-smokers. Psychopharmacology, 146, 447–454.
BLANTON, H., & JACCARD, J. (2006). Arbitrary metrics in psychology. American Psychologist, 61, 27–41.
BLOUGH, D. S. (1958). A method for obtaining psychophysical thresholds from the pigeon. Journal of the Experimental Analysis of Behavior, 1, 31–43.
BLOUGH, D. S. (1959). Generalization and preference on a stimulus–intensity continuum. Journal of the Experimental Analysis of Behavior, 2, 307–317.
BLOUGH, D. S. (1971). The visual acuity of the pigeon for distant targets. Journal of the Experimental Analysis of Behavior, 15, 57.
BORRERO, J. C., CRISOLO, S. S., TU, Q., RIELAND, W. A., ROSS, N. A., FRANCISCO, M. T., & YAMAMOTO, K. Y. (2007). An application of the matching law to social dynamics. Journal of Applied Behavior Analysis, 40, 589–601.
BRADSHAW, C. M., & SZABADI, E. (1988). Quantitative analysis of human operant behavior. In G. Davey & C. Cullen (Eds.), Human operant conditioning and behavior modification (pp. 225–259). Chichester, England: John Wiley & Sons.
BRADY, J. V., PORTER, R. W., CONRAD, D. G., & MASON, J. W. (1958). Avoidance behavior and the development of gastroduodenal ulcers. Journal of the Experimental Analysis of Behavior, 1, 69–72.
BRANCH, M. N. (1984). Rate dependency, behavioral mechanisms, and behavioral pharmacology. Journal of the Experimental Analysis of Behavior, 42, 511–522.
BRANCH, M. N. (2006). How research in behavioral pharmacology informs behavioral science. Journal of the Experimental Analysis of Behavior, 85, 407–423.
BUSKIST, W. F., & MILLER, H. L. (1981). Concurrent operant performance in humans: Matching when food is the reinforcer. Psychological Record, 31, 95–100.
BUTLER, J., & ROVEE-COLLIER, C. (1989). Contextual gating of memory retrieval. Developmental Psychobiology, 22, 533–552.
CATANIA, A. C. (1970). Reinforcement schedules and psychophysical judgments: A study of some temporal properties of behavior. In W. N. Schoenfeld (Ed.), The theory of reinforcement schedules (pp. 1–42). New York: Appleton-Century-Crofts.
CATANIA, A. C., SHIMOFF, E., & MATTHEWS, B. A. (1989). An experimental analysis of rule-governed behavior. In S. C. Hayes (Ed.), Rule-governed behavior: Cognition, contingencies, and instructional control (pp. 119–152). New York: Plenum.
CLIFFE, M. J., & PARRY, S. J. (1980). Matching to reinforcer value: Human concurrent variable-interval performance. Quarterly Journal of Experimental Psychology, 32, 557–570.
COLPAERT, F. C. (1978). Discriminative stimulus properties of narcotic analgesic drugs. Pharmacology, Biochemistry and Behavior, 9, 863–887.
CONGER, R., & KILLEEN, P. (1974). Use of concurrent operants in small group research. Pacific Sociological Review, 17, 399–416.
COOPER, J. O., HERON, T. E., & HEWARD, W. L. (2007). Applied behavior analysis (2nd ed.). Upper Saddle River, NJ: Pearson/Merrill Prentice-Hall.
CRITCHFIELD, T. S. (1993). Signal-detection properties of verbal self-reports. Journal of the Experimental Analysis of Behavior, 60, 495–514.
DE VILLIERS, P. (1977). Choice in concurrent schedules and a quantitative formulation of the law of effect. In W. K. Honig & J. E. R. Staddon (Eds.), Handbook of operant behavior (pp. 233–287). Englewood Cliffs, NJ: Prentice-Hall.
DEWS, P. B. (1958). Effects of chlorpromazine and promazine on performance on a mixed schedule of reinforcement. Journal of the Experimental Analysis of Behavior, 1, 73–82.
DEWS, P. B. (1970). Drugs in psychology. A commentary on Travis Thompson and Charles R. Schuster’s Behavioral pharmacology. Journal of the Experimental Analysis of Behavior, 13, 395–406.
DEWS, P. B. (1987). An outsider on the inside. Journal of the Experimental Analysis of Behavior, 48, 459–462.
DEWS, P. B., & MORSE, W. H. (1958). Some observations on an operant in human subjects and its modification by dextro amphetamine. Journal of the Experimental Analysis of Behavior, 1, 359–364.
DIXON, M. R., MARLEY, J., & JACOBS, E. A. (2003). Delay discounting by pathological gamblers. Journal of Applied Behavior Analysis, 36, 449–458.
EMBRETSON, S. E. (2006). The continued search for nonarbitrary metrics in psychology. American Psychologist, 61, 50–55.
EPSTEIN, L. H., LEDDY, J. J., TEMPLE, J. L., & FAITH, M. S. (2007). Food reinforcement and eating: A multilevel analysis. Psychological Bulletin, 133(5), 884–906.
FAGEN, J. W., MORRONGIELLO, B. A., ROVEE-COLLIER, C. K., & GEKOSKI, M. J. (1984). Expectancies and memory retrieval in three-month-old infants. Child Development, 55, 936–943.
FAGEN, J. W., & OHR, P. S. (1985). Temperament and crying in response to violation of a learned expectancy in early infancy. Infant Behavior and Development, 8, 157–166.
FERSTER, C. B. (1978). Is operant conditioning getting bored with behavior? A review of Honig and Staddon’s Handbook of Operant Behavior. Journal of the Experimental Analysis of Behavior, 29, 347–349.
FERSTER, C. B., & SKINNER, B. F. (1957). Schedules of reinforcement. New York: Appleton-Century-Crofts.
GREEN, E. J. (2002). Reminiscences of a reformed pigeon pusher. Journal of the Experimental Analysis of Behavior, 77, 379–380.
GREENWALD, A. G., NOSEK, B. A., & SRIRAM, N. (2006). Consequential validity of the Implicit Association Test: Comment on Blanton and Jaccard (2006). American Psychologist, 61, 56–61.
HAGGBLOOM, S. J., WARNICK, R., WARNICK, J. E., JONES, V. K., YARBROUGH, G. L., RUSSELL, T. M., ET AL. (2002). The 100 most eminent psychologists of the 20th century. Review of General Psychology, 6(2), 139–152.
HAYES, S. C., BROWNSTEIN, A. J., ZETTLE, R. D., ROSENFARB, I., & KORN, Z. (1986). Rule-governed behavior and sensitivity to changing consequences of responding. Journal of the Experimental Analysis of Behavior, 45, 237–256.
HAYES, S. C., & JU, W. (1998). The applied implications of rule-governed behavior. In W. O’Donohue (Ed.), Learning and behavior therapy (pp. 374–391). Boston: Allyn & Bacon.
HERRNSTEIN, R. J. (1961). Relative and absolute strength of response as a function of frequency of reinforcement. Journal of the Experimental Analysis of Behavior, 4, 267–272.
HERRNSTEIN, R. J. (1970). On the law of effect. Journal of the Experimental Analysis of Behavior, 13, 243–266.
HINELINE, P. N. (2001). Beyond the molar–molecular distinction: We need multiscaled analyses. Journal of the Experimental Analysis of Behavior, 75, 342–347.
HODOS, W. (1961). Progressive ratio as a measure of reward strength. Science, 134, 943–944.
HOFFMEISTER, F. (1979). Progressive-ratio performance in the rhesus monkey maintained by opiate infusions. Psychopharmacology, 62, 181–186.
HOUSTON, A. (1986). The matching law applies to wagtails’ foraging in the wild. Journal of the Experimental Analysis of Behavior, 45, 15–18.
HURSH, S. R. (1978). The economics of daily consumption controlling food- and water-reinforced responding. Journal of the Experimental Analysis of Behavior, 29, 475–491.
HURSH, S. R. (1984). Behavioral economics. Journal of the Experimental Analysis of Behavior, 42, 435–452.
HURSH, S. R., & SILBERBERG, A. (2008). Economic demand and essential value. Psychological Review, 115, 186–198.
ISABELLA, R. A., & BELSKY, J. (1991). Interactional synchrony and the origins of infant–mother attachment. Child Development, 62, 373–384.
KATZ, J. L. (1990). Models of relative reinforcing efficacy of drugs and their predictive utility. Behavioural Pharmacology, 1, 283–301.
KAZDIN, A. E. (2006). Arbitrary metrics: Implications for identifying evidence-based treatments. American Psychologist, 61, 42–49.
KILLEEN, P. R. (1994). Mathematical principles of reinforcement. Behavioral and Brain Sciences, 17, 105–172.
LEA, S. E. G. (1978). The psychology and economics of demand. Psychological Bulletin, 85, 441–446.
LEWIS, M., ALESSANDRI, S. M., & SULLIVAN, M. W. (1990). Violation of expectancy, loss of control, and anger expressions in young infants. Developmental Psychology, 26, 745–751.
LINDSLEY, O. R. (1957). Operant behavior during sleep: A measure of depth of sleep. Science, 126, 1290–1291.
LINDSLEY, O. R., HOBIKA, J. H., & ETSTEN, R. E. (1961). Operant behavior during anesthesia recovery: A continuous and objective measure. Anesthesiology, 22, 937–946.
LOGUE, A. W. (1998). Self-control. In W. O’Donohue (Ed.), Learning and behavior therapy (pp. 252–273). Boston: Allyn & Bacon.
LOH, E. A., & ROBERTS, D. C. S. (1990). Break-points on a progressive-ratio schedule reinforced by intravenous cocaine increase following depletion of forebrain serotonin. Psychopharmacology, 101, 262–266.
MALOTT, R. W., & CUMMING, W. W. (1964). Schedules of interresponse time reinforcement. Psychological Record, 14, 221–252.
MATTHEWS, B. A., SHIMOFF, E., & CATANIA, A. C. (1987). Saying and doing: A contingency-space analysis. Journal of Applied Behavior Analysis, 20, 69–74.
MATTHEWS, B. A., SHIMOFF, E., CATANIA, A. C., & SAGVOLDEN, T. (1977). Uninstructed human responding: Sensitivity to ratio and interval contingencies. Journal of the Experimental Analysis of Behavior, 27, 453–457.
MAZUR, J. E. (2001). Hyperbolic value addition and general models of animal choice. Psychological Review, 108, 96–112.
MCDOWELL, J. J. (1982). The importance of Herrnstein’s mathematical statement of the law of effect for behavior therapy. American Psychologist, 37, 771–779.
MEEHL, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806–834.
MILLAR, W. S., & WEIR, C. G. (1992). Relations between habituation and contingency learning in 5 to 12 month old infants. Cahiers de Psychologie Cognitive/European Bulletin of Cognitive Psychology, 12, 209–222.
MISCHEL, W., & GRUSEC, J. (1967). Waiting for rewards and punishments: Effects of time and probability on choice. Journal of Personality and Social Psychology, 5, 24–31.
MISCHEL, W., SHODA, Y., & RODRIGUEZ, M. L. (1989). Delay of gratification in children. Science, 244, 933–938.
MORGAN, B. J., & LINDSLEY, O. R. (1966). Operant preference for stereophonic music over monophonic music. Journal of Music Therapy, 3, 135–143.
MORGAN, D. L. (1998). Selectionist thought and methodological orthodoxy in psychological science. The Psychological Record, 48, 439–456.
NATHAN, P. E., & WALLACE, W. H. (1965). An operant behavioral measure of TV commercial effectiveness. Journal of Advertising Research, 5, 13–20.
NEVIN, J. A. (2008). Control, prediction, order, and the joys of research. Journal of the Experimental Analysis of Behavior, 89, 119–123.
NISBETT, R. E., & WILSON, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, 231–259.
PETRY, N. M., & CASARELLA, T. (1999). Excessive discounting of delayed rewards in substance abusers with gambling problems. Drug and Alcohol Dependence, 56, 25–32.
PIERCE, W. D., & EPLING, W. F. (1983). Choice, matching, and human behavior: A review of the literature. The Behavior Analyst, 6, 57–76.
PREMACK, D., & BAHWELL, R. (1959). Operant-level lever pressing by a monkey as a function of intertest interval. Journal of the Experimental Analysis of Behavior, 2, 127–131.
PREMACK, D., & SCHAEFFER, R. W. (1963). Distributional properties of operant-level locomotion in the rat. Journal of the Experimental Analysis of Behavior, 6, 473–475.
REED, D. D., CRITCHFIELD, T. S., & MARTENS, B. K. (2006). The generalized matching law in elite sport competition: Football play calling as operant choice. Journal of Applied Behavior Analysis, 39, 281–297.
REED, P. (1999). Role of a stimulus filling an action-outcome delay in human judgments of causal effectiveness. Journal of Experimental Psychology: Animal Behavior Processes, 25, 92–102.
REED, P. (2001). Schedules of reinforcement as determinants of human causality judgments and response rates. Journal of Experimental Psychology: Animal Behavior Processes, 27, 187–195.
REED, P. (2003). Human causality judgments and response rates on DRL and DRH schedules of reinforcement. Learning and Motivation, 31, 205–211.
ROMANOWICH, P., BOURRET, J., & VOLLMER, T. R. (2007). Further analysis of the matching law to describe two- and three-point shot allocation by professional basketball players. Journal of Applied Behavior Analysis, 40, 311–315.
ROVEE-COLLIER, C., GRIESLER, P. C., & EARLEY, L. A. (1985). Contextual determinants of retrieval in three-month-old infants. Learning and Motivation, 16, 139–157.
ROVEE-COLLIER, C., & SHYI, C.-W. G. (1992). A functional and cognitive analysis of infant long-term retention. In M. L. Howe, C. J. Brainerd, & V. F. Reyna (Eds.), Development of long-term retention (pp. 3–55). New York: Springer-Verlag.
ROVEE-COLLIER, C. K., & SULLIVAN, M. W. (1980). Organization of infant memory. Journal of Experimental Psychology: Human Learning and Memory, 6, 798–807.
SCHLINGER, H. D., JR., & BLAKELY, E. (1987). Function-altering effects of contingency-specifying stimuli. The Behavior Analyst, 10, 41–45.
SCHWARTZ, B. (1986). The battle for human nature: Science, morality and modern life. New York: W. W. Norton & Company.
SCHWARTZ, B., & LACEY, H. (1982). Behaviorism, science, and human nature. New York: W. W. Norton & Co.
SHETTLEWORTH, S., & NEVIN, J. A. (1965). Relative rate of response and relative magnitude of reinforcement in multiple schedules. Journal of the Experimental Analysis of Behavior, 8, 199–202.
SHIMP, C. P. (1976). Organization in memory and behavior. Journal of the Experimental Analysis of Behavior, 26, 113–130.
SHIMP, C. P. (1984). Cognition, behavior, and the experimental analysis of behavior. Journal of the Experimental Analysis of Behavior, 42, 407–420.
SHULL, R. L. (1991). Mathematical descriptions of operant behavior: An introduction. In I. H. Iversen & K. A. Lattal (Eds.), Experimental analysis of behavior (Vol. 2, pp. 243–292). New York: Elsevier.
SKINNER, B. F. (1956). A case history in scientific method. American Psychologist, 11, 221–233.
SKINNER, B. F. (1958). Reinforcement today. American Psychologist, 13, 94–99.
SKINNER, B. F. (1976). Farewell, My LOVELY! Journal of the Experimental Analysis of Behavior, 25, 218.
SKINNER, B. F. (1987). Antecedents. Journal of the Experimental Analysis of Behavior, 48, 447–448.
SPEAR, D. J., & KATZ, J. L. (1991). Cocaine and food as reinforcers: Effects of reinforcer magnitude and response requirement under second-order fixed-ratio and progressive-ratio schedules. Journal of the Experimental Analysis of Behavior, 56, 261–275.
SPIGA, R., MAXWELL, R. S., MEISCH, R. A., & GRABOWSKI, J. (2005). Human methadone self-administration and the generalized matching law. Psychological Record, 55, 525–538.
STOOPS, W. W., LILE, J. A., ROBBINS, G., MARTIN, C. A., RUSH, C. R., & KELLY, T. H. (2007). The reinforcing, subject-rated, performance, and cardiovascular effects of d-amphetamine: Influence of sensation-seeking. Addictive Behaviors, 32, 1177–1188.
TARABULSY, G. M., TESSIER, R., & KAPPAS, A. (1996). Contingency detection and the contingent organization of behavior in interactions: Implications for socioemotional development in infancy. Psychological Bulletin, 120, 25–41.
TERRACE, H. S. (1963). Discrimination learning with and without “errors.” Journal of the Experimental Analysis of Behavior, 6, 1–27.
TERRACE, H. S. (1969). Extinction of a discriminative operant following discrimination learning with and without errors. Journal of the Experimental Analysis of Behavior, 12, 571–582.
THOMPSON, D. M. (1972). Effects of d-amphetamine on the “breaking point” of progressive-ratio performance. Psychonomic Science, 29, 282–284.
THORNDIKE, E. L. (1911). Animal intelligence. New York: Macmillan.
VOLLMER, T. R., & BOURRET, J. (2000). An application of the matching law to evaluate the allocation of two- and three-point shots by college basketball players. Journal of Applied Behavior Analysis, 33, 137–150.
VUCHINICH, R. E., & TUCKER, J. A. (1996). Alcoholic relapse, life events, and behavioral theories of choice: A prospective analysis. Experimental and Clinical Psychopharmacology, 4, 19–28.
WHEELER, H. (Ed.). (1973). Beyond the punitive society. San Francisco: W. H. Freeman.
WILLIAMS, D. C., & JOHNSTON, J. M. (1992). Continuous versus discrete dimensions of reinforcement schedules: An integrative analysis. Journal of the Experimental Analysis of Behavior, 58, 205–228.
WINTERS, L. C., & WALLACE, W. H. (1970). On operant conditioning techniques. The Public Opinion Quarterly, 3, 39–45.
WULFERT, E., DOUGHER, M. J., & GREENWAY, D. E. (1991). Protocol analysis of the correspondence of verbal behavior and equivalence class formation. Journal of the Experimental Analysis of Behavior, 56, 489–504.
YOUNG, A. M., & HERLING, S. (1986). Drugs as reinforcers: Studies in laboratory animals. In S. R. Goldberg & I. P. Stolerman (Eds.), Behavioral analysis of drug dependence (pp. 9–67). Orlando, FL: Academic Press.
ZEILER, M. D. (1977). Schedules of reinforcement: The controlling variables. In W. K. Honig & J. E. R. Staddon (Eds.), Handbook of operant behavior (pp. 201–232). Englewood Cliffs, NJ: Prentice-Hall.