Schedules of reinforcement


description

This slide deck was made by Amit (facebook.com/titanium009) for his class presentation. Drop him a mail if you want the exact presentation file ([email protected])

Transcript of Schedules of reinforcement

Page 1: Schedules of reinforcement

Schedules of Reinforcement

A mature study by two immature minds

Copyright © 2012 Sequels.

Page 2: Schedules of reinforcement

Reinforcement is a term in psychology for the process of strengthening a directly measurable dimension of behaviour, such as rate (e.g., pulling a lever more frequently), duration (e.g., pulling a lever for longer periods of time), magnitude (e.g., pulling a lever with greater force), or latency (e.g., pulling a lever more quickly following the onset of an environmental event), as a function of the delivery of a "valued" stimulus (e.g., money from a slot machine) immediately or shortly after the occurrence of the behaviour.

A reinforcer is a temporally contiguous environmental event, or an effect directly produced by a response (e.g., a musician playing a melody), that functions to strengthen or maintain the response that preceded the event.

Page 3: Schedules of reinforcement

Timing Life’s Rewards

The world would be a different place if poker players never played cards again after the first losing hand, fishermen returned to shore as soon as they missed a catch, or telemarketers never made another phone call after their first hang-up. The fact that such unreinforced behaviours continue, often with great frequency and persistence, illustrates that reinforcement need not be received continually for behaviour to be learned and maintained.

In fact, behaviour that is reinforced only occasionally can ultimately be learned better than behaviour that is always reinforced.


Page 4: Schedules of reinforcement

Classifications

Reinforcement

• Continuous

• Intermittent

– Fixed Interval

– Variable Interval

– Fixed Ratio

– Variable Ratio
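These four intermittent schedules are, at bottom, simple rules for deciding when a response earns a reinforcer. The following is a minimal sketch, not part of the original slides; the class names and the uniform-random choice of the next ratio or interval are illustrative assumptions:

```python
import random

class FixedRatio:
    """Reinforce every n-th response (e.g., every 10th lever pull)."""
    def __init__(self, n):
        self.n, self.count = n, 0
    def respond(self):
        self.count += 1
        if self.count == self.n:
            self.count = 0
            return True   # reinforcer delivered
        return False

class VariableRatio:
    """Reinforce after an unpredictable number of responses whose mean is n."""
    def __init__(self, n, rng=random):
        self.n, self.rng, self.count = n, rng, 0
        self.required = rng.randint(1, 2 * n - 1)   # uniform, mean n
    def respond(self):
        self.count += 1
        if self.count >= self.required:
            self.count = 0
            self.required = self.rng.randint(1, 2 * self.n - 1)
            return True
        return False

class FixedInterval:
    """Reinforce the first response made after t seconds have elapsed."""
    def __init__(self, t):
        self.t, self.last = t, 0.0
    def respond(self, now):
        if now - self.last >= self.t:
            self.last = now
            return True
        return False

class VariableInterval:
    """Reinforce the first response after an unpredictable delay (mean t)."""
    def __init__(self, t, rng=random):
        self.t, self.rng, self.last = t, rng, 0.0
        self.wait = rng.uniform(0, 2 * t)   # uniform, mean t
    def respond(self, now):
        if now - self.last >= self.wait:
            self.last = now
            self.wait = self.rng.uniform(0, 2 * self.t)
            return True
        return False
```

For example, `FixedRatio(10).respond()` returns True only on every tenth call, while `FixedInterval(60).respond(now)` pays off at most once per minute no matter how fast responses arrive, which matches the behaviour patterns described on the following slides.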

Page 5: Schedules of reinforcement

Fixed-interval schedule

Where the first response is rewarded only after a specified amount of time has elapsed. This schedule causes a high rate of responding near the end of the interval, but much slower responding immediately after the delivery of the reinforcer.


Page 6: Schedules of reinforcement

• Students often study minimally or not at all until the day of the exam draws near.

– Just before the exam, however, students begin to cram for it, signalling a rapid increase in the rate of their studying response. As you might expect, immediately after the exam there is a rapid decline in the rate of responding, with few people opening a book the day after a test.


Page 7: Schedules of reinforcement

Variable-interval schedule

Where the first response is rewarded after an unpredictable amount of time has passed. This schedule produces a slow, steady rate of response.

Page 8: Schedules of reinforcement

• One example of a fixed-interval schedule is a weekly pay-cheque.

• An example of a variable-interval schedule is the famous surprise quiz: students have to read and keep themselves up to date for their upcoming battle.


Page 9: Schedules of reinforcement

Variable-ratio schedule

Where a response is reinforced after an unpredictable number of responses. This schedule creates a high, steady rate of responding. Gambling and lottery games are good examples of rewards based on a variable-ratio schedule.


Page 10: Schedules of reinforcement

One example of a variable-ratio schedule is a telephone salesperson’s job. He might make a sale during the 3rd, 8th, 9th, and 20th calls without being successful during any call in between. Although the number of responses he must make before making a sale varies, it averages out to a 20 percent success rate. Under these circumstances, you might expect that the salesperson would try to make as many calls as possible in as short a time as possible. This is the case with all variable-ratio schedules, which lead to a high rate of response and resistance to extinction.
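The arithmetic above can be checked, and the schedule simulated, with a short sketch. This is not from the slides; the function name and the model of an independent 20%-per-call chance of a sale are illustrative assumptions:

```python
import random

# From the slide: sales occurred on the 3rd, 8th, 9th, and 20th calls.
sale_calls = [3, 8, 9, 20]
success_rate = len(sale_calls) / 20          # 4 sales in 20 calls = 0.20

def simulate_calls(p_sale=0.2, n_calls=100_000, seed=1):
    """Model the variable-ratio schedule as an independent p_sale
    chance of a sale on every call, and return the observed rate."""
    rng = random.Random(seed)
    sales = sum(rng.random() < p_sale for _ in range(n_calls))
    return sales / n_calls
```

Over many calls the observed rate converges on 20 percent even though the gap between individual sales stays unpredictable, which is exactly what keeps the response rate high.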


Page 11: Schedules of reinforcement

Fixed-ratio schedule

Where a response is reinforced only after a specified number of responses. This schedule produces a high, steady rate of responding with only a brief pause after the delivery of the reinforcer.

Page 12: Schedules of reinforcement

• One example of a fixed-ratio schedule is the weekly daring Mathlab assignment. Only after submitting do we get our respective marks :D

• This schedule produces a high, steady rate of responding with only a brief pause after the delivery of the assignment before the deadline :P


Page 13: Schedules of reinforcement

A Presentation by Sequels.