
BURRHUS FREDERIC SKINNER (1904-1990)

 Skinner believed in the stimulus-response pattern of conditioned behaviour.
 Skinner’s 1948 book, Walden Two, is about a
utopian society based on operant conditioning.
 He also wrote Science and Human Behavior (1953), in which he pointed out how the principles of
operant conditioning function in social institutions such as government, law, religion, economics, and
education.
 Skinner’s work differs from that of the three behaviourists (Pavlov, Watson, Thorndike) in that he
studied operant behaviour (voluntary behaviors used in operating on the environment). Thus, his
theory came to be known as Operant Conditioning.

The theory of behaviorism focuses on the study of observable and measurable behavior. It
emphasizes that behavior is mostly learned through conditioning and reinforcement (rewards and
punishment).

Operant conditioning is based upon the idea that learning is a result of change in overt behaviour.
Changes in behaviour are the result of an individual’s response to events (stimuli) that occur in the
environment. A response produces a consequence such as defining a word, hitting a ball, or solving
a math problem. When a particular Stimulus-Response (S-R) pattern is reinforced (rewarded), the
individual is conditioned to respond.
Reinforcement is the key element in Skinner’s S-R theory. A reinforcer is anything that
strengthens the desired response. There is a positive reinforcer and a negative reinforcer.

 A positive reinforcer is any stimulus that is given or added to increase the response.
An example of positive reinforcement is when a teacher promises extra time in the play area
to children who behave well during the lesson.
Another is a mother who promises a new cell phone to her son if he gets good grades. Other
examples include verbal praise, star stamps, and stickers.
 A negative reinforcer is any stimulus that results in the increased frequency of a response
when it is withdrawn or removed. A negative reinforcer is not a punishment; in fact, it is a
reward.
For instance, a teacher announces that a student who gets a grade of 95 in both grading
periods will no longer take the final examination. The negative reinforcer is “removing” the
final exam, which is in fact a reward for working hard and earning an average grade of 95.
 A negative reinforcer is different from a punishment because a punishment is a consequence
intended to result in reduced responses.
An example would be a student who always comes late not being allowed to join group work
that has already begun (punishment) and therefore losing points for that activity. The
punishment is meant to reduce the response of repeatedly coming to class late.

Limited Effects of Punishment


• Punishment does not teach appropriate behaviors
• May result in negative side effects
• Undesirable behaviors may be learned through modeling (e.g., aggression: angry or violent
behavior)
• May create negative emotions (anxiety & fear)

Interval schedules: reinforcement occurs after a certain amount of time has passed
Fixed Interval = reinforcement is presented after a fixed amount of time
Variable Interval = reinforcement is delivered on a random/variable time schedule

Ratio schedules: reinforcement occurs after a certain number of responses
Fixed Ratio = reinforcement is presented after a fixed number of responses
Variable Ratio = reinforcement delivery varies but is based on an overall average number of responses
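
For readers who find pseudocode helpful, the sketch below models the four schedules as simple Python rules that decide when a reinforcer is delivered. It is an illustrative toy, not anything taken from Skinner's own work; the class names, time units, and randomization choices are assumptions made only for clarity.

```python
import random

class FixedInterval:
    """Reinforce the first response after a fixed amount of time has passed."""
    def __init__(self, interval):
        self.interval, self.last_time = interval, 0.0

    def reinforce(self, now, responded):
        if responded and now - self.last_time >= self.interval:
            self.last_time = now
            return True
        return False

class VariableInterval:
    """Like FixedInterval, but the required wait varies around an average."""
    def __init__(self, mean_interval):
        self.mean, self.last_time = mean_interval, 0.0
        self.wait = random.expovariate(1 / mean_interval)

    def reinforce(self, now, responded):
        if responded and now - self.last_time >= self.wait:
            self.last_time = now
            self.wait = random.expovariate(1 / self.mean)
            return True
        return False

class FixedRatio:
    """Reinforce after a fixed number of responses."""
    def __init__(self, ratio):
        self.ratio, self.count = ratio, 0

    def reinforce(self, responded):
        if responded:
            self.count += 1
            if self.count >= self.ratio:
                self.count = 0
                return True
        return False

class VariableRatio:
    """Reinforce after a varying number of responses with a fixed average."""
    def __init__(self, mean_ratio):
        self.mean, self.count = mean_ratio, 0
        self.needed = random.randint(1, 2 * mean_ratio - 1)

    def reinforce(self, responded):
        if responded:
            self.count += 1
            if self.count >= self.needed:
                self.count = 0
                self.needed = random.randint(1, 2 * self.mean - 1)
                return True
        return False
```

Under these assumed rules, FixedRatio(5).reinforce(True) returns True only on every fifth response, while VariableRatio(5) returns True after an unpredictable number of responses that averages five.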

How Complex Behaviors are learned:


1. Shaping Behavior
An animal in a cage may take a very long time to figure out that pressing a lever will produce
food. To accomplish such behavior, successive approximations of the behavior are rewarded
until the animal learns the association between the lever and the food reward. To begin shaping,
the animal may be rewarded for simply turning in the direction of the lever, then for moving
toward the lever, for brushing against the lever, and finally for pressing the lever.
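
As a rough sketch of “successive approximations” (a toy model with made-up numbers, not a description of real animal learning), the loop below rewards any behaviour that falls within a tolerance of the target lever press, nudges future behaviour toward what was rewarded, and tightens the tolerance after each reward.

```python
import random

random.seed(0)
target = 1.0       # 1.0 stands for "presses the lever"
tendency = 0.0     # where the animal's behaviour currently centres
tolerance = 0.8    # how close an approximation must be to earn food

for trial in range(300):
    behaviour = max(0.0, min(1.0, tendency + random.uniform(-0.3, 0.3)))
    if abs(target - behaviour) <= tolerance:  # close enough: reinforce it
        # behaviour drifts toward the rewarded act
        tendency = min(1.0, tendency + 0.2 * (behaviour - tendency) + 0.02)
        # demand a closer approximation next time
        tolerance = max(0.05, tolerance - 0.01)

print(f"tendency to press the lever after shaping: {tendency:.2f} (1.0 = reliably presses)")
```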

2. Behavioral Chaining
Behavioral chaining comes about when a series of steps needs to be learned. The animal masters each
step in sequence until the entire sequence is learned. This can be applied to a child being taught
to tie a shoelace. The child can be given reinforcement (rewards) until the entire process of tying
the shoelace is learned.

3. Fixed Interval Schedules


The target response is reinforced after a fixed amount of time has passed since the last
reinforcement. For example, the bird in a cage is given food (reinforcer) every 10 minutes, regardless
of how many times it presses the bar.

4. Variable Interval Schedules


This is similar to a fixed interval schedule, but the amount of time that must pass between
reinforcements varies. For example, the bird may receive food (reinforcer) at different intervals, not
every ten minutes.

5. Fixed Ratio Schedules


A fixed number of correct responses must occur before reinforcement may recur. For example,
the bird is given food (reinforcer) every time it presses the bar five times.

6. Variable Ratio Schedules


The number of correct responses needed for reinforcement varies. For
example, the bird is given food (reinforcer) after it presses the bar 3 times, then after 10 times,
then after 4 times. The bird is therefore unable to predict how many times it needs to press the
bar before it gets food again.
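
To make this unpredictability concrete, the short toy simulation below (invented numbers, not data from any experiment) delivers food after a randomly varying number of bar presses that averages about five.

```python
import random

random.seed(1)  # fixed seed so the toy run is repeatable

presses_needed = random.randint(1, 9)   # requirement varies per reward, averaging ~5
presses_since_food = 0

for press in range(1, 31):              # the bird presses the bar 30 times
    presses_since_food += 1
    if presses_since_food >= presses_needed:
        print(f"press {press}: food after {presses_since_food} presses")
        presses_since_food = 0
        presses_needed = random.randint(1, 9)   # the next requirement is unpredictable
```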

IMPLICATIONS OF OPERANT CONDITIONING


• Practice should take the form of question (stimulus) – answer (response) frames which expose
the student to the subject in gradual steps.
• Require that the learner make a response to every frame and receive immediate feedback.
• Try to arrange the difficulty of the questions so that the response is always correct and hence a
positive reinforcement.
• Ensure that good performance in the lesson is paired with secondary reinforcers such as verbal
praise, prizes, and good grades.
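
These points describe what Skinner called programmed instruction. The sketch below is one possible minimal rendering of such a question-answer frame sequence with immediate feedback; the frames and their wording are invented for illustration and are not from the source.

```python
# Minimal question (stimulus) - answer (response) frames with immediate feedback.
# The frame content below is invented for illustration.
frames = [
    ("A stimulus that is added to increase a response is a ________ reinforcer.", "positive"),
    ("A stimulus whose removal increases a response is a ________ reinforcer.", "negative"),
    ("A consequence intended to reduce a response is a ________.", "punishment"),
]

def run_frames(frames):
    score = 0
    for prompt, expected in frames:
        response = input(prompt + " ").strip().lower()   # the learner responds to every frame
        if response == expected:
            print("Correct - well done!")                # immediate positive reinforcement
            score += 1
        else:
            print(f"Not quite; the expected answer is '{expected}'.")  # immediate corrective feedback
    print(f"You answered {score} of {len(frames)} frames correctly.")

if __name__ == "__main__":
    run_frames(frames)
```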

PRINCIPLES DERIVED FROM SKINNER’S OPERANT CONDITIONING:

1. Behavior that is positively reinforced will reoccur; intermittent reinforcement is particularly
effective.

2. Information should be presented in small amounts so that responses can be reinforced
(“shaping”).

3. Reinforcements will generalize across similar stimuli (“stimulus generalization”), producing
secondary conditioning.
