"

Unit 2 Conclusion: Operant Learning & Choice

Summary

Module 06: Operant Conditioning 1

Operant conditioning explains voluntary behaviors shaped by consequences, contrasting with classical conditioning’s reflexive responses.

  • Thorndike: law of effect—behaviors followed by satisfying outcomes are repeated.
  • Guthrie: one‑trial learning of movements; practice builds skills.
  • Skinner: three‑term contingency (S^D → R → S^R: discriminative stimulus, response, reinforcing consequence) formalized operant conditioning.
  • Four consequences: positive reinforcement (add pleasant), negative reinforcement (remove unpleasant), positive punishment (add unpleasant), negative punishment (remove pleasant).
  • Shaping: reinforcement of successive approximations builds complex behaviors.
  • Reinforcement schedules: FR (high rates with post‑reinforcement pauses), FI (scalloped pattern: pausing, then accelerating responding as the interval elapses), VR (highest, steadiest rates & greatest resistance to extinction), VI (steady moderate rates); see the sketch below.
  • Extinction produces response bursts & spontaneous recovery; superstitions arise from accidental pairings of responses & reinforcers.
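
A minimal Python sketch of how ratio schedules deliver reinforcement, as referenced above. The function names and the rule used to sample the variable‑ratio requirement are illustrative assumptions, not part of the module.

    import random

    def fixed_ratio(n, num_responses):
        """FR-n: reinforce exactly every n-th response."""
        return [r for r in range(1, num_responses + 1) if r % n == 0]

    def variable_ratio(n, num_responses, seed=1):
        """VR-n: reinforce after a varying number of responses averaging n.
        The requirement is drawn uniformly from 1..(2n - 1), an assumption
        chosen only so that its mean works out to n."""
        rng = random.Random(seed)
        reinforced, count = [], 0
        requirement = rng.randint(1, 2 * n - 1)
        for r in range(1, num_responses + 1):
            count += 1
            if count >= requirement:
                reinforced.append(r)
                count = 0
                requirement = rng.randint(1, 2 * n - 1)
        return reinforced

    print(fixed_ratio(5, 25))     # predictable: [5, 10, 15, 20, 25]
    print(variable_ratio(5, 25))  # irregular spacing, roughly every 5th response

Because the variable‑ratio requirement is unpredictable, responding is not followed by a reliable post‑reinforcement pause, which is one way to see why VR schedules sustain high, steady rates.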

Module 07: Operant Conditioning 2

This module deepens the account of operant conditioning with theories of reinforcement & its complexities.

  • Latent learning: knowledge can form without reinforcement, but reinforcement motivates performance.
  • Premack Principle: preferred activities reinforce less‑preferred ones.
  • Response Deprivation Theory: restricting access to a behavior below its baseline level makes the opportunity to perform it reinforcing (see the sketch after this list).
  • Behavioral economics: organisms allocate behavior like consumers, optimizing choices subject to diminishing marginal value & the elasticity of demand for reinforcers.
  • Chaining: forward & backward chaining build complex sequences.
  • Instinctive drift: learned behaviors revert to instinctual patterns.
  • Avoidance learning: classical conditioning creates fear; operant conditioning maintains avoidance through negative reinforcement.
  • Factors influencing the effectiveness of consequences: satiation, immediacy, contingency, cost‑benefit trade‑offs, reinforcer quality, & motivation.
  • Punishment: effectiveness depends on immediacy, intensity, consistency, & the availability of alternatives; drawbacks include emotional side effects, suppression rather than elimination of behavior, & aggression.
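
A small Python sketch of the response‑deprivation idea from the third bullet. The inequality it checks (the schedule's instrumental‑to‑contingent requirement exceeds the baseline ratio) is the standard statement of the hypothesis; the variable names and example numbers are hypothetical.

    def is_reinforcing(baseline_instrumental, baseline_contingent,
                       required_instrumental, allowed_contingent):
        """Response deprivation: the contingent behavior should act as a
        reinforcer when the schedule restricts it below baseline, i.e. when
        performing the baseline amount of the instrumental behavior would
        earn less of the contingent behavior than its baseline level."""
        return (required_instrumental / allowed_contingent) > (
            baseline_instrumental / baseline_contingent)

    # Hypothetical baselines: 10 min of homework and 60 min of gaming per evening.
    # Requiring 20 min of homework per 30 min of gaming restricts gaming below
    # baseline, so gaming should now reinforce homework.
    print(is_reinforcing(baseline_instrumental=10, baseline_contingent=60,
                         required_instrumental=20, allowed_contingent=30))  # True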

Module 08: Sports Psychology

Motor learning principles apply to skill acquisition in sports & beyond.

  • Motor skills: discrete vs. continuous; closed‑loop (guided by ongoing feedback) vs. open‑loop (pre‑programmed & executed without feedback).
  • Feedback: knowledge of results (KR) vs. knowledge of performance (KP); KP is more effective for improvement.
  • Practice: distributed practice outperforms massed practice; observational learning enhances skill when combined with practice.
  • Transfer of training: skills can transfer positively or negatively.
  • Arousal & performance: Yerkes‑Dodson Law (inverted‑U); optimal arousal varies by task complexity, skill level, & personality.
  • Mental imagery: visualization activates neural circuits, supplements practice, aids stress management & rehabilitation.
  • Motor learning theories:
    • Adams’s two‑stage theory: perceptual trace (internal reference for detecting errors) & memory trace (selects & initiates the movement).
    • Response chain approach: feedback from each movement serves as the stimulus for the next; limited because rapid sequences outrun feedback.
    • Motor program theory: centrally pre‑programmed sequences.
    • Schema theory: variable practice builds general rules enabling novel movements.

Module 09: Decision-Making 1

This module focuses on choice behavior & theoretical models of decision-making.

  • Matching Law: responses are distributed in proportion to relative reinforcement rates; the generalized law adds bias & sensitivity parameters (see the sketch after this list).
  • Optimization theory: organisms maximize overall utility, allocating behavior in light of diminishing marginal value.
  • Momentary maximization: choices maximize immediate utility, sometimes approximating matching.
  • Decision contexts: certainty (known outcomes), risk (known probabilities), uncertainty (unknown probabilities).
  • Normative models: expected utility theory, subjective utility theory, decision trees, utilitarianism, probability theory.
  • Descriptive models: satisficing (good enough choices), prospect theory (loss aversion, endowment effect, risk preferences), regret theory (anticipating regret), compensatory strategies (trade‑offs across dimensions). Decision-making blends rational prescriptions with systematic biases.
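
The generalized matching law from the first bullet can be written as B1/B2 = b(R1/R2)^s, where B1 and B2 are response rates, R1 and R2 are reinforcement rates, b is bias, and s is sensitivity. Below is a minimal Python sketch of that formulation; the parameter values in the example are made up for illustration.

    def generalized_matching(r1, r2, bias=1.0, sensitivity=1.0):
        """Predicted proportion of responses allocated to option 1, using the
        generalized matching law B1/B2 = bias * (r1/r2) ** sensitivity."""
        ratio = bias * (r1 / r2) ** sensitivity
        return ratio / (1 + ratio)  # convert the response ratio to a proportion

    # Strict matching (bias = sensitivity = 1) with 40 vs. 20 reinforcers per hour:
    print(generalized_matching(40, 20))                   # ~0.667
    # Undermatching (sensitivity < 1) pulls allocation toward indifference:
    print(generalized_matching(40, 20, sensitivity=0.8))  # ~0.635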

Module 10: Decision-Making 2

This module explores heuristics, biases, & self-control.

  • Heuristics: availability (ease of recall), representativeness (similarity to a stereotype), recognition (familiarity), take‑the‑best, & fast‑and‑frugal trees (a take‑the‑best sketch follows this list).
  • Biases: anchoring, framing, overconfidence, gambler’s fallacy.
  • Self-control: choosing larger‑later rewards over smaller‑sooner ones; parallels social cooperation dilemmas.
  • Delay discounting: a hyperbolic discounting function explains preference reversals & impulsivity (see the sketch after this list).
  • Precommitment: constraining future choices prevents reversals.
  • Uncertainty: probabilistic outcomes reduce self-control; feedback restores confidence.
  • Willpower: limited cognitive resource subject to ego depletion but trainable.
  • Life history theory: fast strategies (impulsive, adaptive in unstable environments) vs. slow strategies (patient, adaptive in stable contexts).
  • Interventions: modify environments, lower value of impulsive choices, raise value of self-controlled choices, use feedback & record-keeping.
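
A compact Python sketch of the take‑the‑best heuristic mentioned above: inspect cues in order of validity and decide on the first cue that discriminates between the two options. The cue names, values, and city example are hypothetical.

    def take_the_best(option_a, option_b, cues):
        """cues: cue names ordered from most to least valid.
        option_a / option_b: dicts mapping cue name -> 1 (positive) or 0 (negative).
        Returns the option favored by the first discriminating cue, or None."""
        for cue in cues:
            if option_a[cue] != option_b[cue]:
                return "A" if option_a[cue] > option_b[cue] else "B"
        return None  # no cue discriminates, so guess

    # Which of two cities is larger? Binary cues listed in descending validity.
    cues = ["has_major_airport", "is_state_capital", "has_pro_team"]
    city_a = {"has_major_airport": 1, "is_state_capital": 0, "has_pro_team": 1}
    city_b = {"has_major_airport": 1, "is_state_capital": 1, "has_pro_team": 0}
    print(take_the_best(city_a, city_b, cues))  # "B", decided by the second cue

The delay‑discounting bullet can likewise be made concrete with the hyperbolic form V = A / (1 + kD), where A is the reward amount, D the delay, and k an impulsivity parameter. The amounts, delays, and k below are made‑up numbers chosen only to show how a preference reversal falls out of the curve.

    def hyperbolic_value(amount, delay, k=0.1):
        """Subjective value of a delayed reward: V = A / (1 + k * D)."""
        return amount / (1 + k * delay)

    def preferred(ss, ll, k=0.1):
        """Compare a smaller-sooner (ss) and larger-later (ll) reward,
        each given as (amount, delay in days)."""
        sooner = hyperbolic_value(*ss, k=k)
        later = hyperbolic_value(*ll, k=k)
        return "smaller-sooner" if sooner > later else "larger-later"

    # Viewed from far away, the larger-later reward is worth more...
    print(preferred(ss=(50, 30), ll=(100, 45)))  # larger-later
    # ...but once the smaller reward is immediate, preference reverses.
    print(preferred(ss=(50, 0), ll=(100, 15)))   # smaller-sooner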

Overall Conclusion

Modules 06–10 trace the progression from operant conditioning principles to complex decision-making. Operant conditioning explains how voluntary behavior is shaped by consequences, refined through reinforcement schedules, & constrained by biology. Sports psychology extends these principles to motor skill learning, arousal, & imagery. Decision-making modules integrate behavioral economics, normative & descriptive models, heuristics, biases, & evolutionary strategies, revealing why humans often fail at self-control & how interventions can align immediate choices with long-term welfare.

License

Psychology of Learning TxWes Copyright © by Jay Brown. All Rights Reserved.