BARKS from the Guild Spring 2014


COVER STORY

…edge of what is happening in dog training, or is a piece of history repeating itself? Has some objectivity been lost as one reward-based operant technology is criticized and another is lauded? It would be a shame to allow misunderstandings to drive a wedge between our fledgling ranks as we all work to convince our clients of the powers of positive reinforcement. Fortunately, humane, no-force trainers share much common ground. Following are some key criticisms and claims, along with some explanation of the history, uses, strengths and pitfalls of these training technologies, based on scientific texts and literature in applied behavior analysis and learning theory.

• Luring isn’t training. It implies doubt in the laws of learning.

A lure is one kind of orienting prompt (sticks and balls are others) that is used to guide a dog into a desired behavior (Lindsay, 2000). These prompts are antecedents, stimuli that come before behavior. Since the three-term contingency of A-B-C (antecedent-behavior-consequence) is the basic unit of analysis in learning theory, we cannot simply dismiss A. Like other prompts (verbal ones like high-pitched noises, gestural ones like clapping or crouching, environmental ones like using a wall to get a straight heel position), a lure is used to get the desired behavior during the early part of training and is then gradually eliminated after the behavior has been strengthened through reinforcement. Since fading is the technology for eliminating the prompt, prompting and fading go hand in hand.

Prompting-fading and shaping have distinct purposes. Fading is about stimulus control, the A part of A-B-C. It typically begins with a behavior that is already in the animal’s repertoire and gradually changes what controls (or evokes) it. So a ‘sit’ is first evoked with a lure, and over several repetitions the lure is faded out as a gestural or verbal cue is faded in. Whether we are shapers or prompters or both, we all use fading when installing verbal cues. In contrast, shaping is about developing a behavior that is not yet in the animal’s repertoire, the B part of A-B-C. Shaping begins with an approximation of the final desired behavior and gradually develops it into the final one. In short, fading changes the controlling stimuli of an existing behavior, whereas shaping develops new behavior.

Both techniques rely on the C part of A-B-C, reinforcement, to strengthen or maintain behavior. In the real world, though, the distinction is not drawn so neatly, since the two procedures are often used together. We use prompts during shaping (e.g. to teach a rollover or spin) and some successive approximations during luring (e.g. getting a full down). And in both lure-reward training and shaping, clickers can be used.

Also known as ‘errorless learning,’ prompting-fading has been around since the work of Skinner, Terrace and others from the 1930s to the 1960s, which showed that, with sufficient care, a discrimination could be learned with few or no errors (Skinner, 1938; Schlosberg & Solomon, 1943; Terrace, 1963, 1966). Skinner and Terrace both believed that errors resulting from ‘trial and error’ discrimination training (i.e. shaping) were aversive and harmful. With children and animals, shaping often provoked problem behaviors when subjects made mistakes and lost reinforcement. Aggression, self-injury, tantrums, frustration, apathy and escape behaviors were all cited as direct consequences of shaping (Touchette & Howard, 1984). So errorless techniques were developed to obtain behaviors more efficiently, with a steady rate of reinforcement and less wear and tear on the learner. At the time, this caused quite a paradigm shift, since it was widely assumed that errors were necessary for learning. A considerable body of evidence has since accumulated showing that errorless techniques are successful with human and nonhuman animals (Schroeder, 1997).

The notion that ‘prompting is not training’ is somewhat understandable, since we know that consequences, not antecedents, determine behavior. We all strive to focus our clients on consequences rather than antecedents. Although antecedents do not cause behavior to occur, they increase the likelihood of behavior if their presence has been associated with past reinforcement of that behavior. In short, B is a function of C in the presence of A. So a dog learns that in the presence of antecedent X (mom), sits are noticed and reinforced, but in the presence of antecedent Y (child), sits are not reinforced. Sits become much more likely in front of mom, who is the discriminative stimulus (SD) for reinforcement of sits.
