J Behav Educ
DOI 10.1007/s10864-016-9249-0
ORIGINAL PAPER
Using Stimulus Equivalence-Based Instruction to Teach Graduate Students in Applied Behavior Analysis to Interpret Operant Functions of Behavior

Leif Albright1 • Lauren Schnell1 • Kenneth F. Reeve1 • Tina M. Sidener1
© Springer Science+Business Media New York 2016
Abstract  Stimulus equivalence-based instruction (EBI) was used to teach four 4-member classes representing functions of behavior to ten graduate students. The classes represented behavior maintained by attention (Class 1), escape (Class 2), access to tangibles (Class 3), and automatic reinforcement (Class 4). Stimuli within each class consisted of a single textual label and multiple exemplars of textual descriptions, graphical representations, and clinical vignettes. EBI was conducted using custom computer software in a match-to-sample (MTS) format. A pretest-train-posttest design conducted during a single session evaluated performances on computer MTS, written multiple-choice, and oral tests. Scores improved from pretests to posttests on all three tests for all students following EBI. In addition, class-consistent performances maintained 2 weeks after posttests were conducted. These results demonstrated that EBI can be used to effectively teach the functions of behavior and that an MTS teaching protocol administered on a computer can promote the emergence of class-consistent responding on selection-based (i.e., multiple-choice) and topography-based (i.e., oral) tests.

Keywords  Stimulus equivalence instruction · Functional analysis · Multiple exemplar training · Derived stimulus relations · College instruction

Correspondence: Kenneth F. Reeve, [email protected]
Leif Albright, [email protected]; Lauren Schnell, [email protected]; Tina M. Sidener, [email protected]

1 Department of Applied Behavior Analysis, Caldwell University, 120 Bloomfield Avenue, Caldwell, NJ 07006, USA
Teaching Functions of Behavior with Stimulus Equivalence-Based Instruction

Functional analysis has been an integral part of behavioral assessment over the past 30 years (Beavers et al. 2013). In the functional analysis research literature, formalized staff training protocols have been used to teach clinicians to conduct experimental functional analyses (e.g., Lambert et al. 2013; Moore et al. 2002; Moore and Fisher 2007; Wallace et al. 2004; Ward-Horner and Sturmey 2012). Specifically, clinicians have been taught to accurately deliver antecedents and consequences during conditions (e.g., Wallace et al. 2004). Although these skills are critical, a crucial goal of functional analysis is to determine the function of behavior by interpreting data. To date, however, only a few studies have focused on training clinicians to interpret functional analysis data (Chok et al. 2012; Hagopian et al. 1997). Although these studies effectively trained participants to analyze data (Chok et al. 2012; Hagopian et al. 1997) and make treatment decisions (Chok et al. 2012), the sessions were lengthy (e.g., 8–10 h) and required multiple procedural steps to individually train each participant.

Stimulus equivalence-based instruction (EBI) may provide an alternative method for training staff to interpret functional analysis data. An equivalence class consists of three or more physically disparate stimuli, each of which occasions the selection of other members in the set (Fields and Verhave 1987; Sidman and Tailby 1982). During EBI, a series of conditional discriminations is trained, followed by tests of other non-taught, or derived, stimulus–stimulus relations. In recent years, EBI has been used to teach a wide variety of academic skills to college-level learners (e.g., Albright et al. 2016; Fields et al. 2009; Fienup et al. 2010; Fienup and Critchfield 2010; Ninness et al. 2005, 2009; Walker and Rehfeldt 2012). For example, Fields et al.
(2009) taught statistical interactions to college learners. Students in the control group took two different written pretests and posttests but did not receive stimulus equivalence instruction. The experimental group first took a written test, then received computer-based stimulus equivalence instruction in which four-member classes were trained, and then took a second version of the written test. Nine out of ten students learned the equivalence classes, and students in the experimental group scored higher on the written posttest than students in the control group.

In many equivalence studies, derived relations are demonstrated using selection-based responding (i.e., selecting stimuli from an array). In contrast, some recent studies have assessed derived relations using tests with responses of different topographies. For example, Walker and Rehfeldt (2012) used a selection-based training procedure with graduate students to establish relations among the names, definitions, graphical representations, and clinical vignettes of various single-subject research designs. While a selection-based procedure was used during training, a different response topography (i.e., oral labeling) was used to evaluate the emergence of derived relations. These data demonstrate that class-consistent responding during EBI can occur across additional response topographies.
Because functions of behavior are integral behavior analytic concepts (Beavers et al. 2013) and are required as part of the Behavior Analyst Certification Board’s Fourth Edition Task List (Behavior Analyst Certification Board, 2012), this material was taught to graduate students in the present study using EBI. Generalization of class-consistent responding was promoted through the use of multiple exemplar training. In addition, the study evaluated the effects of a computerized selection-based procedure on the generalization of selection-based responding to novel stimuli (i.e., samples and comparisons), to a novel context (i.e., multiple-choice written test format), and to novel response topographies (i.e., oral responding).
Methods

Participants

Eleven graduate students were recruited from introductory courses in an applied behavior analysis Master of Arts program. One student was excluded after scoring higher than 75 % on both oral and multiple-choice written pretests, leaving ten students. Students included two males and eight females ranging in age from 22 to 45 years, with self-reported college grade point averages ranging from 3.0 to 4.0. Prior to the experiment, informed consent was obtained from each student. Course credit was given for study completion.

Settings and Materials

Experimental sessions took place in a university computer laboratory. During EBI, custom computer software controlled match-to-sample (MTS) training and testing and recorded comparison selections. Index cards, paper multiple-choice tests, and experimenter data sheets were also present during certain pretests and posttests.

Equivalence Class Stimuli

Four equivalence classes representing behavior maintained by attention (Class 1), escape (Class 2), access to tangibles (Class 3), and automatic reinforcement (Class 4) were established. Stimuli within each class included a textual label, textual descriptions, graphical representations, and clinical vignettes (referred to as the A through D stimuli in Table 1). The textual descriptions (B stimuli) were obtained from the Functional Assessment Screening Tool (Iwata et al. 2013) and the Motivation Assessment Scale (Durand and Crimmins 1988). Both assessments use interviews to identify environmental factors that may influence the occurrence of problem behaviors. Graphical representations (C stimuli) were constructed with Microsoft Excel® using hypothetical data. Clinical vignettes (D stimuli) were developed by the authors; they consisted of written applied scenarios in which a particular target behavior was described, along with corresponding antecedent and consequence stimuli.
Table 1 Experimental stimuli
Classes: 1 = Attention; 2 = Escape; 3 = Tangible; 4 = Automatic Reinforcement. Items within each notation are listed in class order (1–4).

Labels (A stimuli)
Ax: Attention (Class 1); Escape (Class 2); Tangible (Class 3); Automatic Reinforcement (Class 4)

Descriptions (B stimuli)
Bx:
1. The behavior occurs during periods of decreased social interaction
2. The behavior often occurs during training activities or when the learner is asked to complete tasks
3. The behavior often occurs when you inform the learner that (s)he cannot have a certain item or cannot engage in a particular activity
4. The behavior occurs frequently when the learner is alone or unoccupied
By:
1. The behavior rarely occurs during periods of increased social interaction
2. The behavior rarely occurs when you place few instructions on the learner
3. When the behavior occurs, you often respond by giving the learner a specific item, such as a favorite toy, food, or some other item
4. The occurrence of the behavior by the learner typically does not result in any social consequences (e.g., response blocking)
Bz:
1. The behavior appears to occur in response to adult interaction with other person(s) in the environment
2. The behavior occurs following a request to perform a difficult task
3. The behavior appears to occur to get a toy, food, or an activity that the learner has been told (s)he cannot have
4. The behavior occurs continuously if the learner is left alone for long periods of time
Bp:
1. The behavior occurs whenever a parent/staff member stops interacting with the learner
2. The behavior occurs when a direction is given to the learner
3. The behavior occurs when the favorite food, toy, or activity of the learner is removed
4. The behavior occurs at relatively high rates regardless of what is going on in the learner’s immediate environment
Bo:
1. When the behavior occurs, you or others usually respond by interacting with the learner in some way
2. When the behavior occurs, you often respond by giving the learner a brief “break” from an ongoing task
3. The behavior often occurs when you take a particular item away from a learner or when you terminate a preferred leisure activity
4. The behavior occurs repeatedly, in the same way, for long periods of time if the learner is alone

Graphs (C stimuli)
Cx, Cy, Cz, Cp, Co: hypothetical functional analysis graphs, one per class (images not reproduced here)

Vignettes (D stimuli)
Dx:
1. Ava is playing in a room next to her brother. Ava hears her brother starting to cry. Once she enters the room, he stops crying immediately. Ava leaves the room. Her brother starts crying again
2. Christopher is a second-grade boy who struggles in math. During class his teacher asked the students to take out their math textbooks and Christopher threw his book. He was sent to the principal’s office. The next day he ripped his math paper in half at the start of the lesson
3. Mrs. Alm brings her daughter to the store. When they arrive the daughter asks for candy. When Mrs. Alm says “no” the daughter begins to scream until finally mom gives it to her. Now whenever the daughter wants something and her mom says “no” she screams and cries
4. Tara is an 18-year-old girl diagnosed with autism. She is observed making loud repetitive noises while at her job site, while at home when watching television, and in the shower before bed. The parents tell the teacher that Tara is always making these noises, no matter what is occurring around her
Dy:
1. Kevin likes to act as the class clown. The other day Kevin spent almost the entire health class telling jokes. His fellow students roared with laughter at each one and Kevin kept it up, disrupting the entire period
2. Julia receives early intervention therapy. Last week, when the therapist took out puzzles, Julia dropped to the ground. The therapist was so worried she put the puzzles away. Now this happens every time the therapist takes out activities for Julia to complete
3. TJ ate a small amount of dinner and was not allowed to have dessert. He started yelling. The next day, when his dad told him he couldn’t buy a new truck while at the toy store, he started yelling again
4. Mason often flaps his hands when he is excited. This seems to happen when he is engaged in his favorite activities or when it gets too loud. Sometimes Mason will also engage in hand flapping when he is alone in a room
Dz:
1. Abby is finally sleeping in her bed. Her mother kisses her goodnight and leaves the room to go downstairs. As she is walking down the stairs, the mother hears Abby calling her name. She turns around and walks back into her room. This happens again until finally Abby falls asleep
2. Robin attends a small preschool class. The teachers give the class the direction to clean up. Robin ignores them and keeps playing. When the teacher approaches to remind him to clean up, he hits her. The teacher puts him in time out. Robin begins hitting whenever he is told to do something he does not like
3. Josh cut the line when it was time to line up to go outside for recess. His teacher asked him to move. Josh began to yell and curse. He was so disruptive that the teacher let him stay in front. The next day, when he was told to stand in the back of the line, he began to yell and curse again
4. Ron chews on pencils. He catches himself chewing most often when he is really busy at work. He tells people it helps him think and stay calm
Dp:
1. Molly is a new kindergarten student. During playtime she walked into the hallway. The teacher played with her before bringing her back. A few minutes later Molly ran out of the classroom again. The teacher now reports that Molly is running out of the classroom every day
2. Jackson was so excited to go to the playground with his father. On the way there, the father decided to stop at the grocery store. Jackson wasn’t happy. When they entered, Jackson became increasingly upset. His father immediately grabbed him and ran out of the store. His father has observed that screaming occurs more often when Jackson is doing something he does not like
3. Sarah asked whether she could take the car out to see a friend. When her mother said no, Sarah started to cry and slammed her door. Her father decided to give her his car instead. A few days later Sarah asked her parents for extra money for clothes shopping. When they said “no” Sarah started to cry
4. Liz’s husband is always pointing out how often she twirls her hair around her finger. Sometimes she doesn’t even realize she’s doing it. This happens at work or home, and it doesn’t matter who is around
Do:
1. Margaret just got a new job with a new doctor in town. She shows up for her first day of work wearing her new dress. The doctor comments how nice she looks. The next day she wears another new dress
2. Phillip’s mom presented a new food for him to eat. However, he dumped the plate onto the floor. His parents immediately sent him to his room. The next night, when asked to try another new food, Phillip did the same thing and dumped everything onto the floor
3. Today was Mason’s first day of school and his mom let him bring his favorite toy. After he hung up his coat, his mom took the toy away and turned to leave. Mason started crying. His mom gave him the toy back. Later in the day his teacher tried taking the toy and the same thing happened
4. Laurie has always been a nail biter. It’s something she’s been doing as long as she can remember. It happens most often when she is alone
Stimuli with the subscripts x, y, and z were used during computer MTS training and testing. Additional stimulus variants were used during oral tests (subscript o) and written multiple-choice tests (subscript p)
Within each class, one exemplar was used for the A stimulus (textual label) during training and testing. To promote generalization, three exemplars were used for each B (textual descriptions), C (graphical representations), and D (clinical vignettes) stimuli. In addition, novel exemplars of the B, C, and D stimuli were used in the oral and written tests to assess generalization of class-consistent responding. Stimuli used in the study were rated pre-experimentally by six BCBA-Ds with at least 15 years’ experience conducting functional analyses. Respondents were presented with one of two 30-item surveys each representing all four classes. Individual versions were sent to respondents to reduce response effort. They were asked to rate descriptions, graphs, and short case study vignettes presented with online survey software for suitability as functional analysis equivalence class members. Based on these ratings, each potential stimulus was rank ordered from best to worst. To counterbalance ‘‘suitability’’ of stimuli used in each component of the study (i.e., MTS, written, and oral), B and C stimuli ranked 1st, 3rd, and 5th
most suitable were assigned as the default training stimuli. The stimuli ranked 2nd were assigned to the oral pretest and posttest. The stimuli ranked 4th were assigned to the written pretest and posttest. Any stimuli ranked 6th or worse were discarded and not used in the study.

Table 2 Sequence of training and testing conditions

Pretests
1. Oral
2. Written
3. Computer MTS

Training and derived relations testing
4. AB (training)
5. BA (symmetry)
6. AC (training)
7. CA (symmetry)
8. BC, CB (equivalence)
9. BD (training)
10. DB (symmetry)
11. AD, DA, CD, DC (equivalence)

Posttests
12. Computer MTS
13. Oral
14. Written

Maintenance
15. Oral
16. Written

Experimental Design and Dependent Measures

Table 2 depicts the pretest-train-posttest sequence (Walker and Rehfeldt 2012) that was used to assess the effects of equivalence-based instruction on the formation of four 4-member equivalence classes. All components except maintenance were conducted in a single session on the same day to reduce the likelihood of historical confounds. Maintenance of class-consistent responding was assessed 2 weeks after the completion of EBI. The primary dependent measure was the percentage of correct trials during MTS training and testing, pretests, posttests, and maintenance tests. The number of trial blocks to criterion for each trained and derived relation during computer MTS training was also calculated.

Procedure

Oral Pretests

Oral tests were conducted using procedures similar to those of Lovett et al. (2011). The oral tests assessed whether students could label the correct function when presented with its description, graphs, and clinical vignettes (i.e., BA, CA, DA
relations). A trial block contained one trial for each of the four description-to-label (BA) relations, each of the four graph-to-label (CA) relations, and each of the four clinical-vignette-to-label (DA) relations, for a total of 12 trials in a block. The oral test stimuli included textual descriptions (B), graphical representations (C), and clinical vignettes (D). The B, C, and D stimuli in the oral tests were novel exemplars not used during the written tests or the MTS training and testing procedures. Each trial began when the experimenter presented an index card with the target description, graph, or clinical vignette and asked, “What function is this?” Students were given 30 s to respond. If the student responded correctly within 30 s, the response was scored as correct. If the student did not respond within 30 s, or delivered a response other than the correct function, the response was scored as incorrect. No feedback was provided for either correct or incorrect responses. Mastery criterion was defined as at least 92 % correct (11/12 trials). Trials were presented in a semi-random order.

Multiple-Choice Written Pretest

To measure performance in a format similar to assessments often used in college classrooms, a 48-question multiple-choice written test was administered (see “Appendix” for 5 sample questions). Each of the 12 possible stimulus relations was represented by four multiple-choice questions, each of which had four answer choices (a, b, c, or d; Fields et al. 2009). The test assessed performance on all possible trained and derived relations. The B, C, and D stimuli in the written test were novel exemplars not used during the oral tests or the computer MTS training and testing procedures. Instructions for completing the written test were read aloud by the experimenter. Students were given 30 min to complete the test, as this was a typical time limit for a similar examination at the university where the study was conducted.
No feedback was provided for either correct or incorrect responses. After completing the written pretest, students began computer MTS training and testing. If any student scored at or above the exclusion criterion of 75 % on both oral and written pretests, he or she was excused from the study.

Protocols for Computer MTS Training and Testing

During MTS training and testing trials, one sample stimulus was presented at the top center of the screen. Four stimuli were presented in a horizontal array below the sample stimulus. One stimulus was the correct comparison and the remaining three stimuli were incorrect comparisons. Across trials, the location of each comparison stimulus was randomized without replacement across the four possible locations by the computer software. During all MTS trials, a feedback box in the upper right corner of the computer screen displayed the number of trials completed and the total number of trials in that block (Fienup and Critchfield 2010). Prior to training and testing of the equivalence stimuli, a tutorial instructed students how to respond during matching trials. The tutorial contained 10 practice trials with common stimuli not related to functions of behavior (e.g., matching
names of objects to their functions, matching color names to objects). All students completed the tutorial in one practice block. Students were then exposed to the computer pretest (described below) followed by MTS training and testing.

Computer MTS Pretest

The computer pretest assessed all relations that were to be trained: label-to-description (AB), label-to-graph (AC), and description-to-vignette (BD), as well as all potential derived relations: description-to-label (BA), graph-to-label (CA), vignette-to-label (DA), label-to-vignette (AD), description-to-graph (BC), graph-to-description (CB), vignette-to-description (DB), vignette-to-graph (DC), and graph-to-vignette (CD). Trial presentation within the test was randomized for each student. Following the computer MTS pretest, AB training began (described below).

Training Protocol and Class Structure

Training and testing used a simple-to-complex protocol (e.g., Ninness et al. 2005; Walker et al. 2010). A mixed training structure was used to teach the baseline relations (Walker and Rehfeldt 2012). We taught the baseline relations AB, AC, and BD because our intent was to demonstrate the emergence of the AD and DA relations; we considered the ability to label a vignette a functional and socially significant professional skill. Trials were randomized without replacement by the computer software within all trial blocks during the MTS procedure. If participants failed to meet the mastery criterion within three trial blocks of a particular relation, remedial training occurred. During remedial training, participants were required to meet criterion on a 12-trial block that included three trials from each of the four classes while targeting a single exemplar at a time before advancing to the next exemplar. Generalization of responding across textual descriptions (B), graphical representations (C), and clinical vignettes (D) was promoted by presenting variants of stimuli during training.
Generalization of responding across variants of the B, C, and D stimuli was assessed during the oral and written tests using stimuli not associated with computer MTS training.

Label-to-Description (AB) Training

The 12-trial block consisted of three trials for each of the four label-to-description relations. During training, the order of the 12 trials in each block was randomized without replacement. Responses were followed by textual feedback presented on the computer monitor for both correct and incorrect responses. No other prompting procedures were included. Trial blocks were presented until the mastery criterion of 100 % correct (12/12 trials) per block was reached.
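The block structure described above, four classes with three description exemplars each, trial order and comparison positions randomized without replacement, and repetition until 12/12 correct, can be sketched as follows. The stimulus names and the responder interface are hypothetical placeholders, not the software used in the study.

```python
import random

# Hypothetical placeholder stimuli: 4 class labels (A) x 3 descriptions (B) each.
LABELS = ["attention", "escape", "tangible", "automatic"]
DESCRIPTIONS = {lab: [f"{lab}_B{i}" for i in (1, 2, 3)] for lab in LABELS}

def build_ab_block():
    """Return a 12-trial AB block: (sample, comparisons, correct_index) tuples."""
    trials = []
    for label in LABELS:
        for description in DESCRIPTIONS[label]:
            # One correct comparison plus one incorrect comparison from each
            # of the other three classes.
            comparisons = [description] + [
                random.choice(DESCRIPTIONS[other]) for other in LABELS if other != label
            ]
            random.shuffle(comparisons)  # randomize on-screen positions
            trials.append((label, comparisons, comparisons.index(description)))
    random.shuffle(trials)  # randomize trial order without replacement
    return trials

def run_block(trials, respond):
    """Present each trial to a responder function; True only at 12/12 (100%)."""
    correct = sum(respond(sample, comps) == idx for sample, comps, idx in trials)
    return correct == len(trials)
```

A training session would loop, regenerating and re-running blocks until `run_block` returns True, which mirrors the 100 % per-block mastery criterion.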
Description-to-Label (BA) Test

The 12-trial block consisted of three trials for each of the four description-to-label symmetry relations. Mastery criterion was defined as at least 92 % correct (11/12 trials). Feedback was not provided for correct or incorrect responses. If students did not meet the mastery criterion on the BA symmetry test, they repeated AB training as described above until the AB mastery criterion was met. Students then completed the BA symmetry test again.

Label-to-Graph (AC) Training

The 12-trial block consisted of three trials for each of the four label-to-graph relations. As in AB training, responses were followed by textual feedback. Trial blocks were presented until the mastery criterion of 100 % correct (12/12 trials) was reached.

Graph-to-Label (CA) Test

The 12-trial block consisted of three trials for each of the four graph-to-label symmetry relations. Mastery criterion was defined as at least 92 % correct (11/12 trials) in the absence of feedback. If students did not meet the mastery criterion on the CA symmetry test, they repeated AC training as described above until the AC mastery criterion was met. Students then completed the CA symmetry test again.

Equivalence Test 1

Equivalence Test 1 evaluated the emergence of description-to-graph (BC) and graph-to-description (CB) relations. The test block contained 72 trials across all classes and included one trial with each of the B and C exemplars. Mastery criterion was defined as at least 92 % correct (66/72 trials). If students did not meet the mastery criterion for Equivalence Test 1, they returned to AB and AC remedial training, followed by another presentation of Equivalence Test 1.

Description-to-Vignette (BD) Training

Each trial in the 36-trial block contained a different combination of B and D stimuli. Once again, responses were followed by textual feedback. The trial block was repeated until the mastery criterion of 100 % correct (36/36 trials) was achieved.
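The recurring criteria, 100 % correct for training blocks and "at least 92 %" for symmetry and equivalence tests (11/12, 66/72, 33/36, 88/96 trials), can be captured in a small helper. The paper does not state its rounding rule, but flooring the proportion reproduces every stated cutoff, so that inference is used here.

```python
import math

def passing_threshold(total_trials, proportion=0.92):
    """Minimum correct trials for the 'at least 92%' test criterion.
    Flooring matches the paper's stated cutoffs: 11/12, 66/72, 33/36, 88/96."""
    return math.floor(proportion * total_trials)

def meets_test_criterion(correct, total_trials):
    """Symmetry/equivalence tests: at least 92% correct."""
    return correct >= passing_threshold(total_trials)

def meets_training_criterion(correct, total_trials):
    """Training blocks (AB, AC, BD): 100% correct required."""
    return correct == total_trials
```

For example, `passing_threshold(72)` yields 66, matching the 66/72 criterion for Equivalence Test 1.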
Vignette-to-Description (DB) Test

The 36-trial test block evaluated the emergence of symmetrical vignette-to-description relations. The test for DB symmetry relations was conducted in the same manner as the previous symmetry tests. Mastery criterion was defined as at least 92 % correct (33/36 trials). If students failed to meet the mastery
criterion, they completed BD remedial training as described above, followed by another presentation of the DB symmetry test.

Equivalence Test 2

Equivalence Test 2 evaluated the emergence of untrained graph-to-vignette (CD), vignette-to-graph (DC), label-to-vignette (AD), and vignette-to-label (DA) relations. Equivalence Test 2 contained 96 trials consisting of each of the 16 class-relation combinations and all C and D stimuli. Mastery criterion was defined as at least 92 % correct (88/96 trials). If students did not meet the mastery criterion for Equivalence Test 2, they returned to AC and BD training, followed by another presentation of Equivalence Test 2.

Posttests and Maintenance

After meeting criterion on Equivalence Test 2, students received a computer, an oral, and a written posttest, in that order. These posttests were identical to the pretests except that the order of trials or questions was randomized. To assess maintenance, a third oral and written test containing the same questions as the prior tests, but with the question order randomized, was presented 2 weeks later.

Social Validity Survey

Following the maintenance tests, an eight-item social validity survey (see Table 3) was completed by students. Similar to Fields et al. (2009) and Fienup and Critchfield (2011), students used a 5-point Likert-type scale to rate eight statements concerning satisfaction with the procedures and the value of the learning outcomes.
Table 3 Social validity survey results (mean and range of ratings)

1. I am confident about my knowledge of the functions of behavior: M = 4.8, range 4–5
2. The computer lessons helped me master information about the functions of behavior: M = 4.7, range 4–5
3. I felt successful during most of the computer lessons: M = 4.5, range 4–5
4. I prefer to learn using this instructional method as compared to other instructional methods: M = 4.6, range 4–5
5. The time commitment for this instructional method was appropriate in relation to the amount of information I learned: M = 4.0, range 3–5
6. I felt frustrated during the computer lessons: M = 1.7, range 1–2
7. If available, I would use this instructional method to learn information needed for my college courses: M = 4.7, range 4–5
8. I would recommend the instructional methods in this study to other students: M = 4.5, range 4–5
Interobserver Agreement and Procedural Integrity

A graduate student and the first author collected interobserver agreement (IOA) data on a trial-by-trial basis for 100 % of oral and paper tests only. An agreement was defined as both observers agreeing on the occurrence of a correct response. Agreement was calculated by dividing the total number of agreements by the number of agreements plus disagreements and multiplying by 100. The mean agreement across all tests was 100 %. In addition, all steps of the experimental protocol (e.g., informed consent obtained, oral pretest conducted, paper-and-pencil pretest conducted) were video recorded for 100 % of sessions and later analyzed for procedural integrity. The experimenter completed 100 % of those steps correctly. A second observer collected IOA data on procedural integrity for 50 % of experimental sessions by reviewing the recorded sessions. IOA on procedural integrity was 100 %.
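The trial-by-trial agreement calculation described above amounts to the following; the observer records in the usage note are hypothetical examples, not study data.

```python
def interobserver_agreement(record_1, record_2):
    """Percentage agreement: agreements / (agreements + disagreements) * 100.
    Each record is one observer's trial-by-trial scores (e.g., 1 = correct,
    0 = incorrect); every trial is counted as an agreement or disagreement."""
    if len(record_1) != len(record_2):
        raise ValueError("Both observers must score the same set of trials")
    agreements = sum(a == b for a, b in zip(record_1, record_2))
    return 100 * agreements / len(record_1)
```

For instance, two observers who disagree on one of four trials would obtain 75 % agreement, while identical records yield the 100 % reported in the study.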
Results

Oral Tests

Figure 1 shows individual student scores for the oral pretest, posttest, and maintenance test. The mean oral pretest score across all students was 7 % (range 0–17 %). Following EBI, scores on the oral posttest improved for all students, with a mean of 89 % (range 75–100 %). On the oral maintenance test, scores remained high across students, with a mean of 87 % (range 50–100 %).
Fig. 1 Individual scores (percentage correct) for oral tests for each student
Multiple-Choice Written Tests

Figure 2 shows individual student scores for the written pretest, posttest, and maintenance test. The mean pretest score across students was 60 % (range 42–75 %). Following EBI, scores on the posttest improved for all students, with a mean of 95 % (range 88–100 %). During the 2-week maintenance test, scores remained high across students, with a mean of 94 % (range 85–100 %). To provide further analysis, Fig. 3 shows the mean percentage of correct responses on the written tests across students as a function of relation type.

Computer Pretest and Posttest

Figure 4 shows individual student scores on the computer pretest and posttest. The mean pretest score across students was 67 % (range 48–82 %). Following EBI, the mean posttest score increased to 96 % (range 93–100 %).

Computer MTS Training and Derived Relations

During EBI, the number of trial blocks required to meet criterion for each MTS training relation was fairly consistent across students. For label-to-description (AB) training, a mean of 1.5 blocks (range 1–3) was needed to meet criterion. For label-to-graph (AC) training, a mean of 1.1 blocks (range 1–2) was needed. For description-to-vignette (BD) training, the mean number of blocks to criterion was 1.7 (range 1–3).
Fig. 2 Individual scores (percentage correct) for written tests for each student
Fig. 3 Mean percentage of correct responses across students as a function of specific relations on the written pretest, posttest, and maintenance test. The “I” beams indicate 1 standard error
Fig. 4 Individual scores (percentage correct) for the computer tests for each student
During testing of derived relations, students met criterion on the description-to-label (BA) symmetry relation with a mean of 1.7 blocks (range 1–3). For graph-to-label (CA) symmetry testing, a mean of 1.1 blocks (range 1–2) was needed. For vignette-to-description (DB) symmetry testing, the mean number of blocks to criterion was 1.6 (range 1–3). For Equivalence Tests 1 and 2, the mean number of blocks to criterion was 1.4 (range 1–2) and 1.5 (range 1–3), respectively.

Social Validity

Table 3 shows the results of the student satisfaction survey. All students were generally satisfied, as indicated by positive ratings on the eight items. The mean rating across questions was 4.5 (range 4.0–4.8) on the 5-point scale, except for question 6, which had a mean rating of 1.7 (range 1–2). For question 6, a lower score indicated a more favorable rating.
Discussion

The present study demonstrated that graduate students could learn classes of stimuli representing functions of behavior using EBI. All students learned the trained and emergent relations during computer MTS training and testing, and mean computer, oral, and written posttest scores were substantially higher than mean pretest scores. Although pretest–posttest comparisons without controls are generally susceptible to historical confounds, the current study conducted all training and testing protocols, with the exception of maintenance, in a single session; such extra-experimental confounds were therefore unlikely.

These results extend previous findings in which EBI has been used to teach advanced academic content (e.g., Albright et al. 2016; Fields et al. 2009; Fienup et al. 2010; Fienup and Critchfield 2010, 2011; Ninness et al. 2005, 2009; Walker and Rehfeldt 2012). In addition, selection-based training (i.e., computer MTS) promoted the generalization of class-consistent responding to non-training stimuli in a topography-based testing format (i.e., oral and written tests; e.g., Walker and Rehfeldt 2012).

The present study also builds on the existing literature on the effects of multiple exemplar training on the generalization of responding across class members and their variants. In this study, three variants of descriptions (B stimuli), graphs (C stimuli), and vignettes (D stimuli) were chosen to program for generalization. Consistent with Fields et al. (2009), relations mastered in computer MTS training generalized to non-training exemplars presented during the oral and written posttests. Taken as a whole, class-consistent responding across trained class members, and across variants of members never used during training, demonstrated the emergence of even larger generalized equivalence classes of stimuli representing functions of behavior (Fields and Reeve 2001).
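The emergence of non-taught relations from the trained ones can be made concrete with a short sketch. Given the study's trained relations (AB, AC, BD), closing the set under symmetry and transitivity yields every other pairwise relation among the four members, which is what the derived-relations tests probed. The stimulus names and function below are illustrative, not the study's software.

```python
from itertools import product

TRAINED = {("A", "B"), ("A", "C"), ("B", "D")}  # AB, AC, BD, as in the study

def derived_relations(trained, members="ABCD"):
    """Return the stimulus-stimulus relations implied by equivalence-class
    formation (closure under symmetry and transitivity), excluding the
    directly trained relations and reflexive identity pairs."""
    relations = set(trained)
    changed = True
    while changed:
        changed = False
        new = {(y, x) for (x, y) in relations}  # symmetry
        new |= {(x, z) for (x, y), (y2, z) in product(relations, relations)
                if y == y2}  # transitivity
        new -= {(m, m) for m in members}  # drop reflexive pairs
        if not new <= relations:
            relations |= new
            changed = True
    return relations - set(trained)

derived = derived_relations(TRAINED)
# 9 derived pairs: the symmetry relations BA, CA, DB plus the
# equivalence relations AD, DA, BC, CB, CD, DC.
print(sorted(derived))
```

Running this confirms that training only three relations per class implies nine additional testable relations, which is the economy that makes EBI attractive for instruction.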
When maintenance of treatment gains was assessed, 100 % of the students maintained class-consistent responding. This suggests that the EBI training
produced robust effects. The results of the student satisfaction survey demonstrated the social validity of EBI in teaching complex academic concepts to college students. These findings were consistent with those of Fields et al. (2009) and Fienup and Critchfield (2010).

The procedures used in prior studies in which clinicians were taught to interpret functional analysis data have been complex and time-consuming (Chok et al. 2012). Although substantial time was required to create the software in the present study, once the software was completed, multiple trainings could be conducted with little experimenter input. Although we used custom software to maximize experimental control, similar MTS training and testing trials could be created using readily available software such as Microsoft PowerPoint®.

One limitation of the study, however, was that some participants entered the study with established repertoires (see Fig. 3). An analysis of the mean percentage of correct responses across relations showed that mean written pretest scores on several relations (i.e., AC, CA, CB, DC, AD) were at or above the 75 % exclusion criterion for the study. One explanation for these high scores may be that the C stimuli (i.e., graphs) contained a legend that included the labels of each class; control of responding on some relations may therefore have been due to the presence of common stimulus features across class members. Without these common features to cue correct responses, pretest scores would likely have been lower. Another explanation may be participant characteristics: because the participants were students in a graduate Applied Behavior Analysis program, they had likely been exposed to some functional analysis content before the study. Students from an unrelated field might have produced lower pretest scores.
A replication of the present study with such participants could address this concern. We chose behavior analysis graduate students, however, because the content to be learned would be more relevant to them; this relevance was generally reflected in the high scores on the social validity surveys. Regardless of the relatively high pretest scores for some participants, systematic improvements occurred after a relatively brief period of EBI, and all participants reached the mastery criterion on the oral, written, and computer posttests.

A second limitation is that we did not compare the efficiency of EBI to other teaching methods (e.g., lecture, self-study). The efficiency of an instructional intervention is best evaluated by comparing its outcomes (e.g., post-intervention performance, time to accomplish those effects) to those of interventions used in typical practice (Fienup and Critchfield 2011). Although the present investigation suggested that participants learned all relations within 2.5 h, it is unclear what results a traditional lecture of similar length would have produced.

Another limitation is that participants were not taught to make treatment decisions based on functional analysis data. Future studies should investigate ways to train such treatment decisions.

A final limitation is that classes of stimuli representing functions of behavior in real-world clinical settings would likely contain many more exemplars and variants than were used in the current study. To maximize the likelihood of class formation, the training stimuli represented
somewhat "ideal" results to make them more salient. In reality, each class can be represented by a potentially infinite number of variations within each member. Although we attempted to include a sufficient representation of each class through multiple exemplars, the scope of each function was relatively limited. Thus, future researchers should include more varied stimuli, including stimuli with less differentiated data paths as well as potentially new members (e.g., video clips), in their stimulus classes.

To summarize, our study applied EBI to a new area of academic content and demonstrated the value of multiple exemplar training in promoting generalization of instructional effects to novel exemplars. Despite the limitations noted above, the results of this study support the efficacy of equivalence-based instruction in teaching functions of behavior. Participants entered the study with limited repertoires regarding the targeted concepts, as indicated by their pretest scores, and exited with a substantial increase in those repertoires.
Appendix

Sample Questions from the Written Multiple-Choice Test Used During Pretests, Posttests, and Maintenance

Please select the correct answer by circling the letter of the answer choice.

1. An attention behavior can be described as ——
a. Behavior that occurs whenever a parent/staff member stops interacting with the learner.
b. Behavior that occurs at relatively high rates regardless of what is going on in the learner's immediate environment.
c. Behavior that occurs when the favorite food, toy or activity of the learner is removed.
d. Behavior that occurs when any direction is given to the learner.

2. The description, "The behavior occurs when the favorite food, toy or activity of the learner is removed" best represents which function?
a. Automatic reinforcement.
b. Demand.
c. Tangible.
d. Attention.
3. The graph below best represents which function?
[Graph: percentage of intervals across sessions 1–25, with data paths for the attention, alone, play, demand, and tangible conditions]
a. Attention.
b. Demand.
c. Tangible.
d. Automatic reinforcement.

4. A tangible function can be best represented by which of the following clinical scenarios?
a. Sarah asked whether she could take the car out to see a friend. When her mother said no, Sarah started to cry and slammed her door. Her father decided to give her his car instead. A few days later Sarah asked her parents for extra money for clothes shopping. When they said "no" Sarah started to cry.
b. Molly is a new kindergarten student. During playtime she walked into the hallway. The teacher played with her before bringing her back. A few minutes later Molly ran out of the classroom again. The teacher now reports that Molly is running out of the classroom every day.
c. Liz's husband is always pointing out how often she twirls her hair around her finger. Sometimes she doesn't even realize she's doing it. This happens at work or home, and doesn't matter who is around.
d. Jackson was so excited to go to the playground with his father. On the way there, the father decided to stop at the grocery store. Jackson wasn't happy.
When they entered, Jackson became increasingly upset. His father immediately grabbed him and ran out of the store. His father has observed that screaming occurs more often when Jackson is doing something he does not like.

5.
The following graph is best associated with which description?
[Graph: percentage of intervals across sessions 1–25, with data paths for the play, tangible, demand, attention, and alone conditions]
a. The behavior occurs when a direction is given to the learner.
b. The behavior occurs whenever a parent/staff member stops interacting with the learner.
c. The behavior occurs when the favorite food, toy or activity of the learner is removed.
d. The behavior occurs at relatively high rates regardless of what is going on in the learner's immediate environment.
References

Albright, L., Reeve, K. F., Reeve, S. A., & Kisamore, A. N. (2016). Teaching statistical variability with stimulus equivalence-based instruction. Journal of Applied Behavior Analysis, 49, 1–12. doi:10.1002/jaba.249.

Beavers, G. A., Iwata, B. A., & Lerman, D. C. (2013). Thirty years of research on the functional analysis of problem behavior. Journal of Applied Behavior Analysis, 46, 1–21. doi:10.1002/jaba.30.

Behavior Analyst Certification Board. (2012). Behavior Analyst Certification Board fourth edition task list. Retrieved from http://www.bacb.com/Downloadfiles/TaskList/BACB_Fourth_Edition_Task_List.pdf.

Chok, J. T., Shlesinger, A., Studer, L., & Bird, F. L. (2012). Description of a practitioner training program on functional analysis and treatment development. Behavior Analysis in Practice, 5, 25–36.

Durand, V. M., & Crimmins, D. B. (1988). Identifying the variables maintaining self-injurious behavior. Journal of Autism and Developmental Disorders, 18, 99–117. doi:10.1007/BF02211821.

Fields, L., & Reeve, K. F. (2001). A methodological integration of generalized equivalence classes, natural categories, and cross-modal perception. The Psychological Record, 51, 67–88.

Fields, L., Travis, R., Roy, D., Yadlovker, E., De Aguiar, L., & Sturmey, P. (2009). Equivalence class formation: A method for teaching statistical interactions. Journal of Applied Behavior Analysis, 42, 575–593. doi:10.1901/jaba.2009.42-575.

Fields, L., & Verhave, T. (1987). The structure of equivalence classes. Journal of the Experimental Analysis of Behavior, 48, 317–332.

Fienup, D. M., Covey, D. P., & Critchfield, T. S. (2010). Teaching brain–behavior relations economically with stimulus equivalence technology. Journal of Applied Behavior Analysis, 43, 19–33. doi:10.1901/jaba.2010.43-19.

Fienup, D. M., & Critchfield, T. S. (2010). Efficiently establishing concepts of inferential statistics and hypothesis decision making through contextually controlled equivalence classes. Journal of Applied Behavior Analysis, 43, 437–462. doi:10.1901/jaba.2010.43-437.

Fienup, D. M., & Critchfield, T. S. (2011). Transportability of equivalence-based programmed instruction: Efficacy and efficiency in a college classroom. Journal of Applied Behavior Analysis, 44, 435–450. doi:10.1901/jaba.2011.44-435.

Hagopian, L. P., Fisher, W. W., Thompson, R. H., & Owen-DeSchryver, J. (1997). Toward the development of structured criteria for interpretation of functional analysis data. Journal of Applied Behavior Analysis, 30, 313–326. doi:10.1901/jaba.1997.30-313.

Iwata, B. A., DeLeon, I. G., & Roscoe, E. M. (2013). Reliability and validity of the Functional Analysis Screening Tool. Journal of Applied Behavior Analysis, 46, 271–284. doi:10.1002/jaba.31.

Lambert, J. M., Bloom, S. E., Kunnavatana, S. S., Collins, S. D., & Clay, C. J. (2013). Training residential staff to conduct trial-based functional analyses. Journal of Applied Behavior Analysis, 46, 296–300. doi:10.1002/jaba.17.

Lovett, S., Rehfeldt, R. A., Garcia, Y., & Dunning, J. (2011). Comparison of a stimulus equivalence protocol and traditional lecture for teaching single-subject designs. Journal of Applied Behavior Analysis, 44, 819–833. doi:10.1901/jaba.2011.44-819.

Moore, J. W., Edwards, R. P., Sterling-Turner, H. E., Riley, J., DuBard, M., & McGeorge, A. (2002). Teacher acquisition of functional analysis methodology. Journal of Applied Behavior Analysis, 35, 73–77. doi:10.1901/jaba.2002.35-73.

Moore, J. W., & Fisher, W. (2007). The effects of videotape modeling on staff acquisition of functional analysis methodology. Journal of Applied Behavior Analysis, 40, 197–202. doi:10.1901/jaba.2007.24-06.

Ninness, C., Dixon, M., Barnes-Holmes, D., Rehfeldt, R. A., Rumph, R., McCuller, G., et al. (2009). Constructing and deriving reciprocal trigonometric relations: A functional analytic approach. Journal of Applied Behavior Analysis, 42, 191–208. doi:10.1901/jaba.2009.42-191.

Ninness, C., Rumph, R., McCuller, G., Harrison, C., Ford, A. M., & Ninness, S. K. (2005). A functional analytic approach to computer-interactive mathematics. Journal of Applied Behavior Analysis, 38, 1–22. doi:10.1901/jaba.2005.2-04.

Sidman, M., & Tailby, W. (1982). Conditional discrimination vs. matching to sample: An expansion of the testing paradigm. Journal of the Experimental Analysis of Behavior, 37, 5–22.

Walker, B. D., & Rehfeldt, R. A. (2012). An evaluation of the stimulus equivalence paradigm to teach single-subject design to distance education students via Blackboard. Journal of Applied Behavior Analysis, 45, 329–344. doi:10.1901/jaba.2012.45-329.

Walker, B. D., Rehfeldt, R. A., & Ninness, C. (2010). Using the stimulus equivalence paradigm to teach course material in an undergraduate rehabilitation course. Journal of Applied Behavior Analysis, 43, 615–633. doi:10.1901/jaba.2010.43-615.

Wallace, M. D., Doney, J. K., Mintz-Resudek, C. M., & Tarbox, R. F. (2004). Training educators to implement functional analyses. Journal of Applied Behavior Analysis, 37, 89–92. doi:10.1901/jaba.2004.37-89.

Ward-Horner, J., & Sturmey, P. (2012). Component analysis of behavior skills training in a functional analysis. Behavioral Interventions, 27, 75–92. doi:10.1002/bin.1339.