J Behav Educ (2008) 17:187–199 DOI 10.1007/s10864-007-9056-8 ORIGINAL PAPER
Increasing Active Student Responding in a University Applied Behavior Analysis Course: The Effect of Daily Assessment and Response Cards on End of Week Quiz Scores

Paul R. Malanga · William J. Sweeney
Published online: 30 October 2007
© Springer Science+Business Media, LLC 2007
Abstract The study compared the effects of daily assessment and response cards on average weekly quiz scores in an introduction to applied behavior analysis course. An alternating treatments design (Kazdin 1982; Cooper et al. 2007) was used to analyze the effects of response cards and daily assessment on average weekly quiz scores. Differential treatment effects were found between the daily assessment and response card conditions. When compared to baseline, students consistently earned higher scores on end of week quizzes in the daily assessment condition. Response cards produced mixed results. More substantial effects were revealed when analyzing individual student performance. In some cases, twice as many students earned 90% or better when either response cards or daily assessment was used compared to baseline. We discuss the implications of these results for other content areas and student demographics.

Keywords Behavior analysis · Response cards · Higher education · Active student responding
Teachers use numerous methods to encourage active student responding (ASR) during instruction (Carnine et al. 1988; Heward 1994, 1996; Maheady et al. 1991). Applying ASR methods is important because students learn more when they actively participate in class (Huby 2001; Pratton and Hales 1986). Ideally, methods for increasing active

P. R. Malanga
Arlington Developmental Center, 11293 Memphis-Arlington, P.O. Box 586, Arlington, TN 38002-0586, USA
e-mail: [email protected]; [email protected]

W. J. Sweeney
University of South Dakota, Vermillion, USA
student responding should meet the following criteria: (a) be relatively low in cost (i.e., in both dollars and in terms of teacher time), (b) be easy to implement, (c) be enjoyable for both teachers and students, (d) be adaptable to a variety of content areas, and (e) produce better learning outcomes than those that they replace (Narayan et al. 1990). Guided notes, choral responding, and response cards are three commonly used strategies that meet the above criteria. The database supporting the efficacy of these strategies for increasing student achievement is extensive (Armendariz and Umbreit 1999; Cavanaugh et al. 1996; Courson 1989; Drevno et al. 1994; Gardner 1989; Gardner et al. 1994; Heward et al. 1989; Heward and Cavanaugh 2001; Kreiner 1997; Sweeney et al. 2005). Consistently, higher achievement scores are reported when teacher-directed instructional strategies incorporate high levels of active student responding compared with a more traditional question and answer format (Miller et al. 1995; Sindelar et al. 1986; Van Houten et al. 1974; Van Houten and Thompson 1976). Narayan et al. (1990) used an ABAB reversal design to evaluate the effectiveness of response cards at improving the number of active response opportunities and daily quiz scores of fourth-grade students during whole group instruction in social studies. Traditional hand-raising and passive response modes were compared to the use of write-on response cards (i.e., white laminated boards written on with dry erase markers). During the hand-raising condition, only the students who raised their hands and were called on to answer the teacher's question were actively involved in a given instructional response. The remainder of the class was only passively involved in classroom instruction. During the response card condition, all students in the fourth-grade class were actively involved, writing one- to two-word responses to the teacher's instructional questions.
Therefore, the response card procedure provided many more opportunities for the students to actively participate in their instruction. Results showed that students substantially improved their performance in both the rate of active student responding and daily quiz scores under the response card condition when compared to the hand-raising instructional condition. Gardner et al. (1994) replicated the Narayan et al. (1990) study by implementing a response card procedure in a fifth-grade science classroom in an inner-city elementary school. This study again compared write-on response cards with a hand-raising approach in a traditional question-answer science review session. Again, the results showed higher daily quiz scores under the response card condition compared with the hand-raising condition. Although much of the research on response cards focuses on elementary students and classrooms and evaluates rates of active response or participation in conjunction with academic achievement (e.g., quiz scores), several studies investigated the effects of response cards with secondary students (Cavanaugh et al. 1996), students in a highly diverse bilingual classroom (Armendariz and Umbreit 1999), and students identified as exhibiting severe disruptive behaviors in the classroom (Lambert 2001; Lambert et al. 2006). Cavanaugh et al. (1996) used an alternating treatments design to compare the effects of write-on response cards with passive responding during a lesson closure activity with ninth-grade students in a regular education earth science course. During the passive condition, the teacher progressively disclosed key terms and read the definitions or descriptions of key terms or processes. The response card
condition required students to fill in blank spaces in the definitions of each progressively disclosed term on their response cards. Results indicated that the response card condition was more effective at improving the students' performance on next-day and delayed weekly tests compared with scores in the passive review condition. The authors assert that the response card condition provided important feedback to the teacher related to students' relative mastery of the concepts covered, and that the students preferred the active response mode to other, more passive modes of instruction and content review. Further, these conclusions were consistent with previous recommendations of authorities on the advantages and implementation procedures for using response cards in the classroom (Heward 1994). While these studies examined the effects of active student responding in both elementary (Gardner et al. 1994; Pratton and Hales 1986) and secondary settings (Cavanaugh et al. 1996), few studies have examined the effects of active student response strategies in post-secondary settings. Fassinger (2000) and Huby (2001) describe variables such as classroom culture, including student/teacher traits, that may foster student participation (Fassinger 2000), or procedures that could increase the level of participation (Huby 2001). The Fassinger (2000) and Huby (2001) papers did not investigate and measure active student responding per se, but described characteristics such as teacher traits and general methods that could foster active student engagement. Only three studies were found via a literature search that directly and systematically investigated the effect of level of active student involvement on achievement. Kreiner (1997) investigated the effects of four video viewing conditions on the number of multiple-choice questions answered correctly by 57 college students.
Students were assigned to one of four viewing conditions: (a) no response requirement (control), (b) unstructured note taking, in which only a blank sheet of paper was provided, (c) guided notes, in which 10 open-ended questions were provided on key concepts covered in the video, and (d) interactive, requiring oral responses throughout the video presentation. Students in the guided notes and interactive conditions answered more questions correctly than students in the control and unstructured notes conditions. Kellum et al. (2001) used an alternating treatments design to investigate the effects of response cards on end of class quiz scores in an introductory exceptional children course at a community college. Review questions were presented at the conclusion of each lesson with and without response cards. During the review-question-only condition, the instructor presented the review questions and called on individual students who raised their hands. In other words, students self-selected whether they would respond to the instructional material. During the response card condition, all students, when prompted, raised their response cards displaying the answer they thought was correct. More students earned an A on the end of class quiz when response cards were used compared to the review-question-only condition. Marmolejo et al. (2004) also used an alternating treatments design to investigate the relative efficacy of response cards and standard lecture (i.e., hand-raising) on end of class quiz scores. However, questions were not asked at the end of lecture as was done in the Kellum study, but rather during the lecture. The instructor posed six questions to the class throughout either a response card or standard lecture
condition. During the response card lecture, students held up pre-printed response cards corresponding to one of two options. During the standard lecture condition, students self-selected via hand raising whether to respond to questions (e.g., who thinks this is true?). At the end of each class, an 8–10 item true/false and multiple-choice quiz was given. Results indicated higher end of class quiz scores during the response card condition. While the Kellum et al. (2001) and Marmolejo et al. (2004) studies provide much needed empirical validation of the relative effectiveness of response cards at the postsecondary level, and are the only two studies found that investigated the effect of response cards on student achievement directly, the commingling of instructional variables precludes an accurate assessment of response cards alone. The review questions and response card activities required the recall of facts or a simple discrimination; composition of answers per se was not required. Furthermore, using end of class quiz scores as the only primary dependent variable precludes an assessment of retention of course content across larger periods of time, such as the end of the instructional week. An important variable remained uncontrolled in both the Kellum et al. (2001) and Marmolejo et al. (2004) studies: level of practice. Marmolejo et al. (2004) empirically demonstrated the importance of level of practice, as the number of responses per class was counted across conditions, allowing the computation of mean responses per student. Two to three times as many responses were made during the response card condition compared with the standard lecture condition. This procedural artifact precludes determining whether lower scores in the standard lecture condition were the result of a difference in procedure or in level of responding. Similar performances on end of class quiz scores may have been obtained if the level of responding had been held constant across conditions.
Finally, the Kellum and Marmolejo studies both compared response cards with a hand-raising condition. Said another way, an ASR condition was compared with a non-ASR condition. What might have been the result of assessing the effectiveness of two variations of ASR strategies? The purpose of the present study was to compare the effects of response cards and daily assessments on end of week quiz scores in a university introductory applied behavior analysis course. The current study was designed to answer the following two questions: (a) What effect would response cards, an identification task, have on average end of week quiz scores? and (b) What effect would informal assessments requiring composition of answers have on average end of week quiz scores? The current study extends the literature base in active student responding in general, and response cards in particular, by controlling the number of active student responses made across two ASR conditions.
Method

Participants and Setting

University students enrolled in an undergraduate introductory applied behavior analysis course served as participants. The majority of students were in their
sophomore year and were not yet admitted to the teacher education program but were working toward admission. The study was conducted during the Fall 2002 and Fall 2003 semesters; the course met three times per week for 50 min, and enrollment was 24 and 27 students, respectively. All sessions were conducted in a university classroom in the school of education with approximately 40 seats.
Materials

Overhead Transparencies

Overhead transparencies were printed in landscape orientation in 18-point Helvetica font. Transparencies included questions and answers, with each question progressively disclosed and read by the professor. After the professor prompted, "cards up," and the students raised their cards, the professor progressively disclosed the answer. After a brief discussion, if necessary, the process was repeated until all questions and answers were presented.
Pre-printed Response Cards

Pre-printed response cards were made and provided to the students by the instructor. Each response card had two potential answers, one printed on either side of an 8.5 × 11 in. sheet of paper. The response cards were constructed from standard white printer paper, and answers were printed in landscape orientation in 18-point Helvetica font. The answers were grouped into categories with four questions per category. For example, categories included independent/dependent variable, functional relationship/correlation, and DRA/DRO.
Daily Assessments

Daily assessments were made by the instructor and consisted of five short answer questions requiring fact-based and application-based recall. The questions corresponded to material covered in that day's lecture. Once all students finished answering the questions, the professor reviewed each question verbally and by progressively disclosing the answers on an overhead transparency.
End of Week Quizzes

End of week quizzes worth 20 points were administered on the final (i.e., third) class period of the week. Each weekly quiz consisted of fifteen multiple-choice questions and five short answer questions. While each multiple-choice question was directly related to specific content covered in that week's lectures, the questions did not address the specific items reviewed in the daily assessment or questions used in the response
card conditions. Questions used in the daily assessment and response card conditions were adapted to produce the short answer questions on the end of week quiz. At least half of the multiple-choice questions were taken directly from the publisher's test bank, while the remainder were written by the professor to address specific content idiosyncratic to the class.
Experimental Design and Procedure

An alternating treatments design (Kazdin 1982; Cooper et al. 2007) was used to assess the effects of response cards and daily assessments on end of week quiz scores. This design, via rapidly alternating conditions, allows one to demonstrate a functional relationship between the independent and dependent variables in a short period of time compared with, for example, a reversal design. A functional relationship is evident when the data paths of the two treatment conditions separate and replication of the treatment effect for each independent variable is demonstrated, with each successive data point reproducing the level of the prior datum. All lecture content was presented via PowerPoint, and the instructor progressively disclosed review questions on an overhead projector. Response cards and daily assessments were handed out immediately after the lecture. An opportunity to ask questions was provided prior to administration of the daily assessment or the beginning of the response card activity. In the response card condition, the instructor reviewed the answer immediately following each opportunity to respond (i.e., each question) via progressive disclosure, while in the daily assessment condition all questions were reviewed only after all students completed the assessment. The ASR strategies were implemented during the first two class periods, while the third class period was reserved for the weekly quiz.
Independent Variables

Response Cards

The instructor handed out response cards at the end of each lecture during the first two class periods of the new unit. Review questions were progressively disclosed on an overhead projector, and the instructor prompted response cards with the cue, "Cards up!" A wait time was provided that corresponded to the complexity of the question. For example, the instructional antecedent "When two events systematically co-vary, a ____________ has been demonstrated" would require a shorter wait time than "You are a fifth grade teacher in an inclusive school. You have two special needs students in your class who exhibit deficits in social interaction skills. In an attempt to improve interactions with non-disabled peers, you establish a peer buddy system prior to recess and give the following directive: 'Please stay with your buddy, play with your buddy, and talk with your buddy until we are ready to leave for recess.' Social interaction skills would be considered the _______________ variable." Shorter wait times lasted 2–3 s while longer wait
times lasted 5–6 s. Two practice trials were conducted during the inaugural response card lesson prior to the presentation of the first review question to develop stimulus control of student responses.
Daily Assessments

All daily assessments were administered at the end of each lecture, immediately following a brief question and answer session. Each assessment consisted of five short answer questions corresponding to that day's lecture material. Each question was reviewed once the last student completed the daily assessment.
Dependent Variable

Average Weekly Quiz Scores

The primary dependent measure, average weekly quiz scores, was calculated by summing the scores from all weekly quizzes and dividing by the number of quizzes taken.
Percentage Measure

Percent correct earned on end of week quizzes was calculated by dividing the number of points earned by the total number of points possible. This measure provides a contextual base for interpreting raw end of week quiz scores because as little as one additional point can mean the difference between an A and a B on the quiz.
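As a minimal sketch of the two measures described above (not the authors' code; the quiz scores shown are hypothetical, and only the 20-point quiz value comes from the study):

```python
# Illustrative computation of the study's two measures for one
# hypothetical student. Only POINTS_POSSIBLE (20) is from the study.

POINTS_POSSIBLE = 20  # each end of week quiz was worth 20 points


def average_quiz_score(scores):
    """Average weekly quiz score: sum of all weekly quiz scores
    divided by the number of quizzes taken."""
    return sum(scores) / len(scores)


def percent_correct(points_earned, points_possible=POINTS_POSSIBLE):
    """Percent correct: points earned divided by points possible."""
    return 100.0 * points_earned / points_possible


quiz_scores = [14, 16, 17, 18]  # hypothetical points earned on four quizzes

print(average_quiz_score(quiz_scores))  # 16.25
print(percent_correct(17))              # 85.0
```

The sketch also illustrates the paper's observation that one point matters: 17/20 is 85% (a B in many schemes), while 18/20 is 90% (an A).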
Data Collection and Accuracy Measures

End of week quizzes were graded by the instructor and consisted of 15 multiple-choice questions with four answer options and five short answer questions. An answer key was created to standardize the grading criteria for the short answer section; a Scantron sheet was used for the multiple-choice section and graded by the University's computing services. Full credit for the short answer section was provided if the answer included key words or synonymous alternatives to the answer key, while credit for a correct answer was provided in the multiple-choice section when the bubble corresponding to the correct answer was filled in. Accuracy is defined as an agreement between an observer's measurement and some predetermined standard (Cooper et al. 2007). In the current study, that predetermined standard was the answer key for both objective and short answer
sections. The answers on the answer key for the short answer section were taken directly from lecture content. A faculty member with expertise in applied behavior analysis independently scored the short answers using the key. Inter-scorer agreement was 100% throughout the course of the study. That is, short answers were scored identically by the primary and secondary scorers, with the answer key functioning as a standardized measure for interpreting the students' answers.
Results

Figures 1 and 2 present average weekly quiz scores for Fall 2002 and Fall 2003, respectively. In Fig. 1, baseline data were variable, ranging from 12 to 16 points. Variability was reduced and average quiz scores improved with the introduction of the daily assessment and response card conditions. A moderate separation of data paths occurred, with higher quiz scores evident under the initial daily assessment condition. Scores ranged from 16 to 18 points under the daily assessment condition and 14 to 15 points under the response card condition. Figure 2 shows a clear separation of treatment effects under the response card and daily assessment conditions. Average quiz scores ranged from 14 to 17 during baseline. An increase in performance occurred with the introduction of the daily assessment condition, with improvements sustained for the remaining two data points in that condition. Average quiz scores ranged from 17 to 18 during the daily assessment condition. A slight decrease in performance was evidenced with the introduction of the response card condition; average quiz scores ranged from 14 to 15. Said another way, average quiz scores under the daily assessment condition were 10–20 percentage points higher than under the response card condition (85–90% compared with 70–75%). Results for the Fall 2002 and 2003 classes showed consistent effects. In both classes, average weekly quiz scores were higher under the daily assessment
Fig. 1 Average weekly quiz scores, Fall 2002 [figure: quiz score (0–20) plotted across weekly quizzes 1–10 for the baseline, daily assessment, and response card conditions]
Fig. 2 Average weekly quiz scores, Fall 2003 [figure: quiz score (0–20) plotted across weekly quizzes 1–10 for the baseline, daily assessment, and response card conditions]
condition compared with the response card condition. Averaging the data, however, precludes an analysis of individual performance changes across conditions. Tables 1 and 2 present summary statistics of individual student performance changes across the baseline, response card, and daily assessment conditions for both applied behavior analysis classes. Table 1 shows high and low quiz scores within and across conditions for the Fall 2002 and Fall 2003 classes. The lowest quiz scores improved under the response card and daily assessment conditions compared to baseline for both classes. The greatest improvement in low quiz scores compared with baseline occurred under the daily assessment condition for both classes. Table 2 shows the percentage of students scoring 90% or above on quizzes across conditions for the Fall 2002 and Fall 2003 classes. The greatest percentage of students earning 90% or better on weekly quizzes occurred during daily assessment for both classes. More students earned at least 90% during baseline than during response cards in both classes. Approximately three times as many students earned 90% or better in the daily assessment condition compared with the response card condition in the 2002 class, while nearly twice as many did so in the 2003 class.
Table 1 High and low quiz scores within and across conditions (high scores in parentheses)

Class         Baseline    Response cards    Daily assessment
Fall, 2002    8 (20)      9 (20)            11 (20)
Fall, 2003    6 (20)      11 (20)           13 (20)

Table 2 Percentage of students scoring at least 90% on end of week quizzes

Class         Baseline    Response cards    Daily assessment
Fall, 2002    23          15                43
Fall, 2003    35.5        30                52
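As an illustrative sketch of how a Table 2 entry is derived (not the authors' code; the class scores below are hypothetical, and only the 90% criterion comes from the study):

```python
# Illustrative computation of a Table 2-style cell: the percentage of
# students in a class whose quiz percent-correct met the 90% criterion.
# The score list is hypothetical.

def pct_at_or_above(scores, criterion=90.0):
    """Percentage of students whose percent-correct score meets criterion."""
    meeting = sum(1 for s in scores if s >= criterion)
    return 100.0 * meeting / len(scores)


class_scores = [95.0, 85.0, 90.0, 70.0]  # hypothetical percent-correct scores

print(pct_at_or_above(class_scores))  # 50.0 (2 of 4 students at or above 90%)
```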
Discussion

The results of this study extend the empirical database supporting the relative efficacy of active student response strategies for improving student achievement in three ways. First, the current study is the first to directly compare response cards with a functionally equivalent alternative strategy at the collegiate level with the number of responses controlled. The daily assessment and response card procedures are directly analogous in that both required recall, and the number of responses to the instructional antecedent differed by only one response (five for daily assessment and four for response cards). While the Kellum et al. (2001) study also provided an instructional alternative to response cards, participation rates among students were self-selected. Second, the current study represents the only postsecondary investigation reporting the relative efficacy of two ASR strategies presented independently and not as a treatment package (i.e., review questions and response cards). All students in the current study were required to respond in both conditions, while in the Kellum et al. study the number of responses was not controlled across conditions. Third, the current investigation used end of week quiz scores as the primary dependent measure, while the Kellum study used end of class quiz scores. End of week quizzes require a maintenance component that is not present in the Kellum et al. study. A clear differentiation of treatment effects was observed between the daily assessment and response card conditions, with daily assessment consistently producing higher average end of week quiz scores. A number of circumstances may have contributed to the differential performance levels. Response requirement differences may be one.
While questions under both experimental conditions were instructionally aligned with end of week quiz content, the daily assessment and end of week quizzes both required composition of answers (i.e., read question/write answer), while the response card condition consisted of an identification task (i.e., see question/raise card). The difference in input/output modes between response cards and end of week quizzes, and their degree of correspondence with response requirements on end of week quizzes (i.e., composition of answers for questions reviewed in class), may account for some of the difference in performance. This would make sense, since composition responses are more sensitive to shaping contingencies (Skinner 1958). Furthermore, some students may have remediated their responses. This would constitute a form of immediate error correction, which has been shown to facilitate the acquisition of new skills via differential reinforcement of successive approximations (Goddard and Heron 1998; Skinner 1958). Additionally, some daily assessment activities were application based rather than straight fact-based recall. In other words, rather than recalling and composing answers, students may have been required to chart data, a qualitatively different response. This difference in skill class may account for some of the differences in scores. Quiz scores may have also been influenced by the fact that the "difficulty level" of quizzes under each condition was not equated on an item-by-item basis. One partial control for this variable was that at least half of the multiple-choice questions were taken from the publisher's test bank. Given the dearth of ASR research at the post-secondary level, one cannot say, for example, that daily assessments in short answer form would result in higher end of
week quiz scores with students with exceptional needs (e.g., learning disabled) or for students in different majors. The only way to establish the generality of the current results is to directly replicate the current study with the same population (i.e., freshman/sophomores) and with students from a different demographic such as special needs students, upperclassmen, or different majors. Future research might also focus on extending the methodology of the current study to different course formats such as seminar-based courses or honors classes. A second beneficial extension of the database would be to work with the university’s disabilities services office and assess the relative effectiveness of response cards and daily assessment with those students at risk for failure. One question that cannot be answered from the current research is how often an ASR strategy must be used to produce increased quiz scores. For example, would 1 day of response cards result in the same increase in end of week quiz scores as two or three? Is daily assessment as described in the current study just as, more, or less effective than response cards when each is applied for only one class, two classes, or four classes? These are areas of future research that would extend the research base in meaningful ways by helping identify the effects of instructional modifications on student learning. If 2 days of daily assessment are just as effective as four, the potential for covering more content in the same amount of time is validated. Said another way, if fewer assessment days were equally effective, more time would be available for either additional instruction or discussion of the content. Extending the application of these instructional methods to university courses outside of special education would also provide useful information with respect to the external validity of these procedures. Currently, only this and the Kellum et al. 
(2001) study have formally investigated response cards at the post-secondary level using single-subject research methodology, and both were conducted with special education classes. A parametric investigation of daily assessment with and without the feedback component would help delineate the relative contribution of the feedback mechanism to the shaping of composition responses in a postsecondary setting. Finally, equating assessment difficulty across conditions may provide a more sensitive measure of differential effectiveness across treatments. The above methodological artifacts notwithstanding, the current study resulted in modest but important improvements in achievement for students in both classes under the daily assessment condition. The most notable finding is the number of students earning 90% or better on weekly quizzes. The daily assessment condition was most effective at increasing overall student scores.

References

Armendariz, F., & Umbreit, J. (1999). Using active responding to reduce disruptive behavior in a general education classroom. Journal of Positive Behavior Interventions, 1, 152–158.

Carnine, D., Granzin, A., & Becker, W. (1988). Direct instruction. In J. L. Graden, J. E. Zins, & M. J. Curtis (Eds.), Alternative educational delivery systems: Enhancing instruction options for all students (pp. 327–349). Washington, DC: National Association of School Psychologists.

Cavanaugh, R. A., Heward, W. L., & Donelson, F. (1996). Effects of response cards during lesson closure on the academic performance of secondary students in an earth science course. Journal of Applied Behavior Analysis, 29, 403–406.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis (2nd ed.). Upper Saddle River, NJ: Merrill/Prentice Hall.

Courson, F. H. (1989). Comparative effects of long- and short-form guided notes on learning disabled and at-risk seventh graders' academic achievement in social studies. Unpublished doctoral dissertation, The Ohio State University.

Drevno, G. E., Kimball, J. W., Possi, M. K., Heward, W. L., Gardner, R., III, & Barbetta, P. M. (1994). Effects of active student response during error correction on the acquisition, maintenance, and generalization of science vocabulary by elementary students: A systematic replication. Journal of Applied Behavior Analysis, 27, 179–180.

Fassinger, P. (2000). How classes influence students' participation in college classrooms. Journal of Classroom Interaction, 35(2), 38–47.

Gardner, R. (1989). Differential effects of hand raising and response cards on rate and accuracy of active student response and academic achievement by at-risk students during large group fifth grade science instruction. Unpublished doctoral dissertation, The Ohio State University.

Gardner, R., III, Heward, W. L., & Grossi, T. A. (1994). Effects of response cards on student participation and academic achievement: A systematic replication with inner-city students during whole class science instruction. Journal of Applied Behavior Analysis, 27, 63–71.

Goddard, Y. I., & Heron, T. E. (1998). Please teacher: Help me learn to spell better, teach me self-correction. Teaching Exceptional Children, 30(6), 38–43.

Heward, W. L., Courson, F. H., & Narayan, J. S. (1989). Using choral responding to increase active student response during group instruction. Teaching Exceptional Children, 21(3), 72–75.

Heward, W. L. (1994). Three "low-tech" strategies for increasing the frequency of active student responding during group instruction. In R. Gardner, D. Sainato, J. Cooper, T. Heron, W. Heward, J. Eshleman, & T.
Grossi (Eds.), Behavior analysis in education: Focus on measurably superior instruction (pp. 173–197). Belmont: Brooks-Cole. Heward, W. L. (1996). Everyone participates in this class: Using response cards to increase active student response. Teaching Exceptional Children, 28(2), 4–10. Heward, W. L., & Cavanaugh, R. A. (2001). Educational quality for students with disabilities. In J. A. Banks & C. A. M. Banks (Eds.), Multicultural education: Issues and perspectives (4th ed., pp. 295– 326). Needham Heights, MA: Allyn & Bacon. Huby, H. S. (2001). Encouraging active student participation. College Teaching, 49(4), 141. Kazdin, A. (1982). Single-case research designs. New York: Oxford University Press. Kellum, K. K., Carr, J. E., Dozier, C. L. (2001). Response-card instruction and student learning in a college classroom. Teaching of Psychology, 28(2), 101–104. Kreiner, D. S. (1997). Guided notes and interactive methods for teaching with videotapes. Teaching of Psychology, 24(3), 183–185. Lambert, M. C. (2001). Effects of increasing active student responding with response cards on disruptive behavior in the classroom during math instruction for urban learners. Unpublished dissertation. Columbus, OH: The Ohio State University. Lambert, M. C., Cartledge, G., Heward, W. L., & Lo, Y. (2006). Effects of response cards on disruptive behavior and academic responding during math lessons by fourth-grade urban students. The Journal of Positive Behavioral Interventions, 8(2), 88–99. Maheady L., Mallette B., Harper G. F., & Sacca, K. (1991). Heads together: A peer mediated option for improving the academic achievement of heterogeneous learning groups. Remedial and Special Education, 12(2), 25–33. Marmolejo, E. K., Wilder, D. A., & Bradley, L. (2004). A preliminary analysis of the effects of response cards on student performance and participation in an upper division university course. Journal of Applied Behavior Analysis, 37, 405–410. Miller, A.D., Hall, S.W., & Heward, W. L. (1995). 
The effects of sequential 1-minute time trials with and without inter trial feedback on general and special education students’ fluency with math facts. Journal of Behavioral Education, 5, 319–345. Narayan, J. S., Heward, W. L., Gardner, R. III, Courson, F. H., & Omness, C. (1990). Using response cards to increase student participation in an elementary classroom. Journal of Applied Behavior Analysis, 23, 483–490. Pratton, J., & Hales, L. W. (1986). The effects of active participation on student learning. Journal of Educational Research, 79, 210–215. Sindelar, P. T., Bursuck, W. D., & Halle, J. W. (1986). The effects of two variations of teacher questioning on student performance. Education and Treatment of Children, 9, 56–66.
Skinner, B. F. (1958). Teaching machines. Science, 128, 969–977.
Sweeney, W. J., Gardner, R. III, Hunicutt, K. L., & Mustaine, J. (2005). Increasing active student response through the use of write-on and pre-printed response cards with academically at-risk learners in an urban elementary third-grade social studies class. Unpublished manuscript, The University of South Dakota.
Van Houten, R., Morrison, E., Jarvis, R., & McDonald, M. (1974). The effects of explicit timing and feedback on compositional response rate in elementary school children. Journal of Applied Behavior Analysis, 7, 547–555.
Van Houten, R., & Thompson, C. (1976). The effects of explicit timing on math performance. Journal of Applied Behavior Analysis, 9, 227–230.