Asia Pacific Educ. Rev. (2015) 16:637–651 DOI 10.1007/s12564-015-9404-7
The quality of evidence in reading fluency intervention for Korean readers with reading difficulties and disabilities Yujeong Park1 • Min Kyung Kim2
Received: 6 June 2015 / Revised: 2 November 2015 / Accepted: 4 November 2015 / Published online: 11 November 2015
© Education Research Institute, Seoul National University, Seoul, Korea 2015
Abstract This study aimed to provide information about the quality of the evidence on reading fluency instruction for at-risk students and students with reading/learning disabilities as a way to evaluate whether an instructional strategy is evidence-based and has potential for classroom use. An extensive search process with inclusion and exclusion criteria yielded a total of 18 studies: 12 group design studies and six single-subject design studies. The quality indicators proposed by Gersten et al. (Except Child 71:149–164, 2005) and Horner et al. (Except Child 71:165–179, 2005) were applied to evaluate the quality of the selected fluency intervention studies. Results revealed that (a) most group design studies provided little information about the intervention and the intervention agent for the comparison group, (b) internal and social validity were not clearly stated in single-subject design studies, and (c) procedural fidelity in assessment and intervention implementation was inadequately addressed in both group design and single-subject design studies. The lack of methodological rigor, which hampers determinations of the effectiveness of fluency instruction, is discussed along with the current status of intervention studies and future directions.
✉ Min Kyung Kim
[email protected]

Yujeong Park
[email protected]

1 Department of Theory and Practice in Teacher Education, University of Tennessee, Knoxville, TN, USA
2 Department of Teaching and Learning, East Tennessee State University, Johnson City, TN, USA
Keywords Reading fluency · Evidence-based practices · Quality indicators · Reading/learning disabilities
Introduction

Most research views reading fluency as the ability to accurately read connected text at a conversational rate with appropriate prosody (Adams 1990; Hudson et al. 2005; Kim et al. 2009a; Lane et al. 2009), and fluency is a critical component of effective reading instruction (Kim 2000; National Reading Panel [NRP] 2000). Both empirical and clinical research support the assertion that reading fluency skill (e.g., phonogram naming, fluency in sight words and connected text), a measure of the combination of rate and accuracy, can predict students' overall reading abilities, including reading comprehension, in English-speaking children (Gough et al. 1996; Hudson et al. 2008; Meyer and Felton 1999; Ouellette 2006) and in children who speak Korean (Kim 2000; Kim and Park 2015). For students with reading difficulties and disabilities, reading fluency is often more challenging; reading sight words and decoding novel words is slow and arduous, making text comprehension difficult (Chard et al. 2002; Kim 2000; Kim and Park 2015). Due to the critical role of fluency as a predictor of successful reading, researchers have provided numerous suggestions for instructional practices teachers can use to promote students' fluency in classrooms (e.g., Therrien 2004), as well as specialized recommendations for teaching fluency to struggling readers and students with disabilities (e.g., Chard et al. 2002; Wanzek et al. 2010; Wexler et al. 2008). Despite mounting research demonstrating the association between fluency and other reading-related outcomes and the critical role of fluency in successful reading,
relatively little attention has been paid to teaching fluency in the classroom compared to reading skills such as vocabulary and reading comprehension (NRP 2000). This is the case both in the USA and in South Korea (Kim et al. 2009a). In studies of teaching practices (e.g., Pressley et al. 2001), many teachers showed limited use of instructional strategies in teaching reading fluency, while exemplary teachers tended to use recommended practices for reading fluency instruction. Although little evidence has emerged about teachers' reasons for limiting their instructional strategies in reading fluency (Pressley et al. 2001), it is possible to infer that teachers may have limited knowledge about how to enact reading fluency instruction effectively and how they can improve students' reading fluency abilities. Teaching reading fluency can be even more complex when effective practice requires teachers to combine content knowledge about fluency instruction with knowledge about students' specific learning needs (Brownell et al. 2009; Kim and Park 2015; Lane et al. 2009). Considering the nature of special education and of students with reading difficulties, it is clear that teachers of students with reading difficulties and/or disabilities need more specific knowledge and practical ideas based on the promising research evidence available (Benedict et al. 2014; Brownell et al. 2009). In spite of the contribution of sight-word and text-level reading to skilled reading, few teachers have demonstrated a sufficiently deep understanding of effective reading instruction for students with reading difficulties (Brownell et al. 2009; Lane et al. 2009). Therefore, much remains to be established about effective instructional practices for improving reading fluency, particularly for students who are at risk for or have reading disabilities.
The first and essential step, thus, is to determine whether instructional approaches used for fluency interventions are promising and explicit enough to be evidence-based practices for these students (Odom et al. 2004). There have been several reviews of fluency interventions for struggling readers (Chard et al. 2002; Swanson 2008; Wanzek et al. 2010; Wexler et al. 2008); among the interventions designed to facilitate reading fluency for students with reading disabilities, one strategy that has been effective in increasing the reading fluency and comprehension of students with reading difficulties is repeated reading (i.e., students read a passage multiple times in one sitting; Therrien 2004). Repeated reading appears to improve reading fluency across grade levels (Chard et al. 2009, 2002; Rasinski et al. 2006; Therrien 2004), and its effectiveness in improving reading fluency and reading comprehension increases when it is paired with corrective feedback, producing higher and longer-lasting effects (Strickland et al. 2013).
Despite the substantial empirical support accumulated over the last two decades, the repeated reading strategy is still debated for several reasons (Strickland et al. 2013). One reason is that many studies investigating the effects of repeated reading on reading fluency or comprehension for students with disabilities are based on single-subject designs, which makes generalization difficult. In addition, the implementation of repeated reading strategies varies widely across studies, making it difficult to replicate or adapt implemented strategies for efficacy purposes (Strickland et al. 2013). This issue has also been addressed in Chard et al.'s (2009) study. In a study synthesizing 24 studies on fluency interventions for elementary students with learning disabilities, the authors found repeated reading with a model to be the most effective method for building reading fluency. They also evaluated the selected studies against rigorous quality criteria to determine whether repeated reading strategies are "evidence-based" practices. As a result, however, repeated reading did not meet the criteria for being an evidence-based practice for students with or at risk for LD (Chard et al. 2009). Although repeated reading was not found to be an evidence-based practice for struggling readers in that study, the authors concluded that the value of repeated reading interventions as a means of improving reading rate, accuracy, and comprehension should be recognized, and that providing multiple opportunities to read text or engage in repeated reading practice should be clearly noted. Over the past decade, reading fluency research in general education and several special education studies have established some significant connections between instructional strategies for reading fluency and student improvement in fluency and comprehension (Kim 2000; Kim et al. 2009a).
However, despite this progress, few studies have investigated evidence-based practices for teaching reading fluency in Korean. There is, therefore, a need to continue this line of evaluation to promote the use of evidence-based practices in the field of special education, particularly for Korean-speaking students with reading difficulties. To operationalize evidence-based practices in reading fluency for students with reading/learning disabilities, researchers must first delineate the essential components of effective reading fluency instruction, in addition to considering struggling readers and students with learning disabilities and the promotion of reading fluency skills for these students.
Purposes and research questions

Despite mounting research demonstrating the role of reading fluency in increasing reading skills for students with reading/learning difficulties, few efforts to date in South Korea have examined whether instructional approaches used for teaching fluency are promising and explicit enough to be considered evidence-based practices. Although similar patterns and roles of reading fluency in enabling students to become skilled readers have been found in both English and Korean (Kim et al. 2010), there remains a need to examine what effective fluency instruction looks like in Korean. Therefore, the purposes of this study were (a) to explore the quality of intervention studies to determine evidence-based practices for reading fluency instruction, and (b) to address what we know from the research thus far and what would move us forward in improving the research on reading fluency instruction, with the ultimate goal of improving students' fluency skills.
Approaches to identifying evidence-based practices

To improve students' academic achievement, it is important to identify which instructional practices are effective and to encourage educators to use those practices in their classrooms (Council for Exceptional Children [CEC] 2010; Odom et al. 2005). Researchers invested in intervention research believe that if teachers use evidence-based practices that have been proven effective in previous studies, their instruction can be similarly effective, leading to achievement gains in reading (e.g., Chard et al. 2009; Ehri et al. 2001; Odom et al. 2005). Given that reading fluency is a central component of skilled reading, there is no doubt that the quality of reading fluency instruction can enhance the academic performance of students with reading difficulties. Nevertheless, while numerous studies have investigated the effects of interventions on the reading fluency of students with reading difficulties, little is known about which instructional practices are promising and make teachers more effective in improving the reading fluency skills of students with disabilities. Thus, it is imperative to understand what teachers can do to promote the reading fluency of students with disabilities (Brownell et al. 2009; Darling-Hammond 2000; Sanders and Rivers 1996; Taylor et al. 2002). Students with disabilities need to be provided with high-quality education, and this education should be based upon a strong foundation of high-quality research. To achieve this goal, the Division for Research (DR) of the CEC, the largest international professional organization dedicated to individuals with disabilities, formed a task force on quality indicators for research in special education (Odom et al. 2004) and published two papers on experimental and quasi-experimental group designs (Gersten et al. 2005) and single-subject research designs (Horner et al. 2005).
Gersten et al. (2005) provided guidelines for experimental and quasi-experimental group designs to guide educators and researchers in the identification and evaluation of evidence-based practices in special education. To evaluate the quality of completed experimental or quasi-experimental studies, they proposed two sets of guidelines: four essential quality indicators and eight desirable quality indicators. Gersten and colleagues suggested that, to be considered acceptable or high quality, "a research proposal or study would need to meet all but one of the essential quality indicators and demonstrate at least one of the quality indicators listed as desirable" (p. 152; refer to Tables 1 and 3 in Gersten et al. 2005 for more details). The essential quality indicators involve (a) description of participants, (b) implementation of the intervention and description of comparison conditions, (c) outcome measures, and (d) data analysis. As a critical next step, they proposed refining the quality indicators based on continuing research synthesis. Horner et al. (2005) specified quality indicators for single-subject design studies that include seven features: (a) description of participants and setting, (b) dependent variables, (c) independent variables, (d) baseline conditions, (e) experimental control/internal validity, (f) external validity, and (g) social validity. They also proposed the following additional standards for validating a practice documented through single-subject research as evidence-based (Horner et al. 2005): "(a) a minimum of five single-subject studies that meet minimally acceptable methodological criteria and document experimental control have been published in peer-reviewed journals, (b) the studies are conducted by at least three different researchers across at least three different geographical locations, and (c) the five or more studies include a total of at least 20 participants" (p. 176).
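Horner et al.'s three quantitative thresholds lend themselves to a compact check. The sketch below is ours, not code from any of the cited papers; the function and field names are illustrative:

```python
def meets_horner_criteria(studies):
    """Check Horner et al.'s (2005) thresholds for an evidence-based practice.

    Each study is a dict with keys: 'acceptable' (meets minimally acceptable
    methodological criteria and documents experimental control),
    'researcher', 'location', and 'n_participants'.
    """
    acceptable = [s for s in studies if s["acceptable"]]
    return (
        len(acceptable) >= 5  # (a) at least five acceptable studies
        and len({s["researcher"] for s in acceptable}) >= 3  # (b) >= 3 research teams...
        and len({s["location"] for s in acceptable}) >= 3    # ...across >= 3 locations
        and sum(s["n_participants"] for s in acceptable) >= 20  # (c) >= 20 participants
    )
```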
Method

Literature search procedures

Papers selected for review met the following criteria: (a) they were published in a South Korean domestic refereed journal in the year 2000 or later; (b) they included students who were low-achieving, at risk for reading/learning disabilities, and/or identified as having reading/learning disabilities; (c) they provided intervention or instruction for school-aged learners who speak Korean as their primary language in school settings; and (d) they used quantified student outcomes as the criteria for evaluating reading fluency skills. Studies that included students with intellectual, developmental, physical, and/or visual/sensory impairments as the target population for an intervention program were not included. To identify studies that met
Table 1 Characteristics of selected intervention studies (entries in chronological order)

| Study | Study design | Participants | Intervention programs | Duration | Intended outcome type |
| Hur and Jeong (2004) | Single-subject (A–B design) | E4–5; RD; N = 4 | Story retelling strategy (reading aloud, correcting errors, story mapping, and monitoring story map) | 14–20 sessions; 50 min per session; 9 weeks total | Reading comprehension; reading fluency |
| Kwon (2005) | Group comparison with non-random assignment* (1T, 1C) | E4; N = 22; T: RD (N = 11); C: RD (N = 11) | T: Life-related reading tasks/topics as pre-reading activity/strategy; C: (no information) | 10 sessions; 60–70 min per session | Reading comprehension; reading fluency |
| Kim et al. (2006) | Single-subject (multiple probe) | E4–5; RD; N = 3 | Before-during-after reading strategies (e.g., background knowledge, key words study, repeated reading, summarizing, discussion) | 3 or 4 sessions per week; 20 sessions total; 40–50 min per session | Reading comprehension; reading fluency |
| Kim and Kim (2006) | Group comparison with non-random assignment* (1T, 1C) | E2–5; N = 40; T: RD (N = 20); C: NRD (N = 20) | T: Peer tutoring (modeling, choral reading, oral reading, error correction, repeated reading); C: (no information) | 3 sessions per week; 20 min per session; 12 weeks total; 36 sessions total | Word recognition; reading fluency |
| Lee and Kim (2006) | Group comparison with non-random assignment* (1T, 1C) | E3; N = 23; T: RD (N = 12); C: RD (N = 11) | T: Repetitive rapid automatized naming training using digits, letters, objects, colors tasks | 2 sessions per week; 8 weeks total; 40 min per session; 16 sessions total | Reading fluency |
| Lee (2007) | Group comparison with non-random assignment* (2Ts, 1C) | E5; N = 33; T1: RD (N = 12); T2: RD (N = 10); C: RD (N = 11) | T1: Repeated choral reading; T2: Repeated choral reading + SQ3R (survey, question, read, recite, review) strategy; C: Regular class participation | 3 sessions per week; 12 sessions total; 40–50 min per session | Reading comprehension; reading fluency |
| Kim and Park (2008) | Group comparison with non-random assignment (no control group) | G1–3; N = 87; E1 ARR (N = 40); E2 ARR (N = 23); E3 ARR (N = 24) | Reading program developed by Kim and Park (2008) | 5 sessions per week; 45 sessions total; 40 min per session | Concept about print; word recognition; reading fluency; reading comprehension |
| Min and Lee (2008) | Group comparison with non-random assignment* (1T, 1C) | E2–3; N = 20; T: RD (N = 10); C: RD (N = 10) | T: Systematic repetitive reading program (teacher modeling, repeated reading, recording, feedback); C: Repeated reading (independently) | 21 sessions total; 5–10 per session | Reading comprehension; reading fluency |
| Jang (2009) | Group comparison with non-random assignment* (1T, 1C) | E4; N = 30; T: RD (N = 15); C: RD (N = 15) | T: Story retelling strategy (reading aloud, correcting errors, story mapping, and monitoring story map); C: Direct instruction | 10 sessions total | Reading comprehension; reading fluency |
| Kim et al. (2009) | Single-subject | E4–5; RD; N = 3 | Repetitive rapid automatized naming training using digits, letters | 1 session per week; 15 weeks total; 40 min per session | Rapid automatized naming speed; reading fluency |
| Yang and Seo (2009) | Single-subject (multiple baseline) | M1–3; ARR; N = 9 | Individualized instruction delivered by college students (repeated reading, content organization, summarizing) | 3 sessions per week; 15 sessions total; 30 min per session | Reading comprehension; reading fluency |
Table 1 continued

| Study | Study design | Participants | Intervention programs | Duration | Intended outcome type |
| Kim and Kang (2010) | Group comparison with non-random assignment* (1T, 1C) | E2–3; N = 28; T: ARR (N = 14); C: ARR (N = 14) | T: Collaborative strategic reading; C: Traditional reading instruction | 10 sessions total; 40 min per session | Reading comprehension; reading fluency |
| Jung and Ha (2011) | Group comparison with non-random assignment* (1T, 1C) | H1; N = 10; T: RD (N = 5); C: RD (N = 5) | T: Graphic organizer strategies; C: Regular class participation | 30 sessions total; 50 min per session | Reading comprehension; reading fluency |
| Choi and Kang (2012) | Group comparison with non-random assignment* (1T, 1C) | E2; N = 20; T: ARR (N = 10); C: ARR (N = 10) | T: Peer-paired reading using children's poem; C: Reading books (independently) | 3 sessions per week; 17 sessions total; 20 min per session | Reading fluency |
| Kim et al. (2012) | Single-subject (multiple baseline) | E2; ARR; N = 3 | Direct instruction (modeling, guided instruction, independent practice, feedback/correction) | 2 sessions per week; 60 min per session; 16 sessions total | Word reading; reading fluency |
| Shin and Lee (2012) | Group comparison with non-random assignment (2Ts) | E2; N = 20; T1: ARR (N = 10); T2: ARR (N = 10) | T1: Game program; T2: Direct instruction | 2 sessions per week; 15 sessions total; 80 min per session | Reading comprehension; reading fluency; reading motivation |
| Kim and Kang (2014) | Group comparison with non-random assignment* (1T, 1C) | E4; N = 20; T: ARR (N = 10); C: ARR (N = 10) | T: Meaning-centered sentence pause reading strategy; C: Reading instruction according to the teacher's guide of the Korean language | 3 sessions per week; 40 min per session; 16 sessions total | Reading comprehension; reading fluency |
| Kim and Lee (2014) | Single-subject | E3; ARR; N = 3 | Self-regulated learning consultation on reading (individual consultation) | 2 sessions per week; 10 sessions total; 40 min per session | Reading comprehension; reading fluency |

RD reading disabilities, ARR at-risk readers, NRD participants without reading disabilities/difficulties, E elementary school students, M middle school students, H high school students, min minutes; * indicates quasi-experimental designs; T treatment group (Ts indicates multiple treatment groups); C control group
these criteria, first, an electronic search of the Korea Education Research and Information Service Research Information Sharing Service (KERIS-RISS), Nurimedia DBpia, Kyobo Scholar, and National Assembly Library Digital Database was conducted by the authors using variations of the following search terms: reading fluency, reading fluency intervention, intervention for reading fluency, intervention for reading disabilities, intervention for learning disabilities, reading intervention for fluency, and so on. A further search was performed using a combination of each of these terms (e.g., reading fluency intervention for learning disabilities). Second, we conducted a manual search of domestic, Korea Citation Indexed (KCI) journals that publish studies on school-aged learners and reading/ learning disabilities with reading fluency: Asian Journal of Education, Journal of Emotional and Behavioral Disabilities, Journal of Special Education, Korean Journal of Special Education, Special Education Research, The Korea Journal of Learning Disabilities, The Journal of Special Education: Theory and Practice, and The Journal of Special Education Children. During this process, opinions, conceptual pieces, review articles, theses, and dissertations were eliminated. To establish the reliability of search and selection procedures, researchers followed the following procedures. First, both electronic and manual searches were conducted by two researchers; the second author performed the search by assessing the abstract and method section of studies published in 2000 through 2008, while the first author did the same for 2009 through 2015. Second, the first author reassessed the 2000–2008 articles selected by the second author, and the second author did the same for the articles selected by the first author. 
The agreement rate was 100 %, and a total of 18 studies, six single-subject studies and 12 group design studies, that focused on interventions for reading fluency were included in the final review.

Establishing coding criteria and training for coding procedures

Prior to the coding procedures, we searched for recent articles that proposed quality indicators or standards for evaluating intervention effectiveness and determining evidence-based practices, such as Cook et al. (2009), Horner et al. (2005), Gersten et al. (2005), Odom et al. (2005), and Jitendra et al. (2011). These studies share the following features in common: (a) quantified quality indicators and standards for evidence of effective practices; (b) features of promising intervention studies using single-subject research or group design research; and (c) exemplary studies that provide useful information on interventions for students with disabilities. We comprehensively reviewed these studies to establish the rubric and coding
procedure as part of the conceptual framework for this study. Our initial discussion concluded that Jitendra et al.'s (2011) criteria, developed from the quality indicators suggested by Gersten et al. and Horner et al., were aligned with the purposes of this study. Accordingly, the detailed rubrics in Jitendra et al. (see pp. 140–143 in Jitendra et al. 2011) were reviewed to determine whether each indicator listed in the rubrics could be used to evaluate the fluency intervention studies selected for this study. Throughout this procedure, modifications were made as follows: (a) for information on participants' disability or difficulties under the description-of-participants section, we used "culturally and linguistically diverse background" instead of English language learners due to the context of the studies (e.g., Korean language); (b) for disability criteria or status used in Jitendra et al.'s study, because "state criteria" was not applicable to the special education context in Korea, we used "legal eligibility" as a substitute; and (c) for disability criteria or specific difficulties, we used a Korean fluency measure instead of the English measures listed in Jitendra et al. The final set of coding criteria for group design studies used the 10 components used in Jitendra et al.'s (2011) study that Gersten et al. (2005) proposed, and the coding criteria for single-subject design studies used the 21 components used by Jitendra et al. that Horner et al. (2005) suggested. Jitendra et al. employed a 3-point rating scale (i.e., 3 = indicator met, 2 = indicator partially met, and 1 = indicator not met) to delineate the elements of each quality indicator.
For group design studies, the four essential quality indicators employed in coding procedures included: (a) description of participants (information on participants' disability or difficulties, group equivalence across conditions, information on intervention agents); (b) description and implementation of intervention and comparison conditions (description of intervention, description and measurement of procedural fidelity, description of instruction in comparison groups); (c) outcome measures (multiple measures, appropriateness of timing of data collection); and (d) data analysis (techniques, effect size calculation). In addition to the four essential quality indicators comprising 10 components, the eight desirable indicators (see p. 152 in Gersten et al. 2005) were used to evaluate the level of methodological rigor of each group design study: (a) attrition rates among intervention samples; (b) test–retest reliability or inter-rater reliability in data collection; (c) outcome measures beyond the immediate posttest; (d) evidence of criterion-related validity and construct validity of the measures; (e) treatment fidelity; (f) documentation of comparison conditions; (g) nature of the intervention; and (h) clarity and cohesion in the reporting of results.
For single-subject design studies, the seven quality indicators used in coding included: (a) participant and setting (participant description, participant selection, setting description); (b) dependent variable (description of dependent variable, measurement procedure, measurement validity and description, measurement frequency, measurement reliability); (c) independent variable (description of independent variable, manipulation of independent variable, fidelity of implementation); (d) baseline (measurement of dependent variable, description of baseline condition); (e) experimental control/internal validity (experimental effect, internal validity, results); (f) external validity (replication of effects); and (g) social validity (social importance of dependent variable, magnitude of change in dependent variable, practical and cost-effective implementation of independent variable, nature of implementation of independent variable).

Establishing inter-rater reliability and fidelity

Two researchers independently undertook content analysis for each selected study using the coding rubrics. In addition to the column for coding and scoring, we added a column for each study to indicate where evidence for coding/scoring could be found in the article. For example, when giving 2 out of 3 points for description and measurement of procedural fidelity, the coder provided information to support the decision and the page numbers presenting the related information in the article. In the first round of independent coding, the 18 studies were randomly assigned to the two researchers for coding/rating. After the first round, a second round of independent coding was conducted using the same procedure to rate the other set of studies. Upon completion of the second round, an initial comparison of the two coding sheets showed 92 % agreement.
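The agreement rate reported here is point-by-point percent agreement: the share of items that the two raters coded identically. A minimal illustration (the function name is ours):

```python
def percent_agreement(codes_a, codes_b):
    """Percent of items on which two raters assigned the same code/score."""
    if len(codes_a) != len(codes_b):
        raise ValueError("Both raters must code the same set of items.")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100.0 * matches / len(codes_a)

# e.g., two raters scoring four components on the 3-point scale:
# percent_agreement([3, 2, 1, 3], [3, 2, 2, 3]) -> 75.0
```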
Areas of disagreement between coders (8 %) were color-coded; the majority of disagreements concerned the social validity indicator for single-subject design studies. Both coders read the articles involving the disputed points together, along with Horner et al.'s suggestions, to discuss the coding differences and determine the final scores. Finally, the two coders reached 100 % agreement.

Criteria to determine high-quality methodologies and establish an evidence-based practice

Determining methodological rigor

To evaluate the methodological rigor of each study, we used the criteria that Jitendra et al. (2011) developed by modifying Gersten et al.'s (2005) and Horner et al.'s (2005) quality indicators. For group design studies, if a study
received a minimum score of 2 on at least nine of the 10 components of the four essential quality indicators and met at least one desirable quality indicator, we considered the study "acceptable"; if a study received a minimum score of 2 on at least nine of the 10 components and met at least four desirable quality indicators, we considered the study "high quality." For single-subject design studies, an average score of 2 with no 1-point scores across the 21 components of the seven quality indicators was set as the minimum for acceptable methodological rigor.

Determining evidence-based practices

Following the determination of methodological rigor, whether an instructional practice was "promising" or "evidence-based" was established based on the criteria suggested by Gersten et al. (2005) and Jitendra et al. (2011): (a) a practice is evidence-based if "there are at least four acceptable quality studies, or two high-quality studies that support the practice and the weighted effect size is significantly greater than zero;" (b) a practice is promising if "there are at least four acceptable quality studies, or two high-quality studies that support the practice and there is a 20 % confidence interval for the weighted effect size that is greater than zero" (Gersten et al. 2005, p. 162). Based on Cohen's d, effect sizes were calculated by dividing the mean difference in posttest (fluency outcome measure) scores between treatment and control groups by a pooled standard deviation. According to the conventional interpretation of Cohen's d (Cohen 1988), d = 0.2 is considered a small effect, 0.5 a medium effect, and 0.8 a large effect.
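The rigor thresholds described above can be restated as a short decision rule. This sketch is ours; the function names and returned labels are illustrative, not taken from Jitendra et al. or Gersten et al.:

```python
def rate_group_design_study(essential_scores, n_desirable_met):
    """Classify the methodological rigor of a group design study.

    essential_scores: the 3-point ratings (1-3) on the 10 components of the
    four essential quality indicators; n_desirable_met: how many of the
    eight desirable indicators were met.
    """
    n_met = sum(score >= 2 for score in essential_scores)  # components scored 2 or 3
    if n_met >= 9 and n_desirable_met >= 4:
        return "high quality"
    if n_met >= 9 and n_desirable_met >= 1:
        return "acceptable"
    return "not acceptable"

def single_subject_acceptable(component_scores):
    """Minimum rigor for single-subject studies: no 1-point scores across the
    21 components (on a 1-3 scale this also guarantees an average of at least 2)."""
    return min(component_scores) >= 2
```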
A mean weighted effect size was calculated when a specific intervention was implemented in multiple studies. For single-subject design studies, whether fluency instruction was evidence-based was determined based on the criteria suggested by Horner et al. (2005): "(a) a minimum of five single-subject studies that meet minimally acceptable methodological criteria and document experimental control have been published in peer-reviewed journals, (b) the studies are conducted by at least three different researchers across at least three different geographical locations, and (c) the five or more studies include a total of at least 20 participants" (Horner et al. 2005, p. 176). Percentage of nonoverlapping data (PND), which is the proportion of intervention data points that exceed the highest baseline point, was used to estimate the effect size of single-subject design studies. The calculated PND was interpreted based on the
conventions suggested by Mastropieri and Scruggs (1986) as follows: greater than 90 (PND > 90) is highly effective, 90–70 (90 ≥ PND > 70) is fairly effective, 70–50 (70 ≥ PND > 50) is questionable, and 50 or less is unreliable.
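The PND metric and the Mastropieri and Scruggs (1986) interpretation bands can be illustrated with a minimal sketch; the baseline and intervention data below are hypothetical, not drawn from the reviewed studies.

```python
def pnd(baseline, intervention):
    """Percentage of nonoverlapping data: the proportion of intervention
    data points that exceed the highest baseline point."""
    highest_baseline = max(baseline)
    nonoverlapping = sum(1 for x in intervention if x > highest_baseline)
    return 100.0 * nonoverlapping / len(intervention)

def interpret_pnd(p):
    """Mastropieri and Scruggs (1986): >90 highly effective, 70-90 fairly
    effective, 50-70 questionable, 50 or below unreliable."""
    if p > 90:
        return "highly effective"
    if p > 70:
        return "fairly effective"
    if p > 50:
        return "questionable"
    return "unreliable"

# Hypothetical oral reading fluency data (words read correctly per minute)
baseline = [28, 31, 30]
intervention = [30, 34, 37, 40, 42]
p = pnd(baseline, intervention)  # 4 of 5 points exceed 31 -> 80.0
print(p, interpret_pnd(p))
```

Note that PND depends only on the single highest baseline point, which is why it could not be computed for a study that did not report graphed or numeric baseline data.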
Results

General characteristics of selected studies

Table 1 summarizes the general characteristics of the reading fluency intervention studies selected for this study. The number of participants in the group design studies ranged from 10 to 80, with an average of 33.3. In the single-subject design studies, the number of participants was three or four, except for one study (Yang and Seo 2009). Most group design studies employed group comparisons with non-random assignment; two studies (Kim and Park 2008; Shin and Lee 2012) did not set up control conditions for comparison. The total number of sessions provided ranged from 10 to 45, with an average of 18.6. Except for two studies using a story retelling strategy (Hur and Jeong 2004; Jang 2009), the studies used different programs aiming to increase reading fluency. In the following sections, the rating results on the essential quality indicators for the 12 group design studies and six single-subject design studies are presented. For studies reported as meeting the criteria for "high quality" or "acceptable," we address whether the reading fluency intervention meets the criteria for "evidence-based" or "promising."

Evidence quality of the group design studies

Description of participants

For this indicator, three of the 12 studies (25 %) met or exceeded the minimum criterion (i.e., an average of 2 points across all three components with no 1-point scores). While most studies (N = 9; 75 %) were evaluated as providing sufficient information on participants' reading/learning disabilities and reading difficulties, only three studies (25 %) provided sufficient information about the equivalence of groups across conditions (e.g., participants are comparable across intervention conditions on relevant characteristics and disabilities; Choi and Kang 2012; Jung and Ha 2011; Shin and Lee 2012).
Two of these studies (Choi and Kang 2012; Jung and Ha 2011) used nonrandom assignment for group comparison (treatment group vs. control group); however, little information was provided regarding control group assignment. No study
provided documentation about the agent delivering the intervention or the equivalence of intervention agents across conditions. In all 12 studies, the researcher(s) or graduate students implemented the fluency intervention; no study involved general education or special education teachers in delivering the intervention to the treatment group. Two studies (Jung and Ha 2011; Lee 2007) stated that students in the control group participated in regular reading class; however, no specific information was provided or documented about the agents for the control group.

Implementation of intervention and comparison conditions

Only one study met or exceeded the minimum criteria across all three components for this indicator (Choi and Kang 2012). With the exception of that study, which received a rating of 3, the remaining studies (92 %) did not specify information about procedural fidelity, resulting in 1-point scores on the procedural fidelity component. Interventions included the following strategies for reading fluency: (a) life-related reading tasks/topics as a pre-reading activity/strategy (Kwon 2005); (b) repetitive rapid automatized naming training (Kim and Kim 2006); (c) peer tutoring (Lee and Kim 2006) or peer-paired reading using a children's poem (Choi and Kang 2012); (d) repeated choral reading (Lee 2007); (e) project-based learning (anchored instruction) (Heo 2008); (f) a reading program developed by Kim and Park (2008); (g) a systematic repetitive reading program (Min and Lee 2008); (h) a story retelling strategy (Jang 2009); (i) collaborative strategic reading (Kim and Kang 2010); (j) direct instruction (Shin and Lee 2012); and (k) a meaning-centered sentence pause reading strategy (Kim and Kang 2014).
Five studies (42 %) described little or limited information about the instruction/strategies provided in the comparison conditions. Only one study (Choi and Kang 2012) fully provided information about instructional strategies or students' actions in the comparison condition (students reading books independently); Heo (2008) also described the non-anchored instructional strategies used for reading in the comparison condition.

Outcome measures and data analysis

Overall, no studies met or exceeded the minimum criteria for the outcome measures indicator. This is partially because no studies provided multiple measures for fluency outcomes, resulting in 1-point scores for this component (i.e., multiple measures used). With the exception of one study (Jung and Ha 2011), the Basic Academic Skills Assessment (BASA; Kim 2000), a standardized Korean oral reading fluency assessment commonly used for the
Table 2 Average ratings for group experimental and quasi-experimental research
(columns: description of participants; intervention/comparison conditions; outcome measures; data analysis; effect size)

Kwon (2005): 2.33; 2; 2; 2; 0.52 (M)
Kim and Kim (2006): 2; 1.67; 2; 2; 1.92 (L)
Lee and Kim (2006): 2.33; 1.67; 2; 2; 1.03 (L)
Lee (2007): 1.67; 2; 1; 2; 1.38 (L), 2.29 (L)
Kim and Park (2008): 2; 1.67; 2; 3; NA
Min and Lee (2008): 2; 1.67; 2; 2; 0.47 (S)
Jang (2009): 1.33; 1.67; 1; 2.5; 0.20 (S)
Kim and Kang (2010): 2; 1.67; 2; 2; 1.24 (L)
Jung and Ha (2011): 1.67; 2; 2; 2; 1.29 (L)
Choi and Kang (2012): 2.67; 3; 2; 2; 2.85 (L)
Shin and Lee (2012): 2.33; 1.67; 1.5; 2; NA
Kim and Kang (2014): 2; 2; 2; 2; 0.49 (S)

NA = information needed to calculate an effect size was not available. Cohen's d was interpreted as follows: small (S) ≥ 0.20, medium (M) ≥ 0.50, large (L) ≥ 0.80 (Cohen 1988). For Lee (2007), two effect sizes were calculated (T1 vs. C and T2 vs. C).
purpose of curriculum-based measurement in South Korea, was used to measure oral reading fluency skills. In most studies (N = 11; 92 %), the researcher(s) selected BASA reading passages based on instructional levels, and two passages were tested (e.g., Passage A → Passage B → Passage A) to obtain reliable mean or median scores for words read correctly per minute. Regarding measuring and collecting data at appropriate times, most studies (N = 9; 75 %) met this indicator, with fluency measures administered within 2 weeks of the intervention. Only two studies (17 %) met or exceeded the minimum criteria for the data analysis indicator (Kim and Park 2008; Jang 2009). This is partially because 10 studies (83 %) did not report effect sizes, although most studies (N = 11; 92 %) fully met the components concerning techniques linked to the research question(s) and statistical analyses appropriate to the unit of analysis. Table 2 summarizes the average ratings on the four quality indicators for the 12 group design studies.

Overall methodological rigor and evidence-based practices

Based on the criteria suggested by Gersten et al. (2005), only one study (Choi and Kang 2012) met the minimum criteria for the essential quality indicators (i.e., a minimum score of 2 on at least nine of the 10 components of the four essential quality indicators); however, this study did not meet any of the desirable quality indicators. Accordingly, neither promising nor high-quality group design studies were found. Although six of the 12 studies (50 %) showed a
large effect size (Cohen 1988), none of the group design studies was evaluated as providing sufficient evidence for evidence-based practices in reading fluency.

Evidence quality of the single-subject design studies

The average ratings on the seven quality indicators are presented in Table 3; for each indicator, the mean score of each study is reported. PND was calculated for each study; however, because Yang and Seo's (2009) study did not provide sufficient numeric data, PND is not reported for that study.

Participants and setting

Five of the six studies (83 %) met or exceeded the minimum criterion (i.e., an average score of 2 with no 1-point scores). The one study (17 %) that failed to meet this indicator did not precisely describe critical features of the setting, such as the type of classroom and room arrangement, that would allow for replication. All studies, however, fully met the criteria for participants, providing sufficient information about participant description and selection. Two studies (33 %; Kim et al. 2006, 2012) fully met the criteria on all three components of this indicator.

Dependent variable (outcome measures)

None of the studies fully met the criteria on all five components of the dependent variable indicator, and only one study (Kim et al. 2009a, b) met the minimum criterion of
Table 3 Average ratings for single-subject research
(columns: participants and setting; dependent variable (DV); independent variable (IV); baseline; experimental control/internal validity; external validity; social validity; PND)

Hur and Jeong (2004): 2.67; 1.8; 2; 2.5; 2.67; 3; 2.4; 81.5
Kim et al. (2006): 3; 1.8; 2.33; 2; 2.67; 3; 2.8; 53.33
Kim et al. (2009a, b): 2.67; 2.6; 3; 2; 2.33; 3; 2.6; 62
Yang and Seo (2009): 2.67; 1.6; 1.67; 1; 1.33; 3; 2.6; NA
Kim et al. (2012): 3; 2.6; 2.67; 3; 2.33; 3; 2.6; 100
Kim and Lee (2014): 2.33; 1.8; 2; 1; 1.33; 1; 1.6; 100

PND = percentage of nonoverlapping data; NA = not applicable due to the unavailability of graphed data.
an average score of 2 with no 1-point scores. While most studies fully or partially described the dependent variables and measurement procedures, none provided sufficient information about measurement validity. Five of the six studies (83 %) provided information about measurement frequency, but only two studies (33 %; Kim et al. 2009a, b; Kim et al. 2012) provided reliability data for each measure of the dependent variable(s).

Independent variable (intervention)

Half of the studies (50 %) met or exceeded the minimum criteria for this indicator, and one study (Kim et al. 2009a, b) fully met the criteria on all three components. Only two studies (33 %) specified information about implementation fidelity; four studies reported limited or little information about procedural fidelity.

Baseline

Four studies (67 %) met the minimum criteria for this indicator, but only two provided adequate information about baseline conditions (e.g., materials, procedures, and settings). Two studies (33 %) did not receive at least a partial rating across two components because they provided little information about baseline conditions and baseline stability.

Validity (experimental control/internal validity, external validity, social validity)

For the experimental control/internal validity indicator, four studies met or exceeded the minimum criteria.
However, none of the studies fully met the criteria on all three components of this indicator. Most studies provided three or more demonstrations of experimental effects (four studies received 3-point scores; two studies received 2-point scores). No study received a 3-point score for internal validity, since limited experimental controls were designed and employed, which is likely to threaten internal validity. For the external validity indicator, five of the six studies obtained 3-point scores, showing that treatment (intervention) effects were repeatedly (three or more times) measured across participants, behaviors, and/or materials. The mean rating scores for social validity ranged from 1.6 to 2.8, and only one study received less than a 2-point average. However, of the five studies (83 %) that averaged 2 points or more, only one (Kim et al. 2006) met or exceeded the minimum criterion. All studies fully met the criterion of describing the social importance of the dependent measure(s). In contrast, with the exception of one study (Kim et al. 2006), no study provided social validity data describing features of the intervention such as acceptability, feasibility, effectiveness, and continued use.

Overall methodological rigor and evidence-based practices

Based on the criteria suggested by Horner et al. (2005), no study met the minimum criteria for the essential quality indicators (i.e., an average score of 2 with no 1-point scores across the 21 components of the seven quality indicators). Accordingly, neither promising nor high-quality single-subject design studies were found in this study. Although four of the six studies (67 %) showed the fluency intervention as
highly effective or fairly effective (Mastropieri and Scruggs 1986), none of the single-subject design studies was evaluated as providing sufficient information for evidence-based practices in reading fluency.
Discussion

This study aimed to provide information about the quality of evidence on reading fluency instruction for at-risk students and students with reading/learning disabilities as a way to evaluate whether an instructional strategy is evidence-based and has potential for classroom use. The standards and quality indicators proposed by the CEC were used to determine the quality of published research, and the criteria proposed by Gersten et al. (2005) and Horner et al. (2005), as well as Jitendra et al.'s (2011) modifications, were used to rate the selected studies (12 group design studies and six single-subject design studies) and evaluate their methodological rigor and quality of evidence.

The quality of evidence in group design studies

Overall, the evaluation of the 12 group design studies indicated that no study provided sufficient information to meet the criteria for methodological rigor (i.e., acceptable or high quality in methodology); consequently, no study met the criteria for evidence-based instruction. Although one study (Choi and Kang 2012) obtained 2-point scores for nine components, it did not address any of the eight desirable indicators. One of the main reasons the group design studies did not meet the minimum criteria for methodological rigor was that they did not include procedural fidelity information about implementing assessments and interventions: with the exception of Choi and Kang (2012), all of the studies received 1-point scores on the fidelity component. In addition, most studies documented little or limited information about the instruction provided to the control group and the agents for the control group, resulting in low ratings. Furthermore, only two studies (Kim and Park 2008; Jang 2009) reported effect sizes and interpreted Cohen's d values.
Therefore, no group design study on reading fluency intervention met the standards and criteria suggested by Gersten et al. (2005) for an evidence-based practice for students with reading/learning disabilities.

The quality of evidence in single-subject design studies

Similar to the group design studies, according to the results of the evaluation using Horner et al.'s (2005) criteria to
determine the quality of evidence in single-subject design studies, the six selected studies met the criteria for neither methodological rigor nor evidence-based practices. Although Kim et al.'s (2009b) study received 2- or 3-point ratings on 20 of the 21 components of the seven quality indicators, the authors did not document whether the implementation of the independent variable(s) was practical and cost-effective (social validity). This limitation was also observed in other studies; except for Kim et al. (2006), information about fluency intervention features related to practical use and cost-effectiveness (e.g., intervention acceptability, feasibility, effectiveness, and continued use) was lacking in most single-subject design studies. In addition, a lack of quality on the criteria addressing whether measurement validity and reliability for the dependent variables were established was common across the studies. Without information about the psychometric properties of dependent and outcome measures, such as reliability and validity, it is difficult to convince others that the outcomes of a fluency intervention are reliable and promising (Gall et al. 2006).
Conclusions and implications for future research

The primary goals of this study were to (a) examine what effective fluency instruction for students with reading/learning disabilities looks like in South Korea and (b) draw practical implications from the existing literature for effectively teaching reading fluency in classrooms. Although our attempts to find evidence-based practices for reading fluency instruction in Korean did not yield the results we had hoped for, owing to the limited methodological rigor of the selected studies, this study is a meaningful step toward suggesting future directions for the development of intervention studies on teaching reading fluency in Korean. In the following section, based on the current status of research on reading fluency for students with reading difficulties as well as the findings of this study, future directions for research are discussed.

Among the 18 studies meeting the inclusion and exclusion criteria for the current study, the ratings for the quality of evidence in both group design and single-subject design studies did not fulfill the criteria for methodological rigor, with the result that no studies were supported as evidence-based practices. However, this finding should not be interpreted as a simple judgment on the effectiveness of the fluency practices used across studies. Rather, it seems fair to say that any list of methodological quality criteria can be devised in a manner such that nearly all intervention studies, particularly in educational settings, fail it; that does not mean that there is no useful
information in those studies. This caution is also addressed in Chard et al.'s (2009) study, in which the repeated reading strategy was evaluated using the same set of criteria and only a few studies met the criteria established by Gersten et al. (2005) and Horner et al. (2005). They thoughtfully pointed out that the disappointing results might be due to (a) the rigorous criteria, which reflect "how difficult it is to conduct high-quality research" (Chard et al. 2009, p. 278), and/or (b) a rating process requiring extensive knowledge of, and consensus among, reviewers. This study also suggests how researchers might provide better-quality scientific evidence in fluency intervention studies, and which instructional strategies require more demonstrations and replications before being used in classrooms and other teaching-learning settings (Mayton et al. 2010).

Although there have been a number of efforts to examine the effects of fluency instruction for students who are at risk for and/or have reading/learning disabilities in South Korea, some methodological insufficiencies were commonly observed. In particular, as revealed in the current study, fidelity of intervention and assessment implementation was inadequately addressed in both group design and single-subject design studies. Implementation fidelity is "the degree to which an intervention is delivered as intended and is critical to successful translation of evidence-based interventions into practice" (Breitenstein et al. 2010, p. 164). That is, fidelity is the basic premise for ensuring that an instructional practice and its content are consistently conveyed even with different agents (e.g., teachers) and in different settings (Glasgow et al. 2003).
Without sufficient evidence about implementation fidelity for intervention and assessment procedures, even with an observed large effect size, the replicability of a study cannot be guaranteed, which in turn results in unconvincing evidence on the effectiveness of the intervention or instructional strategy (Cook 2014; Smith et al. 2007). Reporting sufficient information about intervention and assessment procedures, including implementation fidelity, would facilitate replication of intervention studies, leading to higher-quality research and stronger evidence for effective practices.

Although the role of reading fluency in becoming a skilled reader has been established in quite a number of studies in Korean, relatively few intervention studies on reading fluency for students with reading/learning difficulties have been conducted in South Korea compared to intervention studies on word-level reading (e.g., decoding, word recognition skills) and passage comprehension for those students (Kim et al. 2009a, b). According to Kim and his colleagues' (Kim et al. 2009a) synthesis of reading interventions for Korean readers with reading/learning disabilities, approximately 85 % of
studies in peer-reviewed journals published from 1999 to 2008 were about word recognition or reading comprehension skills; only 15 % of published studies focused on reading fluency. Furthermore, most studies on reading fluency for students with reading difficulties examined developmental characteristics of elementary students' reading fluency or were correlational (e.g., reading fluency as an outcome measure predicted by word-level reading or rapid automatized naming, or reading fluency as a predictor of reading comprehension) (e.g., Kim et al. 2010), indicating that relatively little attention has been paid to effective reading fluency interventions for South Korean readers with reading/learning disabilities. This might be partially because, as in the USA, reading fluency is often not considered a major part of the subject matter in class, and the reading-related curriculum and materials used in South Korean school settings include few tasks requiring reading fluency skills. Therefore, in addition to establishing methodologically sound intervention studies, more research, in terms of both quality and quantity, examining the effectiveness of reading fluency intervention/instruction in South Korea needs to be conducted.

Fluent reading of both sight words and connected text is an essential component of skilled reading and a prerequisite for overall reading abilities, including reading comprehension, in English as well as Korean (Kazir-Cohen et al. 2006; Hudson et al. 2009; Jeong 2015). Recognizing the contribution of word recognition skills to reading fluency and reading comprehension in both English and Korean (Kim 2000; Kim et al. 2009a), accuracy and rate in reading sight words seem essential for reading success in Korean as well; yet there is limited research focusing on sight-word fluency instruction in Korean.
Basically, reading fluency comprises reading accuracy and rate at the word level (Meyer and Felton 1999). This simple understanding of reading fluency at the word level has been broadened to include reading fluency at every level of reading, such as letter naming, whole-word identification, and comprehension at the passage level (Wolf and Bowers 1999; Kame'enui et al. 2000). Hudson et al. (2009) suggested that these sub-skills of reading fluency should be considered in teaching and assessing students' reading fluency, since deficits in one or more sub-skills may result in a disfluent reader. Considering the importance of teaching sight-word fluency, future research needs to investigate the effects of fluency interventions on improving the sub-skills of reading fluency in Korean (e.g., letter-level, word-level, and passage-level fluency).

As the conclusions of this study were based on only 18 studies that met the inclusion and exclusion criteria, the findings may have limited external validity. In addition, this study did not include intervention studies that
were not published in peer-reviewed journals (e.g., theses, dissertations, technical reports); thus, the results regarding the quality of evidence in reading fluency interventions might differ if a different set of inclusion/exclusion criteria were employed. Moreover, this study included only fluency interventions for students with reading difficulties/disabilities; the findings might not generalize to studies of fluency interventions for students without disabilities and/or students with other types of disabilities (e.g., intellectual disabilities, visual/hearing impairments). Although this study made various efforts to establish procedural fidelity in reviewing and rating each selected study, different reviewers and a different training process might also produce different evaluations and interpretations.

Creating a direct linkage between teachers' actions and students' achievement is difficult, particularly for students with reading difficulties; however, increases in teachers' use of evidence-based practices can lead to more effective instruction, which in turn promotes students' reading achievement. Researchers have reached some consensus on the importance of teaching reading fluency to struggling readers; nevertheless, there is still a need to delineate what effective fluency instruction looks like, as well as to determine promising instructional practices for struggling readers and students with reading/learning disabilities. This is also the case for fluency intervention studies in South Korea. Recognizing what makes intervention research more rigorous and emphasizing such research quality can help ensure effective reading fluency instruction as well as positive student outcomes.

Acknowledgments The rating rubric and the ways of rating each indicator (i.e., indicator not met = 1, indicator partially met = 2, indicator met = 3) were initially developed by Jitendra and her colleagues (see Jitendra et al. 2011, pp.
140–143). Refer to Jitendra et al. for more details on the rating development and administration process. Adapted quality indicators and coding criteria used within the current study are available upon request. This paper includes a review and evaluation of previously published studies, and the results and discussion sections were developed based on quality indicator ratings for the selected studies. We as the authors thoughtfully decided to include only average ratings with author information. The rating results for each component under quality indicators are available upon request as appropriate.
References

References marked with an asterisk denote studies included in the review.

Adams, M. J. (1990). Beginning to read: Thinking and learning about print. Cambridge: MIT Press.
Benedict, A., Brownell, M., Park, Y., Bettini, E., & Lauterbach, A. (2014). Taking charge of your professional learning. Teaching Exceptional Children, 46(6), 147–157.
Breitenstein, S. M., Gross, D., Garvey, C., Hill, C., Fogg, L., & Resnick, B. (2010). Implementation fidelity in community-based interventions. Research in Nursing and Health, 33(2), 164–173. doi:10.1002/nur.20373.
Brownell, M. T., Dimino, J., Bishop, A. G., Haager, D., Gersten, R., Menon, S., et al. (2009). The role of domain expertise in beginning special education teacher quality. Exceptional Children, 75, 391–411.
Chard, D. J., Ketterlin-Geller, L. R., Baker, S. K., Doabler, C., & Apichatabutra, C. (2009). Repeated reading interventions for students with learning disabilities: Status of the evidence. Exceptional Children, 75, 263–281.
Chard, D. J., Vaughn, S., & Tyler, B. J. (2002). A synthesis of research on effective interventions for building reading fluency with elementary students with learning disabilities. Journal of Learning Disabilities, 35(2), 386–406.
*Choi, M. R., & Kang, O. R. (2012). The effects of peer paired reading using children's poem on oral reading fluency and attitude of low-achieving students in reading. The Journal of Special Education: Theory and Practice, 13(4), 283–312.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.
Cook, B. G. (2014). A call for examining replication and bias in special education research. Remedial and Special Education, 35, 233–246.
Cook, B. G., Tankersley, M., & Landrum, T. J. (2009). Determining evidence-based practices in special education. Exceptional Children, 75(3), 365–383.
Darling-Hammond, L. (2000). Educating teachers. San Francisco: Jossey-Bass.
Ehri, L. C., Nunes, S. R., Willows, D. A., Schuster, B. V., Yaghoub-Zadeh, Z., & Shanahan, T. (2001). Phonemic awareness instruction helps children learn to read: Evidence from the National Reading Panel's meta-analysis. Reading Research Quarterly, 36, 250–287.
Gall, M., Gall, J., & Borg, W. (2006). Educational research (8th ed.). New York: Allyn & Bacon.
Gersten, R., Fuchs, L. S., Compton, D., Coyne, M., Greenwood, C., & Innocenti, M. S. (2005). Quality indicators for group experimental and quasi-experimental research in special education. Exceptional Children, 71, 149–164.
Glasgow, R. E., Lichtenstein, E., & Marcus, A. C. (2003). Why don't we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. American Journal of Public Health, 93, 1261–1267.
Gough, P. B., Hoover, W. A., & Peterson, C. L. (1996). Some observations on a simple view of reading. In C. Cornoldi & J. Oakhill (Eds.), Reading comprehension difficulties (pp. 1–13). Mahwah, NJ: Erlbaum.
Heo, Y. (2008). The effect of project-based anchored instruction on the academic achievement and classroom activities of students with and without learning disabilities placed in inclusive middle school reading classes. Korean Journal of Special Education, 43(1), 145–165.
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165–179.
Hudson, P. B., Nguyen, T. M. H., & Hudson, S. (2008). Challenges for preservice EFL teachers entering practicum. In Proceedings 2008 Asia TEFL International Conference: Globalizing Asia: The role of ELT, Bali, Indonesia (pp. 1–12).
Hudson, R. F., Lane, H. B., & Pullen, P. C. (2005). Reading fluency assessment and instruction: What, why, and how? The Reading Teacher, 58, 702–714.
Hudson, R. F., Lane, H. B., Pullen, P. C., & Torgesen, J. K. (2009). The complex nature of reading fluency: A multidimensional view. Reading and Writing Quarterly, 25, 4–32.
*Hur, S. J., & Jeong, J. H. (2004). Effects of story retelling strategy on the reading comprehension and fluency of students with learning disabilities. The Journal of Special Education: Theory and Practice, 5(1), 369–387.
*Jang, B. C. (2009). Comparison of the effects of story retelling strategy instruction and direct instruction on reading fluency and comprehension of students with reading difficulties. Journal of Special Education for Curriculum and Instruction, 2(2), 123–142.
Jeong, J. S. (2015). An examination of changes in first graders' consonant-vowel naming, word recognition, and reading fluency. The Journal of Elementary Education, 28(1), 113–131.
Jitendra, A. K., Burgess, C., & Gajria, M. (2011). Cognitive strategy instruction for improving expository text comprehension of students with learning disabilities: The quality of evidence. Exceptional Children, 77, 135–159.
*Jung, D. Y., & Ha, C. W. (2011). The effects of reading instruction through graphic organizer strategies on reading fluency and comprehension ability of high school students with learning disabilities. The Korea Journal of Learning Disabilities, 8(1), 43–63.
Kame'enui, E. J., Simmons, D. C., Good, R. H., & Harn, B. A. (2000). The use of fluency-based measures in early identification and evaluation of intervention efficacy in schools. In M. Wolf (Ed.), Time, fluency, and dyslexia (pp. 307–333). Parkton, MD: York Press.
Kim, D. (2000). Basic Academic Skills Assessment (BASA: Reading). Seoul: Hakjisa.
*Kim, E. J., Choi, H. S., & Jang, D. J. (2006). The effects of research-based reading instruction on the reading fluency and reading comprehension of students with learning disabilities. Journal of Special Education, 13(2), 274.
*Kim, M. K., & Kang, O. R. (2010). The effect of collaborative strategic reading on reading fluency and comprehension of low-achieving students in reading. The Korea Journal of Learning Disabilities, 7(2), 97–117.
*Kim, S. J., & Kang, O. R. (2014). The effects of the meaning-centered sentence pause reading strategy on reading fluency and comprehension of children with poor reading comprehension. The Korea Journal of Learning Disabilities, 11(1), 243–259.
*Kim, N. Y., & Kim, J. K. (2006). Effects of repetitive training of the RAN task on word recognition and reading fluency for children with reading disabilities. Journal of Emotional and Behavioral Disorders, 22(4), 271–291.
Kim, D. I., Koh, E. Y., Jeong, S., Lee, Y., Lee, K. J., Park, J., & Kim, I. (2009). Learning disabilities in Korea: A synthesis of research from 1999 to 2008. Asian Journal of Education, 10(2), 283–347.
*Kim, R. K., & Lee, M. S. (2014). The effects of self-regulated learning consultation on reading ability and Korean language performance for elementary school reading underachievers. Korean Education Inquiry, 32(3), 25–47.
*Kim, S. J., Lee, D. S., & Kim, S. Y. (2012). The effects of a direct instruction Han-Geul reading program on word reading and reading fluency of the multicultural students with reading difficulties. The Journal of Special Education: Theory and Practice, 13(4), 415–445.
*Kim, M. S., & Park, C. H. (2008). The effectiveness of reading intervention on at-risk children in first through third grade. Korean Journal of Child Studies, 29(5), 301–319.
Kim, M., & Park, Y. (2015). A systematic review of high-tech assistive technologies (AT) in improving reading fluency of children with reading disabilities and difficulties. Korean Journal of Special Education, 50(1), 79–97.
Kim, A. H., Park, S. H., & Kim, J. H. (2010). Reading fluency of elementary students in Korea: Reading developmental patterns and error patterns. Korean Journal of Communication Disorders, 15, 43–55.
*Kim, K. S., Song, C. W., & Byun, C. S. (2009). Effects of the rapid automatized naming training on naming speed and reading fluency for children with learning disabilities. The Korea Journal of Learning Disabilities, 6(2), 151–171.
*Kwon, J. S. (2005). The effects on reading fluency and reading comprehension to present pre-experience of life related to reading subject for reading disabilities. Korean Journal of Special Education, 40(2), 313–331.
Lane, H. B., Hudson, R. F., Leite, W. L., Kosanovich, M. L., Strout, T. M., Fenty, N. S., & Wright, T. L. (2009). Teacher knowledge about reading fluency and indicators of students’ fluency growth in Reading First schools. Reading and Writing Quarterly, 25, 57–86.
*Lee, T. S. (2007). Effects of repeated choral reading (RCR) and SQ3R strategy on reading fluency and reading comprehension of students with reading difficulties. Korean Journal of Special Education, 41(4), 133–147.
*Lee, T. S., & Kim, D. I. (2006). Effects of peer tutoring on reading fluency of students with reading difficulties. The Journal of Special Education: Theory and Practice, 7(3), 121–135.
Mastropieri, M. A., & Scruggs, T. E. (1986). Early intervention for socially withdrawn children. Journal of Special Education, 19, 429–441.
Mayton, M. R., Wheeler, J. J., Menendez, A. L., & Zhang, J. (2010). An analysis of evidence-based practices in the education and treatment of learners with autism spectrum disorders. Education and Training in Autism and Developmental Disabilities, 45, 539–551.
Meyer, M. S., & Felton, R. H. (1999). Repeated reading to enhance fluency: Old approaches and new directions. Annals of Dyslexia, 49, 283–306.
*Min, H. S., & Lee, D. S. (2008). Effects of a systematic repetitive reading program on reading fluency and reading comprehension of underachieving elementary students. Asian Journal of Education, 9(4), 149–172.
National Reading Panel. (2000). Report of the National Reading Panel. Washington, D.C.: National Institute for Child Health and Human Development.
Odom, S. L., Brantlinger, E., Gersten, R., Horner, R. D., Thompson, B., & Harris, K. (2004). Quality indicators for research in special education and guidelines for evidence-based practices: Executive summary. Arlington, VA: Council for Exceptional Children, Division for Research.
Odom, S. L., Brantlinger, E., Gersten, R., Horner, R. H., Thompson, B., & Harris, K. R. (2005). Research in special education: Scientific methods and evidence-based practices. Exceptional Children, 71, 137–148.
Ouellette, G. P. (2006). What’s meaning got to do with it: The role of vocabulary in word reading and reading comprehension. Journal of Educational Psychology, 98, 554–566.
Pressley, M., Wharton-McDonald, R., Allington, R., Block, C. C., Morrow, L., Tracey, D., et al. (2001). A study of effective first-grade literacy instruction. Scientific Studies of Reading, 5, 35–58.
Rasinski, T., Blachowicz, C., & Lems, K. (2006). Fluency instruction: Research-based best practices. New York: The Guilford Press.
Sanders, W., & Rivers, J. (1996). Cumulative and residual effects of teachers on future student academic achievement (Research progress report). Knoxville: University of Tennessee Value-Added Assessment Center.
*Shin, I. Y., & Lee, J. S. (2012). A comparison of the effects of game program and direct instruction program for poor reading students. The Korean Journal of Applied Developmental Psychology, 1(1), 147–164.
Smith, S. W., Daunic, A. P., & Taylor, G. G. (2007). Treatment fidelity in applied educational research: Expanding the adoption and application of measures to ensure evidence-based practice. Education and Treatment of Children, 30, 121–134.
Strickland, W. D., Boon, R. T., & Spencer, V. G. (2013). The effects of repeated reading on the fluency and comprehension skills of elementary-age students with learning disabilities, 2001–2011: A review of research and practice. Learning Disabilities: A Contemporary Journal, 11(1), 1–33.
Swanson, E. (2008). Observing reading instruction for students with learning disabilities: A synthesis. Learning Disability Quarterly, 31, 115–133.
Taylor, B. M., Peterson, D. S., Pearson, P. D., & Rodriguez, M. C. (2002). Looking inside classrooms: Reflecting on the “how” as well as the “what” in effective reading instruction. The Reading Teacher, 56(3), 270–279.
Therrien, W. J. (2004). Fluency and comprehension gains as a result of repeated reading: A meta-analysis. Remedial and Special Education, 25(4), 252–261.
Wanzek, J., Wexler, J., Vaughn, S., & Ciullo, S. (2010). Reading interventions for struggling readers in the upper elementary grades: A synthesis of 20 years of research. Reading and Writing, 23(8), 889–912.
Wexler, J., Vaughn, S., Edmonds, M., & Reutebuch, C. K. (2008). A synthesis of fluency interventions for secondary struggling readers. Reading and Writing, 21, 317–347. doi:10.1007/s11145-007-9085-7.
Wolf, M., & Bowers, P. (1999). The “double-deficit hypothesis” for the developmental dyslexias. Journal of Educational Psychology, 91, 415–438.
*Yang, M. H., & Seo, Y. J. (2009). The effects of a reading intervention for students with reading difficulties: Through college student mentoring program. Special Education Research, 8(1), 87–110.