
Examining the relationship among reading curriculum-based measures, level of language proficiency, and state accountability test scores with middle school Spanish-speaking English language learners

Dissertation
Author: Nicole Osterman Stokes

TABLE OF CONTENTS

ACKNOWLEDGEMENTS iii

LIST OF TABLES vii

LIST OF FIGURES ix

ABSTRACT xi

CHAPTER I: INTRODUCTION 1
  Older ELL Students with Reading Difficulties 3
  CBM in Reading with English Language Learners 5
  CBM and High-Stakes Assessment 7
  Statement of the Problem 8
  Relevance of the Study 10

CHAPTER II: REVIEW OF THE LITERATURE 13
  Reading Difficulties in Older Students 14
  Reading and English Language Learners 17
  Curriculum-Based Measurement in Reading 22
  Curriculum-Based Measurement with Older Students 24
  CBM and English Language Learners 25
  Curriculum-Based Measures and High-Stakes Assessments 34
  Summary and Concluding Remarks 42

CHAPTER III: METHODOLOGY 44
  Introduction 44
  Subjects and Setting 44
  Procedures 45
  Description of Measures 46
  Data Analysis 52
  Summary 56

CHAPTER IV: RESULTS 58
  Descriptive Statistics 61
  CBM Models 65
  AIMS Models 69
  Prediction Analyses 88

CHAPTER V: DISCUSSION 104
  R-CBM Initial Status and Growth: Results 105
  R-CBM Initial Status and Growth Relative to ELL Status: Results 105
  AIMS Performance and R-CBM Initial Status and Growth Rates: Results 106
  Predictors for AIMS Performance: Results 109
  Results in Relation to Previous Research 112
  Implications for Instruction 120
  Limitations for the Present Study 124
  Suggestions for Future Research 126
  Conclusion 129

REFERENCES 133

VITA 141


LIST OF TABLES

Table 1: Student Population Demographics 45

Table 2: Subjects and Data 46

Table 3: AZELLA Performance Level Descriptions of Language Proficiency for Sixth Grade 48

Table 4: Arizona Reading Concepts and Strands 50

Table 5: AIMS Performance Level Descriptions 51

Table 6: Types of Data and Assessments 52

Table 7: Frequency and Percent of ELL Students in AZELLA Levels 1-5 59

Table 8: Means and Standard Deviations of Fall, Winter, and Spring R-CBM Scores 61

Table 9: Means and Standard Deviations of Fall, Winter, and Spring R-CBM Scores for ELL and Non-ELL Students 62

Table 10: Means and Standard Deviations for Oral Reading Fluency Scores at Fall, Winter and Spring by AIMS Level 64

Table 11: R-CBM Unconditional Growth Model (from Fall to Spring) 66

Table 12: Final Estimation of Variance Components for the CBM Unconditional Growth Model 67

Table 13: Differences in Initial Status and Growth Rates Between ELL and Non-ELL Students 68

Table 14: Final Estimation of Variance Components for ELL Status 69

Table 15: ANOVA for AIMS, ELL Status, and Time on R-CBM Initial Status 70

Table 16: Differences in Initial Status and Growth Rates by AIMS Reading Level 71

Table 17: Final Estimation of Variance for AIMS Level 72

Table 18: ELL Status and AIMS Reading Level Three-way Interaction Effect on R-CBM Initial Status and Growth Rate 73

Table 19: Final Estimation of Variance for AIMS Level and ELL Status 74

Table 20: Correlation Matrix of Predictor and Outcome Variables 88

Table 21: ANOVA: Model and AIMS Reading Scores 92

Table 22: Parameter Estimates of Predictor Variables on AIMS Outcome 93

Table 23: ANOVA: Non-ELL Model and AIMS Reading Scores 96

Table 24: Parameter Estimates of Predictor Variables on AIMS Outcome for Non-ELL Students 96

Table 25: ANOVA: ELL Model and AIMS Reading Scores 96

Table 26: Parameter Estimates of Predictor Variables on AIMS Outcome for ELL Students 97


LIST OF FIGURES

Figure 1: R-CBM data for all students that Fell Far Below the standard on the AIMS assessment. The outlier was removed from the data analysis. 60

Figure 2: Parallel coordinate of individual student’s growth from Fall to Spring in R-CBM data by AIMS level 1 (Falls far below). 75

Figure 3: Parallel coordinate of individual student’s growth from Fall to Spring in R-CBM data by AIMS level 2 (Approaches the standard). 76

Figure 4: Parallel coordinate of individual student’s growth from Fall to Spring in R-CBM data by AIMS level 3 (Meets the standard). 77

Figure 5: Parallel coordinate of individual student’s growth from Fall to Spring in R-CBM data by AIMS level 4 (Exceeds the standard). 78

Figure 6: Parallel coordinate of non-ELL students’ growth on R-CBM from Fall to Spring. 79

Figure 7: Parallel coordinate of ELL students’ growth on R-CBM from Fall to Spring. 80

Figure 8: Parallel coordinate of ELL students’ growth on R-CBM from Fall to Spring at AIMS level 1 (Falls far below). 82

Figure 9: Parallel coordinate of non-ELL students’ growth on R-CBM from Fall to Spring at AIMS level 1 (Falls far below). 82

Figure 10: Parallel coordinate of ELL students’ growth on R-CBM from Fall to Spring at AIMS level 2 (Approaches the standard). 84

Figure 11: Parallel coordinate of Non-ELL students’ growth on R-CBM from Fall to Spring at AIMS level 2 (Approaches the standard). 85

Figure 12: Parallel coordinate of ELL students’ growth on R-CBM from Fall to Spring at AIMS level 3 (Meets the standard). 86

Figure 13: Parallel coordinate of non-ELL students’ growth on R-CBM from Fall to Spring at AIMS level 3 (Meets the standard). 87

Figure 14: Homoscedasticity and independence of residuals. 90

Figure 15: Normality of residuals. 90

Figure 16: Scatterplot of the normality of residuals. 91

Figure 17: Scatterplot of Fall R-CBM by AIMS Scaled Score. 93

Figure 18: Scatterplot of ELL status by AIMS Scaled Score (ELL = 1, non-ELL = 0) 94

Figure 19: Scatterplot of Percentage of Growth on R-CBM from Fall to Spring and AIMS Scaled Score. 94

Figure 20: Scatterplot of Maze scores and AIMS Scaled Score. 95

Figure 21: Scatterplot of R-CBM by AIMS Scaled Score for non-ELL students. 98

Figure 22: Scatterplot of R-CBM percentage of growth by AIMS Scaled Score for non-ELL students. 99

Figure 23: Scatterplot of Maze by AIMS Scaled Score for non-ELL students. 99

Figure 24: Scatterplot of Fall R-CBM by AIMS Scaled Score for ELL students. 100

Figure 25: Scatterplot of R-CBM growth by AIMS Scaled Score for ELL students. 100

Figure 26: Scatterplot of Maze by AIMS Scaled Score for ELL students. 101

Figure 27: AIMS Scaled Score versus growth by ELL status. 102

Figure 28: AIMS Scaled Score versus growth by Maze. 102

Figure 29: AIMS Scaled Score versus initial status (Fall R-CBM) by ELL status. 103

Figure 30: AIMS Scaled Score versus initial status (Fall R-CBM) by Maze. 103


ABSTRACT

The purpose of the present study was to examine the predictive ability of oral reading fluency (R-CBM) on a sixth grade high-stakes assessment with ELL and non-ELL students, as well as to determine the average rate of growth on R-CBM and how that growth relates to level of English proficiency. The participants in the current study included 350 sixth grade students from a middle school located in west Phoenix, AZ. Ninety of the 350 students were English Language Learners at varying levels of language proficiency in English. Archival data were used for the study. Each participant completed R-CBM three times throughout the 2006-2007 school year (fall, winter, and spring benchmarks), the Maze assessment (administered once in early spring), and the state accountability exam (AIMS) given in the spring. Of the 350 total students in the sample, 90 were also administered a language proficiency exam, the Arizona English Language Learner Assessment (AZELLA), due to their ELL status. To avoid statistical and interpretive problems with the data analysis, HLM, multiple regression, and data visualization were used. The study was conducted to examine the typical rate of growth on R-CBM over one school year for ELL and non-ELL students, and to determine whether reading screening measures (R-CBM, Maze) are effective in predicting future reading success or failure. Results indicated that the average initial status, as measured by the Fall R-CBM, was 103.51 words read correctly per minute, with an average growth rate (slope) of 10.36 words read correctly from Fall to Winter and again from Winter to Spring; thus, the overall growth was about 20.72 words over one school year. There was significant variation between ELL and non-ELL students in their initial status as measured by the Fall R-CBM score. However, while ELL students scored significantly lower on R-CBM in the Fall, their growth trajectories did not differ from those of non-ELL students over the school year. Students' initial status on the Fall R-CBM also varied significantly by AIMS performance level, but their growth rates did not. The regression analysis for the overall student population showed that Fall R-CBM (initial status), percentage of growth on R-CBM from Fall to Spring, and ELL status were all significant predictors of student performance on AIMS; the Maze assessment was not a significant predictor. A student's initial status, as measured by the Fall R-CBM, was the most important factor in predicting whether that student would meet the state standards on the spring AIMS assessment for both the ELL and non-ELL populations. Implications for instruction and suggestions for future research are discussed.

CHAPTER I

INTRODUCTION

The importance of the acquisition of early literacy skills is well known. Students who have difficulty learning to read often continue to have significant difficulty throughout their educational careers. In what Stanovich (1986) described as the "Matthew Effect," students who acquire early reading skills are equipped with the tools to grow exponentially in their knowledge and skills, while students who fail to develop early literacy skills fall further and further behind. As recent national concern and legislation make evident, many children are failing to develop early literacy skills, which leads to poor academic and social outcomes (Haager & Windmueller, 2001). With the introduction of the No Child Left Behind Act (NCLB, 2001), states are required to test students on an annual basis and show that at least 95% of students are meeting annual measurable objectives in reading and math. In addition to the general student population, individual subgroups, including English Language Learners and students with disabilities, must also meet the 95% goal. Thus, it is essential for schools to identify students at the start of the school year who are at risk of not meeting state standards, as well as to improve overall reading outcomes for these students through appropriate intervention.

Universal screening using Curriculum-Based Measurement is one way of quickly assessing and identifying at-risk students early in the school year. Curriculum-Based Measurement (CBM) oral reading has been shown to be a reliable assessment tool for
predicting student performance on state tests (Hintze & Silberglitt, 2005; McGlinchey & Hixson, 2004; Silberglitt et al., 2006; Stage & Jacobsen, 2001; Wiley & Deno, 2005). However, very few studies have addressed the use of CBM oral reading with English Language Learners, and more specifically whether CBM is a reliable predictor of ELL performance on high-stakes assessments. The CBM Maze task is another general outcome measure of reading that has been explored in the literature (Ardoin et al., 2004; Brown-Chidsey, Johnson, & Fernstrom, 2005; Shinn, Deno, & Espin, 2000; Twyman & Tindal, 2007; Wiley & Deno, 2005). However, research examining the predictive validity of the Maze task on state high-stakes assessments in reading is minimal, particularly with English Language Learners.

English language learners (ELLs) frequently have difficulty developing early reading skills. For example, Hispanic students are almost twice as likely as non-Hispanic Whites to be reading below the expected level for their age (Snow, Burns, & Griffin, 1998, as cited in Gunn et al., 2000). What is more, many Hispanic students are reaching middle school with reading skills that still fall significantly below those of their non-Hispanic White peers. While there is a growing body of research on ELL students and reading in the elementary grades, there has been minimal research with middle school students (i.e., grades 6-8). According to Denton et al. (2008), research on effective intervention for older readers with reading difficulties is lacking, particularly for students who are English Language Learners (ELLs). The importance of early intervention is well established. However, many districts are facing high numbers of ELL students transitioning in and out of their schools, which poses a significant
problem for effectively meeting these students' needs. Often, students are starting middle school (sixth grade) with reading skills that fall significantly below grade-level expectations. More research is needed on how to effectively identify, intervene with, and monitor these students in order to improve reading outcomes for this population. The current study attempts to further research in the area of assessment and prediction with older ELL students by examining the use of Curriculum-Based Measurement (CBM) in reading, both oral reading and Maze, and how these general outcome measures relate to ELL student performance on high-stakes tests.

Older ELL Students with Reading Difficulties

According to Kamil (2003), more than 25% of middle school students are unable to read well enough to identify the main idea of a reading passage. Denton et al. (2008) identified three possible explanations for the low reading skills of older students: 1) older students with reading deficits do not have adequate knowledge of the alphabetic principle and word reading; 2) older students with reading difficulties do not possess an adequate understanding of word meanings or the reasoning skills necessary for comprehension of text; and 3) older students with reading difficulties do not have the task orientation toward reading needed to gain adequate reading proficiency. For older students who are also English Language Learners, the challenge of becoming an adequate reader is even more complex. Not only do these students struggle with word reading skills and vocabulary knowledge, but many of them also present with limited background knowledge as well as various
contextual factors (e.g., high mobility rates, low socioeconomic status) that may affect their future reading success. Although there has been research conducted on elementary ELL students struggling with reading, studies examining older ELL students with reading difficulties are limited. One study by Denton et al. (2008) examined the effectiveness of a multicomponent reading intervention with middle-school Spanish-speaking ELL students. Results showed that the treatment group did not demonstrate higher outcomes than the non-treatment group in terms of word recognition, comprehension, or fluency. The authors concluded that ELL students with severe reading difficulties may require considerably more intensive interventions over considerably longer periods of time in order to improve reading outcomes. The results from this study may contribute to the notion that middle school students with reading difficulties are too old for remediation. However, several studies conducted with adolescents with reading difficulties indicate that older students are generally responsive to reading interventions (Edmonds et al., 2009; Ehren, Lenz, & Deshler, 2004; Scammacca et al., 2007). Although these studies are promising, more research is needed on reading assessment procedures that can quickly and accurately identify middle-school students in need of intervention, particularly ELL students.

CBM in Reading with English Language Learners

Assessment is a key component of any effective reading program. According to Haager and Windmueller (2001), assessment serves three critical purposes in developing a reading intervention program: identifying students in need of supplemental instruction, guiding instructional planning, and monitoring student progress on an ongoing basis. As with any intervention, students need to be screened in some way to identify a need for assistance. For reading intervention, there are numerous screening tools that can be used to identify students at risk for reading failure. Curriculum-based measurement (CBM) is one such screening tool. CBM has been established as a systematic, standardized, and reliable assessment tool for examining a student's progress in reading. Deno (2003) described multiple uses of CBM, including norm development, identification of students academically at risk, and prediction of performance on important criteria. CBM also has the potential to be a method for assessing both the level and growth of student performance in reading. While the research on the typical rate of growth in reading for monolingual English-speaking students has grown over the past decade, studies investigating the typical growth rate of ELL students are limited (Dominguez de Ramirez & Shapiro, 2006). Moreover, there is even less research examining growth patterns in reading for middle school students, specifically students who are English Language Learners. By developing growth standards in reading specifically for Spanish-speaking ELLs, including students in the upper elementary grades and middle school, practitioners could more efficiently and effectively identify students who are having difficulty (Dominguez de Ramirez & Shapiro, 2006). Once identified, these students could be provided with intervention and their progress monitored frequently. Their growth trajectory could be compared with that of the typical Spanish-speaking ELL student, which would also assist school teams in answering the difficult question of whether a lack of academic progress is due to second language acquisition or an underlying disability.

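As an illustration of the kind of comparison described above, the following is a minimal sketch that fits a least-squares slope to one student's three R-CBM benchmark scores and compares it with the median slope of a comparison group (for example, Spanish-speaking ELL peers in the same grade). The scores, function names, and the use of a simple least-squares slope are illustrative assumptions, not procedures drawn from this study.

```python
import numpy as np

BENCHMARKS = np.array([0, 1, 2])  # fall, winter, spring

def rcbm_slope(scores):
    """Least-squares slope: words read correctly gained per benchmark period."""
    return np.polyfit(BENCHMARKS, np.asarray(scores, dtype=float), 1)[0]

def compare_to_group(student_scores, group_scores):
    """Compare one student's R-CBM slope with the median slope of a group."""
    student = rcbm_slope(student_scores)
    group = np.median([rcbm_slope(s) for s in group_scores])
    return student, group, student - group

# Hypothetical (fall, winter, spring) scores for one student and a small group.
student = (72, 78, 84)
group = [(95, 105, 116), (88, 99, 108), (110, 121, 131)]
print(compare_to_group(student, group))  # student slope, group median, difference
```

In practice, the comparison slope would come from local or published benchmark norms for the relevant population rather than from a handful of peers.
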
The underlying assumption that fluency and comprehension are related is often challenged by ELL teachers who claim that these students can decode words without comprehending what they read (Dominguez de Ramirez & Shapiro, 2006). As such, additional research is needed that looks more carefully at the link between oral reading fluency (ORF) and comprehension for ELLs. One measure currently being used to assess reading comprehension at the higher grade levels is the Maze assessment. The Maze assessment consists of one standardized reading passage of grade-level difficulty (Howe & Shinn, 2002). In each Maze passage, the first sentence is left intact, and every subsequent seventh word is replaced with three words to choose from (e.g., When (red/she/told) was a little girl…). The Maze task has been found to be a reliable and valid measure of reading comprehension for students in elementary, middle, and high school (Brown-Chidsey, Davis, & Maya, 2003; Shinn, Deno, & Espin, 2000). However, very few studies have examined the reliability and validity of the Maze task with the ELL population. One study by Wiley and Deno (2005) found that while the Maze task was a better predictor than oral reading fluency for fifth grade non-ELL students, it was less predictive than oral reading for the ELL population. Thus, more research is needed on whether the Maze task would be an appropriate assessment tool for use with the ELL population in predicting reading success.

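To make the construction rule described above concrete, the following is a minimal sketch of how a Maze-style passage could be generated. It is a simplified illustration under stated assumptions (a naive distractor pool and a first sentence detected by a period), not the standardized procedure behind published Maze probes, which constrain passage selection and distractors much more carefully.

```python
import random

def make_maze(passage, distractor_pool, seed=0):
    """Leave the first sentence intact, then replace every 7th word with a
    3-choice item (the correct word plus two distractors in random order)."""
    rng = random.Random(seed)
    first, _, rest = passage.partition(". ")  # crude first-sentence split
    words = rest.split()
    for i in range(6, len(words), 7):  # every 7th word of the remainder
        correct = words[i]
        distractors = rng.sample([w for w in distractor_pool if w != correct], 2)
        choices = [correct] + distractors
        rng.shuffle(choices)
        words[i] = "(" + "/".join(choices) + ")"
    return first + ". " + " ".join(words)

passage = ("When she was a little girl, she lived on a farm. Every morning she "
           "walked down the long dusty road to feed the horses before the sun "
           "came up over the hills.")
print(make_maze(passage, ["red", "told", "jump", "blue", "happy", "ran"]))
```
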
CBM and High-Stakes Assessment

Another benefit of using CBM data is to assist in predicting whether students will meet state standards as assessed by the state tests mandated by the No Child Left Behind Act of 2001. There have been a few studies examining the relationship between curriculum-based measurement for reading (R-CBM) and state accountability test scores (Hintze & Silberglitt, 2005; McGlinchey & Hixson, 2004; Silberglitt et al., 2006; Stage & Jacobsen, 2001). One study found that while there was a relationship between CBM and state test scores, the magnitude of the relationship declined significantly as grade level increased (Silberglitt et al., 2006), yet this decline was established with a primarily non-ELL population. As previous research has found, the growth trajectory on reading fluency measures for ELL students differs from that of monolingual English-speaking students. In one study (Dominguez de Ramirez & Shapiro, 2006), while Spanish-speaking ELLs' rate of growth in English was significantly slower than that of their non-ELL peers, they demonstrated substantial improvement by fifth grade. As such, the predictability of reading CBM on state test performance may differ for the ELL population. In other words, if ELL students' oral reading fluency continues to increase at a high rate into the later elementary and early middle school years, the magnitude of the relationship between CBM and state test performance may remain consistent or even increase as the grade level increases. In the existing studies, the samples were primarily White and not of Hispanic origin. While the
“non-Caucasian” population in one of these studies made up 52% of the total sample, the language proficiency of those students was not reported (McGlinchey & Hixson, 2004). As such, it is difficult to generalize current findings on the relationship between CBM and performance on state tests to the ELL student population. Another study by Wiley and Deno (2005) examined the oral reading and Maze measures as predictors of performance on the state test with both non-ELL and ELL students in third and fifth grade. Results showed moderate to moderately strong correlations between the state standards test and the two CBM measures for all students. However, when the two measures were combined, the predictive power increased only for the non-ELL students. Further research is needed that examines the potential of oral reading and Maze for use with the ELL population in terms of assessment of reading proficiency.

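The kind of question raised above, whether adding Maze to oral reading fluency improves the prediction of a state test score, can be illustrated by comparing the variance explained (R^2) by ORF alone with that explained by ORF plus Maze. The sketch below uses randomly generated placeholder data and assumed variable names; it does not reproduce any data or result from this study.

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit of y on X (with an intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Placeholder scores: fall ORF, Maze, and spring state-test scaled scores.
rng = np.random.default_rng(0)
orf = rng.normal(100, 25, 200)
maze = 0.2 * orf + rng.normal(0, 4, 200)
state = 3 * orf + 2 * maze + rng.normal(0, 60, 200)

r2_orf = r_squared(orf.reshape(-1, 1), state)
r2_both = r_squared(np.column_stack([orf, maze]), state)
print(f"ORF alone: R^2 = {r2_orf:.2f};  ORF + Maze: R^2 = {r2_both:.2f}")
```
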
Statement of the Problem

Further research is needed to examine the relative validity of CBM reading assessment with older ELL students for the purpose of developing an assessment system that is able to quickly and accurately target students in need of intervention. In addition, this assessment system could be utilized to change student outcomes in terms of future high-stakes assessments. The current research examined the average rate of growth over one school year in CBM oral reading fluency, the relationship of this rate of growth to students' level of English Language Proficiency, and whether reading fluency and Maze measures are predictive of later high-stakes success in reading for older ELL students. Dominguez de Ramirez and Shapiro (2006), Wiley and Deno (2005), Twyman and Tindal (2007), Sibley et al. (2001), Hintze and Silberglitt (2005), and McGlinchey and Hixson (2004) have all indicated the need for more research in the areas addressed by the current study. The present study adds to previous findings on the predictiveness of R-CBM assessments relative to high-stakes tests through the use of both oral reading fluency and Maze measures. Unlike previous studies, however, this research examines the predictive power of R-CBM measures on future high-stakes success with middle-school students who are also English Language Learners. Level of language proficiency will also be examined in terms of its relationship with the rate of growth on reading fluency measures, as well as whether it affects the magnitude of the relationship of R-CBM and Maze with high-stakes test scores.

Due to the limited research on R-CBM with the ELL population, an expected rate of growth in terms of reading fluency for this population is lacking. In a study by Dominguez de Ramirez and Shapiro (2006), although their progress was slower than that of their non-ELL peers, Spanish-speaking ELL students showed growth on CBM measures over time, demonstrating that CBM can be sensitive to the course of literacy development in this population. As such, there is a need for more research to assist in developing an expected rate of growth in order to determine norms and/or benchmarks for this population. In other words, what is the expected rate of growth from the fall to the winter benchmark assessment for students who are learning English? In addition to rate of growth, it is also crucial to be able to identify early (i.e., on the fall benchmark assessment) which students are at risk of reading failure in order to provide intervention quickly. As such, more research is needed on establishing appropriate "cut-off" points for accurately identifying ELL students in need of more intensive intervention. The current study utilized R-CBM in fall, winter, and spring to examine the typical rate of growth for this population, as well as to examine oral reading cut-off scores in the fall in order to predict success or failure on state tests in the spring.

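As a rough illustration of how a fall cut-off score might be explored, the sketch below sweeps candidate fall R-CBM cut points and reports, for each one, the proportion of eventual non-passers who would be flagged (sensitivity) and the proportion of eventual passers who would not be flagged (specificity). The data, variable names, and the sensitivity/specificity framing are assumptions made for illustration, not the procedure used in this study.

```python
import numpy as np

def screen_cut_scores(fall_orf, passed_state_test, candidate_cuts):
    """For each candidate cut score, report sensitivity and specificity of the
    rule 'flag a student as at risk when fall ORF is below the cut'."""
    fall_orf = np.asarray(fall_orf, dtype=float)
    passed = np.asarray(passed_state_test, dtype=bool)
    rows = []
    for cut in candidate_cuts:
        flagged = fall_orf < cut
        sensitivity = flagged[~passed].mean()    # non-passers correctly flagged
        specificity = (~flagged)[passed].mean()  # passers correctly not flagged
        rows.append((cut, sensitivity, specificity))
    return rows

# Hypothetical fall ORF scores and whether each student later met the standard.
fall = [62, 75, 88, 95, 103, 110, 118, 125, 140, 152]
met = [0, 0, 0, 1, 1, 1, 1, 1, 1, 1]
for cut, sens, spec in screen_cut_scores(fall, met, range(80, 121, 10)):
    print(f"cut={cut:3d}  sensitivity={sens:.2f}  specificity={spec:.2f}")
```
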
Relevance of the Study

The number of ELL students enrolled in public schools in the U.S. has increased significantly over the past several years. According to the U.S. Census Bureau, the foreign-born population in the United States increased by 57% from 1990 to 2000 (from 19.8 million to 31.1 million), and over half of the foreign-born population came from Latin America (U.S. Census Bureau, 2000). According to a report by Macias (2000), an estimated 78% of the limited English proficient students in the U.S. in 1997-1998 were Spanish-speaking. With this growing number of Spanish-speaking ELL students in the schools, educators have become more concerned with how to appropriately meet the educational needs of this population, particularly in reading. The educational difficulty of these students has become a national concern due to low reading achievement that increases in severity in the later grades, which in turn has led to high rates of grade retention and dropout for these students (Rueda & Windmueller, 2006).

Currently, there is a political and educational emphasis on accountability, high-stakes assessment, and student outcomes. Districts are being held accountable for the outcomes of all students based on the results of standards-based, high-stakes assessments. Schools not only have to show that 95% of their students are proficient on state tests, but individual subgroups, including English Language Learners, must also meet the 95% goal
(Wiley & Deno, 2005). This puts schools in a difficult position, because by the time this assessment is given (usually in the spring), it is too late to identify students at risk or to determine whether reading instruction was effective in improving student outcomes. For this reason, many districts have been using CBM as a way of identifying students at risk of reading failure early in the school year, as well as measuring student progress over time. In addition, recent research has been promising regarding the use of oral reading fluency measures as a predictor of whether students will meet standards on state tests. While CBM has been well established as an effective assessment tool for English-speaking students, there has been limited research on the use of CBM with ELL students. According to McCardle, Mele-McCarthy, and Leos (2005), there is a need for accurate and user-friendly assessment tools that schools can utilize for screening and progress monitoring with the ELL population.

The purpose of the current study is to examine the following questions:

1) What was the average rate of growth of the current population over one school year on the reading fluency R-CBM measures?
2) What is the relationship between the rate of growth on R-CBM and level of English Language Proficiency?
3) If students vary in their growth rates (slopes) on R-CBM and/or intercepts (performance on the state AIMS reading assessment), is this variation related to ELL status?
4) Are R-CBM initial status, R-CBM rate of growth, and Maze predictive of reading achievement on the spring AIMS reading assessment relative to ELL
status? Which of these factors is the most potent predictor of reading achievement on the AIMS reading assessment?

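As a rough sketch of how the fourth question might be approached, the following fits an ordinary least-squares regression predicting AIMS scaled scores from fall R-CBM, percentage of R-CBM growth from fall to spring, ELL status, and Maze, mirroring the predictors named above. It is an illustration only: the column names and the pandas/statsmodels workflow are assumptions rather than the study's actual analysis code, and the growth questions (1-3) would additionally call for a multilevel (HLM) growth model rather than a single regression.

```python
import pandas as pd
import statsmodels.api as sm

def fit_aims_model(df):
    """OLS regression of AIMS scaled score on CBM predictors and ELL status.

    Assumed (hypothetical) columns: fall_rcbm, spring_rcbm, maze,
    ell (1 = ELL, 0 = non-ELL), aims_scaled.
    """
    df = df.copy()
    # Percentage of R-CBM growth from fall to spring.
    df["growth_pct"] = (df["spring_rcbm"] - df["fall_rcbm"]) / df["fall_rcbm"] * 100
    X = sm.add_constant(df[["fall_rcbm", "growth_pct", "ell", "maze"]])
    return sm.OLS(df["aims_scaled"], X).fit()

# Usage with a hypothetical data file:
# df = pd.read_csv("benchmarks.csv")
# results = fit_aims_model(df)
# print(results.summary())  # coefficients, p-values, R^2
# Fitting the model separately on df[df["ell"] == 1] and df[df["ell"] == 0]
# would parallel the ELL and non-ELL analyses described in the abstract.
```
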
CHAPTER II

REVIEW OF THE LITERATURE

With the introduction of the No Child Left Behind Act (2001), schools are required to test all of their students on an annual basis in order to establish that the majority of students are making adequate yearly progress in both reading and math. In other words, at least 95% of the student population, including individual subgroups (e.g., English Language Learners), must score proficient on a high-stakes assessment administered in the spring. Because of this 95% goal, schools, teachers, and support personnel are feeling tremendous pressure to identify early those students who are at risk of not meeting state standards, particularly in reading. As such, there is a well-established need for simple yet reliable assessments that have the ability to predict academic outcomes. One such assessment is Curriculum-Based Measurement (CBM) (Deno, 1985), a brief set of measures that serve as indicators, or general outcome measures, of one specific academic area such as reading (Stecker, Lembke, & Foegen, 2008). CBM is widely used and well researched at the elementary level, both to universally screen for students who are at risk of reading failure and to monitor the progress of those students who have been targeted as needing additional instruction/intervention. However, there is limited research on the use of CBM with older students, and on whether it is an effective assessment tool for predicting outcomes and monitoring progress over time. Yet another challenge is the use of CBM with English Language
