No Child Left Behind – Adequate Yearly Progress

Outline

The No Child Left Behind Act was introduced by the federal government in 2001. Under it, schools are expected to assess students' yearly progress using Adequate Yearly Progress (AYP) measures. States that have seen an upsurge in the number of schools and districts joining NCLB's effort to improve overall performance have also seen quite a number of improvements and adaptations to the program. This paper therefore examines the pros and cons of AYP. Despite its advantages, many states and scholars have been critical of the program, citing some of its weaknesses (Choi, Goldschmidt, & Yamashiro, 2005). Furthermore, many educational scholars and researchers have predicted that by the 2013/14 school year, nearly all schools and districts will fail to meet the adequate yearly progress requirements.

Introduction

The No Child Left Behind Act (2001) demands that, on a yearly basis, schools and districts undergo adequate yearly progress (AYP) testing. AYP requires that states continuously monitor student and school performance by accounting for the number of students who meet the standards set by the state (Aldridge, 2009). The NCLB goals set by the federal government for the years 2002–2014 are: meeting state-set standards for subject mastery within the time frame, ensuring that states assess students' knowledge and ability, defining and implementing the teacher-quality improvement efforts needed to achieve the target, defining ways to improve the performance of schools, ensuring that feedback on student performance reaches parents effectively, and, finally, giving states freedom in their criteria for allocating funds (Aldridge, 2009).

States that have seen an upsurge in the number of schools and districts joining NCLB's effort to improve overall performance have also seen quite a number of improvements and adaptations to the program. Even though the program has its pros, many states have been critical of it, citing some typical failures, especially the unjust identification of some schools as failing more than others (Choi, Goldschmidt, & Yamashiro, 2005). Furthermore, many educational scholars and researchers have predicted that by the 2013/14 school year, nearly all schools and districts will fail to meet the adequate yearly progress requirements, not even excepting the elite schools that have continuously excelled in the past (Goldschmidt, 2006; Linn, 2005). This led to a relaxation of the rules that allowed schools, districts, and states to submit proposed alternative models for monitoring school performance (U.S. Department of Education, 2006). North Carolina, Tennessee, Delaware, Arkansas, and Florida submitted proposals to the Education Department that were eventually approved in 2007 (Kappan, 2003).

There are step-by-step ways of applying CBM in order to meet AYP requirements (John, 1989). The first step is quantifying the initial proficiency status: every student's past performance is assessed to identify the number of students who met the AYP benchmarks, which determines the school's or district's initial proficiency status (89). The second step is computing the discrepancy between the initial proficiency level and the universal proficiency level set by the 2013/14 goals (96); the difference gives the size of the gap to be closed. Finally, the AYP itself is identified: the discrepancy is divided by the number of years left, and the result gives the AYP, or, simply put, the additional number of students who must meet the benchmark by the end of each year (97). This will assist the school in achieving universal proficiency by the deadline set by the state (99).
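To make these steps concrete, here is a minimal sketch, with hypothetical numbers, of the calculation just described: establish the initial proficiency status, compute the discrepancy against universal proficiency, and divide it by the years remaining. The function name and figures are illustrative only, not part of the NCLB regulations.

```python
# A minimal sketch, with hypothetical numbers, of the three-step CBM/AYP
# calculation described above. Nothing here is prescribed by NCLB itself.

def annual_ayp_target(total_students: int,
                      students_meeting_benchmark: int,
                      years_remaining: int) -> float:
    """Additional students who must reach the benchmark each year."""
    # Step 1: initial proficiency status (students already at benchmark).
    initial_proficient = students_meeting_benchmark
    # Step 2: discrepancy between initial and universal (100%) proficiency.
    discrepancy = total_students - initial_proficient
    # Step 3: divide the discrepancy by the years left to the deadline.
    return discrepancy / years_remaining

# Example: 400 students, 280 already at benchmark, 6 school years remaining.
print(annual_ayp_target(400, 280, 6))  # -> 20.0 additional students per year
```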

NCLB requires that all students be proficient in reading and mathematics by the 2013/14 school year, specifically in schools that receive Title I funding (Davare, 2004). This is in line with the policy's target that students and schools demonstrate perfect (100%) proficiency through adequate yearly progress (Choi, 2006a). Schools that do not meet this standard each year face severe sanctions scaled to how many years they have faltered in the program (Choi, 2006b). The presumption here is that using percentages to measure students' ability to read and solve mathematical problems is sufficient to separate schools that meet the improvement requirements from those that do not (24).
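As an illustration of the status check implied here, the sketch below uses hypothetical targets and illustrative sanction labels (the actual sanctions are defined by the statute, not by this code) to show how a school's annual proficiency percentage would be compared against the state target, with consequences escalating the longer the target is missed.

```python
# A minimal sketch, with hypothetical targets and illustrative sanction
# labels; the real sanctions are defined by the statute, not by this code.

def meets_ayp(percent_proficient: float, annual_target: float) -> bool:
    """True if the school's proficiency rate reaches the state target."""
    return percent_proficient >= annual_target

def sanction_stage(consecutive_misses: int) -> str:
    """Escalating (illustrative) consequence for repeated AYP failures."""
    stages = ["none", "school improvement", "corrective action", "restructuring"]
    return stages[min(consecutive_misses, len(stages) - 1)]

print(meets_ayp(72.5, 75.0))   # False: this year's target was missed
print(sanction_stage(3))       # 'restructuring' after repeated misses
```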

Progress monitoring through Adequate Yearly Progress by the states has its benefits, despite the flaws raised by experts and scholars (Goldschmidt, 2006). This scientifically based program is useful in that the state is able to identify which schools are performing well and which are not, thereby accelerating the learning process, especially for students who receive adequate and appropriate instruction from their teachers (51). This builds higher expectations on the part of both students and teachers. Overall, progress evaluation and monitoring is likely to result in more efficient and focused instructional techniques, as well as goals that eventually improve all students' performance in a given state (Davare, 2004).

Since the NCLB Act demands that all public school students in grades three through eight be proficient in reading and mathematics by the 2013/14 school year, AYP helps to improve the performance of schools, classes, and students in the effort to achieve the No Child Left Behind goals (ELC). This ensures that schools strive to reach the minimum universal proficiency targets for their students. It can therefore be noted that AYP is the minimum rate of growth required to eliminate the discrepancies and gaps between schools within a specified time frame. It follows that the AYP for one particular school will not necessarily fit another school (ELC).

Compared to traditional assessment methods that rely on measuring discrete skills, the AYP method relies on Curriculum-Based Measurement (CBM), a set of methods for testing academic competency, especially proficiency in reading, spelling, mathematics, and writing (Deno, 2007). It is mainly based on fluency, with scores reflecting accuracy and ease of response, and it consequently reflects the overall performance and competency of the individual student. Reading is a good example: a student who passes demonstrates command of multiple coordinated skills of word decoding, identification, and comprehension.
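A simple way to picture a fluency-based CBM probe is sketched below; the scoring rule (words read correctly per minute on a timed passage) is a common CBM convention, and the numbers are hypothetical.

```python
# A minimal sketch of a fluency-oriented CBM reading probe: the score is
# words read correctly per minute on a timed passage (numbers hypothetical).

def cbm_reading_score(words_attempted: int, errors: int,
                      seconds_elapsed: int = 60) -> float:
    """Words read correctly per minute (WCPM) on a timed reading probe."""
    words_correct = max(words_attempted - errors, 0)
    return words_correct * 60.0 / seconds_elapsed

print(cbm_reading_score(words_attempted=110, errors=6))  # -> 104.0 WCPM
```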

Through the use of Curriculum-Based Measurement for Adequate Yearly Progress, academic improvement can be modeled within the school year. When a school records individual student scores on a weekly or monthly basis, a graph plotted from those figures clearly shows improvement or decline in academic performance. As Fuchs (1985) puts it, "a goal line that represents a desired rate of improvement can be established by connecting the student's initial CBM score to the year-end goal. If the scores fall below this goal line, the teacher deems the present instructional program inadequate to accomplish the year-end goal and makes changes to the program in an attempt to enhance the rate of learning."
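The goal-line idea attributed to Fuchs can be sketched as follows: connect the initial CBM score to the year-end goal with a straight line and flag any week in which the observed score falls below that line. All scores and goals in this sketch are hypothetical.

```python
# A minimal sketch of the goal line described by Fuchs: a straight line from
# the initial CBM score to the year-end goal, used to flag weeks where the
# observed score falls short (all scores below are hypothetical).

def goal_line(initial_score: float, year_end_goal: float,
              total_weeks: int, week: int) -> float:
    """Expected score at a given week along the goal line."""
    weekly_gain = (year_end_goal - initial_score) / total_weeks
    return initial_score + weekly_gain * week

initial, goal, weeks = 40.0, 100.0, 30
observed = {5: 48.0, 10: 55.0, 15: 62.0}  # week -> CBM score

for week, score in observed.items():
    expected = goal_line(initial, goal, weeks, week)
    status = "on track" if score >= expected else "below goal line"
    print(f"week {week}: observed {score}, expected {expected:.1f} -> {status}")
```

When scores repeatedly fall below the goal line, as in this example, the teacher would revise the instructional program rather than wait for the year-end result.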

However, this method of measurement has more than its share of disadvantages, which is why growth models are being encouraged instead of the single status measure that NCLB requires through AYP. AYP has proved to be more of a status model, since it only accounts for the number of students who meet the target in a specified year. As stated earlier, the presumption is that monitoring students' performance or proficiency in general reading and mathematics is enough to separate the schools and districts that do well from those that perform poorly. However, this assumption is not always right and can mislead. First, schools and districts are held responsible for the performance of students, sub-groups, and particular sections of students from diverse backgrounds who may have lacked adequate early-childhood development programs. This can make the program's goals difficult to achieve and jeopardize the success of schools with many such student subgroups (Novak & Fuller, 2003).

Again, simply monitoring the percentage of students scoring at or below a proficiency level is not an appropriate measure of academic performance (Goldschmidt et al., 2005). It puts schools that enroll many disadvantaged students at a disadvantage, causing a crisis in the enrollment process: it is likely to make some schools shun specific students just to score highly on the AYP scorecard. Furthermore, schools that receive many such students would find it difficult to compete favorably with schools that enroll only a few.

Monitoring student performance through AYP is also being scrutinized keenly, for it assumes that current student performance is purely a function of that specific year's instructional efforts, wholly ignoring previous years. This may give a wrong signal about the actual reasons for good or bad performance, because good performance is normally the result of long-term effort (U.S. Department of Education, 2006). It also lends little credibility to the results: when a single classification of below, at, or above proficient is used, so much information is left out that the monitoring process lacks the detail it needs (Wermers, 2004).

The No Child Left Behind program is also costly. After examining the financial cost of NCLB, Kappan (2003) reveals that the program, combined with AYP, is a very expensive way of improving the standard of education in the United States. In this study, the professional judgment method was used to identify the cost of the program combined with the AYP monitoring criteria. A panel of experts examined the general allocation and compared it to school performance ratings in the United States, for both high- and low-achieving schools. The results of the ten studies revealed that if AYP is used in the monitoring process, standards-based NCLB costs will increase by 24% and the actual cost of remedial education will double in a whopping 8 out of 10 states. The estimated average cost at the national level increased from $84.5 billion to $148 billion, far surpassing the $1 billion budget increase provided by the federal government each year. This is why many experts, Kappan included, question the feasibility and reliability of AYP. He goes further to highlight the negative impact of this assessment method, attributing to it a high rate of school dropouts and a narrowing of the curriculum as schools struggle to avoid being branded a "group of failures". It is estimated that by the 2013/14 school year, 75% of schools will fall into this category of "failures" (Wermers, 2004). This is likely to prompt a shift away from equal access to educational resources for all citizens, denying them equal opportunity to obtain education as a basic need, since money will be taken away from poor-performing schools and given to high-performing ones (9).
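The gap between these cost estimates and the federal budget increase can be checked with simple arithmetic, as in the sketch below; the dollar figures come from the paragraph above, and the comparison itself is only an illustration.

```python
# Cost figures cited above, in billions of dollars; the comparison itself
# is illustrative arithmetic, not part of the cited study.
baseline_cost = 84.5      # estimated national cost before AYP monitoring
cost_with_ayp = 148.0     # estimated national cost with AYP monitoring
federal_increase = 1.0    # annual federal budget increase

extra_cost = cost_with_ayp - baseline_cost
print(f"Extra national cost: ${extra_cost:.1f} billion "
      f"({extra_cost / baseline_cost:.0%} above the baseline), "
      f"against a ${federal_increase:.0f} billion yearly federal increase.")
```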

Holding schools and districts responsible for the performance of students, sub-groups, and particular sections of students from diverse backgrounds who may have lacked adequate early-childhood development programs is not justified at all. A student's own inability becomes submerged in the school's or instructors' supposed inability to use appropriate teaching methods. This can damage the relationship between students and their teachers, subsequently undermining efforts toward the goals of the program (Novak & Fuller, 2003).

Conclusion

As shown above, overall performance can never be illustrated by AYP alone, but only by an appropriate combination of universally accepted performance indicators. This is why a substantial repeal, or at least a significant revision, of the No Child Left Behind Act is important.

References

  1. Aldridge, J. (2009). No Child Left Behind: Costs and benefits. Childhood Education. FindArticles.com.
  2. Kappan, M. W. (2003). No Child Left Behind: Costs and benefits. 84(9), 679-686. Association for Childhood Education International; ProQuest Information and Learning Company.
  3. John, H. (1989). Is the test score decline responsible for the productivity growth decline? The American Economic Review, 79(1), 178-197.
  4. Davare, D. (2004). Director of research, Pennsylvania School Boards Association, personal communication.
  5. Deno, S. (2007). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219-232. Through Robert Strauss.
  6. Hanushek, E. A. (2004). Some simple analytics of school quality (Working Paper 10229). National Bureau of Economic Research.
  7. Accountability Works. (2004). NCLB under a microscope. Education Leaders Council.
  8. Hanushek, E. A. (2003). The importance of school quality. Education Next.
  9. Baker, E. L., Linn, R. L., Herman, J. L., & Koretz, D. (2002). Standards for educational accountability systems (CRESST Policy Brief 5). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing.
  10. Choi, K. (2006a). Growth-based school accountability systems: Key issues and suggestions (Invited paper prepared for the U.S. Department of Education). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing.
  11. Choi, K. (2006b). A new value-added model using longitudinal multiple-cohorts data. Paper presented at the annual meeting of the American Educational Research Association, San Francisco, CA.
  12. Choi, K., Goldschmidt, P., & Yamashiro, K. (2005). Exploring models of school performance: From theory to practice. In J. L. Herman & E. H. Haertel (Eds.), Uses and misuses of data for educational accountability and improvement (NSSE Yearbook, Vol. 104, Part 2, pp. 119-146). Chicago: Blackwell Publishing, National Society for the Study of Education.
  13. Goldschmidt, P. (2006). Practical considerations for choosing an accountability model. Paper presented at the annual meeting of the American Educational Research Association, San Francisco, CA.
  14. Goldschmidt, P., & Hara, M. (2005). Are there really good schools? The role of changing demographics within schools. Paper presented at the annual meeting of the American Educational Research Association, Montreal, Canada.
  15. Goldschmidt, P., Roschewski, P., Choi, K., Auty, W., Hebbler, S., Blank, R., et al. (2005). Policymakers' guide to growth models for school accountability: How do accountability models differ? Washington, DC: Council of Chief State School Officers.
  16. Linn, R. L. (2005). Test-based educational accountability in the era of No Child Left Behind (CSE Rep. No. 651). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing. No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002).
  17. Novak, J., & Fuller, B. (2003). Penalizing diverse schools? Similar test scores, but different students, bring federal sanction (PACE Policy Brief). Berkeley, CA: Policy Analysis for California Education.
  18. Ray, A. (2006). Value added implementation in England. Paper presented at the First International Meeting on Value Added and School Accountability, Santiago, Chile.
  19. Stecher, B. (2006). "No Child" leaves too much behind. Washingtonpost.com.
  20. Thum, Y. M. (2003). Measuring progress toward a goal: Estimating teacher productivity using a multivariate multilevel model for value-added analysis. Sociological Methods and Research, 32, 153-207.
  21. U.S. Department of Education. (2006). No Child Left Behind. Growth models: Ensuring grade-level proficiency for all students by 2014. Web.
  22. Fuchs, L. S., & Fuchs, D. (1985). Determining annual yearly progress from kindergarten through grade 6 with curriculum-based measurement. In press, Assessment for Effective Intervention.
  23. Wermers, J. (2004). "No Child" called impractical. Richmond Times-Dispatch.