Using the Percentage of Students Meeting or Exceeding Their Growth Projections as an Evaluation Tool

Of all the statistics provided by NWEA in our series of reports, perhaps the easiest to understand and the most widely used is the percentage of students in a classroom, grade level, or school who meet or exceed their end-of-year growth projection. Each student’s growth projection (sometimes referred to as the student’s “growth target”) is based on the student’s grade, starting RIT score, and the subject in which that student is tested, and represents the median level of growth observed for similar students in NWEA’s norming sample. Put differently, the growth projection is our best estimate of typical growth for students at various points on the RIT scale.

This statistic is included in grade, building, and classroom growth reports, and it gives teachers and principals useful information about the performance of their students. While it doesn’t tell us how much growth students showed over the course of the year (students either met/exceeded their projections or did not), it does summarize the percentage of students whose growth was consistent with what we might expect from them based on their grade, starting achievement level, and the subject tested.
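To make the arithmetic behind this statistic concrete, here is a minimal sketch in Python. The student records, RIT growth values, and field names below are entirely hypothetical and are only meant to illustrate the tally; in practice the figures come straight from NWEA’s growth reports.

```python
# Minimal illustration: each record pairs a (hypothetical) projected fall-to-spring
# RIT growth with the growth the student actually showed.
students = [
    {"name": "Student A", "projected_growth": 8, "observed_growth": 10},
    {"name": "Student B", "projected_growth": 6, "observed_growth": 6},
    {"name": "Student C", "projected_growth": 11, "observed_growth": 7},
    {"name": "Student D", "projected_growth": 5, "observed_growth": 9},
]

# A student "meets or exceeds" the projection when observed growth is at least
# as large as projected growth.
met = sum(s["observed_growth"] >= s["projected_growth"] for s in students)
percent_met = 100 * met / len(students)

print(f"{met} of {len(students)} students met or exceeded their projection ({percent_met:.0f}%)")
```

In this made-up classroom, 3 of 4 students (75%) met or exceeded their projections; a classroom, grade, or school report simply aggregates the same yes/no outcome over more students.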

This statistic can serve a number of purposes: monitoring student growth in the classroom, summarizing school or district performance, guiding instructional planning, and so on. However, what we are observing more and more is the use of this statistic as the basis for teacher evaluation, which is not its intended use. For example, districts might establish a benchmark at the beginning of the year for the percentage of students who must meet or exceed their growth projections in order for a teacher to be considered “effective” (75% of students, for example), or teachers may be required to improve the percentage of students achieving this goal by a certain amount each year (such as an annual improvement of 5 percentage points). We continue to see this statistic used in these ways (and others) with greater frequency, so I wanted to take a moment to share a couple of general thoughts about the practice.

First and foremost, while the growth projections based on our 2011 norms do provide reasonable goals for students and teachers to strive to attain, recall that a growth projection represents the level of growth we might expect to see from the student in the middle of the distribution (the median amount of growth). In other words, approximately half of all students are going to show more growth than their projection, and the other half are going to show less. So, if the growth projections are being used as the basis for teacher evaluation, it is worth considering whether they are appropriate for the students actually in the classroom. For a classroom with greater academic, social, or behavioral challenges, the growth projections may be too high a bar; conversely, in classrooms where these challenges are absent, the bar may be too low. As I noted above, growth projections are generated for students based on their grade, starting RIT score, and the subject being assessed, so many of the other factors likely related to the amount of growth a student shows (special education status, for example) are not considered in the student’s final growth projection. As a result, the “bar” for a teacher’s students, in the form of our growth projections, may not adequately or accurately capture appropriate learning goals for those students. Because of this, teachers may be at an unfair disadvantage in evaluations based on this metric simply because of the students they teach and the challenges those students bring, factors that are largely outside of the teacher’s influence or control.
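As a back-of-the-envelope illustration of that first point, the sketch below (again in Python, with made-up numbers rather than NWEA norms) treats the projection as the median of a simulated growth distribution for similar students; by construction, only about half of those students end up at or above it.

```python
import random

random.seed(1)

# Hypothetical growth distribution for "similar" students (same grade, subject,
# and starting RIT score). The projection is defined as the median of this distribution.
growth = [random.gauss(mu=8.0, sigma=4.0) for _ in range(10_000)]
projection = sorted(growth)[len(growth) // 2]  # the median

share_at_or_above = sum(g >= projection for g in growth) / len(growth)
print(f"Share of similar students at or above the projection: {share_at_or_above:.1%}")
# Roughly 50% by construction: the projection is the midpoint, not a minimum expectation.
```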

Second, another potentially problematic issue we observe is that districts set goals for teachers without any context for what might be considered effective or adequate performance. For example, we have worked with districts that identified 75% as the benchmark for “effective” teacher performance, but that percentage was chosen as a “one-size-fits-all” figure, without any consideration of the prior performance of students in the district (or the level of growth we might expect to see from those students). And, as I mentioned in the previous paragraph, while this 75% benchmark might be appropriate for some students (and for some teachers), it likely isn’t going to be appropriate for all students and all teachers. It is certainly true that districts want to set high standards for their students and teachers, but when this statistic is used for teacher evaluation purposes, districts should also want to identify goals that reflect what students in a classroom can actually do and that are actually attainable for the teacher.

We hope to share more specific recommendations for using our tests for teacher evaluation purposes, along with some of the considerations and concerns we have about this process, so stay tuned for that. In the meantime, if you are reading this and you are using our tests for teacher evaluation (especially the percentage of students meeting or exceeding their growth projections), it’s important to restate that growth projections are just that: projections. They are great for setting student goals, but they are not definitive targets for where students should be by the end of the year.

If you have more questions about teacher evaluation, or want to share how your school/district is using MAP data for teacher evaluation purposes, please share your comments and thoughts below. We’d love to hear from you, and we can perhaps offer our own thoughts on your particular situation!
