For our students to learn to read, we need to teach using practices that work. Evidence-based practices, grounded in what we know from the science of reading, are a key starting point. But we know that in real life, students respond differently to practices that tend to work. That’s why we need to get some feedback cycles going: how is what we’re doing now playing out for this particular kiddo?
When we want to know whether a student is really benefitting enough from the interventions or instruction we are providing, we need to look at that student’s growth. One way to check growth on a faster cycle is through progress monitoring. In reading, a key progress monitoring tool is oral passage reading: we plot the student’s words correct per minute (WCPM) with regularity to see how it changes. When we see good growth, we stick with the program. When we don’t, we find ways to improve our instruction or intervention.
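To make that concrete, here is a minimal sketch, in Python, of the arithmetic behind “seeing good growth.” The weekly WCPM numbers are made up for illustration, and this is not how MAP Reading Fluency scores anything; it simply shows that the judgment rests on the slope of the data over time, not on any single score.

```python
# Hypothetical data: WCPM from a weekly oral passage reading probe.
weeks = [0, 1, 2, 3, 4, 5]
wcpm = [32, 34, 33, 37, 39, 42]

# Ordinary least-squares slope: the average WCPM gained per week.
n = len(weeks)
mean_week = sum(weeks) / n
mean_wcpm = sum(wcpm) / n
slope = sum((w - mean_week) * (s - mean_wcpm) for w, s in zip(weeks, wcpm)) / sum(
    (w - mean_week) ** 2 for w in weeks
)

print(f"Estimated growth: about {slope:.1f} WCPM per week")
```

A spreadsheet can do the same arithmetic; the point is that the trend across repeated measures, not any one data point, tells us whether to stick with the program.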
What comes before WCPM?
It doesn’t make a lot of sense to ask most kindergarteners to read a passage aloud. While they are busy learning letter sounds and learning to blend and segment phonemes, they are still going to score a lot of zeros for WCPM when asked to read an unfamiliar passage. It’s not that they aren’t growing in literacy; it’s just that WCPM is not the right metric yet.
So what are the right metrics? How should we do progress monitoring in phonics and word recognition, or in phonological awareness, for our youngest students? There are a few approaches out there to answering this question.
In MAP® Reading Fluency™, we designed a domain-level approach that supports the same kinds of analysis and decision-making that educators are used to now, with oral reading fluency. We have put the whole domain of phonological awareness, for example, on one scale. It’s a bit like a RIT score, really, but just for phonological awareness. As kids grow on that scaled domain score, we can make decisions about the effectiveness of our instruction or intervention in phonological awareness.
Having a single metric that holds steady across longer periods of time—much like WCPM—provides us a way to see how our instruction is adding up to more than a collection of skills. We can monitor overall growth in a critical domain.
Why don’t we just assess single skills, in sequence?
Another style of looking at progress is to measure one skill at a time, in sequence. In this kind of measurement—skills mastery monitoring—we might first watch letter sounds grow to mastery before we start monitoring simple word reading: “cat,” “bug,” “pin.” Instructionally, though, we know that once students know letter sounds for A, M, T, and S, we’ll be playing with words like “at,” “mat,” “sat,” “am,” and “Sam.” When a student can use these letters to begin sounding out these words, that’s progress. But on a single-skill measure of only letter sounds, that growth doesn’t show: they still only have four letter sounds. If we use narrow measures one by one according to our skills sequence, then we aren’t capturing all the meaningful growth that is happening in the larger phonics and word-recognition domain.
When we want to know whether a student is really benefitting from the interventions or instruction we are providing, we need to look at that student’s growth across the relevant domain, not just growth in one skill.
Teaching and learning are more holistic within a domain. Let’s think about a domain outside of literacy that also works this way: learning to swim.
Skills, progress, and swimming
I learned to swim at the YMCA. Back then, the swimming class levels had sea creature names: when I was in Polliwogs, my sister was already a Minnow. Our big brother was a Shark. While I was blowing bubbles and learning to kick my legs while holding the side of the pool with the other Polliwogs, my brother was doing the crawl across the whole pool the long way. But he hadn’t stopped blowing bubbles or kicking; for him, growth included improving his rhythmic side breathing and making his flutter kick more efficient.
My sister was eager to become a Shark, too. To move up a level, she didn’t just have to master a new skill introduced in Minnows. She also had to be better across a lot of skills. What the instructors cared most about was whether she was better at swimming than most Minnows.
Isn’t there a sequence of skills or stages that matters, instructionally? Sure. No Polliwog is ready to learn the underwater flip turn, like the Sharks learn. Similarly, we don’t introduce multi-syllabic word attack strategies to kids who know only four letter sounds. But the important point is that growth in most domains does not involve neatly checking off one skill—done!—before beginning the next. Within a domain, skills build and sharpen and integrate across a long trajectory.
Acquiring skills matters. But, ultimately, improving overall proficiency in a domain matters more. When we want to know whether a student is really benefitting from the interventions or instruction we are providing, we need to look at that student’s growth across the relevant domain, not just growth in one skill.
Decisions, decisions
Data is for informing real decisions. So what kinds of decisions is progress monitoring designed to inform, and just how does that work?
Progress monitoring is for evaluating the effectiveness of the instruction or intervention you are offering. The core question in monitoring progress is this: Is this working well enough, for this student? If it is, let’s keep this up. If it isn’t, let’s make a change.
For this decision to happen, we need to know what “well enough” means. That means setting an appropriate goal and considering the slope of growth required to get there. Here’s where it helps to have the whole domain on a scale, with norms. If we want this student to catch up to their grade-level peers in phonics and word recognition by the end of the year, we find the 50th percentile for spring in that domain. From where the student is performing now, we set a goal line that slopes steadily toward that end-of-year goal, using our scale as the metric on the vertical axis.
If we have a really successful intervention going on, we’ll see the data from regular progress monitoring fill in near and often above that goal line. We’re on track for success. If, instead, we see our data staying below that goal line, then we need to figure out how to improve our effectiveness. We might try a smaller group, more individualization—ways to offer a higher tier of intensity, for those of you who use a form of multi-tiered systems of support (MTSS).
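As a rough sketch of that decision rule, here is one way the goal line and the “above or below it” check could be expressed. The scores, scale, and threshold below are hypothetical, not MAP Reading Fluency’s actual norms or logic; they just illustrate drawing a line from the current score to the end-of-year goal and comparing progress monitoring data against it.

```python
# Hypothetical numbers: a fall score on some scaled domain measure, a
# spring goal (say, the 50th percentile for spring in that domain), and
# the number of instructional weeks in between.
baseline_score = 140
spring_goal = 185
weeks_to_goal = 30

# The goal line rises steadily from the baseline to the spring goal.
weekly_gain_needed = (spring_goal - baseline_score) / weeks_to_goal

def goal_line(week: int) -> float:
    """Score the goal line passes through at a given week."""
    return baseline_score + weekly_gain_needed * week

# Progress monitoring results so far: (week, observed score), made up here.
observations = [(3, 142), (6, 146), (9, 149), (12, 154)]

below = [(week, score) for week, score in observations if score < goal_line(week)]

if len(below) > len(observations) / 2:
    print("Most points sit below the goal line: time to intensify or change the plan.")
else:
    print("Scores are filling in near or above the goal line: keep this up.")
```

Real decision rules vary (how many points, how far below), but the shape of the decision is the one described above: compare the growth we are seeing to the growth the goal requires.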
The progress monitoring toolbox in MAP Reading Fluency
MAP Reading Fluency has provided efficient, automatically scored progress monitoring in oral reading fluency for years. Now, the same method of informing decisions about intervention effectiveness extends to two new progress monitoring tools: phonological awareness, and phonics and word recognition.
Check each tool out. See how it fits in your hand, admire its functional-yet-stylish design. But then let’s get to work on the task these tools are most helpful for: ensuring that each child is responding to our best-practice instruction and interventions with excellent growth.