Five Characteristics of Quality Educational Assessments – Part Four

This is our final blog post in the series breaking down the five characteristics of quality educational assessments. Our first post highlighted the need for content validity, our second focused on reliability, and our third centered on fairness.

This final post covers the last two characteristics of a quality educational assessment: student engagement and motivation, and consequential relevance.

1. Student engagement and motivation – Student engagement and motivation are somewhat intangible traits. Engagement is not synonymous with enjoyment: a student might be engaged in an assessment yet still not enjoy being assessed. Engagement speaks to the effort the student puts into the assessment. The greater that effort, the more likely her performance will give a representative snapshot of what she understands, knows, and can do.

Student engagement is tracked by many different indicators, depending on the type of assessment and its delivery. Some indicators might include the ones below (a brief code sketch of how such checks might work follows the list):

+ Student response patterns on the infamous “bubble tests.” If a student has an abnormal response pattern (one that seems artificial, such as AAAABBBBCCCC), then the test will likely be considered invalid.

+ On computer-delivered assessments, student response time can be measured. If a student’s response time is consistently and significantly shorter than the average response time range, the test will likely be invalidated.

+ On assessments requiring a written response, such as an extended essay, student engagement might be tracked by the length of the response. A response that falls significantly below the expected word count, for example, might cause the test to be invalidated.
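
To make these indicators concrete, here is a minimal sketch in Python of what rule-based engagement flags like these might look like. The function names, thresholds, and flagging rules are all illustrative assumptions, not the methods of any actual testing program, which would typically rely on more sophisticated statistical models.

```python
# Hypothetical engagement flags; every threshold below is an illustrative assumption.

def has_repetitive_pattern(responses: str, run_length: int = 4) -> bool:
    """Flag an answer string containing long runs of one choice, e.g. 'AAAABBBBCCCC'."""
    run = 1
    for prev, curr in zip(responses, responses[1:]):
        run = run + 1 if curr == prev else 1
        if run >= run_length:
            return True
    return False

def is_rapid_responder(times: list[float], avg_time: float, ratio: float = 0.25) -> bool:
    """Flag a test where most item response times fall far below the group average."""
    fast = sum(1 for t in times if t < avg_time * ratio)
    return fast > len(times) / 2

def is_below_word_count(essay: str, expected_words: int, floor: float = 0.5) -> bool:
    """Flag an essay response that falls significantly below the expected word count."""
    return len(essay.split()) < expected_words * floor

# Example: each of these responses would be flagged.
print(has_repetitive_pattern("AAAABBBBCCCC"))                 # True
print(is_rapid_responder([2.0, 1.5, 3.0], avg_time=20.0))     # True
print(is_below_word_count("Too short.", expected_words=250))  # True
```

Each check is deliberately simple; the point is that engagement indicators are mechanical signals computed from response data, while motivation, discussed next, is not directly observable.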

Engagement can be observed through response patterns, word count, time spent on an item, and so on, but motivation can only be inferred. Engagement and motivation are not the same thing: a student demonstrates engagement because of his or her motivation. A student can be motivated to do his or her best on an assessment for a variety of reasons, generally organized into an external/internal schema. External motivation might come from prizes or rewards tied to assessment performance or, conversely, from punishments or ‘consequences.’ Internal motivation might come from the desire to do well on the test, to garner positive attention or praise, or to satisfy a competitive urge to outperform peers.

Classroom teachers have an advantage when it comes to gauging student engagement and understanding the motivating, and demotivating, factors that might be in play for their students. During formative assessment practice and classroom assessment, teachers can see whether students are engaged and adjust midstream. They can support the factors that motivate students and help them navigate the factors that demotivate them.

2. Consequential relevance – When educators spend precious instructional time administering and scoring assessments, they want the utility of the results to be worth the time and effort. They want to understand the data and use it to meaningfully adjust their instruction and better support student learning. That usefulness is what consequential relevance refers to.

Assessment data can help answer all kinds of questions and inform a variety of decisions, ranging from whether a student has shown proficiency on a state summative exam, to whether students have performed well enough to earn college credit via an AP exam, to whether, and how much, each student has grown during different intervals of the academic year. The usefulness of an educational assessment resides in the utility of its data to inform decisions, and for that, you have to understand what the data mean. Some questions to help you get there include: What exactly did the assessment measure? What didn’t it measure? Given that, what can the data tell you? What can’t they tell you? What kinds of inferences can be made from the data? And what kinds of decisions can the data reasonably inform?

How do the educational assessments you or your district use reflect these traits?

