Design Challenge Winner: Student Assessment Engagement
By James Soland, NWEA
Any educator who’s administered a test will likely relate to this anecdote: while most students in class appear absorbed in the assessment, there’s one student who stares off into space. Or who doodles on his scratch paper. Or who simply puts her head down on the desk. And it’s usually unsurprising when that student’s scores come back low.
What we’ve learned at NWEA through our research partnership with Santa Ana Unified School District is that these small signs of disengagement from a test can be subtle manifestations of deep-rooted issues related to self-belief, motivation, and engagement. They can be clues to how students perceive themselves, and to where they see themselves going.
These small test disengagement behaviors allowed us to assess social-emotional learning (SEL) competencies, and academic self-management specifically, in an entirely new way.
The link between behaviors and SEL is one reason Santa Ana has been interested in assessing academic self-management. Students with low self-management often come to class unprepared, struggle to follow directions, and avoid independent work. Put simply, they have trouble staying focused and completing tasks. Many students who consistently struggle to complete academic tasks do not believe themselves capable of finishing those tasks successfully. If there’s little hope of doing something satisfactorily, what’s the point of undertaking it in the first place?
Small behaviors like coming to class unprepared are often signs that a student lacks the self-confidence to engage in basic classroom tasks, and may be at risk of dropping out.
As is so often the case with SEL competencies, measuring self-management is hard. It is typically measured with surveys, which can be biased when students report their self-management inaccurately, either because they are reluctant to admit they are not focused during school or because they are simply unaware of just how disengaged they are. While educators can measure self-management by observing behaviors like coming to class unprepared, it’s another matter to ask teachers who already have too many responsibilities to quantify, record, and report those behaviors.
As it turns out, we found a solution in the form of metadata that are often captured and discarded when students take achievement tests on a computer. It’s pretty simple: we measure how long a student spends answering each question on the MAP Growth test and, after it’s over, count how often the student responded too quickly to have understood the question’s content. This behavior is referred to as “rapid guessing”, and it is one way educators can measure how engaged a student is on a test.
What we found is that a student who rapidly guesses on the test also typically has low scores on self-management surveys. Students who rapidly guess on 10% or more of the questions are also much more likely to be chronically absent, fail courses, and be suspended, all of which are signs a student may eventually drop out. Therefore, educators can gain useful information about self-management systematically and quickly every time a student takes the test, and those data do not suffer from the same biases as subjective survey scores.
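The logic described above can be sketched in a few lines of code. This is an illustrative example only: the 2-second response-time threshold and the helper names are hypothetical stand-ins, not NWEA's actual scoring parameters; the 10% cutoff comes from the figure mentioned above.

```python
# Illustrative sketch of rapid-guessing detection from per-item
# response times. The 2-second threshold is a hypothetical value
# chosen for demonstration; the 10% flagging cutoff mirrors the
# figure cited in the text.

def rapid_guess_rate(response_times, threshold_seconds=2.0):
    """Fraction of items answered faster than the threshold."""
    if not response_times:
        return 0.0
    rapid = sum(1 for t in response_times if t < threshold_seconds)
    return rapid / len(response_times)

def flag_possible_disengagement(response_times, threshold_seconds=2.0, cutoff=0.10):
    """True if the rapid-guessing rate meets or exceeds the cutoff."""
    return rapid_guess_rate(response_times, threshold_seconds) >= cutoff

# Example: seconds spent on each of ten test items.
times = [12.4, 1.1, 9.8, 0.9, 15.2, 1.3, 8.7, 11.0, 0.8, 14.5]
print(round(rapid_guess_rate(times), 2))   # 0.4
print(flag_possible_disengagement(times))  # True
```

In practice the threshold would need to vary by item, since a long reading passage and a short arithmetic problem demand very different response times; the point is simply that the raw timing metadata makes this kind of flagging cheap and systematic.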
Educators can use rapid guessing in a multiple-measures approach to assessing self-management. For example, if a student reports high self-management on a survey but rapidly guesses often on a test, then teachers may question whether self-report bias is at play. We can also demonstrate just how much low self-management affected a student’s test score, making a direct link between achievement and SEL in a way that has not been quantified before.
Details on how educators can use these measures can be found here: https://www.nwea.org/resources/using-student-assessment-engagement-measure-student-sel-and-school-engagement/
Small behaviors. Discarded computer data. Sometimes, the little things tell us a lot.
What signs of disengagement have you seen when students take tests? In your classrooms more broadly? How do they relate?
What other data would you want to include in a multiple-measures approach to assessing self-management that includes survey scores and rapid guessing metrics? How might you use these measures in tandem to foster academic self-management?
Disclaimer: The Assessment Work Group is committed to enabling a rich dialogue on key issues in the field and seeking out diverse perspectives. The views and opinions expressed in this blog are those of the authors and do not necessarily reflect the official policy or position of the Assessment Work Group, CASEL or any of the organizations involved with the work group.