Practitioners Talked; We Listened: Lessons from Building Consensus on Practical SEL Assessment
By: Elizabeth Nolan and Deborah Moroney, American Institutes for Research
As the saying goes, “You can’t improve what you don’t measure.” As researchers, we hear this quite a bit in our hallways, and we take this statement very seriously. For many reasons, the quote is true. Without measurement, we can’t determine whether interventions, programs, practices, and policies are working. Measuring SEL, broadly, provides crucial feedback about how adults employ practices and strategies to support social and emotional learning; how youth and adults are developing social and emotional competencies; and how agencies and organizations are systematically supporting social and emotional learning and development.
Here’s the caveat to our favorite adage, though: “You can’t improve what you don’t measure well.” In the SEL field, multiple assessment issues – from complexity and reliability of the measures, to the use and interpretation of results – baffle even the most experienced researchers. But leaders in schools and out-of-school time programs are already assessing SEL practice and social and emotional competencies, without waiting for research to catch up. We think it is critically important to tap into the perspectives and insights of leading practitioners about how to measure SEL well. Rarely do we ask those who are doing the assessing and using the data what they think, prior to the adoption of large-scale changes that include measurement. What advice do practice leaders have now, as they are implementing, assessing, and tweaking their practices? What promises and pitfalls do they see on the horizon?
Through the National Practitioner Advisory Group on Using Data to Inspire SEL Practice (NPAG), organized in partnership with Dale Blyth and CASEL, we initiated such conversations in the summer of 2018. We convened 28 practice leaders from across the country – who work in schools, districts, states, and out-of-school time programs – to allow them to share perspectives with their peers and with us. At the outset of our time as a working group, we quickly realized that the SEL field was missing both a strong, practitioner-led, unified voice about SEL assessment and data, and practitioner-friendly assessment guidance. In an effort to change that, we coordinated the creation of an NPAG consensus statement on SEL assessment and data.
Led and written by a thoughtful subgroup of NPAG leaders, the statement synthesizes the many discussions we have had throughout our time together. Ten key beliefs on SEL data and assessment emerged from our ongoing conversations. Within the statement, you’ll find insights on what practice leaders believe will make SEL assessment successful, with actions to take and reflection questions to consider.
Below, we offer a preview of three key takeaways from NPAG’s work.
Assess in a strengths-based, equitable way. Throughout our conversations, NPAG members concentrated on the concern that SEL may take on a deficit lens if not implemented and assessed carefully. SEL is intended to be a universal support. However, in practice, NPAG members have observed that SEL is often reserved for youth whom adults see as “needing SEL.” Judging young people’s competencies – while they’re learning, growing, and making mistakes – is a delicate task, which requires a strengths-first focus. Can a young person really “lack self-management” or “lack ‘grit’” when what he or she actually carries are the effects of stress and anxiety from outside the school environment? If we attribute these sorts of labels to individuals, SEL assessment may result in more harm than good.
Build adult capacity to assess SEL, and interpret and use SEL data. Educators are surrounded by, sometimes drowning in, data. Even as more and more data is collected, it can be a challenge to make sense of it all. SEL data is not as easily quantified as measures like math or reading skills, because social and emotional competencies are more malleable – that is, changeable and adaptive. Therefore, adults may need distinct training on how best to interpret SEL data. How does it work with, yet differ from, the academic and behavioral outcome data that educators already have?
Assess to continuously improve. We shouldn’t collect data for data’s sake – instead, we should consider how to align practices, instruction, and assessment in a way that identifies areas of strength as well as areas for improvement. SEL data should be used together with other data that’s already collected, and as our colleagues point out, only if this data adds something new to the conversation.
Through NPAG, we have learned a lot about practice leaders’ current opportunities and challenges in SEL assessment. Certainly, there is no one-size-fits-all approach to assessing SEL effectively. Common principles, like those outlined in NPAG’s statement, can serve as a guide to assessing SEL ethically, which can help to foster the sustainability of SEL practices.
Want more? See the list of all ten practitioner beliefs below, read the full statement here, and access the accompanying resource list here. For other resources, including the new State of SEL Assessment report, check out the Measuring SEL Resources page here.
Making SEL Assessment Work: Ten Practitioner Beliefs
- Begin intentionally and with a strong vision.
- Assess strengths, not deficits.
- Create a positive culture and climate.
- Implement and assess with an equity lens.
- Recognize the importance of adult social-emotional competence.
- Measure for growth, not an endpoint.
- Foster adult capacity continuously.
- Authentically engage and collaborate with youth and families.
- Use data to continuously improve SEL practice.
- Implement universal and differentiated approaches to SEL development.
What additional beliefs or advice would you recommend?
I think there is value not only in assessing strengths but also in identifying trouble areas and opportunities for improvement.
I believe the idea and promotion of strength-based assessment at the expense of any other work is misguiding the field. The matter is not whether your assessment is named a “strength” assessment or not: a high “problem” score can easily be presented as a low “strength” score, and any assessment can be framed and named in whatever direction one desires. The point of assessment and screening is whether the tool identifies areas of need and allows you to evaluate your efforts in a reliable, meaningful way. Look, for example, at the table presented by CASEL on SEL competencies (see the CASEL site). Three key points emerge: SEL skills can be acquired/taught through a variety of means (column one), resulting in the build-up of a person’s SEL core competencies (column two), which ultimately lead to long-term behavior outcomes (column three). Those long-term outcomes include reductions in conduct problems and internalizing behaviors, and increases in social skills, academics, and so on. These are ultimately the goals of SEL skills, and that is what needs to be assessed with instruments that are reliable and valid for these constructs. “Pounding” the message of strength-based assessment as the panacea of SEL success has not, as of today, produced the outcomes one would expect. Looking at the assessment tools presented as resources on the CASEL website, one can easily see why the field is not moving forward and why there is so much confusion about strength-based assessments. Should I point out that more than 100 constructs with good “face” validity are promoted in the list of these instruments, yet with no more than a few supporting research studies? I strongly believe that the concept of “strength” should be the foundation of the curricula and any other practices or interventions we promote across all MTSS tiers (see column one in the CASEL table). That is where we need to concentrate our efforts: judging the quality of the practices promoted for use in schools.
The field has been flooded with SEL materials, in many cases nothing more than fancy websites and colorful posters and handouts, work that lacks the research and evidence to support its quality. Do these materials result in the build-up (strengthening) of the SEL core competencies? Self-awareness and self-management are no doubt important SEL skills, but can one find an area of life that is not affected by these concepts? What would be important is to find, through appropriate and efficient assessments, behaviors that can demonstrate the impact of any of these SEL constructs. If we follow this “strength-based” assessment trend, we simply ignore decades of literature and statistics on the need to address immediate mental health issues. The CDC statistic that 1 out of 5 children need help NOW is completely ignored. I find it quite interesting that when one reads any of the articles on strength-based assessments, the introductions begin with CDC statistics, and in some cases the authors go on to claim validation of their strength-based instruments using clinical populations. So let’s shift a little away from the “fad” of strength-based assessment and concentrate our efforts on investigating and establishing the qualities of strength-promoting curricula. It is not a matter of whether your assessment and screening tools are strength-based or non-strength-based. The question is: Do your instruments lead you to actionable data to address the behavioral/mental health needs of the 1 out of 5 kids while also assessing the SEL constructs connected with those behaviors? Do they allow you to evaluate your long-term efforts to make a difference through SEL programming? This is what we have been doing effectively in our work with the Behavior Intervention Monitoring Assessment System-2 (BIMAS-2): identifying system-wide and individual student mental health needs through behaviors connected to both the CDC statistics and the SEL core competencies.
Achilles N. Bardos, Ph.D.
Professor of School Psychology