Choosing and Using SEL Competency Assessments:

What Schools and Districts Need to Know


Step 7: Use data

As noted in Step 2, uses of data from SEL competency assessments can fall into two broad categories:

  • Formative uses for learning.
  • Summative uses of learning.

This section takes a closer look at some examples of formative and summative uses of SEL assessments from districts across the country.

Formative uses of SEL competency assessment

Formative assessment is one of the most powerful tools classroom teachers can use to enhance student learning, providing insight into what students are learning and whether teachers should change their approach. [1] School and district leaders and teams can also use assessments formatively to determine whether implementation strategies are effective or new ones are needed.

Here, we outline a variety of purposes that formative assessment of students’ SEL competencies can serve:

Purpose #1: Promote students’ competencies by fostering effective SEL instruction in classrooms
Purpose #2: Create an instructional plan based on a classroom profile of competencies
Purpose #3: Elevate student voice and promote student engagement and agency
Purpose #4: Improve school and district implementation strategies
Purpose #5: Foster equitable learning environments by revealing disparities

Purpose #1: Promote students’ competencies by fostering effective SEL instruction in classrooms

According to Robert Marzano, [2] effective SEL formative assessment focuses on three essentials:

  • Explicit learning goals regarding SEL skills.
  • Progress toward those learning goals.
  • Guidance in the steps needed to progress toward learning goals.

Marzano also outlines stages of learning from lowest to highest for any given SEL competency:

Figure 2. Learning stages for SEL competencies


Schools and districts can use this kind of learning progression to develop SEL assessment rubrics that enable formative uses. For an example of this kind of rubric, see Appendix B and/or Appendix C. Data can then be gathered to complete the rubric using strategies such as probing discussions, student-generated assessments, and teacher observation of student behavior during routine instruction and learning. [3]

Getting teachers and students involved. When developing SEL competency rubrics, we recommend that schools and districts consider including teachers and students to deepen understanding and build buy-in.

It can also be beneficial for schools to translate these learning progressions into “student-friendly” versions that are easily understood by students at various developmental levels. Several of CASEL’s partner districts have created such language, in some cases framing it as “I Can” statements: a series of statements representing what a student “can” do if schools successfully implement SEL. See an example of “I Can” statements from a CASEL district partner, Metropolitan Nashville Public Schools.

In creating these versions, especially when students are involved, schools and districts empower students to be self-directed learners, which fosters a sense of agency. Moreover, student engagement with these learning progressions may foster the competencies they are meant to assess.

District example: Naperville, Illinois

Naperville Community Unit School District 203, in DuPage County, Illinois, has made developing and using rubrics a key lever for fostering high-quality SEL instruction.

Naperville’s rubrics articulate what students need to know and be able to do at each grade level, painting a picture of three degrees of development (e.g., “Beginning,” “Approaching,” and “Secure”) for each competency. They are aligned with Illinois’ SEL standards, as well as the district’s instructional curriculum for SEL. They articulate a grade-specific and developmentally appropriate learning progression that can be quickly observed by teachers throughout the course of daily instruction.

Teachers are provided training and resources on how to identify competencies at each grade level, and on what resources are available to help them foster their students’ competencies. See Appendix C to learn more about Naperville’s rubric and see examples of their work.

Other approaches to classroom formative use of assessment. Marzano’s recommendations for developing formative assessment rubrics and the example provided by Naperville represent one approach to formative assessment use in classrooms, but there are others. Some SEL programs integrate ongoing assessment into their curriculum so that teachers can check progress as they implement the program. In other cases, the line between instruction and assessment blurs, as with computerized game-based programs that build skills within simulated scenarios of decision-making and social interaction while also assessing students’ abilities.

Purpose #2: Create an instructional plan based on a classroom profile of competencies

Results of survey measures and performance-based assessments can give teachers an overview of their students’ social and emotional strengths and opportunities for growth at the start of the year, which in turn can inform instructional planning for the year.

These assessments can also be retaken midyear to provide teachers an opportunity to check on their students’ progress and adjust their instructional approach or strategies, if indicated by the data. To be useful, however, results must be provided in a timely fashion. If there’s a long delay, findings may no longer accurately represent the students’ actual progress.

Increasingly, these assessments are administered by computer (e.g., Panorama, DESSA, SELWeb; see the SEL Assessment Guide for more information), sometimes as an optional package from the developer. This can shorten the delay in getting results, but these options can carry additional charges from developers or require additional equipment and initial and ongoing technical support.

Purpose #3: Elevate student voice and promote student engagement and agency

School staff can empower students in their own social and emotional development by involving them in the interpretation of assessment results and problem-solving. Involving students in the process gives them a sense of ownership of their own learning and increases their motivation, engagement, and sense of agency.

District example: Washoe County, Nevada

In Washoe County, Nevada, district leaders realized that fully understanding their data would require engaging their most important stakeholder group—their students. The district began conducting annual student-led data summits, during which students lead sessions focused on understanding data and developing improvement plans based on what the data say. More information and resources about Washoe’s work are available.

District example: Austin, Texas

During the 2017–2018 school year, Austin Independent School District in Austin, Texas, piloted a process called a “student data dig” in a single high school. This two-session process was designed “to provide an opportunity to empower students’ voices by gaining insight into their experiences at school, discuss students’ needs, and brainstorm ways to improve students’ experiences.” Students were enthusiastic and helped generate specific ideas. In its report on the process, the district indicated it is likely to consider expanding to additional schools.

Purpose #4: Improve school and district implementation strategies

Combined with implementation data, student assessment results can empower district and school leaders and teams to make decisions related to policies, practices, and programs. Examples include adding professional learning or programs to boost instruction. Keeping reliability and validity in mind, survey assessments can often be a good option because they can be administered quickly and at fairly low cost.

The specifics of how SEL competency data are used formatively for continuous improvement can vary based on whether decision-making is controlled by the central office or schools have more autonomy:

  • In centrally controlled districts, district leaders may use the data to choose assessments and determine how to allocate supports, professional development, and program adoption. Data may shed light on how students are responding to existing SEL programs or practices, or identify schools where especially strong improvement is occurring so district leaders can explore what factors might have contributed to it.
  • When individual schools have more autonomy, schools may use data similarly, but with school principals or SEL teams leading the way. There may be a focus on examining students’ development across grade levels instead of across schools (although district-level leaders may also compare both across schools and across grades).

District example: California’s CORE districts

As mentioned in Step 5, California’s CORE network of eight large urban school districts has been working for many years to implement social and emotional learning at scale. Through their efforts, the CORE districts are pushing themselves and others to rethink how schools define success for their students. Along with academic and behavioral outcomes, the CORE districts utilize an annual survey that measures SEL competencies and areas of school culture and climate to foster continuous improvement in their schools. 

Through strategic research partnerships with organizations like Transforming Education, Harvard University, and Policy Analysis for California Education (PACE), the CORE districts have equipped their schools with the information needed to continuously improve while using techniques of improvement science to accelerate their learning and share ongoing insights across their schools.

Over the last few years, CORE’s research partner PACE has disseminated numerous reports, briefs, and infographics on this work. To learn more, see PACE’s Publications page and the case study of the CORE districts.

Purpose #5: Foster equitable learning environments by revealing disparities

By disaggregating assessment results by student groups, school or district teams can identify opportunities for improving policies and practices, ensuring that SEL efforts benefit all students. To do this, districts and schools must identify disparities among student groups, systemic root causes, and strategies to improve systemic practices and policies that contribute to disparities.

Before diving in, it is important to consider whether assessments will allow for valid comparisons across student groups. We encourage schools and districts to ask assessment developers to share any research evidence that their instrument performs equivalently across the groups of students for which schools may want to disaggregate data.
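As a simple illustration of what disaggregation looks like in practice (the dataset, group labels, and score column below are entirely hypothetical), survey results can be grouped by student group to surface differences in average scores alongside the number of respondents in each group:

```python
import pandas as pd

# Hypothetical survey results; group labels and score column are illustrative only
results = pd.DataFrame({
    "student_group": ["A", "A", "B", "B", "B", "C"],
    "self_management_score": [3.2, 3.8, 2.9, 3.1, 2.7, 3.5],
})

# Disaggregate: average score and number of respondents per group
by_group = results.groupby("student_group")["self_management_score"].agg(["mean", "count"])
print(by_group)
```

Reporting the count next to the mean matters: averages from very small groups are unstable, so teams can judge how much weight an apparent disparity can bear.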

It is also essential to deepen staff capacity for using data to explore differences among student groups. Doing this requires a shared understanding and agreement that:

  • The school or district is looking for evidence of disparities, so they can determine and act on the root causes of those disparities.
  • The system in which students live and learn is responsible for supporting all students’ social, emotional, and academic development.
  • Disparities among groups of students indicate a need for improvement in the practices and policies of the system, not deficiencies or failures of the students themselves.

Summative uses of SEL competency assessment

Compared to those used formatively, assessments intended for summative uses tend to be more formal, less frequent, and appropriate for system-level decision-making. Since they are often administered to all students in a school or district, issues of cost, time, and training are especially important considerations for implementing at scale. For this reason, surveys are frequently used because they tend to be comparatively low-cost, quick to administer, and require less training than other assessment methods.

Beyond the considerations already outlined for student SEL competency assessment generally, there are some additional important considerations for using these assessments for summative purposes, which are sometimes associated with higher-stakes decision-making:

Align instruction/program and assessment. It is essential that the competencies targeted by an SEL program be the same as those being assessed. If a program focuses on empathy, then an assessment that explicitly measures empathy should be used. We recommend that school and district teams carefully read program and assessment documentation to learn how competencies are defined, as sometimes definitions differ even when labels or titles are the same.[4]

Ability to detect improvement. Sensitivity of measurement is an important issue for both assessment selection and evaluation design. It is essential that assessments be able to detect improvement when it occurs, and that evaluations be designed so that analyses occur on a schedule appropriate for the expected timeline for improvement (i.e., analyses designed to support inferences about change should not be planned before research suggests improvement is likely to occur). We recommend that schools and districts ask assessment developers for evidence that their instrument can detect change over time. When designing a plan to evaluate the impact of programming, evaluators should align their plan with research that indicates a feasible timeline for improvement. This allows schools and districts to set realistic expectations among stakeholders and to use resources efficiently by planning assessment and analyses only when improvements are expected to occur.
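One simple first look at whether scores moved between two administrations is a paired comparison of the same students’ pre and post scores. The sketch below assumes matched pre/post records and uses hypothetical data; it is only a screen for change, not a full evaluation design:

```python
from scipy import stats

# Hypothetical pre/post SEL survey scores for the same eight students (illustrative only)
pre  = [2.8, 3.1, 2.5, 3.0, 2.9, 3.3, 2.7, 3.2]
post = [3.0, 3.4, 2.9, 3.1, 3.2, 3.4, 3.0, 3.3]

# Paired t-test: did scores change for the same students over the interval?
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A non-significant result so early in implementation would not by itself indicate failure; as the text notes, the timing of the analysis should match the expected timeline for improvement.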

Know what to expect when interpreting data. For example, if your theory of change and/or implementation plan indicates that a year of implementation is likely to be needed before improvements in certain student outcomes can be expected, then failure to find improvements in schoolwide or districtwide student SEL scores sooner does not necessarily indicate a need to change course. Moreover, improvement does not always occur in a linear or incremental manner. Use previous research and past experience to estimate likely timelines for improvement for particular competencies and keep these timelines in mind when evaluating success, interpreting findings, and reporting to stakeholders.

Inquire about measurement equivalence. Researchers and assessment developers should establish measurement equivalence (or invariance) across student groups: statistical evidence that an assessment functions similarly across diverse communities, so that score interpretation is meaningful and appropriate for all members of a community. We encourage educators to ask assessment developers about these statistical analyses. For example, if statistical analyses fail to establish measurement equivalence between groups of students based on their ELL status, differences in scores between students with and without ELL status cannot safely be assumed to represent real differences in whatever the survey is measuring. For more about analyzing measurement invariance, see this analytic “how-to” resource from the Claremont Evaluation Center and this example analysis of a survey instrument from the University of Connecticut.
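To illustrate why equivalence matters (all item names and response data below are hypothetical, and this rough screen is no substitute for the formal multi-group factor analyses developers should run), one crude check compares each item’s correlation with the rest of the scale across groups. An item that tracks the scale in one group but not in another may be functioning differently for those students:

```python
import pandas as pd

def item_rest_corrs(responses: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of the remaining items."""
    total = responses.sum(axis=1)
    return responses.apply(lambda item: item.corr(total - item))

# Hypothetical item responses for two student groups (illustrative only)
group_a = pd.DataFrame({"item_1": [1, 2, 3, 4, 5],
                        "item_2": [1, 2, 3, 4, 5],
                        "item_3": [2, 2, 3, 4, 4]})
group_b = pd.DataFrame({"item_1": [5, 4, 3, 2, 1],  # item_1 behaves differently here
                        "item_2": [1, 2, 3, 4, 5],
                        "item_3": [1, 2, 3, 4, 5]})

print(item_rest_corrs(group_a))  # item_1 tracks the rest of the scale
print(item_rest_corrs(group_b))  # item_1 runs against the rest of the scale
```

In this contrived example, item_1 correlates strongly with the other items in one group and negatively in the other, the kind of pattern that would prompt a formal invariance analysis before comparing group scores.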

Researcher-practitioner partnerships. Some districts may not have the capacity to conduct this type of research. In these cases, we encourage schools and districts to consider forming a researcher-practitioner partnership to obtain the measurement equivalence evidence needed to confidently compare student groups. Such a partnership can allow data to be tested for measurement equivalence and provide important insights about the appropriateness of exploring differences between groups.

District example: Washoe County, Nevada

Supported by funding from the Institute of Education Sciences (IES), Washoe County School District (WCSD), CASEL, and assessment researchers from the University of Illinois at Chicago (UIC) collaborated on this kind of research-practice partnership. This partnership produced both improved assessments and improved measurement practices for the district and sparked a greater focus on student voice and engagement. [5]

Findings from this research and the subsequent developments in WCSD have been covered by media outlets (see “Students help design measures of social-emotional skills” in Education Week).

There are fewer purposes for which we recommend that student SEL competency assessments be used summatively; two are outlined here:

  • Purpose #1: Evaluate the impact of an SEL classroom program
  • Purpose #2: Report to stakeholders about the progress of SEL initiatives

Purpose #1: Evaluate the impact of an SEL classroom program

Assessments administered before, during, and after the use of an SEL classroom program can provide evidence of the degree to which its implementation might have led to improvements in students’ SEL competencies. Teachers and leaders can use this information to decide whether to continue with the programs and practices, add supports for implementation, or change their approach. Findings from these assessments can also contribute to a growing body of research on the impact of SEL on student development.

Purpose #2: Report to stakeholders about the progress of SEL initiatives

When aggregated at the school or district level and reported to stakeholders, student assessment results can build and maintain support for SEL by showing how investments in SEL are positively impacting students. When using data for this purpose, it is important to clearly communicate timelines and expectations to stakeholders at the start, so they are not surprised if improvement is not realized early on. Regularly remind stakeholders of these expected timelines, especially when reporting on initiative progress. Consider limiting reporting to analyses that align with the expected timeline for change. If improvement in students’ competencies is expected on a particular implementation timeline, analyses of these student data need not occur until after that timeframe. This also ensures that valuable staff time is spent only when indicated by your implementation and evaluation plan.

District example: Austin Independent School District

The Research and Evaluation Department in the Austin Independent School District has been an integral part of the district’s SEL effort. Since the district began its SEL work in 2011, the department has consistently published research reports and briefs on a wide range of related topics.

NOTE: There are other purposes for which summative data are used in education, such as for state accountability systems and teacher evaluation. However, those are not discussed in detail here, since we do not generally recommend that student SEL competency assessments be used for those purposes.

Footnotes:

[1] Black & Wiliam, 1998; Kingston & Nash, 2011; Marzano, 2015

[2] Marzano, 2015

[3] Marzano, 2015

[4] Jones, Bailey, Brush, Nelson, & Barnes, 2016

[5] Davidson, Crowder, Gordon, Domitrovich, Brown, & Hayes, 2018