Step 5: Select assessment tools
There are several practical factors that may drive the choice of assessment:
Cost. Some assessments are free, whereas others require the purchase of a license, often on a per-student, per-year basis. Costs frequently reflect implementation supports offered by assessment developers. Alternatively, some developers provide the assessment instruments for free but charge schools and districts for implementation support, and others make their instruments available only as part of a more comprehensive system. Implementation supports typically offered for a fee include survey administration and data collection, scoring, data reporting and digital dashboards, staff training on data collection and data use, and ongoing technical assistance.
Administrative and analytic capacity. Depending on a district's or school's capacity for research and assessment, comprehensive, paid implementation support from the developer may or may not be needed. Although these supports can represent a significant investment, especially for large districts with many schools and large student populations, they can be valuable to practitioners and support the continuous improvement of SEL. For example, they can provide easy-to-interpret data visualizations, the ability to disaggregate data to illuminate student strengths, and links within the data reports (ideally tied to results) to resources that guide instruction or implementation improvement.
Scalability. In some situations, assessments must be administered to a large student population. Although cost is a factor in scalability, so are technology requirements and the length of the assessment. For example, when California's CORE districts (eight of the state's largest districts) wanted to administer SEL assessments to nearly one million students in grades 1-12, they realized they would need assessments that were fairly inexpensive, had no special technology requirements, and minimized the impact on instructional time. The CORE districts chose student self-reports, administered in grades 4-12, and teacher reports of student competencies, administered in grades K-3. The assessments could be completed online or via pencil and paper and took each student and teacher approximately 20-30 minutes per survey administration. See Expanding the Definition of Student Success: A Case Study of the CORE Districts for more information about CORE.
Reporting needs. The choice of assessment can also be driven by questions about the level at which scores are reported (student, classroom, grade, school, etc.), who will have access to the reports, and whether the reports include associated resources, such as programs and practices recommended on the basis of the results. For example, some assessments provide scores only at the aggregate level, while others report scores at both the individual and aggregate levels but do not offer accompanying resources.
Look beyond terms to definitions
As Stephanie Jones and her team have found through the Taxonomy Project, many different terms can be used to describe the same SEL competency (see Dr. Jones' What is the same and what is different white paper). When searching for measures, it is therefore important to understand the underlying definitions, overlap, and distinctions among different references to specific competencies.
For example, many educators want their students to develop a growth mindset, defined as a belief that ability and skill are malleable and will increase with effort rather than being fixed and outside of one's control. Measures to assess growth mindset do exist. However, growth mindset is closely related to other characteristics, including self-confidence, self-efficacy, and empowerment. A search limited to assessments with "growth mindset" in the title might therefore miss some measures that could be suitable.