Guest post by Jessica Ainsworth


The use of standardized or large-scale assessments affects the decision-making of policymakers, educational leaders, teachers, and other stakeholders. Those of us at Lithia Springs High School learned this firsthand when standardized testing results and other factors placed us on the state’s “at-risk” list. Lithia Springs High was considered a failing school in Georgia, and we had an enormous task before us to change that perception.

We relied heavily on our formative assessments to guide our school improvement process and give us a window into the future successes of our students, but that reliance made me wonder: Were our formative assessments truly predictive? Did they give teachers an accurate enough picture of students’ strengths and weaknesses on specific curriculum standards to design enrichment and remediation?

Yet, without sound evidence of the reliability or validity of these district-developed assessments, we continued to make instructional decisions based on them. Unreliable or invalid results could undermine appropriate instruction for all students, and they could adversely affect special student populations even more.

This led me to three primary questions:

1. To what extent are common formative assessments significant predictors of high-stakes student achievement in the school district?

2. To what extent are common formative assessments significant predictors of high-stakes student achievement for students with a disability?

3. To what extent are common formative assessments significant predictors of high-stakes student achievement for students with a disability within specific demographic groups?

Answering these questions allowed me to establish whether a relationship existed between the formative assessment results and our state test, and whether that relationship held for students who identify with specific subgroups. If there was not a strong relationship, I knew we would need to revise our current assessments. Once I understood the nature of the relationships, I could also give educators the confidence to use formative assessment results to adjust instruction, provide remediation, or provide enrichment.

I decided to take small steps, beginning with our weakest area—mathematics—and found seven relationships between specific mathematics common formative assessments and subdomains on a state test. The greatest correlations were found among students as a whole, with fewer identified for students with a disability, and few or none for specific subgroups among students with a disability.
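
For anyone curious about how such a check can be run, here is a minimal sketch in Python of this kind of correlation analysis, organized around the three research questions. The file and column names (scores.csv, cfa_score, state_subdomain_score, has_disability, subgroup) are hypothetical placeholders invented for illustration, not the district’s actual data layout, and Pearson correlation is only one of several reasonable choices for examining predictive relationships.

```python
# Sketch: correlate common formative assessment (CFA) scores with state-test
# subdomain scores, overall and by subgroup. Column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("scores.csv")  # assumed file: one row per student

def report(label, frame):
    """Print Pearson's r and its p-value for one group of students."""
    if len(frame) < 3:  # too few students to correlate meaningfully
        print(f"{label}: insufficient data (n={len(frame)})")
        return
    r, p = pearsonr(frame["cfa_score"], frame["state_subdomain_score"])
    print(f"{label}: n={len(frame)}, r={r:.2f}, p={p:.4f}")

# Question 1: all students in the district
report("All students", df)

# Question 2: students with a disability
swd = df[df["has_disability"] == 1]
report("Students with a disability", swd)

# Question 3: students with a disability, by demographic subgroup
for name, group in swd.groupby("subgroup"):
    report(f"SWD / {name}", group)
```

In practice, a weak correlation or a high p-value for a given assessment would be the signal to send it back to the assessment writers for revision, as described below.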

Subgroup performance was especially important to me because of the limited research on predictive validity for students with a disability. Knowledge about subgroup performance would help us determine our school improvement approach for our lowest-performing subgroup, and these results also helped me think differently about our assessment practices.

Information from this analysis helped me understand the importance of statistically evaluating assessment results so that teachers can confidently use those results to guide and differentiate instruction and better prepare their students for mastery of learning. In other cases, the analysis showed our assessment writers where revisions were needed to increase predictive validity, and it gave decision-makers evidence to continue using these formative assessments as predictive measures.

In my new role as assistant director of assessment for the county, I have expanded this practice of evaluation to every assessment in our district. I hope the model I used will become common practice in Georgia and prompt us to think differently as we choose which tests to give to students.

Assessment and accountability measures are here to stay. When you think about the underperformance of specific student groups, how are you ensuring that your assessment program and its specific assessments yield an accurate picture of those students’ likely performance on high-stakes assessments?

To read the full research, download it here.

Jessica Ainsworth, Ed.D., is the assistant director of assessment for the Douglas County School System in Douglasville, GA. She was named 2015 NASSP Assistant Principal of the Year, 2015 Georgia Assistant Principal of the Year, and 2016 K–12 Dive Education Administrator of the Year. Follow Jessica on Twitter @jessmainsworth.

