A Cascade of Errors in Interim “Summative” Assessments

I have been working on a paper investigating how teachers interpret student performance data. The data come from their district's interim summative assessments: tests given every 6-8 weeks to help teachers predict how students will perform on the high-stakes end-of-year tests. These interim assessments have taken on a very important role in these schools, which are under the threat of AYP sanctions.

The teachers are all working so hard to do right by their kids, but there is a cascade of errors in the whole system.

First, the assessments were constructed in-house. Although they are aligned to the state standards and were thoughtfully designed, they have not been psychometrically validated. That means that when one of them is used to measure, say, a student's understanding of fraction addition, there is no evidence from repeated revision and field testing that this is what the items actually measure.

Second, the proficiency cut points are arbitrary, yet NCLB has everybody worried about the percentage of students above proficiency. This is a national problem, as was so eloquently laid out in Andrew Ho's 2008 article in Educational Researcher.
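To see why arbitrary cut points matter so much, here is a minimal sketch with hypothetical numbers (the scores and cut values are invented for illustration, not drawn from any real assessment): the very same score distribution can produce strikingly different "percent proficient" figures depending on where the cut lands.

```python
# Hypothetical scale scores for ten students (invented for illustration).
scores = [38, 41, 44, 47, 50, 53, 56, 59, 62, 65]

def percent_proficient(scores, cut):
    """Share of students scoring at or above the proficiency cut."""
    return 100 * sum(s >= cut for s in scores) / len(scores)

# Shifting the cut by a few points swings the headline number dramatically.
print(percent_proficient(scores, 50))  # cut at 50 -> 60.0% proficient
print(percent_proficient(scores, 55))  # cut at 55 -> 40.0% proficient
```

Nothing about the students' learning changed between the two lines; only the arbitrary threshold moved. Yet under an above-proficiency accountability regime, that shift is the difference between a "successful" school and a sanctioned one.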

In the end, we are sacrificing validity for precision. We think these data reports tell us with great accuracy who is learning what and to what degree. But there is reason to believe that this cascade of errors is just another sorting and labeling mechanism interfering with real teaching and learning.
