Making Sense of Student Performance Data

Kim Marshall draws on his 44 years’ experience as a teacher, principal, central office administrator, and writer to compile the Marshall Memo, a weekly digest of articles of interest to busy educators drawn from 64 publications. He shared one of my recent articles, co-authored with doctoral students Britnie Kane and Jonee Wilson, in his latest memo and gave me permission to post his succinct and useful summary.

In this American Educational Research Journal article, Ilana Seidel Horn, Britnie Delinger Kane, and Jonee Wilson (Vanderbilt University) report on their study of how seventh-grade math teams in two urban schools worked with their students’ interim assessment data. The teachers’ district, under pressure to improve test scores, paid teams of teachers and instructional coaches to write interim assessments. These tests, given every six weeks, were designed to measure student achievement and hold teachers accountable. The district also provided time for teacher teams to use the data to inform their instruction. Horn, Kane, and Wilson observed and videotaped seventh-grade data meetings in the two schools, visited classrooms, looked at a range of artifacts, and interviewed and surveyed teachers and district officials. They were struck by how different the team dynamics were in the two schools, which they called Creekside Middle School and Park Falls Middle School. Here’s some of what they found:

  • Creekside’s seventh-grade team operated under what the authors call an instructional management logic, focused primarily on improving the test scores of “bubble” students. The principal, who had been in the building for a number of years, was intensely involved at every level, attending team meetings and pushing hard for improvement on AYP proficiency targets. The school had a full-time data manager who produced displays of interim assessment and state test results. These were displayed (with students’ names) in classrooms and elsewhere around the school. The principal also organized Saturday Math Camps for students who needed improvement. He visited classrooms frequently and had the school’s full-time math coach work with teachers whose students needed improvement. Interestingly, the math coach had a more sophisticated knowledge of math instruction than the principal, but the principal dominated team meetings.

In one data meeting, the principal asked teachers to look at interim assessment data to predict how their African-American students (the school’s biggest subgroup in need of AYP improvement) would do on the upcoming state test. The main focus was on these “bubble” students. “I have 18% passing, 27% bubble, 55% growth,” reported one teacher. The team was urged to motivate the targeted students (especially quiet, borderline kids), personalize instruction, get marginal students to tutorials, and send them to Math Camp. The meeting spent almost no time looking at item results to diagnose ways in which teaching was effective or ineffective. The outcome: providing attention and resources to identified students. A critique: the team didn’t have at its fingertips the kind of item-by-item analysis of student responses necessary to have a discussion about improving math instruction, and the principal’s priority of improving the scores of the “bubble” students prevented a broader discussion of improving teaching for all seventh graders. “The prospective work of engaging students,” conclude Horn, Kane, and Wilson, “predominantly addressed the problem of improving test scores without substantially re-thinking the work of teaching, thus providing teachers with learning opportunities about redirecting their attention – and very little about the instructional nature of that attention… The summative data scores simply represented whether students had passed: they did not point to troublesome topics… By excluding critical issues of mathematics learning, the majority of the conversation avoided some of the potentially richest sources of supporting African-American bubble kids – and all students… Finally, there was little attention to the underlying reasons that African-American students might be lagging in achievement scores or what it might mean for the mostly white teachers to build motivating rapport, marking this as a colorblind conversation.”

  • The Park Falls seventh-grade team, working in the same district with the same interim assessments and the same pressure to raise test scores, used what the authors call an instructional improvement logic. The school had a brand-new principal, who was rarely in classrooms or team meetings, and an unhelpful math coach who had conflicts with the principal. This meant that teachers were largely on their own when it came to interpreting the interim assessments. In one data meeting, teachers took a diagnostic approach to the test data, using a number of steps that were strikingly different from those at Creekside:
  • Teachers reviewed a spreadsheet of results from the latest interim assessment and identified items that many students missed (a rough sketch of this kind of item analysis appears after this list).
  • One teacher took the test himself to understand what the test was asking of students mathematically.
  • In the meeting, teachers had three things in front of them: the actual test, a data display of students’ correct and incorrect responses, and the marked-up test the teacher had taken.
  • Teachers looked at the low-scoring items one at a time, examined students’ wrong answers, and tried to figure out what students might have been thinking and why they went for certain distractors.
  • The team moved briskly through 18 test items, discussing possible reasons students missed each one – confusing notation, skipping lengthy questions, mixing up similar-sounding words, etc.

  • Teachers were quite critical of the quality of several test items – rightly so, say Horn, Kane, and Wilson – but this may have distracted them from the practical task of figuring out how to improve their students’ test-taking skills.
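
To make the Park Falls process concrete, here is a minimal, purely illustrative sketch (not from the article) of the kind of item analysis the team did by hand: flagging frequently missed items and tallying which wrong answers (distractors) students chose. The file name, the column names (“item”, “response”, “correct_answer”), and the 50% miss threshold are all hypothetical assumptions, not details from the study.

```python
# Illustrative item analysis for an interim assessment results file.
# Assumes a CSV with hypothetical columns: item, response, correct_answer.
from collections import Counter, defaultdict
import csv

def item_analysis(path, miss_threshold=0.5):
    """Return items whose miss rate meets the threshold, with top distractors."""
    totals = Counter()                  # number of responses per item
    misses = Counter()                  # number of wrong responses per item
    distractors = defaultdict(Counter)  # per item: wrong answer -> count

    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            item = row["item"]
            totals[item] += 1
            if row["response"] != row["correct_answer"]:
                misses[item] += 1
                distractors[item][row["response"]] += 1

    flagged = {}
    for item, n in totals.items():
        rate = misses[item] / n
        if rate >= miss_threshold:
            flagged[item] = {
                "miss_rate": round(rate, 2),
                "common_wrong_answers": distractors[item].most_common(3),
            }
    return flagged

# Example usage (hypothetical file name):
# for item, info in sorted(item_analysis("interim_results.csv").items()):
#     print(item, info)
```

A summary like this only surfaces which items and distractors deserve attention; the interpretive work the Park Falls teachers did – reasoning about why students chose those answers – still has to happen in conversation.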

The outcome of the meeting: re-teaching topics with attention to sources of confusion. A critique: the team didn’t slow down and spend quality time on a few test items, followed by a more thoughtful discussion about successful and unsuccessful teaching approaches. “The tacit assumption,” conclude Horn, Kane, and Wilson, “seemed to be that understanding student thinking would support more-effective instruction… The Park Falls teachers’ conversation centered squarely on student thinking, with their analysis of frequently missed items and interpretations of student errors. This activity mobilized teachers to modify their instruction in response to identified confusion… Unlike the conversation at Creekside, then, this discussion uncovered many details of students’ mathematical thinking, from their limited grasp of certain topics to miscues resulting from the test’s format to misalignments with instruction.” However, the Park Falls teachers ran out of time and didn’t focus on next instructional steps. After a discussion about students’ confusion about the word “dimension,” for example, one teacher said, “Maybe we should hit that word.” [Creekside and Park Falls meetings each had their strong points, and an ideal team data-analysis process would combine elements from both: the principal providing overall leadership and direction but deferring to expert guidance from a math coach; facilitation to focus the team on a more-thorough analysis of a few items; and follow-up classroom observations and ongoing discussions of effective and less-effective instructional practices. In addition, it would be helpful to have higher-quality interim assessments and longer meetings to allow for fuller discussion. K.M.]

“Making Sense of Student Performance Data: Data Use Logics and Mathematics Teachers’ Learning Opportunities” by Ilana Seidel Horn, Britnie Delinger Kane, and Jonee Wilson in American Educational Research Journal, April 2015 (Vol. 52, #2, pp. 208-242)
