Good news to share: another research grant has been funded by the National Science Foundation. Yay!
This is what we pitched to the NSF:
This study addresses the need to develop processes for adequate and timely feedback to inform mathematics teachers’ instructional improvement goals. In this study, we propose using design-based implementation research to develop and investigate a process for documenting mathematics teachers’ instruction in a way that is close to classroom practice and contributes to teachers’ ongoing pedagogical sense making. The practical contribution will be a framework for formative feedback for mathematics teachers’ learning in and from practice. The intellectual contribution will be a theory of mathematics teachers’ learning, as they move from typical to more ambitious forms of teaching in the context of urban secondary schools. Both the practical and theoretical products can inform the design of professional development and boost other instructional improvement efforts.
In a recent Spencer study, my team and I investigated how teachers used standardized test data to inform their instruction. (That team was Mollie Appelgate, Jason Brasel, Brette Garner, Britnie Kane, and Jonee Wilson.)
Part of the theory of accountability policies like No Child Left Behind is that students fail to learn because teachers do not always know what their students know. By providing teachers with better information, the reasoning goes, teachers can adjust instruction and reach more students. We saw that theory break down in a few ways.

First, the standardized test data did not always come back to teachers in a timely fashion. It doesn't really help teachers adjust instruction when information arrives in September about students they taught last May.

Second, the standardized test data took a lot of translation to apply to what teachers did in their classrooms. Most of the time, teachers used data to identify frequently challenging topics and simply re-taught them. So students got basically the same instruction again, rather than instruction modified to address central misunderstandings. We called this "more of the same," which is not synonymous with better instruction.

Finally, there were many issues of alignment. Part of how schools and districts addressed the first problem was by giving interim assessments –– basically, mini versions of year-end tests. Often, these instruments were designed in-house and thus not psychometrically validated, so they may not always have measured what they purported to measure. Other times, districts bought off-the-shelf interim assessments whose items had been developed in the traditional (and more expensive) manner; however, these tests seldom aligned with the curriculum.

You can read the synopsis here.
Accountability theory’s central idea –– giving teachers feedback –– seemed important. We saw where that version broke down, so we wanted to figure out a way to give feedback that is closer to what happens in the classroom and requires less translation to improve instruction. Data-informed action is a good idea; we just wanted to think about better kinds of data. We plan to use a dual video coaching system — yet to be developed — to help teachers make sharper interpretations of what is happening in their classrooms.
Why did we partner with MfA LA? When I reviewed the literature on teachers’ professional learning, MfA LA seemed to be hitting all the marks of what we know to be effective professional development: a focus on content knowledge; work organized around materials that can be used in the classroom; attention to specific instructional practices; a coherent and multifaceted professional development program; and the support of teacher communities. Despite hitting all of these marks, the program knows it can do more to support teachers.
This is where I, as a researcher, get to make conjectures. I looked at the professional development literature and compared it to what we know about teacher learning. MfA may hit all the marks in the PD literature, but when we look at what we know about learning, we can start to see some gaps.
| Conjecture | Statement |
| --- | --- |
| *Conjecture 1 | Professional learning activities need to address teachers’ existing concepts about and practices for teaching. |
| Conjecture 2 | Professional learning activities need to align with teachers’ personal goals for their learning. |
| Conjecture 3 | Professional learning activities need to draw on knowledge of accomplished teaching. |
| *Conjecture 4 | Professional learning activities need to respond to issues that come up in teachers’ ongoing instruction. |
| *Conjecture 5 | Professional learning activities need to provide adequate and timely feedback on teachers’ attempts to improve their instructional practice to support their ongoing efforts. |
| Conjecture 6 | Professional learning activities should provide teachers with a community of like-minded colleagues to learn with and garner support from as they work through the challenges inevitable in transformative learning. |
| *Conjecture 7 | Professional learning activities should provide teachers with rich images of their own classroom teaching. |
The conjectures marked with * are the ones we will use to design our two-camera coaching method.
We still need to work out the details (that’s the research!), but teachers’ instruction will be recorded with two cameras: one to capture the teacher’s perspective on significant teaching moments, and a second to capture an entire class session.

The first, a self-archiving, point-of-view camera, will be mounted on the teacher’s head. When the teacher decides that a moment of classroom discourse illustrates work toward her learning goal, she will press a button on a remote worn around her wrist, archiving video of that interaction starting 30 seconds prior to her noticing the event. (As weird as it sounds, this method has been used successfully by Elizabeth Dyer and Miriam Sherin!) The act of archiving encodes the moment as significant and worthy of reflection. For example, if a teacher’s learning goal is to incorporate the CCSSM practice of justification into her classroom discourse, she will archive moments that she thinks illustrate her efforts to get students to justify their reasoning.

Simultaneously, a second, tablet-based camera will record the entire class session using Swivl®, a capture app installed on the tablet. It works with a robotic tripod that tracks the teacher as she moves around the room, allowing for a teacher-centered recording of the whole class session. Extending the earlier example, the tablet-based recording will allow project team members to review the class session and identify moments where the teacher might have supported students in justifying their reasoning but did not do so. The second recording also captures the overall lesson, including some of the lesson’s tone and classroom dynamics that are critical context for the archived interactions.

Through discussion and comparison of what the teachers capture and what the project team notices, teachers will receive feedback on their work toward their learning goals. We will design this coaching system to address the starred conjectures in the table.
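The "archive the last 30 seconds on a button press" behavior is essentially a rolling buffer. As a purely illustrative sketch (the class and method names here are hypothetical, not part of any actual camera software), the logic might look like this:

```python
from collections import deque

class RollingArchiver:
    """Keep a rolling window of recent frames; on a button press,
    snapshot everything from the last `window_s` seconds as one clip."""

    def __init__(self, window_s=30.0):
        self.window_s = window_s
        self.buffer = deque()   # (timestamp, frame) pairs, oldest first
        self.clips = []         # archived clips, each a list of pairs

    def add_frame(self, timestamp, frame):
        self.buffer.append((timestamp, frame))
        # Drop frames that fall outside the rolling window.
        while self.buffer and timestamp - self.buffer[0][0] > self.window_s:
            self.buffer.popleft()

    def mark_significant(self):
        # Button press: archive a copy of the current window.
        self.clips.append(list(self.buffer))

# Simulate one frame per second for 60 s, with a button press at t=45.
arch = RollingArchiver(window_s=30.0)
for t in range(60):
    arch.add_frame(float(t), f"frame-{t}")
    if t == 45:
        arch.mark_significant()

clip = arch.clips[0]
print(clip[0][0], clip[-1][0])  # clip spans t=15.0 through t=45.0
```

The point of the sketch is just that the device must continuously record and discard, so that the teacher's noticing can reach backward in time to the start of the interaction.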
I will keep you posted!