Design, Learning, and Data, Oh my! … (or how not to make people defensive)

Readings this week:

Argyris, C. (1991). Teaching Smart People How to Learn. Harvard Business Review, (May-June).
City, E., Elmore, R., Fiarman, S., & Teitel, L. (2009). Chapters 4-6. In Instructional rounds in education: A network approach to improving teaching and learning (pp. 83-131). Cambridge, Mass.: Harvard Education Press.
Lieberman, A. (2000). Networks as Learning Communities: Shaping the Future of Teacher Development. Journal of Teacher Education, 221-227.

Design projects and data are not familiar language for educators, even though (hopefully) most teachers are literally engaged in design every day as they modify the local learning environment to fit the needs of their students. We rarely see it as such, though, because the emphasis is on students and their work in relation to the teacher’s design, not on reflecting on our own thinking. This mirrors Argyris’ single-loop vs. double-loop learning. As Argyris notes, he was working with people who were “well-educated, high-powered, high commitment professionals,” which I think would describe a lot of graduate students in education. Argyris (1991) writes, “People can be taught how to recognize the reasoning they use when they design and implement their actions.” When faced with a design project that tests our thinking, where the likelihood of failure is high, fear creeps in.

It is the very act of design, feedback, and failure that requires us to bypass our own interpretations, because we’ve literally put our thinking outside of our heads. This dissociation of our emotional, judgmental selves from our practice is exactly what City et al. (2009) refer to as separating the practice from the person. Likewise, feedback systems, whether user testing in design or observational notes in classrooms, are what bring in the feedback, or data, on our design. When we can use that feedback to redesign, rather than defend, we can learn.

As we have heard in many readings this semester and again this week, professional community stems from “conversations about their work” (Lieberman, 2000), but clearly these conversations need data about practice, not about teachers, and the people conversing need guidance in using the data. City et al. (2009) refer to the “culture of nice” as an improvement-impeding norm because it clouds the distinction between practice and person. People are unwilling to offer feedback for fear that it will be taken as criticism and elicit defensiveness, so they just avoid the conversation altogether. The Instructional Rounds protocols offer such guidance for school leaders on how to help teachers in “learning to see, unlearning to judge,” and they set clear expectations for how to discuss practice in a way that pushes people to think rather than defend.

On a more practical note, for the field work we are just beginning for the DRP class and on personalization in practice, I found many things in the Instructional Rounds piece helpful. At least for me, it will help me orient myself to conducting an observation: keeping it descriptive rather than evaluative, asking kids open questions, not discussing with fellow researchers in the hallways but waiting for a time to debrief, and examining my own assumptions and biases about what good teaching and learning look like.

Carnegie Summit Learning + Reaction 6


If you had asked me about standardized tests five years ago, I would have vehemently dismissed them as the wrong direction for education. While I still resist the Fitbit model of constant quantification of progress and self, this week I heard and read about compelling ways that data can be used to build professional cultures, to see and support individuals, and to design better systems.

One of the sessions I attended at the Carnegie Summit was a panel on doctoral programs that embed improvement science into their curriculum, including the program at UCLA with Dr. Louis Gomez, whom we heard from a few weeks ago. He said two things that struck me. First, when people work on problems in the same way, they build organizational culture. This is echoed in Halverson (2010): “Over time, teacher concerns about teacher evaluation seemed to ease as the principal made a significant time commitment to help teachers make sense of the MAP data reports in terms of math instruction. The Walker principal used MAP data in faculty and staff meetings to create a common vocabulary for Walker teachers to discuss student learning” (p. 141). To me, this is what data can do for schools when it is approached from a mindset of possibility rather than fear. Further, I heard more than one person at the conference remark that using data was allowing their teachers to have conversations about instruction that were never possible before. As Halverson quotes the Malcolm school leaders, “The beauty of data is that we can have these conversations” (p. 144).

Second, Dr. Gomez stated that improvement leadership is social justice leadership, precisely because it builds a common culture focused on improvement for all kids. It changes the system to yield better outcomes rather than treating the symptoms of a system that doesn’t work.


Pre-conference Reflections (or would that be PROflections?)

Let the learning begin!

Here I am at the Carnegie Summit. What am I hoping to learn and come away with?

This reflection ends up just being more questions. It started last fall when I read the paper on Networked Improvement Communities, which felt like a roadmap for how I want to work with educational systems. So I’ve come to the conference to learn more about it, hear what people are doing and what they’re thinking about, and find out how I might get involved.

If I had to pick a content interest, it would be the development of a strong teacher workforce, and how districts can use a framework like that to reflect on where they are focusing their resources to drive innovation. But in my role as a researcher, how can I work with districts and the improvement science model? What do improvement scientists need from researchers?

One specific thing I want to understand is what measures practitioners are using other than test scores and post-hoc measures like retention rate or success rate. What can I measure in real time?

Sessions I’m looking forward to:
Pursuing Excellence: An In-Depth Study of the School District of Menomonee Falls
Preparing the Next Generation of Leaders as Improvers and Stewards of the Profession
A Powerful Engine for Change: Applying the Model for Improvement
From Aim to Action: Developing a Theory of Practice Improvement