Carnegie Summit Learning + Reaction 6


If you had asked me about standardized tests 5 years ago, I would have vehemently dismissed them as the wrong direction for education. While I still resist the Fitbit model of constant quantification of progress and self, this week I heard and read about compelling ways that data can be used to build professional cultures, see and support individuals, and design better systems.

One of the sessions I attended at the Carnegie Summit was a panel on doctoral programs that embed improvement science into their curriculum, including the program at UCLA with Dr. Louis Gomez, whom we heard from a few weeks ago. He said two things that struck me. First, that by working on problems in the same way, you build organizational culture. This is echoed in Halverson (2010): "Over time, teacher concerns about teacher evaluation seemed to ease as the principal made a significant time commitment to help teachers make sense of the MAP data reports in terms of math instruction. The Walker principal used MAP data in faculty and staff meetings to create a common vocabulary for Walker teachers to discuss student learning" (p. 141). To me, this is what data can do for schools when it is approached from a mindset of possibility rather than fear. Further, I heard more than one person at the conference remark that using data was allowing their teachers to have conversations about instruction that were never possible before. As Halverson quotes the Malcolm school leaders, "The beauty of data is that we can have these conversations" (p. 144).

Second, Dr. Gomez stated that improvement leadership is social justice leadership, precisely because it builds a common culture focused on improvement for all kids. It changes the system to yield better outcomes rather than treating the symptoms of a system that doesn't work.


How do we use information?

Reaction to the following readings:

Bryk, A. S., Sebring, P. B., Allensworth, E., Luppescu, S., & Easton, J. Q. (2010). Organizing Schools for Improvement: Lessons from Chicago. Chicago: University of Chicago Press.

Excerpt from Gee, J. P. (1990). Social Linguistics and Literacies: Ideology in Discourses (Critical Perspectives on Literacy and Education). London: Falmer Press.

Newmann, F. M., Carmichael, D. L., & King, M. B. (in press). Chapter 6 in Authentic Intellectual Work: Improving Teaching for Rigorous Learning. Thousand Oaks, CA: Corwin.

How do we use information? This is a really broad question and might not seem on topic for this week, but I'll get there. First, I returned to Chris Thorn's "Knowledge Management for Educational Information Systems: What is the state of the field?" (2001) that we read last semester. He defines knowledge and its relationship to information and data: data is facts, information is facts + context, and knowledge is facts + context + experience, judgment, intuition, and values. (These are actually definitions from Epson, 1999, that Thorn cites.) There is thus a progression from data to knowledge as facts are brought into a Discourse. Two different Discourses might take the same data and come out with different knowledge. Thinking about it this way brought me back to our discussions over the last two weeks about how administrators have the power to bring a policy into their Discourse (if they have established one, of course).

But returning to the question of how we use information, what I find so exciting about Authentic Intellectual Work (AIW) is that it is an information-gathering tool that seeks to measure what I would call the "good stuff" of teaching and learning: the conversations, the higher-order thinking, the student interest, the social support. What's more, the implementation framework actually establishes a Discourse around the use of the information, changing the way educators interact and centering the conversation on the empirically gathered information rather than on thoughts, intentions, feelings, and so on. Teachers are coached on how to see and understand the information that is already in their classrooms.

In a different turn on how we use information, Organizing Schools for Improvement uses data to show relationships in a way I had never seen before. It was the first time I had seen a quantitative analysis of systems that even attempted to show synergistic effects, such as Figure 4.11 (p. 114), which shows that schools strong on two supports did substantially better than those strong in just one or the other. While I have struggled to accept the use of math and reading scores as measures of "achievement," I think the way they were used here has merit. Since the schools deemed "improving" were the ones in the top quartile, it does seem that this would represent genuine learning; it would be hard to get into the highest quartile by exclusively teaching to the test.


Reaction 11: Data-Driven Instructional Systems

Anarchist Org Chart: https://julierobison.files.wordpress.com/2014/11/a8423-anarchistorgchart.gif

Reading this week:

Halverson, R., Grigg, J., Prichett, R., & Thomas, C. (2007). The New Instructional Leadership: Creating Data-Driven Instructional Systems in School. Journal of School Leadership, 17, 159-193.

Thorn, C. A. (2001, November 19). Knowledge Management for Educational Information Systems: What Is the State of the Field? Education Policy Analysis Archives, 9(47). Retrieved September 5, 2007 from http://epaa.asu.edu/epaa/v9n47/.

Unit of analysis. That seemed to be the thing that kept popping out at me this week. Thorn (2001) states it directly: if the student is the unit of interest, then the data gathered should be attributes of the student. Student Information Systems, however, tend to be designed to produce reports for district-level analysis, not for the classroom. Halverson et al. (2007) found a mismatch, or lack of interoperability, between the data in the district's high-tech storage systems and the low-tech data collected and stored locally. The logical goal of the proposed data-driven instructional system is thus to link summative results with formative information systems that teachers can use to improve instruction. This is the point of practical measurement, as proposed by Yeager et al. (2013): "educators need data closely linked to specific work processes and change ideas being introduced in a particular context" (p. 12).

In the past, I associated data-driven decision making with context-blind work whose sole purpose was to improve standardized test scores. The readings this week, as well as the networked improvement communities (Bryk et al., 2010) from a few weeks ago, have given me a different perspective on what it means to use data to inform instruction, design, and communities.