Post-Semester Teaching Reflections

From Getting Ideas Into Action (Bryk, Gomez, Grunow, 2011)

A week before this semester began, I was asked to teach ELPA 875, Theory and Practice in Educational Planning. This post is a summary and brief reflection on that experience and how I might improve the course in the future. The image above was a focal diagram that we returned to throughout the semester; the PDSA image comes from Getting Ideas Into Action (Bryk, Gomez, & Grunow, 2011).

Course Theme

Broadly, this is a class about planning for and effecting change in an organization. Through the lens of trying to effect change, we considered

  • Scale – district, building, classroom, learner. For example, we explored how defining problems at a large scale, such as the achievement gap, can make them feel insurmountable and consequently disconnected from daily work. We worked to define problems in a way that was connected to our daily work and aligned with organizational goals.
  • Design – we talked about a design strategy of starting with small, iterative tests rather than large-scale changes all at once; seeing the system that produces undesired results; and including different perspectives – not just to be “midwest nice” – but because no one person can see all the pieces of a system.
  • Change as relational – bringing people together to solve common problems can be the work that builds a positive culture where people trust and support each other.


Book Notes & Thoughts: Improvement by Design


It’s been a while… I think every blogger goes through a spell when it’s really hard to write. In January, I set out a writing plan for the spring, which involved blogging a book per week. Easy, right? I made it through the first month, but the blogging never happened. And now I’m in a total writing block, unable to tap out the literature review that is past due! So, my goal is to get back in the saddle, as they say, and use this blog for what it’s always been good for: making me write, organize my thoughts, and document my work. I’ve pulled all the library books off my shelf that I’ve read (okay, skimmed) over the past few years. In the next few days, they are going to get reviewed and returned. Forward progress and decluttering!

The first book is Improvement By Design: The Promise of Better Schools, by Cohen, Peurach, Glazer, Gates, and Goldin (2014). I’m all about improvement these days, heading to present a poster at the Carnegie Summit in a week, and I also am increasingly sold on the idea of education as the design for learning, which I wrote about LOOOONG ago, in the days before I did this reading and thinking for a living. So naturally, when I saw the title, I was intrigued.

I’m really curious about how people are using words around educational reform right now. The contents of the book talk about improvement, implementation, suitability, and building systems. Interestingly, they do NOT say innovation, which is what I usually put alongside improvement. Here are MY definitions of a few terms:

  • Improvement – making the system better
  • Systemic Improvement – using a systems approach to make the system work better
  • Innovation – creative application of a new-to-you idea
  • Infrastructuring – the process of creating the connections, relationships, individual knowledge, and agency for change (based loosely on definitions from Penuel, 2015, and DiSalvo & DiSalvo, 2015)


Book Notes & Thoughts: The New Institutionalism in Education (2006), Edited by Heinz-Dieter Meyer and Brian Rowan


Eight chapters into the book and I returned to the beginning to remind myself of the definition of “New Institutionalism.” Amazing how we can get lost in jargon and think we’re understanding what we read. Seriously though, the jargon in this field is terrific! Pages go by when I realize I don’t know what is being said. I know all the words, but not what they actually mean together. As always, this is why I blog, so that I have a chance to put my thoughts into words. It is this act that makes me clarify my thinking.

New institutionalism was a shift in how institutions were studied. Up until the 1970s, there was a focus on the goal of the institution and how it was structured. The people in it were considered rational actors. But researchers at Stanford began to notice that, in fact, institutions were “loosely coupled” (Weick), meaning that what was intended was not actually done. This has often been cited as the reason reforms don’t make an impact. I think of this like trying to move a mattress: you start to lift at one end but the other end is wobbling of its own accord. When we then look at schools today, they are actually quite tightly coupled between standards and assessments, though perhaps not in all realms. Spillane and Burch (chapter 6) write about making “instruction” less monolithic and breaking it down by subject, because math instruction might be tightly coupled with assessments, but social studies might not.

Stanford organization theory researchers proposed that actions taken followed myths and ceremonies, rather than rationality. For example, it might be in a teacher’s best interest to change how they teach because it would raise test scores, but they would reject it because it is not consistent with the mission of the school and would not be considered legitimate schooling by the public. I think of this in the case of Rocketship schools, where kids sit in cubicles staring at screens (or at least this is how it is described). This may improve test scores, but it is not seen widely as a legitimate form of education for all. Importantly, this is neither good nor bad. These practices are complex & contradictory, as Meyer and Rowan say in the introduction (p.11).


Starting a NIC: Resource Round-Up


One of my goals for attending the Summit this year (that I wrote about last week) was to bring back an understanding of starting a NIC. What I’d like to do here is compile my notes, link resources, and highlight key ideas from the sessions. I’ve attempted to categorize them roughly into foundational/big ideas, what to do first, and then later considerations.

Foundational/Big Ideas

1. Formation of the network initiation team

  • Develop a theory of practice improvement
  • Understand the problem of practice – be there to observe, user-centered – take the test! What actually touches students as they learn?
  • Decide on a common and measurable aim (craft aim statement, more detail below)
  • Specify high leverage drivers (root cause analysis tools)
  • Attend to power relations (particularly supervisor/evaluative relationships) and other contextual politics
  • Do we have people from outside education?
  • Learning to use improvement research methods: the NIC team needs to support each other in building common practices. Everyone gets the book!
  • Get an office staff person – someone to book rooms, schedule site visits, manage calendars, order coffee… this is a key role.

2. Focus on the Problem, or face “solution-itis.” Resource: Carnegie blog post

  • Solution-itis = becoming enamored of a particular approach or philosophy without a clear sense of how it will address the problem
  • The problem is the “anchor”
  • Not all questions are worthy of inquiry! (Ex. Having students turn homework in on time is not a meaningful problem to work on.)
  • What is the unit of analysis? Okay to have it be one teacher’s classroom.

3. Crafting a “Network narrative”

  • Metro-map activity – help people talk about how they got here
  • Narrative as a way to mobilize people for the work (it’s not always just about the problem)
  • Networks are about influence, not control: Who we are and why we exist must be compelling

4. Thinking about Meetings

  • Improvement science as a social practice (Tony Bryk, opening keynote) – the meetings are where the network is instantiated
  • Meeting in person at first (commonly a multi-day summer institute) gives people opportunities to connect around more than just the focus of the work – give people money to go out to dinner in inter-organizational groups
  • Schedule regular times, like weekly google hangouts
  • Use meeting protocols – this prevents one person from always speaking for a group or those traditionally empowered from dominating/controlling the conversation
  • Provide actuation spaces. We generally do not have problems that can be solved by more information. People need time and space to make sense together.

5. Building a Measurement System for Improvement

  • Evaluate short term needs, long term goals
  • Measurement is attached to the change process
  • Noncognitive measures resource from the Chicago Consortium on School Research
  • Traditional research methods (video/audio recording + coding) vs. 5 minute survey
  • Limit the amount of data

6. Role of the content expert

  • Bring in information to guide action once the problem is defined, ex. bring in literature
  • Balance between research, capacity building, and implementation
  • Role is different – need to provide opportunities where experts want to get involved (i.e. they get something out of it) but also get on board with the direction the work is going, not just where they want it to go

7. Role of the network hub

  • Everyone needs to learn to use improvement research methods, but it’s the role of the network hub to support network members
  • Lots of strands running
  • Do the analytic work
  • Attend to social motivation

8. Leadership thoughts

  • You invent the work as you do it – be open about the complexity
  • A paradoxical mindset – engage with opposing interpretations, suspend judgement
  • Be eclectic about methods
  • Attend to both social pieces (facilitating a meeting) and technical pieces (cycles, due dates, action steps)
  • “Definitely incomplete and possibly wrong” mantra
  • This is the ANTITHESIS of strategic planning!

What to do first

Get into the schools!

  • Bias towards action. Get out and do something. Anything that gets done is going to be incomplete and partially wrong, so get started.
  • Test something – hunches, theories, ideas. Scale comes later.
  • This test might just be with one classroom – have initiation team be there

While you’re there,

  • Listen. This is a social practice. It’s about people and relationships first.
  • Listen to students!
  • Ask questions to understand the problem of practice

Research has to be rapid: 90 day cycles

  • First 30 days – be user-centered
  • Next 30 days – what does the literature say? Pareto 80/20 principle. Focus on good theory with empirical warrant.
  • Last 30 days – PDSA begins – small interventions. where did it work, for whom, and under what circumstances? Report out

Later considerations

Build structures for local ownership

  • The role of the network hub may change over time. Think about a gradual transfer of leadership – building the agenda may be housed at the university at first, but eventually want to transfer this to the school.
  • Sometimes, get out of the way and let teams do their work

Build cross-team connections

  • make the network visible, reiterate the aim statement
  • find a common language, be sensitive to and aware of local politics
  • develop team norms, ex. assume everyone is doing the best they can
  • measure network health, ex. social network analysis – quick survey after every meeting with simple list of people asking who they have interacted with and how – trackable over time
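The post-meeting survey idea above lends itself to a very simple analysis. Below is a minimal Python sketch (the names, meetings, and responses are all invented for illustration) that turns per-meeting “who did you interact with?” responses into a count of distinct interaction partners per person, trackable across meetings:

```python
from collections import defaultdict

# Hypothetical survey data: after each meeting, members report
# who they interacted with. Each pair is one reported interaction.
responses = {
    "meeting_1": [("Ana", "Ben"), ("Ana", "Cam"), ("Ben", "Cam")],
    "meeting_2": [("Ana", "Ben"), ("Dee", "Ana"), ("Dee", "Cam"), ("Ben", "Cam")],
}

def interaction_counts(responses):
    """For each meeting, count how many distinct people each member interacted with."""
    history = {}
    for meeting, pairs in sorted(responses.items()):
        partners = defaultdict(set)
        for a, b in pairs:
            # Interactions are symmetric: record the partner on both sides.
            partners[a].add(b)
            partners[b].add(a)
        history[meeting] = {person: len(p) for person, p in partners.items()}
    return history

history = interaction_counts(responses)
```

A fuller analysis would feed the same edge lists into a social network analysis library, but even these raw counts, tracked over time, can flag whether the network is densifying or whether some members stay isolated meeting after meeting.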

Build “intervention bundles”

  • Everyone tried different things… what shows promise? How can these be combined?

Other resources

R+P Collaboratory webinar series – I have to go back and watch these! They cover topics from “getting a partnership started” to “negotiating roles”.

Obviously, the Carnegie blog has a ton of resources, but they specifically have a series on starting a NIC. The first one, by Jennifer Lin Russell, offers a framework and a cool diagram. Also coming soon in article format: Russell, Bryk, Dolle, Gomez, LeMahieu, & Grunow – A framework for initiation of NICs, Teachers College Record (in press).

Book Notes & Thoughts: Research and Practice in Education, by Cynthia E. Coburn and Mary Kay Stein


I have a new strategy for literature reviews, which is how I found this book. Basically I find the people whose ideas I’m interested in and then find everything they’ve written. This past summer, I read a white paper by Coburn, Penuel, and Geil (2013) called Research-Practice Partnerships: A strategy for leveraging research for educational improvement in school districts. Right up my alley. Loved it. So I went in search of what else the authors do and have done.

First, Bill Penuel is a researcher at the University of Colorado Boulder. During my design-based research class this fall I had to research and present a DBR project, so I used the opportunity to read about Penuel’s work with InquiryHub and learned about DBIR (design-based implementation research). Starting with the person gave me a window into current research conversations rather than simply past published work.

Cynthia Coburn was another author on the article, so I went looking for her other publications and found this book of case studies. It is divided into four sections: university-school partnerships, tools, larger scale networks, and district-level partnerships.

The three chapters on the role of tools used concepts from some of my previous reading, such as Wenger’s Communities of Practice, which I wrote about here and here, or Star’s boundary objects, which I had read about in the DBIR work. Getting to see how these ideas are APPLIED and written about is critical, I think, in my development as a scholar.

Ikemoto and Honig write about the Institute for Learning and how an intermediary organization can partner with teachers to improve student learning. While IFL itself is interesting, what I learned most from this chapter was how to apply a sociocultural learning perspective to ground the analysis of their research. They look specifically at tools of the program, such as IFL’s Principles of Learning and their LearningWalk protocol. Interestingly, I think there is a difference between what they refer to as tools and what some refer to as artifacts.

In the aforementioned DBR class this fall, I heard many times about LeTUS, and getting to read the meta-description and analysis of the project answered many of my questions about it, continuing to fill in my understanding of the research-practice landscape.

My favorite chapter was the one about a larger scale: the National Writing Project (NWP). Laura Stokes applies Engelbart’s (1992) organizational levels of infrastructure to the theory of action of the NWP. This is the same conceptual framework that is used to describe Networked Improvement Communities work (Bryk, Gomez, and Grunow, 2011) and will inform my spring research project studying a regional network. If I hadn’t been reading every chapter, I would have missed this connection.

Stokes describes NWP as an “improvement infrastructure” and demonstrates how the design of the network acts at Levels A, B, and C to improve student and teacher writing. Stokes cites the key components of the infrastructure as its model, its linked local sites, its knowledge resources, its people, and its programs. I like the visual they use to show the action levels of the Local NWP sites and NWP Network.

NWP_ActionLevels

The key here is that learning that happens at Level A accrues to the network, so individual pockets of teachers are not rediscovering what the teachers at the school next door already know. I like to think of the infrastructure as a harness that links us all together so that as some move forward we are all drawn along. Your work improves my work and vice versa.

Side note: I have always LOVED that NWP requires its participants to do their own writing, such as having teachers actually write out answers to the prompts of the college entrance test (p.154). One of my firm beliefs is that teachers must continue to engage in learning what they are trying to teach. This is one of the reasons I like working at Field Day Lab, where teachers learn through designing.

The chapter about Lesson Study by Perry and Lewis was excellent, especially given my participation in a Critical Friends Group. I think if I were to be in charge of professional development at a school or district this is how I would want to approach it. They note that “lesson study is about the lesson, not about the teacher” (p.133), which aligns with some of the other reading I’ve been doing about professional development as improving teaching, not teachers (Hiebert & Morris, 2012).

The conceptual frame of this chapter, increasing the “demand” for professional development, was interesting. I’m not always a fan of using economics terms outside of economics, but I understand their application of Elmore’s (1996) use of the term. Note to self: must look up Elmore’s work.

In their concluding chapter, Coburn and Stein reflect on the implications of the case studies presented. One is that “designers should place renewed attention on teacher learning and organizational change” (p.217), with attention to how teachers “learn how to teach”. They cite the designs needed to harness the work of practitioners: tools to foster interaction, participation structures, and intentional pathways to connect research and research-based ideas. (p.219) Notably, at the school level, teachers need opportunities to experiment with new approaches AND discuss and adjust practice. I think we often do the first but not the second, and meaningful discussions do not happen by themselves, which is why I love the critical friends protocol.

All of the projects described in this book are multiyear, well funded initiatives. I have two thoughts about this. First, when I think about my career, ultimately I would want to build one of these networks or partnerships. But where do I focus – tools? teachers? districts? teacher education? intermediary organizations? policy? Furthermore, what implications does this have for the role I conceive of as a researcher? How does this match with the traditional role of a university professor, and is this the best job to achieve what I want to do? As they note, traditional scholarship often creates disincentives to do this kind of work (p.224).

Second, when I think about the more immediate demands of a dissertation, I won’t be able to build a research-practice partnership like these, so perhaps I can concentrate on finding interesting networks or partnerships already in progress and study them. I have some ideas…

Overall, this book builds on my consistent interest in how schools change, and, in these chapters, it is through partnerships with researchers or outside organizations. The most valuable takeaway for me from this book was the value of reading (or at least skimming) every chapter, which is why I continue to prioritize the time to read and then write about my reading.

Carnegie Summit Learning + Reaction 6


If you had asked me about standardized tests five years ago, I would have vehemently dismissed them as the wrong direction for education. While I still resist the Fitbit model of constant quantification of progress and self, this week I heard and read about compelling ways that data can be used to build professional cultures, see and support individuals, and design better systems.

One of the sessions at the Carnegie Summit that I attended was a panel on Doctoral programs that embed improvement science into their curriculum, including the program at UCLA with Dr. Louis Gomez, whom we heard from a few weeks ago. He said two things that struck me. First, in working on problems the same way, you build organizational culture. This is echoed in Halverson (2010) “Over time, teacher concerns about teacher evaluation seemed to ease as the principal made a significant time commitment to help teachers make sense of the MAP data reports in terms of math instruction. The Walker principal used MAP data in faculty and staff meetings to create a common vocabulary for Walker teachers to discuss student learning.” (p. 141) To me, this is what data can do for schools when it is approached from a mindset of possibility rather than fear. Further, I heard more than one person at the conference remark that using data was allowing their teachers to have conversations about instruction never possible before. As Halverson quotes of the Malcolm school leaders, “The beauty of data is that we can have these conversations” (p.144). Second, Dr. Gomez stated that improvement leadership is social justice leadership, precisely because it builds common culture focused on improvement for all kids. It changes the system to yield better outcomes rather than treating the symptoms of a system that doesn’t work.


Pre-conference Reflections (or would that be PROflections?)

Let the learning begin!

Here I am at the Carnegie Summit. What am I hoping to learn and come away with?

This reflection ends up just being more questions. This started last fall when I read the paper on Networked Improvement Communities, and it felt like a roadmap to how I want to work with educational systems. So I’ve come to the conference to learn more about it, hear what people are doing and what they’re thinking about, and find out how I can maybe get involved.

If I had to pick one content interest, it would be the development of a strong teacher workforce and how districts can use a framework like that to reflect on where they are focusing their resources to drive innovation. But in my role as a researcher, how can I work with districts and the improvement science model? What do improvement scientists need from researchers?

One specific thing I want to understand is what measures practitioners are using other than test scores and post-measures like retention or success rates. What can I measure in real time?

Sessions I’m looking forward to:
Pursuing Excellence: An In-Depth Study of the School District of Menomonee Falls
Preparing the Next Generation of Leaders as Improvers and Stewards of the Profession
A Powerful Engine for Change: Applying the Model for Improvement
From Aim to Action: Developing a Theory of Practice Improvement

Reaction 5: Accountability, Educational Research Methods, and Inquiry

Captured from Brian Reiser’s paper cited below.

Feuer, M. J., Towne, L., & Shavelson, R. J. (2002). Scientific culture and educational research. Educational Researcher, 31(4), 4–14.

U.S. Department of Education (2003). Identifying and implementing educational practices supported by rigorous evidence: A user friendly guide. Available at http://ies.ed.gov/ncee/pubs/evidence_based/evidence_based.asp.

Reiser, B. J. (2013). What professional development strategies are needed for successful implementation of the next generation science standards? Paper prepared for K12 center at ETS invitational symposium on science assessment. Washington, DC. http://www.k12center.org/rsc/pdf/reiser.pdf.

Clearly Feuer, Towne, & Shavelson (2002) were at odds with the policy emphasis captured in the “user-friendly guide” by the Department of Education (2003), though they were open to increasing use of randomized, controlled trials: “Although we strongly oppose blunt federal mandates that reduce scientific inquiry to one method applied inappropriately to every type of research question, we also believe that the field should use this tool in studies in education more often than is current practice…. We have also unapologetically supported scientific educational research without retreating from the view that the ecology of educational research is as complex as the field it studies and that education scholarship therefore must embody more than scientific studies.” While they leave the field open for many different communities of inquiry, the DOE report narrows the focus onto just one. This narrowing of the range of inquiry, in my view, is short-sighted and extremely limiting in three ways.

First, as we learned in Organizing Schools for Improvement, change takes time. It often takes five years for a new program or community to be built and show results. There can be an implementation dip, where the disruption of change actually makes things worse initially. As we learned at Waukesha STEM this week, the first six months of their new idea of “connect time” was true chaos with teachers ready to get rid of it immediately. Now it is one of the pillars of the way they have changed to student-centered learning. Second, the narrowing of a focus to one kind of method as suggested in the DOE report means that there are fewer questions that can be asked. For example, there is no ethical way to use randomized, controlled trials to understand the experience of homeless students in schools. As Feuer, Towne, & Shavelson state, “The question drives the methods, not the other way around. The overzealous adherence to the use of any given research design flies in the face of this fundamental principle.” Finally, it is increasingly clear that a diversity of ideas drives innovations and solutions, and “the presence of numerous disciplinary perspectives (e.g., anthropology, psychology, sociology, economics, neuroscience) focusing on different parts of the system means that there are many legitimate research frameworks, methods (Howe & Eisenhart, 1990), and norms of inquiry.” (Feuer, Towne, Shavelson, 2002) We need multiple Discourses (Gee, 1990) in educational research.

The Department of Education report is meant to address the gap between research and practitioners. Feuer, Towne, and Shavelson quote the National Research Council: “Educators have never asked much of educational research and development, and that’s exactly what we gave them.” What I found compelling about Reiser’s (2013) paper on professional development for the Next Generation Science Standards was that it seamlessly wove theory and practice, distilling the cultural shift into one-line messages, giving examples of the way practice is now, and describing what it should be. For example, Reiser writes about the “shift from learning about… to figuring out,” and that “Inquiry is not a separate activity—all science learning should involve engaging in practices to build and use knowledge.” Further, Reiser outlines the key principles for professional development, lists a series of recommendations, and includes practical examples, like the suggestion that “One fruitful way to engage teachers with records of practice is for teachers to analyze video cases of teaching interactions.” In the frame of distributed leadership, changing systems of practice happens through changing routines, and this paper clearly brings research to bear on precisely what is being done in the classroom.

(Somewhat more philosophically, it is ironic that just as the Next Generation Science Standards shift toward describing phenomena first and then trying to explain them, the Department of Education clings to an old scientific model of inquiry that dictates rigid positivist methods.)

What are the implications for school leaders? I see the appeal of a one-size-fits-all, tried-and-true, what-works solution, but I think most educators know that nothing with kids (or teachers, for that matter) works that way. Yet when faced with a field of educational research that seems to have a lot of internal conflict about what is considered “rigorous” research, what do you do first, on Monday, when the kids show up? I think this is why the ideas of design and professional community are appealing as a way of improving educational systems. Design, to me, is not about realizing one fixed answer, but rather a constant process of listening and testing, embedded in local context rather than seeking to minimize it. Similarly, focusing on professional community builds the capacity of people and context, rather than seeking to minimize them. Just as inquiry is not a separate activity when learning science or for educational researchers, it is not a separate activity for leaders, either.

How do we use information?

Reaction to the following articles:

Bryk, A. (2010). Organizing schools for improvement: Lessons from Chicago. Chicago: University of Chicago Press.

Excerpt from Gee, James Paul (1990). Social Linguistics and Literacies: Ideology in Discourses (Critical Perspectives on Literacy and Education). London: Falmer Press.

Newmann, F.M., Carmichael, D.L., & King, M.B. (in press). Chapter 6. Authentic Intellectual Work: Improving Teaching for Rigorous Learning. Thousand Oaks, CA: Corwin

How do we use information? This is a really broad question and might not seem on topic for this week, but I’ll get there. First, I returned to Chris Thorn’s “Knowledge Management for Educational Information Systems: What is the state of the field?” (2001) that we read last semester. He defines knowledge and its relationship to information and data: data is facts, information is facts + context, and knowledge is facts + context + experience, judgement, intuition, and values. (These are actually definitions from Epson, 1999, that Thorn cites.) There is thus a progression from data to knowledge as facts are brought into a Discourse. Two different Discourses might take the same data and come out with different knowledge. Thinking about it in this way led me to think about what we have discussed the last two weeks: how administrators have the power to bring a policy into their Discourse (if they have established one, of course).

But returning to the question of how we use information, what I find so exciting about Authentic Intellectual Work (AIW) is that it is an information gathering tool that seeks to measure what I would call the “good stuff” of teaching and learning: the conversations, the higher-order thinking, the student interest, social support. What’s more, the implementation framework actually establishes a Discourse around the use of the information, changing the way educators interact and centering the conversation around the empirically gathered information – not about thoughts, intentions, feelings, etc. Teachers are coached on how to see and understand the information that is already in their classrooms.

In a different turn on how we use information, Organizing Schools for Improvement uses data to show relationships in a way that I had never seen before. It was the first time I had seen a quantitative analysis of systems that even attempted to show synergistic effects, such as Figure 4.11 (p. 114), showing that schools strong on two supports did substantially better than those strong in just one or the other support. While I have struggled with accepting the use of math and reading scores as measures of “achievement,” I think the way it was used here has merit. Since the schools deemed “improving” were the ones in the top quartile, it does seem that this would represent genuine learning. It seems it would be hard to exclusively teach to the test and get into the highest quartile.


Chapter 3, Organizing Schools for Improvement

A little delay since the last two posts (Intro & Chapter 1, Chapter 2)… Christmas vacation = no preschool, so life has been very full of other things. Also, took a little trip. Here’s my view as I write this morning:

Life is good!

In Chapter 2, the authors introduced their five “essential supports”: 1) school leadership, 2) parent-school-community ties, 3) professional capacity, 4) student-centered learning climate, and 5) instructional guidance system. Note that these are supports, as in they provide conditions that “substantially influence” (p.79) the work of the school, but they do not directly cause the improvement. This is the nuance of a systems approach. The authors also make the point that they are essential and will use a quantitative methodology to show that improvement stagnates without them.

Because of this systems approach, the idea of “holding other factors constant” doesn’t make sense. If each support reinforces (or undermines) the others, holding the rest constant doesn’t actually give a sense of how the systems interact, sort of like trying to understand how a steering wheel functions independent of the wheels. This also means that statistical approaches designed to control for particular variables don’t work. Thus the approach used is “a form of analytic spiral” (p.80). Basically, the authors use a large longitudinal database of surveys and test scores to explore these supports. I would be interested to know all the other ideas they tried before coming up with their final analysis. It comes across as quite straightforward, but the process was no doubt complex.

Again, they use reading and math test scores, but only as an indicator of improvement if a school was in the top quartile or of stagnation if it was in the bottom quartile. This approach makes sense to me: schools at the top and bottom are obviously showing improvement or stagnation, whereas those in the middle are harder to parse. Perhaps as my quantitative fluency improves I will have a more critical eye for their methods, but for now I will take it as presented.
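To make the quartile logic concrete, here is a minimal Python sketch (school names and score gains are invented for illustration, not from the book’s data) of classifying schools as improving or stagnating while leaving the ambiguous middle unclassified:

```python
# Invented score gains for eight hypothetical schools.
gains = {
    "School A": 12.0, "School B": -3.0, "School C": 5.0, "School D": 9.0,
    "School E": 1.0,  "School F": -6.0, "School G": 7.0, "School H": 2.0,
}

def classify_by_quartile(gains):
    """Label only the top quartile 'improving' and the bottom quartile 'stagnating'."""
    ranked = sorted(gains, key=gains.get, reverse=True)
    q = len(ranked) // 4  # size of one quartile
    return {
        "improving": ranked[:q],     # clearest cases of improvement
        "stagnating": ranked[-q:],   # clearest cases of stagnation
    }

groups = classify_by_quartile(gains)
```

This mirrors the book’s conservative reading as I understand it: only the extremes get labeled, since middling gains are harder to interpret.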

A strength of their analysis is that they present both the schools that are improving and stagnating. This bolsters their argument because it shows that schools with high levels of the supports are more likely than chance to show significant improvement whereas schools with low levels of the supports are more likely than chance to show stagnation.

Most interesting to me was the cumulative effects of the supports. Through an aggregated indicator score for the supports compared to improvement in math, reading, and attendance, the authors show a distinct correlation with strength or weakness in the supports.

Figure from p. 94.