Articles this week:
Grover, S., & Pea, R. (2013). Computational thinking in K-12: A review of the state of the field. Educational Researcher, 42(1), 38-43.
Berland, M., & Lee, V. R. (2011). Collaborative strategic board games as a site for distributed computational thinking. International Journal of Game-Based Learning, 1(2), 65-81.
Resnick, M. (2012). Reviving Papert's dream. Educational Technology, 52(4), 42-46.
Since I’m familiar with computational thinking, I also read Berland & Lee’s article about students playing the board game Pandemic, and I went back to a Scratch project I haven’t worked on in a while and attempted a little debugging.
My favorite quote was this: “None of the groups understood the rules by reading through the guidebooks without attempting to play through the rules” (Berland & Lee, 2011). The idea of “playing through the rules,” I realized, is how I have approached learning with students because it’s how I approach my own learning. If it’s science, I need to see or do something. If it’s Twitter, sign me up and write a few tweets. If it’s Tinkercad, drag and drop a few objects, then ask why or how it works. I learn rules by interacting with them, not by thinking about them.
This low barrier to entry (sign up and start) is the idea of “low floor, high ceiling,” which has been “one of the guiding principles for the creation of programming environments for children … since the days of LOGO” (Grover & Pea, 2013). Whether it’s Tinkercad or pixel art, programs or suites of programs have embraced an easy entrance and seemingly unlimited complexity. (As an aside, this might be an interesting antidote to what Sennett negatively describes as our modern passion for consuming incredibly powerful devices that we never use to their full potential. We might be attracted to these simple programs for the possibility of being creative but never reach the high ceiling, like parents using Final Cut Pro to edit baby videos for Facebook. On the other hand, one look at the Tinkercad gallery or Scratch community shows sophisticated uses of these simple tools. Maybe the difference lies in the creative use, not just consumption, of the program?)
My Scratch project is a recreation of the old computer game Frogger: http://scratch.mit.edu/projects/15377582/. (I initially made this as an example for my students, then worked on it off and on during class when I was teaching Scratch. I kind of like that it’s buggy because kids see it in progress.) One of the bugs that I could not figure out was why the log sprites would translate up or down every time they bounced off the edges. In going back to it this week, I FINALLY figured it out! The “if on edge, bounce” block flips the orientation of the sprite (a rule that I didn’t know). Because my log sprites have transparent areas in their costumes and the paint is not evenly distributed across the footprint, flipping the orientation makes the log APPEAR to move down. I still haven’t solved the problem, but at least I know now what is going wrong!
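To see why the log only *appears* to jump, here is a minimal sketch (in Python, not Scratch) of the effect. It assumes a made-up grid “costume” where 1 is painted and 0 is transparent; the names `paint_center_y` and `log` are my own, just for illustration. The sprite’s registered position never changes when it bounces; only the costume flips, so the painted pixels’ center moves within the bounding box:

```python
# Illustrative model of the "if on edge, bounce" flip (not Scratch's actual code).
# A costume is a grid of rows; 1 = painted pixel, 0 = transparent padding.

def paint_center_y(costume):
    """Mean row index of the painted (non-transparent) cells."""
    rows = [r for r, row in enumerate(costume) for cell in row if cell]
    return sum(rows) / len(rows)

# A 4-row costume whose paint sits in the top half; the bottom half
# is transparent padding, so the paint is off-center in the footprint.
log = [
    [1, 1, 1],  # painted
    [1, 1, 1],  # painted
    [0, 0, 0],  # transparent
    [0, 0, 0],  # transparent
]

before = paint_center_y(log)        # paint sits near the top of the box
after = paint_center_y(log[::-1])   # vertical flip: same box, paint now at bottom

print(before, after)  # -> 0.5 2.5
```

The bounding box (and the sprite’s x/y position) is identical before and after the flip, but the visible paint’s center drops by two rows, which is exactly the downward “translation” I was chasing. Centering the paint in the costume would make the flip invisible.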
Why bother sharing this revelatory moment? As I already mentioned, I started work on this project almost two years ago, and I just figured this out. The idea that kids could be given two weeks to “complete” a project ignores the reality that learning is not linear and takes time. You need a lot more messing-around time to test and play with the rules than you need production time. If learners are only given production time, they have to try to learn and use the rules at the same time, which obfuscates both.
This has implications for assessment, which Grover and Pea (2013) mention as an issue without really offering guidelines. They cite “academic talk” as something that could be leveraged. Anecdotally, I certainly saw and heard this when my students played SimCity: the technical level of their conversations about urban development and sustainable cities was far above my expectations. SimCityEDU has attempted to assess this by using the data from students playing through a preset scenario. All the articles emphasize that computational thinking is a creative process, and I don’t think most traditional subject teachers have expertise in assessing creativity, but artists do. Art is absolutely rule driven, though to the uninitiated it might not seem this way. One important lesson we might learn from how art is taught is to spend time learning the medium before creating a final project. Again, traditional subjects often ask students to learn and produce at the same time.
So how do we assess open inquiry projects that integrate computational thinking, such as programming in Scratch or designing games? The way I have approached it is to ask students to document or tell the story of their process, whether through writing, screenshots, or screencasts, and then share explanations of the decisions they made and the result. This has a meta-cognitive bonus for the learner while also teaching me a lot about the platform! I have also had students pick out one piece of their Scratch or Minecraft project and share why they built it a certain way. Not only do they get a chance to reflect on their thinking, but their peers also get ideas for what to do and how to think about a problem. If we assess a discrete set of tasks, like testing whether students can use the “always if” block, we miss more important abilities, like systems thinking. It is the same as a student who can solve quadratic equations in a textbook but would never recognize one in a basketball shot or a rocket launch. If we are going to make progress in developing assessments for computational thinking, not just computational tasks, then we need to change the time scale for projects, separate rule learning/testing time from production time, and explore how artists teach and assess creativity in their domain.