The American Association for the Advancement of Science (the publisher of Science) has put together a website that catalogs common science misconceptions, and I love it. It contains over 600 multiple-choice questions along with the distribution of answers received from “a large sample size” (I couldn’t find the exact number), disaggregated by grade level, gender, and language proficiency. The data tells some incredible stories about the state of science understanding in our schools. The questions are protected by registration because the assessment could lose value if broadly distributed, but anyone can create an account if they’re willing to promise not to divulge the questions. I highly recommend it. Here is a taste of some of the concepts students know well and some of the most commonly held misconceptions.
The questions are organized first by broad topic, then by big ideas, and finally by specific knowledge goals. The goals are written in plain language and cleanly isolate concepts. Clicking on a goal reveals a question used to diagnose the desired knowledge, and a sidebar shows common incorrect answers, the misconceptions they indicate, and a graph of student performance. I’ve been exploring the data set by first trying my hand at each question, and I’m sad to report that I have come across some misconceptions that I held.
Perhaps the greatest thing about this website is that it allows instructors to add items to an item bank and automatically generate PDFs of the questions they’ve selected. The tool is thus not just an academic exercise; it’s something very practical. Writing assessments is difficult, and having an instrument developed this carefully could be very helpful for getting feedback from students and targeting what they don’t know. Though I was bummed to find out I held some misconceptions, I’m thrilled to think that I just learned a thing or two that I may never have realized I didn’t know! Good assessments used for diagnostics can greatly help the learning process.
When do we get this kind of analysis and diagnostic tool for mental models of computation? I know there have been some attempts at concept inventories (at least one), and others at language-agnostic programming knowledge tests, but I don’t think any of those tools has yet risen above the level of academic exercise. When one does emerge, it had better be presented at least as well as this science assessment; after all, isn’t creating usable computational artifacts one of the broad goals of our discipline?