Originally posted August 14, 2014, on LindaSuskie.com
In my July 30 blog post, I discussed the key findings of a study on rubric validity reported in the June/July 2014 issue of Educational Researcher. In addition to the study’s major findings, a short statement on how the rubric under study was developed caught my attention:
“A team…developed the new rubric based on a qualitative analysis of approximately 100 exemplars. The team compared the exemplars to identify and articulate observed, qualitative differences…”
I wish the authors had fleshed this out a bit more, but here’s my take on how the rubric was developed. The process began, not with the team brainstorming rubric criteria, but by looking at a sample of 100 student papers. I’d guess that team members simply took notes on each paper: What in each paper struck them as excellent? Mediocre but acceptable? Unacceptably poor? Then they probably compiled all the notes and looked through them for themes. From these themes came the rubric criteria and the performance levels for each criterion…which, as I explained in my July blog post, varied in number.
I’ve often advised faculty to take a similar approach. Don’t begin the work of developing a rubric with an abstract brainstorming session or by looking at someone else’s rubric. Start by reviewing a sample of student work. You don’t need to look at 100 papers—just pick one paper, project, or performance that is clearly outstanding, one that is clearly unacceptable, and some that are in between. Take notes on what is good and not-so-good about each and why you think they fall into those categories. Then compile the notes and talk. At that point—once you have some basic ideas of your rubric criteria and performance levels for each criterion—you may want to consider looking at other rubrics to refine your thinking (“Yes, that rubric has a really good way of stating what we’re thinking!”).
Bottom line: A rubric that assesses what you and your colleagues truly value will be more valid, useful, and worthwhile.