Originally posted on October 16, 2015, on LindaSuskie.com
I recently came across two ideas that struck me as simple solutions to an ongoing frustration I have with many rubrics: too often they don't make clear, in compelling terms, what constitutes minimally acceptable performance. This is a big issue, because you need to know whether student work is adequate before you can decide what improvements in teaching and learning are called for. And your standards need to be defensibly rigorous, or you risk passing and graduating students who are unprepared for whatever comes next in their lives.
My first "aha!" insight came from a LinkedIn post by Clint Schmidt. Talking about ensuring the quality of coding "bootcamps," he suggests, "set up a review board of unbiased experienced developers to review the project portfolios of bootcamp grads."
This basic idea could be applied to almost any program. Put together a panel of the people who will be dealing with your students after they pass your course, after they complete your gen ed requirements, or after they graduate. For many programs, including many in the liberal arts, this might mean workplace supervisors from the kinds of places where your graduates typically find jobs. For other programs, this might mean faculty in the bachelor's or graduate programs your students move into. The panels would not necessarily need to review full portfolios; they might review samples of senior capstone projects or observe student presentations or demonstrations.
The cool thing about this approach is that many programs are already doing this. Internship, practicum, and clinical supervisors, local artists who visit senior art exhibitions, local musicians who attend senior recitals--they are all doing a variation of Schmidt's idea. The problem, however, is that the rating scales they're asked to complete are often so vaguely defined that it's unclear which rating constitutes what they consider minimally acceptable performance.

And that's where my second "aha!" insight comes into play. It's from a ten-year-old rubric developed by Andi Curcio to assess a civil complaint assignment in a law school class [as of 8/11/2023, no longer available online]. Her rubric has three columns with typical labels (Exemplary, Competent, Developing), but each label goes further.
"Exemplary" is "advanced work at this time in the course - on a job the work would need very little revision for a supervising attorney to use."
"Competent" is "proficient work at this time in the course - on a job the work would need to be revised with input from supervising attorney."
And "Developing" is "work needs additional content or skills to be competent - on a job, the work would not be helpful and the supervising attorney would need to start over."
Andi's simple column labels make two things clear: what is considered adequate work at this point in the program, and how student performance measures up to what employers will eventually be looking for.
If we can craft rubrics that clearly define the minimum level students must reach to succeed in their next course, their next degree, their next job, or whatever else comes next in their lives, and if we bring in the people who actually work with our students at those points to help assess their work, we will go a long way toward making assessment even more meaningful and useful.