Originally posted on July 31, 2015, on LindaSuskie.com
I’ve been working with a number of colleges on assessing their gen ed or institution-wide learning outcomes, and I’ve concluded that what many are doing is way too complicated. Typically colleges decide to use rubrics (often the AAC&U VALUE rubrics or a modification of them) to assess gen ed or institution-wide learning outcomes. Then they have faculty submit samples of student work. Then one or more groups of faculty use the rubrics to score the student work samples.
If this strategy works, there’s nothing wrong with it. But I’m seeing too many colleges where this process isn’t working well.
At some colleges, faculty submitting samples are largely disconnected from the assessment process, so they don’t feel ownership. Assessment is something “done” to them.
It’s hard to come up with a rubric that’s meaningfully applicable to student work drawn from many different courses. So, at some colleges, the rubric results don’t mean much to many faculty, which makes it hard to use those results to drive meaningful, broad improvements in teaching and learning.
At many of these colleges, student work samples are submitted into an assessment data management system. These systems, chosen and implemented wisely, can be great time-savers. But too often I’m seeing faculty required, rather than encouraged, to use these systems. They’re required to use rubrics, or to use rubrics with a particular format, or to report on what they’ve done in a particular way—all of which may not fit well with what they’re doing. Square pegs are being pushed into round holes.
Using standard assessment structures encourages comparisons that may be inappropriate. Should we really compare students’ critical thinking skills in literature courses with those in chemistry courses?
What I’m increasingly recommending is a bottom-up, qualitative approach to assessing gen ed and institution-wide learning outcomes. Let faculty in each course or program develop a rubric or other assessment that is meaningful to them—that reflects college-wide learning outcomes through the lens of what they are trying to teach. That kind of rubric can be used both for grading and for broader assessment.
(An important caveat here: I said “course” and not “class.” Faculty teaching sections of the same course should be collaborating to identify and implement an appropriate strategy to assess key gen ed or institutional learning outcomes in all sections of the course.)
Then have a faculty group review the reports of these assessments holistically and qualitatively for recurring themes. I’ve done this myself, and things always pop out. At one college I visited, students repeatedly struggled to integrate their learning—pull the pieces together and see the big picture. At another, students repeatedly struggled with analysis, especially with data. The findings, gleaned from human rather than system review, were clear and “actionable”—they could lead to institution-wide discussions and decisions on strategies to improve students’ integration or data analysis skills.
So if a standardized, centralized approach to assessing gen ed or institutional outcomes is working for your institution, don’t mess with success. But if it seems cumbersome, time-consuming, and not all that helpful, consider a less structured, decentralized approach.