Originally posted on April 10, 2016, on LindaSuskie.com
You may have seen Bob Shireman's essay "SLO Madness" in the April 7 issue of Inside Higher Ed or his report, "The Real Value of What Students Do in College." I sent him the following response today:
I first want to point out that I agree wholeheartedly with a number of your observations and conclusions.
1. As you point out, policy discussions too often “treat the question of quality—the actual teaching and learning—as an afterthought or as a footnote.” The Lumina Foundation and the federal government use the term “student achievement” to discuss only retention, graduation, and job placement rates, while the higher ed community wants to use it to discuss student learning as well.
2. Extensive research has confirmed that students’ engagement in their learning affects both learning and persistence. You cite Astin’s 23-year-old study; it has since been validated and refined by Vincent Tinto, Patrick Terenzini, Ernest Pascarella, and the staff of the National Survey of Student Engagement, among many others.
3. At many colleges and universities, there’s little incentive for faculty to try to become truly great teachers who engage and inspire their students. Teaching quality is too often judged largely by student evaluations that may have little connection to research-informed teaching practices, and promotion and tenure decisions are too often based more on research productivity than teaching quality. This is because there’s more grant money for research than for teaching improvement. A report from Third Way noted that “For every $100 the federal government spends on university-led research, it spends 24 cents on teaching innovation at universities.”
4. We know through neuroscience research that memorized knowledge is quickly forgotten; thinking skills are the lasting learning of a college education.
5. “Critical thinking” is a nebulous term that, frankly, I’d like to banish from the higher ed lexicon. As you suggest, it’s an umbrella term for an array of thinking skills, including analysis, evaluation, synthesis, information literacy, creative thinking, problem solving, and more.
6. The best evidence of what students have learned is in their coursework—papers, projects, performances, portfolios—rather than in what you call “fabricated outcome measures” such as published or standardized tests.
7. You call for accreditors to “validate colleges’ own quality-assurance systems,” which is exactly what they are already doing. Many colleges and universities offer hundreds of programs and thousands of courses; it’s impossible for any accreditation team to review them all. So evaluators often choose a random or representative sample, as you suggest.
8. Our accreditation processes are far from perfect. The decades-old American higher education culture of operating in independent silos and evaluating quality by looking at inputs rather than outcomes has proved a remarkably difficult ship to turn around, despite twenty years of earnest effort by accreditors. There are many reasons for this, which I discuss in my book Five Dimensions of Quality, but let me share two here. First, US News & World Report’s rankings are based overwhelmingly on inputs rather than outcomes, and those rankings correlate strongly with institutional age and wealth. Second, most accreditation evaluators are volunteers, and training resources for them are limited. (Remember that everyone in higher education is trying to keep costs down.)
9. Thus, despite a twenty-year focus by accreditors on requiring useful assessment of learning, there are still plenty of people at colleges and universities who don’t see merit in looking at outcomes meaningfully. They don’t engage in the process until accreditors come calling; they continue to have misconceptions about what they are to do and why; and they focus blindly on giving accreditors whatever they think accreditors want rather than using assessment as an opportunity to examine teaching and learning usefully. This has led to the sad anecdotes you share about convoluted, meaningless processes. Using Evidence of Student Learning to Improve Higher Education, a book by George Kuh and his colleagues, is full of great ideas on how to turn this culture around and make assessment work truly meaningful and useful to faculty.
10. Your call for reviews of majors and courses is sound; indeed, a number of regional accreditors and state systems already require academic programs to engage in periodic “program review.” There’s room for improvement, however. Many program reviews follow the old “inputs” model, counting library collections, faculty credentials, lab facilities, and the like, and do not yet focus sufficiently on student learning.
Your report has some fundamental misperceptions, however. Chief among them is your assertion that the three-step assessment process—declare goals, seek evidence of student achievement of them, and improve instruction based on the results—“hasn’t worked out that way. Not even close.” Today there are faculty and staff at colleges and universities throughout the country who have completed these three steps successfully and meaningfully. Some of these stories are documented in the periodical Assessment Update, some are documented on the website of the National Institute for Learning Outcomes Assessment (www.learningoutcomeassessment.org), some are documented by the staff of the National Survey of Student Engagement, and many more are documented in reports to accreditors.
In dismissing student learning outcomes as “meaningless blurbs” that are the key flaw in this three-step process, you are dismissing what a college education is all about and what we need to verify. Student learning outcomes are simply an attempt to articulate what we most want students to get out of their college education. Contrary to your assertion that “trying to distill the infinitely varied outcomes down to a list… likely undermines the quality of the educational activities,” research has shown that students learn more effectively when they understand course and program learning outcomes.
Furthermore, without a clear understanding of what we most want students to learn, assessment is meaningless. You note that “in college people do gain ‘knowledge’ and they gain ‘skills,’” but are they gaining the right knowledge and skills? Are they acquiring the specific abilities they most need “to function in society and in a workspace,” as you put it? While, as you point out, every student’s higher education experience is unique, there is nonetheless a core of competencies that we should expect of all college graduates and whose achievement we should verify. Employers consistently say that they want to hire college graduates who can:
• Collaborate and work in teams
• Articulate ideas clearly and effectively
• Solve real-world problems
• Evaluate information and conclusions
• Be flexible and adapt to change
• Be creative and innovative
• Work with people from diverse cultural backgrounds
• Make ethical judgments
• Understand numbers and statistics
Employers expect colleges and universities to ensure that every student, regardless of his or her unique experience, can do these things at an appropriate level of competency.
You’re absolutely correct that we need to focus on examining student work (and we do), but how should we decide whether the work is excellent or inadequate? For example, everyone wants college graduates to write well, but what exactly are the characteristics of good writing at the senior level? Student learning outcomes, articulated in rubrics (scoring guides) that define excellent, adequate, and unsatisfactory levels of performance, are vital to making this determination.
You don’t mention rubrics in your paper, so I can’t tell if you’re familiar with them, but in the last twenty years they have revolutionized American higher education. When student work is evaluated according to clearly articulated criteria, the evaluations are fairer and more consistent. Higher education curriculum and pedagogy experts such as Mary-Ann Winkelmes, Barbara Walvoord, Virginia Anderson, and L. Dee Fink have shown that, when students understand what they are to learn from an assignment (the learning outcomes), when the assignment is designed to help them achieve those outcomes, and when their work is graded according to how well they demonstrate achievement of those outcomes, they learn far more effectively. When faculty collaborate to identify shared learning outcomes that students develop in multiple courses, they develop a more cohesive curriculum that again leads to better learning.
Beyond having clear, integrated learning outcomes, there’s another critical aspect of excellent teaching and learning: if faculty aren’t teaching something, students probably aren’t learning it. This is where curriculum maps come in; they’re a tool to ensure that students do indeed have enough opportunity to achieve a particular outcome. One college that I worked with, for example, identified (and defined) ethical reasoning as an important outcome for all its students, regardless of major. But a curriculum map revealed that very few students took any courses that helped them develop ethical reasoning skills. The faculty changed curricular requirements to correct this and ensure that every student, regardless of major, graduated with the ethical reasoning skills that both they and employers value.
I appreciate anyone who tries to come up with solutions to the challenges we face, but I must point out that your thoughts on program review may be impractical. External reviews are difficult and expensive. Keep in mind that larger universities may offer hundreds of programs and thousands of courses, and for many programs it can be remarkably hard—and expensive—to find a truly impartial, well-trained external expert.
Similarly, while a number of colleges and universities already subject student work to separate, independent reviews, this can be another difficult, expensive endeavor. With college costs skyrocketing, I question the cost-benefit: are these colleges learning enough from these reviews to make the time, work, and expense worthwhile?
I would add one item to your wish list, by the way: I’d like to see every accreditor require its colleges and universities to expect faculty to use research-informed teaching practices, including engagement strategies, and to evaluate teaching effectiveness based on faculty use of those practices.
But my chief takeaway from your report is not about its shortcomings but about how the American higher education community has failed to tell you, other policy thought leaders, and government policy makers what we do and how well we do it. Part of the problem is that, because American higher education is so huge and complex, we have a complicated, messy story to tell. None of you has time to do a thorough review of the many books, reports, conferences, and websites that explain what we are trying to do and how effective we are. We have to figure out a way to tell our very complex story in short, simple ways that busy people can digest quickly.