By Watson Scott Swail, President & CEO, Educational Policy Institute/EPI International
This week’s InsideHigherEd.com article, “CLA as ‘Catalyst for Change,’” describes a seven-year project by the Council of Independent Colleges (CIC) to use the CLA, or Collegiate Learning Assessment, to measure learning across a test-bed of its member institutions. The purpose of the project is to “propel” reform on campus through its findings and comparisons.
It is a noble idea. But it won’t work.
The CLA, developed under contract with the RAND Corporation, is a tool designed to measure what students have learned, on average, at institutions of higher education. So, think for a moment: given the sheer scale of the American higher education system, with its massive variation between colleges and universities, departments, satellite campuses, and more, how can one instrument tell us the “value added” of a college education?
It can’t, and that’s the problem.
The CLA was created to measure value added before and after college. The problem, naturally, is that the variance among institutions and programs is so wide, and the experiences so disparate, even within a single institution, that one measure like the CLA is almost meaningless. There is no “average” learning effect in higher education, because every student takes a different set of courses, from a different set of instructors, using different sets of resources. There is no “average” learning, and the CLA cannot pretend to be a measure of it.
If the CLA measures writing, for instance, the only way students improve as writers is by doing a lot of writing. And I mean a lot of writing, with expert instruction. I became a good writer by writing a lot. My writing improved greatly over the course of about 10 years. But it improved very little during my undergraduate program (I didn’t have to write much), and not much in my master’s program (same). It wasn’t until my doctoral work that I really started writing. It didn’t hurt that I was writing reports in an internship. I was a below-average writer who became an above-average writer, all by writing. Thus, I am concerned about what the CLA actually measures and about the policies that could be built on such limited-value data. How much a student writes depends on which program that student is enrolled in. How much math? The same thing.
If we really want to know how students are learning, and perhaps how colleges are doing, we need to keep the measurement at the department level. Let chemistry provide value-added outcomes for chemistry. Same with mathematics, physics, political science, history, and so on. The learning is far too specific to capture with the CLA or any other measure aimed at the institution “at large.”
We should get away from this understanding of “institutional” value. Institutions are not created equal, and most certainly neither are their parts. An institution may provide great value in one department or college but be crap in another. In fact, that would be the rule, not the exception.
The report itself seems to suggest that the CLA has had little impact, even as it boasts of effectiveness. That is probably because the measure is broad, while policies, in the end, need to operate at a surgical level.
If we really want to know what “value added” an institution provides a student, we need to talk about a national (or international) standard of learning: common standards for common courses, with common assessments. Then we would know. Something like NAEP-style testing, but it needs to be content specific.
It is that simple. Better colleges will have better results. The converse will prove true, too.
Have a nice weekend.