Last week I spoke at a meeting of the Lumina Foundation’s Achieving the Dream Initiative, a gathering of policymakers from 15 states all working to improve the effectiveness of community colleges. At one point, a data working group shared results of its efforts to create new ways to measure college outputs. This was basically a new kind of report card, one capable of reporting results for different subgroups of students and enabling comparisons of outcomes across colleges. Something like it might someday replace the data collection currently conducted through IPEDS.
While it's always gratifying to see state policymakers engaging with data and thinking about how to use it in meaningful ways, I couldn’t help but feel that even this seemingly forward-thinking group was tending toward the status quo. The way we measure and report college outputs right now consistently reinforces a particular way of thinking: a framework that focuses squarely on colleges and their successes or failures.
What’s the matter with that, you’re probably wondering? After all, aren’t schools the ones we need to hold accountable for outcomes and improved performance? Well, perhaps. But what we’re really interested in, or at least what we should be interested in, is students and their successes or failures. If that's the case, then students, rather than colleges, need to be at the very center of our thinking and policymaking. Right now they aren't.
Let’s play this out a bit more. Current efforts are afoot to find ways of measuring college outcomes that make more colleges comfortable with measurement and accountability, and thus help bring them onboard. That typically means using measures that allow even the lowest-achieving colleges at least a viable opportunity for success, and using measures colleges feel are meaningful, related to what they think they’re supposed to be doing. An example: the three-year associate degree completion rate of full-time community college entrants deemed “college ready” by a standardized test. We can measure this for different schools and report the results. Where does that get us? We can then see which colleges have higher rates, and which have lower ones.
But then what? Can we then conclude some colleges are doing a better job than others? Frankly, no. It’s quite possible that higher rates at some colleges are attributable to student characteristics, or to contextual factors outside an individual college’s control (e.g., proximity to other colleges, the local labor market, the region). But that’s hard to get people to focus on when what’s simplest to see are differences between colleges.
It's not clear that this approach actually helps students. What if, instead, states reported outcomes for specified groups of students without disaggregating by college? How might the policy conversation change? Well, for example, a state could see a glaring statewide gap in college completion between majority and minority students. It would then (hopefully) move to the next step of looking for sources of the problem, trying to identify the areas with the greatest influence and the areas most amenable to policy intervention. This might lead analysts back to the colleges in the state to look for weak performers, but it might instead lead them to aspects of K-12 preparation, state financial aid policy, the organizational structure of the higher education system, and so on. The point is that in order to help students, states would need to do more than simply point to colleges and work to inspire them to change. They’d be forced to try to pinpoint the source(s) of the problems and then work on them. I expect the approaches would need to vary by state.
Don’t get me wrong, I’m not trying to absolve any college of responsibility for educating its students. What I’m suggesting is that we think hard about why the emphasis right now rests so heavily on relative college performance, an approach that embraces and even creates more institutional competition, rather than on finding efficient and effective ways to increase the success of our students. Are we over-utilizing approaches, often adopted without much critical thought, that reify and perpetuate our past mistakes? I think so.
Image Credit: www.openjarmedia.com