THE CHRONICLE OF HIGHER EDUCATION. JULY 10, 2013.

As policy makers demand more accountability from higher education, a group of higher-education leaders has been conducting an ambitious experiment in turning existing data into a set of five college-performance gauges that might make sense, and make good policy.
Over the past two years, the Voluntary Institutional Metrics Project has brought together the presidents of 18 institutions, including huge state universities, for-profit companies, and community colleges, to consider ways to develop measures that would give government officials a more accurate and nuanced understanding of how colleges and universities are doing.
A report released on Wednesday details the project’s progress toward establishing the five new metrics, which would measure repayment and default rates on student loans, students’ progress and program-completion rates, institutional cost per degree, employment outcomes for graduates, and learning outcomes at the program level, as measured by data like “core skills” evaluations and professional qualifying examinations.
College leaders already stagger under a data-reporting burden, but they also grouse about the one-size-fits-all statistical measures that sometimes result.
“I have a problem with the burden, but I have a bigger problem with data elements that aren’t representative of what I do,” said Ed Klonoski, president of Charter Oak State College, a public online institution in Connecticut that participated in the metrics project.
Charter Oak was joined by a diverse group of institutions of various sizes and sectors, including the University of Missouri at Columbia, the University of Maryland University College, for-profit companies like DeVry University and Capella University, and two-year colleges like Anne Arundel Community College. The institutions may have differing missions and financial models, but they share a dilemma.
“There’s a lot of data collection, but there’s not a lot of good, useful information,” said Michael J. Offerman, an independent consultant who helped coordinate work on the project. It was supported by the Bill & Melinda Gates Foundation, though no money went to participating institutions.
The participating institutions shared their own data, as reported to both governments and nongovernmental organizations, and tried to find ways to create metrics that would be comparable across institutions of different sizes, revenue sources, and missions. (None of the real institutional data used by the project were included in the report.)
For the student-loan repayment and default-rate metric, for example, the project started by using data from the Education Department’s Integrated Postsecondary Education Data System, or Ipeds, to predict an individual institution’s likely repayment and default rates, “input adjusted” for the nature of the students the institution serves. A college with a high percentage of Pell-eligible or first-generation students would not be expected to have the same successes as an elite private institution.
Then, the report proposes, the predicted rates would be compared side by side with the actual rates in a dashboard format to help create “a credible set of measures that you should look at holistically,” Mr. Offerman said.
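The input-adjusted benchmarking the report describes can be sketched in a few lines of code. This is a minimal illustration, not the project's actual model: it fits a one-variable least-squares line across a hypothetical peer group (default rate as a function of Pell-eligible share), then prints a dashboard-style comparison of each college's predicted and actual rates. All institution names and figures are invented for the example.

```python
# Hypothetical peer-group data: (name, share of Pell-eligible
# students, actual loan-default rate). Not real institutional data.
colleges = [
    ("College A", 0.15, 0.04),
    ("College B", 0.40, 0.09),
    ("College C", 0.55, 0.12),
    ("College D", 0.70, 0.16),
    ("College E", 0.30, 0.10),
]

# Ordinary least squares, y = a + b*x, fit across the peer group.
# The fitted line is the "input-adjusted" expectation: what default
# rate a college with a given student mix would typically show.
n = len(colleges)
xs = [c[1] for c in colleges]
ys = [c[2] for c in colleges]
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
a = my - b * mx

# Dashboard-style comparison: predicted (input-adjusted) vs. actual.
for name, pell, actual in colleges:
    predicted = a + b * pell
    flag = "above expected" if actual > predicted else "at/below expected"
    print(f"{name}: predicted {predicted:.1%}, actual {actual:.1%} ({flag})")
```

The point of the design, as the report suggests, is that the comparison is relative to expectation rather than to a single absolute cutoff: a college serving many Pell-eligible or first-generation students is judged against colleges with a similar student mix, not against elite privates.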
The more detailed and nuanced statistics the project proposes reflect measures that the participating presidents consider not only important but also fairer.
During the project, “I recall there being a conversation to the effect of, ‘If we’re going to have measures out there, let us define them so that they make sense,’” Mr. Offerman said.
The current metrics available to policy makers, even when statistically accurate, don’t always provide a true picture of how individual institutions are performing, Mr. Klonoski said. Charter Oak boasts an admirable 66-percent six-year graduation rate, but in practical terms “that’s not a real number,” he said. His students are predominantly adults transferring from other institutions—a group that also happens to be invisible in the federal Ipeds data, which focus on first-time, full-time students who start and finish at the same college.
Performance-Based State Support
The quality of such metrics is increasingly important as more states consider allocating their support for higher education on the basis of individual colleges’ performance. Tennessee has already enacted such a law.
“The reality is, performance funding is here, and it’s not going away,” Mr. Klonoski said. Crude measures might penalize colleges that enroll students who are less likely to graduate and reward colleges with a better-prepared and better-motivated student body. If legislators looked at the sort of comprehensive set of metrics proposed by the Voluntary Institutional Metrics Project “and you saw my institution move up in categories where it needed to improve and stay strong in categories in which it was good, it would be much easier to say, yes, they are performing across multiple metrics,” Mr. Klonoski said.
But the project’s report is candid about the challenges faced in trying to create workable metrics beyond the bounds of a small-scale experiment. Calculations of institutional cost per degree could not include an institution’s capital costs, among a number of other complications involving college budgeting. Reliable gathering of data on postgraduation employment is “not widespread, consistent, or well documented,” the report says.
The report even acknowledges defeat in coming up with a workable metric for learning outcomes. Plans to collect data from a variety of sources down to the program-specific level were eventually shelved. “We couldn’t find enough existing data that works in multiple institutions to be comparative,” Mr. Klonoski said.
Mr. Offerman considers the project a success, though he added that making the proposed metrics a reality would involve “some real heavy lifting” by institutions and governments to improve the available data without unduly increasing the reporting burden. At least the report will “carry the message forward that there is a way to make sense of all the data,” he said.
But such metrics—and the effort involved in creating them—would prove rewarding for higher education in many ways, Mr. Klonoski said. Not only would they offer “some internal comfort about how performance funding, or whatever you want to call it, will occur—at the state and federal level,” he said, but “it really helps you dig into who you’re serving and how you’re serving them.”