CAPPS - Advocacy and Communication Professional Development

California Association of Private Postsecondary Schools

What Experts on College-Ratings System Mean by ‘We Need Better Data’

02/14/2014

The Chronicle of Higher Education. Feb 14, 2014.

If any consensus arose last week at the Education Department’s daylong symposium on the technical challenges facing the Obama administration’s college-ratings system, it was on the need for better data about colleges and universities.

Tod R. Massa captured the sentiment in the opening line of his presentation: “To the department, I say this: We need better data. Let me rephrase that: You need better data.”

Mr. Massa, who directs policy research and data warehousing for the State Council of Higher Education for Virginia, echoed other data experts when he highlighted the gaps in data the department collects through its Integrated Postsecondary Education Data System, or Ipeds.

In some of the most important measures of college accountability—graduation rates, net prices, postgraduate wages, and community-college outcomes—the Ipeds data fall short, the experts said.

Several experts who spoke at the symposium, including Mr. Massa, said a unit-record data system that could track every student’s progress was the best solution to the bad-data problem. Alas, such a system is currently prohibited by law.

So what are the shortcomings of the Ipeds data? And absent a unit-record system, how can the Education Department improve the data it collects? In this and the next few posts, we’ll try to answer those and other questions by exploring in depth several of the most important college metrics and the roadblocks standing in the way of better data on college access, affordability, and outcomes.

Measuring Graduation Rates

Let’s start with graduation rates, one of the most relied-upon measures of outcomes in higher education. Ipeds calculates graduation rates using cohorts of first-time, full-time degree- or certificate-seeking students. The department asks colleges to report the number of students in each cohort who graduate with a degree or certificate in 100 percent, 150 percent, and 200 percent of “normal time,” which translates to four, six, and eight years, respectively, at a four-year college.
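As a rough illustration of that calculation (this is a sketch with made-up cohort data, not Ipeds' actual methodology or code), the rate at each threshold is simply the share of the entering cohort that finished within the allowed window:

```python
# Hypothetical cohort: (entry_year, graduation_year or None if no degree).
# At a four-year college, "normal time" is four years, so the Ipeds
# thresholds of 100%, 150%, and 200% translate to 4, 6, and 8 years.
cohort = [
    (2005, 2009), (2005, 2010), (2005, 2011), (2005, 2012),
    (2005, None), (2005, 2014), (2005, None), (2005, 2008),
]

def grad_rate(cohort, years_allowed):
    """Share of the entering cohort that graduated within the window."""
    completed = sum(
        1 for entry, grad in cohort
        if grad is not None and grad - entry <= years_allowed
    )
    return completed / len(cohort)

NORMAL_TIME = 4  # years, for a four-year institution
for pct in (100, 150, 200):
    window = NORMAL_TIME * pct // 100
    print(f"{pct}% of normal time ({window} yr): {grad_rate(cohort, window):.0%}")
```

Note what the calculation cannot say: the student who finished in year nine, and the two with no degree recorded, all land in the same undifferentiated remainder — which is exactly the gap the experts flagged.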

So what’s wrong with that? For one thing, the Ipeds graduation number can leave many students unaccounted for, said Christine M. Keller, associate vice president for academic affairs at the Association of Public and Land-Grant Universities.

Consider a four-year public university with a six-year graduation rate of 47 percent. What happened to the remaining 53 percent of students? The Ipeds graduation-rate data don’t tell us anything about them (except for a widely mistrusted transfer-rate figure, which we’ll get to in a minute).

Ms. Keller and her association, on behalf of several other major higher-education groups, advocate a data-collection system that would track more than just the number of students who graduated from the college where they started.

The proposed system, known as the Student Achievement Measure, would also capture how many students were still enrolled in the same college, how many transferred and graduated from another institution, and how many transferred and were still enrolled at another institution. That approach would leave a much smaller group of unknowns.
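A small sketch shows how those extra categories shrink the unknowns (the category names and data here are illustrative, not the Student Achievement Measure's official fields):

```python
from collections import Counter

# Hypothetical student-level outcomes, bucketed along the lines the
# Student Achievement Measure proposes to report.
outcomes = [
    "graduated_here", "graduated_here", "still_enrolled_here",
    "transferred_and_graduated", "transferred_still_enrolled",
    "graduated_here", "unknown", "transferred_and_graduated",
]

counts = Counter(outcomes)
total = len(outcomes)
for category, n in counts.most_common():
    print(f"{category:28s} {n / total:.0%}")

# Under an Ipeds-only view, everyone who didn't graduate from the starting
# institution is unexplained; under the SAM-style view, only the true
# "unknown" bucket remains.
ipeds_unknown = 1 - counts["graduated_here"] / total
sam_unknown = counts["unknown"] / total
print(f"unexplained, Ipeds view: {ipeds_unknown:.0%}")
print(f"unexplained, SAM view:   {sam_unknown:.0%}")
```

In this toy cohort the unexplained share drops from 62 percent to 12 percent once transfers and continuing students are counted as known outcomes.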

Like other experts, Ms. Keller proposed a “limited, secure system for collecting student-level data,” a carefully worded call for a unit-record-like system, to achieve those goals.

A ‘Good Metric’ for Community Colleges

Some experts pointed out that the measures for evaluating two-year colleges should be different from those used for four-year institutions.

Patrick Perry, a vice chancellor of the California Community Colleges—the largest public higher-education system in the country, with more than two million students—criticized the transfer-rate metric in Ipeds, an important data point for community colleges, where many students enroll with the goal of transferring to a four-year institution.

Ipeds treats all transfers alike and counts transferring as an outcome subordinate to earning a degree or certificate. But, as Mr. Perry said, not all transfers are equal.

“A lateral transfer [to another two-year or less program] should not be treated the same as an upward transfer [to a four-year college],” he said.

Getting a better transfer rate is key to making any ratings system relevant to community-college students, said Thomas R. Bailey, director of the Community College Research Center at Columbia University’s Teachers College.

“If we don’t have a good measure of transfer, then it’s really not going to be a good metric for community colleges,” Mr. Bailey said.

Additionally, the department should collect data much further out than 200 percent of “normal” graduation time, Mr. Perry said.

“A student who is enrolled at six credits per term is going to get an associate’s degree in 10 years,” Mr. Perry said. “We don’t like students to take that long, and we discourage it, but from a state-resource standpoint, it doesn’t take any more state resources.”

The ‘Largest Investment’ in Higher Education

Both Ms. Keller’s and Mr. Perry’s goals would be much easier to accomplish with a unit-record system, which could track exactly where students enroll after they leave one institution and whether they ultimately graduate with a four-year degree.

But even under the current institution-level system, the Education Department could fill a major hole in the graduation-rate data, said Patrick J. Kelly, senior associate at the National Center for Higher Education Management Systems.

“The federal government doesn’t collect completion rates for Pell [Grant] recipients,” Mr. Kelly said. “This is the government’s largest investment in higher education.”

Ipeds collects graduation-rate data by gender, race and ethnicity, and even for athletes. But the government doesn’t collect data on how well colleges graduate the neediest students.

“It’s really a tragedy from a collection standpoint,” Mr. Kelly said.

Virginia, by contrast, measures three different graduation rates for public institutions: the graduation rate for students with Pell Grants, the rate for students receiving other financial aid, and the rate for students receiving no financial aid.
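The disaggregation Virginia performs amounts to computing the same graduation rate three times over different groups; a minimal sketch, with hypothetical records rather than Virginia's actual data or field names:

```python
# Hypothetical records of (aid_status, graduated) pairs, sketching a
# Virginia-style breakdown: Pell recipients, students with other aid,
# and students with no aid.
students = [
    ("pell", True), ("pell", False), ("pell", True), ("pell", False),
    ("other_aid", True), ("other_aid", True), ("other_aid", False),
    ("no_aid", True), ("no_aid", True), ("no_aid", True), ("no_aid", False),
]

def rates_by_status(students):
    """Graduation rate within each financial-aid group."""
    groups = {}
    for status, graduated in students:
        done, total = groups.get(status, (0, 0))
        groups[status] = (done + int(graduated), total + 1)
    return {status: done / total for status, (done, total) in groups.items()}

for status, rate in rates_by_status(students).items():
    print(f"{status:10s} {rate:.0%}")
```

Publishing the three numbers side by side is what "brings attention to the differences" — a gap between the Pell rate and the no-aid rate is visible at a glance.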

“The intent is to bring attention to the differences while requiring institutions to bring those three measures in line with each other,” said Mr. Massa of the State Council of Higher Education for Virginia.

That is, perhaps, the most important—and most likely—first step to improve the current Ipeds graduation-rate data. It’s a step that would not necessarily require a unit-record system but would still shed much-needed light on how well colleges serve lower-income students.

The Higher Education Opportunity Act of 2008 requires that colleges collect graduation rates for Pell Grant recipients and disclose them if requested, but that crucial piece of information hasn’t yet been incorporated into Ipeds. According to the experts at last week’s symposium, it’s long past time to do so.

“There really should be no reason why students admitted to an institution, allegedly believed to be able to do the required work and succeed, should have substantially different graduation rates based on their financial-aid status,” Mr. Massa said.

Next week we’ll look at net prices and how that widely used calculation obscures the reality of college affordability and access.