Determining what makes a good chocolate chip cookie and what makes a good college education might have more in common than you think.
POLITICO. Feb 21, 2014.
Analogies (and advice) abound for the Obama administration’s forthcoming college rating system. It will be a Consumer Reports for colleges. Or a Good Housekeeping seal of approval. Or a higher education version of the exhaustive Cook’s Illustrated reviews of skillets and blenders for every budget.
The Education Department says it doesn’t yet know what the ratings will look like, although they’re expected to be unveiled this spring. The comparisons above are just a few that have been offered as the department undertakes the project, perhaps to distinguish the ratings from better-known college rankings.
The difference: Rankings order colleges from best to worst. It’s a big deal if Yale falls behind Harvard, or the University of California-Berkeley gains on Princeton. Ratings simply say whether something is good, not whether it’s the best.
“The ratings system won’t highlight trivial differences between elite institutions or heavily reward schools based on the number of students they turn away,” Education Secretary Arne Duncan said in December.
Some of the ratings analogies have even been embraced by department officials as possible comparisons or sources of inspiration: Acting Under Secretary Jamienne Studley, who is leading the project, is fond of the Cook’s Illustrated suggestion. But those parallel cases also carry their own examples of the promises and pitfalls of judging quality.
There’s also an underlying question: Who is the rating system really for? Duncan and other officials insist that it’s meant to influence consumer behavior, like other, more incremental higher education policies from the Obama administration: a “shopping sheet” to help students compare financial aid offers and a “scorecard” that clearly displays a given college’s average debt at graduation, graduation rates and other vital statistics.
Experts have said that a rating system might have a better chance at nudging colleges to improve — especially if the ratings eventually end up tied to federal financial aid, the Education Department’s holy grail.
But if they stay consumer-focused, the Education Department is entering a tough market. Many of the students the Education Department helps the most — low-income students who are less savvy about the admissions process than their better-off peers — don’t consult rankings or ratings to inform their choices at all, experts said at an event last month. The students who do pore over those guides have plenty to choose from, including not only U.S. News & World Report but also the Princeton Review and Washington Monthly.
“A problem that I think we face with any rating system is clarity in whose behavior we are trying to affect,” said Tod Massa, policy research and data warehousing director at the State Council of Higher Education in Virginia, which has collected reams of information on graduates from within the state.
Consumer Reports is a prolific rater. It’s best-known for reviewing cars, mattresses, refrigerators and other pricey, durable consumer goods, and the company provides volumes of information on how it does so.
Comparing cars to college educations is common: Economists of higher education often point out that people borrow nearly as much for a new mid-size sedan as they would for college. Transparency measures such as the “shopping sheet,” meant to display average loan debt and graduation rates, have been compared to the window sticker on new cars that provides information on gas mileage. Some object to the analogy, arguing that referring to students as “consumers” devalues education.
When it comes to ratings, the comparison could fall apart.
At Consumer Reports, cars go for a rigorous test-drive that measures everything from the expected — acceleration, braking and safety — to the often-overlooked (“Staff members of different sizes judge how easy it is to get comfortably situated behind the steering wheel,” the company explains). They take the car through its paces at a 327-acre test center in rural Connecticut with dozens of engineers at the ready to examine the transmission.
The Education Department can’t test-drive a four-year college education. Nor can it send a small army of bureaucrats to every college campus — roughly 7,000 of them — a suggestion made satirically at a recent hearing on accreditation, higher education’s main form of quality control.
That’s where the chocolate chip cookie comes in.
Here’s how Consumer Reports rates food: “We develop standards for how an excellent product should taste. The criteria define a range of attributes acceptable for an excellent product. For example, an excellent chicken noodle soup may have long or short noodles, as long as they aren’t mushy. An excellent chocolate chip cookie may taste buttery or not. A garlicky beef hot dog may be excellent, but so may a smoky pork or poultry one. We don’t pretend to know our readers’ particular likes and dislikes.”
The Education Department has started to do the same, defining a quality education based both on its worth for individual students (will they graduate, and can they get jobs and pay back their loans?) and its value to society (does the college do a good job getting low-income students in and through?).
But there are also more complex questions about quality, and health care is one field that has tried to answer them. Here, there’s a more direct parallel: The Centers for Medicare and Medicaid Services have a five-star rating system for nursing homes, which rates facilities on nine different quality measures.
That was the only parallel rating system outside higher education that the Education Department explored when gathering a panel of technical experts. Dana Mukamel, a professor at the University of California at Irvine who has served on task forces to design the report cards, discussed some of the possible parallels.
But she also highlighted pitfalls, particularly that even a system designed with the best of intentions might not be understood by consumers. One of the measures used for nursing home quality is the frequency of bedsores among patients. But consumers misunderstood, thinking a higher percentage meant better outcomes, Mukamel said.
Another risk, one that has cropped up when state governments try to rate public and charter schools from A through F, is that the public wants to see its own value system reflected in the ratings.
Washington Monthly’s college rankings are meant to provide an alternative to U.S. News & World Report, in part by reflecting the role colleges play in society. Colleges get a boost if they admit and graduate low-income students, and if their graduates choose careers in public service.
By those measures, the best research universities in the country are the University of California at San Diego, the University of California at Riverside and Texas A&M University. Stanford is sixth. Harvard is eighth. And Robert Kelchen, an assistant professor of higher education at Seton Hall University, said he’d be surprised if even six of the nation’s 21 million students used the Washington Monthly rankings to choose a college.
A rating system won’t have to confront the jockeying for position that rankings do. But consumers still expect to see Ivy League schools rated highly, said Kelchen, who helped design the Washington Monthly rankings.
Otherwise: “I just don’t think they match up with what students think reality should be.”