Shouldn’t student happiness be the simplest reflection of whether a college is “good” or not?

By Zach Spindler-Krage spindler@grinnell.edu

When I was accepted into Grinnell College, the first thing I did was look at its ranking. As I weighed my options and approached a final decision, I subconsciously compared schools based on my perception of their prestige. While rankings were far from the only factor in my eventual selection of Grinnell, they certainly played a role in my choice.

U.S. News & World Report, publisher of the purported gold standard of college rankings, currently places Grinnell College at #15 out of more than 200 liberal arts colleges across the United States. That position prompts a seemingly simple takeaway: Grinnell College is worse than 14 other liberal arts colleges, and it is better than the majority. Yet this interpretation is a gross simplification of what it means for a college to be “good.”

As soon as U.S. News began placing colleges into an ordered list, it created the assumption that some schools are inherently better than others. However, the establishment of the “best colleges,” which is intended to be an objective list, rests on a subjective methodology. According to U.S. News, a fifth of each college’s overall grade is based on “expert opinion” and “academic reputation.” For liberal arts colleges, this score is calculated by sending a comprehensive list of all the liberal arts colleges in the country to each college’s president, provost and dean of admissions. At each participating school, these three individuals rate each of the hundreds of schools on a scale of one to five. The average score across all respondents then becomes 20% of the ranking equation.

This process raises the question of how these three individuals can possibly know enough about the other colleges to accurately score their reputations. If most of these individuals are assigning scores without knowing meaningful details about a college, it is likely that they are simply relying on the prior year’s rankings to guide their perception of its reputation.

As a result, the rankings process becomes cyclical. The colleges at the top of the list generally remain at the top because their high ranking gives them a good reputation, and their good reputation subsequently gives them a high ranking again the next year.

Much of the other data that U.S. News uses — graduation rate, financial resources and student selectivity — fails to reveal any substantive information about the actual quality of an institution. This data does not indicate, for example, why the graduation rate is high or low, how accessible or inaccessible resources are to students or whether selectivity comes at the cost of diversity.

Because U.S. News attempts to be both heterogeneous, comparing colleges of all sizes, locations and missions, and comprehensive, comparing across multiple variables at once, it ultimately fails to create an effective or useful ranking.

Despite the recent news that many top law schools will no longer submit data to U.S. News, I highly doubt that there will be a widespread rejection of the rankings.