November 2021
Opinions
["A DREAM" graphic by Jaidyn Holt]
The hidden bias behind college admissions algorithms
College is a numbers game. SAT scores. Extracurriculars. APs. It’s a statement that almost every high school student hears at least once, and it’s tempting to boil the college application process down to a science. According to the nonprofit group EDUCAUSE, over 75% of colleges already have. These colleges use algorithms to manage the herd of potential students, taking a comprehensive look at a student’s portfolio and running incredibly complex systems to see exactly how they stack up against the competition. It is, to many, the future of education.

There’s just one issue: the game’s broken. In attempting to take bias out of the application process, colleges have instead hidden it behind walls of proprietary information, math, and contractors, replacing flawed people with flawed programs. While these models are incredibly efficient at doing what they do, that’s exactly the problem. A faster decision is not a fairer one.

The history of acceptance algorithms, and of their biases, begins in 1979. Forced to go through thousands of applications, a doctor at St. George’s Hospital Medical School in London wrote code to automate the grueling and often inconsistent process. By 1982, this algorithm, which looked at both prospective and previous students to make its recommendations, was used universally by the college.

It wasn’t until a few years later that people started to catch on. As the tech journal IEEE Spectrum reports, in attempting to mimic human behavior, the algorithm had codified bias. A commission found that having a non-European name could take up to 15 points off an applicant’s score, and that female applicants were docked an average of three points based solely on their gender. The commission also found that many staff viewed the algorithm as absolute and unquestionable. If a student was denied? Well, they just should have had better numbers.

While the school was eventually found guilty of discrimination in admissions, the growth of acceptance algorithms was too big to stop.
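The commission’s findings describe, in effect, a scoring function with discriminatory deductions built in. As a purely illustrative sketch (the field names, toy surname list, and exact rules here are assumptions, not St. George’s actual code), rules like the ones the commission found could be codified in just a few lines:

```python
# Hypothetical reconstruction of a biased rule-based screen. The weights
# mirror the commission's findings (up to 15 points for a "non-European"
# name, about 3 points for gender); everything else is an assumption.

EUROPEAN_SURNAME_HINTS = {"smith", "jones", "brown", "taylor"}  # toy list

def screening_score(applicant: dict) -> int:
    """Return an initial screening score; higher ranks better."""
    score = applicant["academic_points"]  # grades mapped onto a 0-100 scale

    # Biased rule 1: dock applicants whose names look "non-European".
    if applicant["surname"].lower() not in EUROPEAN_SURNAME_HINTS:
        score -= 15

    # Biased rule 2: dock female applicants.
    if applicant["gender"] == "F":
        score -= 3

    return score

# Identical academics, very different outcomes.
print(screening_score({"surname": "Taylor", "gender": "M", "academic_points": 80}))  # 80
print(screening_score({"surname": "Okafor", "gender": "F", "academic_points": 80}))  # 62
```

Nothing in such code announces itself as discriminatory; the bias lives in a couple of quiet conditionals, which is exactly why staff could treat the output as objective.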
According to a recent Pew Research Center study, the number of undergraduate applications from high school students more than doubled between 2002 and 2017. In addition, the Center on Budget and Policy Priorities found that overall state funding for public universities fell by over $6.6 billion from 2008 to 2018. As a result of these logistical challenges, algorithms have largely replaced humans as the initial test for applications.

“You want to be able to select students who have demonstrated in high school that they have the ability to be successful on your campus, and that varies from college to college,” said Dr. Gordon Chavis, the Associate Vice President for Enrollment Services at UCF. Chavis helps manage over 60,000 applications, and the over $600 million in financial aid the school gives out, each year. “We tend to use phrases like, ‘Do we have the resources? And do we have the programming to support a student as he or she attempts to become successful and graduate from university?’” he said.

Most of the time, this selection involves predictive modeling. In an effort to wade through the pile of applications, the vast majority of schools outsource the job to consulting companies, which compile data and recommendations by researching things like academic reports and intended majors. However, this data can sometimes go … further. As Slate magazine found earlier this year, companies often use browsing histories, student profiles, an opaque “affinity index” and, of particular interest to many schools, ethnic identity and household income. This data, most of it collected without students’ knowledge or consent, is used by colleges for the stated goal of figuring out whether a student is likely to attend, and then comparing increases in offered scholarships against increases in the chance of enrollment.

This process is nothing new. However, in recent years colleges have turned to algorithms
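To make the scholarship-tuning described above concrete, here is a minimal, hypothetical sketch. The logistic model, its coefficients, and the “affinity” input are assumptions for illustration; consultants’ real models are proprietary and far more complex, but the cost-minimizing shape is the one critics describe:

```python
import math

# Assumed coefficients; real enrollment models are proprietary and trained
# on far richer data (browsing history, "affinity index", household income).
BASE_LOG_ODDS = -1.0   # baseline log-odds that an admitted student enrolls
AFFINITY_WEIGHT = 2.0  # lift from the opaque "affinity index" (0 to 1)
PER_1K_OF_AID = 0.08   # assumed lift per $1,000 of scholarship offered

def enroll_probability(affinity: float, scholarship_k: float) -> float:
    """Logistic estimate of the chance an admitted student enrolls."""
    log_odds = BASE_LOG_ODDS + AFFINITY_WEIGHT * affinity + PER_1K_OF_AID * scholarship_k
    return 1.0 / (1.0 + math.exp(-log_odds))

def cheapest_offer(affinity: float, target: float = 0.5, max_k: int = 40):
    """Smallest scholarship (in $1,000s) predicted to reach the target
    enrollment probability, i.e., spend as little aid as possible."""
    for k in range(max_k + 1):
        if enroll_probability(affinity, k) >= target:
            return k
    return None  # no offer within budget is predicted to work

# The model steers money away from students it already expects to enroll.
print(cheapest_offer(affinity=0.8))  # 0  (predicted to come anyway)
print(cheapest_offer(affinity=0.1))  # 10 (needs $10,000 to tip the odds)
```

Run this way, financial aid starts to look less like need-based support and more like targeted pricing, with the student’s own data setting the price.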