
Employers using artificial intelligence in decision making

By Jill Pedigo Hall, JD

The use of artificial intelligence (AI) and machine learning in the workplace is growing exponentially. Employers are using AI, sometimes without knowing it, to make employment decisions at every stage of the job life cycle. Over the last two decades, web-based applications and questionnaires have made paper applications nearly obsolete. Employers seeking to streamline recruitment and control costs have adopted computer-based screening tools such as “chatbots” to schedule interviews, ask screening questions and even conduct videoconference interviews in which facial analysis evaluates a candidate’s personality. In May, Equal Employment Opportunity Commission (EEOC) Chair Charlotte Burrows said, “Over 80% of employers use AI in some form in their broader work and their employment decision making.”

U.S. employers are using predictive algorithms throughout their hiring processes, from résumé scanners to interview analysis to performance predictors. Popular websites such as LinkedIn, Monster, ZipRecruiter and CareerBuilder use AI to pair résumé and application information from job seekers with employer job descriptions to generate recommendations. Computer algorithms use that data to make inferences about people, including their identities, their demographic attributes, their preferences and their likely future behaviors. Now faced with a growing talent shortage, employers of all sizes see AI as a more efficient path through the hiring process.

Benefits and risks of using AI

When designed carefully, AI can make recruiting and hiring more open, fair and inclusive by masking protected-class information, hiding terms associated with a particular gender or race, identifying adjacent skills or identifying candidates for upskilling. However, AI technology can also create risk. Because algorithms rely on historical data sets and human inputs, the technology can generate bias or exacerbate existing bias. Bias enters the AI selection process in three basic ways: through biased data, biased variables or biased decision making.

Biased data can result from benchmarking against résumés from a previous successful candidate group of a predominant gender, age, national origin, race or other group. Built on that biased data, the algorithm might then exclude words that are commonly found in the résumés of a minority group. Amazon learned the impact such biased data can have when it tested a résumé screening tool between 2015 and 2017. Data scientists fed the program’s algorithm a data set consisting of résumés belonging to successful current employees and candidates from the previous 10 years. Using machine learning, the program identified patterns and then used those patterns to rate new applicants’ résumés. However, because the vast majority of résumés in the data set belonged to men, the program automatically downgraded résumés containing terms such as women’s sports teams, women’s clubs and the names of women’s colleges. The algorithm also learned to assign significance to certain terms in résumés and applications, favoring candidates who described themselves using verbs more commonly found on male engineers’ résumés, such as “executed” and “captured.” The tool has been termed “cloning AI” because it used the data and skills of the company’s historically “best” employees, mostly men, to build the search tool.

Word choices can introduce bias at other points as well: when a job description is used to create the data set and algorithm, its phrasing shapes what the tool favors, and biased data can even predetermine who sees job advertisements. As a result of these issues, a majority of résumés can be screened out before a human is ever involved. Bias can also arise when a selected variable acts as a proxy for a protected characteristic; a zip code, for example, can reflect racial or ethnic composition. Finally, bias may creep into decision making because employers may not question automated, and essentially unreviewable, proprietary predictors: AI carries an appearance of objectivity and scientific analysis.
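To make the mechanism concrete, the following is a minimal, hypothetical sketch, not the Amazon system, of how a screener trained on skewed historical hiring data can learn to penalize a gender-associated word. The toy résumés, the labels and the use of scikit-learn are all illustrative assumptions.

```python
# Hypothetical sketch: how skewed historical hiring data can teach a
# résumé screener to penalize gender-associated terms. Toy data only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Fabricated "historical" résumés. Because past hires were mostly men,
# résumés mentioning women's organizations are disproportionately
# labeled 0 (not hired) in this training set.
resumes = [
    "executed migration project captain chess club",
    "captured market data led robotics team",
    "executed deployment plan member coding society",
    "president women's engineering society built compiler",
    "captain women's soccer team shipped payment service",
    "led women's coding club optimized database queries",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = hired historically, 0 = rejected

# Bag-of-words features, then a simple classifier trained on the
# skewed labels.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# Inspect the learned weights: a negative weight means the word alone
# lowers a candidate's score.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print("weight for 'women':", round(weights["women"], 3))
print("weight for 'executed':", round(weights["executed"], 3))
```

On this fabricated data the model assigns a negative weight to the token “women” purely because of the skew in the historical labels, the same failure mode described above: the model is never told gender, yet it learns to penalize a word that correlates with it.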

Federal government action

Currently, no federal law or regulation specifically regulates the use of AI in the workplace. As early as 2016, the EEOC signaled its recognition of concerns over bias and resulting discrimination from the impact of AI, people analytics and “big data” on employment. However, there was little movement by the EEOC until 2021 when, following an October 2020 inquiry by a group of U.S. senators, the agency launched internal and then external initiatives. Beginning in early spring 2021, EEOC Commissioner Keith Sonderling began commenting, conducting seminars and publishing articles on the discrimination concerns related to AI bias in employment. In September 2021, Sonderling signaled that the EEOC may use commissioner charges, agency-initiated investigations unconnected to a discrimination charge, to ensure employers are not using AI unlawfully. During the same period, all commission investigators participated in AI training. Then on Oct. 28, 2021, Chair Burrows announced a new EEOC initiative focused on ensuring that the use of AI by employers at all employment stages complies with federal anti-discrimination and civil rights laws.

Continuing public comment by the commissioners signals that the agency has homed in on the use of hiring and employment technologies as an area of systemic discrimination, so employers using AI or other such technologies should exercise caution. Although the initiative announcement suggested a concentration on information collection, education and guidance, the investigator training points to likely enforcement. Burrows signaled this, saying, “Bias in employment arising from the use of algorithms and AI falls squarely within the commission’s priority to address systemic discrimination.”

In May 2022, the EEOC took another definitive step by issuing technical guidance warning employers that the use of AI and algorithmic decision making in employment decisions may violate the Americans with Disabilities Act (ADA) if, among other things, the tools screen out job applicants with disabilities or result in prohibited disability-related inquiries. The same day, the Department of Justice posted companion technical assistance guidance outlining potential ways AI and automated hiring tools can violate the ADA. Like the EEOC guidance, it identified employer obligations toward individuals with disabilities and the requirement of reasonable accommodation related to AI use.

State and local action

Even before the EEOC took a more thorough look, some states and municipalities had moved forward on legislation or resolutions focused on identifying and eliminating bias in employer AI use.

In August 2019, Illinois enacted the Artificial Intelligence Video Interview Act. The law requires Illinois employers to notify applicants of the nature and operation of any AI-enabled video interview technology used during the hiring process and to obtain their consent. It also requires employers relying “solely” on AI in hiring to collect and annually report to the state the race and ethnicity of candidates selected or rejected for interviews and of those who are then hired. Maryland followed with a similar law that prohibits employers from using facial recognition technology during preemployment job interviews without the applicant’s consent.

In 2021, New York City passed a law regulating the use of “automated employment decision tools.” Employers using such tools must provide advance notice to job candidates and disclose the job qualifications and characteristics the employer is seeking. Additionally, prior to use, employers must have submitted the tools to a “bias audit,” the results of which must be made publicly available.

Finally, in spring 2022, the California Fair Employment and Housing Council proposed draft modifications to the state’s nondiscrimination employment law applicable to employers or agencies that use or sell services incorporating AI, supervised machine learning or automated decision systems. The proposed regulations expand discrimination liability to include discrimination resulting from use of an automated decision system, regardless of intent.
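For a sense of what a bias audit might examine, here is an illustrative sketch of an adverse-impact check based on the EEOC’s long-standing four-fifths (80%) rule for comparing selection rates between groups. New York City’s rules define their own required calculations, and the counts below are fabricated for the example.

```python
# Illustrative adverse-impact check of the kind a "bias audit" might
# include: compare each group's selection rate to the highest group's
# rate and flag ratios below the four-fifths (0.8) threshold.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants that the tool selected."""
    return selected / applicants

# Hypothetical outcomes from an automated screening tool.
groups = {
    "group_a": selection_rate(selected=48, applicants=100),
    "group_b": selection_rate(selected=30, applicants=100),
}

highest = max(groups.values())
for name, rate in groups.items():
    impact_ratio = rate / highest
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{name}: rate={rate:.2f} impact_ratio={impact_ratio:.2f} [{flag}]")
```

In this fabricated example, group_b’s selection rate is 62.5% of group_a’s, below the four-fifths threshold, which would warrant closer review of the tool’s outcomes for that group.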

Proactive steps for employers

In light of the EEOC’s stated priorities and the state and local legislative activity, employers should evaluate their use of AI and automated decision-making tools to ensure the tools are not producing biased results. Additionally, if an employer has used or is considering an AI technology vendor, the employer should: (1) ensure the vendor understands the employer’s EEO obligations, (2) ask the vendor to explain how it proactively avoids bias in its process and results, and (3) consider making the avoidance of bias a material term of the vendor contract. Employers are also advised to implement policies governing such use, including requiring managers who use AI technology to report any biased results and prohibiting inappropriate or discriminatory use of such systems. Finally, given the fast pace of government activity in this area, employers should keep informed of new state and local legislation and additional federal guidance and enforcement.

Jill Pedigo Hall, JD, is a shareholder in the labor and employment section at von Briesen & Roper, s.c. Contact her at 608-661-3966 or jill.hall@vonbriesen.com.
