Keeping the Human in HR When Using AI in the Workplace
BY AISHA JORGE MASSENGILL, ESQ.
THE INTEGRATION OF ARTIFICIAL INTELLIGENCE (AI) into Human Resources (HR) functions presents both transformative opportunities and significant compliance challenges. This is particularly true for small businesses seeking operational efficiencies and cost containment. But what are the legal and practical implications of unfettered adoption in a part of the business with access to some of the most sensitive organizational and people-related data? This article examines the legislative landscape for AI adoption in HR practices, analyzes the emerging legal frameworks governing its use, and concludes with some practical tips for legal and business consideration.
There’s no shortage of AI applications targeting the full spectrum of HR processes. There are tools that help source applicants and screen candidates, conduct video interviews and performance reviews, set goals, and drive compensation and benefits. There are also myriad collaboration and productivity applications powered by AI that automate administrative tasks, such as chatbots, Slack, and Zoom’s AI Companion, which records and summarizes meetings.
On May 12, 2022, the Equal Employment Opportunity Commission (EEOC) issued its initial guidance on AI in the workplace in the context of the Americans with Disabilities Act (ADA). In its guidance, the EEOC warned that despite their apparent neutrality, AI tools may violate the ADA during the employee screening process by failing to consider their impact on applicants needing accommodations. Employers were advised not to rely blindly on vendor promises that their software does not discriminate, but instead to ensure that non-discriminatory design practices and accommodation considerations remain at the forefront of adoption.
In 2023, the Biden Administration issued Executive Order (EO) 14110 on Safe, Secure, and Trustworthy AI,1 which established broad principles for AI governance, including:
● Requirement for notice when AI is used in employment decisions
● Emphasis on preventing algorithmic discrimination
● Protection of privacy and personal data
● Transparency in automated decision-making
Federal concern about AI adoption has not been limited to the EEOC. In April 2024, nine government agencies, including the Departments of Justice and Labor, issued a “Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection and Equal Opportunity in Automated Systems.” Their overarching message was that these federal agencies were poised to monitor the use and development of automated systems across a broad spectrum of use cases and environments to ensure that consumers and employees are protected against abuse or carelessness. The Joint Statement concluded: “These automated systems are often advertised as providing insights and breakthroughs, increasing efficiencies and cost-savings, and modernizing existing practices. Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.”
The EEOC issued additional guidance on December 19, 2024, on the potential adverse impact of widespread employer adoption of wearable technology. In its “Wearables in the Workplace Fact Sheet,” the EEOC again warned that using such devices may be tantamount to an impermissible medical examination under the ADA. It advised employers adopting such technology to ensure that the use is “job related and consistent with business necessity.”
Closer to home, the Maryland General Assembly introduced HB 1255, a bill “Prohibiting, subject to a certain exception, an employer from using an automated employment decision tool to make certain employment decisions; and requiring an employer, under certain circumstances, to notify an applicant for employment of the employer’s use of an automated employment decision tool within 30 days after the use, and providing certain penalties per violation for an employer that violates the notification requirement of the Act.” The companion Senate bill, SB 957, eventually died due to the lack of an enforcement mechanism and the need for additional financial resources; it was specifically noted that the state did not have the “technical expertise” needed for enforcement. Nevertheless, the bills were instructive in recommending that employers conduct annual impact assessments and provide notice to applicants when such tools are used in decision-making.
Governor Moore signed an Executive Order on January 1, 2024, calling for the establishment of an AI Subcabinet to ensure appropriate governance of AI within state government. The Subcabinet is charged with establishing a policy, adoption, and assessment framework for AI use.
A 2022 survey of HR professionals found that 92% of HR leaders planned to increase their use of AI tools within HR. The primary areas include records management, payroll, recruiting, performance management, and onboarding. Adoption of AI allows for faster hiring, operational efficiencies, and cost savings through headcount reduction.2 This increased usage comes with a large warning label. As Mobley v. Workday, Case No. 23-cv-00770-RFL, makes its way through the courts, employers (and AI vendors) are cautioned against blindly turning over the keys to their hiring processes to automated systems. In Mobley, the plaintiff alleges that Workday’s AI-powered applicant screening tools discriminate on the basis of race, age, and
disability in violation of federal and state antidiscrimination laws. The matter is now a putative class action, and the court is allowing claims of direct liability against Workday to proceed.
So, what are business owners and employers who need to watch their bottom line and operate a lean workforce to do about the current regulatory landscape? No matter how the politics settle, there remain best practices that should govern the adoption of AI in HR processes.
First, keep humans involved in the decision-making process. Whether in hiring or performance management, humans must remain the primary decision-makers.
Provide clear written notice to applicants about the use of AI in your processes.
Consider providing an opt-out for objecting candidates and be clear about the availability of reasonable accommodations throughout the hiring process.
Require that the use of AI tools on company systems go through a vetting process that documents the intended use and identifies ways that sensitive company information will be protected.
At the point of tool adoption, establish clear procedures for human oversight. This may include convening a standing committee of cross-functional partners (HR, IT, and Legal, for example) to conduct these reviews.
Ensure that any tool has been properly validated and is job-related and consistent with business necessity.
Conduct annual impact assessments (preferably under privilege) to determine potential adverse impact.
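To make the idea of an adverse-impact assessment concrete, one widely used first screen is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: a group whose selection rate falls below 80 percent of the highest group’s rate may warrant closer review. The short Python sketch below is only a minimal illustration of that arithmetic; the group labels and counts are hypothetical, and it is not a substitute for a validated statistical analysis or legal advice.

# A minimal, hypothetical sketch of the four-fifths rule arithmetic.
# Group labels and counts are invented for illustration only.

def selection_rates(applicants, selected):
    # Selection rate for each group = number selected / number who applied.
    return {group: selected[group] / applicants[group] for group in applicants}

def four_fifths_flags(rates):
    # Flag any group whose rate is below 80% of the highest group's rate.
    highest = max(rates.values())
    return {group: (rate / highest) < 0.8 for group, rate in rates.items()}

applicants = {"Group A": 200, "Group B": 150}  # hypothetical applicant counts
selected = {"Group A": 60, "Group B": 27}      # hypothetical selections
rates = selection_rates(applicants, selected)
print(rates)                     # {'Group A': 0.3, 'Group B': 0.18}
print(four_fifths_flags(rates))  # {'Group A': False, 'Group B': True}

A flagged group is a signal to investigate further with counsel, not a legal conclusion.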
For existing employees, a business should have:
● Written policies governing AI use
● Training programs for HR staff
● Clear procedures for AI system oversight
● Regular compliance reviews
● Documentation protocols
Conclusion
The integration of AI in HR functions presents significant opportunities for small businesses to improve efficiency and effectiveness. However, careful attention to legal compliance and risk management is essential. As the regulatory landscape continues to evolve, businesses must maintain flexibility in their implementation approaches while ensuring robust compliance frameworks.
The absence of comprehensive AI-specific legislation in Maryland creates both opportunities and challenges for small businesses. While this regulatory space remains dynamic, adherence to existing privacy, employment, and anti-discrimination frameworks provides a foundation for responsible AI implementation.
Aisha Jorge Massengill is the Founder and Principal attorney of Sedgwick Andrews Legal & Consulting, a
firm that helps businesses protect what they’ve built through proactive compliance practices.