iTutorGroup had an algorithm in its application software that made applicant screening decisions based on certain pre-determined criteria. The EEOC alleged that the algorithm was programmed to automatically reject female applicants aged 55 or older and male applicants aged 60 or older, and that it rejected approximately 200 applicants during a set period. The EEOC determined that this algorithm violated the Age Discrimination in Employment Act (ADEA). The EEOC and iTutorGroup reached a settlement under which the company agreed to ongoing EEOC oversight in addition to monetary payouts.

In another matter, Mobley v. Workday, a 2024 putative class action filed in the Northern District of California, Workday, a human capital management platform, was sued over an AI screening program that allegedly discriminated against applicants on the basis of race, age and disability. The court analyzed the narrow issue of whether Workday, a third-party vendor providing screening services to business clients, could be held liable for those clients' hiring practices based solely on the conduct of its algorithmic programming. It determined that Workday could be an "agent" of its business clients under Title VII and allowed the plaintiff's claims to proceed to discovery. This ruling has significant implications for both AI vendors and employers using AI employment screening services, potentially expanding the scope of liability under federal anti-discrimination laws.

While the above-highlighted cases took place in two of the more liberal jurisdictions in the United States, the overall conclusion to be drawn from these rulings is that courts will increasingly be asked to decide whether entities that use AI can be held liable for injury to the public. More importantly, assuming that these initial cases are not indicative of a larger trend in plaintiff-side litigation is a serious risk to your entity's exposure and liabilities.

WHAT TO DO NOW?

The ubiquitous use of AI by our clients and colleagues has made it essential to have up-to-date written policies outlining the proper use of AI, how that use will be monitored and the explicit sanctions for improper use.
Our corporate clients can no longer sit back and treat the improper use of AI as a solitary act with no blameworthy party; they must understand that courts will soon begin to hold companies accountable for the actions of their employees or vendors (rogue or otherwise) in this regard. While case law on vicarious liability for the use of AI is still new and largely unsettled, it is essential that corporate clients and their lawyers get ahead of the inevitable and begin building safeguards around their employees' and directors' behavior. The first step is to set up a committee or working group within your company to decide what the proper use of AI is in the context of your business model. The next step is to memorialize that policy in your manuals and handbooks.

WHAT TO INCLUDE?

Your company policies regarding the use of generative AI should generally cover three important categories: (1) Scope; (2) Accepted Programs/Apps; and (3) a Clear Sanction Policy.

(1) Scope
A good scope section defines what the company considers an appropriate use of generative AI in light of its business model and organizational goals, and explicitly outlines the limits on that use. It draws the important distinctions and limitations for employees on the expected use of AI. For example, do you want your employees drafting customer emails with a chat-based generative program, or are you concerned that this practice could undermine the attention to detail your customers associate with your company? Conversely, are there areas of work, such as "Thank You" letters or routine accounting matters, where you believe AI will add efficiency and want to encourage its use? Setting these matters out clearly in your policy signals to your employees what is expected of them and what to watch out for, and ultimately reduces the likelihood of overreach or confusion.
(2) Accepted Programs/Apps

When selecting an application, program or vendor that provides AI-enabled services, it is important to remember that not all AI is created equal. Your policy should restrict use to acceptable applications and programs, identified specifically by program or company name. More restrictive is better: most executives would rather prohibit an unfamiliar program that later turns out to be reliable than fail to prohibit a program that is later discovered to have a history of inaccuracies that could lead to legal exposure. Before specifying these limitations, meet with your IT personnel and discuss what programs make sense to place on an exhaustive approved list for your business model.

(3) Clear Sanction Policy
Lastly, while defining permitted use is important, it is just as important to outline the consequences of an employee's deviation from the stated policy. As the litigation landscape for holding employers responsible for the use of AI expands, it will be incumbent on our corporate clients to ensure that they are not deemed to have a permissive policy that does nothing to actually prevent improper AI use. The pitfalls of AI use are becoming more foreseeable by the day, and courts can be expected to hold defendants accountable for failing to properly monitor the use of AI by their employees and vendors. Make sure you have language in place that makes clear to your personnel that the impermissible use of AI will not be tolerated and carries employment-related consequences.

CONCLUSION

As an overall directive, get ahead of the curve. Generative AI is becoming a greater part of the business landscape, and your responsibilities regarding its use will grow with little warning beyond the expected rise in AI-related litigation. Reach out to your in-house or outside counsel to determine how to include appropriate language in your handbook or policy manual that reflects the changing practices and responsibilities of AI use and places your business on firm footing for the coming wave of changes.
John C. Krawczyk is a Dallas attorney and senior counsel with Fee, Smith & Sharp, LLP. He focuses his practice on labor and employment law and matters involving catastrophic loss, construction litigation and insurance defense.