By Erica Roberts, Talia Koltun-Fromm, and Yuwen Wang
Lawsuits alleging discrimination in AI-assisted hiring decisions follow the same pathway as any other discrimination case, but with one big wrinkle: the possibility that both the employer and the AI vendor could be found liable for violating the law.
However, the opacity baked into how algorithmic and AI tools operate poses various challenges for litigation and enforcement of anti-discrimination laws. Chief among them: Can an AI service fall within the definition of an employer (including as an “employment agency,” “indirect employer,” or “agent of the employer”) under federal anti-discrimination laws and therefore be held liable for violating them?
Below, we examine two cases challenging AI-assisted hiring practices under federal anti-discrimination laws that shed light on the current litigation landscape.
iTutorGroup Settlement
The first AI hiring lawsuit, Equal Employment Opportunity Comm’n v. iTutorGroup, Inc., settled in 2023. The EEOC alleged that iTutor, a China-based company recruiting English tutors in the U.S., programmed its application software to automatically reject candidates over a certain age for remote tutoring positions, in violation of the Age Discrimination in Employment Act (ADEA).
According to the lawsuit, the software automatically rejected female applicants 55 and older and male applicants 60 and older, resulting in the rejection of more than 200 qualified applicants because of their age.
The case settled for $365,000, to be distributed among those applicants. The settlement further provides that, should iTutor resume hiring U.S.-based tutors, the company will institute better training for those involved in hiring, adopt a new, comprehensive anti-discrimination policy, and maintain strict directives against hiring decisions based on age or sex.
Takeaway: This lawsuit was against the employer, not a separate employment software company, but it exemplifies how these programs can be used in unlawful ways. It also paves the way for holding companies accountable for using algorithmic tools in a discriminatory fashion.
Workday Lawsuit
Mobley v. Workday is an ongoing class action lawsuit against the HR software company Workday. Derek Mobley brought claims against Workday under Title VII, the ADEA, the Americans with Disabilities Act (ADA), and 42 U.S.C. Section 1981, alleging that Workday’s algorithmic hiring system discriminated against him and other similarly situated job applicants on the basis of race (African American), age, and disability. He also asserted claims for aiding and abetting discrimination under California’s Fair Employment and Housing Act (FEHA).
Mobley, who suffers from anxiety and depression, claims that Workday’s software uses identifiable applicant characteristics, such as graduation date, educational background, and various personality assessments, to screen out applicants in a discriminatory way. He contends that he was rejected from more than 100 positions at companies that use the Workday platform to screen job applicants, and that those rejections were automatic and discriminatory.
For Workday to be held liable for discrimination under federal law, the hiring platform would have to qualify as an employer (including as an employment agency, an indirect employer, or an agent of the employer). Workday argued that it is merely a software provider and does not qualify as any of the entities that can be held liable under those laws.
In July 2024, the U.S. District Court for the Northern District of California partially rejected Workday’s arguments, finding that while Workday did not qualify as an employment agency, it did act as an agent of the employers and may therefore be liable under federal law. In reaching this decision, the court reasoned that “under the plain language of the anti-discrimination statutes, as well as the case law interpreting their purpose and structure, a third-party agent may be liable as an employer where the agent has been delegated functions traditionally exercised by an employer.”
Takeaway: This decision is an initial step by federal courts toward addressing the use of algorithmic systems in the workplace under federal anti-discrimination laws. As the case progresses toward class certification, summary judgment, and trial, it will be closely watched.
The Federal Government’s Position
The current federal administration has taken steps to identify potential AI-related bias in various areas. In October 2023, the Biden administration issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which instituted new standards for AI safety, security, and equity and tasked the U.S. Department of Justice with coordinating interagency efforts to create and enforce anti-discrimination principles for AI and algorithmic systems. To further these goals, in May 2024, the Department of Labor established key principles for the development and deployment of AI systems in the workplace, including ethical development of AI, AI governance and human oversight, and labor and employment rights and protections.
The EEOC could also play a critical role in protecting employees against algorithmic and AI-related discrimination. The Commission has issued a technical assistance document on the use of algorithmic decision-making that informs employers of their potential liability under Title VII and discusses approaches to assessing adverse impact when using AI technology. The guidance also encourages employers to proactively audit their employment practices, identify AI-related discrimination, and change their practices accordingly.
Moreover, in April 2024, the EEOC filed an amicus curiae brief in Mobley v. Workday in support of the plaintiff. The Commission argued that Workday plausibly qualified as a covered entity (including as an employment agency, an indirect employer, or an agent of employers) under Title VII, the ADA, and the ADEA. Specifically, the EEOC argued that Mobley’s allegations that “employers delegate control of significant aspects of their hiring processes to Workday” and that “Workday’s screening system itself allegedly can either refer candidates for further consideration or instead reject them” sufficed to qualify Workday as an agent of the employers, consistent with the EEOC’s technical guidance.
The Potential Future of AI and Algorithmic Bias Litigation
The current administration’s position, that vendors of algorithmic and AI hiring systems can qualify as covered employers under federal anti-discrimination law, is the correct one. Additional support, through enforcement actions, amicus briefs, or statements of interest, should be provided to plaintiffs who continue to litigate these issues.
In May 2024, the American Civil Liberties Union (ACLU) filed a complaint with the Federal Trade Commission against Aon Consulting, Inc., a hiring technology vendor, alleging that Aon’s hiring tools incorporating algorithmic or AI-related features are likely to discriminate against applicants based on race, disability, and other protected characteristics.
Similar investigations and lawsuits are sure to follow. The challenge for plaintiffs is getting into the black box of a particular algorithmic tool’s development and deployment. Legislation and regulations setting transparency standards for AI technology are thus critical to the future of AI litigation.
Wrapping Up: Was I Discriminated Against by a Company’s AI?
AI is being used to streamline all sorts of employment decisions for all types of employers. It can be hard to discern when you have applied through an AI system; one telltale sign is that you never interacted with a human being during the process. It can also be hard to tell whether you were discriminated against. But if you believe you were qualified for a position you did not get, and that the rejection was based on your membership in a protected class, it is possible that AI discrimination occurred.
A key to propelling AI litigation forward is finding plaintiffs who believe they have been discriminated against by an algorithmic or AI tool and are willing to come forward. If you believe that your job application was rejected or your promotion was denied due to bias in an AI or algorithmic hiring platform, contact an experienced employment attorney at Sanford Heisler Sharp McKnight.