Some employers have hoped that new technologies like machine learning and other forms of artificial intelligence can remove subjectivity—and therefore bias—from hiring decisions. Unfortunately, the record for these new technologies is poor because they rely on, and thereby perpetuate, existing discriminatory patterns. While current anti-discrimination laws provide some important protections against algorithmic discrimination, DC Attorney General Karl Racine’s recently proposed Stop Discrimination by Algorithms Act would be a welcome addition to existing law. The bill would widen the list of entities covered by anti-discrimination laws and create reporting requirements to readily identify which practices cause discrimination.
New Technology Reproduces Old Patterns of Hiring Discrimination
Automated technologies may be new, but they can still perpetuate the same old types of discrimination. Amazon, for example, created “computer programs . . . to review job applicants’ resumes with the aim of mechanizing the search for top talent” starting in 2014. The company’s technology modeled ideal candidates and vetted applicants based on ten years of resumes and hiring patterns in the tech industry. In so doing, Amazon perpetuated the tech industry’s male dominance. The algorithm “penalized resumes that included the word ‘women’s,’ as in ‘women’s chess club captain’ [a]nd it downgraded graduates of two all-women’s colleges[.]”
Amazon should not have been surprised that reproducing old hiring patterns would produce more discrimination. Employers often rely on facially neutral factors that nonetheless reproduce discrimination. For example, a significant pay gap persists between men and women in nearly every industry across the United States. Employers who rely on salary history to set employee compensation can perpetuate this pay gap and can be liable under anti-discrimination laws as a result. New technologies relying on old hiring patterns face a similar risk.
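The mechanism is easy to see in a toy sketch. The data, words, and scoring rule below are entirely hypothetical and deliberately crude, but they illustrate how a model trained on historically skewed hiring outcomes can learn to penalize a word like “women’s” even though sex is never an input feature:

```python
# Toy illustration (hypothetical data): a screener "trained" on past
# hiring outcomes learns to penalize words correlated with the group
# that was historically rejected, without ever seeing sex as a feature.

from collections import defaultdict

# Hypothetical historical resumes: (set of words, hired?) pairs that
# skew male, mirroring the pattern Amazon's tool reportedly learned from.
history = [
    ({"software", "chess"}, True),
    ({"software", "football"}, True),
    ({"software", "women's", "chess"}, False),
    ({"software", "women's"}, False),
]

# "Training": score each word by the hire rate among resumes containing it.
counts = defaultdict(lambda: [0, 0])  # word -> [hires, appearances]
for words, hired in history:
    for w in words:
        counts[w][0] += int(hired)
        counts[w][1] += 1

def score(words):
    """Average historical hire rate of a resume's words (a crude model)."""
    rates = [counts[w][0] / counts[w][1] for w in words if w in counts]
    return sum(rates) / len(rates)

# "women's" appears only on rejected resumes, so any resume containing
# it scores lower -- old patterns, mechanically reproduced.
print(score({"software", "chess"}))             # scores higher
print(score({"software", "women's", "chess"}))  # scores lower
```

A real resume-screening model is far more complex, but the failure mode is the same: if the training labels encode past discrimination, facially neutral features become proxies for protected class.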
Other automated technologies that go beyond resume review are susceptible to the same risks. HireVue, for example, is one of the biggest companies providing automated video interviews. In each standard interview, the company collects and analyzes over 500,000 data points per applicant, including data on interviewees’ “facial expressions, their eye contact and perceived ‘enthusiasm[.]’” Again, it should not surprise an employer that relying on such data could lead to discriminatory outcomes. Similar interview processes based on subjective criteria often face scrutiny for perpetuating stereotypes about certain classes of people who are not good “fits” for a particular employer. Making matters worse, studies show that facial analysis software is deeply flawed: it often fails to recognize certain faces, particularly non-white faces and the faces of trans people. Thus, despite HireVue’s promise that it “can database what the human processes in an interview, without bias[,]” there is plenty of evidence that these new systems exacerbate the same patterns of discrimination.
Current Anti-Discrimination Laws Can Be Effective Against Algorithmic Discrimination, But New Tools Will Better Protect Civil Rights
Title VII of the Civil Rights Act of 1964 and the Uniform Guidelines on Employee Selection Procedures govern “tests and other selection procedures which are used as a basis for any employment decision.” Employers cannot use a selection procedure that produces a disparate impact on protected classes and must ensure that every selection procedure is job-related and consistent with business necessity. Local and state anti-discrimination laws provide for similar and often stronger protections.
Ever since Congress enacted the Civil Rights Act, employees have challenged the resume reviews and hiring interview processes that new technologies now seek to supplant. In many of these cases, the challenged selection procedures were already automated, with a machine or outside vendor processing test results, and were sometimes developed by someone other than the employer. Automation and the use of an outside vendor have never insulated employers from liability, and there is no reason to believe the result would be different for algorithmic discrimination.
Attorney General Racine’s bill would provide an important supplement to existing laws. For example, while employers using technology like HireVue could be liable under Title VII and the DC Human Rights Act, HireVue itself is not necessarily bound by these laws because the company is not an “employer.” Attorney General Racine’s bill would cover companies like HireVue as “covered entities” or “service providers” under the DC Human Rights Act. Most importantly, the Stop Discrimination by Algorithms Act would require companies like HireVue to provide job applicants and the Attorney General with detailed reports on whether and how their selection procedures cause a disparate impact.
Other states and municipalities have enacted similar legislation. Illinois was the first state to act, requiring employers to provide notice and obtain consent before using automated video interview software like HireVue. New York City also recently enacted a law, effective January 2023, that, like Attorney General Racine’s proposed bill, requires notice to job applicants and requires employers to audit automated selection procedures. Unlike the DC bill, however, it does not subject companies like HireVue to anti-discrimination laws, nor does it contain detailed requirements for disparate impact assessments.
Federal agencies are also ready to step into the fray. The EEOC has investigated algorithmic discrimination, and the FTC has published guidance suggesting that algorithmic discrimination can be an unfair or deceptive trade practice.
If enacted, Attorney General Racine’s proposed legislation would provide some of the strongest protections yet for job applicants. Existing anti-discrimination laws already supply some remedies, but states and federal agencies should strengthen and enforce them to reach the new technologies that perpetuate old forms of discrimination.
Notes

EEOC, Compliance Manual, § 10 n.77 (collecting cases and noting that “[p]rior salary cannot, by itself, justify a compensation disparity . . . because prior salaries of job candidates can reflect sex-based compensation discrimination”).

See, e.g., Heath v. Google LLC, 345 F. Supp. 3d 1152, 1170 (N.D. Cal. 2018) (affirming class action status for age discrimination suit based on evidence that interviews evaluating “Googleyness” and “cultural fit” favored younger applicants).

West, S.M., Whittaker, M., and Crawford, K., Discriminating Systems: Gender, Race and Power in AI, AI Now Institute (2019), available at https://ainowinstitute.org/discriminatingsystems.html (collecting studies); see also Avi Asher-Schapiro, Online exams raise concerns of racial bias in facial recognition, Christian Science Monitor (Nov. 17, 2020) (noting the persistent problem of non-white bar exam takers receiving the message, “Due to poor lighting we are unable to identify your face.”).

Uniform Guidelines on Employee Selection Procedures § 2.B (1978). The Uniform Guidelines do not apply to age or disability discrimination, but the EEOC and courts apply similar frameworks to those types of claims. EEOC, Employment Tests and Selection Procedures (Dec. 1, 2007).

Id. §§ 4-5.

See, e.g., United States v. City of New York, 713 F. Supp. 2d 300, 319 (S.D.N.Y. 2010) (noting that employer’s arbitrary and standardless resume review and interview process was “ripe for male and female applicants to face differential treatment”).

See, e.g., Bridgeport Guardians, Inc. v. City of Bridgeport, 933 F.2d 1140, 1147 (2d Cir. 1991) (upholding disparate impact theory for written examination).

See, e.g., Griggs v. Duke Power Co., 401 U.S. 424, 436 (1971) (upholding disparate impact theory for professionally developed general intelligence test).

The Stop Discrimination by Algorithms Act, Legislative Bill No. 558 § 3(4) (2021).

Id. §§ 6(d), 7.

Illinois Artificial Intelligence Video Interview Act, 820 ILCS 42/.

2021 NYC Local Law 144.

EEOC, Press Release, Use of Big Data Has Implications for Equal Employment Opportunity, Panel Tells EEOC (Oct. 13, 2016), available at https://www.eeoc.gov/in-the-media/newsroom/use-big-data-has-implications-equal-employment-opportunity-panel-tells-eeoc.

Elisa Jillson, Aiming for truth, fairness, and equity in your company’s use of AI, FTC (Apr. 19, 2021), available at https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.