If you applied online for a job you were qualified for and were not selected, or even interviewed, you may have experienced unlawful discrimination at the hands of the algorithm or AI behind the hiring program.
Even though such programs are perceived as neutral “machines,” they can discriminate too. If an algorithm tends to disfavor applicants of a particular race, ethnicity, gender, disability status, or age, the system may be violating anti-discrimination laws.
How Is AI Being Used Today to Help Companies Hire Employees?
As discussed in the first part of this series, companies are increasingly using AI and algorithmic tools to screen resumes and applicants, contact potential hires, conduct video interviews, and even make decisions on whom to hire or promote. These systems are in common use across all industries.
What Is AI Bias?
Behind every AI or algorithmic system is a human who designed it. Sometimes the designer’s own bias, conscious or unconscious, can affect how the AI program evaluates candidates. For example, a designer may instruct the algorithm to screen out candidates who do not make consistent eye contact or who stutter, without accounting for disabilities or speech impediments that affect one’s eye contact or speech.
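To make the idea concrete, here is a minimal sketch of such a hard-coded screening rule, written in Python. Everything in it, including the field names and thresholds, is hypothetical and invented for illustration; it does not depict any real vendor’s system.

```python
# Hypothetical sketch of a designer-written screening rule.
# Field names and thresholds are invented for illustration only.

def passes_video_screen(candidate: dict) -> bool:
    """Reject anyone whose measured eye contact or speech fluency falls
    below a fixed threshold -- with no accommodation for disabilities or
    speech impediments that affect those measurements."""
    return (candidate["eye_contact_ratio"] >= 0.7
            and candidate["speech_fluency_score"] >= 0.8)

# A qualified candidate with a stutter or a vision-related disability
# can fail the screen regardless of skills or experience:
print(passes_video_screen(
    {"eye_contact_ratio": 0.4, "speech_fluency_score": 0.6}))  # False
```

The rule never mentions disability, yet it systematically excludes people whose disabilities affect the measured behaviors.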
AI systems make decisions based on data. They learn patterns from the data their developers use to train them. In the case of employment AI, this often means a company’s historical hiring patterns and employment data. If a company has hired and promoted predominantly white and male workers in the past, and it trains an AI program on that historical data, the system will likely favor applicants who share those same characteristics.
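The following toy Python example, built on an entirely made-up set of resume features and hiring records, shows how this happens in miniature: a simple “screener” scored only on past outcomes ends up rewarding whatever its past hires had in common.

```python
# Toy illustration only: a "resume screener" built from hypothetical
# historical hiring records. All features and outcomes are invented;
# no real system or dataset is depicted.

from collections import Counter

# Hypothetical history: (features appearing on the resume, was hired?)
history = [
    ({"ivy_league", "golf_team"}, True),
    ({"ivy_league", "golf_team"}, True),
    ({"state_school", "golf_team"}, True),
    ({"state_school", "black_student_union"}, False),
    ({"state_school", "womens_soccer"}, False),
    ({"ivy_league", "womens_soccer"}, False),
]

def feature_scores(records):
    """Score each feature by how often it appears on hired versus rejected
    resumes. Real models are far more complex, but the principle is the
    same: patterns in past decisions become the rule for future ones."""
    hired, rejected = Counter(), Counter()
    for features, was_hired in records:
        (hired if was_hired else rejected).update(features)
    return {f: hired[f] - rejected[f] for f in set(hired) | set(rejected)}

for feature, score in sorted(feature_scores(history).items(),
                             key=lambda kv: (-kv[1], kv[0])):
    print(f"{feature}: {score:+d}")
# golf_team: +3, ivy_league: +1, black_student_union: -1,
# state_school: -1, womens_soccer: -2
```

Note that no protected characteristic appears anywhere in the data, yet memberships that correlate with race or gender (here, the invented “black_student_union” and “womens_soccer” features) end up penalized simply because they were rare among past hires.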
Am I Part of a Protected Class Under Anti-discrimination Law?
Federal and state laws protect people from discrimination based on certain protected characteristics. It may be possible to bring a legal claim based on the following characteristics:
- Race
- Gender
- Age (40 and older)
- Disability
- Ethnicity / National Origin
- Religion
- Sexual Orientation
- Gender Identity
These are the protected characteristics under federal law that are most likely to be discernible from someone’s resume or other hiring materials. Other characteristics protected under federal law include pregnancy and veteran status. Certain state laws protect additional characteristics, like citizenship, marital status, familial status, height, and weight.
How Does AI Know If I Am Part of a Protected Class?
You might think it is impossible for an algorithm to determine your race, gender, age, disability status, and the like. But the typical documents you submit for hiring and promotion contain many hints that can tell a computer system a lot about you.
Some of these characteristics, like gender and age, can be discerned fairly easily from your resume. For example, if you list pronouns, or your name is one typically associated with women, the AI program can infer your gender. If you list your birthday or graduation years, the program can estimate your age. Zip codes and names can also serve as proxies for race. Computer systems may even infer someone’s race, religion, or other protected characteristics from the affinity groups to which they belong. For instance, if a resume or cover letter mentions involvement in the Black Student Union or an LGBTQ group, the algorithm may assign the corresponding characteristics to the applicant. Algorithms may even associate the sports you play with a particular race, gender, disability status, or age.
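As a rough illustration, here is a short, hypothetical Python sketch of the kind of proxy inference described above. The rules and the sample resume are hand-written and greatly simplified; real systems learn such associations statistically, but the input-output relationship is the same.

```python
# Hypothetical sketch of proxy inference from resume text. The rules and
# the sample resume are invented for illustration only; real systems learn
# these associations statistically rather than from hand-written rules.

import re
from datetime import date

def infer_proxies(resume_text: str) -> dict:
    """Guess protected characteristics from innocuous-looking details."""
    proxies = {}

    # Listed pronouns suggest gender.
    if re.search(r"\bshe/her\b", resume_text, re.IGNORECASE):
        proxies["gender"] = "female (inferred from pronouns)"

    # A graduation year implies an approximate age.
    match = re.search(r"Class of (\d{4})", resume_text)
    if match:
        grad_age = 22  # assume roughly age 22 at graduation
        proxies["age"] = f"~{date.today().year - int(match.group(1)) + grad_age}"

    # Affinity-group membership hints at race, religion, or orientation.
    if "Black Student Union" in resume_text:
        proxies["race"] = "Black (inferred from affinity group)"

    return proxies

resume = "Jane Doe (she/her). Class of 2004. Treasurer, Black Student Union."
print(infer_proxies(resume))
# e.g. {'gender': 'female (inferred from pronouns)', 'age': '~43',
#       'race': 'Black (inferred from affinity group)'}
# (the age estimate depends on the current year)
```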
Take a look back at the documents that you submitted for the position. Keep an eye out for whether you included your:
- Pronouns
- Zip code
- Birthday
- Graduation dates
- Nationality
- Attendance of foreign schools
- Dates of certain jobs
- Involvement with affinity groups
- Involvement with religious activities
- Languages spoken
- Athletic teams or sports played
If you included any of these in your hiring documents, it is possible that an algorithm assigned you to a group with a protected characteristic. Alternatively, the algorithm may have deemed certain details (like any of the above) undesirable because they are more likely to be associated with a particular protected characteristic. In either case, if the algorithm then decided not to move your application forward, that decision could be evidence of discrimination.
If you did a recorded video interview, the AI program may have been able to infer your race, gender, age, and other characteristics like national origin. Recorded video interviews may also discriminate against people based on their accent or on disabilities that affect speech.
If you believe that your job application was rejected due to an AI or algorithmic hiring platform bias, you may have a claim for discrimination. Contact us for a free intake.
Next: In Part III, we will look at two lawsuits that challenge AI-assisted hiring practices as discriminatory.
Erica Roberts is an Associate at Sanford Heisler Sharp McKnight.