Artificial intelligence (AI) is all around us. While recent platforms like ChatGPT and Midjourney have made generative AI popular, other types of AI have been widely adopted across industries over the past few decades. Businesses often leverage AI and machine learning to streamline processes and boost productivity. Health care institutions, for example, use these technologies to reduce certain kinds of human error, automate tasks, and improve diagnostic capabilities.
However, artificial intelligence is inherently imperfect, and the recent surge in generative AI has shed light on a deep-seated problem in the field: bias.
AI draws on data and sources that are, fundamentally, human. When those sources carry bias, prejudice, or skewed data, AI can amplify and perpetuate those distortions. In some contexts, the resulting biases can have legal implications.
When AI biases run afoul of the law
Civil rights laws are designed to provide equal access to opportunities such as housing, employment, and lending. Those laws prohibit businesses and organizations from discriminating against people on grounds such as:
- Race and ethnicity
- Sex, gender, and sexual orientation
- Disability
- Marital status
- Religion
Biases in AI can lead to discriminatory practices that are illegal. Those biases can disproportionately impact marginalized groups, resulting in inequities in areas such as:
- Hiring practices
- Employment decisions
- Lending decisions
- Housing decisions
- Medical care
One study, for example, found that skewed datasets feeding health care AI can produce disparities along lines of race, gender, and income.
In the employment context, employers increasingly rely on AI to drive their hiring and recruitment practices. According to a 2022 survey, nearly 80 percent of employers use AI tools for that purpose, and many recruitment tools and applicant tracking systems, such as Workday, Eightfold, Jobvite, and SmartRecruiters, also reportedly leverage AI.

Yet AI-based tools can harbor biases that translate into real-world discrimination, denying applicants their legal right to fair treatment. Amazon, for example, abandoned an AI-driven recruiting engine that discriminated against women because its training data was skewed in favor of men. The problem is widespread enough that the Equal Employment Opportunity Commission (EEOC) recently launched an initiative to gather data and educate employers on the risks of violating equal employment laws through reliance on biased AI tools.
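The mechanism at work in the Amazon example is straightforward to demonstrate. The short Python sketch below is a purely hypothetical illustration, not Amazon's system: it trains a simple classifier on invented, synthetic hiring data in which past decisions penalized a gender-correlated feature, and the model learns a negative weight on that feature, carrying the historical bias forward into new decisions.

```python
# Hypothetical sketch only: synthetic data, not any vendor's actual system.
# A model trained on skewed historical hiring decisions learns to
# penalize a gender-correlated feature, reproducing its data's bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 10_000

# Each synthetic applicant has a qualification score and a binary
# gender-correlated signal (e.g., attended a women's college).
qualification = rng.normal(size=n)
female_signal = rng.integers(0, 2, size=n)

# Biased historical labels: past decisions docked applicants with the
# gender-correlated signal, regardless of qualification.
noise = rng.normal(scale=0.5, size=n)
hired = (qualification - 0.8 * female_signal + noise) > 0

# The model faithfully absorbs the historical pattern.
X = np.column_stack([qualification, female_signal])
model = LogisticRegression().fit(X, hired)

print("weight on qualification:", model.coef_[0][0])  # positive
print("weight on female_signal:", model.coef_[0][1])  # negative: learned bias
```

In a real system, the correlated feature is rarely this explicit, which is part of why such bias can go undetected for so long.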
Many biases in AI aren’t apparent, especially in large language model tools like ChatGPT that draw on enormous datasets. This lack of transparency highlights the importance of human oversight. It also amplifies the need for more ethical, transparent AI across all sectors.
Our firm spearheads legal action to hold companies accountable
When companies and employers engage in illegal discriminatory practices – AI-assisted or otherwise – legal action is essential for holding them accountable. At Sanford Heisler Sharp McKnight, our Discrimination and Harassment Practice Group is dedicated to pursuing justice on behalf of those whose rights have been violated. Our civil rights lawyers share a commitment to empowering and giving a voice to clients who have experienced illegal treatment, whether in employment, housing, or other contexts.