Use of AI in Hiring Process During the Pandemic Poses Employment Law Risks

By Douglas Lipsky

Today, more employers are navigating the pandemic by turning to artificial intelligence (A.I.) to recruit and screen job applicants remotely. Employers are well-advised, however, to take a lesson from Amazon, which reportedly scrapped its A.I. screening tool in 2018 after finding that it discriminated against women.

The Amazon experience highlights how A.I. tools can inadvertently lead to discrimination in the hiring process: its virtual screening system introduced unconscious bias into the process. If you believe you have been discriminated against in a job interview, it takes an experienced employment lawyer to protect your rights. In the meantime, this article briefly discusses the potential employment law liabilities posed by the use of A.I. screening tools.

Virtual Screening Tools and Employment Discrimination

Even before the pandemic, more businesses were beginning to rely on A.I. tools to screen applicants, accurately identify the right candidates, and accelerate the hiring process. Misusing these tools, however, can introduce unconscious bias into the process and expose employers to liability under applicable anti-discrimination laws.

In particular, the use of A.I. systems to screen and interview candidates can lead to discrimination claims based on disparate treatment and/or disparate impact under both Title VII of the Civil Rights Act of 1964 (Title VII) and the Age Discrimination in Employment Act (ADEA).

While a claim based on disparate treatment or intentional discrimination may seem counterintuitive because a computer program by its nature lacks “intent,” courts have upheld disparate treatment claims based on unconscious or implicit biases. 

As Amazon learned, unconscious bias can be introduced into an A.I. system through its programming and self-learning. Therefore, a court could hold an employer liable for a screening tool that exhibits the unconscious bias of its programmer.

Moreover, an employer could face disparate impact claims if the use of an A.I. screening tool adversely impacts members of a protected class, much like the female applicants who were disfavored by Amazon’s recruiting tool. 

Courts analyzing such a claim could rely on the reasoning in cases that involved an employer’s use of standardized tests in the application process. When such tests are found to have a disparate impact on protected groups of employees, employers must establish that the tests are both job-related and a reasonable measure of job performance. Therefore, courts could require employers to show how the factors considered by A.I. screening tools relate to the specific job requirements for the position.

In any event, these are only a few of the ways A.I. tools could potentially lead to employment law claims. It is worth noting that the Equal Employment Opportunity Commission (EEOC) has investigated two claims of alleged A.I. bias and has advised that employers using A.I. screening tools in the hiring process could face liability for employment discrimination.

Why This Matters

The use of A.I. tools by employers during the hiring process will likely become part of the new normal, which has serious implications for employers and employees alike. The best way to understand your rights and responsibilities in the virtual employment landscape is to consult with an experienced employment lawyer.

About the Author
Douglas Lipsky is a co-founding partner of Lipsky Lowe LLP. He has extensive experience in all areas of employment law, including discrimination, sexual harassment, hostile work environment, retaliation, wrongful discharge, breach of contract, unpaid overtime, and unpaid tips. He also represents clients in complex wage and hour claims, including collective actions under the federal Fair Labor Standards Act and class actions under the laws of many different states. If you have questions about this article, contact Douglas today.