The EEOC on AI in Employment Decisions: What Companies Should Know and Do


2 minute read | May 23, 2023

A growing number of companies use artificial intelligence (AI) algorithms to identify and evaluate candidates, a development that promises to revolutionize recruitment and hiring. Proponents say AI-driven systems can eliminate bias, enhance objectivity, optimize efficiency and lead to fairer and more accurate selection outcomes. These systems can also affect diversity, equity and inclusion at work, however, and federal regulators are warning employers to stay vigilant.

The EEOC recently reminded companies that Title VII of the Civil Rights Act “prohibits employers from using neutral tests or selection procedures that have the effect of disproportionately excluding persons based on race, color, religion, sex, or national origin.” The ban applies to tests or selection procedures that are not “job related for the position in question and consistent with business necessity.”

What Employers Should Know:

The EEOC says discrimination can result when companies use “algorithmic decision-making tools” or software systems such as:

  • Resume scanners that prioritize applications that use certain keywords (illustrated in the sketch after this list).
  • Employee-monitoring software that rates employees based on keystrokes or other factors.
  • “Virtual assistants” or “chatbots” that ask candidates about qualifications and reject those who don’t meet pre-defined requirements.
  • Video-interviewing software that evaluates candidates based on facial expressions and speech patterns.
  • Testing software that provides “job fit” scores for applicants or employees regarding personalities, aptitudes, cognitive skills or perceived “cultural fit” based on their performance in a game or on a more traditional test.
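
To make the first bullet concrete: at its simplest, a keyword-based resume screen is a rules filter, and a facially neutral keyword list can quietly exclude qualified candidates who describe the same experience in different terms. The sketch below is a toy illustration only; the keywords, resume snippets and logic are hypothetical and do not reflect any particular vendor's product.

```python
# Toy illustration of a keyword-based resume screen.
# Keywords and resumes are hypothetical; the point is that a neutral-seeming
# rule can screen out candidates with equivalent experience.

REQUIRED_KEYWORDS = {"python", "sql", "agile"}  # hypothetical screening terms

def passes_keyword_screen(resume_text: str) -> bool:
    """Advance a resume only if every required keyword appears in its text."""
    words = set(resume_text.lower().split())
    return REQUIRED_KEYWORDS.issubset(words)

resumes = {
    "Candidate 1": "Built reporting pipelines in Python and SQL on an agile team",
    "Candidate 2": "Led database reporting in Python using scrum methods",
}

for name, text in resumes.items():
    decision = "advances" if passes_keyword_screen(text) else "is screened out"
    print(f"{name} {decision}")
```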

Even employers using AI tools designed or administered by a third party may be liable if disparate impacts result from their use, the EEOC says.

What Employers Should Do:

To reduce the risk of running afoul of Title VII when using AI to select candidates, we recommend companies work with counsel to:

  • Understand what AI tools are being used and what role they play in selection processes.
  • Conduct a privileged analysis to determine whether your selection procedures may adversely impact any protected group.
  • Determine whether a selection procedure that has an adverse impact is job-related and consistent with business necessity, and if so, whether a less discriminatory alternative exists.
  • Take steps to reduce the chance that an algorithmic decision-making tool may cause a substantially lower selection rate for individuals with a characteristic protected by Title VII.
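
On the last point, a “substantially lower selection rate” is at bottom a comparison of selection rates across groups. The EEOC’s related technical guidance references the longstanding “four-fifths rule” as a rough rule of thumb for flagging such differences, though it is a screening heuristic rather than a legal threshold. The sketch below uses hypothetical group labels and counts purely to illustrate the arithmetic; any real analysis should be designed and conducted under privilege with counsel.

```python
# Illustrative selection-rate comparison with hypothetical numbers.
# The four-fifths (80%) rule is a rough rule of thumb, not a definitive legal test.

applicants = {"Group A": 200, "Group B": 150}  # hypothetical applicant counts
selected   = {"Group A": 80,  "Group B": 36}   # hypothetical numbers selected

# Selection rate for each group = number selected / number of applicants.
rates = {group: selected[group] / applicants[group] for group in applicants}
highest_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest_rate  # compare against the most-selected group
    note = "below 0.80 -- warrants closer review" if ratio < 0.80 else "within the 0.80 rule of thumb"
    print(f"{group}: selection rate {rate:.0%}, ratio to highest {ratio:.2f} ({note})")
```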

Contact the Orrick employment team to learn more.