A growing number of employers are turning to artificial intelligence to help make hiring decisions. Although some programs may sound like science fiction, they are already in use. For example, some online systems search social media profiles for desirable characteristics to identify candidates for positions. Others use keyword searches of resumes, or more complex evaluations, to compare and rank the materials candidates submit with their applications. And rather than conducting screening interviews in person, some companies use chatbots for the initial screening contact, or record candidates answering interview questions and use artificial-intelligence programs to analyze the video.
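To make the keyword-screening approach concrete, here is a simplified sketch of how such a ranking might work. The function names and keyword list are illustrative assumptions, not a description of any actual vendor's product.

```python
# Illustrative sketch of a keyword-based resume screener: each resume is
# scored by how many of the employer's desired keywords it contains, and
# candidates are ranked by that score. Real systems are far more complex.

def score_resume(text: str, keywords: list[str]) -> int:
    """Count how many desired keywords appear in the resume (case-insensitive)."""
    lowered = text.lower()
    return sum(1 for kw in keywords if kw.lower() in lowered)

def rank_candidates(resumes: dict[str, str], keywords: list[str]) -> list[str]:
    """Return candidate names ordered from highest to lowest keyword score."""
    return sorted(
        resumes,
        key=lambda name: score_resume(resumes[name], keywords),
        reverse=True,
    )
```

Even this toy version hints at the legal concern discussed below: whichever keywords the employer (or vendor) chooses effectively become hiring criteria, whether or not anyone evaluated them for discriminatory effect.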
Although artificial intelligence has the potential to make hiring and other employment decisions easier by reducing the amount of work required to find a great candidate for a position, some commentators are increasingly concerned about the potential for discrimination or disparate outcomes as a result.
Real-World Examples of Discrimination in Automated Systems
It might seem counterintuitive that turning your hiring decisions over to a seemingly neutral, bias-free computer system could lead to discriminatory outcomes. But these systems are not perfect: they are developed and trained by humans, and the artificial-intelligence system may "learn" and apply those humans' unconscious biases.
For example, during a review of a resume-screening tool, one company discovered that the program had identified the two factors most predictive of a positive recommendation: the applicant's name was Jared, and the applicant played lacrosse in high school. These biases, unintentionally built into the program, had the potential to disadvantage candidates who are women or who have disabilities.
In 2015, Amazon decided to limit its use of artificial intelligence in hiring decisions after it discovered that its algorithm was biased against women. The computer system had used old resumes of past Amazon hires to predict whom the company should hire next. The problem: the resumes Amazon used to train the program came overwhelmingly from male applicants, so the system learned to prefer the applications of men.
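A highly simplified sketch can show how this kind of bias gets "learned." This is not Amazon's actual system; it is a toy model, under the assumption that the screener weights resume words by how often they appeared in past hires versus past rejections. If the historical data skews male, words associated with women's resumes end up with negative weights.

```python
# Toy illustration of bias learned from skewed training data: each word is
# weighted by (rate it appears in hired resumes) minus (rate it appears in
# rejected resumes). Words seen mostly in rejected resumes score negative.
from collections import Counter

def train_word_weights(training_data: list[tuple[str, bool]]) -> dict[str, float]:
    """training_data pairs a resume's text with whether the person was hired."""
    hired_counts, rejected_counts = Counter(), Counter()
    n_hired = sum(1 for _, hired in training_data if hired) or 1
    n_rejected = sum(1 for _, hired in training_data if not hired) or 1
    for text, was_hired in training_data:
        for word in set(text.lower().split()):
            (hired_counts if was_hired else rejected_counts)[word] += 1
    vocab = set(hired_counts) | set(rejected_counts)
    return {w: hired_counts[w] / n_hired - rejected_counts[w] / n_rejected
            for w in vocab}

def score(text: str, weights: dict[str, float]) -> float:
    """Sum the learned weights of the words in a new resume."""
    return sum(weights.get(w, 0.0) for w in text.lower().split())
```

Train this on historical hires that were mostly men, and a resume mentioning "women's chess club captain" is penalized for the word "women's" alone, even though gender was never an explicit input. That is the pattern the EEOC is worried about.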
The Equal Employment Opportunity Commission’s Response
In response to concerns about discrimination caused by artificial intelligence, the EEOC launched an initiative in 2021 to review the use of artificial intelligence in hiring and other employment decisions. The initiative is gathering information about the use of technology in employment decisions, identifying promising practices, and working to provide guidance on the use of artificial intelligence. Charlotte Burrows, chair of the EEOC, stated that "the EEOC is keenly aware that [artificial intelligence and algorithmic decision-making tools] may mask and perpetuate bias or create new discriminatory barriers to jobs." Burrows explained that the EEOC's initiative seeks to ensure that this technology doesn't become "a high-tech pathway to discrimination."
Steps for Employers
Are there risks to employers that turn to automated systems to make important employment decisions? The short answer is yes, but there are steps employers can take to review decisions made by artificial-intelligence programs and protect themselves from costly employment-discrimination litigation.
If your company uses artificial intelligence to make employment decisions and doesn't want to end up with an employment-discrimination lawsuit (or a disproportionate number of lacrosse-playing guys named Jared), it's important to review and audit those decisions to make sure you are complying with federal and state anti-discrimination laws. And if your company contracts with a vendor that uses artificial intelligence, consider discussing your anti-discrimination obligations with the vendor and learning what steps the vendor is taking to reduce bias in its systems. Employers should also stay alert for new guidance from the EEOC initiative on technology and artificial intelligence.
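One concrete way to audit automated decisions is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that is less than 80% of the rate for the group with the highest rate may be evidence of adverse impact. The sketch below applies that rule of thumb to hypothetical screening outcomes; the numbers and function names are illustrative, and the rule is a starting point for review, not a legal conclusion.

```python
# Audit sketch using the four-fifths (80%) rule of thumb: compare each
# group's selection rate to the highest group's rate and flag any group
# whose ratio falls below 0.8.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps a group label to (number selected, number of applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def four_fifths_flags(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag True for any group whose selection rate is below 80% of the top rate."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {group: rate / top_rate < 0.8 for group, rate in rates.items()}
```

For example, if a screening tool advances 60 of 100 male applicants but only 30 of 100 female applicants, the women's rate is 50% of the men's rate, well below the 80% threshold, and the tool's decisions warrant a closer look before they lead to litigation.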