It’s the year 3022: you’ve just applied for a new job as an engineer on a starship traveling to Mars. You submit your resume, and an algorithm selects you for an interview. You record video answers to the employer’s interview questions and upload them for a robot to review.
Software that screens applicant resumes and robot-reviewed interviews may sound like far-fetched practices of the distant future, but some employers are already relying on these machine-assisted methods today, in 2022.
The future is here
In its new guidance for employers, the Equal Employment Opportunity Commission noted that employers have “a wide variety of computer-based tools available to assist them in hiring workers, monitoring worker performance, determining pay or promotions, and establishing the terms and conditions of employment.”
Examples of software, algorithms, and artificial intelligence that employers are already using include:
- Algorithms to scan resumes for key words
- Video software to analyze facial expressions during interviews
- Software to monitor employee keystrokes
- Chatbots to conduct initial screening interviews
Last year, the EEOC formed a task force to review the use of artificial intelligence and algorithmic decision-making tools. And it recently issued helpful guidance for employers who may be considering whether to implement these programs.
In its 2022 guidance, the EEOC explains that an employer may be responsible for discrimination, even if the employer uses algorithmic decision-making tools developed by an outside software vendor. Employers who adopt these programs should do so carefully, and they should consider reaching out to legal counsel to review the systems for compliance with federal and state anti-discrimination law.
Are these the droids we’re looking for?
At first glance, using robots and software to make employment decisions may sound like a perfect solution to discrimination and bias, but current programs still have flaws. Software developed and trained by humans can unintentionally absorb the biases of its creators. For example, Amazon, an early adopter of AI in its hiring process, discontinued a particular algorithm after the software unintentionally screened out women.
The use of algorithmic decision-making tools could also lead to violations of the Americans with Disabilities Act (ADA). The EEOC explains that employers may need to provide reasonable accommodations so that these tools rate applicants fairly and equitably. Computer algorithms may unintentionally pass over individuals with disabilities, even when those individuals could do the job with a reasonable accommodation. Software programs may also violate the ADA if they pose disability-related inquiries or function as medical examinations.
The EEOC guidance includes promising practices to help employers comply with the ADA when using software or artificial intelligence. Employers should:
- Use tools designed to be accessible to individuals with disabilities.
- Provide clear and accessible instructions for requesting accommodations.
- Only measure abilities or qualifications that are truly necessary for the job.
- Confirm with software vendors that the tool does not ask questions likely to elicit information about a disability, unless related to requests for accommodation.
As new software and computer programs become more common in the workplace, employers should remain vigilant for discriminatory practices and stay alert to new laws and guidance on artificial intelligence and technology.