EEOC’s Guidelines on How To Avoid AI-Driven Discrimination
When it comes to artificial intelligence (AI), the sky is seemingly the limit. But that cuts both ways; while the multiple technologies that use AI are already quite powerful and hold incredible promise, they’re also sparking some concerns.
One such concern, raised by the U.S. Equal Employment Opportunity Commission (EEOC) and U.S. Department of Justice (DOJ) last year, is that using AI when hiring and managing employees can lead to violations of the Americans with Disabilities Act (ADA). Recently, the EEOC issued new guidance to help employers audit their AI tech to catch and prevent inadvertent discrimination — this time as defined under Title VII of the Civil Rights Act.
Generally, AI refers to using computers to perform complex tasks typically thought to require human intelligence. Examples include image perception, voice recognition, decision-making and problem-solving.
AI can take several forms. One is machine learning, which applies statistical techniques to improve machines’ performance of specific tasks over time with little or no programming or human intervention. Another is natural language processing, which uses algorithms to analyze unstructured human language in documents, emails, texts, instant messages and conversations. And a third is robotic process automation, which automates time-consuming, repetitive manual tasks that don’t require decision-making.
As mentioned, in May 2022, the EEOC and DOJ warned employers about AI’s dangers in an HR context. Each issued separate guidance addressing the same fundamental problem: using AI — most notably, machine learning and algorithm-based tools such as natural language processing — can inadvertently lead to discrimination against people with disabilities.
The EEOC’s guidance, entitled “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees,” pointed to three of the most typical ways that use of AI tools could lead to an ADA violation:
- Failing to provide accommodations to applicants/employees because of AI-based algorithms,
- Rejecting applicants with disabilities because of AI tools, and
- Adopting an AI tool that poses disability-related inquiries or conducts medical examinations in violation of the ADA.
The DOJ’s guidance, entitled “Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring” mirrored the EEOC’s concerns.
The new EEOC guidance is entitled “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.” Its purpose is to outline recommended steps that employers can take to audit AI and other automated solutions with the goal of identifying and avoiding “disparate impact” under Title VII. According to the guidance, disparate impact occurs when employers use:
… neutral tests or selection procedures that have the effect of disproportionately excluding persons based on race, color, religion, sex, or national origin, if the tests or selection procedures are not job related for the position in question and consistent with business necessity.
The EEOC goes on to define “central terms regarding automated systems and AI.” These include “software,” “algorithm” and AI itself. The guidance also features answers to seven common questions about the topic, such as “Could an employer’s use of an algorithmic decision-making tool be a ‘selection procedure’?” and “What is a ‘selection rate’?”
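The “selection rate” comparison at the heart of the guidance lends itself to a quick numerical check. As a minimal sketch — with hypothetical group labels and applicant counts, not figures from the guidance — an employer auditing an AI screening tool might compare rates using the four-fifths rule of thumb that the EEOC discusses:

```python
# Illustrative sketch only; group names and counts are hypothetical.
# Compares selection rates between two applicant groups screened by an
# automated tool, using the four-fifths rule of thumb as a rough flag.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who advanced past the screen."""
    return selected / applicants

# Hypothetical applicant pools
rate_group_a = selection_rate(48, 80)   # 60% of Group A advanced
rate_group_b = selection_rate(12, 40)   # 30% of Group B advanced

# Impact ratio: the lower selection rate divided by the higher one
impact_ratio = min(rate_group_a, rate_group_b) / max(rate_group_a, rate_group_b)

# A ratio below 0.80 (four-fifths) is commonly treated as a signal of
# possible adverse impact that warrants a closer look -- not proof of
# discrimination on its own.
flagged = impact_ratio < 0.80

print(f"impact ratio: {impact_ratio:.2f}, flagged: {flagged}")
```

A ratio of 0.50, as here, would typically prompt further review of whether the tool is job related and consistent with business necessity. The four-fifths rule is only a rule of thumb; the guidance cautions that statistical significance and other factors also matter.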
Importantly, the EEOC explicitly notes that employers may be held responsible for algorithmic decision-making tools — even if those tools are designed and administered by a vendor.
If your organization uses AI in its hiring and performance management processes, stay apprised of the ongoing concerns and regulatory guidance regarding this technology. To read the full text of the EEOC guidance, click here. For additional assistance with your company’s finances, give us a call!