Employers should conduct prior and ongoing assessments of adverse impact when using AI tools in the hiring, promotion, and termination process.
On May 18, 2023, the Equal Employment Opportunity Commission (EEOC) issued a “technical assistance document” titled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.” Although the new guidance provides insight into the agency’s views on the use of AI in selection decisions, it is non-binding and does not have the force of law. The guidance mirrors previous warnings issued by the agency and the Department of Justice (DOJ) regarding the impact of these tools on employees and job applicants with disabilities under the Americans with Disabilities Act (ADA).
The new guidance addresses only Title VII of the Civil Rights Act of 1964 and covers the use of AI tools as employment selection procedures, including hiring, promotion, and termination, as well as their potential disparate or adverse impact. It defines “artificial intelligence” as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” The guidance identifies several examples of AI tools used in the employment selection process.
The guidance expressly addresses potential disparate or adverse impact resulting from the use of AI-driven tools, making several key points:
- The “four-fifths rule” is not determinative. Although this rule is a useful “rule of thumb,” it is not an absolute indicator of disparate impact. The EEOC may find that an AI tool that passes the four-fifths test still creates an unlawful adverse impact if it nevertheless produces a significant difference in selection rates.
- Algorithmic decision-making tools can be selection procedures subject to the 1978 Uniform Guidelines on Employee Selection Procedures (UGESP). A tool that constitutes a “measure, combination of measures, or procedure,” if used as a basis for an employment decision, may be subject to UGESP.
- An employer may be liable where an AI-powered selection tool results in adverse impact, even if the tool was developed or administered by an outside vendor. The guidance notes that this liability may attach even when the vendor incorrectly assessed the tool as free of adverse impact.
- Employers should consider conducting an adverse impact assessment before using an algorithmic decision-making tool, as well as conducting ongoing assessments and making proactive changes based on the result.
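As an illustration of the four-fifths comparison described above, the rule compares each group’s selection rate against the highest group’s rate; a ratio below 0.8 flags potential adverse impact. The following is a minimal sketch in Python, with hypothetical applicant numbers and function names that are purely illustrative and not drawn from the guidance:

```python
def selection_rate(selected, applicants):
    """Fraction of applicants who were selected."""
    return selected / applicants

def four_fifths_ratio(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical applicant pools (illustrative numbers only).
rates = {
    "group_a": selection_rate(48, 80),  # 0.60
    "group_b": selection_rate(18, 40),  # 0.45
}

ratio = four_fifths_ratio(rates)  # 0.45 / 0.60 = 0.75
flagged = ratio < 0.8  # True: below four-fifths, potential adverse impact
```

Note that, per the guidance, a ratio at or above 0.8 is not a safe harbor: the EEOC may still find adverse impact where the difference in selection rates is otherwise significant.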
Employers using third-party vendors to assist in the selection process should ask whether the vendors are using AI-powered selection tools to identify candidates. If so, employers should ensure that the data is being analyzed to avoid disparate impact against a protected group.
Employers using AI-driven tools in the selection, promotion, or termination process may want to review their practices to determine the extent to which they or their vendors may be using and assessing these tools.
As many federal and state agencies are scrutinizing the potential discriminatory impact of AI-based tools on employment, we will stay abreast of future developments and keep our clients updated.
Previous blogs on this subject: