At its best, artificial intelligence is the world’s most informative and efficient assistant. The ability to gather information, lend knowledge, sort data, source figures, and scale endlessly at a moment’s notice is an employer’s dream. But, as contractors learned at the NILG Conference, the federal government foresees a significant risk to equal employment opportunity if the technology is embraced too quickly or haphazardly. AI has the potential to both exacerbate and mitigate discrimination in hiring, depending on how it is developed and deployed.
“Artificial intelligence can replicate and amplify a lack of diversity,” EEOC Commissioner Charlotte Burrows said. This message comes just as AI is entering the mainstream and appears to be a lifeline for overworked human resources departments. Too often, these departments are strapped for resources and time, bombarded with avalanches of applications and the regulatory maze that accompanies them.
Systems capable of screening out unqualified, undesirable, or inexperienced job seekers are an attractive product. If AI can slice through one of the most tedious and treacherous parts of the hiring process, it will become a mainstay for hiring managers.
While proper use of this tool might harness the powers of a tireless assistant, the federal government is strongly urging caution. Contractors must recognize the hidden risks associated with this technology, particularly when it comes to complying with anti-discrimination laws. Between the baked-in bias of historical data, the unintentional partiality of algorithms, and the lack of diversity on AI teams, the government sees numerous red flags for employers to heed.
“As much as we have been listening to and learning about what AI can do for the good -- and it’s blue waters -- there is also the opportunity, whether directly or indirectly, to have an adverse impact on your hiring decisions,” OutSolve’s Chief Operations Officer, Patrick Savoy, said when recapping the government’s most recent public position shared at the NILG Conference. “That’s really where the agency is focused.”
Indeed, OFCCP District Director Sam Maiden told NILG attendees that the agency intends to ask contractors for the data used to train these systems. This could peel back layers of unintended consequences that contractors might not even understand themselves.
Part of the government’s message stresses that organizations must be intentional in their screening efforts, avoiding reliance on machine learning to fulfill equal opportunity obligations. Best practices can include: retaining data, auditing results against objective metrics, and requiring vendors to provide information on how their AI systems work.
Ultimately, the government’s message on artificial intelligence is clear:
Contractors can’t delegate their EEO responsibility to a third party. And claims of proprietary technology are not a safeguard against claims of discrimination.
While AI can be a useful tool for ratcheting up efficiency in the hiring process, it is rife with potential pitfalls as well. The OFCCP made clear that the onus of compliance still falls directly on the contractor, regardless of the machine’s recommendations, and meeting it requires a practical, proactive approach.
“Let’s face it, if there is something creating an adverse impact in your selection decisions, in the applicant to hire process, the obligation of you as a contractor and employer is to dissect that down to see where the impact exists,” Savoy said. “If that impact derives from an AI tool you’re using, it requires diving a little deeper to make sure there’s nothing discriminatory in its construction. That is a legal rabbit hole… and it was front and center for the majority of the NILG Conference. It is on the minds of OFCCP when they’re investigating your affirmative action plans.”