Artificial Intelligence (AI) is enhancing and transforming many parts of our lives, including recruitment and employment. As with any new process, AI is not without growing pains. If you’ve been following HR news lately, you’ve probably seen headlines about the Workday AI lawsuit. The case is reverberating across the HR industry and raising critical questions about the tools many of us rely on to make smarter, faster hiring decisions.
At the end of the day, AI can help streamline recruitment, but if not carefully implemented, monitored, and maintained, it can lead to unintentional discrimination. That’s the foundation of the Workday AI bias lawsuit, and why HR professionals everywhere must pay close attention.
Let’s get into the details. Here are three key takeaways you’ll learn in this article:
Workday is a platform that helps manage a company's workforce including payroll, timesheets, benefits, recruiting, and more. Workday also has AI-based applicant screening tools, two of which are at issue in this lawsuit. They are:
The Workday lawsuit centers on claims that the company’s AI-powered hiring tools may be unintentionally discriminating against older job seekers.
The lead plaintiff is an African American, disabled job applicant over the age of 40 who alleged that he was repeatedly rejected from roles at companies that used Workday’s AI-based screening technology. He argued that Workday’s algorithms resulted in a disparate impact, meaning certain groups were disproportionately screened out, even without any intentional bias.
As of now, the court has allowed the lawsuit to move forward as a collective action that includes all individuals aged 40 and over who, from September 24, 2020, through the present, applied for job opportunities through Workday’s job application platform and were denied employment recommendations. That decision alone signals that courts are taking these claims seriously and that employers could be held accountable for discrimination, even when it results from algorithmic decision-making.
This legal challenge to the use of AI in hiring decisions is the first significant federal collective action of its kind, and it previews how courts are likely to treat AI lawsuits brought directly against employers. While the plaintiff in this lawsuit filed specifically against Workday, rather than the companies where he applied, employers are likely next in line for these types of legal challenges.
When Workday tried to narrow the scope of the lawsuit to exclude additional AI technology it acquired as a separate product well after the plaintiff filed his lawsuit (“HiredScore”), the court rejected that argument, too. Workday has since been ordered to produce a list of customers who have enabled the AI features. The court ruled: “If Workday can determine definitively that certain customers who enabled the AI features did not receive any scores or rankings, or did not score or screen candidates based on those AI features, Workday may exclude those customers from the list. Otherwise, they should be included.”
Workday represented in its court filings that 1.1 billion applications were rejected using its software tools during the relevant period, so the collective action could potentially include hundreds of millions of rejected applicants.
Workday isn’t the only company offering artificial intelligence tools for the hiring process. What ultimately matters is how each AI recruiting tool is used in an employer’s hiring process, and the real risks that come with it. Here are some crucial takeaways for HR:
A common misconception is that if the vendor’s algorithm is doing the screening, then the liability shifts to the vendor. That’s not the case.
Courts and regulatory agencies have made it clear that employers are ultimately responsible for discriminatory outcomes, even when third-party tools are involved. This is especially important if you’re using platforms like those at issue in the Workday lawsuit or any AI-powered system to evaluate resumes, rank candidates, or conduct video interviews.
If the tech you’re using screens out protected classes disproportionately, then your company could be liable.
Using AI doesn’t mean outsourcing legal responsibility. It means sharing it: if something goes wrong, both your vendor and your organization can be on the hook.
To comply with anti-discrimination laws, regular audits of your AI tools are critical. This means using your actual applicant and hiring data to uncover whether the algorithm is making decisions that disproportionately harm specific groups, even unintentionally.
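To make that concrete, here is a minimal sketch of one common disparate-impact check, the EEOC’s four-fifths (80%) rule, applied to exported applicant data. It assumes you can pull applicant outcomes with a demographic field from your ATS; the file and column names here are hypothetical, and a ratio below 0.8 is a red flag warranting review, not proof of discrimination.

```python
import pandas as pd

# Hypothetical export of applicant outcomes from your ATS.
# Assumed columns: "age_group" (e.g., "under_40" / "40_and_over")
# and "advanced" (True if the candidate passed the AI screen).
applicants = pd.read_csv("applicant_outcomes.csv")

# Selection rate per group: the share of applicants who advanced.
rates = applicants.groupby("age_group")["advanced"].mean()

# Four-fifths (80%) rule: each group's selection rate should be at
# least 80% of the most-selected group's rate.
impact_ratios = rates / rates.max()

for group, ratio in impact_ratios.items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.1%}, "
          f"impact ratio {ratio:.2f} [{flag}]")
```

Run checks like this under attorney-client privilege where possible, consistent with the privileged-audit approach discussed below, so that findings can be investigated and remediated before they become evidence.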
More jurisdictions are beginning to require this by law. For example, New York City’s Local Law 144 (LL144) mandates annual bias audits for automated employment decision tools. Colorado, Illinois, and California are rolling out or considering similar laws.
Even if your state doesn’t have specific AI hiring legislation yet, anti-discrimination laws are still in effect. Be proactive and make privileged bias audits a recurring part of your HR tech strategy. You want to catch issues before regulators or class-action attorneys do.
AI can be extremely helpful, but don’t rely on it alone. Maintaining human oversight is essential, not just from a compliance standpoint, but to ensure fairness and quality in hiring decisions.
Humans can spot red flags that algorithms might miss. If an AI tool flags a qualified candidate as “unfit” because of a gap in employment or an unconventional resume format, a human reviewer can provide context.
Think of AI as a co-pilot, not an auto-pilot. It’s not there to take over the entire process; it’s there to help the humans in your hiring process select the most qualified person for the job opening. The more you blend automation with human judgment, the safer and smarter your hiring process will be.
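One way to wire that co-pilot principle into your workflow is a routing rule that never auto-rejects. The sketch below is illustrative only; the field names are hypothetical and not any vendor’s actual API. It advances only high-confidence “fit” verdicts and sends everything else, including every rejection, to a human reviewer.

```python
from dataclasses import dataclass

# Hypothetical screening result from an AI tool; field names are
# illustrative, not any vendor's actual output schema.
@dataclass
class ScreenResult:
    candidate_id: str
    ai_verdict: str       # e.g., "fit" or "unfit"
    ai_confidence: float  # 0.0 to 1.0

def route(result: ScreenResult) -> str:
    """Treat the AI verdict as advisory: advance clear fits, but
    route rejections and low-confidence calls to a human."""
    if result.ai_verdict == "fit" and result.ai_confidence >= 0.9:
        return "advance"
    # Never auto-reject: a human reviewer sees every "unfit" verdict
    # and can supply the context (employment gaps, unconventional
    # resume formats) the model may have penalized.
    return "human_review"

print(route(ScreenResult("c-123", "unfit", 0.97)))  # human_review
```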
If you’re partnering with vendors that provide AI recruitment tools, then now’s the time to get specific about their safeguards.
Here’s what you should be asking:
And remember, don’t rely solely on vendor assurances. You still need to run your own internal audits or hire third-party auditors to verify that the tool aligns with your internal equity, anti-discrimination, and compliance goals.
If your vendor can’t explain how their AI works, or refuses to share their data, then you might want to reconsider the partnership.
The Workday case is part of a broader legal trend. Here are a few compliance points to keep in mind:
The legal environment is shifting fast. AI hiring lawsuits like this one are likely just the beginning. The earlier you adapt, the better protected your organization will be.
So, how should you adapt your hiring policies to avoid ending up in a similar legal situation? Here’s a roadmap to consider:
The Workday AI lawsuit is a wake-up call for HR across all industries. Even though AI offers incredible potential to improve efficiency and reduce human bias, it’s not foolproof, and it’s not exempt from legal scrutiny. Ironically, if left unmonitored, the very tool that’s supposed to reduce bias can inadvertently create bias and discriminatory hiring practices.
As outlined in this article, now is the time to:
The bottom line is that AI is here to stay and will continue to be used for recruitment. Employers need to make sure these tools are free of bias, because when it comes to hiring, fairness isn’t just a goal; it’s a legal and ethical responsibility.