Workday AI Lawsuit Explained: Implications for HR

Written by Renee Arazie | Sep 24, 2025 4:36:06 PM

Artificial Intelligence (AI) is transforming many parts of our lives, including recruitment and employment. As with any new technology, AI is not without growing pains. If you’ve been following HR news lately, you’ve probably seen headlines about the Workday AI lawsuit, a case that’s raising critical questions across the HR industry about the tools many of us rely on to make smarter, faster hiring decisions. 

At the end of the day, AI can help streamline recruitment, but if it is not carefully implemented, monitored, and maintained, it can lead to unintentional discrimination. That’s the foundation of the Workday AI bias lawsuit, and it’s why HR professionals everywhere must pay close attention. 

Let’s get into the details. Here are three key takeaways you’ll learn in this article: 

  1. What the Workday AI lawsuit is about 
  2. What the lawsuit means for HR 
  3. Steps you can take to make sure your AI recruitment tools are fair, effective, and legal 

What Is the Workday AI Lawsuit? 

Workday is a platform that helps manage a company's workforce, including payroll, timesheets, benefits, recruiting, and more. Workday also offers AI-based applicant screening tools, two of which are at issue in this lawsuit:

  • Candidate Skills Match: Extracts skills from the employer’s job posting and the applicant’s materials and determines the extent to which the applicant’s skills match the role to which they applied. The results are reported to the employer as “strong,” “good,” “fair,” “low,” “pending,” or “unable to score.”
  • Workday Assessment Connector: Allegedly uses machine learning to learn that an employer disfavors candidates who are members of a protected class and then decreases the rate at which it recommends those candidates.

Both tools are under fire for discrimination claims. The Workday lawsuit centers on allegations that the company’s AI-powered hiring tools may be unintentionally discriminating against older job seekers. 

The lead plaintiff is an African American, disabled job applicant over the age of 40 who alleges he was repeatedly rejected from roles at companies that used Workday’s AI-based screening technology. He argues that Workday’s algorithms resulted in a disparate impact, meaning certain groups were disproportionately screened out even without any intentional bias. 

As of now, the court has allowed the lawsuit to move forward as a collective action that includes all individuals aged 40 and over who, from September 24, 2020 through the present, applied for job opportunities using Workday’s job application platform and were denied employment recommendations. That decision alone signals that courts are taking these claims seriously and that employers could be held accountable for discrimination, even when it results from algorithmic decision-making.  

As the first significant federal collective action challenging the use of AI in hiring decisions, this case is a preview of how courts are likely to treat AI lawsuits brought directly against employers. While the plaintiff filed specifically against Workday, rather than the companies where he applied, employers are likely next in line for these types of legal challenges.

When Workday tried to limit the scope of the lawsuit to exclude HiredScore, an additional AI product it acquired as a separate offering well after the plaintiff filed his lawsuit, the court rejected that argument, too. Workday has since been ordered to produce a list of customers who have enabled the AI features. The court ruled: “If Workday can determine definitively that certain customers who enabled the AI features did not receive any scores or rankings, or did not score or screen candidates based on those AI features, Workday may exclude those customers from the list. Otherwise, they should be included.” 

Workday represented in its court filings that 1.1 billion applications were rejected using its software tools during the relevant period, so the collective action could potentially include hundreds of millions of rejected applicants.  

HR Takeaways from the Lawsuit 

Workday isn’t the only company offering artificial intelligence tools for the hiring process, and the lesson here isn’t about one vendor. What matters is how each AI recruiting tool is used in the employer’s hiring process, and the real risks that come along with it. Here are some crucial takeaways for HR: 

1. Employers Are Still Responsible Even If AI Is Doing the Screening 

A common misconception is that if the vendor’s algorithm is doing the screening, then the liability shifts to the vendor. That’s not the case. 

Courts and regulatory agencies have made it clear that employers are ultimately responsible for discriminatory outcomes, even when third-party tools are involved. This is especially important if you’re using platforms like those at issue in the Workday lawsuit or any AI-powered system to evaluate resumes, rank candidates, or conduct video interviews. 

If the tech you’re using screens out protected classes disproportionately, then your company could be liable. 

Using AI doesn’t mean outsourcing legal responsibility. At most, it means sharing that responsibility with your vendor if something goes wrong. 

2. Bias Audits Aren’t Optional Anymore 

To comply with anti-discrimination laws, regular audits of your AI tools are critical. This means using your actual applicant and hiring data to uncover whether the algorithm is making decisions that disproportionately harm specific groups, even unintentionally. 
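What does such an audit look like in practice? Below is a minimal sketch in Python based on the EEOC’s four-fifths rule of thumb from the Uniform Guidelines on Employee Selection Procedures. The data, column names, and threshold handling are illustrative assumptions, not a complete methodology; run real audits with counsel and a qualified analyst.

```python
import pandas as pd

# Toy applicant-level data for illustration only. In practice, export one
# row per applicant from your ATS, with the attribute being audited and
# whether the AI screen advanced the applicant.
applicants = pd.DataFrame({
    "group":    ["under_40"] * 100 + ["40_plus"] * 100,
    "advanced": [1] * 40 + [0] * 60 + [1] * 25 + [0] * 75,
})

def adverse_impact_ratios(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Each group's selection rate divided by the highest group's rate.

    Under the EEOC's four-fifths guideline, a ratio below 0.80 is a
    common red flag that warrants a closer, ideally privileged, review.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

ratios = adverse_impact_ratios(applicants, "group", "advanced")
print(ratios)                 # 40_plus -> 0.625, under_40 -> 1.000
print(ratios[ratios < 0.80])  # groups failing the four-fifths check
```

A failing ratio doesn’t prove discrimination on its own, but it tells you where to dig deeper before a regulator or plaintiff’s attorney does.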

More jurisdictions are beginning to require this by law. For example, New York City’s Local Law 144 (LL144) mandates annual bias audits for automated employment decision tools. Colorado, Illinois, and California are rolling out or considering similar laws. 

Even if your state doesn’t have specific AI hiring legislation yet, anti-discrimination laws are still in effect. Be proactive and make privileged bias audits a recurring part of your HR tech strategy. You want to catch issues before regulators or class-action attorneys do. 

3. Keep a Human in the Loop 

AI can be extremely helpful, but don’t rely on it alone. Maintaining human oversight is essential, not just from a compliance standpoint, but to ensure fairness and quality in hiring decisions. 

Humans can spot red flags that algorithms might miss. If an AI tool flags a qualified candidate as “unfit” because of a gap in employment or an unconventional resume format, a human reviewer can provide context. 

Think of AI as a co-pilot, not auto-pilot. It’s not there to take over the entire process. It’s there to help the humans in your hiring process select the most qualified person for the job opening. The more you blend automation with human judgment, the safer and smarter your hiring process will be. 
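One way to put the co-pilot idea into practice is a routing rule that never lets the AI auto-reject. The sketch below borrows score labels like the categories described earlier (“strong” through “unable to score”), but the routing policy itself is an illustrative assumption, not any vendor’s actual logic.

```python
# Hypothetical human-in-the-loop routing: the AI shortlists, a human decides.
AUTO_ADVANCE = {"strong", "good"}
HUMAN_REVIEW = {"fair", "low", "pending", "unable to score"}

def route_candidate(ai_score: str) -> str:
    """Route every candidate somewhere; no score triggers auto-rejection."""
    if ai_score in AUTO_ADVANCE:
        return "advance to recruiter"
    if ai_score in HUMAN_REVIEW:
        return "queue for human review"  # a person supplies context the model misses
    raise ValueError(f"unexpected score: {ai_score!r}")

print(route_candidate("strong"))           # advance to recruiter
print(route_candidate("unable to score"))  # queue for human review
```

The design choice that matters is the default: when the model is unsure or unfavorable, the candidate goes to a person, not the reject pile.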

What to Ask of Your HR Tech Vendors 

If you’re partnering with vendors that provide AI recruitment tools, then now’s the time to get specific about their safeguards. 

Here’s what you should be asking: 

  • How do you audit your AI models for bias? 
  • What inputs are your models trained on? 
  • Can you provide transparency into how candidate scores are calculated? 
  • How often are the algorithms updated or retrained? 
  • What human oversight is built into the process? 

And remember, don’t rely solely on vendor assurances. You still need to run your own internal audits or hire third-party auditors to verify that the tool aligns with your internal equity, anti-discrimination, and compliance goals. 

If your vendor can’t explain how their AI works, or refuses to share their data, then you might want to reconsider the partnership. 

Legal & Compliance: What You Need to Know 

The Workday case is part of a broader legal trend. Here are a few compliance points to keep in mind: 

  • Disparate impact claims, meaning claims that a practice adversely affects a protected group even without intent, can proceed in court. That’s exactly what’s happening in the Workday case. “Although the President issued an executive order that directed the Equal Employment Opportunity Commission and the Department of Justice to de-prioritize disparate impact cases, aggrieved employees and applicants can hire attorneys to proceed in court, just like the plaintiff in the Workday litigation,” said attorney Alissa Horvitz. “Disparate impact claims are alive and well,” she added. 
  • Vendors may be considered agents, which means you’re both on the hook for outcomes, especially if the vendor is acting on your behalf in the hiring process. 
  • State and local regulations are increasing. If you operate in multiple states, stay up to date with each jurisdiction’s rules, including whether the jurisdiction requires you to notify applicants that, and how, you are using AI in the hiring process. 

The legal environment is shifting fast. AI hiring lawsuits like this one are likely just the beginning. The earlier you adapt, the better protected your organization will be. 

Implications for Your HR Strategy 

So, how should you adapt your hiring policies to avoid ending up in a similar legal situation? Here’s a roadmap to consider: 

  • Audit Your Existing AI Tools: Start by assessing any recruiting tools that use AI or automation. Run bias audits, review past hiring outcomes, and make sure any red flags are addressed promptly. 
  • Educate Your HR Team: Everyone on your recruiting team should understand the basics of AI bias, disparate impact, and your legal responsibilities under Equal Employment Opportunity Commission (EEOC) guidelines and local laws. 
  • Add Compliance Checkpoints: Integrate compliance reviews into your recruitment workflows. For example, require bias audits before implementing a new tool, and schedule regular check-ins to evaluate outcomes (see the sketch after this list). 
  • Update Your Hiring Policy: Include language in your hiring policies, procedures, and handbooks about human oversight, AI audits, and vendor responsibility. This will provide clarity and structure and serve as evidence that your organization is being proactive. It is also best practice to alert job applicants about how AI is used during the hiring process.  
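To make those checkpoints concrete, here is one simple way to track them programmatically. The tool names, fields, and cadence below are assumptions to adapt to your own governance process; the annual interval echoes LL144’s audit requirement.

```python
from datetime import date, timedelta

# Hypothetical inventory of the AI tools in your hiring stack.
tools = [
    {"name": "resume_screener", "last_bias_audit": date(2025, 3, 1),
     "human_review_required": True},
    {"name": "video_interview_scorer", "last_bias_audit": date(2024, 6, 15),
     "human_review_required": False},
]

AUDIT_INTERVAL = timedelta(days=365)  # annual cadence, as under NYC LL144

for tool in tools:
    if date.today() - tool["last_bias_audit"] > AUDIT_INTERVAL:
        print(f"{tool['name']}: bias audit overdue; re-audit or pause the tool")
    if not tool["human_review_required"]:
        print(f"{tool['name']}: no human in the loop; flag for policy review")
```

Even a lightweight tracker like this gives you a paper trail that demonstrates proactive oversight.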

What This AI Lawsuit Means for Your Organization 

The Workday AI lawsuit is a wake-up call for HR across all industries. Even though AI offers incredible potential to improve efficiency and reduce human bias, it’s not foolproof and it’s not exempt from legal scrutiny. Ironically, if not monitored for accuracy, the very tool that’s supposed to help reduce bias can instead introduce bias and discriminatory hiring practices.  

As outlined in this article, now is the time to: 

  • Audit for bias 
  • Maintain human oversight 
  • Hold vendors accountable 
  • Stay ahead of new laws 

The bottom line is that AI is here to stay and will continue to be used for recruitment. Employers need to make sure these tools are free of bias, because when it comes to hiring, fairness isn’t just a goal, but also a legal and ethical responsibility.  

If you are interested in receiving articles like this directly in your inbox, subscribe to our newsletter.