Artificial Intelligence (AI) is enhancing and transforming many parts of our lives, including recruitment and employment. As with any new process, AI is not without growing pains. If you’ve been following HR news lately, then you’ve probably seen headlines about the Workday AI lawsuit. It’s a case that’s drawing attention across the HR industry and raising critical questions about the tools many of us rely on to make smarter, faster hiring decisions.
At the end of the day, AI can help streamline recruitment, but if not carefully implemented, monitored, and maintained, then it can lead to unintentional discrimination. That’s the foundation of the Workday AI bias lawsuit, and why HR professionals everywhere must pay close attention.
The latest updates from March 2026 are covered below.
Here’s What You’ll Learn
- Even if an AI tool’s bias is unintentional and the software is provided by a third-party vendor, courts have ruled that employers are ultimately responsible if those tools disproportionately screen out protected groups.
- To mitigate legal risk, HR must maintain consistent human oversight of their AI tools; human reviewers can provide the context necessary to catch qualified candidates that an algorithm might unfairly reject.
- HR teams must hold their tech vendors to higher standards of transparency, including detailed explanations of how candidate scores are calculated and how the AI tool makes decisions, especially when the employer's regular bias audits reveal potential indicators of discrimination.
What Is the Workday AI Lawsuit?
Workday is a platform that helps manage a company's workforce, including payroll, timesheets, benefits, recruiting, and more. Workday also offers AI-based applicant screening tools, two of which are at issue in this lawsuit. The first tool, Candidate Skills Match, extracts skills from the employer’s job posting and the applicant’s materials and determines the extent to which the applicant’s skills match the role they applied for.
The results are reported to the employer as “strong,” “good,” “fair,” “low,” “pending,” and “unable to score.” The second tool, Workday Assessment Connector, allegedly uses machine learning to observe that an employer disfavors certain candidates who are members of a protected class and then decreases the rate at which it recommends those candidates. Both AI-based tools now face discrimination claims.
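To make that kind of bucketing concrete, here is a deliberately simplified sketch of how a skills-overlap score could be mapped onto labels like the ones above. It is purely illustrative: the skill extraction, scoring formula, and thresholds are hypothetical and do not reflect how Workday’s tools actually work.

```python
# Purely illustrative sketch of a skills-match scorer that buckets results
# into coarse labels. This is NOT Workday's implementation; the scoring
# formula and thresholds below are hypothetical.

def match_score(job_skills: set[str], applicant_skills: set[str]) -> float:
    """Fraction of the posting's skills that also appear in the applicant's materials."""
    if not job_skills:
        return 0.0
    return len(job_skills & applicant_skills) / len(job_skills)

def bucket(score: float) -> str:
    """Map a raw overlap score onto coarse labels (hypothetical thresholds)."""
    if score >= 0.75:
        return "strong"
    if score >= 0.50:
        return "good"
    if score >= 0.25:
        return "fair"
    return "low"

job = {"sql", "recruiting analytics", "stakeholder management", "reporting"}
applicant = {"sql", "recruiting analytics", "excel"}
print(bucket(match_score(job, applicant)))  # -> "good" (2 of 4 skills overlap)
```

Even a toy example like this shows why the details matter: the choice of thresholds, and which skills the parser manages to extract, directly determine which applicants ever reach a human.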
The Workday lawsuit centers on claims that the company’s AI-powered hiring tools may be unintentionally discriminating against older job seekers.
The lead plaintiff is an African American, disabled job applicant over the age of 40 who alleged that he was repeatedly rejected from positions after applying to roles at companies that used Workday’s AI-based screening technology. The plaintiff argued that Workday’s algorithms resulted in a disparate impact, meaning certain groups were disproportionately screened out, even without any intentional bias.
As of now, the court has allowed the lawsuit to move forward as a collective action that includes all individuals aged 40 and over who, from September 24, 2020 through the present, applied for job opportunities through Workday’s job application platform and were not advanced further in the employer’s selection process by the AI tool. That decision alone signals that courts are taking these claims seriously and that employers could be held accountable for discrimination, even if it's the result of algorithmic decision-making.
This legal challenge, the first significant federal collective action over the use of AI in hiring decisions, is a preview of how courts are likely to treat AI lawsuits brought directly against employers. While the plaintiff in this lawsuit filed specifically against Workday, rather than the companies where he applied, employers who used Workday’s AI tools are likely next in line for these types of legal challenges.
When Workday tried to limit the scope of the lawsuit to exclude additional AI technology that it acquired as a separate product well after the plaintiff filed his lawsuit (“HiredScore”), the court rejected that argument, too. Workday thus has been ordered to produce a list of customers who have enabled the AI features.
The court ruled: “If Workday can determine definitively that certain customers who enabled the AI features did not receive any scores or rankings, or did not score or screen candidates based on those AI features, Workday may exclude those customers from the list. Otherwise, they should be included.”
It follows, then, that if the court ultimately finds that Workday’s products discriminated against any protected groups, the employers who enabled the features and received scores or rankings are exposed to potential discrimination allegations. The ruling also exposes these employers to document subpoenas from plaintiffs’ attorneys as they seek to discover the impact that Workday’s AI tools had on each customer’s hiring decisions, going back to September 24, 2020.
Workday represented in its court filings that 1.1 billion applications were rejected using its software tools during the relevant period, so the collective action could potentially include hundreds of millions of rejected applicants.
Latest Developments in the Lawsuit (March 2026)
So, what’s happened recently with this lawsuit?
In early March 2026, a federal judge allowed the age discrimination claims under the Age Discrimination in Employment Act (ADEA) to move forward. Workday had argued that disparate impact claims were applicable only to employees, not applicants. The court disagreed.
Instead, the judge pointed to prior court decisions and longstanding guidance from the Equal Employment Opportunity Commission (EEOC), reinforcing that anti-discrimination laws can apply to job seekers, not just people already employed. In plain terms, companies can’t sidestep liability just because the alleged bias happens during the application stage.
The court also kept the core “disparate impact” argument alive. That means plaintiffs can continue trying to show that Workday’s technology disproportionately screened out applicants aged 40 and older, even if there was no intentional bias baked into the system. That’s a big deal for anyone using AI in hiring.
That said, it wasn’t a total loss for Workday. The judge dismissed several state law claims and a disability discrimination claim due to a lack of detailed supporting facts. However, plaintiffs were given a chance to revise and refile those portions of the complaint.
The case doesn’t seem to be slowing down or shrinking substantially in scope. Courts have continued to reject efforts to shut it down entirely, which means discovery is moving forward, and more details could come to light. For HR teams, the takeaway is pretty clear. These cases aren’t getting tossed out early. Rather, they’re being taken seriously and allowed to play out.
HR Takeaways from the Lawsuit
Workday isn’t the only company offering artificial intelligence tools for use in the hiring process. The issue is ultimately about how each AI recruiting tool is used in the employer’s hiring process and the real risks that come with it. Here are some crucial takeaways for HR:
1. Employers Are Still Responsible Even if AI Is Doing the Screening
A common misconception is that if the vendor’s algorithm is doing the screening, then the liability shifts to the vendor. That’s not the case.
Courts and regulatory agencies have made it clear that employers are ultimately responsible for discriminatory outcomes, even when third-party tools are involved. This is especially important if you’re using platforms like those at issue in the Workday lawsuit or any AI-powered system to evaluate resumes, rank candidates, or conduct video interviews.
If the tech you’re using in the hiring process screens out protected classes disproportionately, then your company could be liable to the non-hired applicants.
Using AI doesn’t mean outsourcing legal responsibility. It means sharing that responsibility with your vendor, and your organization is still on the hook if something goes wrong.
2. Bias Audits Aren’t Optional Anymore
Regular audits of your AI tools are critical to staying compliant with anti-discrimination laws. This means using your actual applicant and hiring data to uncover whether the algorithm is making decisions that disproportionately harm specific groups, even unintentionally.
More jurisdictions are beginning to require this by law. For example, New York City’s Local Law 144 (LL144) mandates annual bias audits for automated employment decision tools. Colorado, Illinois, and California are rolling out or considering similar laws.
Even if your state doesn’t have specific AI hiring legislation yet, anti-discrimination laws are still in effect. Be proactive and make privileged bias audits a recurring part of your HR tech strategy. You want to catch issues before regulators or class-action attorneys do.
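A practical starting point is the EEOC’s four-fifths (80%) rule, which compares each group’s rate of advancing past the screen to the rate of the most-favored group. The sketch below uses made-up data and hypothetical column names; a real audit should cover more groups and intersections, apply proper statistical testing, and be run under privilege with counsel.

```python
# Minimal sketch of a disparate impact check using the four-fifths rule:
# compare each group's advancement rate to the highest group's rate.
# Data and column names are hypothetical.
import pandas as pd

# Hypothetical screening log: one row per applicant, with the attribute
# being audited (age 40+) and whether the AI tool advanced them.
df = pd.DataFrame({
    "age_40_plus": [True] * 200 + [False] * 200,
    "advanced":    [True] * 50 + [False] * 150    # 25% advancement for 40+
                 + [True] * 90 + [False] * 110,   # 45% advancement for under 40
})

rates = df.groupby("age_40_plus")["advanced"].mean()
impact_ratio = rates / rates.max()   # each group's rate vs. the highest rate

print(rates)
print(impact_ratio)
# A ratio below 0.8 for any group is a common red flag for disparate impact.
flagged = impact_ratio[impact_ratio < 0.8]
print("Groups flagged for review:", list(flagged.index))
```

A ratio below 0.8 doesn’t prove discrimination on its own, but it is exactly the kind of red flag that should trigger a deeper review of the tool, the data it was trained on, and the outcomes it produces.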
3. Keep a Human in the Loop
AI can be extremely helpful, but don’t rely on it alone. Maintaining human oversight is essential, not just from a compliance standpoint, but to ensure fairness and quality in hiring decisions.
Humans can spot red flags that algorithms might miss. If an AI tool flags a qualified candidate as “unfit” because of a gap in employment or an unconventional resume format, a human reviewer can provide context.
Think of AI as a co-pilot, not an auto-pilot. It’s not there to take over the entire process; it’s there to help the humans in your hiring process select the most qualified person for the job opening. The more you blend automation with human judgment, the safer and smarter your hiring process will be.
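One concrete way to operationalize this, sketched below with hypothetical fields and thresholds, is a routing rule that lets the AI advance clear matches but never issue a final rejection: low scores and sensitive flags go to a human reviewer instead.

```python
# Minimal sketch of a human-in-the-loop routing rule. Instead of auto-rejecting,
# anything the model scores below a threshold, or flags for reasons such as an
# employment gap, is queued for a human reviewer. Fields, threshold, and flag
# names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ScreenResult:
    candidate_id: str
    score: float                      # model's match score, 0.0-1.0
    flags: list[str] = field(default_factory=list)  # e.g., ["employment_gap"]

REVIEW_THRESHOLD = 0.5
REVIEW_FLAGS = {"employment_gap", "nonstandard_resume"}

def route(result: ScreenResult) -> str:
    """Advance clear passes; send everything else to a person, never auto-reject."""
    if result.score >= REVIEW_THRESHOLD and not (set(result.flags) & REVIEW_FLAGS):
        return "advance"
    return "human_review"

print(route(ScreenResult("A-101", 0.82)))                       # advance
print(route(ScreenResult("A-102", 0.41, ["employment_gap"])))   # human_review
```

The design point is simple: the algorithm can say “yes,” but only a person can say “no,” which keeps a human accountable for every rejection.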
What to Ask of Your HR Tech Vendors
If you’re partnering with vendors that provide AI recruitment tools, then now’s the time to get specific about their safeguards.
Here’s what you should be asking:
- How do you audit your AI models for bias?
- What inputs are your models trained on?
- Can you provide transparency into how candidate scores are calculated?
- How often are the algorithms updated or retrained?
- What human oversight is built into the process?
And remember, don’t rely solely on vendor assurances. You still need to run your own internal audits or hire third-party auditors to verify that the tool aligns with your internal equity, anti-discrimination, and compliance goals.
If your vendor can’t explain how their AI works, or refuses to share their data, then you might want to reconsider the partnership.
Legal & Compliance: What You Need to Know
The Workday case is part of a broader legal trend. Here are a few compliance points to keep in mind:
- Disparate impact claims, which allege that a practice has an adverse impact on a protected group even without intent, can proceed in court. The March 2026 developments in the Workday case show that these types of claims are not only surviving early challenges but are being taken very seriously and actively moving forward.
- Vendors may be considered agents, which means you’re both on the hook for outcomes, especially if the vendor is acting on your behalf in the hiring process.
- State and local regulations are increasing. If you operate in multiple states, stay up to date with each jurisdiction’s rules, including whether the jurisdiction requires you to notify applicants whether and how you are using AI in the hiring process.
The legal environment is shifting fast. AI hiring lawsuits like this one are likely just the beginning. The earlier you adapt, the better protected your organization will be.
Implications for Your HR Strategy
So, how should you adapt your hiring policies to avoid ending up in a similar legal situation? Here’s a roadmap to consider:
- Audit Your Existing AI Tools: Start by assessing any recruiting tools that use AI or automation. Run bias audits, review past hiring outcomes, and make sure any red flags are addressed promptly.
- Educate Your HR Team: Everyone on your recruiting team should understand the basics of AI bias, disparate impact, and your legal responsibilities under Equal Employment Opportunity Commission (EEOC) guidelines and local laws.
- Add Compliance Checkpoints: Integrate compliance reviews into your recruitment workflows. For example, require bias audits before implementing a new tool, and schedule regular check-ins to evaluate outcomes.
- Update Your Hiring Policy: Include language about human oversight, AI audits, and vendor responsibility in your hiring policies, procedures, and handbooks. This will provide clarity and structure and serve as evidence that your organization is being proactive. It is also best practice to alert job applicants about how AI is used during the hiring process.
What This AI Lawsuit Means for Your Organization
The Workday AI lawsuit is a wake-up call for HR across all industries. Even though AI offers incredible potential to improve efficiency and reduce human bias, it’s not foolproof and it’s not exempt from legal scrutiny. Ironically, if left unmonitored, the very tool that’s supposed to help reduce bias can inadvertently create bias and discriminatory hiring practices.
As outlined in this article, now is the time to:
- Audit for bias
- Maintain human oversight
- Hold vendors accountable
- Stay ahead of new laws
The bottom line is that AI is here to stay and will continue to be used for recruitment. Employers need to make sure these tools are free of bias, because when it comes to hiring, fairness isn’t just a goal, but also a legal and ethical responsibility.
Renee attended Augusta University and graduated with a bachelor's degree in business management. At OutSolve, Renee leads a team of HR compliance consultants. Her experience with recruiting systems has been of great benefit in working with clients regarding business processes. Prior to joining OutSolve, Renee was part of a Talent Acquisition team for a global business analytics and information services firm for several years, where she recruited for several different US departments. Renee managed global process improvement projects as the applicant tracking system administrator and global trainer, as well as analyzed hiring metrics and data integrity.