AI hiring is now a legal risk. Are you up to speed?

When used correctly and ethically, artificial intelligence can be a huge help to overburdened hiring managers and HR teams as resumes and interview schedules start stacking up.

But the potential legal risks of using AI hiring systems have officially crossed a new threshold. In January, a proposed class-action lawsuit was filed in California against Eightfold AI. The complaint alleges the company's hiring algorithms were used to evaluate candidates without proper notice, transparency or access, potentially violating federal consumer-protection laws and California's Fair Employment and Housing regulations. Employers using systems like Eightfold's outside the U.S. may face parallel exposure under the EU Artificial Intelligence Act.

The complaint argues that Eightfold AI produced hidden candidate "scores" used by major employers including Microsoft, PayPal, Salesforce and Bayer to screen applicants. Two female candidates claim the AI-based system denied them job opportunities in STEM fields without ever allowing them to see, understand or challenge the AI-generated evaluations.

Whether or not the lawsuit succeeds, the message to employers is unmistakable: AI hiring systems are no longer just about efficiency. They are now a compliance, fairness and trust issue.

What the lawsuit alleges

According to the complaint and related reporting, Eightfold AI's tools allegedly:

  • Scored job applicants on a scale of 0–5 based on their predicted "likelihood of success," with 5 being the highest score. 
  • Used sensitive personal data, including social media profiles, location data, device activity, cookies and other tracking signals.
  • Operated secretly, without applicants receiving any disclosure, consent or opportunity to correct errors. Candidates were also allegedly not given a copy of the report that influenced their job prospects.

According to the complaint, "There is no meaningful opportunity to review or dispute Eightfold's AI-generated report before it informs a decision about one of the most important aspects of their lives — whether or not they get a job."

The filing also underscores that "there is no AI exemption to these laws."

If the court finds that Eightfold's algorithmic rankings qualify as "consumer reports" under existing U.S. law (such as the Fair Credit Reporting Act), then the entire category of opaque AI hiring systems may fall under strict regulation.

Three key risks employers should now recognize

The message for employers is clear: AI-driven candidate evaluation in hiring is not a technical shortcut. It is a regulated activity with legal consequences for organizations of any size, not just the Fortune 500 companies named in the case.

Here are three of the biggest risks to be mindful of:

1. Compliance risk: undisclosed or unreviewable AI = legal exposure. If your hiring AI tool makes decisions that applicants cannot see or challenge, your organization may violate consumer-protection and fair-assessment rules, even unintentionally.

2. Data risk: AI systems can pull more than you realize. Tools that ingest social media, device data, location information or browsing behavior create high-stakes privacy obligations.

3. Bias and fairness risk: "black box" scoring can discard qualified candidates. Opaque AI scoring may eliminate strong applicants before a human ever looks at them, increasing the likelihood of legal claims and potentially damaging the employer's brand.

Reducing your AI hiring risk

To adopt AI safely, organizations need workflows that are defensible, auditable and transparent — not just fast.

Here are five best practices that every employer should implement:

1. Transparency by design. Applicants should always know:

  • When AI is being used;
  • What it evaluates;
  • What data it relies on.

In your career advertisements and in application materials sent to candidates, reference which AI tools you use and why. Transparency builds trust and reduces regulatory exposure.
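One way to keep those disclosures consistent is to treat them as structured data rather than ad hoc text. Here is a minimal sketch in Python; the field names and notice wording are illustrative assumptions, not any vendor's actual format:

```python
from dataclasses import dataclass

@dataclass
class AIDisclosure:
    tool_name: str            # when/which AI is being used
    evaluates: list[str]      # what it evaluates
    data_sources: list[str]   # what data it relies on

    def to_notice(self) -> str:
        """Render the candidate-facing notice included with job postings."""
        return (
            f"This role uses {self.tool_name} to assess: {', '.join(self.evaluates)}. "
            f"It relies only on: {', '.join(self.data_sources)}."
        )

notice = AIDisclosure(
    tool_name="a resume-screening tool",
    evaluates=["skills match", "required certifications"],
    data_sources=["your resume", "your application answers"],
).to_notice()
```

Generating the notice from one record means every posting discloses the same three facts, instead of leaving the wording to each recruiter.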

2. Human-in-the-loop decision-making. AI should augment hiring decisions, never replace them. A human reviewer should always be part of the final hiring evaluation.
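To make the gate concrete, here is a minimal sketch of human-in-the-loop decision-making, with hypothetical names: the AI can only produce a recommendation, and no decision exists until a named human reviewer signs off.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIRecommendation:
    candidate_id: str
    score: float     # e.g., a 0-5 "likelihood of success" scale
    rationale: str   # plain-language explanation shown to the reviewer

@dataclass
class HiringDecision:
    candidate_id: str
    outcome: str           # "advance" or "reject"
    reviewed_by: str       # the human accountable for the decision
    ai_score: Optional[float]

def finalize_decision(rec: AIRecommendation, reviewer: str, outcome: str) -> HiringDecision:
    """A decision only exists once a named human reviewer confirms it.

    The AI recommendation is an input to the decision, never the decision itself.
    """
    if not reviewer:
        raise ValueError("No hiring decision without a human reviewer on record.")
    return HiringDecision(
        candidate_id=rec.candidate_id,
        outcome=outcome,
        reviewed_by=reviewer,
        ai_score=rec.score,
    )
```

The design choice that matters is the hard failure: the workflow cannot record a rejection with an empty reviewer field, so the audit trail always names a person.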

3. Applicant rights: access, explanation and correction. When using an algorithm to generate a score or assessment for a job applicant, make sure the applicant:

  • Is able to see it;
  • Understands how the algorithm affected their candidacy;
  • Has the opportunity to correct inaccurate information.

Employers that provide feedback to all candidates, regardless of outcome, should be able to tell them why they did or didn't move on to the next stage of the selection process. If the AI has incorrectly rejected a candidate, that candidate should be able to correct the mistake by replying to the person tasked with screening applicants. This is where the human element comes into play: it mitigates the risk of automated selection and mirrors existing fairness and consumer-reporting obligations.
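As an illustration only (this does not reflect Eightfold's or any vendor's actual system), an applicant-facing record could bundle the score, a plain-language explanation and a correction channel in one place:

```python
from dataclasses import dataclass, field

@dataclass
class ApplicantReport:
    candidate_id: str
    ai_score: float
    factors: list[str]    # the job-related inputs the score relied on
    explanation: str      # plain-language summary shown to the applicant
    corrections: list[str] = field(default_factory=list)

    def request_correction(self, note: str) -> None:
        """Log a dispute; a human screener re-reviews before any rejection stands."""
        self.corrections.append(note)

# The applicant can see the report, understand it and challenge it:
report = ApplicantReport(
    candidate_id="c-1042",
    ai_score=3.5,
    factors=["years of relevant experience", "required certifications"],
    explanation="Scored 3.5/5 based on experience and certifications on the resume.",
)
report.request_correction("My CPA license was issued in 2021; the resume parser missed it.")
```

Storing disputes alongside the score gives the screening team a queue of corrections to act on, and gives the employer evidence that applicants had access and a path to challenge errors.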

4. Reduce data collection to what is job-relevant. Avoid systems that pull data from social media, location tracking, browsing or device activity; use only job-related, validated inputs. It is the employer's responsibility to question the AI provider to ensure that only job-relevant criteria are used. Some hiring tools offer filters that determine where the AI does or doesn't go when building an assessment of a candidate's suitability (for example, switching on or off the reading of a candidate's social posts). The Eightfold complaint suggests the company's AI was going everywhere, looking for reasons to select or reject the complainants.
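One way to operationalize that filter, sketched below with hypothetical source names, is a strict allowlist: only inputs the employer has validated as job-relevant reach the scoring model, and everything else is dropped by default.

```python
# Hypothetical data sources; only employer-validated, job-relevant ones pass through.
JOB_RELEVANT_SOURCES = {"resume", "application_form", "skills_assessment"}

def filter_candidate_inputs(raw_inputs: dict[str, object]) -> dict[str, object]:
    """Drop everything not on the job-relevant allowlist.

    Sources like 'social_media', 'location' or 'device_activity' never reach
    the scoring model, regardless of what the vendor's tool can collect.
    """
    return {src: data for src, data in raw_inputs.items() if src in JOB_RELEVANT_SOURCES}

inputs = {
    "resume": "...",
    "social_media": "...",   # excluded by default
    "location": "...",       # excluded by default
}
assert set(filter_candidate_inputs(inputs)) == {"resume"}
```

The key property is deny-by-default: a new data source a vendor adds later is excluded until someone deliberately validates it as job-relevant, rather than included until someone notices.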

5. Use tools aligned with employment and consumer law. Choose AI hiring vendors that proactively design audit trails, provide clearly explainable scoring, test for bias and supply compliance documentation. If a tool cannot explain its decisions clearly to candidates and to your management team, then your legal team will not be able to defend them.
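For bias testing specifically, a common starting point (and one worth asking vendors about) is the four-fifths rule from U.S. EEOC guidance: if one group's selection rate falls below 80% of the highest group's rate, the tool may be producing adverse impact. A minimal check, with made-up numbers for illustration, might look like this:

```python
def adverse_impact_ratios(selected: dict[str, int], applied: dict[str, int]) -> dict[str, float]:
    """Compare each group's selection rate to the highest group's rate.

    Under the EEOC's four-fifths rule of thumb, a ratio below 0.8 is a
    red flag that warrants deeper statistical review, not proof of bias.
    """
    rates = {g: selected[g] / applied[g] for g in applied if applied[g] > 0}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

ratios = adverse_impact_ratios(
    selected={"group_a": 50, "group_b": 30},
    applied={"group_a": 100, "group_b": 100},
)
flagged = [g for g, r in ratios.items() if r < 0.8]  # here, group_b at 0.6
```

A vendor that runs checks like this on its own scoring, and documents the results, is giving your legal team something defensible; one that cannot is asking you to carry the risk.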

A better path forward: faster hiring without the legal risk

Employer demand for faster hiring is real, but speed cannot come at the cost of fairness, transparency or compliance. Resources like our Hire Fast with Confidence Guide show it is possible to accelerate hiring without:

  • Losing qualified candidates to opaque AI filters;
  • Relying on undisclosed algorithmic scores;
  • Creating avoidable legal exposure.

The future of AI in hiring belongs to systems that are:

  • Transparent by design;
  • Human-guided, not human-replaced;
  • Aligned from the ground up with employment and consumer-protection law.

The Kistler & Bhaumik v. Eightfold AI lawsuit is more than an isolated case. It's a harbinger of the new employment landscape: AI hiring will be scrutinized the way credit reporting, background checks and other regulated decision systems are today. Employers that prioritize transparency, fairness and compliance now will be the ones who navigate this shift successfully and earn greater trust from candidates in the process.
