Responsible AI | Transparency

Our bias audit results, published in full.

The Eightfold Matching Model was independently audited for disparate impact across gender and race/ethnicity. We passed. Here is every number, every group, and every finding — no summaries, no spin.

Most AI organizations say they care about fairness. Few publish the data to prove it.

This page presents the full results of the 2026 bias audit of the Eightfold Matching Model, conducted by BABL AI Inc. under the requirements of New York City Local Law 144. The audit evaluated disparate impact across gender and race/ethnicity, internal governance, and risk assessment. The Eightfold Matching Model passed all three sections.

Prepared by BABL AI Inc.  |  March 3, 2026  |  Signed: March 26, 2026

Overall results

Three categories audited. Three passing opinions.

Disparate Impact
PASS
Governance
PASS
Risk Assessment
PASS
Overall
PASS

The audit was conducted by BABL AI Inc., an independent auditing firm whose lead auditors are ForHumanity Certified under the NYC AEDT Bias Audit standard. BABL AI's independence conforms to the ForHumanity and Sarbanes-Oxley definitions. Fees are fixed and unrelated to the opinion rendered.

The audit covered data from January 2024 through December 2025, analyzed across more than 29 million candidate assessments where demographic data was self-declared.

The system

What the Eightfold Matching Model does.

The Eightfold Matching Model evaluates a candidate’s skills and experience relative to a specific role and its requirements. It produces a match score, rated from zero to five stars in half-star increments, used primarily in the initial phase of external applications and occasionally to support internal mobility decisions such as promotions.

The model does not receive demographic inputs as part of its scoring logic. Scoring rates were measured after the fact, using self-declared demographic data, to assess whether model outputs produced disparate outcomes across groups.

Auditor: BABL AI Inc.
Audit date: March 3, 2026
Data timeframe: Jan 2024 – Dec 2025
Internal testing: January 2026
Testing conducted by: Eightfold AI Inc.
Applicable law: NYC Local Law 144

Methodology

How scoring rates and impact ratios work.

The audit used the scoring rate method — the proportion of candidates within a demographic group who scored at or above the overall median score of the full population.

Impact ratios are calculated by dividing each group’s scoring rate by the scoring rate of the highest-scoring group. Under the federal Four-Fifths Rule (UGESP, 1978), an impact ratio below 0.80 is generally regarded as evidence of adverse impact.

In plain terms: if female candidates score above the median 62.8% of the time and male candidates do so 60.5% of the time, the male impact ratio is 0.962 — well above the 0.80 threshold. All groups in this audit remained above 0.80.
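As a sketch of the arithmetic (not the auditor's actual tooling), the scoring-rate method can be expressed in a few lines of Python. The group names and scores below are hypothetical toy data, not the audit dataset:

```python
from statistics import median

def scoring_rate(scores, cutoff):
    """Share of a group's candidates scoring at or above the cutoff."""
    return sum(s >= cutoff for s in scores) / len(scores)

def impact_ratios(group_scores):
    """Impact ratio of each group relative to the highest-scoring group.

    group_scores: dict mapping group name -> list of match scores.
    The cutoff is the median score of the pooled population, per the
    scoring-rate method described above.
    """
    pooled = [s for scores in group_scores.values() for s in scores]
    cutoff = median(pooled)
    rates = {g: scoring_rate(s, cutoff) for g, s in group_scores.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical toy data -- illustrative only.
ratios = impact_ratios({
    "A": [4.5, 4.0, 3.5, 3.0, 2.5, 4.0],
    "B": [4.0, 3.5, 3.0, 2.5, 2.0, 3.5],
})
# Four-Fifths Rule check: any group below 0.80 would be flagged.
flagged = {g: r for g, r in ratios.items() if r < 0.80}
```

In this toy example, group A scores at or above the pooled median 4 times out of 6 and group B 3 times out of 6, giving B an impact ratio of 0.75 and flagging it under the 0.80 threshold; in the audit itself, every group cleared that bar.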

Disparate impact results · Gender

Gender scoring rates across 23.8 million assessments.

Group Candidates assessed Scoring rate Impact ratio
Female 10,435,471 62.8% 1.000 — reference
Male 13,381,312 60.5% 0.962 ✓

Female candidates scored at or above the median at a slightly higher rate than male candidates. The male impact ratio of 0.962 is comfortably above the 0.80 Four-Fifths threshold. No gender group showed adverse impact.

Note: An additional 74,997,062 candidate records were excluded from this calculation because gender was not self-declared (recorded as unknown), a reflection of real-world limitations in demographic data collection.

Disparate impact results · Race/Ethnicity

Race and ethnicity scoring rates across seven groups.

Group Candidates assessed Scoring rate Impact ratio
Hispanic or Latino 2,120,351 67.2% 1.000 — reference
American Indian or Alaskan Native 122,484 66.3% 0.986 ✓
Native Hawaiian or Pacific Islander 46,422 66.1% 0.984 ✓
Two or More Races 746,927 65.7% 0.978 ✓
Black or African American 2,246,235 64.8% 0.965 ✓
White 5,291,543 64.8% 0.964 ✓
Asian 4,880,107 63.0% 0.938 ✓

Hispanic or Latino candidates had the highest scoring rate in this dataset, making them the reference group for impact ratio calculations. All seven race/ethnicity groups showed impact ratios above 0.80. No group triggered the adverse impact threshold.

The range across all groups was narrow: from 63.0% (Asian) to 67.2% (Hispanic or Latino), a spread of 4.2 percentage points across 15.4 million candidates with known race/ethnicity data.

Note: An additional 85,587,944 candidate records were excluded due to an unknown race/ethnicity category.

Disparate impact results · Intersectional analysis

Gender and race/ethnicity combined.

NYC Local Law 144 requires intersectional analysis: examining every combination of gender and race/ethnicity, not just each dimension separately. This is a more demanding standard than single-dimension analysis alone. The reference group, with the highest scoring rate of 70.5%, is Hispanic or Latina female candidates.

Female candidates

Group Candidates assessed Scoring rate Impact ratio
Hispanic or Latina Female 1,247,002 70.5% 1.000 — reference
American Indian or Alaskan Native Female 57,387 70.0% 0.994 ✓
Native Hawaiian or Pacific Islander Female 23,811 69.1% 0.980 ✓
Two or More Races Female 358,847 68.1% 0.966 ✓
White Female 2,421,935 67.8% 0.962 ✓
Black or African American Female 1,216,990 66.4% 0.941 ✓
Asian Female 1,643,219 62.1% 0.880 ✓

Male candidates

Group Candidates assessed Scoring rate Impact ratio
Non-Hispanic Asian Male 2,732,449 64.1% 0.910 ✓
Two or More Races Male 321,033 63.7% 0.903 ✓
American Indian or Alaskan Native Male 55,747 63.0% 0.894 ✓
Hispanic or Latino Male 782,366 62.9% 0.893 ✓
Native Hawaiian or Pacific Islander Male 18,789 62.9% 0.892 ✓
Non-Hispanic Black or African American Male 809,717 62.7% 0.890 ✓
Non-Hispanic White Male 2,282,433 62.2% 0.882 ✓

All intersectional groups remain above the 0.80 Four-Fifths threshold, including those with the lowest observed ratios: 0.880 (Asian Female) and 0.882 (Non-Hispanic White Male). A consistent pattern appears across the data: female candidates scored at higher rates than their male counterparts in every race/ethnicity group.
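The intersectional calculation is the same scoring-rate arithmetic, keyed on (gender, race/ethnicity) pairs rather than a single dimension. A minimal sketch, assuming records of self-declared demographics plus a match score (the group labels and scores below are hypothetical, not audit data):

```python
from collections import defaultdict
from statistics import median

def intersectional_impact_ratios(records):
    """Impact ratios over every (gender, race/ethnicity) combination.

    records: iterable of (gender, race, score) tuples with self-declared
    demographics. The cutoff is the pooled median, matching the
    single-dimension scoring-rate method.
    """
    records = list(records)
    cutoff = median(score for _, _, score in records)
    counts = defaultdict(lambda: [0, 0])  # (gender, race) -> [at_or_above, total]
    for gender, race, score in records:
        tally = counts[(gender, race)]
        tally[0] += int(score >= cutoff)
        tally[1] += 1
    rates = {g: above / total for g, (above, total) in counts.items()}
    top = max(rates.values())  # highest-scoring combination is the reference
    return {g: r / top for g, r in rates.items()}

# Hypothetical toy data -- illustrative only.
ratios = intersectional_impact_ratios([
    ("Female", "Group X", 4.5), ("Female", "Group X", 3.5),
    ("Female", "Group X", 3.0), ("Male", "Group X", 4.0),
    ("Male", "Group X", 2.5), ("Male", "Group X", 2.0),
])
```

Because the reference group is the single highest-scoring combination, every other pair is compared against it directly, which is what makes the intersectional test stricter than two separate single-dimension tests.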

Governance · Audit finding: Pass

Who owns fairness at Eightfold AI.

Governance of bias and fairness risk at Eightfold AI is managed by a cross-functional Responsible AI working group. This team includes the Chief AI Compliance Officer alongside representatives from product, engineering, legal, and security, ensuring fairness considerations have direct influence over product decisions.

The governance structure passed all three audit criteria: the accountable party is identified, duties are clearly defined, and those duties were demonstrably carried out prior to the audit date.

Contact for bias audit inquiries: legal@eightfold.ai

Risk Assessment · Audit finding: Pass

How Eightfold AI identifies and monitors bias risk.

The audit reviewed the Eightfold AI internal risk assessment process, including the risk register, risk prioritization methodology, and evidence of ongoing monitoring. BABL AI reviewed screenshots from risk register dashboards, meeting minutes, and received verbal testimony from risk register maintainers.

The risk assessment covered risk identification, stakeholder impact, severity scoring, likelihood scoring, risk sources, and controls — all required dimensions under BABL AI’s Criterion Audit Framework, modeled after PCAOB Auditing Standard 1105.

Independent auditor

BABL AI Inc.

BABL AI Inc. is an independent AI auditing firm based in Iowa City, Iowa. Lead auditors are ForHumanity Certified Auditors under the NYC AEDT Bias Audit standard.

The BABL AI audit framework — the Criterion Audit Framework — is modeled after financial auditing practice and was published in the Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24).

BABL AI's independence conforms to the definitions codified in the Sarbanes-Oxley Act of 2002 and the ForHumanity Code of Ethics. Fees paid for the audit are fixed and unrelated to the opinion rendered. The opinion is grounded solely in the audit criteria.

Signed by Dinah Rabe, Lead Auditor, BABL AI Inc.  ·  March 26, 2026

Scope and limitations

What this audit covers — and what it does not.

This audit was designed to satisfy the requirements of New York City Local Law No. 144 of 2021. It does not certify that the Eightfold Matching Model is “bias-free” — no audit can make that claim — and it is not intended to demonstrate compliance with any legislation other than the NYC AEDT law.

Publishing this section is a deliberate choice. Transparency means being clear about what an audit covers, not only what it found.


Our commitment

This is not a one-time event.

NYC Local Law 144 requires annual bias audits. We conduct them because the law requires it — and because we believe responsible AI is an ongoing practice, not a moment in time.

Download full 2026 audit report (PDF) →

Questions about our methodology?

Contact our legal team for questions about our audit methodology, results, or responsible AI practices.

legal@eightfold.ai →

Curious about Responsible AI at Eightfold?

See how fairness and transparency shape every product decision we make — from model design to ongoing governance.

Read AI You Can Trust →

Learn more

Fairness is built into the model, not bolted on after.

See how the Eightfold Matching Model works — and how responsible AI thinking shapes every product decision we make.

Note that the criteria presented in this report were constructed specifically to address the requirements of a “bias audit” outlined in NYC Local Law No. 144 of 2021. The model was audited as though it were an automated employment decision tool (AEDT) under NYC Local Law No. 144 of 2021, but we do not make any determination whether the model is, in fact, an AEDT under this law. © 2026 Eightfold AI Inc. All rights reserved.
