As an HR professional, you likely chose this career because you deeply care about people, culture, and potential. The prospect of a massive and complex regulation like the EU AI Act, due to become applicable in August 2026 (or possibly later), might feel daunting at first glance.
But you don’t have to be a lawyer to understand that the EU AI Act actually codifies what good HR leaders have always valued: fairness, transparency, and meritocracy.
Behind the rules lies a simple intention: that technology should treat people fairly. And at a time when generative AI allows candidates to submit dozens of applications in minutes, artificial intelligence can also help organizations cut through that noise and identify the candidates with the right skills and potential.
The EU AI Act requires employers to use tools that were specifically designed and properly tested for this purpose.
HR departments are in a unique position to drive the transformation of their organizations through AI, lead reskilling and upskilling, and become the guardians of how AI impacts the workforce.
To fill this role, HR leaders must become AI literate. As an AI-native talent intelligence platform, we are here to be your partner on that journey.
Eightfold CEO and Co-Founder Ashutosh Garg discusses the importance of regulating AI.
What HR professionals need to know about the EU AI Act
The EU AI Act follows a risk-based approach: Many use cases, such as using generative AI to draft an email, require no action, while others – for example, a chatbot that provides employees or candidates with information about company benefits – carry new transparency obligations.
AI systems, like the Eightfold Agentic Talent Operating System, that are used in recruitment or employee management are considered to be “high-risk” under the EU AI Act. These AI systems are not prohibited from use, but they must comply with the provisions of the EU AI Act.
Just as medical devices or new drugs are carefully tested before they are used or prescribed to patients, the EU AI Act requires a similar process for high-risk AI systems. These requirements aim to ensure that the AI system works as intended and can be deployed safely.
As an HR leader, you do not have to memorize the full 144 pages of the EU AI Act. But it is helpful to understand the basic requirements of the law.
Transparency
Deployers of high-risk AI systems need to be able to understand the purpose, capabilities, and limitations of the system they are using. We provide this information for our customers (the “deployers”) in our “Instructions for Use.”
In addition, we provide transparency around our responsible AI practices and make our external bias audit results publicly available. This information, and more, is available in our Trust Center.
Human oversight
Human oversight to protect the safety and fundamental rights of the users of an AI system is an integral part of the EU AI Act. AI may provide insights and recommendations to support and assist recruiters and hiring managers, but a human makes the final decision.
Recruiters and hiring managers exercise human oversight by providing input to the model as part of the job calibration and by deciding whether to advance or reject a candidate.
Accuracy and fairness
The Eightfold Talent Intelligence framework encompasses an extensive database of entities, including skills, job titles, companies, schools, and career trajectories. Through the use of Natural Language Processing (NLP), Eightfold comprehensively understands over one million skills and titles, as well as over one billion career trajectories in multiple languages.
Related content: Learn more about our certifications and commitment to responsible AI.
When does the EU AI Act become applicable?
The obligations for high-risk AI systems were scheduled to become applicable on August 2, 2026. In November 2025, however, the European Commission proposed an amendment to the EU AI Act that would delay the application of these obligations until compliance guidelines are available, or at the latest until December 2, 2027.
The Commission’s proposal still has to be approved by the European Parliament and EU member states, which means it is currently unclear when the high-risk AI system obligations of the EU AI Act will become applicable.
We continue to prepare for compliance with these obligations and continue to invest in responsible AI.
How Eightfold prepares for the EU AI Act
Since our founding, we have been committed to responsible AI development. Our compliance with the EU AI Act and other applicable laws puts this commitment into practice.
Following the adoption of the AI Act, we established a dedicated team focused on producing the required documentation, such as the “Instructions for Use.” This document outlines the characteristics, capabilities, and limitations of our platform and complements the onboarding we provide to all customers.
An external auditor has independently reviewed these instructions and confirmed that they contain the required information elements in full. You can view both the “Instructions for Use” and the letter of assurance from our auditor in our Trust Center.
We also continue to externally audit our Match Score model for bias and algorithmic discrimination. This audit covers our governance structure, risk assessment, and our testing protocols. Our latest audit results are in our Trust Center.
Eightfold was one of the first organizations to achieve ISO 42001
In August 2025, our organization achieved ISO 42001 certification. As the first internationally certifiable standard for Artificial Intelligence Management Systems, ISO 42001 aligns closely with the AI governance requirements of the EU AI Act, showing that our internal management systems match the rigor of the law.
Much like the GDPR, compliance with the EU AI Act is a shared responsibility between developers, like Eightfold, and our customers, who deploy the system. Because you can customize how our platform is deployed within your organization, your specific use cases matter.
We encourage you to reach out to your legal department to support your own compliance strategy. To help facilitate that conversation, your account manager can provide guidelines on your obligations as a deployer, ensuring we work together to keep your organization compliant.
Leading the conversation
AI is fundamentally shaping how we work. As an HR professional, you are ideally placed to lead this transformation and support your organization in finding, hiring, and upskilling the talent of the future.
We’re here to help you become a champion of the core principles of transparency, fairness, and human oversight. Used responsibly, AI can broaden your talent pool, open new opportunities for existing employees, and strengthen equity in your hiring process.
We are committed to being by your side on this journey.
Learn more about our approach to responsible AI.