- Long-term stability in the use of AI in HR practices will eventually require federal AI laws and regulations.
- HR practitioners should ask AI vendors questions about their AI and seek clear, quantifiable answers before committing to a product.
- Coming soon: watch for regulations concerning algorithmic bias, human involvement in AI use, and responsibility for AI decision-making.
AI regulations in HR are evolving almost as quickly as the technology itself. In the absence of comprehensive federal legislation on the use of AI, state and local governments are filling the void, leaving HR leaders and organizations to sort through a patchwork of new and proposed laws as they chart a path toward compliant use of AI in HR.
It’s a complex landscape that changes daily, but it’s important to keep a pulse on the conversation if you’re using AI in recruiting or talent management.
Craig Leen, former Director of the Office of Federal Contract Compliance Programs and now a Partner with the law firm K&L Gates (full disclosure: Leen also sits on our Ethics Council), joined Ligia Zamora and Jason Cerrato on The New Talent Code to help you navigate these complexities responsibly.
In this episode, Leen shares why a federal approach to AI in HR regulation is needed, questions to ask when vetting AI vendors, and four regulatory areas to watch.
Why we need federal AI regulations in HR
Because AI is a rapidly evolving field, its long-term sustainability demands a unified, nationwide approach to regulation, according to Leen.
“There are areas where you do want different approaches, like in employment law,” Leen said. “Some states are going to provide additional protections to employees and others — that’s OK. But when you’re talking about AI that’s going to be used nationwide, and you have recruiting markets that go beyond individual states, it’s much better to have a federal approach.”
The Biden-Harris Administration’s executive order, issued in October 2023, was a significant step toward promoting AI’s safe, secure, and trustworthy use by the federal government. A comprehensive federal law on AI applicable to the private sector, however, remains in development; in fact, it has recently been reported that such a law is unlikely to appear this Congressional session.
Long-standing federal employment laws and regulations, many born out of the civil rights movements of the 20th century, including those enforced by the Equal Employment Opportunity Commission (EEOC) and the Office of Federal Contract Compliance Programs (OFCCP), already apply to the processes employers use to recruit, hire, and employ talent. Federal agencies continue to issue guidance on how these existing laws and regulations are being interpreted to address the use of AI in HR.
Cities and states are also taking action, creating and implementing new AI-specific laws. Leen says it’s best to understand the local and state laws that might affect your business. Some major laws in the United States include:
- New York City Local Law 144 (regulating Automated Employment Decision Tools)
- Colorado’s Artificial Intelligence Act
- AI Amendment to the Illinois Human Rights Act
Leen adds that the new AI laws are not aimed at restricting or stopping AI. Instead, they are designed to establish guardrails against potential algorithmic bias and adverse impact on protected classes.
“There’s been a lot of scrutiny in that area,” Leen said. “I think the top AI companies welcome that because they embrace having AI ethics councils and trying to make sure that they’re acting in a way that’s non-discriminatory and not with bias.”
We’ve also published our perspective on AI regulations and compliance here.
Related content: Read our guide on Responsible AI, which explains how our AI works to benefit employers and employees with a skills-based approach.
4 questions to ask AI vendors
When researching any AI vendor in HR tech, it’s important to ask questions about how they build their AI products and how those products work. Vendors building responsible AI should be able to provide definitive answers with data to back their claims.
“That doesn’t mean you need to know their code or their proprietary information, but they should be able to explain to you in a qualitative way [and] what factors are being considered,” Leen said. “It’s even better if they can actually show you and can even show you numerically the impacts, and how the different factors are impacting a score that someone gets.”
Leen recommends asking these questions:
- What factors do you consider when building your AI product?
- How does your AI product test for and prevent algorithmic bias?
- How does your AI mask identifying information?
- How are humans involved in the outcomes?
Leen says that in his experience asking these questions — especially those around testing for algorithmic bias — leads to better products that can create more equal opportunities for underrepresented groups, including women, BIPOC, and people with disabilities.
Asking these questions is also a great way to self-audit a recruiting process.
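As a concrete example, one simple self-audit is to compute each group’s selection rate and impact ratio, the four-fifths rule analysis underlying the EEOC’s Uniform Guidelines and the bias audits required by NYC Local Law 144. Below is a minimal sketch in Python; the column names, toy data, and helper function are illustrative assumptions, not a compliance standard or any vendor’s actual method.

```python
# Minimal adverse-impact self-audit sketch (illustrative only, not legal advice).
# Assumes a hiring log with one row per candidate, a demographic group column,
# and a 0/1 "selected" column; all names and numbers here are hypothetical.

import pandas as pd

def impact_ratios(df: pd.DataFrame,
                  group_col: str = "group",
                  selected_col: str = "selected") -> pd.DataFrame:
    """Compute each group's selection rate and its ratio to the highest rate."""
    rates = df.groupby(group_col)[selected_col].mean().rename("selection_rate")
    result = rates.to_frame()
    # Impact ratio: each group's rate divided by the most-selected group's rate.
    result["impact_ratio"] = result["selection_rate"] / result["selection_rate"].max()
    # The four-fifths rule flags ratios below 0.8 as potential adverse impact.
    result["flagged"] = result["impact_ratio"] < 0.8
    return result

# Toy data: group A is selected at 30%, group B at 18%.
candidates = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "selected": [1] * 30 + [0] * 70 + [1] * 18 + [0] * 82,
})
print(impact_ratios(candidates))
# Group B's impact ratio is 0.18 / 0.30 = 0.60, below 0.8, so it is flagged.
```

A vendor that genuinely tests for algorithmic bias should be able to walk you through an analysis like this on its own validation data.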
“This idea of having humans involved in AI — that’s really considered the gold standard,” Leen said. “You want both — people checking AI, but you want AI checking people.”
Read more about how to research AI providers in our latest e-book, The CHRO’s guide to responsible AI.
Related content: Learn more about navigating global policies and regulations for AI in this on-demand session from Cultivate ‘24.
4 governance gaps to watch
Leen says that the most significant gap in AI regulations is the lack of a federal standard in the United States right now. However, implementing universal guidelines will not be enough to effectively manage AI.
The second gap, he added, is regulation that goes beyond governing the person facilitating the process and looks directly at algorithmic bias.
“We want to be able to demonstrate in a repeatable experiment that the AI is helping identify better talent,” Leen said. “You want people thinking like that because that’s where the law is going to go eventually. Either enough states or the United States and the EU [are] going to adopt a law that’s going to require you to not have a certain amount of algorithmic bias. That’s going to require [organizations making] higher-risk decisions, like employment decisions, to be auditing [their] own AI.”
The final two gaps focus on employer and AI vendor responsibility, respectively. While the employer is ultimately responsible for hiring decisions, the role of an AI vendor is equally critical. How that breaks down in a legal setting could change and place more or equal responsibility on an AI vendor, highlighting these companies’ roles in the process.
“You want an AI company that cares about their reputation and works with a lot of other companies,” Leen said. “You want to use [someone] that complies with the law … will help you with auditing and will be there if you’re ever in an investigation.”
Leen hopes to see movement on a federal standard within the next few years to address these gaps and ensure the United States leads the way on AI governance.
“We need to be competing as a country with other countries and not falling behind,” he said. “We want to be ahead of everyone. I think it would help to pass a national AI law. Certainly from a business standpoint, we want to be on the forefront of AI.”
Listen to the full episode of The New Talent Code with Craig Leen on our website or wherever you listen to podcasts.