Deregulating AI in HR: Better outcomes, transparency, and compliance

Many companies and governing bodies are taking a critical look at the use of AI in the workplace — and rightly so.

But as Keith Sonderling, Commissioner of the U.S. Equal Employment Opportunity Commission (EEOC), explains, AI hasn’t changed longstanding civil rights laws like Title VII and the Americans with Disabilities Act (ADA). Rather, these existing laws can serve as guideposts for organizations to stay ahead of compliance while capturing AI’s benefits.

In his role at the EEOC, one of Sonderling’s highest priorities is to ensure that AI and other workplace technologies are designed and deployed to comply with civil rights laws. He has published extensively on, and speaks globally about, the benefits and potential harms of AI in HR.

Sonderling recently sat down with Eightfold AI’s Ligia Zamora and Jason Cerrato for a podcast interview about the benefits and potential perils of using AI in HR. The big takeaway? A deregulatory approach to AI in HR puts greater emphasis on outcomes. 

Read on for more insights into how AI can make HR processes more transparent and explainable. (Ed note: Quotes have been edited for clarity and length. For the full conversation, listen to Eightfold’s podcast, The New Talent Code).

A deregulatory approach to using AI in HR

Ligia Zamora: Tell us more about your focus on AI for HR as an EEOC commissioner.

Keith Sonderling: Our mission at the EEOC is to prevent and remedy unlawful employment discrimination and to advance equal opportunity for all in the workplace. The laws we enforce are not just for employees but for applicants, too, and protect against all the big ticket items: race, color, religion, sex, sexual orientation, pregnancy, national origin, age, disability, and genetic information.

Our laws apply to every type of work situation, including hiring and firing, but also promotions, training, wages, benefits, and the prevention of retaliation and harassment. Since joining the EEOC, I’ve prioritized addressing the use of AI in the workplace anywhere it may affect discrimination laws. 

I want this technology to flourish. I want it to take off. For companies to stay competitive in this marketplace, having and using AI is no longer a conversation. The question is how they will use that technology and for what purposes. 

Related: At Eightfold’s 2022 Cultivate conference, U.S. equal opportunity experts share what every HR leader must know about complying with federal law in this important area.

Jason Cerrato: How can companies build a framework for using AI and gaining its benefits while managing any negatives? 

K.S.: With New York City proposing AI audits, Europe proposing its Artificial Intelligence Act, and states like Illinois regulating facial recognition, employers are being pulled in all different directions. Many companies see new AI laws on the horizon and wonder how they can start complying with those future laws today.

I’ve been trying to change that narrative. These laws have been on the books since the 1960s. They apply equally to today’s technology and to software that hasn’t even been created yet.

If we talk about how the law should change related to AI, we’re getting distracted from the existing laws that already protect employees. That’s Title VII, the Americans with Disabilities Act, the Age Discrimination in Employment Act — you name it.

Whether it’s an HR supervisor making a discriminatory decision, an unchecked algorithm built on an incomplete data set, or an algorithm that lets an individual inject their own bias and scale it, the employer is liable. Just like in every other area of the law, employers have a duty to comply now. They don’t need to wait for major litigation.

Employers should ensure that their tools comply with longstanding civil rights laws now, without the distraction of potential new laws or waiting for the government to come tell them to do it.

J.C.: What advice can you give, or risks do you want to point out, about conducting AI audits, whether with a third party or as a self-audit?

K.S.: If the EEOC shows up to investigate your company, all we know is the results. We’re not computer scientists, right? So if the results show discrimination — intentionally or unintentionally — we can look at how we got there.

AI actually makes HR processes more transparent because AI records what that algorithm is looking for. We can also look at the data set itself and see if there was any discrimination. If there’s discrimination, companies should figure out how they got there and ask how they can fix it before ever making a decision that affects someone’s livelihood — whether that’s hiring, promotion, or termination.

Don’t wait for AI regulations to come. Just start looking at your processes today.

J.C.: Some of the benefits of using advanced technologies are also creating concerns, because the process has more visibility and transparency: you can track how the data weighed into a decision. Is that right?

K.S.: That’s absolutely true. AI can help HR teams in so many ways. It can help organizations take a skills-based approach to hiring and help employers see their employees’ real skills. But for each of the positives, there could be a potential negative if employers aren’t diligent.

This gets into another potential benefit that we didn’t have before and that people aren’t talking about: the explainability and transparency of the audit trail.

From an HR and legal perspective, you now have guardrails around this. You know who’s using the systems, and you can put certain restrictions in place within those systems. Like a bank, where not everyone has access to the vault, the same can be true when using these technologies. Right now, corporations don’t need the government to force them to create internal handbooks, establish their own internal best practices, or identify the specific individuals who have access.

The people who have access should be people in decision-making positions whom you trust — and who have been trained on bias laws and anti-discrimination. You trust these individuals won’t use these tools for the wrong purposes. And that is all internal governance. Corporations are very familiar with doing that in the labor and employment space. 

L.Z.: How can companies and people understand whether the AI tool will do harm or good?

K.S.: We must be very thoughtful when deciding what purposes we will use AI for at our companies because there are many avenues. Whether creating a job description, conducting interviews, using facial recognition, managing employees, doing performance reviews, or terminating an employee, there are a lot of different uses.

It’s hard to focus on one area because our laws apply to all of them, and employers look for guidance at every stage. 

If it’s facial recognition, it could be potential disability discrimination if you’re looking to judge somebody on how often they smile and the person can’t smile. Or it could be racial discrimination if the camera can’t see somebody with dark skin the same way it could see somebody with light skin. So for each use, I can tell you the potential benefits, but at the same time, I can also show you the potential perils of using them if some of these things aren’t controlled for beforehand.

L.Z.: So you’re telling them to look at the output. Look at how AI is going to enhance their processes. Because the laws have existed forever, and whether or not they use AI, they will have to continue to comply and show that they comply.

K.S.: From our perspective, we’re not in the business of telling employers what technology they should or should not use. If an employer wants to use a technology that discriminates, it’s a free country, and they can do that. But will there be consequences? Absolutely. Will you be breaking the law? Absolutely. Will the EEOC be involved? Absolutely. 

So it’s just bringing awareness to the issues we care about, ensuring no discrimination for each use. And the simplest way to do this is to ensure that whatever the algorithm is, whatever the program or decision, employers have to watch for two things: discriminatory uses and discriminatory outcomes, and that’s no different than how HR has been operating from a compliance perspective forever. 

AI can be a great asset in fighting discrimination. AI can tell us things we couldn’t see without it. It can show who has the best qualifications for a job and surface individuals who typically wouldn’t be selected, or even considered. It can find patterns in résumés or performance reviews that help reduce discrimination and assist human decision-making.

Ready to dig deeper into Sonderling’s approach to AI in the workplace? Listen to the complete episode of The New Talent Code wherever you listen to podcasts.
