Resources

Podcast

Deregulating AI in HR: Better outcomes, transparency, and compliance

Business and HR leaders curious about ethical applications of AI in HR won’t want to miss this episode of The New Talent Code

Eightfold welcomes Keith Sonderling, Commissioner of the Equal Employment Opportunity Commission (EEOC), to discuss his co-authored proposal, recently published in the University of Miami Law Review, arguing for a deregulatory approach to AI.

In this episode, Sonderling shares that the ultimate mission of the EEOC is to prevent and remedy unlawful employment discrimination and to advance equal opportunity for all in the workplace. That’s why ensuring AI and other workplace technologies are designed and deployed to comply with existing civil rights law is a top priority. 

Sonderling says that it’s no longer a choice for companies to use HR technology in operations, given the modern labor market demands and HR trends. Instead of asking if they should use AI in HR, talent leaders should ask themselves how they plan to use the technology and how longstanding civil rights laws are integrated into its adoption.

In fact, AI can help organizations become more compliant and transparent. Here’s how:

  • If the EEOC is investigating a workplace for discrimination, AI can make the process easier, faster, and more transparent by generating a record that might otherwise be difficult or impossible to produce. 
  • Determining whether AI further contributes to discrimination requires organizations to consider the purpose for which they will use it.
  • Regardless of the use of AI, employers should always look for discriminatory uses and outcomes. 

Ethical Applications of AI in HR

Ligia:

Welcome to The New Talent Code, a podcast with practical insights dedicated to empowering change agents in HR to push the envelope in their talent functions. We’re your hosts. I’m Ligia Zamora.

Jason:

And I’m Jason Serrato. We’re bringing you the best thought leaders in the talent space to share stories about how they are designing the workforce of the future, transforming processes, rethinking old constructs, and leveraging cutting edge technology to solve today’s pressing talent issues. It’s what we call the new talent code.

Ligia:

So if you’re looking for practical, actionable advice to get your workforce future ready, you’ve come to the right place.

Ligia:

Welcome to The New Talent Code. It’s been a minute, we know, but we’re excited to be back. Jason and I had to take a brief hiatus, you know, with all the industry events that happen in the fall, traveling across the world: HR Tech, UNLEASH Paris, Gartner, you name it <laugh>. We were there, and boy, were we tired.

Jason:

Yeah, it’s super great to be back. I’ve missed these conversations. We’ve actually just wrapped up an interview with our next guest, Commissioner Keith Sonderling from the EEOC, and it was another great conversation. You know, the EEOC is the Equal Employment Opportunity Commission, a federal agency established via the Civil Rights Act of 1964 to administer and enforce civil rights laws against workplace discrimination. Super important stuff. He definitely has a lot of passion, and what a great discussion we had. One of Commissioner Sonderling’s highest priorities is ensuring that AI and other workplace technologies are designed and deployed to comply with these civil rights laws. And he recently published a proposal in the University of Miami Law Review arguing for a deregulatory approach to AI in HR, which is the topic of the discussion we had with him on today’s podcast.

Ligia:

Yeah, yeah. Full admission: I actually did download this paper <laugh>, and I think anyone in HR, seriously, or in business, or managers for that matter, could really benefit from listening to the podcast. We sort of did the CliffsNotes <laugh>. I mean, you’re welcome to read all 87 pages because I did, but you can thank us later, because we definitely broke it down into some bite-sized chunks for this podcast. I love how friendly he is, how he breaks it down. You know, he’s a lawyer, but he makes it so easy to understand. The lightbulb went on for me when he said, look, these discrimination laws have existed forever. Companies have been complying with them forever, or they should be. You know, he’s like, it’s a free country, but everyone’s been complying, and it’s not up to the government to tell you whether you use AI, don’t use AI, any technology, or your own process. At the end of the day, it’s the results that matter. That’s what actually gets audited, you know?

Jason:

That was my big aha moment, that he’s focusing on the results based on these laws that have been in existence for a long period of time. And maybe some of the concern is coming from the fact that these new technologies make the process more transparent and more visible. So now you have to be more accountable for it, because it’s not just inside someone’s head, right? You can actually see how a decision gets formulated and made, and now you have to be accountable for the results and how your outcomes play out in your organization and in your world. So I definitely appreciate his passion, and, as you said, his clarity made it very easy to understand. He also talked about how you shouldn’t get distracted chasing shiny objects or the latest thing you hear. It really does come down to the outcome and how you got there.

Ligia:

Yeah. Yeah. Anyway, that’s enough from us. Enjoy the interview with Commissioner Keith Sonderling from the EEOC. Commissioner Sonderling, welcome to the show. Welcome to the podcast.

Keith:

Thank you for having me.

Ligia:

No, absolutely. Jason, say hi.

Jason:

Hello. So excited to talk with Keith.

 

The AI in HR Landscape 

Ligia:

We know you’re a busy guy, so we’re going to go ahead and jump right in. For the next 30 minutes we’ll dig into your thoughts on this deregulatory approach to AI in HR and, you know, hopefully help our listeners understand AI a little better and how it should be used for interviewing, hiring, upskilling, promoting, et cetera. Maybe first let’s orient the audience. Before we jump into your recently published opinion, tell us a little bit more about why the focus on AI in HR, particularly as part of your role as an EEOC Commissioner.

Keith:

Yeah. Let’s first take a step back: what is the EEOC? There are so many federal agencies out there, and they all have different acronyms. The EEOC is the Equal Employment Opportunity Commission, and our mission is to prevent and remedy unlawful employment discrimination and also to advance equal opportunity for all in the workplace. The laws we enforce, which are very relevant in this conversation, are not just for employees; they’re for applicants as well. And they protect against discrimination on all the big-ticket items: race, color, religion, sex, sexual orientation, pregnancy, national origin, age, disability, and genetic information. A lot of times people just think, oh, it’s just hiring and firing. But our laws apply to every type of work situation, including the big ones, hiring and firing, but also promotions, training, wages, and benefits, and they also prevent retaliation and harassment.

So it’s really all-encompassing when you’re talking about entering the workplace and being in the workplace. We have a lot to do. We have the #MeToo movement, pay discrimination, everything related to COVID and accommodations, disability discrimination, pregnancy discrimination, you name it. For me, though, since joining the EEOC, I’ve really made it my priority to address the use of artificial intelligence in the workplace. Not just hiring, not just promoting, but literally everywhere technology is affecting the workforce and how it intersects with discrimination laws. And I’ve been making a lot of noise about it for good reason. The reason I’m talking about artificial intelligence, the reason the EEOC is looking at it, and the reason you’re seeing a lot of different government agencies look at it, specifically in human resources, is this:

It’s already been involved in the decision-making process for employees, across their entire life cycle, for years. The technology has been out there, and it’s being used by large companies and smaller companies on a mass scale, really with no guidance, no best practices, and no general awareness of the potential legal ramifications if you’re using the software for the wrong reason. So far there haven’t been any of those big litigations or huge federal government investigations, which I’m trying to avoid <laugh> by raising awareness. I don’t want to see best practices or guidelines for employers, or employers getting serious about compliance in this area, come about through litigation or federal enforcement. I want this technology to flourish. I want it to take off, because for companies to stay competitive in this marketplace, it’s no longer a conversation of whether to use HR technology.

It’s about how we are going to use that technology, for what purpose we are going to use it, and how we use it in line with the longstanding civil rights laws that we’re all subject to. And the reason I’m diving into AI in the workplace even further is because, as we all know, companies are using artificial intelligence in every area of their business. But when it comes to using it in HR, it’s different from those other uses of AI, because using AI in HR deals with some of the most fundamental civil rights we have: the ability to enter the workforce and thrive in the workforce free of discrimination.

Jason:

Now, Keith, you’ve talked about all the different areas where AI is being utilized and all the different areas of the employee life cycle that the EEOC covers. Are there specific areas that are drawing the most attention, or maybe creating the most confusion, for how AI is coming into this conversation?

 

Regulating AI in HR 

Keith:

You know, that’s a great question, and from my perspective, we can’t favor one use over another. We can’t really home in on one specific use over another, because our laws apply to every single use of artificial intelligence. So whether it’s AI that writes job descriptions or screens resumes, AI that chats with applicants or conducts the job interview itself, or some of the software out there that predicts salary or tracks productivity, it all matters to us, because it all implicates the laws we enforce. Now, to your question, are there some specific areas that may be on our radar more? Well, the EEOC put out some guidance on this topic last May about workers with disabilities and how they’re going to interact with the use of HR technology and artificial intelligence, and we put out some practices and guidelines about how the Americans with Disabilities Act applies to some of this software.

And a lot of it, too, is, look, there are a lot of benefits we can talk about, about how workers with disabilities can really flourish in the workplace when assisted by AI technology. But at the same time, that technology cannot screen them out of employment. And if they aren’t able to use those technologies, say an interview program or an employee assessment program, they’re going to have to do it with some sort of accommodation. So that is really important, and it’s what we’ve been focusing on first. We thought that was important because, outside of retaliation, which is our number-one claim of discrimination year over year at the EEOC, the second-highest claim is disability discrimination. So bringing awareness to the fact that workers or applicants with disabilities are going to be impacted by this technology, just as they are in other areas, was really important for us to do first.

Jason:

Now, some of the tools on the market are actually trying to address those concerns, help overcome some of those discriminatory practices, and increase opportunity for every individual, including people with disabilities. Specifically, how do you go about building a framework for approaching AI, getting all the benefits while managing all the concerns? I know you’ve written about this in your recent paper. What’s the kind of framework or approach that you’re recommending?

Keith:

Yeah. So for me, in this discussion about how we regulate AI, how companies comply with laws related to AI, and what the future of AI laws may be, there’s a lot of distraction out there. With what we’re seeing, for instance, in New York City with their proposal about auditing AI, what we’re seeing in Europe with the proposed Artificial Intelligence Act, what we’ve seen in some states like Illinois about facial recognition, or some proposals in California, employers are really being pulled in all different directions: AI laws are coming, how do we start complying now? And I’ve been trying to change that narrative by saying, you know, there have been laws on the books since the 1960s, and these laws apply to the technology being used by employers today through artificial intelligence, and to software that hasn’t even been created yet through neural networks or any of those fancy new terms out there, just as they apply to employers making decisions with humans, or making decisions about an employee with pencil and paper.

So the process has been the same since the 1960s. The laws are out there. And if we get distracted talking about how the law should change for artificial intelligence, new laws for artificial intelligence, new agencies for artificial intelligence, we’re being distracted from the laws that exist right now, that the EEOC enforces today, that protect employees. I’ve been trying to bring the conversation back by saying, look, there are laws the EEOC enforces, whether it’s Title VII, the Americans with Disabilities Act, the Age Discrimination in Employment Act, pregnancy discrimination, you name it, and as we just went over, they all apply to these different uses of AI. So companies have a duty to comply with the law now, as simple as that sounds. But a lot of that has been lost. That’s why I’ve been talking, especially in my new paper, about a more deregulatory approach, or a self-regulatory approach, for companies, because they are going to be the ones who are ultimately liable for any decisions any kind of tool makes.

Whether it’s a human HR supervisor making a discriminatory decision, or an unchecked algorithm based on a data set that’s not complete, or an algorithm that allows an individual to put their own bias in there and scale it, the employer is liable for those decisions. So just like in all other areas of the law, employers have a duty to comply with the law now, and they can be doing that right now. They don’t need to wait for enforcement, they don’t need to wait for large litigations. They can be making sure that the tools they have now are compliant with long-standing civil rights laws themselves, without the distraction of potential new laws or waiting for the government to come and tell them how to do it. That’s what I’m trying to raise awareness of, because that’s how employers can feel more comfortable using these programs: they’re already familiar with how to do audits of their company and audits of their business. So let’s just start doing that when it comes to using artificial intelligence, without anyone pushing you or forcing you to.

Jason:

You mentioned the topic of doing audits, audits of your company, audits of your business. I think that’s come up quite a bit, especially with, as you mentioned, the different jurisdictions that are coming up with regulations. For the folks who are listening, what are some of the things you’re seeing, and maybe advice or risks, around that auditing process, whether it’s with a third party or a self-audit? How does someone focus on the reality and avoid the distractions?

 

Transparency of AI in HR, and possibility of internal regulation

Keith:

For now, for employers who are using artificial intelligence in the workplace, what is the EEOC going to look at when we show up in an investigation? What do we know best? We’re not computer scientists; we don’t know what algorithms look like. What we do know is results. We look at the results, and if the results show discrimination, that a certain protected class was discriminated against, whether intentionally or unintentionally, then we’re going to backtrack on how we got there. And with artificial intelligence, that’s actually more transparent than in other areas. Think about what we’re dealing with now: somebody’s bias. And where is that? It’s in their brain, and how do we get into their brain? Pretty difficult, right? But AI can actually make that much more transparent, because there’s a record of what the algorithm was looking for, or of the dataset. If the dataset, let’s just say, only included men under 40, then we could say, well, the problem was there, as opposed to the algorithm only looking for characteristics you’re not allowed to make an employment decision on.

So in a way, being able to audit your artificial intelligence means looking at the results of what it predicted and making sure there is still some human intervention there, so that if you see discrimination you can ask, how did we get there, and how can we fix it, before ever making a decision on someone’s livelihood?
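The kind of record Sonderling describes can start as something as simple as a tally of what a training or applicant dataset actually contained. The sketch below, which assumes hypothetical field names, made-up records, and an arbitrary skew threshold, summarizes a dataset by a protected attribute so a human reviewer can spot an "only men under 40" situation before any decision is made. It is a minimal illustration, not an EEOC method or a complete audit.

```python
from collections import Counter

# Hypothetical sketch: summarize what a training or applicant dataset actually
# contained, so a human reviewer can spot skew (for example, "only men under 40")
# before any employment decision is made. Field names, records, and the skew
# threshold are illustrative assumptions, not an EEOC-prescribed method.

def composition_report(records, attribute, dominance_threshold=0.9):
    """Share of records per value of `attribute`, flagging heavy skew."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    shares = {value: count / total for value, count in counts.items()}
    dominant_value, dominant_share = max(shares.items(), key=lambda kv: kv[1])
    return {
        "shares": {value: round(share, 3) for value, share in shares.items()},
        "dominant": dominant_value,
        "flag": dominant_share >= dominance_threshold,  # True when one value dominates
    }

if __name__ == "__main__":
    dataset = [
        {"sex": "male", "age_band": "under_40"},
        {"sex": "male", "age_band": "under_40"},
        {"sex": "male", "age_band": "40_plus"},
    ]
    print(composition_report(dataset, "sex"))       # all one value -> flagged
    print(composition_report(dataset, "age_band"))  # mixed -> not flagged
```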

Ligia:

But that in essence requires transparency at the point of decision-making, right? Because, and I’m going to say this in an awful way, to avoid any sort of litigation or questioning of a decision, when we’re talking about auditing, it’s more than auditing. It’s almost like transparency for the candidate or the employee on how the decision was made, and providing insight, correct?

Keith:

Well, right now I’m talking about what companies can be doing themselves internally, before even dealing with third parties. And, you know, presumably the employer who is making that decision has created the job description and is looking for certain skills within the applicant pool they have, or within the potential applicant pool they have through artificial intelligence. If they’re not getting results, or they’re getting discriminatory results, there are a lot of different ways to go back and see why. That is information that’s available, and that is information artificial intelligence can help with. Because you can say, well, here’s what we had in the job description, and the job description gave us these candidates, and the candidates happened to all be of one race, one national origin, one sex, whatever, however we got there, whatever the potential problem was. What did we have in that job description that potentially produced those results?

Or the job description was perfect, but the candidate pool we got only included people with those certain characteristics. Or did you have the most diverse candidate pool, but then something happened when it went through the algorithm: somebody who maybe shouldn’t have had access put their own bias in there, or put in filters that were discriminatory? So there are so many ways to potentially figure it out that employers can do themselves, with due diligence, before ever actually making a decision on someone’s livelihood, on their ability to enter the workforce or to get a promotion. Those tools exist now, and employers are using them. Just take that step internally, without anyone <laugh> essentially showing up at your door to demand that you do it, telling you how, or forcing you to look more expansively.

And that’s the point I’m trying to make. Instead of relying on regulation that, as we’ve seen in other areas, comes after companies have done something wrong, but made a lot of money doing it, you’ve seen that in the financial industry and other industries, that’s what I’m trying to avoid here. Because, like I said earlier, companies, especially in this labor market, want to hire. A lot of larger companies need to hire a lot of people very quickly, and it’s not possible for them to do that at this point without the assistance of some sort of technology.

Jason:

As I’m hearing you explain this, let me know if I’m understanding it correctly. It’s almost as if some of the benefits of using these advanced technologies are what’s creating some of the concerns, because the process actually has more visibility and, like you said, transparency. You can actually track how the data weighed into a decision, and it’s not entirely in someone’s head.

Keith:

That’s absolutely true. And that’s really something a lot of people aren’t talking about. You see the positive benefits of AI all over the news, but then you’re also hearing so much about the potential negatives. And for each positive there could be a potential negative if employers aren’t diligent with all of this. But from an HR and legal perspective, now you have guardrails around this, now you know who’s using the systems, and now you can have certain restrictions in place in these systems, like you do in other systems, like in a bank. Not everyone has access to the keys to the vault where the money is, right? It can be the same thing when using these sorts of technologies. Corporations right now don’t need enforcement to have their own internal handbooks, their own internal best practices, or to identify certain individuals who have access to those tools.

And those are individuals in decision-making positions whom you trust, who have been trained on bias and anti-discrimination laws, and who you know won’t use these tools for the wrong purposes. That is all internal governance, and corporations are very familiar with doing that in the labor and employment space. We have handbooks, we have HR departments, we have a lot of different ways for companies right now to make employment decisions, whether it’s promotions, transfers, or even terminations, or reporting discrimination. There are significant structures in place in companies. Everyone listening probably remembers their first day at work, when they had to sign those thick handbooks, or now it’s probably all digital and you attest that you’ve read them, or you watched training and yearly anti-harassment videos. Why are companies doing that? Because they have a system in place to prevent discrimination. And that’s what I’m arguing for with artificial intelligence here, because it has a lot of great potential, and at the same time it could potentially cause harm, like anything else. So how do we mitigate that? That’s what companies need to be doing.

 

Outcomes-based self-auditing of AI in HR to avoid litigation 

Ligia:

So how do we help people, or the industry at large, understand or think about whether AI is going to do harm and contribute to discrimination versus helping?

Keith:

How we do that now, in using these tools, starts with the very thoughtful and careful decision of what AI I need for my company and what purpose I am going to use it for. Because there are so many different avenues for using AI, whether it’s creating a job description, conducting the interviews, facial recognition, which has been widely talked about and criticized in this space, or even using AI to manage employees, to do performance reviews, or to terminate an employee. There are a lot of different uses, from something as simple as a chatbot for getting an HR form all the way through performance reviews. So it’s a hard question to answer broadly, but you can see, for businesses that are going to use the software, all the different areas they’re going to use it in.

And that’s why it’s so hard to focus on one area: our laws apply to all of them. Employers are looking for guidance at every single stage. That’s where I come in and remind everyone that, depending on the use, different things are implicated. So if it’s facial recognition, it could be potential disability discrimination if you’re judging somebody on how often they smile and the person can’t smile. Or it could be racial discrimination if the camera can’t see somebody with dark skin the way it can see light skin. So for each use, I can tell you the potential benefits, but I can also show the potential perils if some of these things aren’t thought about beforehand.

Jason:

I loved how you said that on your end of the process you remain focused on the outcomes, guiding organizations to look at the outcomes and pay attention to the laws that already exist and how these new tools play into driving those outcomes against those laws. From the other side of the coin, is there anything you see, or any advice you have, for organizations evaluating the inputs: the data that’s used, how the technology works, or the process or policy that flows into these kinds of practices?

Keith:

Well, on the inputs, from a recruiting perspective and a talent-sourcing perspective, again, it’s very similar to everything else, because AI doesn’t have any intentions of its own, right? It’s just a function of the data that’s fed to it. So this really gets outside the whole technology space in a way: what is going into it? Employers have been dealing with that for a long time, since these laws were put in place. Where are we recruiting? Are we doing things to make sure that our dataset, which before technology was involved was really our applicant pool, is diverse and representative of the area we’re recruiting from? And again, technology can now assist with that, because technology can look through, like I said before, some of those job descriptions and tell you which lines may no longer be necessary and have historically prevented people from certain backgrounds from entering the workforce, or flag descriptions or job advertisements that are discouraging.

There are a lot of studies on gender, especially, and on male and female willingness to apply, so AI can really help there. But a lot of it, too, on the inputs is not that much different from not using technology. Are we going to the right places? Do we have a diverse applicant pool, so we can then make a decision based upon the skills and the job requirements?

Ligia:

So what advice would you give, then, to HR practitioners who are evaluating AI technologies for any of these employee or workforce-related processes? You’re basically telling them: show me the output, show me how you’re going to enhance my process. Because the truth is the laws have existed forever; whether you use AI, your existing processes, or some technology in the future, you’re going to have to continue to comply and to show that you comply.

Keith:

You know, from our perspective, we’re not in the business, or at least I’m not in the business, of telling employers what technology they should or should not use. If an employer wants to use a technology that discriminates, it’s a free country and they can do that. But will there be consequences? Absolutely. Will you be breaking the law? Absolutely. Will the EEOC be involved? Absolutely. So from my perspective, it’s about raising awareness of the potential issues from the standpoint of what we care about, which is ensuring there’s no discrimination in each of those uses. And the simplest way, again, which is what you alluded to, is that whatever the algorithm, whatever the program, whatever the decision, employers have to watch for two things: discriminatory uses and discriminatory outcomes. And that’s no different from how HR has been operating from a compliance perspective forever.

A discriminatory use is using AI to discriminate, using AI to scale one individual’s bias to a degree larger than we’ve seen before. Because think about it: if you have a biased person in talent acquisition, a biased manager, a biased hiring professional, how long does it take that person, before AI, to manually go through each resume and say, I don’t want to hire this person because they’re of this national origin, this race, et cetera? But now, using AI, you can scale that pretty quickly. So that would be an example of a discriminatory use to prevent, and we talked about some of the ways to prevent that. A discriminatory outcome is when you’re using AI with what you believe are neutral characteristics and it discriminates, and you cannot show a business necessity for that, which is hard. So it’s the uses and the outcomes, but liability for employers is going to stay the same whether you intend to discriminate or not, because deciding to entrust an algorithm with people’s livelihoods is a complex topic and an important matter.

So basically, those two points require human intervention to make sure the ultimate decisions being made are actually based upon legal and proper characteristics within the job descriptions and within what the actual job requires. And AI can assist greatly with that. AI can tell us humans things our brains weren’t capable of imagining about what the best qualifications for a job actually are, and even what makes the best employees, and it can find individuals who normally wouldn’t be selected for the job, or even considered for it, by finding patterns in their resumes or patterns in their performance reviews that actually help us get there.

Jason:

Now, Keith, you mentioned New York City, you mentioned the state of Illinois, you mentioned some of the things happening in Europe. As a result, there are a lot of different industry organizations and consortiums starting up, along with third-party auditing firms, and HR practitioners are trying to look to a variety of places to learn and research and gather information on this topic. Do you have any advice or suggestions on good sources of research, or resources people can seek out to learn from, rather than trying to react to everything they’re seeing?

Keith:

Yeah, well, obviously as the federal government we don’t endorse anyone. We just say: comply with the law. But part of where I think this should go, with self-regulation and, you know, less regulation here, and just sticking with the laws that have worked for a long time, is that companies have been auditing other areas of their employment practices for a very long time. My plea is just to include artificial intelligence in that, because it’s the same sort of decisions and the same sort of practices you were doing before, now just with computers. So it just needs to become part of the regular routine, however that’s done, whether it’s internal, whether it’s through third parties, or whether it’s just looking at the EEOC’s guidance. We really have a host of guidance.

We have some guidance specific to artificial intelligence when it comes to the Americans with Disabilities Act, but also guidance on recruiting policies, promotion policies, pay policies. We have endless resources on our website with promising practices that tell you how to get it right and what we are looking for. And if an EEOC investigator shows up questioning your practices, you can show that you really tried, that you looked at what the EEOC told you to do here, guidance that has existed for a long time. Like I said, AI is just doing things HR professionals were doing before; it’s making decisions that have been made for a very long time. So we have guidance on each of those. It may not specifically say artificial intelligence, like the disability guidance does, but it’s equally applicable. So use us as a resource at the EEOC, even for some of the more complicated testing analysis of how to actually perform an audit. We have guidelines from 1978 on our website, with frequently asked questions, on how to do those traditional employment assessment audits as well. It’s all there. It’s a free resource for employers, and for employees to know what their rights are as well. So use us as a resource, because we’re free.
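For a rough sense of the arithmetic behind those traditional assessment audits, here is a minimal, hypothetical sketch of the adverse-impact comparison commonly associated with the 1978 Uniform Guidelines, often called the four-fifths rule: compare each group’s selection rate to the highest group’s rate and flag anything below 80 percent. The numbers and names are made up for illustration; this is a sketch of one calculation, not an EEOC tool, and a real audit involves much more.

```python
# Minimal, hypothetical sketch of the "four-fifths" (80%) adverse-impact
# comparison commonly associated with the EEOC's 1978 Uniform Guidelines.
# Numbers and names are illustrative; this is not an EEOC tool or legal advice.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, applicants); returns group -> rate."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}

def four_fifths_check(outcomes):
    """Flag any group whose selection rate falls below 80% of the highest rate."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return {
        group: {
            "rate": round(rate, 3),
            "ratio_to_highest": round(rate / highest, 3),
            "flag": rate / highest < 0.8,
        }
        for group, rate in rates.items()
    }

if __name__ == "__main__":
    # Hypothetical applicant pool broken out by one protected characteristic.
    hiring_outcomes = {"group_a": (48, 120), "group_b": (30, 100)}
    for group, result in four_fifths_check(hiring_outcomes).items():
        print(group, result)  # group_b's rate (0.30) is 75% of 0.40 -> flagged
```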

Ligia:

I have to ask, how did you become so knowledgeable about AI? I can definitely sense the passion in your voice.

Keith:

A lot of reading. Now, if you asked me to design AI or write an algorithm, I’m not that smart <laugh>. But I really felt it was part of my job, preventing discrimination and providing equal employment opportunity for all workers in the United States, to understand the issues facing them. And the fact is that most workers, now or in the future, are going to deal with this technology in the workplace. So I think it’s incumbent on me to be up to speed on that, to learn it. I’ve studied a lot, I’ve read a lot about it. Maybe I’m being subjected to artificial intelligence myself in this interview, who knows <laugh>. But it’s so important, so critical, because like I said at the outset, personally, as a labor and employment lawyer and as an EEOC Commissioner, anything that can help us reduce bias and provide that equal employment opportunity for workers is a good thing, and we should be behind it. Because if bias is eliminated one day, then I’m out of business here <laugh> and nobody’s being discriminated against, which is a

Ligia:

Good thing. Which is your ultimate goal, yeah.

Keith:

Right. But yeah, in all seriousness, anything that can help us with our mission, and artificial intelligence used properly, programs that are carefully designed and properly used, can only help our mission here at the EEOC. That’s why I’m talking a lot about it: it’s out there, and we need to ensure it’s used properly, in accordance with our long-standing civil rights laws. Because as AI becomes mainstream technology in the workplace, discrimination by algorithm can’t. That’s where my mindset is and why it was so important for me to get involved in this.

Jason:

We appreciate your work and the way you’re thinking about it, and it was wonderful to hear your thoughts. I know we also want to go through a couple of the standing questions we have on our podcast, so I’ll hand it over to Ligia. We’re thinking of some questions we want to ask to maybe get to know you a little bit better.

Ligia:

This has been absolutely enlightening, but I’m curious: if you had never gone into law, and I’m wondering if you’re now going to say data scientist <laugh>, what other kind of work would you have pursued? What other passions did you have? And if somebody had believed in the potential of a younger Keith Sonderling, where would you be today?

Keith:

I would’ve stuck with what my major was in college, which was TV and radio <laugh>. I don’t want to say it’s my dream job, but certainly, if I weren’t doing this, I would like to do a 4:00 a.m. morning show on TV for local news, you know, waking everyone up in the morning.

Jason:

Morning drives. Yeah, that’s great.

Keith:

As a morning person, that is something I actually would like to

Ligia:

Do. It motivates you, yeah. Well, you’re welcome to come back on the podcast; you’re quite good at this. How did you start out in this career? Where did you start in law?

Keith:

I started as a summer associate at a full-service law firm in Florida. I tried all the different areas, and labor and employment stuck, mainly because the labor and employment partners there, who are still my mentors, were people I got along with really well. I saw the passion in the work they were doing and wanted to be a part of that team. So that’s how I got into labor and employment law.

Ligia:

Awesome. Best career advice you’ve ever had?

Keith:

There’s only one piece of career advice, which is just to work very hard. I give that advice.

Ligia:

Excellent. Well, Keith, thank you so much. This has been enlightening.

Ligia:

Thanks for listening to The New Talent Code, a podcast produced by Eightfold AI. If you’d like to learn more about us, please visit us at eightfold.ai, and you can find us on all your favorite social media sites. We’d love to connect and continue the conversation.

Related Resources

    Eightfold AI’s CHRO: How to step into the boardroom

    Darren Burton, former CHRO of KPMG and current Chief People Officer of Eightfold AI, talks about why HR data needs to be a driver in strategic workforce planning.

    View Podcast

    How to build an agile workforce with a skills-driven approach

    HR teams need a better way to engage and train their employees while discovering new talent with the right skills for today and tomorrow.

    View Resource

    Why an Elastic Workforce May Be the Answer to Your Talent Challenges

    Talent leaders from across the organization need to consider their contingent workforce to quickly scale up or down with sudden changes.

    View Webinar