Stay ahead of the curve — the latest on AI regulations in HR

Cities and states are moving ahead with AI governance while we wait for federal guidance in the United States. Learn what to watch for in this space as AI regulations in HR develop.


Craig Leen, former Director of the Office of Federal Contract Compliance Programs and now a Partner at K&L Gates, joins The New Talent Code podcast to talk about AI and regulations in HR. Leen says that if we want to ensure the long-term, stable, responsible, and ethical use of AI, we will eventually need federal guidance.

While HR leaders wait for federal laws, several cities and states, including Colorado, California, and New York City, have already taken proactive steps to enact laws governing AI.

Listen to our episode with Leen to hear more about:

  • Why federal regulations could be critical to the successful deployment of AI in HR.
  • The questions you should be asking AI vendors.
  • How governance could change responsibilities for employers and AI vendors.

New Talent Code

[00:00:00] Ligia: Welcome to The New Talent Code, a podcast with practical insights dedicated to empowering change agents in HR to push the envelope in their talent functions. We're your hosts. I'm Ligia Zamora.

[00:00:19] Jason Cerrato: And I’m Jason Cerrato. We’re bringing you the best thought leaders in the talent space to share stories about how they are designing the workforce of the future, transforming processes, rethinking old constructs, and leveraging cutting-edge technology to solve today’s pressing talent issues.

[00:00:34] It’s what we call the New Talent Code. 

[00:00:38] Ligia: So if you're looking for practical, actionable advice to get your workforce future-ready, you've come to the right place.

[00:00:48] Regulations are top of mind for most HR pros exploring how AI fits into their business strategy. Today's guest is here to answer many of the top questions about AI and HR. As former director of the Office of Federal Contract Compliance Programs, and now a partner at K&L Gates, Craig Leen has been watching the use of AI in HR for a while.

[00:01:10] In our conversation, Craig shares why a federal approach to regulations is needed and beneficial, as opposed to a piecemeal approach by city or state. He also shares the top questions you should ask when vetting AI providers, including what your legal team wants you to consider when getting started with AI.

[00:01:29] There’s a lot to cover with Craig, AI, and the law, so let’s get started. Great to have you back, Craig. Welcome. 

[00:01:37] Craig Leen: Thank you. I’m so happy to be here. 

[00:01:39] Ligia: This is just such a great topic. It keeps coming up in conversations, and I have a ton of questions for you. But before we get started: many of our listeners know that we are fascinated by nonlinear career paths.

[00:01:51] We've actually uncovered some interesting things people wanted to be when they grew up. And since we're firm believers that people should be hired for potential and not for credentials, tell us a little bit more about how you got started. Did five-year-old Craig always want to be a lawyer, particularly a director of the OFCCP?

[00:02:10] Craig Leen: Five-year-old Craig wanted to be either president, pope, or general. That's what Craig wanted to be. I had big, big dreams. I still probably would love to be president one day, you never know, but yeah, being a lawyer was part of that, I'd say. My dad is an attorney. He's retired now, but at the time he was a criminal defense attorney, and before that he had been a prosecutor, and I was interested in that. And I knew that many of the presidents were lawyers. I've always been a student of the presidency; I've read almost every presidential biography. There's even a presidential library in Coral Gables, because they knew how much I love the presidency.

[00:02:49] So even though I haven't been president and probably won't be, they did create the Craig E. Leen presidential library, which is full of presidential books that I provided, and some portraits and things like that, of Lincoln and the Roosevelts and others. So I'm very proud of it.

[00:03:06] It's still there. I always go and check on it to make sure that library is still there; it was the library I worked in quite a bit as city attorney. So I was always interested in being an attorney. I went to college at Georgetown. President Clinton was in office at that time, and he had gone to Georgetown.

[00:03:23] Of course, he was president, so I was interested in that. I went to Georgetown, came to the capital city, and was an intern for a congresswoman, my congresswoman at the time; I was living in Washington State, in Bellevue in the Seattle area. I also interned for the White House.

[00:03:40] I loved being at Georgetown. Then I went to Columbia Law, and after Columbia Law I went into private practice. It was only after about five years that I really pursued my dream of going into government. At that point, I became an assistant county attorney; I was the head of federal litigation for Miami-Dade County.

[00:04:00] Then I moved over to more of an advice-and-counsel, general counsel type role with the City of Coral Gables. I was the city attorney and general counsel, and I did that for six and a half years. I also taught during all this time; I've been an adjunct professor since 2005 or so, at different schools. Right now I'm teaching at George Washington.

[00:04:19] And so I did that, and I started getting very involved in the disability rights area and disability inclusion. My daughter is autistic, and that really opened my eyes to how much progress still needs to be made in disability inclusion. Notwithstanding the ADA and the Rehabilitation Act, there's still a lot more that needs to be done.

[00:04:38] So I got involved in that in Coral Gables, and I came to the attention of the dean of FIU, who became Secretary of Labor, Secretary Acosta, and he offered me the post at OFCCP. So I became OFCCP director. I was then nominated to be the Office of Personnel Management inspector general, but they never got to my nomination.

[00:04:59] So I didn't get confirmed, but through no fault of my own; I had unanimous support. So maybe one day. But at that point I went back into the private sector. Now I am a partner at K&L Gates, I serve on the Eightfold advisory board and AI ethics council, and I serve on a number of other boards and committees in the civil rights

[00:05:18] and disability inclusion area.

[00:05:21] Ligia: My Lord, quite the background, Craig. 

[00:05:23] Jason Cerrato: And we are so lucky to have you join us. And we want to dig into that background and share some of those experiences with our audience. The reason we’re all here is to learn something new and crack the new talent code. And as Ligia mentioned, we’re happy to have you back because you were here during our first season as one of our initial guests, but it’s been a little bit of time since that first conversation.

[00:05:45] And one of the things we want to do is we want to kind of get the conversation started and have you talk us through the current landscape of what’s happening in the US today when it comes to AI law and regulations. I know there’s been some recent announcements earlier this year. I kind of want to get your perspective on what’s top of mind and what people should be thinking about here as you’ve come back to talk to us here on the new talent code.

[00:06:07] So with the new regulations that came out earlier this year, we have some regulations in the U.S. as well as the EU AI Act, and both talk a lot about the requirements for the human in the loop, especially around governance. Can you talk about what this means for organizations as they're trying to wrap their minds around this and put together an approach and a strategy?

[00:06:30] Craig Leen: Yes, there has been a lot of activity. You have the European Union AI Act, and you have activity in the U.S., but there has not been a federal statute adopted by Congress that governs AI. You do have existing federal statutes that have been applied to AI by federal agencies like the Equal Employment Opportunity Commission, EEOC, and the Office of Federal Contract Compliance Programs, OFCCP, which we'll talk about today.

[00:07:00] And then you've probably had more of the legislative activity at the state and local level. You have the New York City AI law, and that's a big one that impacts a lot of companies, because a lot of companies do business in New York City and have employees there. You have the Colorado AI law, which is

[00:07:18] really coming into effect in early 2026. And then you have a number of other actions, attempted actions, or soon-to-be actions from different states, including California and Massachusetts. So you've really got to keep an eye on what's happening in this field. It is starting to fill up with AI laws.

[00:07:37] Now, having said that, there are two things you can draw from all of them, including the EU one. First, they're not anti-AI; if anything, they're open to AI. And I think everyone recognizes that every company will be using AI all the time in the next few years. Many are already using it: I saw a stat that among the largest companies in the U.S., the Fortune 500, something like 90 percent or more are already using artificial intelligence in many different things they do, including in employment. You also have the president's executive order on artificial intelligence,

[00:08:13] which is basically a mandate to the federal government to use AI and to find ways to use AI to improve its processes, not just in employment but in every area. So you're going to see the federal government moving into AI big time, and you're seeing a lot of companies already using AI. So none of these laws really seeks to restrict it or stop it.

[00:08:34] But what they do is they try to put guardrails in place, particularly to address the potential of algorithmic bias or to address adverse impact based on protected class. There’s been a lot of scrutiny in that area, understandably. And I think the top AI companies welcome that because they really embrace having AI ethics councils and trying to make sure that they’re acting in a way that’s non-discriminatory, of course, and not with bias.

[00:09:05] After all, if what they're telling you is that they're picking the best people, there's a strong business case that you don't want bias to come in, because then you're not picking the best people. But they also recognize there's a legal case too, and that compliance is really important and needs to be a top priority.

[00:09:20] I do serve on an AI ethics council with Eightfold, and I think every AI company should have one. Frankly, any company that is using AI extensively, and soon that's going to be everyone (we're going to talk about this later, I suspect), should have an AI officer and an AI ethics council that's thinking about these issues, grappling with them, making sure that the use of AI is appropriate, considering the guidance from OFCCP and the EEOC and the EU, because many companies operate in multiple jurisdictions, and making sure you're following the best practices.

[00:09:54] So that's one point. The second is that these laws are concerned about bias, but there are ways to address that in a way that's compliant. And that's good; it means they're finally telling companies what they need to do.

[00:10:06] Jason Cerrato: So Craig, is that a big part of the update from the guidance that came out this spring: the visibility and the oversight, this posture of governance, and advising people to have awareness, partner with vendors, and understand how decisions are being made?

[00:10:23] Craig Leen: Yeah. I think the best guidance that's been put out so far, and maybe I'm a little biased here (I'm not the director anymore, so it was put out after me), is from OFCCP, which really issued some outstanding guidance. I was at the recent National Industry Liaison Group conference, which is the conference of almost all the federal contractors, or the big ones at least, in the United States.

[00:10:46] A lot of us come together there to talk about these issues. It was in Orlando this year, and everyone was really singing the praises of this OFCCP guidance, because it's really neutral. It's helpful. It gives a lot of best practices for companies. It's open to using artificial intelligence and recognizes that it can be positive for equal employment opportunity.

[00:11:07] So I'll tell you: yes, the EEOC has issued guidance, and so have the EU and a number of states; New York's issued guidance. But I would start with the OFCCP guidance, because it gives you lots of best practices you can use in entering into an agreement with an AI vendor, and what sort of precautions you should be taking internally if you're using AI.

[00:11:28] What are some benefits of using AI? What are some areas where you have to be careful? It's really helpful guidance, and you can get it just by going online and searching for "OFCCP AI guidance"; you can read all about it.

[00:11:41] Ligia: So you just mentioned that we haven't rolled out any federal AI laws. You've mentioned New York City, and you mentioned California is working to pass a bill.

[00:11:49] So we're at different stages: city and state laws may differ, and it's also going to vary from country to country. So what is your advice for multinational companies? What is your advice for companies that operate, for example, in more than one state or more than one city? How do you make sense of all of this? How do you approach it?

[00:12:09] Craig Leen: Definitely. You have to assume there are going to be more and more laws adopted, because AI is, or will be, such a significant part of every company's business. I mean, already you go online and do even a basic Google search or Bing search, and you end up with an AI answer as well.

[00:12:25] Now it's so easy to use ChatGPT or Copilot or any of the different options that are provided. And generally they're free; I guess there might be advertising sometimes and things like that, but they're generally free to use. So lots of people are going to AI now and seeing how useful it is.

[00:12:41] In fact, I'm a law professor, and we have to basically instruct students on how they can use AI, because of the concern that AI is getting to the point where its answers are getting harder to distinguish from a person's. And they're quite good; these AI platforms could get a good grade in your class.

[00:12:58] So there could be a potential concern about cheating or things like that, and that's become a big issue in academia. But in the long run, it just means that every single person, no matter your means or your class or your affluence, will have access to all the information in the world in an easier and easier way.

[00:13:18] Which is great; it's going to make our lives so much easier. But of course, you want to be aware of privacy issues. If you're putting information into an AI, you don't necessarily want it to get out, particularly if it's privileged information or things like that. So you want to make sure you know how the AI you're using deals with information, especially personal information.

[00:13:39] You also want to make sure that there's not bias, and I'm sure we'll continue to talk about that: that there's not algorithmic bias, or that it's within acceptable bounds. It's probably impossible, as a matter of chance, to completely eliminate differences in how a group of men and a group of women are assessed by an AI in any given case; you're always going to have a few people who are stronger just by random chance,

[00:14:00] and it could be on either side, men or women, in a particular group. So you're never going to have a perfect score, but there are zones of risk. You need to get that algorithmic bias lower and lower, so that the tool is more useful and less likely to cause adverse impact. And then you should, of course, be assessing whether there is adverse impact,

[00:14:19] and if so, whether there's something that's explainable or not. By adverse impact, I mean: are groups being treated differently based on their race, ethnicity, or gender, and is it because of their race, ethnicity, or gender? That's a concern, obviously, morally and under the law. You don't want that; obviously you want that eliminated.

[00:14:39] But sometimes you may have a particular group of applicants where the men happen to be more likely to have an additional degree or an additional qualification, or maybe the women in the group happen to have that additional degree or qualification. A lot of that happens by random chance,

[00:14:57] or access within society. And so in those circumstances, you just want to make sure that it's not the AI that's causing the adverse impact.

[00:15:06] Jason Cerrato: But Craig, is one of the challenges with this kind of patchwork approach, state by state and city by city, that people tend to react to the one with the most stringent restrictions?

[00:15:18] So at the end of the day, if a certain state goes out ahead of others, that's the one that people have to respond to. So at a certain point, shouldn't there be a federal approach, given that the states are getting out ahead of it?

[00:15:33] Craig Leen: I think there should be a federal approach. I think having a lot of different state approaches is not useful for a market economy.

[00:15:41] There are areas where you do want different approaches. In employment law, it's true, some states are going to provide more protections to employees than others, and that's okay. But when you're talking about AI that's going to be used nationwide, and recruiting markets that go beyond individual states, it's much better to have a federal

[00:15:58] approach, in my opinion. And you're right: a lot of times in employment law, particularly with larger states like California or New York, if they adopt a law, it ends up becoming the de facto national standard, because it's more restrictive and companies are going to be operating in multiple states. So they're like, well, do we really want to operate in five different ways, or should we just take the most stringent standard and apply that?

[00:16:24] And that often happens. But I'll tell you, with AI, right now I don't see any states that have adopted laws that a really good AI company shouldn't be following anyway. The OFCCP best practices go far beyond the New York AI law. I would hope an AI company would be considering its algorithmic bias and testing for it.

[00:16:44] You want that, whether it's required or not. And I think New York, well, I advise on New York, so I know this, basically just requires you to post it. You have to post publicly what your four-fifths ratio, as it's called, your impact ratio, is, and whether there's any sort of adverse impact or algorithmic bias, so that there's transparency.

[00:17:07] And by post, I mean you put it online, so that people interacting with your company, applying for a job, vendors, anyone who's working with your company, know that you use AI and that the AI you use doesn't have much algorithmic bias. That's the goal. But the point is, whether or not there's a New York AI law, they should be doing that.
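The four-fifths, or impact, ratio that comes up here is simple arithmetic: divide each group's selection rate by the selection rate of the most-selected group, and flag any ratio below 0.8. Below is a minimal illustrative sketch in Python with made-up selection counts; the 0.8 benchmark comes from the EEOC's Uniform Guidelines, and a real bias audit would involve counsel and proper statistical testing, not just this ratio.

```python
# Minimal sketch of the "four-fifths" (80%) impact-ratio check described
# in the EEOC Uniform Guidelines. All counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

# Hypothetical outcomes from one screening step.
rates = {
    "group_a": selection_rate(48, 100),  # 48% selected
    "group_b": selection_rate(35, 100),  # 35% selected
}

highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest  # impact ratio relative to the highest-rate group
    status = "flag for review" if ratio < 0.8 else "within four-fifths benchmark"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}, {status}")
```

With these hypothetical numbers, group_b's impact ratio comes out to roughly 0.73, below the 0.8 benchmark; under this framework, that is a signal to investigate, not by itself a finding of discrimination.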

[00:17:26] Likewise, you should be asking your AI vendor: what factors do you consider in your AI? You should know that. It doesn't mean you need to know their code or their proprietary information, but they should be able to explain to you, in a qualitative way, what factors are being considered. Even better, and this I know from being on the ethics council, is if they can actually show you, even numerically, how the different factors impact the score that someone gets.

[00:17:55] Likewise, being able to mask people's protected class when you're assessing them is really important. So AI that can do that, that allows you to just look at skills and takes out any sort of correlation or indication of race or gender, so that you can really make a decision based on skills and merit, that's ideal.

[00:18:17] And I think what you're starting to see is that this actually significantly benefits women, minorities, and people with disabilities, because often affinity bias and unconscious bias come in and prevent those individuals from getting a fair opportunity. And that's what equal opportunity is about.

[00:18:35] I've seen studies on this; I saw a study regarding minorities indicating that there's a preference for AI, a desire that it not be biased. Of course, it's known that there's affinity bias and that there's unconscious bias, and you know what, no matter how many trainings we do, it's still there.

[00:18:54] So you mentioned earlier this idea of having humans involved along with AI; that's really considered the gold standard now. You want both. You do want people checking AI, but you also want AI checking the people, because the AI can tell you really quickly: hey, why is it that this individual, or this group, is always choosing

[00:19:13] the lesser-qualified person when they are this race or this gender? It's a really good way to audit yourself internally, because the AI is going to be using objective factors, hopefully. And if you're seeing that people are making different decisions, you can then ask them why: why are you choosing the person with the lower score here when they're African American or Hispanic, but you're not doing that when they're white?

[00:19:37] So this is going to be a really helpful tool. 

[00:19:39] Ligia: The reality is, I think AI has progressed so much in the last 12 months that it's less about "should I bring AI into my organization" and more about "in how many ways can I innovate within my organization with AI." So let me ask you a really practical question. We all know that at some level, internal general counsel is going to be involved, obviously.

[00:19:59] What can our listeners do to prepare for that conversation? What kind of things should they bring to the table, as they're making the business case, that will appease the concerns of general counsel? Give us a little insight into the mindset today of general counsel at most organizations with respect to bringing in AI innovation.

[00:20:22] Craig Leen: That was a big challenge for me when I was OFCCP director, trying to convince general counsel. I did a big pitch, a big program, trying to get companies to have disability hiring programs. And often general counsel would come in and say, well, we haven't done those before; is that going to be a form of discrimination in favor of people with disabilities?

[00:20:40] Even though that's not a real claim, and it's really clear under the ADA that you are allowed to favor people with disabilities in employment and to have disability hiring programs with accommodations, I would always get pushback, and it was always from general counsel. What I've seen now in private practice is that it's often the same with AI.

[00:20:57] It's like, this is something new; we know how to address situations where people are making decisions. That's what you'll typically hear. From general counsel who are not so keen on AI, you'll often hear something to the effect of: we know what the legal liability is in relation to people making decisions about employment.

[00:21:17] We know what "similarly situated" means. We know that there's unconscious bias, but we also know that to show a disparate treatment claim you have to show X, and to show a disparate impact claim you have to show Y. That's what I'm used to. Bringing in AI? I don't know. They maybe don't feel so comfortable.

[00:21:35] They see all these laws coming in and they’re like, you know what? It’s easier for me. You may have a great business case, but I’m not here about the business case. I’m here about the legal case. I think there’s too much risk. Let’s not do it. And a lot of times what ends up happening, I found with contractors and companies is they don’t involve legal.

[00:21:51] They end up just doing it, and then legal finds out about it later. I've seen many instances where someone has said, even at a conference, "we don't use AI," and then someone points out, actually, you use it several times over. It even happened at the NILG with OFCCP; they mentioned, you know, we don't use AI in some way.

[00:22:07] And then someone said, well, don't you use this and this? And they're like, oh yeah, we do use that; maybe we do use AI. And legal is always like, oh, you're using AI? We need to be involved. So the point is, you're absolutely right: general counsel will typically be the hurdle, and understandably, because their job is to manage risk, to reduce risk,

[00:22:29] and to make sure that if you're doing something new, it's not going to cause a problem for the company. But having said that, it shouldn't be about that particular counsel's knowledge of AI, or ignorance of AI, or whatever it may be. They should learn about AI, and they should raise legitimate risks, but nothing based on not knowing about AI.

[00:22:52] That's not a good reason to block something that really is a good business thing to do. What I see with the best general counsel, and I happen to work with a lot of them, is that if they want to know about AI, they bring in counsel, they learn about it, they read these OFCCP best practices. So if you're at a company where you're coming to general counsel and you want to convince them to let you use a new AI platform, I would go to the OFCCP best practices and adopt a number of them.

[00:23:17] For one thing, I wouldn't just come to them and say: we're already using AI throughout our employment processes, and we didn't tell you. That's going to immediately get them to try to block it, because they haven't gotten to do a review. So you don't want to do that.

[00:23:33] Assuming you're coming to them when you're about to use AI, I would do several things. One, I'd make sure you can explain to them how you're using the AI in employment. Is it something that assesses how people speak or appear in an interview? Is it videotaping them? Is it more of a personality-type test?

[00:23:53] Is it some other aptitude test? Is it something that looks at their resume for skills and matches them to a goal or an exemplar-type employee? You need to be able to explain what the AI is doing, not just that the AI is going to tell us who to pick. So that's one. Two, and this really should come from the vendor,

[00:24:12] they should give you something you can explain; you should be proactive. They're always going to raise concerns about bias. That's really their main concern, that there's going to be some bias claim, and they've probably heard about that. So you should preempt it. You should say: this company we're using has satisfied the New York AI law.

[00:24:33] They've already done algorithmic bias studies. They've limited algorithmic bias so that it passes any sort of algorithmic bias study or test, so it's already minimal, and definitely going to be better than any affinity bias or unconscious bias that people may have. And then there are ways we can test for adverse impact.

[00:24:53] We do it already; we're going to do it the same way with the AI. We're prepared to do that, and if there's an issue, we're going to address it proactively. Three: this entity we're working with will be able to explain what factors the AI considered, and they're going to keep good records

[00:25:10] so that we can satisfy an OFCCP audit or an EEOC complaint if one comes up, and they'll be able to support us; we'll still be taking the lead, but they'll be able to support us. So you want to have all of that already in place. And have an AI ethics council; you should propose that. It's not just going to be you, Mr.

[00:25:28] or Madam General Counsel: we're going to have an AI ethics council. You'll be on it, or you'll be the advisor to it, whatever you'd like, but there are going to be other people taking responsibility for the ethics component, under your supervision, of course, or however you want to frame it to them.

[00:25:43] And we're going to fund, if necessary, outside counsel, or this office, to do the research you need in order to feel comfortable. And by the way, here are a couple of studies and review articles that we've already identified. It's about making them feel comfortable that you're not going to try to bypass them, or box them in and force them into something where they're going to have legal liability down the line and feel like they have to say no to protect themselves. That's what you want to avoid with general counsel.

[00:26:14] But it's very easy to do, and maybe I shouldn't say very; it can be done fairly easily if you dot your i's, cross your t's, and follow these best practices. And the other thing I would do, forgive me: I would print these out, the OFCCP best practices and the EEOC guidance on AI that they published.

[00:26:34] And I would give them to the general counsel, and with this AI guidance I would check it off: we did this, we did this, we did this. OFCCP is known for being the most restrictive of all the federal agencies, the most protective of employees, because they do these audits and they use statistics. And you can tell that general counsel: we followed their guidance, and we've actually reduced the chance of an audit finding, because disparate treatment is now taken off the table.

[00:27:02] This is all now about the use of an objective factor. Humans are involved, as they've asked us to do. We can test for adverse impact in real time, and we can address it. Plus, we can do the resume masking, making sure that decisions are made without consideration of race or gender, and we can even test whether they are, using the AI.

[00:27:24] So that's how I would present it to general counsel. I think most general counsel presented with that would say, wow, this is great. They'll probably say, let me look into it more, they always do, but at least they're not going to just say no.

[00:27:37] Jason Cerrato: Yeah, Craig, you said it was easy. I’m not sure it’s easy, but we’re hoping that you can help make it easier for people with some of your advice.

[00:27:45] So to continue to pick your brain and share some of your best practices and advice: you keep mentioning these councils and their formation, and we've had a couple of guests on previous episodes who have talked about forming their own respective councils. If I'm at Acme Manufacturing and I'm building my own AI ethics council or AI innovation council, from your guidance and experience, who should be on that council?

[00:28:10] Who should be involved if I'm trying to balance proper oversight with the right representation and knowledge, while also trying to innovate?

[00:28:20] Craig Leen: I would definitely have your general counsel's office involved, because ultimately, no matter what the ethics council says, at most companies the general counsel can probably block something anyway.

[00:28:30] So it's just practically a good idea to have general counsel involved. Also, you want to educate your general counsel's office; you want them invested in the AI, because there are going to be more and more uses of AI, and it won't only be in employment. So, definitely someone from your general counsel's office.

[00:28:45] It's probably better if it's not the general counsel personally; if it's a big enough office, they can have a designated attorney, someone you work with a lot on AI issues, serve on the council and keep the general counsel updated. But it could be the general counsel as well. So you have that. Then probably the head of HR, I'd say, or your chief people officer, someone like that,

[00:29:05] or a designee from HR, because that's really the business side. So you have the legal, and you want the business case; that's HR. This hopefully will be helping HR make better employment decisions, fill spots more easily, and get employees who are better suited to the particular job and who will hopefully stay longer and do better.

[00:29:25] That's why you're using the AI to begin with. So definitely someone from HR, probably the chief people officer or a designee. Then, if you're a large enough organization and you have an EEO officer, or someone who's in charge of your EEOC or OFCCP complaints or compliance, or affirmative action compliance,

[00:29:44] I'd have that person on there too, particularly to make sure that the federal contractor issues are considered and evaluated. And then I'd probably have one or two independent people that you could bring in. Like I mentioned, I'm on an advisory board, but I'm still independent in the sense that I'm not an employee of Eightfold, for example.

[00:30:03] Ligia: But should those independent people come from an HR background or legal background? 

[00:30:08] Craig Leen: I have to say, I like the way Eightfold did it, because they brought in Vicki Lipnic and myself, and not just because they brought in myself. I'm a former OFCCP director; Vicki is a former EEOC acting chair and also a former EEOC commissioner.

[00:30:23] We both have reputations in the field that we care about. We're going to give you our best advice. We want this to work, because our names and reputations are associated with it too. So if you can get someone like that, a former government regulator, I think that would be best.

[00:30:40] But other people you might consider are general counsel from other entities or companies, or HR people, because you always want to combine the legal and the business when you're assessing AI. Then, of course, if you happen to have an academic who knows a lot about AI, that could be helpful; so someone from academia is possible too.

[00:30:59] Ligia: And just for our listeners, the scope, or shall we call it the mission: what objective should this ethics council or internal ethics committee have?

[00:31:10] Craig Leen: To ensure that the company is doing everything in its power to make sure the artificial intelligence being used is used in a way that's ethical and consistent with civil rights requirements.

[00:31:27] There's an efficacy requirement too. The thing is, you already have your people in tech: your AI individuals, your computer programmers. You have people who know that side. And to some extent, the AI ethics council checks them, because there are a lot of brilliant things that can be done, some of which may not be ready to do, and those need to be checked.

[00:31:48] So there's a little bit of a check. By the way, that doesn't mean you wouldn't want a tech person or a programmer on your council, but I would make that one person, to provide that perspective. The point is, a lot of times those are the individuals who are going to be presenting to the ethics council:

[00:32:05] hey, I have this great idea. If you're a tech company or an AI company, they'll be presenting lots of different ideas. If you're a company considering the use of AI, it may be vendors, or some of your HR or other folks who are working with vendors or trying to find vendors, bringing possible uses of AI to your ethics council.

[00:32:26] So they're often going to be looking at these things, and you want people who are open-minded to new ideas. You don't want what we just talked about, general counsel who are closed-minded. And by the way, I'm not saying all general counsel are; I was a general counsel, and a lot of general counsel are open-minded. The point is, you want open-minded individuals, though part of their role is to check and to review.

[00:32:46] So the goal of the council is to review all uses of AI by the company and make sure that ethical and other compliance standards are being followed. That really is what they should be measured on. I mentioned efficacy a little bit, the effectiveness of the AI, because what you'll find in the law is that you shouldn't be using any sort of

[00:33:07] AI or test or any sort of selection procedure if it's not effective, because effectiveness is the justification for its use. So you're not supposed to just use any test: ask people what their favorite color is, and hire the ones who say green over the ones who say blue. They often used to say that you can hire someone for any reason or no reason, just not an illegal reason.

[00:33:30] But that's not really the way it is anymore. You need to hire people because of job-related criteria. And so we do talk a little bit about effectiveness, because the defense of the AI, if there is adverse impact, is that it's effective and that the adverse impact is minimal. You do want to get into that a little bit.

[00:33:47] So we learn about how the AI is being used. Is it job-related? Is it something that can be validated, which is a legal term, but basically: is it connected to business necessity? Is it helping the business? Is it positive? Is it something we should really be looking at and defending? That's the baseline question.

[00:34:04] But once you satisfy that, you then also want to make sure that there's not unnecessary algorithmic bias, and that any adverse impact is being assessed to make sure it's not being repeated, or that, if it's happened, it's because of random chance and not because of some flaw in the AI system, or in the selection system if people are involved.

[00:34:24] So your AI ethics council should be reviewing every use of AI at your company, and also your AI policies. And you do need a couple of independent people, just because it gives the council more credibility. Honestly, I think it makes for a better council, because you're getting a different perspective that's not internal.

[00:34:42] So you avoid, to some extent, the possibility of groupthink, or some of the negatives of group decision making where everyone just agrees with each other. You get a couple of independent views; I think that's useful. And it gives credibility, because you can say: look, we don't just have employees, who may feel compelled, telling us whether this AI is okay or not.

[00:35:03] We have a couple of independent people too. That's really important.

[00:35:06] Ligia: Yeah. What are some of the biggest gaps today in AI governance and how do organizations stay abreast of them? How do they get educated? 

[00:35:16] Craig Leen: The biggest gap in AI governance is the lack of a federal standard, and the Uniform Guidelines. Different regulators will have different views on this.

[00:35:23] My own personal view is that the Uniform Guidelines are not sufficient to deal with artificial intelligence: they really are focused on human selection, and they don't address algorithmic bias at all. They just address outcomes, like adverse impact. So I don't think they're sufficient, but they're what we have.

[00:35:41] You do want people who understand the Uniform Guidelines, how to assess for adverse impact, how to validate AI, and all that. But you also want forward-looking individuals who say: you know what, even if we don't have adverse impact, we don't want algorithmic bias, period. We want to limit that as much as possible.

[00:35:57] We want to be able to demonstrate, in a repeatable experiment, that the AI is helping identify better talent and getting them in. You want people thinking like that, because that's where the law is going to go eventually. Sooner or later, enough states, or the United States, and the EU is already doing this,

[00:36:17] are going to adopt a law that requires you not to have a certain amount of algorithmic bias, and that requires you, for higher-risk decision making like employment decisions, to audit your own AI. And you're going to want an AI company that will help you with that. Not one that's either out of business because of liability, or that's going to tell you: we don't do that, that's all on you. Because that is the current state of the law:

[00:36:43] at least on the OFCCP side, it is the employer that's ultimately responsible for its employment decisions. There is a case the EEOC has been involved with where they're maybe taking a little bit of a different look, saying that maybe sometimes an AI company could be liable as well, but I don't think that will be the ultimate rule; that's my own view.

[00:37:06] In every other area, they always say it's the employer. And frankly, even if it's the AI company too, it's still going to be the employer and the AI company. So regardless, you're never going to be able to push it off completely on an AI company, I don't think. So you want an AI company that cares about its reputation and works with a lot of other companies; you don't want a fly-by-night operation. You want to use an AI entity with a good reputation, that complies with the New York AI law whether or not it has to, that will help you with auditing, and that will be there if you're ever in an investigation.

[00:37:44] Those are the four big things I would want, and I would include them in my agreements with the AI company: that you're going to be there for us if we get reviewed; that there's a way to audit, to make sure there's not adverse impact; that you comply with the New York AI law and any future AI law similar to it; and that you keep records, so that we can produce something to OFCCP or to the EEOC.

[00:38:07] Ligia: The underlying narrative around AI conversations is really about trust, or maybe the lack of trust. Do you think that over time, as the dust settles and more and more regulations get put in place, that trust level will improve and concerns will be ameliorated?

[00:38:25] Craig Leen: Yes, I definitely think so.

[00:38:27] I think as people use AI more and more, which I think you're going to see this year and next year, with people using AI all the time in their browsers, and they start getting answers that are really good (I'm already getting answers that are really good), people are going to become more comfortable with it.

[00:38:43] There'll be a point, I used to say 10 years, now I think maybe three years or less, where it's going to be: why aren't you using AI? Someone will apply with a good resume and their skills; you won't use AI, you won't pick them, and you'll pick someone else from a different protected class. And they'll say: I was discriminated against, and they didn't even use AI.

[00:39:05] If they'd used AI, they would've picked me, because I have more skills and qualifications. Why aren't you using AI? It may take some time for the EEOC to get there, and maybe OFCCP, but I think they will eventually. If I can make the comparison, and I know it's not the neatest comparison, but I think there's some similarity:

[00:39:26] it's like with driverless vehicles. I feel like in 10 or 15 years, you're really going to have to get a special license to be able to drive, because you're not going to drive as well as the AI; you're going to mess it up. It's going to create more opportunity for negligence or accidents, and you're going to have to justify why you weren't letting the AI drive you if you get in an accident.

[00:39:45] And it's going to be much easier to be found grossly negligent, or negligent, as a driver, because everyone else is using AI, and they'll know it was your fault that you messed it up, because you decided you were too good for the AI, or you really love driving and wanted to drive. So I think it's the same thing with AI.

[00:40:00] I think there'll be a point where all the major companies use AI all the time. Will there still be a human element? Yes, probably. I think an argument could be made that there may be a point where that's not the best practice, but yes, probably, because these are still human institutions. Having said that, I think AI is going to have a bigger and bigger role. Honestly, the best way to put this: I remember when I applied to the government, thousands of people would apply for every position.

[00:40:25] Thousands. You think someone from the government really goes through every one? I'm sure they tell you they do, and I'm sure they do. But how quickly do they look at those things? Is that really fair? And the government, I'm sure, does an even better job than the private sector.

[00:40:40] I'm sure they get thousands of resumes too, and they have to find data management techniques and other ways to narrow that down without really looking at everybody, because they can't look at thousands of resumes. And we all know that when you look at resumes, even at universities (there are studies that say this), people look at each one for maybe 10 seconds, 20 seconds, a couple of minutes if you're lucky.

[00:41:02] And how many things are they going to look at in that time? They look at your hobbies. They look at what school you went to. And I say this in a negative way, because it's an opportunity for affinity bias: oh, I have the same hobby, let's call that person. Oh, they went to the same school, let's bring that person in.

[00:41:19] It's very rare that those reviews are done in a way where there's true equal opportunity, I would think, just because you're not really looking at everything and assessing people for skills on an objective scale. So the point is: wouldn't it be better, isn't it better, since this is already being done,

[00:41:35] and assuming you've eliminated the bias, to have an objective AI look at the 10,000 resumes? It's going to look at all 10,000, it's going to do it pretty quickly, and it's going to identify the top 50 or 100 or 200 for you to look at. And then you can do a much more fulsome review.

[00:41:51] And you know what, it’s better for everybody, assuming there’s not bias and all that’s been addressed and you’re using a good AI company. Because now at least I know my resume got looked at. I know I got assessed based on my skills and not because of something that could be affinity bias. 

[00:42:06] Jason Cerrato: Well, on this show, we often talk about how AI and talent intelligence turns processes upside down and changes the way you look at them.

[00:42:14] Your driving example just turned driving upside down for me. That was a great analogy.

[00:42:20] Craig Leen: I know a bit about it, because I was a local government attorney, and everything is about driverless vehicles and things like that right now.

[00:42:25] Jason Cerrato: You've been giving lots of great advice through the course of the conversation today.

[00:42:30] And a few minutes ago, you counted out your four pieces of advice, so for the people who have been listening, hopefully you've been following along and jotting those down. But for folks who are looking for resources, do you have any recommendations, specific places on the web or organizations they should check out, where listeners can go to learn more about what we've talked about today?

[00:42:55] Craig Leen: The OFCCP guidance is quite good. I'm proud of it. I'm not taking credit for it; I didn't do it, but I'm so proud of the agency for doing that. I think it was excellent guidance. There are also a couple of EEOC guidances on the use of AI, particularly with disability and some other areas.

[00:43:10] So take a look at those. For Eightfold, I've done a whole presentation on OFCCP, an introduction to OFCCP that talks a lot about artificial intelligence. It's available on YouTube and, I'm sure, on Eightfold's website, so take a look at that. Vicki Lipnic is also on the Eightfold AI ethics council and advisory board with me.

[00:43:29] She's the former acting chair of the Equal Employment Opportunity Commission and a former commissioner as well, and she did a whole EEOC 101 talking about artificial intelligence. So I think those are really good. Again, I'm touting my own program, but it's a great program, so I highly recommend it.

[00:43:47] It has been viewed over 2,000 times; I love that. And obviously, if you go to my page on the K&L Gates website, I have a lot of free CLEs, including on artificial intelligence; those would be useful. More generally, in terms of AI guidance, I would definitely go to the Eightfold website; you have a lot of really great resources.

[00:44:07] My focus has traditionally been more on the civil rights impacts of AI, so I've focused on that. But I would also take a look at the president's AI executive order. There are a lot of really useful definitions of what artificial intelligence is and how it can be utilized, and it also talks about some of the business cases the government could use it for.

[00:44:29] I think that's useful. I often like to cite things like that because they come from the government. It is an executive order, and maybe a different president will change it, but a lot of those executive orders do continue to persist for a long time.

[00:44:45] And I will tell you, and obviously we're not political here, but one thing I think President Biden and President Trump before him have in common, and that will probably be true of whoever is president next, is an interest in AI. I mean, I think there's a real view that it's forward-looking and that the government needs to utilize AI.

[00:45:05] We need to compete as a country with other countries and not fall behind; we want to be ahead of everyone. I think it would help to pass a national AI law, I'll say that. But certainly from a business standpoint, we want to be at the forefront of AI, and I think that's something you're going to see as largely bipartisan.

[00:45:22] Ligia: I think the last thing, which you've already mentioned, is just to get involved with it. Don't be afraid. Like you said, look for different instances of using it and understanding it, because I think that's the biggest thing. I think I'm going to get into a driverless car soon; I haven't tried one yet. But you talked about ChatGPT and prompts.

[00:45:39] Yeah, I love it. Craig, thanks so much, as usual; amazing advice. Don't be surprised if this podcast gets as many listeners as the last time you were with us. On that note, I think we're going to wrap it up here. Thanks so much, Craig. Thanks, Jason.

[00:45:55] Craig Leen: Thank you. It was a lot of fun.

[00:45:57] Ligia: Thanks for listening to the New Talent Code.

[00:46:00] This is a podcast produced by Eightfold AI. If you’d like to learn more about us, please visit us at eightfold.ai, and you can find us on all your favorite social media sites. We’d love to connect and continue the conversation.
