Webinar

No more black boxes: The new mandate for explainable AI in HR

In this fireside chat, we unpack why explainable AI is now a non‑negotiable in HR and how leaders can turn transparency and governance into a real advantage in talent decisions.



AI is transforming every stage of the talent lifecycle — making how you govern it as critical as what it can do. Organizations can no longer rely on black‑box models or unchecked automation; trust must be engineered, continuously governed, and proven. As AI shifts from optional to operational, transparency and governance have become board‑level imperatives.

In this webinar, Varun Kacholia, Co‑founder and CTO of Eightfold, and Meghna Punhani, Chief People Officer at Eightfold, are joined by Bill Pelster of The Josh Bersin Company for an in‑depth conversation on what responsible AI really means in HR and how to put those principles into practice.

Tune in to the conversation to better understand:

  • How this wave of AI is changing the fundamentals of people decisions and expectations of HR.
  • What it means to design AI for transparency, fairness, and accountability rather than bolt them on after the fact.
  • How to move from intent to evidence with bias testing, monitoring, and governance that stand up to regulators and boards.
  • How leaders can operationalize trust — through governance, standards, and oversight — so AI strengthens, rather than erodes, confidence in talent decisions.
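Moving "from intent to evidence" on bias testing maps to concrete, well-known checks. One common starting point is the "four-fifths rule" from the US EEOC Uniform Guidelines: a group whose selection rate falls below 80% of the highest group's rate is a red flag worth auditing. A minimal sketch in Python (the numbers are made up for illustration; this is not Eightfold's methodology):

```python
# Illustrative bias test: the "four-fifths rule" from the EEOC Uniform
# Guidelines on Employee Selection Procedures. A selection-rate ratio
# below 0.8 versus the highest-rate group is a common audit trigger.
# All figures below are invented for the sketch.

def adverse_impact_ratios(groups):
    """groups: {name: (selected, applicants)} -> {name: ratio vs. best group}."""
    rates = {g: selected / applicants for g, (selected, applicants) in groups.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

ratios = adverse_impact_ratios({
    "group_a": (45, 100),  # 45% selection rate
    "group_b": (30, 100),  # 30% selection rate
})
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios, flagged)  # group_b's ratio is ~0.67, below the 0.8 threshold
```

In practice this kind of check runs continuously against live decision data, not once at deployment, which is what "monitoring" adds on top of one-off testing.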

Bill Pelster, Varun Kacholia, and Meghna Punhani discussed the transformative impact of responsible AI in HR. Varun highlighted the rapid adoption of AI, emphasizing its role in augmenting human reasoning and the need for transparency, fairness, and accountability. Meghna stressed the importance of redesigning workflows, skills-based hiring, and internal mobility to leverage AI effectively. They noted that AI should expand opportunities, not limit them, and that trust is crucial for adoption. Varun and Meghna also discussed the evolving role of HR in governance, compliance, and the need for cross-functional collaboration to ensure responsible AI practices.

Responsible AI in HR: Introduction and Context

  • Bill Pelster introduces the session on responsible AI, emphasizing its importance in HR and the need for trust and transparency.
  • Bill Pelster outlines the session’s goal: to provide a framework for evaluating responsible AI in HR.
  • Varun Kacholia discusses the unique nature of the current AI wave, highlighting its rapid adoption and its ability to augment human reasoning.
  • Meghna Punhani emphasizes the importance of redesigning work and organizational structures to leverage AI effectively.

The Impact of AI on Organizations and Workforce

  • Meghna Punhani discusses the need for organizations to understand future skills and reskill their workforce to adapt to AI.
  • Meghna Punhani highlights the importance of skills-based hiring, internal mobility, and development to keep pace with technological changes.
  • Meghna Punhani notes that employees’ expectations are changing, and they want AI-enabled experiences that are fair and transparent.
  • Varun Kacholia explains that AI systems must be built and used in ways that are fair, transparent, and aligned with the best interests of individuals and humanity.

Responsible AI Practices in HR

  • Varun Kacholia breaks down responsible AI into three non-negotiables: transparency, fairness, and accountability.
  • Varun Kacholia explains the concept of model cards, which provide transparency about the data used in AI models.
  • Meghna Punhani emphasizes the importance of trust in AI systems, as employees want to understand how decisions are made and feel that AI is fair and transparent.
  • Meghna Punhani discusses the need for accountability by design, ensuring that AI systems are designed to be fair and transparent.
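The model cards mentioned above can be made concrete: a model card is essentially a structured disclosure shipped alongside a model. A minimal sketch of what one might contain, with hypothetical field names and values (not Eightfold's actual schema):

```python
# Hypothetical, minimal model card for an HR matching model.
# Every field name and value here is illustrative only.
from dataclasses import dataclass

@dataclass
class ModelCard:
    model_name: str
    intended_use: str                    # what decisions the model may inform
    data_sources: list[str]              # high-level description of training data
    excluded_attributes: list[str]       # inputs deliberately withheld from the model
    fairness_metrics: dict[str, float]   # latest audited values
    last_audit: str

card = ModelCard(
    model_name="role-match-demo",
    intended_use="Rank candidates by skills match; a human makes the final call.",
    data_sources=["anonymized career histories", "public skills taxonomies"],
    excluded_attributes=["name", "age", "gender", "photo"],
    fairness_metrics={"adverse_impact_ratio": 0.94},
    last_audit="2025-06",
)
print(card.excluded_attributes)
```

The point of the structure is auditability: each field is something a regulator, board, or candidate could ask to see evidence for.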

AI in the Full Talent Lifecycle

  • Meghna Punhani explains how AI can improve the hiring process by providing consistent evaluations and reducing bias.
  • Meghna Punhani discusses how AI can expand opportunities for employees by surfacing roles based on skills rather than titles or personal networks.
  • Meghna Punhani highlights the importance of performance and succession planning, where AI can highlight patterns and readiness signals.
  • Varun Kacholia discusses how AI can make opportunities more accessible and equitable, using examples like masking candidate profiles and AI interviewers.
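The profile-masking practice mentioned above amounts to a redaction step before human review. A hypothetical sketch of what that might look like (field names are assumptions for illustration, not any vendor's actual implementation):

```python
# Hypothetical sketch: redact fields that could bias a human reviewer
# before candidate profiles are shared for interview selection.
MASKED_FIELDS = {"name", "email", "photo_url", "graduation_year"}

def mask_profile(profile: dict) -> dict:
    """Return a copy of the profile with potentially biasing fields redacted."""
    return {k: ("[REDACTED]" if k in MASKED_FIELDS else v) for k, v in profile.items()}

masked = mask_profile({
    "name": "Jane Doe",
    "skills": ["python", "sql"],
    "graduation_year": 2008,
})
print(masked)  # {'name': '[REDACTED]', 'skills': ['python', 'sql'], 'graduation_year': '[REDACTED]'}
```

Doing this in software rather than with a printout and a marker also makes the redaction policy itself reviewable and consistent across every profile.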

Balancing Innovation with Risk

  • Varun Kacholia emphasizes the need for intentionality in balancing speed and safety, with safety being non-negotiable.
  • Varun Kacholia explains that at Eightfold, they innovate rapidly by leveraging a foundation of responsible AI.
  • Meghna Punhani discusses the importance of setting clear decision boundaries, focusing on expanding opportunities, and maintaining visibility and control.
  • Meghna Punhani highlights the need for clear communication and change management to build trust and adoption of AI within the organization.

Governance, Accountability, and Compliance

  • Varun Kacholia explains that governance is no longer an IT or GRC project but is deeply embedded in every part of the organization.
  • Varun Kacholia discusses the collaboration between builders, legal, HR, and customer teams to ensure compliance and responsible AI practices.
  • Meghna Punhani explains that HR governance has evolved to focus on the impact of AI on outcomes, trust, and accountability.
  • Meghna Punhani highlights the importance of clear employee communication and aligning AI practices with the company’s values.

Future Predictions for Responsible AI

  • Varun Kacholia predicts that AI will continue to accelerate, and the stakes for responsibility will be higher.
  • Varun Kacholia emphasizes the importance of building AI systems with transparency, auditability, and accountability by design.
  • Meghna Punhani predicts that talent leaders who adopt AI responsibly will have a competitive advantage in building trust and redesigning roles.
  • Meghna Punhani highlights the importance of fluid careers, where AI matches people to work based on skills, and accountability remains with humans.

Closing Remarks and True North Principles

  • Bill Pelster summarizes the true north principles of responsible AI: transparency, fairness, and accountability by design.
  • Bill Pelster emphasizes the importance of these principles in guiding the development and adoption of AI in HR.
  • Bill Pelster thanks Varun Kacholia and Meghna Punhani for the thoughtful conversation and looks forward to continuing the discussion.
  • The session concludes with a focus on the exciting and abundant future of responsible AI in HR.

Bill Pelster 0:00

Hi everyone. Welcome to today’s session. We’re diving into one of the most important conversations shaping our industry right now: responsible AI. AI is no longer experimental or optional. It’s reshaping every function, including HR—any part of the organization that makes decisions about hiring, mobility, pay, and performance, and where trust and transparency matter most. It now sits at the core of how organizations operate, compete, and grow, which is why trust, transparency, and governance have become board-level priorities. I’m really excited to be joined today by Varun, Eightfold’s Co-founder and Chief Technology Officer, and Meghna, Eightfold’s Chief People Officer. And very simply, our goal today is: by the end of this session, you’ll have a clear framework for evaluating responsible AI in HR—what to ask for, what proof to look for, and what good looks like in practice. Before we dive into AI, let’s zoom out a bit. And Varun, I’m going to come to you first. From your perspective, what’s fundamentally different about this AI wave compared to all the previous tech waves that we’ve lived through?

Varun Kacholia 1:10

Absolutely, Bill, and like, I’m very happy to have this conversation. Like you know, this is timely, and this is important to have right now, especially as technology is accelerating. I was reflecting on this before, like you know, this session. Personal computers have been around less than 50 years, smartphones less than 20 years, and many of us still remember when the iPhone arrived, 2007, 2008—so very, very recent. So technology has progressed incredibly fast. ChatGPT, which arrived a couple of years ago, has seen faster adoption than any of the prior technologies. And this wave, I believe we all see it, is very different, much faster, and much stronger than all other technology changes we have seen in the past. And the difference that I see is technology now is able to augment our reasoning capabilities—not just facilitate workflows, not just enforce documentation or create a database of customers or leads, but really help us think, help us reason. And in HR, we believe that this shift is very, very consequential. At Eightfold, we have been building for, like you know, this change for almost a decade now. We have built these systems in a very responsible way, thought through from the ground up. We have talked about many of these things in the past, and I’m sure we’ll talk more during this call as well. So when AI is part of such a system, as you have said, trust and transparency stop being optional; they become a prerequisite to any and all adoption. So very excited to talk more about this today.

Bill Pelster 3:17

No, Varun, I really appreciate that, and also putting this AI wave into the context of all the technology transformations we’ve lived through in our careers. And Meghna, maybe from a people perspective, you know, what does this shift mean for organizations, people leaders, and the workforce?

Meghna Punhani 3:36

Yeah, great question. And Bill, I would start off by saying exactly what Varun said: really an important conversation. And thank you for, you know, conducting this with us. Very happy to be here talking about this important topic. So first of all, I would say it’s a very, very exciting time for organizations. And I really believe that people teams are having a moment with this technological change. For that matter, you know, when we really think about the value that AI brings, and it’s not necessarily really about adding additional AI tools to any of the processes, but how do we think about redesigning work itself? Right? Like, what would be the impact on roles, workflows, organizational structures, and how can AI accelerate? And what are the areas AI accelerates, and what are the areas that humans should really own? So when I think about it from a people perspective and a people leadership perspective, one thing we feel is that fundamentally, we need to understand what the skills of the future are going to be, right? And how do we find the right skills, reskill people, support our people, so that AI is more of a capability, and people don’t really fear AI that much. And then second, how is talent really moving around organizations? So you know, again, like skills-based hiring, internal mobility, development of people. Like, how do we make sure that we as humans are really able to keep pace with how the technology is evolving and how the jobs are really evolving because of the technology? And when I think about the employees, I really feel that their expectations are changing as well. Obviously, there is fear; people are scared. At the same time, they also want, you know, AI-enabled experiences. 
They really want to understand that and really believe that it’s fair, and they have clear visibility into, you know, how their skills really matter, what skills they’re going to need, where they are lacking, and how their jobs are going to evolve, for that matter, and you know, if they’re going to have opportunities to grow or they’re going to be stuck. So that’s why, you know, we believe that a skills-based approach is really, really important. And you know, as you know, the Eightfold platform is used across like 150-plus countries, like 20-plus languages, right? Like we have… we have it all over, all across the board. And I feel that really helps us understand, you know, how do you move from static job descriptions to skills-based talent decisions for that matter, right? And so this AI wave, I believe, is going to be a huge unlock for productivity, for opportunity, for our people.

Bill Pelster 6:14

No, that’s fantastic. And I think, you know, on the person side, we really… we have to acknowledge the fear. And anytime there’s change, there’s fear out there, but what a huge opportunity to reimagine what it is we do. And we actually think there’s going to be a huge boost to not just individual productivity, but the overall growth that comes with AI being able to be put into the workflows. But if we take a step back, and Varun, I’m going to come back to you here, so the core thing here is it’s great that we have all this AI; when you hear “responsible AI,” what does that really mean in practice?

Varun Kacholia 6:46

That’s a very good question. And for the last decade… so AI is not new. I started my career in core parts of search at Google, building applied machine learning, applied AI. So AI has been around for a while, and the term “responsible AI” has become a lot more structured, a lot more concrete in recent years as AI usage has now permeated many, many, if not all parts of our lives. So I’ll break this down in two ways. First, I think of this in a very human way. What does responsible AI mean for every single person? What it means is that AI systems are built and used in ways that are, number one, fair, transparent, and aligned with what’s best for the individual and also humanity or people at large. It’s no longer about systems being just functional. It’s no longer about systems doing and producing some output. It has to be fair, transparent, and aligned with what’s best for our collective future. Now, the second way is like, “Okay, you know, this definition makes sense to us. How do we implement it? How do we execute on this?” In the last few years, one of the common things that the industry has now embraced—and you see this with many of the model providers out there—is they publish a model card with every single model. And it is still a work in progress for, like you know, what data is shared, but at a high level, I would say any AI system, especially AI in HR, will come down to three non-negotiables. The first one: transparency. People should understand what data is being used and where this is coming from. Second is fairness—that you can measure, that you can demonstrate, that you can monitor, and that you can audit. Model cards have started to enable this. It’s still a work in progress for many of these model providers, but at Eightfold, we have been doing this for almost a decade now.
And like you know, when we go through this with our customers who are used to this, who are used to model risk management, this really comes to light. And the third is accountability by design. AI systems should not be designed just for functionality, but for the impact they have and the fairness these systems can increase and enable. So all of these things, I believe, have become more important as AI has been embedded in our daily lives through all the possible workflows, many different products that we all use. So responsible AI has to cover the full lifecycle: the data, the model, transparency, metrics, and finally auditing, monitoring, and a human in the loop.

Bill Pelster 10:20

No, that’s… that’s really helpful. And the ideas of, you know, the foundational principles of transparency and fairness and accountability… Meghna, as you look at it through the people lens, you know, from a people perspective, why is responsible AI so urgent right now?

Meghna Punhani 10:36

Yeah, that’s so important. You know, at the end of the day, AI is technology, but who’s using the technology? It’s humans, right? And what’s the most important thing for human relationships? I feel that trust is really, really important when it comes to interacting with the system, interacting with each other. So for our employees—or candidates, or you know, anyone in any organization—how leaders deploy AI sends a huge signal about what value it’s creating and how people really perceive it. So if it feels opaque or just fully focused on efficiency, people really assume, I feel, the worst, right? They feel that, “How is it thinking about me? Is it evaluating me without having the right level of context about me? Is it going to treat me just like a regular score? Is it really going to only think about me as data, or a data point somewhere, right? Like, what does it know about me? What incomplete information might exist?” So as humans, you know, we would be thinking about all of this. And I just feel that that really erodes trust in people, right? So the reason responsible AI, I feel, is absolutely urgent right now is because employees and candidates, we think, want to look at decisions and really feel human about it, you know? So that’s why transparency, like Varun was earlier saying, right? Transparency matters because decisions that we make about employees or about candidates should not really feel like they’re coming from a black box, right? People need to understand what’s happening with the technology. So how are they being treated? Then fairness also matters, because people want to believe that the system… AI is really helping them expand the opportunity and not necessarily just narrow it, right? Like going back to “it’s going to take away jobs” and so on and so forth. How is it really going to help them, enable them to get better? And then finally, I feel accountability also matters, right?
Because people know that there is always a human in the loop that’s making decisions, and it’s not just purely based on an algorithm, because biases and so many other things creep in when it depends purely on an algorithm. So if we treat this with transparency, with fairness, with accountability, then it really reduces the anxiety among humans, and it also helps us increase adoption. So I think leaders really need to be explicit about what AI can do and what it cannot do. And then that’s when people will feel that they are moving from real fear to fluency. And that’s when, you know, you would be really unlocking productivity and growth with people. And that’s the most urgent issue right now with responsible AI.

Bill Pelster 13:34

No, I would agree. And Meghna, if you don’t mind, I’m going to stick with you for the next question, and maybe we’ll double-click and go a little deeper into the full talent lifecycle. And so again, you know, from an HR perspective, you know, when AI touches everything from hiring to succession, what does responsible AI look like in practice? How does it change the experience for candidates and employees?

Meghna Punhani 13:56

I mean, I think in today’s day and age, like even at Eightfold, the kind of systems we are building, we know that AI is going to touch every single career moment that people are having, right? And it does today. Whether it’s hiring, or mobility, or any of, you know, how people are evaluated, and so that journey needs to be super transparent for people, right? So people are watching how, as I said earlier, AI is being used. And trust, we believe, is built based on three things: people are able to see the systems and really question and understand how they work. As I was saying earlier, AI needs to expand that opportunity for people, and they should not be feeling restricted by the technology that’s out there. And then again, like, humans, you know, stay accountable. So if we really break it down, and we take some of these examples… if we purely think about hiring, right? Responsible AI in hiring starts with candidate trust. So candidates should understand what the system is looking at, how the data about them is being surfaced, and what information about them is correct. And if it’s incorrect, they have the ability to actually go in and correct it; that’s what will improve the experience for the candidate. Like, when we think about the AI Interviewer product that we have, it’s very much about consistency. Human interviewers are naturally influenced by, like, mood, fatigue, how they’re feeling that day. They are biased, right? Which leads to inconsistent interviews. Like, you know, I could just wake up happy one day and be a kind interviewer, versus, you know, wake up grumpy. But when you look at AI Interviewer, it evaluates every single candidate with the exact same criteria and ensures that hiring decisions, you know, don’t have biases. They are based on merit, rather than the interviewer’s state of mind that day, right?
Then, when I think about mobility and how that expands opportunity for people: if you really think about how AI would surface, you know, roles based on people’s skills, not just titles or their personal networks, but really help people discover paths that they never really thought about, that’s opportunity expansion for people. And, you know, the other thing we were talking about was performance and succession. That’s where I feel that augmented judgment really matters, because AI can highlight patterns and readiness signals, and, you know, leaders can actually remain accountable for the final decisions, because context and judgment will really matter for them. So employees are watching, as I was saying earlier, how AI is being deployed in, like, hiring, mobility, and all sorts of career movements across the whole talent lifecycle. And so, you know, it’s very important for leaders to demonstrate, through the choices that they are making, the kind of responsibility that will enable trust, you know? And leaders have to earn that trust through using the technology the right way.

Bill Pelster 17:18

No, Meghna, I really appreciate kind of the detailed commentary on some of the key things in employees’ kind of talent lifecycle. Varun, I’m going to go back over to you. And you know, how does this end-to-end reach change both the opportunities and the risks for organizations?

Varun Kacholia 17:32

Yeah, actually, I’ll pick up from, like you know, what Meghna was talking about. We are living in an age of technological abundance. We all have at least one very powerful computer in our pockets that has access to a whole range of supercomputers, so to speak, in, like you know, data centers all around the globe. And so this has created a massive upside of opportunities, a massive upside where these opportunities can be more accessible and equitable with the right set of responsible AI practices put in place. And this is something at Eightfold we have built not just into our models, but also our products. Meghna touched on a few, and I’m happy to walk through the entire range of them. One of the common things that we hear from many, many candidates out there is how they land on the career site for our customers, and they discover jobs that they would not have imagined to search for. It’s a consistent theme that we hear: they land, they upload their resume, and they are guided, they are recommended. They still have the choice to go search. But by far, technology is making opportunities more accessible, more discoverable. This shows up in all the metrics downstream. When you measure based on any, like you know, diversity metrics or otherwise, it shows up. Second, we have had some customers who used to mask candidate profiles before selecting which ones to interview. And they would do it by hand. They would print it, and then they would use a marker, and then, like you know, scrub out the name and a few other things that could be used. This is now built into the product, and it can be done in an intelligent way: masking whatever could create inconsistent access to opportunities. It doesn’t matter what the factor is; it’s about what could bias humans in those ways. Because at the end, as we say, our goal with all of these changes is: how do we improve access to opportunities? The next thing that Meghna said: AI Interviewer.
What a day, like you know, we are living in where we can have a consistent interviewing experience with a multimodal, intelligent agent that does not judge you in real-time, that assesses, like you know, your skills, and that makes opportunities for you more accessible. Before AI Interviewer, for many organizations, it would be hard to assess based on a one-pager, like you know, “Is this person a good fit for this role?” But now—just two weeks ago, we announced “Apply and Interview,” where every single candidate, after the job application, can have access to being interviewed, irrespective of the decision downstream, to present, like you know, their best foot… their skills forward, and how that might be applicable to this role. Succession planning… I think you touched on this a little bit. For many organizations, succession planning used to happen, and still sometimes happens, based on, like, who you know, who’s around you, who you interact with more frequently. And with AI, this can now become a lot more accessible. This is very applicable for large organizations, where experts are not always sitting next to you, where they might be in a different country, in a different region, but nonetheless, they might be the best fit for succession for a particular role that you might be thinking about. So this is something that, at Eightfold, in addition to our AI models, we have very thoughtfully woven into every single product: how our products will improve access to opportunities and expand them.

Bill Pelster 22:03

No, Varun, that’s… that’s very helpful, and I appreciate that. And I’m going to stick with you and ask you another question here, but I’m going to take it from a slightly different perspective, and we’re going to talk about balancing innovation with risk. And you know, as organizations are under a lot of pressure to innovate quickly with AI—it’s in the news all the time—but we also have a regulatory environment. We have boards, we have workers all asking for stronger risk management and oversight. And you know, there’s always a tension between moving fast and staying safe. From your perspective, how do you innovate quickly without creating unacceptable risk?

Varun Kacholia 22:39

Yes, and that is something that comes to me every single day, because the role that we play in building these products for our customers, for our community at large, requires making these decisions. And my framework is very simple. We need to be very intentional about where speed is allowed and where safety is non-negotiable. And we have seen this pattern in many, many industries previously. There are some where speed is okay, where you can go fast. And there are others where we want to make sure, as a society, that safety is absolutely thought through, end-to-end, with, like you know, different plans being put in place. And when it comes to responsible AI, for me, safety is non-negotiable. We have to work that out diligently. And so, as a result, what you and our customers have seen is that at Eightfold, we innovate rapidly because we can leverage a foundation of responsible AI that is rock solid, a very high-quality, thoughtful foundation that we have built and continue to improve. And on top of it, we continue building products where we can move at a faster pace, but they all tap into the same foundation of: Number one, transparency and trust. Show what, like you know, we are using, what the AI is using. How can you correct it? How can you control it? Number two, allow measuring fairness. Allow monitoring it, auditing it, and very transparently demonstrate all the things that we have done. And finally, when we are building products, we always think about how these products will help expand opportunities, not just accomplish a particular workflow or a business process. And that is accountability: responsible AI by design. This has helped us not only innovate but do it in a way that is more responsible, and it has allowed us to experiment quickly as well.

Bill Pelster 25:13

And Varun, I really appreciate that organizational view. And Meghna, I’m going to kind of ask you the exact same question, but from your perspective as a Chief People Officer, what does that balance between bold innovation and protecting employees actually look like in reality or in practice? Yeah.

Meghna Punhani 25:31

I mean, innovation is a fun thing, right? I would say that’s the very soul and bread and butter of what we do on a daily basis. So I think that’s what makes it really exciting for us here at Eightfold. It’s super important, but we also realize that it cannot come at the expense of trust for our customers, our people, our own employees, candidates, and so on and so forth. So I think we’ve all noticed that with technology, with AI especially, the technology is evolving faster than people are able to absorb it today. And fear and lack of trust really become impediments to, you know, that quick adoption and innovation within any organization. We are no different internally here at Eightfold. But pushing the speed of innovation without having proper guardrails is definitely going to erode trust. And so, you know, if employees feel that they don’t understand what’s deciding their fate and everything is going into a black box, the result is going to be not what we expect. They will respond with a lot of fear and resistance and would find it really hard to adopt new ways of working, right? So to innovate fast without breaking the human connection, we focus on three pillars of adoption internally. First of all, absolute clarity on decision boundaries, right? Setting expectations that AI is here to support people; it does not replace them, right? We are able to surface the data, humans are making the final decisions, and the accountability that comes with this is explicit, and it’s absolutely non-negotiable. So defining those decision-making boundaries and driving that with clarity is of utmost importance. Then the second way to look at it is, you know, not focusing too much on efficiency, but focusing on expanding the opportunity for our people.
So how do we prioritize use cases that feel helpful for people, that are going to help them drive productivity, enable them so that they’re able to free up time for, like, judgment, connecting the dots, and so on and so forth, right? Like, that’s where the human decisions, judgment, and thought process really, really matter. And then finally, visibility and control. We keep AI visible to people in the way it affects them, right? Because we use a lot of our products internally, also. So all of these points, you know, apply to us internally, too. So candidates and employees can see their profile, see how the system is, you know, really using their data. And that level of transparency with our own employees, on their skills, with our candidates, builds confidence, right? And that’s what leads to more trust, and that’s what leads to more innovation as well. So you really have to treat it as a change management journey, especially with a technology like this. And really listen to the feedback that your people are telling you, and then turn the tools toward real human needs, you know, where we are able to automate the repetitive tasks and free up the humans to do what they do best. And that’s where I feel innovation lies, and you would be able to tap into the speed for innovation.

Bill Pelster 29:09

No, and Meghna, I really appreciate the perspective and framework you provided there. This whole topic of governance, accountability, and compliance is really important. So Varun, if you don't mind, I'm going to go back and go a little deeper. There are so many new regulations and guidelines around AI and employment emerging globally, and boards are asking for evidence of governance, not just intentions. When it comes to governance, accountability, and compliance, who actually owns this? And in practice, Varun, how do you and Meghna partner with other functions like Legal to ensure Eightfold stays ahead of things like the EU AI Act?

Varun Kacholia 29:44

Absolutely. Many of these things are critical and timely, and as a society we are bringing them into practice now. Historically, governance has been either an IT or a GRC project, owned by the governance, risk, and compliance team in most organizations. Many times it was done as the last mile after the product had been built, just to ensure the minimum required checkboxes were ticked before it could be adopted or the implementation could be completed. That is fundamentally changing. It is no longer an IT or GRC project; these things are being embedded deeply in every part of the organization. So let's talk about it. First, the builders: the engineers, product managers, and product owners must all be thinking deeply about how to build products in a way that meets not only the current regulatory bar of governance, but also where, as a collective, we want responsible AI to go. Number two, Legal and HR teams are no longer the last mile, the team that just ensures products meet the basic minimum. They are continuously collaborating on how these products are built and how they can be further improved, at every step of the journey. At Eightfold, we have done this on an ongoing basis for the last 10 years of the company's journey, and I'm glad this is now becoming more and more embedded in every single product being built across the industry. Meghna and I collaborate very closely, along with our Legal team and our Customer teams, so that we not only meet our compliance requirements, but continue to set a high bar at Eightfold on responsible AI, on trust, and on transparency.

Bill Pelster 32:33

No, I really appreciate that, Varun. And Meghna, I love the way Varun framed it: governance isn't an IT project anymore; it's a cross-functional operating model. With that perspective, how has HR governance evolved?

Meghna Punhani 32:46

I mean, I think the role of people leaders is evolving completely with how AI is moving and the way it's bridging boundaries across organizations. Responsible AI in HR really sits at the intersection of the integrity of the technology, legal compliance, and employment equity, so you need many groups to come together and operate at scale. Chief People Officers, and people leaders generally, are at the heart of driving that accountability and that cross-functional collaboration across all of these groups. HR governance used to be mostly about policies and processes. But the role of HR has evolved, and now it also has to prove AI's impact: how outcomes are being delivered and how trust is being built across the entire talent lifecycle we discussed earlier, whether that's hiring, mobility, promotion, performance management, or general workforce planning. So HR is no longer just asking, "Did you follow the process? Did we check the compliance box?" but "Did we get it right? Did we do it the right way, with transparency? Are agents and humans working in the right places, and is a human really accountable for the oversight so that we are not making mistakes?" It also requires good change management: clear employee communication, assessing the impact on the workforce, and, at the end of the day, making sure all the work we do aligns with the values the company holds important, whether that's privacy first or opportunity first, and not just a focus on efficiency and productivity.

Bill Pelster 35:07

And Meghna, I really appreciate how the entire theme here keeps coming back to the human staying in the loop. I'm going to switch gears for the final section and ask you both to put your forecasting hats on: as we look ahead, what does responsible AI look like on the horizon, especially in HR? Varun, I'm going to come to you first. With AI and technology evolving so rapidly, what is your bold prediction for three years from now? How does agentic AI change the stakes for responsibility?

Varun Kacholia 35:37

Absolutely. As I said earlier, we are living in an age of technological abundance, and by all measures we can see, this is not slowing down; it is accelerating and continues to accelerate. One paradox comes to mind: Jevons Paradox. Jevons observed back in the day that when the efficiency of steam engines improved, the coal industry was very concerned that the consumption of coal would go down. What Jevons observed, and that is how this paradox got its name, is that when efficiency improves, consumption many times goes up. And that is what has happened for humanity, whether with steam engines and coal, with electricity, with the Internet, and now with AI and agents: as efficiency improves, consumption goes up. This applies to all of us as well. Many of these AI apps are now very easily accessible, and I've observed my own usage go up; the same is true across the products we all use in our day-to-day work. I look at all of them, even the new products we are building, not only from an efficiency point of view but for what they enable that we couldn't have done in the past. A very simple example is AI Interviewer: there is no way any organization could have interviewed 10x or 100x more candidates in the past, or interviewed them outside business hours, on nights and weekends, when it might be best for those candidates. But now technology has enabled us to do that. There is no way that, as a manager, I could have debriefed, assessed, and reviewed 10x or 100x the candidates interviewed, but now technology allows us to do that. So consumption for all of us will accelerate and grow dramatically. And this is where your next question comes in.
The stakes are high. All these systems, including products like Eightfold and also the public large models out there, have to be built with responsibility in mind. This is a very active topic in the tech industry today: how are these systems contributing positively to society while reducing the downsides and the risk? The three pillars we talked about, transparency, auditability and monitoring, and accountability by design, will be more important than ever. Systems built on these three pillars will have an outsized impact, not just for their customers and users but, I strongly believe, for us as a collective humanity overall. And the ones that aren't will sooner or later have to be rebuilt on top of these principles.

Bill Pelster 39:25

Varun, I love the way you talk about abundance built on these foundational pillars. Meghna, can I ask you for your predictions?

Meghna Punhani 39:33

Yeah, absolutely. When I think about the HR side, as I said earlier, people teams are really having a moment. We are totally at an inflection point right now. I used to think five or 10 years out, but we've stopped doing that, because everything is evolving so quickly. If you really think a couple of years out, talent leaders who adopt AI, and do it responsibly, will have a real competitive advantage: not just in efficiency, but, as we were discussing earlier, in how trust is really built within their organizations. I see three fundamental shifts happening in the next few years. First, roles are going to get redesigned around what AI will do versus what humans will do, and we are already in the process of making that happen. It becomes a team of humans and AIs, with work split between human judgment and AI execution. Keeping skills at the center of that redesign will be the growth strategy, and responsible organizations will be very clear about which decisions stay with AI and which stay with humans.

Second, careers going forward are going to be absolutely fluid. The linear progression, the career ladder, may or may not exist in the future, because AI will continuously match people to work based on their skills. So it's our responsibility as HR leaders to make sure we use AI to expand human opportunity, not narrow it. And finally, accountability in all of this is absolutely critical: if an AI agent triggers any of these moves across the talent lifecycle, someone has to own the outcome. I still think governance will sit with HR and IT together. Responsible AI is no longer a footnote; it really becomes part of the employer brand and a competitive advantage. Companies that choose to do it responsibly will obviously have a huge advantage in the future.

Bill Pelster 42:11

No, and Varun and Meghna, thank you for a really thoughtful conversation on probably one of the most important topics out there. Everyone's focused on the technology, but there's something deeper here, and I like the way we talk about responsible AI. I'm going to close on what I keep hearing as the True North, because when we're working on things that are moving very quickly and we're under a lot of pressure, it's always good to understand what those True North principles are. Varun, you said them at the very beginning, and I'm going to close by repeating them. First, transparency: that's what people want. I may disagree with the answer, but if you tell me where you got it and I can see it, at least I understand it. Second, fairness: being unbiased, understanding where a result comes from, and having procedures to test, monitor, and audit for it. And third, accountability by design, which I think is a game changer. Humans are building these agents, agents are going to be building other agents, and hopefully humans are going to be auditing the agents; "responsible by design" is one of the North Star principles there. So with that, thank you very much. I hope to continue this conversation, not just for Eightfold but for the entire industry, as we race ahead into a future that I think is exciting and abundant. But hold on tight, because it's moving fast. Thank you very much for a great conversation today. Thank you, Bill.

 
