The state of AI in HR: Josh Bersin and our Co-CEO and Co-Founder Ashutosh Garg talk emerging tech in the talent space

AI is making big headlines. In this exclusive Q&A, HR expert Josh Bersin interviews our Co-CEO and Co-Founder Ashutosh Garg on the state of AI.

AI is impacting our lives faster than ever before and creating opportunities to improve how we do almost anything — including hiring and managing talent.

Talent leader Josh Bersin is an advocate of organizations using AI to help them in their talent acquisition and management processes. In fact, he’s said that in this new world of work, it will be impossible to remain competitive without it.

Recently, Bersin sat down with Eightfold AI’s Co-CEO and Co-Founder Ashutosh Garg to discuss the state of AI in HR. Garg has deep roots in the field of AI and is equally invested in pairing people with the right careers using a skills-based approach. Here, Bersin and Garg discuss how AI technologies can positively impact the talent space, what they think about new regulations and concerns about AI, and ultimately, why talent intelligence is the only way to get ahead of the competition and stay there. [Ed note: Quotes have been edited for clarity and length.]

Related content: Watch the full interview about the state of AI in HR with Josh Bersin and Eightfold Co-CEO and Co-founder Ashutosh Garg here.

Josh Bersin: AI has suddenly become one of the most important issues in the world and certainly in the role of HR. What role do you see Eightfold playing in this transformation?

Ashutosh Garg: We started the company six years back, and our key thesis was that employment is the most fundamental thing in our society. The question we asked was: can we actually use the power of AI and machine learning to understand people’s potential, not just what they have done, but what they can do next? Because we live in a world of skills, we can really understand the skill capability of every individual.

Now, especially over the last year as AI has become a lot more mainstream, there are two things happening. One is the rate at which AI is maturing, enabling a number of processes to be taken over by AI. Here, Eightfold AI is playing a leading role in helping HR professionals do their jobs in a more consistent, unbiased fashion. The second is the change that AI is bringing to work itself.

Every role in the world is changing: how I’m doing my job, how you’re doing your job, how everyone else is doing their job. The implication is that HR professionals have now taken the front seat in guiding the organization through this changing world of AI.

JB: How do you maintain privacy and security?

AG: At the heart of this, it’s not about an individual. It’s not about who John or Mary is. It is about skills: what skills a person has, how those skills are developing over time, how that is reflected in the success they’re having in their job, and what is driving their future development. It’s really all about that.

At Eightfold, when we talk about a billion people, we really think of it as a billion career trajectories: how different people have progressed, right? So we anonymize all the PII [personally identifiable information] about people, and we focus on their capabilities, skills, potential, and career development.

Related content: Learn more about Responsible AI at Eightfold.

JB: All of a sudden there seem to be dozens of companies trying to analyze skills. Tell us just a tiny bit about what you do to analyze skills that’s unique.

AG: Six and a half years back, when we started the company, everyone was asking, why AI? The second question was, what is a skill? How do you define it? Is it a skill, is it a capability, or is it some other keyword or artifact, right?

And what we did from day one was say, “Let’s understand what people have done, with the primary purpose of understanding what they can do next.” So it was less about what you have done or what your résumé says, and less about a simple analysis that just walks through the details.

We look at what skills this person has in year one, what skills this person has in year two, and what skills this person has in year three, using that to understand how the skills in year one are reflected in the skills of year two and the skills of year three. That way, when a company is trying to hire this person or promote internal mobility, they can say, “OK, this person has had these skills to date,” and what that implies about this person’s skills next year. So it has always been about skill learnability, future skills, and skills potential.
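[Ed note: As a rough illustration of the year-over-year idea Garg describes, here is a minimal sketch in Python. The trajectory data and the simple counting approach are invented for illustration; they are not Eightfold’s actual model or data.]

```python
from collections import Counter, defaultdict

# Toy career trajectories: each career is a list of yearly skill sets.
# Purely illustrative data, not Eightfold's model or training corpus.
trajectories = [
    [{"sql", "excel"}, {"sql", "python"}, {"python", "machine learning"}],
    [{"java"}, {"java", "spring"}, {"spring", "microservices"}],
    [{"sql", "python"}, {"python", "machine learning"}, {"machine learning", "deep learning"}],
]

# Count how often a skill held in year t co-occurs with a *new* skill in year t+1.
transition_counts = defaultdict(Counter)
for career in trajectories:
    for year_now, year_next in zip(career, career[1:]):
        for new_skill in year_next - year_now:
            for current_skill in year_now:
                transition_counts[current_skill][new_skill] += 1

def likely_next_skills(current_skills, top_n=3):
    """Rank skills a person is likely to acquire next, given what they have today."""
    scores = Counter()
    for skill in current_skills:
        scores.update(transition_counts[skill])
    for skill in current_skills:  # don't recommend skills the person already has
        scores.pop(skill, None)
    return scores.most_common(top_n)

print(likely_next_skills({"sql", "python"}))
# On this toy data, "machine learning" ranks first as the likely next skill.
```

On this hypothetical data, someone with SQL and Python today would be ranked as most likely to add machine learning next, which is the sense in which skills to date say something about skills next year.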

Related content: Read more about how our AI works and what goes into its development and oversight on our AI FAQ page.

JB: My assessment is that you are at least one generation ahead, maybe two, of most of the technology in the market that’s trying to do this. And I think that’s because of your AI background, the depth of the problem you’re trying to solve, and the way you have been thinking about it for a long time.

AG: When you think about skills, another thing we think a lot about is the skill context. What is the context in which you’re talking about this skill? 

The other thing that has worked out well for us, and that is important for this industry, is that from day one we were all about diversity: how do we reduce the bias? And one of the conjectures we had from day one was that anything that is not relevant for the job should not be part of the résumé.

JB: So you just ignore information that’s not relevant.

AG: For the job, right? And a simple thing was that your name is not relevant. Your age is not relevant. Your gender is not relevant. Your race is not relevant. Your ethnicity is not relevant. In fact, it doesn’t even matter whether you worked at Google versus Facebook versus Microsoft. What is truly relevant is your skills and your learnability. So it almost came out of thinking about “what is relevant for the job?” and focusing on that.
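[Ed note: A minimal sketch of the idea of keeping only job-relevant fields, written in Python with hypothetical field names. It illustrates the principle Garg describes; it is not Eightfold’s implementation.]

```python
# Hypothetical candidate record; field names and values are illustrative only.
candidate = {
    "name": "Jane Doe",
    "age": 42,
    "gender": "F",
    "ethnicity": "Prefer not to say",
    "current_employer": "Example Corp",
    "skills": ["python", "data analysis", "mentoring"],
    "skill_history": {2021: ["excel"], 2022: ["python"], 2023: ["data analysis"]},
}

# Only job-relevant signals are kept for matching; identity and employer brand are dropped.
JOB_RELEVANT_FIELDS = {"skills", "skill_history"}

def strip_irrelevant(record):
    """Return a copy of the record containing only job-relevant fields."""
    return {field: value for field, value in record.items() if field in JOB_RELEVANT_FIELDS}

print(strip_irrelevant(candidate))
# Only 'skills' and 'skill_history' remain; name, age, gender, ethnicity, and employer are gone.
```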

JB: What do you think we should do about regulation in AI, and what role does Eightfold play in all this?

AG: Like any important technology, any important development in our society, as they say, with great power comes great responsibility. And AI is one of those things. AI is extremely powerful. It can do a lot. But if you’re not responsible with it, you may not go too far with it, right? Now there’s a debate going on around who is responsible and when. Is the buyer of this technology responsible, or someone else?

And frankly, the answer is all of the above. As a candidate, it is my responsibility to ensure that I’m providing the right data. As an employer, it is my responsibility that when I’m using these systems, I’m using them in the way they are designed and not misusing them, using them to reduce bias rather than perpetuate it.

But these are complex systems. These are complex technologies, and expecting everyone in the world to know their intricacies and details is hard.

So I see these regulations as a good thing, provided they do not hinder development and innovation. AI should be developed with transparency in mind. AI should be developed with the right analytics, so that you can see how these systems are behaving and whether what they are doing matches the intended use they were designed for. But then all three of us, vendors, users, and employers, should take responsibility for making sure these systems are designed and used well.

JB: Geoffrey Hinton was in The New York Times talking about the fact that he believes that AI could end the human race. What’s your position on these kinds of conversations? 

AG: If you look at the history of the last 60 to 70 years, it’s the longest stretch in which we haven’t had another world war. And the reason is nuclear weapons. Nuclear weapons have deterred that kind of war, right? So if you think from that angle, these technologies are extremely powerful.

Can they be misused? Yes, they can be misused, right? You can really train an AI system to do extremely bad things in society. Yes, you can. Which is true for any technology out there. So AI is not the only thing that can end the human race — thousands of other things in the world can end the human race.

The world is still full of a lot of problems that need to be solved. Six months back, I lost my dad to cancer, and I wish there had been a treatment available. What I was told by every doctor was, don’t even bother with a third treatment; it’s just going to be painful and not affect the outcome. I wished that AI was advanced enough to come up with a solution to those kinds of problems.

There’s a lot AI can do in a positive sense. So my suggestion would be, let’s focus on building these practices that are aligned to solve the world’s problems, including employment and HR. And let’s be responsible.

JB: One of the big questions customers have about vendors is credibility. How do we know that you’re as good at this as we think you are?

AG: Eightfold’s credibility comes from multiple places. One is years of experience. I personally started doing AI 28 years back, when we were doing things like Markov models, speech recognition, hidden Markov models, Bayesian analysis, and so on. Working across multiple domains over the years gives us a sense of how the data works, how these models learn, how well they generalize, and what will or will not work well in the field.

The second thing is the scale of the data with which you are working. Today we are working with the data of more than a billion people across the globe. It’s not about one person here, one person there, or one industry, right? It comes from the unique data sets of outcomes that we have collected over the years. So any time we go to customers, we try to collect as much data as we can, anonymized, of course, to preserve privacy and security, so that we can learn from every outcome that enterprise has.

It comes from not only using the best algorithms out there, but actually pushing the envelope, pushing the frontiers of advancement. A simple example is focusing on Equal Opportunity Algorithms that enable systems to learn across every protected class, ensuring that the behavior of the system is the same for men versus women, young versus old, no matter who you are, right? It comes from the analytics of people. It comes from the transparency that we bring to everyone. It comes from the patents that we have filed over the years.
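[Ed note: In the machine-learning fairness literature, “equal opportunity” is commonly checked by comparing true-positive rates across groups, that is, whether qualified people are surfaced at the same rate regardless of group. The sketch below, in Python with invented toy data, illustrates that check only; it is not Eightfold’s algorithm.]

```python
from collections import defaultdict

# Toy outcomes: (group, model_said_qualified, actually_qualified).
# Invented data; an equal-opportunity check compares true-positive rates across groups.
outcomes = [
    ("men",   True,  True), ("men",   False, True), ("men",   True,  False),
    ("women", True,  True), ("women", True,  True), ("women", False, False),
]

def true_positive_rate_by_group(records):
    """For each group: of the actually qualified people, what share did the model surface?"""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        if actual:
            totals[group] += 1
            if predicted:
                hits[group] += 1
    return {group: hits[group] / totals[group] for group in totals}

print(true_positive_rate_by_group(outcomes))
# {'men': 0.5, 'women': 1.0} -- a large gap like this would signal unequal opportunity.
```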

So when we started the company, I had no idea what HR was. I didn’t even know what an ATS was, and I would struggle to figure out what HR was versus HRMS. But the interesting thing because of that was we never thought in terms of the usual fragmentation. We never thought we were solving a talent acquisition problem or a talent management problem or a diversity problem or a succession-planning problem or an L&D [learning and development] problem or a career-development problem or a payroll or performance problem, right? We always thought it was a talent problem. The enterprise needs the best talent that can do the work, wherever that talent is.

So you can’t think of talent as siloed, and the reason that is important is that once you start cutting across the entire talent lifecycle, that is when you have the best understanding of the data. Now you can think of who I am attracting, who I’m hiring, who I am promoting, who I am growing in my company, who I am upskilling, who I am retaining over time, right, and which skills do or don’t tell me that story. So that has been our big focus area and a different approach to solving this problem.

JB: How do you think HR people should explain AI to their function and their peers? 

AG: The simplest way I would say it is, “Think of AI as a human being who can read all the text that is out there in the world, make sense of it, access it in real time, and help with decision-making.” But at the same time, it is like a human being, so it will make mistakes, and it will have flaws in its thinking and approach. A highly scalable human being is how I think of it, right?

Or I think of AI as “assisted” intelligence, not artificial intelligence, right? It’s not magic. But it can do a lot.

It’s now here. It is here to help you do your job consistently, at scale, on an ongoing basis, and to help you adapt to changing realities. The world is moving fast. And if you don’t do it, you’re going to be left behind.

Watch the full interview with Josh Bersin and Eightfold Co-CEO and Co-Founder Ashutosh Garg here.
