Agentic AI marks a profound shift in how work is designed, delivered, and governed, with significant implications for HR. Yet, most HR teams remain unprepared — only 21% of leaders are shaping AI strategy, and even fewer have adapted their practices. The challenge lies in understanding the risks and opportunities for workforce structures, skills, and leadership, while navigating complex and evolving AI regulations.
This on-demand session brought together forward-looking HR leaders to share strategies for building systems, mindsets, and partnerships that prepare people for the next era of work. By watching you will gain insights on moving beyond reactive transformation, equipping employees with critical skills, and creating strategies that balance organizational protection with workforce potential.
This HR Leaders session featured forward-thinking experts sharing strategies to prepare the workforce for the future of work.
The shift from traditional AI to agentic AI signals a deeper transformation:
Despite growing urgency, few HR teams are ready for the change:
Future-ready HR teams are focused on creating resilience and adaptability:
Hey everyone. Good morning, good afternoon, good evening, depending on where you're tuning in from. My name's Chris Rainey, co-founder here at HR Leaders, and I'll be your host for today's live panel discussion where we're gonna be talking about how to prepare HR for the age of agentic AI.
Before we jump into the discussion, for those that are new, let me give you a quick tour of the platform. On the left-hand side, you've got the live chat, so just take a second.
Introduce yourself and let everyone know who you are and where you're tuning in from around the world. Also, take a moment to hit the follow button, 'cause if you have to leave you can get the on-demand replay sent straight to your inbox and share that with your colleagues as well.
With that being said, let me introduce you to our amazing panelists. We’ve got Andreas Meyer, who’s the Director of Employee Experience and Operational Excellence at ING, Claire Nord, who’s the global HR Director for Dell Technologies.
Jason Bloomfield, who's the Global Head of Talent Acquisition Transformation at Ericsson.
Christophe Geral, who’s a workforce transformation partner at Deloitte.
And last, but certainly not least, Daniel Florian, who's the Head of AI Policy and Regulation at Eightfold.
Nice to see everyone. Hope you're all doing well. We've got a truly global audience from all over the world.
Andreas, I said, I’m gonna pick on you first, so I’ve gotta hold up to that promise.
So let's jump in with an easy question. What role do you expect agentic AI could play in how HR reshapes the way we attract, develop, and support our organization? Just a small question to kick things off.
Yeah, thanks a lot, Chris. How much time do you give me for answering?
We’ve got 45 minutes. Just go for it. Yeah. Okay.
Okay, let me try to answer it in less than 45 minutes. Well, look, I think it's nothing else than, first of all, the complete removal of what we still largely have in HR today, which is the Ulrich model.
There might be some companies who are taking the step beyond the Ulrich model, but I think the vast majority of companies still use this HR operations model.
I believe that, well, let's give it a timeframe of between three and seven years. Those companies who are a bit more mature and a bit more progressive might take three to five years; those who are a little bit behind might need five to seven years. But in that timeframe of three to seven years, I think the Ulrich model is totally gone.
And we will see a development where the HR function becomes more of a platform rather than a traditional central, local, global HR organization.
It will be a platform where agentic AI replaces the work that HR business partners, COEs, and people services are currently doing. So these traditional roles, I would say to an extent of about 80% or even more, will be gone.
Now that does not mean that 80% of all positions that we have today in HR are completely gone. Of course, new positions or new HR roles will emerge, for example in the area of data insights and workforce strategy. When it comes to experience, it will be all about employee experience design. And when it comes to operational topics,
it will of course be the HR tech orchestration that is still required and will also further evolve. And something that comes up quite a lot is the ethics side and the governance side: governing the AI and managing the AI in a certain way, also from an ethical perspective, from an empathy perspective, and also from the idea of coaching and collaborating between HR, or humans, and AI.
So these are the new roles that will emerge. But I think everybody needs to be prepared. The world as we know it, from an HR perspective, is coming to an end, and it's probably going much faster than you think.
Wow. I feel like we just lost all of the audience now, 'cause you scared them away. I'm only playing. I'm gonna be quiet now, 'cause I'm so excited to hear the rest of the panelists' perspectives on this.
But thank you so much, Andreas, for kicking us off. And Andreas, I love how you describe it: reinvention rather than replacement.
Yes. I think there's that very, very scary narrative, as you just alluded to, where a number of people might start freshening up their CVs.
But I think, you know, agentic as a term is not as intuitive or easily understood as generative AI; generative AI is self-explanatory.
Agentic: does it mean it likes digital martinis that are shaken, not stirred, as it defeats some evil villain? I prefer to think of it as transactional AI, because transactional is really the fundamental distinguishing factor, right?
And if we think about HR, or we think about other functions, whether it be sourcing, legal, or marketing, it's that shift from activity to impact, right?
Mm-hmm. So as we think about who we can attract, we can attract in a more diverse way, inferring even where a word might not be explicitly listed on a CV.
There's adjacent experience, there are adjacent skills which we can infer.
So we can attract from a broader, wider net.
We can develop our people based on their interests, their performance ratings, their level of potential, and comparisons with the workforce at large. We can see patterns for those who tend to excel.
And from a support standpoint, now we can make a lot of the employee services that we make available today through phone or email, even more omnichannel.
So now I can start in one channel, conclude in another, or go in between seamlessly without losing any progress.
Those are some really interesting new use cases for transactional ai.
Yeah. And if I may add to that, I also think it's not a question of what AI can do, but what it should do.
And at the end of the day, that's a design choice we must own within HR.
And again, I also think it's better to move from seeing AI as a technology to AI being a teammate of ours, an actual team member within our HR departments.
Because at the end of the day, in my opinion, and Andreas, you were pointing it out, in three to five or seven years' time our HR workforce will be made up of human beings as well as AI agents.
And I think it's therefore not about a fear of replacement, of losing all jobs, but really a way to rethink work instead of just imagining which tasks can be done by AI or by humans.
I agree with you, Christophe. I think AI is going to become one of our team members like anybody else, and it'll also be an enabler of our human team members.
So that’s the combination that will actually bring much more value to our internal customers.
And I personally think that it's our opportunity to actually shift back.
We talk a lot about technologies right now, especially in HR.
And we should. But I think in the future it might be our opportunity to shift back towards what makes us really human, and where we want to be present as humans in these interactions within our organizations.
Yeah, I like what you said, Andreas, about HR as a platform.
And you, Jason, about the shift from activity to impact.
I think what AI allows us to do is to see the HR function in a bit more of a context.
Traditionally, we had very good knowledge about our employees on the day after they started in their new role and new function.
We knew their skills, their talents, their ambitions, but from the next day onwards our knowledge about all these employees gradually declined. AI allows us to keep up to date with the skills of our employees.
And I think that's particularly important at this point in time because we don't quite know yet what skills will be important in the future.
So AI might help us to find that one employee who has already made a career pivot in the past.
And this experience of learning and unlearning skills is something that's super valuable.
So this person might be a perfect partner to become an AI champion within my organization. And that is what AI allows us to recognize more quickly and more effectively.
Love that. We're talking about the technology part, but this is in reality a huge mindset and culture shift for everyone in your organization.
It would be good to understand from yourself, Jason: how are you equipping your HR team with the knowledge and mindset shift to help them operate alongside agentic AI?
That’s another small question, right?
That’s really easily answered.
You know, whether it's HR or any other functional area, empathy is really key.
It's what differentiates us from technology now.
It’s what’s going to continue to differentiate us in the future.
And the elephant in the room, then, 'cause that's what you have to think about and you have to tackle the elephant in the room, is, rhetorically: what does it mean to me?
What does it mean to my job?
And so my approach now is, and the way I think about it and talk about it, is everyone is going to move into a management role with one or more digital direct reports.
Each of those has a unique role.
Each of those has unique responsibilities, all reporting into you,
their human manager. And maybe it might be helpful to think of an adjacent kind of term.
Think about which activities you might delegate, which activities you might choose to outsource.
Those are the highly repetitive things that get in the way of creativity, that get in the way of curiosity.
It’s the things we have to do rather than the things we love to do.
And that’s what we now get to delegate as part of our jobs to a digital direct report.
And then secondly, with agentic AI, if you hear productivity or cost savings as the why, those are very weak and lazy whys.
You're gonna be heading down a path where for sure you're gonna have arms crossed because of resistance, and your adoption's gonna suffer for it, and you'll never quite recover.
It’s a lazy why. It misses the full and the true, the aspirational promise of being able to be freed from the things we have to do so that we can do more of what we love to do.
For people leveraging agentic or transactional AI,
it's our opportunity, each and every one of us, to do more of what we love to do, to do it better, to do it easier, to do it faster, to do it with greater impact than we're able to do it today.
So those are the two mindsets that I would posit for you.
One is the one to embrace and the one that I believe is a win-win value proposition people can understand and rally around.
And equally, there's the lazy why, the lazy mindset shift around productivity gains and cost reductions.
Those are warning signals.
If you hear those, I would say take that person aside and counsel them quietly to have a further think on it.
I think it also has a lot to do with trust and encouraging people to experiment, within very clear boundaries of course, because we all know the risks associated with AI.
So that's really the approach that we are taking here at Dell Technologies: combining more formal training, making sure all the team members have the AI foundations and understand what agentic AI is about and how they might leverage it in the future,
and at the same time strongly encouraging them to experiment and test, with a very clear set of guiding principles.
Yeah, and I couldn’t agree more, Claire.
It's what we see with our customers: very often, like we mentioned before, technology seems to be this one solution to everything.
This one-hit wonder, right?
And on the other hand, it starts far before that.
So it's about being AI literate, in order also to understand what I can expect from AI and what not.
We need this digital playground, Claire, that you're referring to, where our employees can test out things in a safe zone without being punished if something goes wrong.
Because today we don't know what our future jobs will look like in terms of the skills required, but we're also not a hundred percent sure what kind of tasks or outcomes we should shift to AI and what should remain human, for example.
And therefore it's really important to give that digital playground to try things out.
And I think the coexistence of human beings and gen AI is really a mindset shift and also a behavioral change we need to address.
And behavioral change is not only about training and communication; it's a process where we need the purpose, as you mentioned, Jason, in order to really make it happen,
and also to see the benefit, what's in it for me.
Otherwise I wouldn't do it, or I would be afraid or scared of it.
Yeah.
Yeah. If I may add, looking a bit across the HR community, I think there might be companies like yours, Claire, who are pretty much already making good progress.
But I know there are many, many companies who, at the moment at least, from an HR perspective, are still struggling with the impact that AI has and also with what it actually means conceptually for HR.
On the one hand, how can they prepare their workforce in the business for AI?
But then, even more important, how do they prepare HR for it? And I think if HR itself is not prepared, how can you then prepare the workforce?
So I think it even needs to start with your own organization.
You need to be a front runner on AI in HR if you really want to then also support and coach your workforce for the age of AI.
And I think that's where we are really far behind.
We have to start thinking the unthinkable in HR.
That goes up to the point of, as I said before, the operations model, where today we are working in central, local, and global HR teams.
We need to think more about how we can use AI to, for example, have more in-business HR as well, so that we have more HR sitting directly with the business rather than in central teams or local teams.
And then we also have to think about how we really make AI work as a tool that helps us achieve the goals that we have from a business perspective.
So I think that's something where HR is still very much behind.
I would agree; I think this automation anxiety is real.
There was recently a poll I came across from the US which said that more than 50% of workers were actively trying not to use AI because of the fear of automation, or of being laid off as a result.
I think it's worthwhile to think about the deployment of AI in various steps and processes.
So in the first step, you do your normal job, but you use a chatbot to support you in this.
In the second stage, you do your job and use a chatbot, but this chatbot is grounded in your company's knowledge space.
So it knows your policies, it knows your tone of voice, and all this kind of stuff.
The third step is what we are talking about now, which is agentic AI, which can really help teams get rid of a lot of the repetitive, manual, tedious tasks and automate some of this.
And this gives your team the space to think about how you want to reimagine workflows across your team.
And once you’ve gone through all these stages, what you have in your HR department is a playbook.
And this playbook you can share with other teams to make sure that every other team in your organization has the confidence that AI is there to help you become more successful and effective in your job rather than replace you.
Yeah, so many great points there.
And I love going back to Jason's point about the word curiosity, right?
We're so flooded with work
that we don't even give ourselves an opportunity to remain curious.
And Claire, you mentioned creating a safe sandbox for people to play in, right?
And be curious and innovate and try. We talk a lot about the technology, but I'm excited about what we're gonna do with that time now, creating the curiosity and the innovation that can come within the organization as well.
A couple of people mentioned this in the chat, and we're gonna get to this topic anyway, so it's probably a good time to go back to it.
Daniel, back to you. A lot of people in the chat have asked questions around regulation.
So you're our go-to correspondent on AI regulation. We've got global AI regulation evolving rapidly.
How are you staying ahead and what advice would you give to people listening?
Yeah, obviously AI regulation is becoming increasingly important.
And that's not just the case in the EU with the EU AI Act, but also in the US, where many states have started to regulate AI, and in Asia and other jurisdictions around the world.
I would say, however, that I believe HR teams are really quite well equipped for this environment, because they're already used to working in highly regulated environments.
So what is needed, I think, is becoming more fluent in AI technology, but also in AI regulation, to be able to talk at eye level with your internal stakeholders, whether it's legal or AI governance boards or procurement, to make sure that you have the tools that are best suited to solve your problems.
And I think what's important to understand is that not all models are equally well placed to do every task.
Some colleagues of mine recently published research where they compared standard large language models to our own proprietary model.
What they did is use these models to find matches between candidates and jobs, and they found that these standard models are less capable of finding the right matches when it comes to metrics like accuracy or fairness.
So really understanding these models and how they work and the strength and weaknesses is crucial.
And that’s not just for performance reasons, but also for compliance reasons.
And I would also recommend tackling this topic really proactively.
So not just reactively, based on whatever regulations or compliance we need to meet, but really being transparent about where you use AI tools, for example.
Mm-hmm. Because this also reduces fear among your employees.
The other thing is to offer, for example, bias audits for your AI tools, in order to say: yes, we take this seriously, and we also make sure that we are not blindly trusting the system. Because this is what we have all learned about AI as well: that hallucination happens, that language models are only as good as they are trained, and that we also need to train them with our own data and use them in order to improve them.
So that’s not self-explanatory.
And I think the more transparently you talk about this and also act on it, the easier it is.
At the end of the day, for internal adoption, it's transparency that is the important point.
It's really key. There's so much black-box
myth around things, and that's what's given such rise to XAI, right?
Explainable AI. I'm looking at a number of technology vendors, right?
And in various use cases, it'll prescribe or suggest an outcome or suggest the next action, but I wanna be able to interrogate it.
I wanna be able to know why: how did you come to this conclusion, so that I can agree or disagree?
It's like back in school, you know, in math: show your work, right?
How did you get to the answer?
Was it a hallucination or was it an accurate one?
Or one that I happen to agree with? That transparency, that explainability, that un-black-boxing, so to speak, is absolutely key to adoption.
Hmm. I think you also need to ground the implementation of AI in your values as a company.
So beyond transparency, which is definitely a key point and which can dramatically impact
trust, there's the real question about values and our ethics as a company.
What are our values, and how are we going to make sure that we stick to and see through those values?
Yeah. One of the questions that we had planned, which is a good segue from there, maybe to come to Christophe on this: as AI grows and becomes more autonomous, how do we decide what stays human-led versus AI-driven?
Yeah, it's a good question, and I would even challenge it: is it really about AI versus humans, or is it more about AI with humans by design in the future?
And what we do is we have this three-part lens approach.
So we say: if it's about human judgment and empathy, those are definitely topics that should stay human; that's very obvious.
So it's tasks that require emotional intelligence, ethical reasoning, as we were just mentioning, or complex interpersonal topics.
This is for humans and should stick with the humans. On the far other end,
there are lots of things that we have also mentioned today: things that we can scale, that we can automate, which is not really AI, but also things where we say, well, we use masses of data and have AI evaluate it, analyze it, et cetera.
So that's topics like payroll anomaly detection, for example, or job descriptions created automatically, or the nice topic of moving towards the skills-based organization,
mapping skills to jobs.
Certainly this is something you can do manually, but you will certainly have much more benefit and efficiency if you do that together with supporting technology and AI.
And that's the third lens, actually, where we say there are lots of topics, like strategic workforce planning for example, where we can rely on technology and still need humans to cross-check and validate it, et cetera.
And this, at the end of the day, I think is then again a good reason for saying: well, it's not about saying AI can or should do this task and the human should do that task, but about asking how HR work should look in the future.
So what's the future of work for HR?
And this will make it necessary
that some roles are adapted.
Some roles might also be retired, but there is also a reimagination of new roles that we're currently not thinking of.
So we might have an AI product owner for HR tools, or for people products as we call them, to really ask: is the first touchpoint from anyone in the company towards HR always the HR business partner?
Or is it our agentic AI bot that is answering potentially 70% of all initial first-tier requests,
with only the special expert ones then routed further to a human being?
And I think that's really the most important thing in this topic: to again see it as one team that in the future consists not only of robots or of human beings, but of a combination of both.
Really interesting points.
And on that topic of how much or how little we should be using AI:
I saw an article recently that argued the use of AI engages less of our brain and is gonna lead to cognitive atrophy.
And I read that, I’m like, really?
I mean, remember, video games were gonna do the same, right?
And television before that.
And yet here we are in 2025, and the pace of digital innovation is faster now than at any point in human history.
It's easy to see it's only going to get faster and faster. But suddenly, does every action having a reaction just go out the window?
Why is it only half the equation?
And for me, we get to reallocate, we get to redeploy our cognitive capacity now by outsourcing or delegating, again, those tedious things, and focusing our time more on the capacity for human engagement, the curiosity, the creativity, the innovation, the consultative, gratifying things in a role.
And that’s where I think we can really lean and leverage on ai.
Absolutely. And do you know what, Jason, I sometimes have the feeling that people love to do the stupid repetitive things because it's so relaxing for them, and it helps them to just turn off their brain; they've done these automated things all the time.
And that may be a human factor that's intervening here.
On the other hand, I fully agree, and I'm always doubtful when I look at all these AI things we are having: I would rather get more time in order to draw a nice picture, and I'm not able to paint,
so don't get me wrong,
instead of having AI do this creative job for me while I maintain an Excel spreadsheet with a thousand lines.
And sometimes I have the feeling that we are mismatching the use of AI again.
So it's on us to own the decision of what we shift to AI and what we should keep for ourselves.
Yeah. And the why, I think, Christophe, you named it: why are we using AI, even for creativity, for instance?
I personally find it very intellectually challenging and interesting to have this dialogue and to generate more ideas, and you can think about those new ideas and so on.
So it's a mindset at the end of the day.
And it's answering the question: why are we using it?
Andreas, I’d love to hear your thoughts on this topic, on this question.
Yeah, I was listening to what my esteemed colleagues said, and I totally agree, mainly with the point that there must be more collaboration between humans and AI.
And I think this, what you pointed out, Christophe, that it should not be a kind of versus, one against the other, but a kind of joint work.
This is, I think, extremely important.
And I think that will also help to maybe remove a bit of this idea of fear, of being afraid of AI, from our colleagues, whether it's in HR or in the other functions of the company.
The fear that says: this takes my job away.
It's not about taking a job away; it's about helping to make your own job better, or maybe even helping to bring yourself into a new job,
because the current job can be done by AI in a much better way, as was said about the maintenance of Excel,
which is something that we should really give to AI.
Whereas humans need to grow into roles where they have ownership of the creative stuff.
However, and that's the point, as you also said before, some people just love to do the easy stuff.
The simple transactional stuff, let's say, because they feel good in it, they feel comfortable in it, they see the result of something.
Whereas when you are in a creative process of, let's say, writing a book, you might never get to the end of the book because you simply run out of creativity.
So in order to get there, you need to get trained for getting this done.
And that’s what I said before.
If we in HR are not ready to embrace AI, how can we even teach our workforce to be ready for AI?
So we actually need to be at the spearhead in any company around the world when it comes to dealing with AI, so that we can help our workforce to use AI in the right way and see it as a peer, as a colleague, rather than as a threat.
And Chris, sorry, I just need to add one sentence. Thanks, Andreas, for building the bridge;
I forgot to mention something before. I think, based on what you've said and this coexistence, it's also clear that we as HR need to make sure that it's not only about technology and tasks; it's affecting the roles, it's affecting the structure, it's affecting the activity split.
It even might affect the organization design.
So adopting agentic AI fully in a company is not about building technology on top; it's about rethinking the entire structure.
Yeah. A digital direct report concept.
Exactly. One that gets better over time.
One that takes its direction from a manager, and the manager is always the human.
I think that’s one of the challenges, right?
You’re seeing companies right now apply AI to the existing operating model and wondering why it’s not working.
Right. And it wasn't designed for that.
A few people in the chat have asked, when going through this process, and Claire, I'm gonna come to you on this first, who are the right partners you need to have in the room in terms of preparing your organization
for agentic AI?
What are some of the different leaders that you brought together?
'Cause many people listening are perhaps still on that journey, so they'd love to learn from you on that.
Yeah, absolutely. And I think the first point is exactly what we were talking about: the implementation of AI is not only a technological matter, it's really reinventing the whole organization, the processes, the roles.
So in order to do this, we need to have all the key partners around the table.
And I would say they are all equally critical around the table.
So in our case, we have our AI center of excellence, driven by John Rose, who actually is also our CTO, and it is partnering together with everybody in the organization.
And maybe I can picture some of the big players in the room.
So legal, compliance, and data privacy are really essential partners from the start, because agentic AI introduces new questions around accountability, data usage, its impact on decision making, and so on.
So these teams help define the boundaries: what are the guiding principles that should serve as a framework for everybody in the organization?
What tasks can be handed off to agents and under what conditions?
Another group of people which is obviously critical is IT, data science, and security.
So obviously they play a critical role in operationalizing agentic workflows.
They play a big role in terms of the data, preparing the data, mapping the data, the infrastructure, and then also everything that has to do with security.
And this is, again, a dialogue not only from an IT perspective, but also with compliance, HR, and the like.
So from an HR perspective, I think we play a critical role in the process from different angles, actually, whether it is HR technologies, compliance, or learning and development.
We want to make sure that we source and build the right skills to be able to implement this technology and this change across the entire organization.
We want to ensure workforce readiness and support heavy change management, because really, implementing AI agents is not only about automating tasks, as we were saying; it's really about redesigning roles and, in the end, the whole organization.
So it’s a, it’s a massive change that we need to orchestrate altogether.
Yeah, I'll go as far as to say it's redesigning work itself, right?
Um, absolutely.
And I think one of the things that we don't talk about enough is how this is gonna impact employee experience as well.
Now, I dunno about everyone listening, but I think I've got about 20 communication tools and about 15 different agents
within our organizations that are being thrown at us.
What does that mean for the employee experience?
That's a fascinating question, because a lot of what I've been focusing on recently
has been the employee experience.
So how easy and intuitive is it for you to get done what you’re looking to get done?
Yeah. And there's a traditional paradigm around that, right?
You know, screen layouts, ease of navigation, cutting down the number of screens or different tools that I'm pivoting between.
And now, if you think about a prompt paradigm, maybe from one collaborative tool, using transactional or agentic AI, I'm now able to say: please do this.
And off it goes behind the scenes.
And so maybe it's the agent collaborating with other agents behind the scenes, invisible to me as the end user.
And so how fully does that paradigm shift?
Does it shift from today's traditional construct of what a great user experience is like to one that's more prompt-generative?
And it’s an interesting question that I’m giving a lot of thought to because we need to think about
where we focus and invest.
And I think, Jason, one thing is for sure correct:
that we need this kind of AI orchestrator sitting on top of the agents and doing everything in the background for us as end users.
The other topic I see at customers, but also within Deloitte: I think we would all say we have enough knowledge and enough data in our companies, but it's definitely a huge, huge effort to get the data to a quality, a place, and a structure that all the agents can work with,
and to make sure there are clear and unique messages and no contradictions arising from the contradictory data we have across the company.
Hmm. And it's so easy to see all these nice demos where the agentic AI bot is communicating with you and then at the end even delivers a pizza to you.
But this is not the real world that we see in companies, right?
Because there is lots of data, yes, for sure.
Is it usable for agentic AI?
I'm not that sure, or certainly not out of the box.
Hmm. And maybe also coming back to the topic that you were raising, Chris, around employee experience.
I think two things that might actually drive our decisions on that front are probably the emotional needs and ethical needs of humans.
So where there is an emotional need, whether to connect or to build trust, or to address whatever emotional or ethical needs we have, is where humans should probably play a stronger role, versus AI doing
probably a lot of the rest, no?
Yeah. I think, during all of my podcasts with CHROs at the moment, they're battling with this right now, because every single solution provider they have across the tech stack is saying: hey, use our agent.
Mm-hmm. Let our agent be the orchestrator, right?
And I'm sure you're all experiencing this listening: plug everything else into our one, right?
And there's sort of this agent war going on at the moment, right?
Which one do I use? 'Cause everyone
claims to have the one, right?
But how do we create one point of contact for our employees, right,
so they're not overwhelmed having to log into different platforms and solutions?
I don't think anyone really has the answer for that right now; it's too early to say. But even in my small organization, I can already see my team overwhelmed.
So what we've created to help support people and get instant answers to questions in the flow of work, personalized to them, is quickly becoming overwhelming as well.
So it's gonna be interesting to see how this plays out.
Someone asked in the chat quite an interesting question.
Sorry, does anyone want to add anything else to that?
I think it's what Jason said: my team of agents will look very different from your team of agents.
So I think having these specifically designed agents for each function is crucial.
I think the important question then is: what is the layer above this?
Where is this all grounded? Yeah.
And that's, I think, the problem that we are still struggling to solve.
Yeah, no, I agree.
Someone asked in the chat about what we mentioned: the individual employee having a team of agents, right?
Uh, we all have our own teams of agents.
What does that mean for the role of a manager?
So someone just threw that in the chat, you know:
if you're a manager, you've got 10 people on your team and they've all got x many
agents. I don't have the answer to this.
I’m hoping the panelists have some insights on that.
What do you think that means for the role of a manager in an organization?
You become more of an orchestrator.
That's really what it becomes.
You're focusing more on the strategy, more on the vision. What you would traditionally have with a human direct report, as opposed to a digital one, is that you'd also look to grow that talent, to stretch that talent, right?
Provide opportunity and so forth.
Some of that will happen just because the agent is trained right and is continually getting better.
And so that frees the manager up to now focus more on that strategic intent, more of that vision, more of that aspiration.
But also, if you look at the skills gap, what will become increasingly important, and has not been as important in the past, is the orchestrator as an auditor: he or she needs to be able to spot hallucinations, spot quality gaps, and ensure that he or she is able to then action those.
So is it a correction to the agent?
Is it an augmentation to the agent?
Is it a retirement of the agent? Whatever the case might be.
So I think it's about having that orchestrator mentality, having clear roles and responsibilities for each agent; and performance management as a concept gets translated and updated into an entirely new construct, where it's around the performance of the agent and how to optimize that agent going forward.
I must say, I would've loved the question if the question had been: what if my manager is AI?
Yeah, because I think, yeah, you're taking us full circle to when you got started.
Yeah. I think
managers will probably also need to develop their ability to balance the workloads between the different humans, AIs, and so on.
And again, I'm back to the emotions, because I'm generally interested in and passionate about human skills.
But I think something that managers will probably need to develop even further is this ability to also manage human emotions as they transition to these new cyber teams.
Yeah. It requires, sorry, go ahead Jason.
Sorry, go ahead. I just had a quick question.
'Cause you're quite a long way down the journey.
You spoke about some of the core functions that you had along with you, and you've also talked about the human and emotional side.
So can you talk a little bit about the role that communications and change management have played at Dell?
Yeah, I mean, we are on the journey with agentic AI; we've definitely started the journey.
We’ve trained everybody.
So we rolled out our second round of AI training earlier this year, focusing on agentic AI, to make sure that everybody would understand.
We've also made, I think, a very clear and strong call on what it means for us as a company and how we're planning to use it in the future,
so addressing this transparency, Christophe, that you were mentioning.
And now is when we are really starting to work on the change management.
Part of it is definitely raising awareness, encouraging people to experiment, for the moment, with a series of tools that is available for everybody.
Some tools are more specific, whether to sales or to operations, for instance.
And then we gradually open up to more things.
But I think experimentation, trust, and transparency are probably key things in this change management journey.
Love that. Well, I actually didn't realize we're already at time.
I'm getting told off in my ear, so that went really fast.
I wish, I mean, that was such an incredible conversation.
I wish we didn’t have to end, to be honest.
Before I let you go: we covered a lot, and we actually got through most of the questions, so congratulations, panel.
On that, what would be your parting piece of advice or takeaway from the panel
that you'd share with everyone?
And I'm gonna give you a second to think about that.
And then we'll say goodbye.
So let me come to you first, Daniel.
Yeah. So I would say
AI literacy really starts in the HR department.
Domain expertise is crucial when you talk about deploying AI across the organization.
So HR teams should really feel empowered and encouraged to become experts in their own right in all things AI, whether it is technology or regulation or compliance.
AI literacy really starts within the HR team.
Love that. Claire.
I think I loved what you said, actually, that implementing AI is not only about technology, legal, compliance, and so on; it's really about reinventing the work.
So it requires engagement from all across the organization.
Christophe? I would say be curious, don't be scared, and just get started.
Nothing to lose, just to win by being part of that. Yeah.
Jason?
I'd say, one, there are gonna be plot twists that none of us can foresee. If we watch this video back a year from now, two years, five years from now, I guarantee you things will be very different from the world we're sitting and talking in right now.
But I think the overall advice would be this.
There have been other paradigms that have occurred.
And ultimately what will happen, thematically, is AI will dissolve into the background.
It’ll become the new norm of how we do work.
There will be another gap, another change that we’ll tackle together.
But I’d say approach this with an open mind.
Think about this as an opportunity.
Always think about, gee, what would I do if I had an assistant?
If I had AI by my side, if I had the ability to delegate,
if I had the ability to outsource, what are those things that I would love to shed so I could do more of what I love?
That would be what I would suggest about that.
Who did I miss? Andreas.
Yeah. I think my advice and takeaway is: do it like Columbus, go for India and discover America.
And on that journey, we'll have lots of surprises and great outcomes.
Yeah. Did I miss anyone, by the way?
That was everyone, right,
in terms of parting advice? Yeah. Amazing.
Well, thanks so much, everyone. I know,
if the panelists could see the chat, there are a lot more questions, but we'll maybe have to bring everyone back together for a part two.
But, um, thank you so much for joining us.
Um, time absolutely flew by.
I know there’s many of you tuning in from all over the world, some of you just waking up.
So firstly, thank you to our panelists and to all of you at home.
Um, special thank you to our friends at Eightfold for helping us bring this panel together.
I know many of you have already done so, but if you haven't, directly below this video
there's a huge button.
If you click that button, you can download Eightfold's
new report on how HR executives are accelerating their use of AI.
So there’s many use cases and practical advice in there.
So make sure you download that.
If you enjoyed today’s discussion, we’re back next week on October 2nd where we’ll be talking about how to transform your leave of absence strategy into a culture of care strategy.
So make sure you sign up for that.
Apart from that, wherever you are in the world, enjoy the rest of your day and we’ll see you again soon.
Bye everyone.