Guest blog: Keeping it ethical: How HR teams can harness AI

Hear how Fosway’s Director of Research recommends the ethical adoption of AI to create scalable, repeatable, and compliant HR programs.


This contributed post comes from David Perring, Director of Research at the Fosway Group, a leading European analyst firm focused on next-gen HR, talent, and learning.

AI’s people experience potential

In 2018, McKinsey predicted that up to 56 percent of all employee lifecycle processes could be automated. Four years on, that prediction is well on its way to becoming reality: almost one in four organisations say they are using automation or artificial intelligence (AI) to support HR-related activities.

And this is set to grow dramatically. In Eightfold AI’s 2022 Talent Survey, 92 percent of HR managers said they were planning to expand their use of AI in HR, including in recruitment, employee listening, people management, onboarding new employees, and payroll. 

But as AI becomes a foundational part of the HR landscape and increasingly influential on the employee experience, HR teams face growing scrutiny over how, where, when, and why AI is used, and whether it is working as it should. There are real concerns about machine-learning bias, hardwired discrimination, and the mystery of black-box algorithms doing … who knows what. So, as much as HR looks to embrace innovation and AI, some HR managers also worry about becoming the “beta testers of AI functionality” and are wary of being early adopters.


So, despite the rapid innovation and market momentum, questions remain: Can we trust AI? Is AI positively impacting the employee experience? And how do we ensure that AI is a power for good and not amplifying the bad? Put simply, how do we keep AI ethical?

What are providers and HR teams doing to make sure their use of AI is ethical?

Good AI design is good solution design, and the vendors leading the HR space follow a set of design principles to ensure their AI is both ethical and effective. It is the mantra for ethical AI, and it is simple:

  • We can articulate the ethical benefits of using AI for a given scenario
  • We can provide a clear explanation of how the AI model works
  • We use valid and verifiable data for training the AI
  • We are transparent about the risks, and alerts will highlight problems
  • We can show how the solution will track any drift in the AI’s ongoing accuracy (a simple illustration follows this list)
  • We can show how the AI is being refined to improve its performance
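To make the drift-tracking principle concrete, here is a minimal sketch, in Python, of how a rolling accuracy check against a pre-launch baseline might work. The names, window size, and tolerance threshold are invented for illustration; this is not any vendor’s actual implementation.

```python
from collections import deque

class DriftMonitor:
    """Tracks rolling accuracy in production and flags drift away
    from the accuracy the model was validated at before go-live."""

    def __init__(self, baseline_accuracy, window=500, tolerance=0.05):
        self.baseline = baseline_accuracy     # accuracy signed off pre-launch
        self.tolerance = tolerance            # acceptable drop before alerting
        self.outcomes = deque(maxlen=window)  # rolling window of recent results

    def record(self, predicted, actual):
        # Log one prediction/outcome pair as it is confirmed in production.
        self.outcomes.append(predicted == actual)

    @property
    def rolling_accuracy(self):
        if not self.outcomes:
            return None
        return sum(self.outcomes) / len(self.outcomes)

    def has_drifted(self):
        acc = self.rolling_accuracy
        return acc is not None and acc < self.baseline - self.tolerance

# Example: a screening model signed off at 88 percent accuracy
monitor = DriftMonitor(baseline_accuracy=0.88)
monitor.record(predicted="shortlist", actual="shortlist")
if monitor.has_drifted():
    print("Alert: accuracy has drifted below the agreed threshold")
```

The point is not the specific threshold but the principle: accuracy is measured continuously against an agreed baseline, with an alert the moment it slips.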

This list almost doubles as a checklist for buyers, but there are a number of further questions buyers can ask to make sure their AI provider can deliver ethical AI. If you can get clear answers to these, you will have gone a long way towards managing your risks.

  1. What is the ethical purpose for AI being adopted in this instance, and how will it positively impact the personas it touches?
  2. Is the AI transparent in how and why it makes recommendations to end users, including what data is used?
  3. Are there people analytics that will forensically check whether the AI is working as intended? (A simple example of such a check follows this list.)
  4. What is being provided to enable us to assess the fairness, reliability, privacy and security, transparency, and accountability of where AI is being used?
  5. How will we be alerted if the AI is not working as expected?
  6. When designing the AI, was a representative and unbiased data set used to calibrate it?
  7. Is the specific use case on which the AI was trained consistent with my scenario?
  8. What is the reliability of the AI model for my use case?
  9. Is the AI explainable and transparent in what data drives the automation and decision?
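On question 3, one widely used people-analytics check is the “four-fifths rule” from US employment guidelines: if any group’s selection rate falls below 80 percent of the highest group’s rate, the process warrants review. Here is a minimal sketch in Python, with purely hypothetical numbers:

```python
def selection_rates(outcomes):
    """outcomes maps each group to (selected, assessed) counts."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Compare every group's selection rate with the highest-rate group.
    Under the four-fifths rule, a ratio below 0.8 is the conventional
    red flag for adverse impact."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical screening results: (candidates shortlisted, candidates assessed)
results = {"group_a": (45, 100), "group_b": (30, 100)}
for group, ratio in impact_ratios(results).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```

A check this simple won’t settle whether a process is fair, but it is exactly the kind of forensic monitoring a good provider should make easy to run.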

Ethics can’t work in a black box, because ethics is about transparency and good intent. So, of all these questions, it is the last one that is perhaps the most important: that the recommendations, shortlists, decisions, and suggestions are transparent and shared with the end user, whether that is the candidate, employee, HR team, or manager.


Having explainable decisions is crucial. Explaining what the AI did, and why, at the point of the decision is key to building trust, and it is the catalyst for gathering the feedback needed to keep the AI on track. Without that transparency, and without monitoring outcomes, you will never know whether your AI is a force for good.
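What might “explaining at the point of the decision” look like in practice? Here is a minimal sketch for a simple linear scoring model, where each factor’s contribution to the score is surfaced alongside the recommendation. The features, weights, and values are entirely invented for illustration:

```python
# Invented weights for a simple linear match-scoring model.
WEIGHTS = {
    "years_experience": 0.40,
    "skills_match":     0.35,
    "assessment_score": 0.25,
}

def score_with_explanation(candidate):
    """Return the overall score plus each factor's contribution,
    so the explanation travels with the decision itself."""
    contributions = {f: WEIGHTS[f] * candidate[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

candidate = {"years_experience": 0.6, "skills_match": 0.9, "assessment_score": 0.7}
total, parts = score_with_explanation(candidate)
print(f"Overall match score: {total:.2f}")
for factor, value in sorted(parts.items(), key=lambda kv: -kv[1]):
    print(f"  {factor} contributed {value:.2f}")
```

Real models are rarely this simple, but the principle carries over: whatever drove the recommendation should be visible to the person it affects.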

Why adopting a “people-centred” approach to AI is critical to success

“Implementing HR tech will only succeed if employers manage to transparently show how these systems are tracked and monitored.” — Dr. Mathias Kühnreich, Lawyer in European Employment Law

Things are moving fast. New York City is one of the first jurisdictions in the world to pass a law aimed at reducing bias in automated employment decisions made by AI; it is due to take effect on 1 January 2023. Equally, the EU Commission put forward its proposal for regulating artificial intelligence in April 2021, and it could come into force in 2023.

So, with regulations looming in the EU and U.S., it is essential that organisations are proactive in how they manage the ethics of their AI. The EU’s proposal highlights five areas of AI compliance: data quality, documentation and transparency, human oversight, accuracy, and data security. These, along with the questions highlighted earlier and the design principles adopted by good providers, are central to a “people-centred” approach to AI. And that is ultimately why taking a people-centred approach, especially for those who might otherwise be the victims of bias, is so important: it creates a winning scenario in which AI works for your people, builds trust with users, and stays on the right side of the law.

David Perring, Director of Research, Fosway Group

David Perring is the Director of Research at Fosway Group, where he independently explores the experiences of practitioners and suppliers to understand the realities of what’s happening in corporate learning, talent development, and HR. 

Ready to learn more? Listen to our recent podcast episode about systemic discrimination and AI compliance with a former director at the U.S. Department of Labor, Craig Leen.
