Responsible AI at Eightfold

Overview

Eightfold’s Talent Intelligence Platform empowers enterprises to acquire and retain diverse talent, and provides the foundation for public agencies to reemploy and upskill citizens. As the pioneer and leader in talent intelligence, our mission is to enable the right career for everyone.

Eightfold’s AI delivers relevant recommendations at scale to predict the next role in an individual’s career. Our models understand more than one million unique roles and one million skills across many languages.

In this guide, we discuss:

  • AI principles
  • Right products and analytics
  • Right data and features
  • Right algorithms and training
  • Right governance and monitoring

With Eightfold technology, candidates can instantly match to the jobs that fit their skills and potential, see why each job is a match, and apply in a matter of seconds. Recruiters and hiring managers get instant ranked lists of candidates who match their requirements, and can engage them through our platform up to the point of making an offer.

Employees can explore future career paths with a detailed understanding of the skills and experiences needed for the next step in their careers, and find the projects, courses, mentors, and gigs that can help them build those skills and experiences. Organization leaders can oversee their talent strategies, find successors for roles, compare scenarios, and determine the upskilling and reskilling plans for their future needs.

Governments and social service organizations can deploy our platform to match individuals with job opportunities at scale in support of reemployment and community-building initiatives.

AI Principles

At Eightfold, we are committed to the responsible and ethical development and use of artificial intelligence. As a company, we understand that AI has the potential to significantly impact many aspects of our lives, and we build AI solutions to benefit society while respecting the rights and dignity of our users.

Our team of experts works closely with stakeholders, our committee of representatives from various departments, our AI Ethics Council, and external consultants to design and deploy our AI systems in a responsible and ethical manner. At the core of every design, we prioritize the following principles:

  • Fairness: We design and use AI systems that are just and mitigate bias. This includes mitigating discrimination based on factors such as race, gender, age, or other protected characteristics.
  • Transparency: We believe how AI systems work and how decisions are made should be understandable and explainable.
  • Safety and Reliability: We strive to design and develop stringent safety measures that our AI must pass before it is rolled out in our products. We believe it is our responsibility to provide solutions that add value to society.
  • Active Monitoring and Response: We believe that any AI system needs continuous, active monitoring to confirm that it behaves as expected. Deviations are treated and responded to on a priority basis.

In this report, we take a deep dive into our thinking on fairness, in the interest of transparency and in the hope that it serves as a reference for other companies looking to mitigate biases in their AI systems. We should note that this is an active field of research; as it evolves, we will revisit and update our approaches.

Fairness

Artificial Intelligence can revolutionize employment processes in countless ways. As the industry evolves and increasingly relies on AI systems, it’s important to consider the potential of AI to perpetuate social injustices or biases. Fairness is a particularly important issue in the HR recruiting space, as biases in AI systems can perpetuate and even amplify existing inequalities in society if left unchecked.

AI fairness refers to the idea that AI systems should not discriminate against groups of people based on characteristics such as race, gender, or age. Many variables go into developing an AI system, and gaps in oversight can lead to an unfair model. When it comes to applying AI technology to employment practices, we believe the principle of fairness applies at every stage of the development and application of that technology. It’s important for AI developers and users to be aware of the potential for bias in AI systems and to take steps to identify and mitigate these issues.

The most common pitfalls we’ve seen can largely be placed into the following buckets:

Data

One possible source of bias is the data used to train models within an AI system. The data used by AI models should be representative across protected categories and industries. Features should be representative of the population and should not favor any one group, and the feature engineering process should be thoroughly vetted. For example, in the case of HR systems, we feel that the model should only need to learn the qualifications of successful individuals rather than their identity.
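As a simplified sketch of what a representativeness check can look like (illustrative only, not a description of Eightfold’s production pipeline; the field names and baseline shares below are hypothetical), one can compare the observed share of each group in a training set against an expected baseline:

    from collections import Counter

    def representation_report(records, group_key, baseline):
        """Compare each group's share of the training data with an expected baseline."""
        counts = Counter(r[group_key] for r in records)
        total = sum(counts.values())
        report = {}
        for group, expected_share in baseline.items():
            observed_share = counts.get(group, 0) / total if total else 0.0
            report[group] = {
                "observed": round(observed_share, 3),
                "expected": round(expected_share, 3),
                "gap": round(observed_share - expected_share, 3),
            }
        return report

    # Hypothetical example: flag groups whose share of the data drifts from the baseline.
    training_rows = [{"gender": "female"}, {"gender": "male"}, {"gender": "male"}]
    print(representation_report(training_rows, "gender", {"female": 0.5, "male": 0.5}))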

Training, Evaluation and Model Selection

Alternatively, the choice of model and the training process used can themselves lead to biased outcomes. The models and algorithms used should go through a rigorous and thorough evaluation framework in which they are tested for performance across the measurable protected categories. It is crucial that checks and balances are in place during model training to guard against learning decisions based on protected categories. At Eightfold, we build models that strive to avoid amplifying the classic stereotypical patterns in data and human behavior. For example, a model used to recommend candidates for a Software Engineering position should not perform better for one gender than the other.
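The closing example above can be expressed as a concrete metric. As an illustrative sketch only (not Eightfold’s evaluation framework), one common check compares the model’s true positive rate, i.e., how often qualified candidates are recommended, across groups:

    def recall_by_group(examples):
        """True positive rate per group for (group, label, prediction) triples,
        where label 1 means the candidate was qualified and prediction 1 means
        the model recommended them."""
        stats = {}
        for group, label, prediction in examples:
            tp, positives = stats.get(group, (0, 0))
            if label == 1:
                stats[group] = (tp + (1 if prediction == 1 else 0), positives + 1)
        return {g: tp / positives for g, (tp, positives) in stats.items() if positives}

    # Hypothetical evaluation set: a large recall gap between groups is a red flag.
    evaluation = [("female", 1, 1), ("female", 1, 0), ("male", 1, 1), ("male", 1, 1)]
    print(recall_by_group(evaluation))  # {'female': 0.5, 'male': 1.0}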

Active Measurement And Monitoring

In addition to the above, because bias can occur across multiple hiring stages and in a myriad of ways, no single test can detect it. A robust methodology for measuring bias and monitoring models for biased outcomes is therefore a key component of mitigating AI bias.
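One widely used monitoring signal (shown here only as a hedged sketch, not as Eightfold’s monitoring system) is the adverse impact ratio: the selection rate of the least-selected group divided by that of the most-selected group, often compared against the four-fifths rule of thumb:

    def adverse_impact_ratio(selected, considered):
        """Ratio of the lowest group selection rate to the highest.

        `selected` and `considered` map group -> counts over a monitoring window.
        """
        rates = {g: selected.get(g, 0) / n for g, n in considered.items() if n}
        return min(rates.values()) / max(rates.values()) if rates else 1.0

    # Hypothetical window of recommendations: a ratio below 0.8 triggers a review.
    ratio = adverse_impact_ratio(
        selected={"female": 35, "male": 55},
        considered={"female": 100, "male": 110},
    )
    if ratio < 0.8:
        print(f"Adverse impact ratio {ratio:.2f} is below 0.8; flag for review")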

Product Safeguards

Finally, without any safeguards in the product, even when AI is developed appropriately, outcomes may reflect bias due to human error. While reviewing lists of candidates, for example, people making employment decisions may intentionally or accidentally introduce personal biases into the hiring process. They may favor certain last names or social activities identified in the candidate profiles that reflect historical trends of hiring. Additionally, detailed monitoring and analytics help track potentially biased outcomes of both human and AI decisions.
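As one simplified illustration of such a safeguard (hypothetical, not a description of a specific Eightfold feature), identity-revealing fields can be redacted from a profile before a human reviewer sees it:

    # Hypothetical field names; a real profile schema would differ.
    IDENTITY_FIELDS = {"first_name", "last_name", "social_activities", "photo_url"}

    def redact_profile(profile):
        """Return a copy of the profile without identity-revealing fields."""
        return {k: v for k, v in profile.items() if k not in IDENTITY_FIELDS}

    candidate = {
        "first_name": "Ada",
        "last_name": "Lovelace",
        "skills": ["python", "distributed systems"],
        "years_experience": 7,
        "social_activities": ["chess club"],
    }
    print(redact_profile(candidate))  # only skills and experience remain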

By being aware of the potential for bias and consistently aligning our designs with our AI principles, we believe that responsible approaches to AI at Eightfold will help revolutionize employment processes in a fair and equitable way. In the following sections, we’ll cover how our principles help us avoid these pitfalls in the development of Eightfold’s Talent Intelligence Platform.
