Why you need a plan to implement AI now

Most leaders know they need AI but don't know exactly how to do it yet. These strategic planning questions will keep you ahead of the curve.

  • 83% of executives say AI is a strategic business priority
  • Business leaders need a clear plan to adopt AI in their organizations — and an idea of where they want to go
  • Leaders need to answer four big questions: who needs to use AI and why, what they want to achieve, what roles they'll need to support adoption, and how well they understand AI's risks and challenges

When people talk about the growing use of A.I. in organizations, the focus is often on productivity gains, workforce restructuring, and ethical usage. All of this is extremely important, but A.I. can also have an enormous impact on dynamics within organizations, and these must be factored into any plan to adopt A.I. Otherwise, organizations may lose out on many of A.I.’s expected benefits and find themselves unprepared for future needs and shifts in the labor market.

Eighty-three percent of executives say A.I. is a strategic business priority, meaning that how and why companies implement it will have far-reaching effects. Whether a business has already started using A.I. or has plans to adopt it more widely, it’s essential to create an overarching, risk-aware A.I. strategy to avoid pitfalls and achieve the best outcomes. Before developing that strategy, leaders need a clear picture of where they’re starting from and where they’re hoping to go. These four questions can help bring that picture into focus.

1. Who in the organization is using A.I. and why?

While worker displacement is a common fear, generative A.I. could have an equalizing effect on the labor market, offering a solution to the talent gap between open jobs and available workers.


In a recent study published by the National Bureau of Economic Research (NBER), less-experienced, lower-skilled customer service reps trained to use a generative A.I.-based conversational assistant saw larger gains in their job performance than more experienced or higher-skilled workers. This means companies that are having a hard time finding candidates may be able to lower skill and experience requirements for certain roles, train the people in those roles to use A.I. assistants, and achieve performance outcomes that meet or exceed their original expectations.

However, a recent BCG study revealed that 80 percent of leaders say they use generative A.I. regularly, while only 20 percent of non-management employees say they do. This suggests a disconnect between those who stand to benefit most from the technology and those who actually use it. An A.I. strategy should go beyond figuring out which job functions can be replaced or enhanced by A.I. to include research-based assessments of which workers can gain the most productivity from investment in A.I.

To answer this question, organizations also need to know where they're starting from. A potential pitfall here is that workers may be hiding their use of A.I. and keeping their productivity gains quiet out of concern that they're training their replacements. When that happens, tools whose impact on performance could be transformative for a company go unmeasured and unanalyzed.

It also means no one is setting policies or standards for the use of the technology, which could have risky implications. A GitHub survey recently revealed that 92 percent of developers admitted to using A.I. coding assistants. At the same time, a November 2022 Stanford study showed that developers who use A.I. tools to help solve security-related tasks wrote significantly less-secure code than those who don’t.

It’s critical, therefore, that companies thoughtfully audit A.I. use throughout the organization, incentivize transparency, and embrace and reward productivity gains enabled by A.I. By doing this and getting the full picture, businesses can set appropriate policies, project realistic expectations for A.I. adoption, and develop a holistic strategy that deploys A.I. resources where they’ll have the greatest impact. It’s also crucial for organizations to manage the transition effectively, provide adequate support, and address any concerns or anxieties employees may have about implementing A.I.

2. What long-term business objectives is the company trying to achieve?

While much of today’s focus is on the potential impact of generative A.I. on daily work, “more productivity” shouldn’t be an end in and of itself. Organizations need to think about developing and implementing an overarching A.I. strategy that encompasses all business functions — and how that can benefit workers and the business.

It would also be unwise to implement A.I. to achieve short-term cost savings by lowering head count — the real power of A.I. isn’t in how well it can do a person’s job for them, but in the potential for long-term gains in performance and innovation when people and machines work together. McKinsey forecasts that A.I. automation could eventually take over as much as 70 percent of worker hours, but it should be obvious that this does not mean replacing that proportion of the workforce — it means extending what humans can do.

By automating mundane and repetitive tasks, A.I. allows employees to spend more time on meaningful and fulfilling work. This may require upskilling these employees to focus on the skills for tomorrow — another win for the employee and organization. This could be particularly beneficial for workers who want to take on new challenges and contribute more but have become disengaged from their company for lack of those opportunities. Instead of letting those people go or waiting for them to move on, a company can re-engage these employees and the capabilities they were hired for, reducing turnover costs and retaining valuable institutional knowledge.

Rather than replacing humans, A.I. should augment human capabilities and enable collaboration between humans and machines. Jobs that require human judgment, creativity, emotional intelligence, and critical thinking are likely to be in higher demand as they complement A.I. capabilities. This can lead to the emergence of hybrid job roles that combine A.I. expertise with domain-specific knowledge and human skills.


3. What roles will the company need in the future to thrive?

As A.I. adoption grows, some roles will disappear, some will emerge, and others will evolve to account for A.I.'s impact. As companies develop their overall A.I. and workforce strategies, one of the biggest challenges will be anticipating what the new roles will look like and how workers can be upskilled to fill them.

Eightfold A.I. has used extensive workforce data to help determine which roles will be in demand, which skills they'll require, and which adjacent skills give people the greatest potential to learn what a rising role needs, thereby widening the talent pool from which businesses can draw.

For example, product owner is poised to become a top role because it drives the tactical engine behind the development of A.I.-powered products. It's a highly specialized version of the general software product owner role, and it's most common in organizations that use Agile methodology. According to Eightfold data, if machine learning, Hadoop, and AWS are primary skills for this role, people with adjacent skills, such as experience in algorithms or data science, could potentially upskill into it.
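To make the idea concrete, here is a minimal sketch of this kind of skills-adjacency matching. The skill lists, weights, and names below are hypothetical illustrations, not Eightfold's actual model or data: each person gets a readiness score that counts primary-skill matches fully and adjacent-skill matches at half weight, and the best upskilling candidates are ranked first.

```python
# Hypothetical sketch of skills-adjacency matching for a target role.
# Skill lists and the 0.5 adjacency weight are illustrative assumptions.

PRIMARY_SKILLS = {"machine learning", "hadoop", "aws"}
ADJACENT_SKILLS = {"algorithms", "data science"}

def readiness_score(person_skills):
    """Count primary-skill matches fully and adjacent matches at half weight."""
    skills = {s.lower() for s in person_skills}
    return len(skills & PRIMARY_SKILLS) + 0.5 * len(skills & ADJACENT_SKILLS)

def rank_candidates(people):
    """Return (name, score) pairs, best upskilling candidates first."""
    return sorted(
        ((name, readiness_score(skills)) for name, skills in people.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

people = {
    "Ana": ["Machine Learning", "Algorithms"],
    "Ben": ["Data Science"],
    "Chi": ["AWS", "Hadoop", "Data Science"],
}
print(rank_candidates(people))  # Chi ranks first: two primary skills plus one adjacent
```

A production system would weight skills by market demand and recency rather than a flat adjacency factor, but the ranking logic is the same: surface the people already closest to the rising role's requirements.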

4. How well does the organization understand the risks and challenges of A.I.?

A.I. programs are value-neutral, but how they're used isn't. Businesses must know how and where A.I. is being used, because under-informed or unethical use can expose a company, an industry, or a whole economy to serious legal and security risks, not to mention damage employee morale.

Organizations need to ensure transparency in A.I. decision-making processes, address biases in A.I. algorithms, and establish guidelines for responsible A.I. use. Ethical challenges related to data privacy, security, and job security must also be addressed proactively.

A.I. combined with upskilling and strategic planning can future-proof a business, but the future is coming faster than we might think. No matter where an organization is in its A.I. adoption journey, it will be extremely important to proactively develop an overarching plan that accurately assesses needs and thoughtfully deploys resources for the best outcomes.

Sania Khan is the chief economist at Eightfold AI, the AI-powered platform for all talent, and the author of the book Think Like an Economist. She previously worked for the U.S. Bureau of Labor Statistics.

This article originally appeared on Inc. in June.
