March 9, 2021

The State of International Regulation of Artificial Intelligence

Artificial intelligence is quickly becoming a key driver of business decisions. Businesses increasingly rely on AI technology’s ability to analyze high volumes of data and present information and solutions that guide faster, more intelligent decision-making.

AI “is adept at processing and analyzing troves of data far more quickly than a human brain could,” writes Adam Uzialko, freelance editor at business.com. “Artificial intelligence software can then return with synthesized courses of action and present them to the human user [who can] game out possible consequences of each action and streamline the decision-making process,” Uzialko explains.

This has been transformative for businesses of all sizes in all industries.

“AI technologies are helping businesses enable better ways of working, but more importantly, they are creating value by freeing up time for innovation and enhanced human creativity — powering human enterprise,” writes Nigel Duffy, Ph.D., EY global artificial intelligence leader.

That’s a big reason the advancement of AI technology has enjoyed such broad support. It presents opportunities in business that weren’t even conceivable a few years ago. But with this growth have come a number of questions and challenges that shine a light on the complicated side of technological innovation.

One of the biggest questions being debated right now is whether or not AI should be regulated. 

The Great Debate: Should the Use of Artificial Intelligence Be Regulated?

As with any new technology, there are questions about how tightly AI should be regulated, if at all. A major reason there hasn’t been universal consensus on how to approach this issue is that there’s no precedent on how or why to regulate this type of technology.

“Artificial intelligence regulation isn’t just complex terrain, it’s uncharted territory for an age that is passing the baton from human leadership to machine learning emergence, automation, robotic manufacturing, and deep learning reliance,” writes The Last Futurist’s Michael Spencer. 

Certainly, the novelty of AI has created a space where there are vocal advocates on either side of the argument. 

Those who advocate for regulating AI believe it is necessary to protect the public. They recognize the importance of innovation, but they are also wary of unrestrained growth.

“Smart regulation that gets out in front of emerging technology can protect consumers and drive innovation,” writes Mark MacCarthy, senior fellow and adjunct professor at Georgetown University. “AI is too important and too promising to be governed in a hands-off fashion, waiting for problems to develop and then trying to fix them after the fact.”

The other side of the coin is the limits regulation would place on AI innovation. Those who urge caution in regulating AI argue that regulatory requirements can impede investment and serve as barriers to market entry, writes Keith B. Belton, Ph.D., founder and principal at manufacturing-sector consulting firm Pareto Policy Solutions.

The answer to the question of whether or not AI should be regulated isn’t clear or universal. What is clear is that the question is complicated on many levels, and governments around the world are responding to it differently.


AI Is Regulated Differently Around the Globe

Government leaders around the world have been debating this issue, and some have already taken steps toward regulation. These steps vary from country to country and may even differ from one city to the next. That’s to be expected because, as R. David Edelman, director of MIT’s Project on Technology, the Economy, and National Security, explains, “There is no one-size-fits-all approach.”

Here’s a brief look at the regulation efforts underway in the European Union, China, the United Kingdom, and the United States.

The European Union 

The European Union has been the most proactive in pushing for AI regulation. The EU began serious discussions on regulating AI in 2017. The European Commission published its first communication on AI in 2018, followed in 2019 by its Ethics Guidelines for Trustworthy AI. Those guidelines suggested standards for addressing potential issues with AI, but they stopped short of regulating the technology.

In 2020, the European Commission went a step further, proposing concrete actions in its White Paper on Artificial Intelligence. In the paper, the EU proposed a regulatory framework for creating an “ecosystem of trust” around AI technology and a policy framework for mobilizing resources to achieve that end. It also created two regulatory “buckets,” one for high-risk and one for low-risk AI applications, notes William Crumpler, research associate in the Strategic Technologies Program at the Center for Strategic and International Studies. Binding regulations were proposed for applications considered high-risk.

These measures would affect all member states, some of which have urged the commission not to pass “excessively strict rules” on AI, writes Juan Murillo Arias, senior manager of data strategy and data science innovation at BBVA. That’s because each member state has begun taking its own actions on AI regulation, shaped by what works best for its citizens and economy.

China

China doesn’t currently have a regulatory body set up to govern AI, but Susan Ning and Han Wu, partners at law firm King & Wood Mallesons, write that the government will soon put a regulatory system in place. To that end, China’s State Council has outlined strategic policies that establish goals for both technological achievement and the future regulatory regime for AI, Ning and Wu write.

Absent national policies, there are some sector-specific guidelines and best practices in data protection that extend to AI technology.

The United Kingdom

AI regulation in the UK has focused on data governance. The General Data Protection Regulation (GDPR) and the Data Protection Act 2018 established transparency and accountability requirements for businesses collecting and using individuals’ personal data. The UK’s Information Commissioner’s Office also released draft guidance on its AI auditing framework in 2019 to help companies identify the threats AI poses to people’s freedoms and how to mitigate those risks.

The United States

In the U.S., “major legislative changes to AI oversight seem unlikely in the near future, which means that regulatory interventions will set precedent for the government’s approach to protecting citizens from AI harms,” writes Alex Engler, Rubenstein Fellow in Governance Studies at the Brookings Institution.

In 2019, President Trump signed an executive order on America’s development of AI that takes a relatively hands-off approach to regulation. In addition, the Office of Management and Budget established a data strategy for the federal government that includes best practices for using and managing AI applications. As Engler notes, that document offers “a set of guiding principles” while “generally adopting an anti-regulatory framing.”

Individual agencies and organizations, as well as local governments, have also adopted their own regulations for governing AI. San Francisco became the first city in the country to ban the purchase and use of facial recognition technology by city agencies, and a number of cities have since followed suit.

In Illinois, the Artificial Intelligence Video Interview Act went into effect in January 2020. The legislation “regulates use of AI in recording and analyzing video interviews of potential job candidates,” write attorneys Evan Nadel and Natalie Prescott at Mintz Levin. The law was designed to protect applicants from privacy invasions and data breaches when their body language, facial expressions, word choice, and tone of voice are recorded and stored during interviews.

Other cities and states around the country have started having conversations and passing legislation to address issues brought about by the proliferation of AI.


The Business Landscape While This Debate Continues

The lack of AI regulation is turning out to be good for big tech companies such as Google and Facebook, which have been able to consolidate their power through AI-facilitated data collection. By contrast, it has been detrimental to smaller companies and to overall market competition, because those data monopolies limit innovation and diversification.

The monopolization of data and AI by big tech companies can “adversely affect future innovation and the shared benefits it would bring,” writes Karen Mills, senior fellow at Harvard Business School.

“Amid this challenging legal and regulatory environment, the tech giants are attracting ever more customers and quietly continuing to access and generate ever more data to perfect their AI algorithms,” write Michael M. Lokshin and Craig Hammer at the World Bank. “Such market concentration—coupled with certain limitations in antitrust legal and regulatory regimes—is a worrisome state of affairs, even for the largest world economies.”

That unchecked data consolidation within a small number of tech companies is a leading reason behind the push for universal regulation of AI.

Growing Call for International AI Regulation

In the early stages of AI development, the push for regulation was very much a local or regional effort. Now, however, as AI becomes more prevalent in everyday life and every aspect of business, government and technology leaders are calling for greater regulation. 

“The rapid proliferation of applications of artificial intelligence and machine learning … coupled with the potential for significant societal impact has spurred calls around the world for new regulation,” writes Belton. What that regulation looks like and how to bring it about are questions that will require cooperation on a global scale. 

“International alignment will be critical to making global standards work,” says Google CEO Sundar Pichai. “To get there, we need agreement on core values.” Time will tell whether or not that vision becomes reality. Until then, there is no denying that things are moving in the direction of tighter regulation of artificial intelligence.
