The EU’s AI Act: What you need to know

The EU is taking strides to regulate AI, focusing on strengthening rules around responsible and ethical use across industries.

Policymakers around the world are keeping a close watch on the EU’s developing Artificial Intelligence (AI) Act, which aims to strike a balance between managing the risks associated with AI and harnessing its potential benefits. 

As one of the first pieces of legislation regulating AI, the EU AI Act will put guardrails on emerging AI technologies, with a focus on strengthening rules around data quality, transparency, human oversight, and accountability. The draft legislation is expected to be passed into law by the end of the year, and companies will likely have a two- to three-year grace period to comply once it’s adopted. 

Ayisha Piotti, Managing Partner at the Swiss-based firm RegHorizon and the Director of AI Policy at the Center for Law and Economics of the Swiss Federal Institute of Technology (ETH Zurich), joined us at Cultivate in London to discuss what this emerging legislation means for the EU. Piotti emphasized the need for more innovative policy solutions to adopt AI sustainably and responsibly across the EU, while also noting the benefits AI can bring. Here are some highlights from that conversation. [Ed note: Quotes have been edited for length and clarity.]

Related content: Watch the entire Cultivate session, “What companies need to know about new AI regulations everywhere,” on demand now.

The EU AI Act: A ‘risk-based approach’ to regulation

As a comprehensive piece of legislation aimed at managing the risks and opportunities associated with AI, the EU’s AI Act is taking a “risk-based approach” to regulation, with classification rules for high-risk AI systems. 

The proposed law classifies an AI system as high risk if:

  • The AI system is, or is a safety component of, particularly sensitive products that are already required under EU law to undergo conformity assessments, such as medical devices, cars, airplanes, or children’s toys; or
  • The AI system is used in particularly sensitive use cases identified in the draft AI Act, such as employment, education, law enforcement, or the judicial system.

If an AI system is not high risk, the law would still apply but with far fewer requirements.     
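
To make the tiering concrete, here is a minimal Python sketch of how an organization might triage its own systems against those two high-risk triggers. The category sets, field names, and the example system are illustrative assumptions, not definitions from the Act.

```python
from dataclasses import dataclass, field

# Product areas already subject to EU conformity assessments (illustrative subset).
REGULATED_PRODUCT_AREAS = {"medical device", "car", "airplane", "children's toy"}

# Sensitive use cases named in the draft Act (illustrative subset).
SENSITIVE_USE_CASES = {"employment", "education", "law enforcement", "judicial system"}

@dataclass
class AISystem:
    name: str
    product_area: str = ""               # e.g. "medical device", if any
    use_cases: set = field(default_factory=set)

def risk_tier(system: AISystem) -> str:
    """Triage a system against the draft Act's two high-risk triggers."""
    if system.product_area in REGULATED_PRODUCT_AREAS:
        return "high risk"               # safety component of a regulated product
    if system.use_cases & SENSITIVE_USE_CASES:
        return "high risk"               # sensitive use case, e.g. hiring
    return "lower risk"                  # still in scope, but far fewer requirements

print(risk_tier(AISystem("resume screener", use_cases={"employment"})))  # high risk
```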

“Depending on which risk category you’re in, you will need to comply with certain rules and conditions as companies or as people who are using the technology,” Piotti said. “HR systems are part of the high-risk categories, which are defined by the EU.”

Piotti said that in the context of HR use, regulators are particularly interested in three areas: privacy, meaning what data is sourced and how it’s used to train models, to ensure it’s not introducing bias; transparency, meaning how AI is used and how that use is communicated to candidates and employees; and surveillance, meaning how organizations are using AI to oversee employees. 

The European Commission has outlined what systems in the “high risk” category, including HR systems, must provide when AI technologies are used in employment practices:

  • Adequate risk assessment and mitigation systems
  • High quality of the data sets feeding the system to minimize risks and discriminatory outcomes
  • Logging of activity to ensure traceability of results (see the sketch after this list)
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance
  • Clear and adequate information to the user
  • Appropriate human oversight measures to minimize risk
  • High level of robustness, security, and accuracy
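
As one illustration of the logging requirement above, the following Python sketch wraps a model’s predictions in a structured audit record. The record fields and the `predict` interface are assumptions made for illustration; the Act does not prescribe a specific format.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def predict_with_audit(model, model_version: str, inputs: dict):
    """Run a prediction and emit a structured record for traceability.

    `model` is assumed to be any object with a `.predict(inputs)` method;
    the record fields below are illustrative, not mandated by the Act.
    """
    result = model.predict(inputs)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,        # or a hash, if the inputs contain personal data
        "output": str(result),
    }))
    return result
```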

With guardrails in place to manage risk, Piotti said that the benefits of responsible and ethical use of AI are “immense.”

“Artificial intelligence really is a force for good,” Piotti said. “It does offer immense benefits, not just in the business world in increasing productivity; specifically in HR, it’s also very relevant for managing disruption that might happen. Skills and skills-based systems could be very important there to help us.”

Build trust in AI with a risk-assessment framework

Building trust in AI systems, including generative AI like ChatGPT, requires transparency, involvement, and accountability. A trust framework in any organization includes openly communicating how AI systems are being used, involving employees in decision-making, and focusing on internal accountability measures. 

“Make sure that you understand first and foremost where AI is being used in your company and how it is being used,” Piotti said. “Be transparent about it to your employees and your customers. Ensure that you involve your employees in the decision-making processes as much as you can, and share that knowledge. 

“And the last one is where I would spend a little bit more time if it’s possible,” she continued. “Accountability is very important. What can you do internally as a company to ensure that there is accountability even if you don’t necessarily have the laws? My one point would be to act before the regulators ask you to do so.”

Piotti said that putting risk-assessment frameworks in place is a good way to work toward compliance and mitigate risk. She also encouraged engaging in dialogue with external stakeholders to contribute to the ongoing development of AI regulations.

“Have some input from the outside world, be it through ethics councils or through people who are auditing your algorithms or whatever you’re using, to understand what the risk is specifically for your business,” she said. “To have that trust, you’ve got to be doing something about it; you’ve got to be mitigating that risk.”
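
One lightweight way to start such a framework, sketched below in Python under assumed field names, is a structured inventory of where AI is used, who owns it, its risk tier, and whether it has had the kind of outside review Piotti describes.

```python
from dataclasses import dataclass, field

@dataclass
class RiskAssessment:
    system_name: str
    owner: str                         # internal accountability contact
    purpose: str
    risk_tier: str                     # e.g. "high risk" or "lower risk"
    mitigations: list = field(default_factory=list)
    externally_reviewed: bool = False  # audited by an outside party?

inventory = [
    RiskAssessment(
        system_name="resume screener",
        owner="HR analytics team",
        purpose="shortlist job candidates",
        risk_tier="high risk",
        mitigations=["bias testing of training data", "human review of rejections"],
    ),
]

# Flag any high-risk system that has not yet had outside input.
for record in inventory:
    if record.risk_tier == "high risk" and not record.externally_reviewed:
        print(f"Needs external review: {record.system_name}")
```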

Related content: Read more about what the New York City law around automated tools in HR and compliance means for you. 

Seize the opportunity for engagement

This is a moment in which organizations can help shape EU laws on automated tools and AI. By engaging in this discourse, organizational leaders can gain insights into industry trends, influence policy outcomes, and foster collaborative relationships. 

For example, Piotti said that talking with experts and policymakers at conferences is a good way to enter the conversation. 

“It’s so important to be able to reach out to the regulators, and I think we have a moment now where regulators are actually asking for input from industry,” Piotti said. “There really is this opportunity to shape this future.”

In addition to following the latest news on the EU AI Act, Piotti said organizations can tap into several resources for guidance, including the Organisation for Economic Co-operation and Development’s (OECD) AI Policy Observatory, UNESCO, and the National Institute of Standards and Technology (NIST). 

“It is in our common interest to see maximum adoption of the technology, as long as it’s done responsibly,” Piotti said. 

Watch the entire Cultivate Europe session on “What companies need to know about new AI regulations everywhere” now on demand.
