AI and the law: Guiding transformation with legislation in the UK

UK lawmakers are watching AI development and governance closely. Here’s how one member of Parliament, Lord Chris Holmes, is taking a proactive and pragmatic approach to regulating the new technology.

  • AI creates possibilities and should be seen as a tool to augment workers, not replace them.
  • A “wait-and-see” policy approach to governance will lead to more harm than good.
  • Lord Chris Holmes’ AI bill in UK Parliament seeks to add clarity and guidance in a tech landscape fraught with limited accountability.

HR’s biggest challenge: ever-changing priorities. For the last several years, talent leaders have been navigating the intricacies of remote work and day-to-day operations. But today all eyes are on AI — its use and governance, and how HR can take a more strategic approach to support their organizations’ goals.

“The way that we’re [HR] going to be that leading player is to talk about AI, talent intelligence, skills-based transformation, and strategic workforce planning,” said Madeline Laurano, Aptitude Research Founder and Chief Analyst. “That’s how HR is really going to be that proactive player in transformation.”

In addition to embracing an AI-powered workforce, leading the transformation requires understanding existing and upcoming AI laws — while reconciling organizational skepticism and fear. Lord Chris Holmes, a member of the UK House of Lords and a forerunner in AI legislation, joined Laurano to discuss his AI initiatives in UK Parliament and the future of global AI law at Cultivate Europe.

Here, we highlight key moments from their conversation, including the human opportunities AI presents and the urgent need to take action today.

Unlocking humanity’s potential: AI for good

At the age of 14, Holmes lost his sight. It was only through the help of technology that he could return to school. From that point on, he realized that technology could be a transformative force for good, creating possibilities that didn’t previously exist.

“Here’s the fundamental truth,” Holmes said. “No matter how good or how transformational AI is, or indeed any of these technologies are, they are fundamentally human endeavors — human-led, human in the loop, human over the loop. That’s how we make a success out of all of this.”

Holmes explained that we should fundamentally see technologies like blockchain and AI as tools.

“That’s not to denigrate them, they’re extraordinarily powerful tools, but they’re tools in our human hands,” he continued. “We decide, we determine, we choose. And we know everything that we need to make a success of AI because we understand what it is to be human. … If it goes wrong, that won’t be an AI failure, that will be a human failure.”

However, governments around the world have resisted enacting regulations for fear of hindering innovation. Holmes argued that this approach is reckless — and potentially disastrous.

Related content: Read Eightfold’s perspective on AI and compliance.

Existing and upcoming AI regulations 

To date, the world has enacted few regulations around AI. The United States has a couple of executive orders, China has passed two narrow legislative acts, and the EU approved the EU AI Act in March 2024 — a document, Holmes said, that has many difficulties due to its overly prescriptive nature.

“The government’s position of ‘wait and see’ is not only suboptimal; it doesn’t even achieve their stated objective,” Holmes said. “Their position is to wait and see because, otherwise, we don’t know enough and will stifle innovation and inward investment. 

“If we don’t do anything in the UK, the most likely outcome is businesses will either do nothing or will align with the EU act, and that’s a missed opportunity for the UK and, indeed, all of the common law jurisdictions,” he continued.

“And we know everything that we need to make a success of AI because we understand what it is to be human. … If it goes wrong, that won’t be an AI failure, that will be a human failure.” —Lord Chris Holmes

Holmes sees this barren legislative landscape as a golden opportunity for the UK to lead the world. His proof? The CMA9 order — legislation that required the UK’s nine largest banks to give consumers and businesses more control over their financial data — and the 2016 FinTech Regulatory Sandbox, an initiative Holmes was involved in that encouraged the creation of new financial technologies while protecting consumers.

To fill the legislative gap, Holmes decided to introduce his own AI bill to Parliament. 

“We know how to do this, and we know that right-size regulation is good for innovation and good for inward investment because what do businesses want? Certainty, clarity, security, safety, and so on. We all know that, so to wait and see just doesn’t give us that opportunity,” Holmes said.

Lord Holmes’ AI bill: Outputs-focused and inputs-understood

Holmes’ goal was to introduce legislation that clearly but broadly defined AI so that the debate would center around AI’s guiding principles. These principles included the goals of an AI technology, its inputs, required human consent, and the appropriate and proportional remuneration for people’s data where necessary. 

Holmes went on to detail several more factors of his bill:

The role of the AI officer

“Some of the threads that run through my bill are around the ethical deployment of AI,” he said. “It can’t be used with bias, leading to any particular perspective. I suggest an AI responsible officer in all organizations that develop or deploy AI.”

Holmes said that the AI officer doesn’t necessarily need to be an entire position but rather a role that someone could assume. “It can be proportional to the size of the operation, with reporting obligations similar to those in the Companies Act so well understood, but it’s absolutely critical. The principles that can bring it to life are everything around trust and transparency, inclusion and innovation, interoperability at an international perspective, assurance, accountability, [and] accessibility, because, when we talk [about] bias and an ethical approach, what are we talking about? What, ultimately, is AI? It’s the data.”

He added that there needs to be an understanding that data is not a neutral force. “There are biases baked in,” he said. “The AI has potential to deal with those biases. It has potential to exacerbate those biases. But again, back to that human-led, human in the loop, human over the loop — that’s the best way to understand, and if we have that outputs-focused, inputs-understood, that really gives us the best chance of understanding what’s going on throughout the process.”

Who’s responsible for ethical AI?

Laurano asked Holmes who is ultimately responsible for the ethical creation and deployment of AI — the creator, the provider, or the user.

“It’s a critical point,” Holmes said. “Without clear legislation and regulation, inevitably, you’ll see that batting game where it gets batted between all the parties who are involved in the value chain with each of them hoping that, at some point, it will drop through the cracks, and nobody ends up on the hook for it. 

“So again, it’s important to be very clear. … If you develop, if you deploy, if you use, then the bill will bite on that. There are clear clauses set out in terms of the sanctions that can be brought in for transgression there.”

Empowering the people

Many lawsuits are pending in the United States, with authors and celebrities suing AI companies for using their work or likenesses. Holmes argued that lawmakers urgently need to address the issues of consent and copyright. Otherwise, it will be too late to help those who have suffered on the wrong end of AI.

“It takes us back to whether we talk [about] privacy, data rights, IP rights, copyright — what’s the thread which runs through all of that? It’s the sense of, ‘It’s our stuff.’ It’s the need for individuals to have agency and be properly powered up with the right legislation and regulation. To be sure that in this jurisdiction, those rights are clearly set out, clearly understood, and clearly, effectively, and quickly able to be asserted where transgressions occur.”


The future of Holmes’ AI bill

Holmes pushed his AI regulation bill through all the legislative stages in the House of Lords and had it fully prepared for the House of Commons when then-Prime Minister Rishi Sunak called a general election in late May.

“On an otherwise unremarkable Wednesday afternoon … we get the announcement of the general election,” Holmes said. “And for all of the elements that happen there, the one thing I heard when that was announced in terms of my AI bill was [falling sound], and away it went, so all that work, over.”

As for the next steps, Holmes said his bill must be among the top 25 bills to have a chance of getting into law. His goal is to get it into the top 10.

Before the election, he had full support from the Labour Party. Now, he’s hoping that support will continue, bolstered by mounting pressure from the dawning realization that “do nothing isn’t actually the way to encourage and enable innovation. Do nothing is the way to potentially stifle innovation, and it would cause an alignment to the nearest piece of legislation, the EU AI Act.”

The future of AI is promising — but only if we act now

Holmes hopes that “there’ll be a lot more in this next Parliament on AI, blockchain, data, and digital information, so we have this sense of not just a coherent approach, but a positive statement about what the next government believes, a positive narrative built around our digital futures in an era of negative, fear-laden politics.”

He concluded by weighing the potential of AI against the consequences of inaction.

“You have to positively engage with the AI opportunity and understand that there will be uncomfortable moments, periods of very difficult transition,” Holmes said. “But if we don’t engage with it, we’re in danger of doing that classic thing that happens, so much of trying to freeze the past or trying to hold on to something which wasn’t even as we imagined it. 

“In any event, this is coming,” he continued. “We don’t need to be bedazzled or frozen in fear about it. We know what we need to make a success of this because we’ve had the most phenomenal, great good fortune of being born human. And when we come together and collaborate and connect, we achieve stunning things.” 

Watch the full session, “AI and the law: Navigating global policies and regulations,” now on demand.
