Reflections from Paris: the case for regulation to enhance innovation

The AI Action Summit took place in Paris this week. Ryan shares his thoughts and takeaways from the gathering.
Ryan Donnelly
14 Feb 2025

At the same time as US Vice President JD Vance gave his remarks at the AI Action Summit in the Grand Palais, I was at the OECD for a day of roundtables discussing AI safety, governance and ethics. This was the same city, separated only by a couple of streets, but it felt worlds away. In the Grand Palais, the concept of regulating AI was a political piñata as American innovation and dynamism took centre stage. At the OECD, by contrast, the need for AI regulation had never felt more pressing.

This was unlike any other AI summit I’ve been to. The mood was different and divided, and I think that is largely just a reflection of the current western zeitgeist. We’re living through a massive shift in public opinion across the world at the moment - debates around immigration, international trade and the role of the state are creating (or exacerbating, depending on how you view it) deep societal divides across western democracies.

Regulatory backlash

This rupture was very clearly on display in Paris, where government officials and industry experts gathered to discuss the latest developments in AI. In stark contrast to the UK’s first summit, the AI Safety Summit, the theme in Paris was rapid innovation and a fear of being left behind. The mood is clear – it’s time to ‘plug, baby, plug’1 and the idea of regulating frontier technologies is squarely in the crosshairs. Is this just opportunism at its best, or is it something a bit more dangerous?

I not only understand the sentiment behind much of this regulatory backlash, I share it. That might sound strange and counterproductive coming from the founder of an AI governance startup, but I truly believe it. I am tremendously excited by the potential of AI technologies to transform how we live our lives for the better. We need relentless innovation to make that happen, and a framework of badly-designed rules won’t get us there. Silly rules that introduce unnecessary and often unrealistic burdens on market participants help no one and pose a legitimate threat to progress and innovation.

But consider for a second an alternative universe, because the adjective ‘badly-designed’ in the paragraph above is doing all the heavy lifting. What if we designed some helpful rules instead? The adage “good laws are good and bad laws are bad” has never been more true than in the digital age. If AI regulations were drafted in a context-specific way, accounting for the unique attributes of the technologies and the societal context in which they are deployed, would this not be a net positive for society? Good regulation is a way of ensuring that bad actors are controlled and punished for breaches, and that organisations seeking to advance society in a positive way can do so within a strong framework of high standards. With the right framework in place, innovation can flourish, with all participants playing the same game by the same rules.

Rules aren’t all bad

Not yet convinced by this new framing? Then consider some of these principles out of context. The game of rugby is what it is precisely because of the rules that the two opposing teams play under. We can and should debate the merits of some of those rules (I think the held-up-over-the-line rule is too harsh on attacking teams, and kills off the suspense), but no one is calling for the rules of the game to be scrapped entirely. The recent rules around protecting a player’s head in the tackle have been messy to implement, but they are critical for both player welfare and the long-term survival of the game. In fact, these new rules allow the game not only to survive, but to thrive.

The social media era offers a sobering lesson in the cost of regulatory inaction. For over a decade, platforms were allowed to scale to billions of users with virtually no oversight of their data practices, content algorithms or impact on youth mental health. This was an innovation-first approach, where the mantra was literally to ‘move fast and break things’. The results became painfully clear: unprecedented privacy breaches like Cambridge Analytica, democratic institutions under strain and documented harm to teen mental health. By the time regulators began to act, these problems were deeply entrenched and far harder to solve.

Applying this analysis to AI regulation

And we can see how this pattern might play out again with AI. When we prioritise speed over systems of accountability, we risk undermining the very foundations that make progress possible. One of the most consistent criticisms of AI regulation is that it imposes additional, complex regulatory burdens on shoulders that cannot carry the weight. Again, this sounds like a decent criticism at face value, but in the case of the EU AI Act it simply isn’t true. Implementing things like a risk management system (“RMS”, Article 9) and a quality management system (“QMS”, Article 17) for AI technologies that are clearly very risky is not too much to ask. In fact, high-quality businesses will be doing this anyway because it is in their commercial interests to do so - it helps build customer trust. And if you think the EU AI Act unfairly prejudices smaller businesses, pay close attention to the smart formulation for SMEs in Article 99(6) around penalties. Above all – in the same way AI can help transform medicine, law and finance, it can help with regulatory compliance too. Enzai’s AI governance platform allows organisations to implement comprehensive AI compliance programmes efficiently and at scale.

The AI liability directive is a particularly lamentable victim of this year’s AI Summit. As a quick reminder, the liability directive was designed to ensure that a claimant who has suffered harm as a result of an AI system would not have to prove in-depth causality between the AI system and the harm caused. The directive also provided a strong rebuttable presumption in favour of any defendant that could evidence its QMS and RMS. More on that here, but the bottom line is that although scrapping the directive may seem like a pro-innovation move, it is in fact the total opposite. The result is legal uncertainty, which doesn’t help the deregulation cause; it exacerbates the holes in it.

A view to the future

On the day after the AI Action Summit, Anthropic hosted their first AI Builders Summit in Paris. It was reassuring to hear Anthropic CEO Dario Amodei describe the “missed opportunity” of the summit and refocus the discussion on safety, transparency and ensuring that everyone shares in the economic uplift of very powerful AI. The future of AI isn't going to be decided by who shouts the loudest in grand palaces. It's going to be built by those willing to do the quiet, methodical work of figuring out how to make these powerful tools work reliably and responsibly.

1 French President Emmanuel Macron coined this phrase during a speech at the AI Action Summit.
