What’s Next for EU-US Collaboration on AI Policy?

The incoming US administration has promised to focus on national security and economic growth
Var Shankar
11 Nov 2024

The EU AI Act is a comprehensive, horizontal law that addresses a wide range of AI risks. In comparison, American approaches to AI regulation have varied, both geographically and across levels of government, and have typically focused on specific AI risks, like discrimination or surveillance.

While accepting these fundamental differences, the Biden administration has engaged European partners in meaningful ways on AI risk issues. Since the Biden administration published its Trustworthy AI Executive Order in October 2023, the administration has advanced EU-US collaboration on AI risks in three major ways.

First, the US-EU Trade and Technology Council (TTC), established in 2021 and chaired by high-level officials from both sides of the Atlantic, has met frequently at the ministerial level to coordinate approaches to issues like AI risk, technology standards, digital trade, and resilient supply chains. These ministerial dialogues have resulted in meaningful outputs, like a shared Terminology and Taxonomy for AI.

Second, the US joined the EU in signing the world’s first AI safety treaty, developed by the Council of Europe – an international organization based in France, with a history of creating significant international conventions and treaties. Known formally as the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, the treaty is based on shared principles, while allowing for flexible regulatory approaches to address AI risks.

Third, the US has collaborated with the EU on addressing AI risks through forums like the OECD and G7, including to promote interoperability in AI risk management and to provide specific guidance to companies using generative AI. The US and EU have also participated in a global track of AI safety summits, with their first two meetings in the UK and South Korea respectively. An emerging effort at these summits has been to coordinate the approaches of newly established AI Safety Institutes (AISIs) in many countries. The US AISI, hosted by the National Institute of Standards and Technology, was established by Biden’s Trustworthy AI Executive Order less than a year ago.

New priorities under an incoming administration 

Though the first Trump administration took positions on AI policy, ChatGPT had not been released when the administration left office, and the AI conversation has evolved dramatically since its release. The 2024 Republican platform included a pledge to repeal Biden’s Trustworthy AI Executive Order, but it is not clear how much of a priority this will be for the second Trump administration, set to take office in January 2025, or what will replace the Executive Order.

The incoming administration is likely to focus on reducing barriers to AI innovation. It is also likely to see AI issues through the prism of economic competition with China, rather than collaboration with European partners on AI safety. One area of continuity with the Biden administration is likely to be promoting AI adoption and literacy in national security agencies of the US government.

Though it is possible that the incoming administration will stop collaborating with European partners in the forums discussed above, it is also possible that it will use some of these forums to emphasize different objectives. For example, it may refocus TTC and G7 conversations to discuss tariffs, sanctions, and export controls.

The future of the US AISI is also unclear. While some Republicans see it as a potential roadblock to innovation, others see it as an example of American leadership on AI. Since most leading AI companies are American or Chinese, the second Trump administration is less likely to meaningfully engage with European perspectives on AI risk, unless these perspectives are seen as important enablers of American national security or economic growth.

In the years to come, companies are likely to see an even clearer divergence between American and European regulatory approaches to AI than exists currently. Internal AI governance programs will need to be both flexible and scalable, so that they can report into different regulatory regimes as required.

Enzai is here to help

Enzai’s AI GRC platform can help your company deploy AI in accordance with best practices and emerging regulations, standards, and frameworks, such as the EU AI Act, the Colorado AI Act, the NIST AI RMF, and ISO/IEC 42001. To learn more, get in touch here.
