What’s Next for UK AI Policy?

Global businesses wait for the new UK government to set expectations on AI policy
Var Shankar
30 Aug 2024

Global businesses are awaiting the details of the new UK government’s AI Opportunities Action Plan, announced on July 26 and led by Matt Clifford, a tech entrepreneur and chair of the Advanced Research and Invention Agency. The Plan’s stated objectives include using AI to improve people’s lives and developing a globally competitive AI sector in the UK.

The EU AI Act (the “AIA”), which entered into force on August 1, covers ‘AI systems’ broadly and horizontally. Global financial firms are already revamping their AI governance programs to comply with AIA requirements. By contrast, any new statutory requirements introduced by the new UK government will likely apply only to powerful foundation models.

In its manifesto ahead of the July 4 election, Labour pledged to “ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models.” In the King’s Speech on July 17, the new government pledged only to “establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.” And according to the Financial Times, UK Science Secretary Peter Kyle told executives at Google, Microsoft and other large technology companies on July 31 that a potential AI bill will focus on making voluntary pledges by technology companies binding and on giving more independence to the UK’s AI Safety Institute.

Matt Clifford is due to deliver an initial set of non-binding recommendations for UK AI policy in September. He is developing these recommendations against a backdrop of fiscal instability. For example, the government announced this month that it will not go forward with investments of £800 million for a supercomputer at the University of Edinburgh and £500 million for the public AI Research Resource (AIRR), both of which the previous Conservative government had announced.

Despite these scrapped investments, the UK government remains aligned, for the time being, with the principles-based approach taken by the previous government. Under this non-statutory approach, sectoral regulators consider five AI principles when developing requirements for AI use:

1. Safety, security, and robustness
2. Appropriate transparency and explainability
3. Fairness
4. Accountability and governance
5. Contestability and redress

In some cases, sectoral regulators have published regulatory updates interpreting the five principles. For example, regulatory updates published in April 2024 by the Financial Conduct Authority (FCA) and the Bank of England (BoE) remain the clearest articulations of AI policy from UK financial regulators. In these updates, the FCA and BoE map their respective AI risk management efforts to the government’s five AI principles and suggest that, in general, AI risks can be managed within existing frameworks, such as Model Risk Management guidance, UK GDPR requirements, the Senior Managers and Certification Regime (SM&CR), the UK Consumer Duty and efforts related to DP5/22, the regulators’ joint discussion paper on AI and machine learning.

Global businesses should keep a close eye on UK AI policy. Even limited statutory requirements for powerful foundation models would have significant implications for businesses in most sectors.

Enzai is here to help

Enzai’s product can help your company deploy AI in accordance with best practices and emerging regulations, standards and frameworks, such as the EU AI Act, the Colorado AI Act, the NIST AI RMF and ISO/IEC 42001. To learn more, get in touch here.
