UK Financial Sector Weighs in on AI/ML Framework
The Bank of England, along with the Prudential Regulation Authority and the Financial Conduct Authority (collectively known as the supervisory authorities), published a discussion paper, DP5/22, on Artificial Intelligence and Machine Learning in October 2022. The purpose of the paper was to deepen the authorities' understanding of, and open a dialogue on, how AI could affect their objectives for the prudential and conduct supervision of financial institutions. The discussion paper formed part of a broader programme of work on AI, which included the AI Public-Private Forum.
The Bank published a feedback statement (FS2/23) on 26 October 2023, which summarises the responses the supervisory authorities received and identifies common themes. It does not propose specific policies or indicate how the authorities intend to clarify, design, or implement any regulatory proposals relating to AI.
DP5/22 received 54 responses from a diverse range of stakeholders across the financial sector, with industry bodies accounting for nearly a quarter of responses and banks a further fifth. The feedback statement notes that there was no significant divergence of opinion between these different types of respondent.
Below is a summary of the key points:
- A regulatory definition of AI would not be useful. Respondents instead preferred principles-based or risk-based approaches
- Regulatory guidance should be ‘live’. In response to rapidly changing AI capabilities, regulators could periodically update guidance and examples of best practice
- Ongoing industry engagement is crucial. Respondents called for continued dialogue between the supervisory authorities and industry, for example through forums such as the AI Public-Private Forum
- More coordination between regulators is needed. The current regulatory landscape is too complex and fragmented, both domestically and internationally
- To address data risks, more alignment is necessary, especially for risks related to fairness, bias, and the management of protected characteristics
- Consumer outcomes are key, especially with respect to ensuring fairness and other ethical dimensions
- The use of third-party models is a concern, and more regulatory guidance would be helpful. Respondents also noted the relevance of the authorities' wider work on third-party risk
- A joined-up approach could help to mitigate risks. Closer collaboration between data management and model risk management teams would be beneficial
- Areas of CP6/22 (the PRA's consultation on model risk management principles for banks) could be clarified or strengthened, particularly to address issues relevant to models with AI characteristics
- Existing firm governance structures are sufficient to address AI risks
Read the full feedback statement on the Bank of England's website.
Build and deploy AI with confidence
Enzai's AI governance platform allows you to build and deploy AI with confidence.
Contact us to begin your AI governance journey.