The NIST AI Risk Management Framework

An overview of the NIST AI RMF and the steps needed to ensure AI governance compliance.
Max Cluer
22 April 2024

As artificial intelligence becomes ubiquitous across industries, so too do the risks associated with AI systems. From biased algorithms to data privacy concerns, businesses must proactively manage AI risks to maintain trust with stakeholders and realise the full potential of this transformative technology. Fortunately, there are established best practices to help - one of the most important being the National Institute of Standards and Technology's (NIST) AI Risk Management Framework (the “AI RMF”). NIST, which is an agency of the US Department of Commerce, released the AI RMF in January 2023 and it has quickly become one of the leading standards frameworks for ensuring AI is responsible and trustworthy.

Though adherence is voluntary, we have noticed an increasing number of organisations (particularly in the US) adopting the AI RMF when building out their AI programmes. In this blog, we: (1) set out an overview of the AI RMF; (2) provide some guidance on whether you should adopt it; and (3) offer some practical tips for doing so.

An Overview of the AI RMF

The AI RMF applies to ‘AI Actors’: the businesses and individuals who play an active role in designing, developing and/or using AI systems. Therefore, if your company makes use of AI, or you are personally involved in AI workstreams, the framework is directly relevant to you.

The AI RMF can be broken down into two broad sections, “Foundational Information” and “Core and Profiles”. The requirements of each are set out below.

Foundational Information

Foundational Information deals with the potential risks and harms of AI usage. Businesses following the guidelines must put processes in place to track and measure risks as they emerge, prioritise them, and assess them against the organisation’s overall risk tolerance. The aim is to ensure responsible AI use through a robust and proactive risk management process.
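To make this concrete, a simple risk register is one way to operationalise this part of the framework. The sketch below (in Python, with entirely hypothetical names and scoring; the AI RMF does not prescribe any particular implementation) tracks emerging risks, scores them, and prioritises anything that exceeds an illustrative organisation-wide tolerance threshold:

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in a hypothetical AI risk register."""
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring; real programmes may use
        # richer, context-specific methodologies.
        return self.likelihood * self.impact

RISK_TOLERANCE = 8  # illustrative organisation-wide threshold

register = [
    AIRisk("Training data bias", likelihood=4, impact=4),
    AIRisk("Data privacy breach", likelihood=2, impact=5),
    AIRisk("Model performance drift", likelihood=3, impact=2),
]

# Prioritise: anything above tolerance needs an active mitigation plan.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    action = "mitigate" if risk.score > RISK_TOLERANCE else "monitor"
    print(f"{risk.name}: score {risk.score} -> {action}")
```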

Core and Profiles

Core and Profiles, the second part of NIST’s framework, focuses on how to put such a system into practice. Here, NIST sets out the four core functions of an effective AI risk management system. These are:

  • Govern - create a culture of risk management within a business;
  • Map - build the processes to identify the risks and potential risks of AI usage;
  • Measure - assess the potential impact of these risks on relevant stakeholders; and 
  • Manage - adopt risk treatment and risk mitigation activities. 

Finally, the AI RMF sets out the ‘Profiles’ method of establishing and assessing risk in a specific use case, within the context of the overarching risk management programme. For example, an AI system that helps Human Resources teams sort through candidate CVs would constitute a Profile that you should govern, map, measure and manage within your wider AI risk management programme.
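As a rough illustration of how a Profile might be recorded (the field names and format below are our own invention, not an official NIST schema), the CV-screening example could be expressed as a structured record tied to the four core functions:

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    """Illustrative record of an AI RMF Profile for one use case."""
    use_case: str
    govern: list[str] = field(default_factory=list)   # accountability and policy
    map: list[str] = field(default_factory=list)      # identified risks
    measure: list[str] = field(default_factory=list)  # assessment methods
    manage: list[str] = field(default_factory=list)   # mitigation activities

cv_screening = Profile(
    use_case="HR candidate CV screening",
    govern=["Named owner in HR", "Covered by the responsible AI policy"],
    map=["Bias against protected groups", "Candidate data privacy"],
    measure=["Quarterly disparate-impact testing", "Privacy impact assessment"],
    manage=["Human review of all rejections", "Retraining on balanced data"],
)
```

Keeping each Profile in a consistent, structured form like this makes it straightforward to roll individual use cases up into the organisation-wide view that the framework calls for.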

Should you adopt the NIST AI RMF?

While NIST’s framework is a voluntary standard, unlike legally binding regulations such as the EU AI Act, there are clear benefits to putting in the effort necessary to comply:

  1. Understand the risks involved with your AI: almost all businesses now use AI in at least some of their functions, yet many risk leaders within these companies remain unaware of the risks being created. Working through the AI RMF will bring these to light and allow teams to mitigate them.
  2. Build trust in your AI: one of the core tenets behind the framework is the need for companies to build trust among their stakeholders, which is often lacking when it comes to AI. Aligning with a standard published by a well-respected body such as NIST signals to those stakeholders that your AI usage is responsible and trustworthy.
  3. Increase AI usage to boost productivity and profit: many companies see the impressive benefits that AI can bring but are afraid to maximise usage due to unclear risks and worries about accountability. Adherence to NIST’s framework can assuage those fears, increase the use of AI, and lead to enhanced productivity and profit within a business.

Despite these strong reasons in favour of adherence to NIST’s framework, there are situations where it might not make sense for businesses to seek to comply:

  1. Some other frameworks are more prescriptive: the NIST AI RMF is a helpful guide for AI risk management, but it leaves organisations with a lot of work to do to interpret the requirements and apply them to their own business. This space is moving quickly and, since the publication of the AI RMF, many other standards and regulations have emerged. Some of these, such as ISO 42001, are more prescriptive in nature and (in our experience) easier to adopt. Further, the NIST AI RMF on its own will not ensure compliance with the emerging AI regulatory landscape (such as the EU AI Act), nor with industry-specific guidelines (such as SR 11-7 and the UK PRA rules on model risk management).
  2. Companies which don’t make use of AI: if the only usage of AI within your organisation is the occasional employee using a public LLM, such as ChatGPT or Claude, to polish an email, it might not be worth complying with the framework. Here, it might make more sense to adopt a lightweight governance approach by establishing an AI policy to govern how the people in your organisation interact with these tools.

Moving Forward

Although the NIST guidelines remain voluntary, we foresee their adoption by enterprises continuing to expand from the significant level already achieved. Indeed, for large businesses which make comprehensive use of AI solutions, compliance with the NIST AI RMF (or similarly robust standards such as ISO 42001) is likely to become almost a requirement from stakeholders, including employees, customers, and shareholders.

We have set out some practical tips below that you can use to get moving with the NIST AI RMF.

  1. Secure executive buy-in: ensure that senior leadership understands the importance of AI risk management and supports the adoption of the NIST AI RMF. Their backing will be crucial for allocating resources and driving organisation-wide adherence.
  2. Establish an AI governance structure: form a cross-functional AI governance committee or assign roles and responsibilities for overseeing the implementation of the framework. This may include representatives from IT, legal, risk management, ethics, and relevant business units.
  3. Create an AI inventory: catalogue all of your organisation's AI systems and projects, including those in development, to get a comprehensive view of your AI landscape. This will help you prioritise risk assessment efforts (see the sketch after this list).
  4. Assess risks for each AI system: for each AI system or use case, conduct a thorough risk assessment covering data privacy, security, fairness, transparency, and other key areas outlined in the NIST AI RMF. Use the "Profiles" methodology to tailor the assessment to each specific context.
  5. Develop risk mitigation plans: based on the risk assessment results, create actionable plans to mitigate identified risks for each AI system. This may include technical measures (e.g., bias testing, security controls) as well as organisational measures (e.g., policies, training).
  6. Integrate with existing risk management processes: align AI risk management with your organisation's overall risk management framework and processes. This will help ensure a consistent, integrated approach and avoid silos.
  7. Engage stakeholders: involve relevant stakeholders, such as end-users, customers, and regulators, in the risk assessment and mitigation process as appropriate. Solicit their feedback and communicate transparently about your AI risk management efforts to build trust.
  8. Monitor and reassess continuously: AI risk management is an ongoing process. Continuously monitor AI systems for emerging risks, and regularly reassess risk profiles as the technology, use cases, and regulatory landscape evolve.
  9. Provide AI ethics training: educate employees involved in AI development and deployment on AI ethics principles and the requirements of the NIST AI RMF. This will help embed responsible AI practices into your organisation's culture.
  10. Leverage AI governance tools: consider using AI governance platforms, like Enzai, to streamline and automate aspects of AI risk management. These tools can help you efficiently assess systems against the NIST AI RMF and other standards, track risk mitigation actions, and generate audit trails.
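As a concrete starting point for steps 3 and 4, the sketch below shows one way an AI inventory might be structured so that each entry can be prioritised for assessment. All names and fields are hypothetical; this is an illustration rather than an official NIST schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class InventoryEntry:
    """One AI system in a hypothetical organisation-wide inventory."""
    system: str
    owner: str
    status: str                    # e.g. "in development", "deployed"
    last_assessed: Optional[date]  # None if never assessed
    open_risks: int                # unresolved findings from past assessments

inventory = [
    InventoryEntry("CV screening model", "HR", "deployed", date(2024, 1, 15), 2),
    InventoryEntry("Customer support chatbot", "Operations", "in development", None, 0),
]

def needs_attention(entry: InventoryEntry) -> bool:
    # Prioritise systems that have never been assessed or carry open risks.
    return entry.last_assessed is None or entry.open_risks > 0

for entry in filter(needs_attention, inventory):
    print(f"Review needed: {entry.system} (owner: {entry.owner})")
```

Even a lightweight inventory like this gives the governance committee a single place to see what AI is in use, who owns it, and where assessment effort should go next.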

Remember, adopting the NIST AI RMF is a journey. Start with your highest-priority AI systems and gradually expand the scope of your risk management efforts over time. Regularly review and refine your approach based on lessons learned and evolving best practices.

Click here to book a demo and learn more about how your business can comply with NIST’s AI Risk Management Framework using Enzai’s AI governance technologies.
