NIST AI Risk Management Framework

The NIST AI Risk Management Framework (AI RMF 1.0) provides a structured approach to managing risk throughout the lifecycle of AI systems. It is organised into four core functions — GOVERN, MAP, MEASURE, and MANAGE — each comprising practices and sub-categories that align with organisational risk management goals.

VeriProof’s observability and audit capabilities address a distinct set of RMF practices: those that require evidence about production behaviour, not just design-time documentation.

The AI RMF is voluntary in the United States, but adoption is increasingly expected in regulated sectors (financial services, healthcare, federal contracting) and is referenced in EO 14110 implementation guidance. The NIST AI RMF Playbook provides suggested actions for each sub-category.


GOVERN

The GOVERN function establishes organisational policies, roles, and accountability structures for AI risk management.

| Practice | Sub-category | VeriProof support |
| --- | --- | --- |
| GOVERN 1.1 | Policies and procedures for AI risk are in place | Session evidence artifacts and audit exports provide production audit data that supports policy enforcement documentation |
| GOVERN 1.7 | AI risks are communicated to relevant stakeholders | Alert rules enable real-time notification to risk owners when production thresholds are breached |
| GOVERN 2.2 | Risk tolerance statements exist | Governance scoring thresholds translate risk tolerance statements into operational monitoring (sketched below) |
| GOVERN 4.1 | Organisational teams understand their AI risk roles | Role-based access in the Customer Portal enforces separation between data capture, review, and administration |
| GOVERN 6.2 | Policies for data governance are in place | GDPR cryptographic erasure and data subject management support data governance policies |
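
To make GOVERN 2.2 concrete, the sketch below shows one way a written risk tolerance statement could be expressed as a governance scoring threshold with an alert rule attached (covering GOVERN 1.7 as well). The dictionary shapes and field names are illustrative assumptions, not VeriProof's documented configuration format.

```python
# Hypothetical illustration: encoding a risk tolerance statement as an
# operational threshold (GOVERN 2.2) plus an alert rule that notifies the
# risk owner when production breaches it (GOVERN 1.7). Field names are
# assumptions, not VeriProof's documented configuration schema.

# "Sessions scoring below 0.70 on policy compliance must be reviewed within 24 hours."
risk_tolerance = {
    "dimension": "policy_compliance",
    "minimum_score": 0.70,
    "review_sla_hours": 24,
}

alert_rule = {
    "name": "policy-compliance-breach",
    "condition": f"governance_score.{risk_tolerance['dimension']} "
                 f"< {risk_tolerance['minimum_score']}",
    "notify": ["risk-owner@example.com"],  # GOVERN 1.7: stakeholder notification
    "severity": "high",
}
```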

MAP

The MAP function situates the AI system’s context, intended uses, and known risk factors.

| Practice | Sub-category | VeriProof support |
| --- | --- | --- |
| MAP 1.1 | Context for AI deployment is established | Session metadata captures deployment context (model version, adapter type, SDK version) at ingest (illustrated below) |
| MAP 2.1 | Scientific findings are applied to system design | Blockchain-anchored records provide the immutable ground truth for comparing system behaviour against design specifications |
| MAP 3.1 | AI system use is monitored post-deployment | Time-series dashboards and governance score trend charts provide continuous post-deployment visibility |
| MAP 5.1 | Impacts to individuals are tracked | Data subject linkage allows session records to be grouped and audited by the individuals they relate to |
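
As a concrete picture of what MAP 1.1 and MAP 5.1 rely on, the sketch below shows the kind of deployment context and data-subject linkage a session record might carry at ingest. Field names are illustrative assumptions, not VeriProof's exact schema.

```python
from datetime import datetime, timezone

# Illustrative session record: deployment context captured at ingest
# (MAP 1.1) and the data-subject linkage that lets records be grouped
# and audited per individual (MAP 5.1). Field names are assumptions,
# not VeriProof's exact schema.
session_record = {
    "session_id": "sess-example-001",
    "captured_at": datetime.now(timezone.utc).isoformat(),
    "context": {
        "model_version": "chat-model-2025-01",
        "adapter_type": "openai-chat",
        "sdk_version": "1.4.2",
    },
    "data_subject_id": "subject-0042",  # MAP 5.1: groups sessions by individual
}
```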

MEASURE

The MEASURE function applies analysis and evaluation tools to assess AI risks and performance.

| Practice | Sub-category | VeriProof support |
| --- | --- | --- |
| MEASURE 1.1 | Approaches exist to test and evaluate AI systems | Session evidence exports, portal dashboards, and bulk application evidence packages support quantitative review of captured production behaviour |
| MEASURE 2.1 | System performance metrics are captured | Adapter metadata (latency, token counts, confidence scores) and governance scores are captured per session |
| MEASURE 2.5 | Privacy risks are evaluated | Data subject management and the erasure workflow address the privacy risk sub-practices of MEASURE 2.5 |
| MEASURE 2.8 | AI system output is monitored for trustworthiness | Blockchain anchoring provides a verifiable chain of custody; tampered records fail Merkle verification (see the sketch below) |
| MEASURE 2.11 | Fairness indicators are tracked | Custom governance dimensions can be defined to capture fairness-relevant outputs (demographic signals, refusal rates) |
| MEASURE 3.3 | Metrics are available for AI impact assessment | Alert rules and governance dashboards support periodic impact assessment reviews |
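
The MEASURE 2.8 row rests on a simple property: if a record is altered after capture, its leaf hash changes, so the recomputed Merkle root no longer matches the anchored one. The sketch below is a generic Merkle proof check in that spirit, not VeriProof's actual verification code.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_proof(record: bytes, proof: list[tuple[bytes, str]], root: bytes) -> bool:
    """Recompute the root from a record and its sibling path; a tampered
    record produces a different leaf hash, so the recomputed root no
    longer matches the anchored one."""
    node = sha256(record)
    for sibling, side in proof:  # side: which side the sibling node sits on
        node = sha256(sibling + node) if side == "left" else sha256(node + sibling)
    return node == root

# Two-leaf example: tampering with the record breaks verification.
record = b'{"output": "approved"}'
sibling = sha256(b'{"output": "denied"}')
root = sha256(sha256(record) + sibling)
assert verify_merkle_proof(record, [(sibling, "right")], root)
assert not verify_merkle_proof(b'{"output": "APPROVED"}', [(sibling, "right")], root)
```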

MANAGE

The MANAGE function prioritises and addresses identified risks; it also includes incident handling and response.

| Practice | Sub-category | VeriProof support |
| --- | --- | --- |
| MANAGE 1.1 | Root cause analysis procedures exist | Time-machine queries allow full session replay for incident investigation (sketched below) |
| MANAGE 2.2 | Risk response procedures are in place | Alert rules trigger notifications that initiate response procedures |
| MANAGE 2.4 | Mechanisms for responding to incidents are available | Evidence artifacts can be exported on demand to support incident response documentation |
| MANAGE 3.1 | Risks are communicated with leadership | Alert escalation paths can be configured to ensure risk events reach appropriate stakeholders |
| MANAGE 4.1 | Lessons learned inform future design | Session history provides the longitudinal data for post-incident review and system improvement cycles |
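
As a minimal picture of the MANAGE 1.1 workflow, the sketch below filters captured session records down to an incident window and a breached governance score, the shape of question a time-machine query answers. The record fields are illustrative assumptions; the real query runs in the portal, not client-side.

```python
from datetime import datetime, timezone

# Minimal sketch of incident triage (MANAGE 1.1): narrow captured sessions
# to the incident window, keep those that breached the governance
# threshold, and hand them to root cause analysis. Record fields are
# illustrative assumptions, not VeriProof's schema.
def sessions_for_incident(records, start, end, threshold=0.70):
    return [
        r for r in records
        if start <= r["captured_at"] <= end
        and r["governance_score"] < threshold
    ]

window_start = datetime(2025, 3, 1, 14, 0, tzinfo=timezone.utc)
window_end = datetime(2025, 3, 1, 15, 30, tzinfo=timezone.utc)
records = [
    {"session_id": "sess-007",
     "captured_at": datetime(2025, 3, 1, 14, 12, tzinfo=timezone.utc),
     "governance_score": 0.55},
]
print(sessions_for_incident(records, window_start, window_end))
```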

Mapping to NIST SP 800-218A

NIST SP 800-218A (Secure Software Development Practices for Generative AI and Dual-Use Foundation Models) extends NIST SP 800-218 to AI-specific concerns. VeriProof’s audit architecture is relevant to several SP 800-218A practices:

  • GENAI-PW-1.1 (Protect model integrity): Blockchain anchoring and immutable audit records provide tamper evidence for the commitment generation path and downstream review
  • GENAI-PW-2.1 (Protect inference outputs): Blockchain anchoring provides an immutable record of what the model actually produced, supporting output integrity claims (a commitment sketch follows this list)
  • GENAI-RV-3.1 (Monitor for unexpected behaviour): Governance scoring and alert rules operationalise the continuous anomaly detection the practice recommends
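
To illustrate the integrity claim behind GENAI-PW-1.1 and GENAI-PW-2.1, the sketch below uses a generic salted hash commitment: the digest can be anchored publicly while the output stays private, and any later change to the output fails verification. This is a standard commit/verify pattern shown for illustration, not VeriProof's actual anchoring scheme.

```python
import hashlib
import os

# Generic salted hash commitment over an inference output. The digest is
# what would be anchored on-chain; the salt blinds low-entropy outputs so
# the commitment reveals nothing on its own. Assumed for illustration,
# not VeriProof's exact scheme.
def commit(output: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + output.encode("utf-8")).digest()
    return digest, salt

def verify(output: str, salt: bytes, digest: bytes) -> bool:
    return hashlib.sha256(salt + output.encode("utf-8")).digest() == digest

digest, salt = commit("loan application approved")
assert verify("loan application approved", salt, digest)
assert not verify("loan application denied", salt, digest)
```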

Building AI RMF Review Material

The current product does not generate a dedicated NIST AI RMF package from a framework selector. In practice, teams assemble RMF review material from:

  • Session evidence JSON or PDF for representative decisions
  • Bulk application evidence ZIP exports for broader sample coverage
  • Blockchain audit certificates for integrity verification
  • Governance dashboards, alert histories, and remediation records from the portal

That gives you concrete production evidence for GOVERN, MEASURE, and MANAGE discussions without overstating the current export surface.
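
A minimal sketch of that assembly step, assuming the artifacts above have already been exported to local files (the file names are placeholders):

```python
import zipfile
from pathlib import Path

# Bundle exported artifacts into a single archive for an RMF review.
# File names are placeholders for whatever exports were pulled from the
# portal; nothing here is a dedicated VeriProof feature.
artifacts = [
    "session-evidence.json",      # representative decision evidence
    "application-evidence.zip",   # bulk export for sample coverage
    "audit-certificate.pdf",      # blockchain integrity certificate
]

with zipfile.ZipFile("ai-rmf-review-package.zip", "w") as bundle:
    for name in artifacts:
        if Path(name).exists():
            bundle.write(name)
```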

