NIST AI Risk Management Framework
The NIST AI Risk Management Framework (AI RMF 1.0) provides a structured approach to managing risk throughout the lifecycle of AI systems. It is organised into four core functions — GOVERN, MAP, MEASURE, and MANAGE — each broken into categories and sub-categories that align with organisational risk management goals.
VeriProof’s observability and audit capabilities address a distinct set of RMF practices: those that require evidence about production behaviour, not just design-time documentation.
The AI RMF is voluntary in the United States, but adoption is increasingly expected by regulated industries (financial services, healthcare, federal contractors) and is referenced in EO 14110 implementation guidance. The NIST AI RMF Playbook provides suggested actions for each sub-category.
GOVERN
The GOVERN function establishes organisational policies, roles, and accountability structures for AI risk management.
| Sub-category | Description | VeriProof support |
|---|---|---|
| GOVERN 1.1 | Policies and procedures for AI risk are in place | Session evidence artifacts and audit exports supply the production audit data needed to document policy enforcement |
| GOVERN 1.7 | AI risks are communicated to relevant stakeholders | Alert rules enable real-time notification to risk owners when production thresholds are breached |
| GOVERN 2.2 | Risk tolerance statements exist | Governance scoring thresholds translate risk tolerance statements into operational monitoring |
| GOVERN 4.1 | Organisational teams understand their AI risk roles | Role-based access in the Customer Portal enforces separation between data capture, review, and administration |
| GOVERN 6.2 | Policies for data governance are in place | GDPR cryptographic erasure and data subject management support data governance policies |
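The cryptographic erasure referenced in GOVERN 6.2 follows a standard pattern: each data subject's records are encrypted under a per-subject key, and erasure destroys only the key, leaving any immutable copies of the ciphertext unreadable. The sketch below illustrates the pattern with a toy cipher and hypothetical class names; it is not the VeriProof implementation, and production systems should use an AEAD cipher such as AES-GCM.

```python
import hashlib
import secrets


def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream (SHA-256 in counter mode) -- illustration only;
    # real deployments should use a vetted AEAD cipher instead.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]


def xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))


class ErasableStore:
    """Per-subject encryption: deleting a subject's key makes their
    ciphertext permanently unreadable (cryptographic erasure)."""

    def __init__(self):
        self._keys = {}     # subject_id -> key, held in a deletable key store
        self._records = {}  # subject_id -> list of ciphertexts

    def write(self, subject_id: str, plaintext: bytes) -> None:
        key = self._keys.setdefault(subject_id, secrets.token_bytes(32))
        self._records.setdefault(subject_id, []).append(xor(plaintext, key))

    def read(self, subject_id: str) -> list:
        key = self._keys[subject_id]  # raises KeyError after erasure
        return [xor(ct, key) for ct in self._records[subject_id]]

    def erase(self, subject_id: str) -> None:
        # Destroy only the key: ciphertext may remain in immutable storage
        # (e.g. anchored audit logs) yet can no longer be decrypted.
        del self._keys[subject_id]
```

Because only the key is destroyed, this approach is compatible with append-only or blockchain-anchored storage where deleting the records themselves is impossible.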
MAP
The MAP function situates the AI system’s context, intended uses, and known risk factors.
| Sub-category | Description | VeriProof support |
|---|---|---|
| MAP 1.1 | Context for AI deployment is established | Session metadata captures deployment context (model version, adapter type, SDK version) at ingest |
| MAP 2.1 | Scientific findings are applied to system design | Blockchain-anchored records provide the immutable ground truth for comparing system behaviour against design specifications |
| MAP 3.1 | AI system use is monitored post-deployment | Time-series dashboards and governance score trend charts provide continuous post-deployment visibility |
| MAP 5.1 | Impacts to individuals are tracked | Data subject linkage allows session records to be grouped and audited by the individuals they relate to |
MEASURE
The MEASURE function applies analysis and evaluation tools to assess AI risks and performance.
| Sub-category | Description | VeriProof support |
|---|---|---|
| MEASURE 1.1 | Approaches exist to test and evaluate AI systems | Session evidence exports, portal dashboards, and bulk application evidence packages support quantitative review of captured production behaviour |
| MEASURE 2.1 | System performance metrics are captured | Adapter metadata (latency, token counts, confidence scores) and governance scores are captured per session |
| MEASURE 2.5 | Privacy risks are evaluated | The data subject management and erasure workflows address MEASURE 2.5’s privacy risk sub-practices |
| MEASURE 2.8 | AI system output is monitored for trustworthiness | Blockchain anchoring provides a verifiable chain of custody; tampered records fail Merkle verification |
| MEASURE 2.11 | Fairness indicators are tracked | Custom governance dimensions can be defined to capture fairness-relevant outputs (demographic signals, refusal rates) |
| MEASURE 3.3 | Metrics are available for AI impact assessment | Alert rules and governance dashboards support periodic impact assessment reviews |
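The tamper-evidence claim in MEASURE 2.8 rests on a standard Merkle construction: the root hash of a batch of session records is anchored on-chain, and any later edit to a record changes the recomputed root so verification fails. A minimal sketch of how that check works (the hashing scheme here is illustrative, not VeriProof's exact format):

```python
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def merkle_root(leaves: list) -> bytes:
    """Compute the Merkle root of a list of record byte strings."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]


records = [b"session-1", b"session-2", b"session-3", b"session-4"]
anchored = merkle_root(records)           # root committed on-chain at anchor time

# Later verification: recompute the root from the records under review.
assert merkle_root(records) == anchored   # untampered set verifies

tampered = list(records)
tampered[2] = b"session-3 (altered)"
assert merkle_root(tampered) != anchored  # any edit changes the root
```

The verifier needs only the anchored root and the records themselves; no trust in the storage layer is required.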
MANAGE
The MANAGE function prioritises and addresses identified risks; it also includes incident handling and response.
| Sub-category | Description | VeriProof support |
|---|---|---|
| MANAGE 1.1 | Root cause analysis procedures exist | Time-machine queries allow full session replay for incident investigation |
| MANAGE 2.2 | Risk response procedures are in place | Alert rules trigger notifications that initiate response procedures |
| MANAGE 2.4 | Mechanisms for responding to incidents are available | Evidence artifacts can be exported on demand to support incident response documentation |
| MANAGE 3.1 | Risks are communicated with leadership | Alert escalation paths can be configured to ensure risk events reach appropriate stakeholders |
| MANAGE 4.1 | Lessons learned inform future design | Session history provides the longitudinal data for post-incident review and system improvement cycles |
Mapping to NIST SP 800-218A
NIST SP 800-218A (Secure Software Development Practices for Generative AI and Dual-Use Foundation Models) extends NIST SP 800-218 to AI-specific concerns. VeriProof’s audit architecture is relevant to several SP 800-218A practices:
- GENAI-PW-1.1 (Protect model integrity): Blockchain anchoring and immutable audit records provide tamper evidence for the commitment generation path and downstream review
- GENAI-PW-2.1 (Protect inference outputs): Blockchain anchoring provides an immutable record of what the model actually produced, supporting output integrity claims
- GENAI-RV-3.1 (Monitor for unexpected behaviour): Governance scoring and alert rules operationalise continuous anomaly detection as recommended
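The tamper-evidence property behind the first two items can also be sketched as a hash chain, where each audit record commits to its predecessor's digest, so editing any record in place breaks every subsequent link. This is a generic illustration of the technique, not VeriProof's record format.

```python
import hashlib
import json

GENESIS = b"\x00" * 32  # well-known starting hash for the chain


def chain_records(records: list) -> list:
    """Link audit records so each entry commits to the previous hash."""
    prev = GENESIS
    chained = []
    for rec in records:
        body = json.dumps(rec, sort_keys=True).encode()
        digest = hashlib.sha256(prev + body).digest()
        chained.append({"record": rec, "hash": digest.hex()})
        prev = digest
    return chained


def verify_chain(chained: list) -> bool:
    """Recompute every link; any in-place edit breaks verification."""
    prev = GENESIS
    for entry in chained:
        body = json.dumps(entry["record"], sort_keys=True).encode()
        digest = hashlib.sha256(prev + body).digest()
        if digest.hex() != entry["hash"]:
            return False
        prev = digest
    return True
```

Anchoring the final digest on-chain (as VeriProof does with Merkle roots) means a verifier only needs that one anchored value to detect tampering anywhere in the chain.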
Building AI RMF Review Material
The current product does not generate a dedicated NIST AI RMF package from a framework selector. In practice, teams assemble RMF review material from:
- Session evidence JSON or PDF for representative decisions
- Bulk application evidence ZIP exports for broader sample coverage
- Blockchain audit certificates for integrity verification
- Governance dashboards, alert histories, and remediation records from the portal
That gives you concrete production evidence for GOVERN, MEASURE, and MANAGE discussions without overstating the current export surface.
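Assembling these artifacts into a single reviewable package can be as simple as zipping them with a manifest. The sketch below assumes the artifacts have already been exported and are available as bytes; the archive layout and function name are illustrative, not a VeriProof export format.

```python
import io
import json
import zipfile


def build_rmf_package(artifacts: dict) -> bytes:
    """Bundle collected evidence into one ZIP for AI RMF review.

    'artifacts' maps archive paths (e.g. 'sessions/s-42.json') to the
    exported bytes. A manifest.json listing the contents is added so
    reviewers can check the package is complete.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
        manifest = sorted(artifacts)
        for name in manifest:
            z.writestr(name, artifacts[name])
        z.writestr("manifest.json", json.dumps(manifest, indent=2))
    return buf.getvalue()
```

A reviewer can then open the archive, read `manifest.json`, and verify each listed artifact (for example, checking blockchain audit certificates against the anchored roots) without any access to the production portal.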
Next Steps
- EU AI Act — EU regulatory requirements
- Governance Scoring guide — configuring MEASURE practices
- Compliance Monitoring guide — implementing MANAGE practices
- Governance section — per-article and per-framework deep dives