AI Governance
AI governance is the set of policies, processes, and technical controls that ensure your AI systems behave as intended, can be audited and explained, and comply with relevant regulations. As AI systems become more consequential — making decisions that affect credit, healthcare, hiring, and public services — governance has shifted from a best practice to a regulatory requirement.
VeriProof is built to be the infrastructure layer of your AI governance programme: the component that makes every decision traceable, every record tamper-evident, and every compliance obligation demonstrable from production data.
This section covers the governance frameworks most commonly relevant to AI systems built with large language models. If you’re looking for introductory coverage, start with the EU AI Act overview or the NIST AI RMF overview.
What Is an AI Governance Programme?
A complete AI governance programme covers the full lifecycle of an AI system:
| Phase | Activities | VeriProof role |
|---|---|---|
| Design | Risk classification, intended use documentation, bias assessment, data governance | Documentation feed — provides production baselines for comparison |
| Development | Model selection, training data documentation, test evaluation | Out of scope for VeriProof |
| Deployment | Configuration management, access controls, deployment documentation | Deployment context signing; infrastructure security |
| Production | Continuous monitoring, drift detection, incident response | Core VeriProof use case |
| Review | Periodic audit, compliance evidence generation, model update assessment | Compliance evidence export; time-machine analysis |
| Retirement | Decommissioning records, data deletion | GDPR cryptographic erasure; records retention |
VeriProof is primarily a production and review tool. It does not replace design-time governance activities — it provides the production evidence that makes those design decisions verifiable after deployment.
Regulatory Landscape
The major frameworks and regulations applicable to AI systems in 2026:
| Framework | Jurisdiction | Type | Scope |
|---|---|---|---|
| EU AI Act | European Union | Mandatory regulation | All AI systems placed on the EU market; high-risk systems have specific requirements |
| NIST AI RMF | United States | Voluntary framework | All organisations; adoption increasing in regulated industries and federal contracting |
| HIPAA | United States | Mandatory regulation | AI systems processing Protected Health Information |
| GDPR | EU/EEA + UK (with extraterritorial reach) | Mandatory regulation | Data protection for AI systems processing personal data of EU/UK individuals |
| SOC 2 | US-originated, global acceptance | Voluntary audit standard | Service organisations handling customer data |
VeriProof Governance Capabilities
Immutable Audit Trail
Every AI decision captured through VeriProof is stored with a blockchain-anchored Merkle proof. This means no record can be silently altered — any tampering would break the cryptographic chain and be immediately detectable. This is the foundation that makes all other governance evidence trustworthy.
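A minimal illustration of why tampering is detectable. This is illustrative only — it shows the generic Merkle-root idea, not VeriProof’s actual tree construction or anchoring format:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute a Merkle root over a list of raw leaf records."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

records = [b"decision-1", b"decision-2", b"decision-3", b"decision-4"]
anchored = merkle_root(records)         # this root is what gets anchored on-chain

# Silently editing any record changes the recomputed root, so it no
# longer matches the anchored value and the tampering is detectable.
tampered = [b"decision-1", b"decision-X", b"decision-3", b"decision-4"]
assert merkle_root(tampered) != anchored
```

Because only the 32-byte root needs to be anchored, verification cost stays constant no matter how many records the tree covers.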
Governance Scoring
You define what constitutes a well-governed decision for your use case — confidence thresholds, refusal rates, tone indicators, fairness signals — and VeriProof scores every session against those criteria automatically. This produces a continuous, quantitative governance signal rather than a periodic sampling exercise.
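A sketch of the idea, with made-up criteria and field names — VeriProof’s real scoring configuration is described in the Governance Scoring guide:

```python
# Hypothetical scoring criteria; the actual configuration schema may differ.
CRITERIA = {
    "min_confidence": 0.80,    # model confidence floor
    "max_refusal_rate": 0.05,  # tolerated share of refused requests
}

def governance_score(session: dict) -> float:
    """Score a session from 0.0 to 1.0 as the fraction of criteria satisfied."""
    checks = [
        session["confidence"] >= CRITERIA["min_confidence"],
        session["refusal_rate"] <= CRITERIA["max_refusal_rate"],
    ]
    return sum(checks) / len(checks)

score = governance_score({"confidence": 0.91, "refusal_rate": 0.02})
# both criteria pass, so the score is 1.0
```

Scoring every session this way turns governance from a periodic sampling exercise into a continuous signal you can alert on.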
Compliance Evidence Export
On demand, VeriProof generates structured evidence packages for specific regulatory frameworks. These packages include session records, proof data, governance scores, and attestation material in a form suitable for auditor review.
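The shape of such a package might look like the following sketch. Every field name here is illustrative, not VeriProof’s actual export schema:

```python
import json
from datetime import datetime, timezone

def build_evidence_package(framework, sessions, scores):
    """Assemble an evidence package as JSON (illustrative structure only)."""
    package = {
        "framework": framework,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "session_records": sessions,      # the records under review
        "governance_scores": scores,      # per-session scores
        "attestation": {"record_count": len(sessions)},
    }
    return json.dumps(package, indent=2)

export = build_evidence_package(
    "EU AI Act",
    sessions=[{"id": "sess-1", "proof": "example-proof-data"}],
    scores=[{"id": "sess-1", "score": 0.97}],
)
```

The point of a structured export is that auditors receive machine-checkable records rather than screenshots or ad-hoc spreadsheets.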
GDPR Cryptographic Erasure
The session record lifecycle is designed to satisfy GDPR right-to-erasure requirements without disrupting audit continuity. Erasing a data subject deletes the key material needed to verify their linked sessions — the records remain structurally intact for the audit trail, but the content is permanently inaccessible and unverifiable.
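A self-contained sketch of the pattern: derive a per-subject key with HKDF (RFC 5869), encrypt that subject’s content under it, and "erase" by deleting the key. The toy XOR keystream below stands in for real authenticated encryption and is not VeriProof’s actual scheme:

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869): extract, then counter-based expand."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher for illustration only; not production crypto."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# One key per data subject, derived from a master secret.
master = b"master-secret"
subject_key = hkdf_sha256(master, salt=b"per-tenant-salt", info=b"subject:alice")
ciphertext = xor_stream(subject_key, b"session content for alice")

# With the key the content is recoverable; delete the key and the
# stored record remains, but its content is permanently unreadable.
assert xor_stream(subject_key, ciphertext) == b"session content for alice"
```

The design choice this illustrates: deleting one small key satisfies erasure without rewriting the append-only audit trail that other sessions depend on.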
Getting Started with AI Governance
If you’re building a governance programme from scratch with VeriProof:
- Choose your framework — Start with the regulation most urgent to your organisation (EU AI Act, NIST AI RMF, or SOC 2 are good starting points)
- Integrate the SDK — Capture sessions from your AI pipelines (Getting Started)
- Configure governance scoring — Define the thresholds that matter for your use case (Governance Scoring guide)
- Set up alert rules — Get notified when production behaviour deviates from policy (Alert Rules guide)
- Generate your first evidence package — Validate that the output meets your auditors’ expectations before you need it (Evidence Export guide)
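Steps 3 and 4 connect: an alert rule watches for sessions whose governance score breaches a threshold. A toy sketch of that condition — names are hypothetical, and real rules are configured as described in the Alert Rules guide:

```python
def breaching_sessions(scores: dict, threshold: float = 0.80) -> list:
    """Return ids of sessions whose governance score fell below the
    threshold: the condition an alert rule might fire on."""
    return sorted(sid for sid, score in scores.items() if score < threshold)

alerts = breaching_sessions({"sess-1": 0.95, "sess-2": 0.61, "sess-3": 0.88})
# only "sess-2" falls below the 0.80 threshold
```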
Framework Deep Dives
- EU AI Act — Per-article implementation guidance for Articles 9, 10, 11, 13, and 17
- NIST AI RMF — VeriProof’s role in each of the four RMF functions: GOVERN, MAP, MEASURE, MANAGE
- HIPAA — BAA requirements, PHI minimisation, and audit trail requirements for healthcare AI
- SOC 2 — Control expectations for AI systems under SOC 2 Trust Service Criteria
- GDPR Cryptographic Erasure — Technical deep dive into the HKDF key derivation and erasure workflow
- Evidence Export — Step-by-step guide to generating compliance evidence packages
Next Steps
- Security Overview — platform security architecture
- Compliance Evidence guide — practical guide to evidence generation
- Governance Scoring guide — configuring production monitoring