MAP Function
The MAP function establishes the operational context of an AI system: where and how it will be used, who it affects, what could go wrong, and how the potential consequences are distributed. A thorough MAP exercise informs the risk thresholds you’ll implement in MEASURE and the response procedures you’ll activate in MANAGE.
Relevant MAP Categories
MAP 1 — Context Establishment
MAP 1.1 Context for AI deployment is established and agreed upon.
VeriProof captures deployment context automatically at session ingest:
- Model metadata: Model name, version, provider, and prompt configuration are captured if your adapter emits them
- SDK version: The SDK and adapter versions are recorded with every session
- Environment: Sessions are tagged with your deployment environment (for example, production or staging)
- Timestamp: Every session includes precise server-side arrival and processing timestamps
This deployment context is searchable and filterable, letting you analyse sessions by deployment version, model configuration, or time window — critical for isolating issues to specific deployment changes.
To filter sessions by model version, open the application’s Sessions tab in the Customer Portal and use the metadata filters (model name, model version, date range). The filtered list can be exported as CSV for further analysis.
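If you prefer to slice the exported CSV programmatically, a minimal stdlib-only sketch is below. The column names (`session_id`, `model_name`, `model_version`, `environment`, `timestamp`) are assumptions based on the metadata fields listed above; an actual VeriProof export may use different headers.

```python
import csv
import io

# Hypothetical CSV export shape; real VeriProof column names may differ.
SAMPLE_CSV = """session_id,model_name,model_version,environment,timestamp
s-001,gpt-4o,2024-08-06,production,2025-01-03T10:00:00Z
s-002,gpt-4o,2024-11-20,production,2025-01-04T11:30:00Z
s-003,gpt-4o,2024-11-20,staging,2025-01-04T12:00:00Z
"""

def filter_sessions(csv_text, **criteria):
    """Return rows whose columns match every key/value pair in criteria."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [r for r in rows if all(r.get(k) == v for k, v in criteria.items())]

# Isolate production sessions on a specific model version.
prod_v2 = filter_sessions(SAMPLE_CSV, model_version="2024-11-20", environment="production")
print([r["session_id"] for r in prod_v2])  # → ['s-002']
```

Because filtering happens after export, the same CSV can be re-sliced by any combination of metadata columns without re-running the portal query.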
MAP 1.5 Organisational risk tolerances are applied to the system.
Once you’ve established risk tolerances in GOVERN 2.2, MAP 1.5 is where you verify the system is operating within them. Open the application workspace in the Customer Portal and select the Dashboard tab. The governance score trend chart shows weekly average scores over the selected period. Use the date range picker to compare recent sessions against the baseline period.
If the trend shows governance scores consistently near the lower bound of acceptable, your risk tolerance may need recalibration or the underlying system may require intervention.
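The "consistently near the lower bound" check can be made concrete. The sketch below flags weeks whose average score sits within a chosen margin of the tolerance floor; the 0-100 score scale, the bound, and the margin are all assumed values you would replace with your GOVERN 2.2 figures.

```python
# Hypothetical weekly average governance scores (0-100 scale is an assumption).
weekly_scores = {
    "2025-W01": 82.4,
    "2025-W02": 79.1,
    "2025-W03": 76.8,
    "2025-W04": 75.3,
}

LOWER_BOUND = 75.0   # organisational risk tolerance floor (assumed value)
NEAR_MARGIN = 3.0    # "near the lower bound" means within this margin (assumed)

# Weeks that are in tolerance but close enough to the floor to warrant review.
near_bound = {wk: s for wk, s in weekly_scores.items()
              if LOWER_BOUND <= s < LOWER_BOUND + NEAR_MARGIN}
print(near_bound)
```

Two or more consecutive flagged weeks would be the signal, per the guidance above, to recalibrate the tolerance or intervene in the underlying system.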
MAP 2 — Risk Identification in Systems
MAP 2.1 Scientific findings are applied to system design and used as benchmarks.
Your pre-deployment benchmarks and safety evaluations establish the expected risk levels for the system. VeriProof provides the production data to assess whether actual risk levels match your pre-deployment assumptions. Open the application workspace Dashboard tab and adjust the date range to the early deployment window (for example, the first 30 days). Note the mean governance score, then change the date range to the mature period for comparison. Alternatively, go to Compliance → Evidence Exports and generate evidence packages for each period; the score distribution tables in each PDF show mean, median, and distribution for direct comparison.
Statistically significant differences between early deployment scores and mature deployment scores indicate that actual production risk levels differ from what your benchmarks predicted, a signal to revisit your risk analysis.
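One way to test for that difference, once both periods' scores are in hand (for example, from the evidence-package distribution tables), is Welch's t-test, which does not assume equal variances between periods. The score values below are illustrative, not real VeriProof output.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical governance scores sampled from the two comparison periods.
early  = [81, 79, 84, 80, 78, 83, 82, 79]
mature = [74, 76, 73, 75, 77, 72, 74, 75]

t = welch_t(early, mature)
# A |t| well above ~2 is a rough flag for a meaningful shift; for a proper
# p-value use scipy.stats.ttest_ind(early, mature, equal_var=False).
print(round(t, 2))
```

In this illustrative data the mature-period mean is clearly lower than the early-period mean, which would prompt the risk-analysis revisit described above.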
MAP 3 — Risk Analysis
MAP 3.1 AI system use is monitored and the system is re-evaluated when significant changes occur.
Configure a governance score alert to detect score drift. Open Monitoring in the Customer Portal and click + New Rule. Set the metric to governance score, configure a threshold that represents a meaningful drop from your deployment baseline, and set severity to Medium with your AI risk owner as the recipient.
When significant system changes occur (model update, prompt change, new use case), review your alert thresholds and export a new baseline evidence package from Compliance → Evidence Exports to document the state of the system at the time of the change.
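When re-deriving the alert threshold after a change, one common convention (an assumption here, not a built-in VeriProof feature) is to alert when the score drops more than two standard deviations below the new baseline mean. A minimal sketch, using illustrative baseline scores:

```python
from statistics import mean, stdev

# Hypothetical governance scores from the post-change baseline evidence package.
baseline = [78, 81, 80, 79, 82, 77, 80, 81, 79, 80]

# Alert threshold = baseline mean minus two standard deviations (assumed
# convention; tighten or loosen the multiplier to match your risk tolerance).
threshold = round(mean(baseline) - 2 * stdev(baseline), 1)
print(threshold)
```

The resulting number is what you would enter as the governance score threshold in the Monitoring rule described above, replacing the pre-change value.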
MAP 5 — Impact Assessment
MAP 5.1 Likelihood and magnitude of each AI risk is assessed and documented.
Data subject linkage in VeriProof allows you to analyse impact at the level of specific individuals, not just aggregate statistics. Open Compliance → Privacy & Data Rights in the Customer Portal. The Data Subjects tab shows the total number of linked subjects and lets you filter by flags such as whether the subject has governance alerts. The Data Retention tab shows session distribution across subjects to help you assess concentration risk (users with disproportionately high or low interaction volumes).
This supports MAP 5.1’s requirement to assess the magnitude of risks — a system that produces low-quality outputs for a small number of highly reliant users may present a higher impact risk than aggregate statistics suggest.
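Concentration risk of this kind can be quantified from the sessions-per-subject distribution. The sketch below computes the share of all sessions attributable to the heaviest user; the subject IDs and counts are invented for illustration.

```python
# Hypothetical sessions-per-subject counts, as shown in the Data Retention tab.
sessions_per_subject = {
    "subj-a": 220,
    "subj-b": 35,
    "subj-c": 30,
    "subj-d": 10,
    "subj-e": 5,
}

total = sum(sessions_per_subject.values())
counts = sorted(sessions_per_subject.values(), reverse=True)

# Share of all sessions from the single heaviest user: a crude concentration
# metric. A high value means aggregate scores can hide per-user impact risk.
top_share = counts[0] / total
print(f"{top_share:.0%}")  # → 73%
```

Here one subject accounts for roughly three quarters of all sessions, exactly the situation where a low-quality-output risk to a small, highly reliant group outweighs what aggregate statistics suggest.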
MAP Evidence in Packages
The MAP section of a VeriProof AI RMF evidence package includes:
- Deployment context summary (model versions, SDK versions, session volumes by deployment)
- Governance score baseline and drift analysis for the period
- Data subject coverage (number of subjects, sessions per subject distribution)
Next Steps
- MEASURE function — quantitative risk metrics
- MANAGE function — incident response
- Article 9 — Risk Management — EU AI Act parallel for risk analysis