
Article 13 — Transparency and Provision of Information to Deployers

Article 13 requires that high-risk AI systems be designed to enable deployers to understand, interpret, and use the system appropriately. Providers must also supply instructions for use that meet specific content requirements.


What the Article Requires

Article 13 has two related obligations:

Transparency by design — The system must be designed so that its outputs can be interpreted by deployers and, where applicable, by the natural persons affected. This means the system must produce outputs that can be explained, not just delivered.

Instructions for use — Providers must supply instructions covering:

  • Identity and contact details of the provider
  • Capabilities and limitations of the system
  • Circumstances that could lead to risks to health, safety, or fundamental rights
  • Performance levels and testing conditions
  • Human oversight measures (who is responsible, what they should check)
  • Maintenance and care requirements

VeriProof’s Role in Article 13 Compliance

Article 13 transparency obligations are primarily met through product design and documentation — what your system outputs, how it explains its decisions, and what documentation you provide to deployers. These are your responsibilities.

VeriProof contributes to Article 13 in three ways:

1. Decision Traceability Infrastructure

For an AI system to be interpretable, its outputs must be attributable to specific inputs and reasoning steps. VeriProof captures the full session record — inputs, intermediate steps, tool calls, and final outputs — creating the audit record that supports post-hoc explanation.

When a deployer or affected individual asks “why did the system produce this output?”, the session record from VeriProof provides the factual basis for that explanation.
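To make the idea of a session record concrete, here is a minimal sketch of what such a record might contain. The class and field names (`SessionRecord`, `intermediate_steps`, `tool_calls`, and so on) are illustrative assumptions for this example, not VeriProof's actual capture schema.

```python
from dataclasses import dataclass, field
from typing import Any

# Hypothetical session record; field names are assumptions for
# illustration, not VeriProof's documented schema.
@dataclass
class SessionRecord:
    session_id: str
    inputs: dict[str, Any]
    intermediate_steps: list[dict[str, Any]] = field(default_factory=list)
    tool_calls: list[dict[str, Any]] = field(default_factory=list)
    final_output: str = ""

    def explain(self) -> str:
        """Summarise the recorded chain from input to output."""
        steps = len(self.intermediate_steps) + len(self.tool_calls)
        return (f"Session {self.session_id}: {steps} recorded step(s) "
                f"between input and final output")

record = SessionRecord(
    session_id="sess-001",
    inputs={"prompt": "Assess loan application"},
    intermediate_steps=[{"step": "retrieve_policy"}],
    tool_calls=[{"tool": "credit_score_lookup"}],
    final_output="Refer to human reviewer",
)
print(record.explain())
```

The point of the structure is that every final output is linked back to the inputs and intermediate steps that produced it, which is what makes a post-hoc answer to "why this output?" possible.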

2. Performance Documentation

Article 13(3)(b)(ii) requires instructions to include the system's actual level of accuracy, robustness, and cybersecurity, including the metrics used. The Application Workspace in the Customer Portal shows longitudinal governance score history — the Dashboard tab displays score trends over time, while the Governance Coverage tab breaks down performance by dimension.

Use the Compliance → Evidence Exports tab to download a signed evidence package covering the report period. The package includes session-level governance score summaries, percentile distributions, and trend charts that give you production data to complement pre-deployment benchmark results. Include these in your instructions for use to satisfy the Article 13(3)(b)(ii) requirement with evidence from real production operation, not just test benchmarks.
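If you post-process an evidence export before folding it into your instructions for use, the summary step can be sketched as follows. The JSON layout shown here (a `report_period` field and a `sessions` array with `governance_score` values) is an assumption for illustration, not VeriProof's documented export format.

```python
import json
import statistics

# Hypothetical evidence-export fragment; the JSON layout is an
# assumption, not VeriProof's documented export format.
export = json.loads("""
{
  "report_period": "2024-Q3",
  "sessions": [
    {"id": "s1", "governance_score": 0.92},
    {"id": "s2", "governance_score": 0.88},
    {"id": "s3", "governance_score": 0.95},
    {"id": "s4", "governance_score": 0.81}
  ]
}
""")

scores = [s["governance_score"] for s in export["sessions"]]

# Condense per-session scores into the figures you would quote in the
# performance section of your instructions for use.
summary = {
    "period": export["report_period"],
    "sessions": len(scores),
    "mean_score": round(statistics.mean(scores), 3),
    "min_score": min(scores),
}
print(summary)
```

Whatever the real export format turns out to be, the principle is the same: quote production figures alongside their reporting period and sample size, so deployers can judge how representative they are.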

3. Capability and Limitation Monitoring

Your instructions for use must describe circumstances that could lead to errors or unexpected outputs. VeriProof’s alert trigger history and governance score analytics provide the empirical basis for identifying these circumstances in production. Open Monitoring in the Customer Portal and select the Analytics tab. The analytics view shows alert trigger frequency by rule name, severity distribution, and common session characteristics across flagged sessions — revealing input patterns that correlate with quality issues.

For a detailed review, switch to the Trigger History tab, filter by severity (Medium, High, Critical) and date range, and open individual triggers to inspect the sessions involved. Look for patterns in input length, topic, or metadata that appear repeatedly across flagged sessions.

This systematic review of flagged sessions directly informs the “circumstances that could lead to risks” section of your instructions for use.
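The pattern-finding step above can be sketched as a simple tally over exported flagged-session records. The record fields (`severity`, `topic`, `input_chars`) and the 2,000-character bucket boundary are assumptions for this example, not fields of VeriProof's actual trigger export.

```python
from collections import Counter

# Hypothetical flagged-session records; field names and values are
# assumptions for illustration.
flagged = [
    {"severity": "High", "topic": "pricing", "input_chars": 3200},
    {"severity": "High", "topic": "pricing", "input_chars": 2900},
    {"severity": "Medium", "topic": "support", "input_chars": 450},
    {"severity": "Critical", "topic": "pricing", "input_chars": 4100},
]

def length_bucket(chars: int) -> str:
    # Arbitrary illustrative boundary, not a VeriProof default.
    return "long (>2000 chars)" if chars > 2000 else "short (<=2000 chars)"

topics = Counter(s["topic"] for s in flagged)
lengths = Counter(length_bucket(s["input_chars"]) for s in flagged)

# The dominant topic and length bucket point at a circumstance worth
# documenting as a known limitation.
print(topics.most_common(1))
print(lengths.most_common(1))
```

In this toy data, long pricing-related inputs dominate the flagged set — exactly the kind of finding that translates into a sentence in the limitations section of your instructions for use.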


Human Oversight and Article 14

Article 13 works closely with Article 14 (Human Oversight). The instructions for use must specify the human oversight measures you require deployers to implement.

VeriProof supports the evidence requirements for human oversight where your deployers review flagged sessions. If your system routes low-confidence outputs to a human reviewer, create a governance policy in Settings → Governance Policies using the Requires Human Oversight rule type, applied to the relevant application. Set the enforcement mode to Violation so that sessions missing the oversight signal are flagged in your compliance score.

Pair this with a Monitoring alert rule that fires when the oversight rate drops below your SLA, so deviations are caught promptly rather than only at the next evidence export.
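The check such an alert rule performs amounts to a ratio against a threshold. This sketch assumes a 95% SLA and hypothetical session fields (`low_confidence`, `human_reviewed`); neither is a VeriProof default.

```python
# Hypothetical oversight-rate check; the 95% SLA threshold and the
# session fields below are assumptions for illustration.
SLA_OVERSIGHT_RATE = 0.95

sessions = [
    {"id": "s1", "low_confidence": True,  "human_reviewed": True},
    {"id": "s2", "low_confidence": True,  "human_reviewed": False},
    {"id": "s3", "low_confidence": False, "human_reviewed": False},
    {"id": "s4", "low_confidence": True,  "human_reviewed": True},
]

# Only sessions routed to oversight (low confidence) count toward the rate.
routed = [s for s in sessions if s["low_confidence"]]
reviewed = [s for s in routed if s["human_reviewed"]]
rate = len(reviewed) / len(routed) if routed else 1.0

alert = rate < SLA_OVERSIGHT_RATE
print(f"oversight rate {rate:.0%}, alert={alert}")
```

Computing the rate over routed sessions only, rather than all sessions, is the design choice that matters: it measures whether the oversight process is being followed, not how often it is needed.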


Article 13 Documentation Checklist

Use this checklist when reviewing your instructions for use:

  • Provider identity and contact information
  • Intended purpose and intended user population
  • Capabilities, including performance metrics from production (from VeriProof evidence export)
  • Known limitations and risks, including circumstances identified through alert analysis
  • Human oversight requirements (who, what to check, what to do when uncertain)
  • Changes requiring notification to VeriProof or your conformity assessment body
  • Data retention and record-keeping requirements for deployers
