
Multi-Turn Workflow

Orchestrate three sequential Pydantic AI agents — a classifier, a risk assessor, and a decision agent — under a single VeriProof session. Each agent run is a separate step with its own governance metadata, but all steps share a single Merkle-sealed audit trail.

Python · Multi-agent · Python ≥ 3.10

Prerequisites

pip install veriproof-sdk veriproof-sdk-instrumentation-pydantic-ai pydantic-ai

Environment

VERIPROOF_API_KEY=vp_live_...
VERIPROOF_APPLICATION_ID=loan-workflow
ANTHROPIC_API_KEY=sk-ant-...

Complete example

import asyncio
import os

from veriproof_sdk import (
    configure_veriproof,
    VeriproofClientOptions,
    AgentRole,
    DecisionType,
    HumanOversightType,
    RiskLevel,
    SessionIntent,
    SessionOutcome,
    StepOutcome,
    StepType,
)
from veriproof_sdk_instrumentation_pydantic_ai import VeriproofPydanticAISession
from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModel

configure_veriproof(
    VeriproofClientOptions(
        api_key=os.environ["VERIPROOF_API_KEY"],
        application_id=os.environ["VERIPROOF_APPLICATION_ID"],
    ),
    service_name=os.environ["VERIPROOF_APPLICATION_ID"],
    set_global=True,
)

# ── Agent definitions ──────────────────────────────────────────────────────

classify_agent = Agent(
    AnthropicModel("claude-3-5-haiku-20241022"),
    system_prompt=(
        "Classify the loan application. "
        "Return one of: STANDARD | COMPLEX | HIGH_RISK. "
        "Respond with the classification only."
    ),
)

risk_agent = Agent(
    AnthropicModel("claude-3-5-haiku-20241022"),
    system_prompt=(
        "You are a credit risk analyst. "
        "Given a loan classification and financials, produce a risk score (0–100) "
        "and list up to 3 risk factors. Format: SCORE:<n> FACTORS:<factor1>|<factor2>"
    ),
)

decision_agent = Agent(
    AnthropicModel("claude-3-5-sonnet-20241022"),
    system_prompt=(
        "You are a senior loan officer. "
        "Based on the classification and risk assessment, make a final APPROVE or DECLINE decision. "
        "State the reason in one sentence."
    ),
)


async def process_loan_application(applicant_id: str, credit_score: int, income: float) -> dict:
    """Run the full three-step loan decision workflow under a single VeriProof session."""
    async with VeriproofPydanticAISession(
        application_id=os.environ["VERIPROOF_APPLICATION_ID"],
        session_name=f"loan-workflow:{applicant_id}",
        intent=SessionIntent.DECISION_SUPPORT,
        human_oversight_type=HumanOversightType.HUMAN_ON_THE_LOOP,
        subject_id=f"applicant:{applicant_id}",
    ) as session:
        application_summary = (
            f"Applicant: {applicant_id}. "
            f"Credit score: {credit_score}. "
            f"Annual income: ${income:,.0f}. "
            f"Requested loan: $25,000. Debt-to-income ratio: 0.34."
        )

        # ── Step 1: Classify ──────────────────────────────────────────────
        await session.add_step(
            step_name="classify-application",
            step_type=StepType.CLASSIFICATION,
            agent_role=AgentRole.CLASSIFIER,
            input={"applicant_id": applicant_id, "credit_score": credit_score},
        )
        classify_result = await session.run(classify_agent, application_summary)
        classification = classify_result.data.strip()
        await session.complete_step(
            step_name="classify-application",
            output={"classification": classification},
            outcome=StepOutcome.COMPLETED,
        )

        # ── Step 2: Risk assessment ───────────────────────────────────────
        await session.add_step(
            step_name="assess-risk",
            step_type=StepType.ANALYSIS,
            agent_role=AgentRole.ASSESSOR,
            input={"classification": classification, "credit_score": credit_score},
        )
        risk_prompt = f"{application_summary} Classification: {classification}."
        risk_result = await session.run(risk_agent, risk_prompt)
        risk_text = risk_result.data

        # Parse score from response (e.g. "SCORE:42 FACTORS:high-dti|limited-history")
        risk_score = 50  # default
        if "SCORE:" in risk_text:
            try:
                risk_score = int(risk_text.split("SCORE:")[1].split()[0])
            except (ValueError, IndexError):
                pass

        risk_level = (
            RiskLevel.HIGH if risk_score >= 70
            else RiskLevel.MEDIUM if risk_score >= 40
            else RiskLevel.LOW
        )
        await session.complete_step(
            step_name="assess-risk",
            output={"risk_score": risk_score, "risk_level": risk_level.value},
            risk_level=risk_level,
            outcome=StepOutcome.COMPLETED,
        )

        # ── Step 3: Final decision ────────────────────────────────────────
        await session.add_step(
            step_name="make-decision",
            step_type=StepType.DECISION,
            agent_role=AgentRole.DECISION_MAKER,
            input={"classification": classification, "risk_score": risk_score},
        )
        decision_prompt = (
            f"{application_summary} "
            f"Classification: {classification}. Risk score: {risk_score}/100."
        )
        decision_result = await session.run(decision_agent, decision_prompt)
        decision_text = decision_result.data
        approved = "APPROVE" in decision_text.upper()

        await session.set_decision(
            decision_type=DecisionType.APPROVAL if approved else DecisionType.REJECTION,
            decision_summary=decision_text,
            risk_level=risk_level,
            outcome=StepOutcome.COMPLETED,
        )
        session.set_outcome(SessionOutcome.SUCCESS)

        return {
            "session_id": session.session_id,
            "merkle_root": session.merkle_root,
            "classification": classification,
            "risk_score": risk_score,
            "approved": approved,
            "decision": decision_text,
        }


if __name__ == "__main__":
    result = asyncio.run(
        process_loan_application("A-1042", credit_score=712, income=82000)
    )
    print(f"Session   : {result['session_id']}")
    print(f"Anchor    : {result['merkle_root']}")
    print(f"Class     : {result['classification']}")
    print(f"Risk      : {result['risk_score']}/100")
    print(f"Decision  : {'APPROVED' if result['approved'] else 'DECLINED'}")
    print(f"Reasoning : {result['decision']}")
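The inline "SCORE:" parsing in the example is deliberately minimal. If you want something more forgiving, you can factor it into a small regex-based helper. This is our own sketch (the name `parse_risk_response` is not part of the SDK):

```python
import re

def parse_risk_response(text: str, default_score: int = 50) -> tuple[int, list[str]]:
    """Parse a 'SCORE:<n> FACTORS:<a>|<b>' model response, tolerating extra noise."""
    score_match = re.search(r"SCORE:\s*(\d+)", text)
    score = int(score_match.group(1)) if score_match else default_score
    score = max(0, min(100, score))  # clamp to the documented 0-100 range
    factors_match = re.search(r"FACTORS:\s*(\S+)", text)
    factors = factors_match.group(1).split("|") if factors_match else []
    return score, factors
```

`parse_risk_response("SCORE:42 FACTORS:high-dti|limited-history")` returns `(42, ["high-dti", "limited-history"])`; anything unparseable falls back to the default score and an empty factor list.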

What you’ll see in VeriProof

All three agent runs appear under a single session trace:

session: loan-workflow:A-1042
├── step: classify-application (CLASSIFICATION / CLASSIFIER)
│   └── pydantic_ai.agent (model: claude-3-5-haiku)
├── step: assess-risk (ANALYSIS / ASSESSOR, risk_level=MEDIUM)
│   └── pydantic_ai.agent (model: claude-3-5-haiku)
└── step: make-decision (DECISION / DECISION_MAKER)
    └── pydantic_ai.agent (model: claude-3-5-sonnet)

The Merkle root in session.merkle_root covers all three steps — any after-the-fact change to any step’s data is cryptographically detectable.
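This is the standard Merkle-tree property: each step record is hashed, and hashes are combined pairwise up to a single root, so mutating any step changes the root. A conceptual sketch (SHA-256 over sorted JSON here; VeriProof's actual canonicalization and hashing scheme may differ):

```python
import hashlib
import json

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(steps: list[dict]) -> str:
    """Hash each step record, then combine hashes pairwise up to one root."""
    level = [_h(json.dumps(s, sort_keys=True).encode()) for s in steps]
    while len(level) > 1:
        if len(level) % 2:                     # odd level: duplicate the last node
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()

steps = [
    {"step": "classify-application", "output": "STANDARD"},
    {"step": "assess-risk", "output": 42},
    {"step": "make-decision", "output": "APPROVE"},
]
root = merkle_root(steps)
steps[1]["output"] = 43            # tamper with one step after the fact...
assert merkle_root(steps) != root  # ...and the root no longer matches
```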

The three-agent pattern above is a simple sequential pipeline. For branching or parallel execution, use LangGraph to model your graph and the LangGraph instrumentation adapter to capture it.


Adding a human review gate

If your compliance requirements mandate human sign-off before actioning the decision, add an oversight record to the final step:

await session.set_decision(
    decision_type=DecisionType.APPROVAL if approved else DecisionType.REJECTION,
    decision_summary=decision_text,
    risk_level=risk_level,
    human_oversight_type=HumanOversightType.HUMAN_IN_THE_LOOP,
    human_reviewer_id="reviewer:R-007",
    human_approved=True,  # set after human confirms
    outcome=StepOutcome.COMPLETED,
)

This generates an Article 14 (human oversight) evidence record in VeriProof’s EU AI Act compliance export.
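How the reviewer’s verdict actually reaches your workflow is up to you; VeriProof only records it. One common shape is to block the workflow on a queue fed by your review UI (webhook, ticketing system, etc.). A minimal sketch with asyncio — the queue wiring and helper names here are entirely hypothetical:

```python
import asyncio

async def await_human_approval(review_queue: asyncio.Queue) -> tuple[str, bool]:
    """Block until a reviewer posts (reviewer_id, approved), e.g. from a webhook handler."""
    reviewer_id, approved = await review_queue.get()
    return reviewer_id, approved

async def gated_decision() -> bool:
    queue: asyncio.Queue = asyncio.Queue()
    # In production a webhook handler calls queue.put(...) when the reviewer acts;
    # here we simulate the reviewer approving immediately.
    queue.put_nowait(("reviewer:R-007", True))
    reviewer_id, approved = await await_human_approval(queue)
    # Only after this point would you call session.set_decision(...,
    # human_reviewer_id=reviewer_id, human_approved=approved, ...).
    return approved
```

The workflow simply does not reach `set_decision` until the reviewer acts, so the recorded `human_approved` flag always reflects an actual human verdict.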

