
Using Multiple Adapters

Production AI applications often combine more than one framework in a single process — a LangGraph orchestrator that spawns CrewAI agent crews, an OpenAI Agents SDK pipeline whose tools call a LlamaIndex retriever, or a Next.js route that chains LangChain.js and the Vercel AI SDK. VeriProof handles this automatically.

Because all adapters share a single OpenTelemetry TracerProvider, spans from different frameworks are woven together into one unified trace tree. You see the complete picture — which framework called which, in what order — in the Time Machine view and compliance dashboard.


The fundamental rule: configure once, first

Call the VeriProof setup function once, before importing or initializing any framework adapter. All adapters share the provider configured by that single call.

```python
# telemetry.py — import this module before any AI framework
import os

from veriproof import configure_veriproof
from veriproof_instrumentation_langgraph import instrument_compiled_graph
from veriproof_instrumentation_crewai import instrument_crew

configure_veriproof(
    api_key=os.environ["VERIPROOF_API_KEY"],
    application_id="my-multi-agent-app",
)

# Both instrumentors attach to the same provider — no conflict
instrument_compiled_graph()
instrument_crew()
```

How span nesting works

When framework A calls framework B, OpenTelemetry’s context propagation makes B’s spans appear as children of the currently-active span from A. This works automatically for any supported adapter combination, as long as the setup rule above is followed.
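OpenTelemetry keeps the active span in a Python context variable, which is why propagation works across `await` boundaries without any extra wiring. A stdlib-only sketch of that mechanism (the names here are illustrative, not VeriProof or OpenTelemetry APIs):

```python
import asyncio
import contextvars

# The "current span" lives in a ContextVar, so coroutines awaited from a
# parent automatically see the parent's value.
current_span = contextvars.ContextVar("current_span", default="session-root")

async def framework_b_step() -> str:
    # Framework B reads the active span set by framework A
    return current_span.get()

async def framework_a_node() -> str:
    token = current_span.set("langgraph:plan")
    try:
        return await framework_b_step()
    finally:
        current_span.reset(token)

parent_seen_by_b = asyncio.run(framework_a_node())
# framework B sees "langgraph:plan" as its parent
```

The real SDK does the `set`/`reset` bookkeeping inside each adapter, which is why the only requirement on your side is the configure-once rule above.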

A typical multi-framework trace looks like:

```
Session root
└── LangGraph node: "plan"
    ├── CrewAI task: "ResearchAgent.research_topic"
    │   └── Tool call: "web_search"
    └── CrewAI task: "WriteAgent.write_report"
        └── LLM call: gpt-4o
```

Common adapter combinations

Python: LangGraph + CrewAI

Both are async, so context propagates automatically across await boundaries.

```python
from veriproof import SessionBuilder

async def run_pipeline(user_request: str) -> str:
    async with SessionBuilder(intent="research_report") as session:
        result = await compiled_graph.ainvoke({"request": user_request})
        session.set_outcome("APPROVED")
        return result["report"]
```

If you need to call synchronous CrewAI code from an async LangGraph node, propagate OTel context manually:

```python
import asyncio

from opentelemetry import context as otel_context

async def call_sync_crew(sync_fn, *args):
    ctx = otel_context.get_current()

    def run_with_context():
        token = otel_context.attach(ctx)
        try:
            return sync_fn(*args)
        finally:
            otel_context.detach(token)

    return await asyncio.to_thread(run_with_context)
```

Python: OpenAI Agents SDK + LlamaIndex

LlamaIndex spans from tool calls nest under the OpenAI Agents SDK tool-call span automatically.

```python
from agents import Agent, Runner, function_tool
from llama_index.core import VectorStoreIndex

query_engine = VectorStoreIndex.from_documents([...]).as_query_engine()

@function_tool
async def search_knowledge_base(query: str) -> str:
    """Search the internal knowledge base."""
    # LlamaIndex spans produced here are children of the current tool-call span
    response = await query_engine.aquery(query)
    return str(response)

agent = Agent(name="ResearchAgent", tools=[search_knowledge_base])
result = await Runner.run(agent, messages=[...])
```

TypeScript: LangChain.js + Vercel AI SDK

```typescript
import { SessionBuilder } from '@veriproof/sdk-core';
import { ChatOpenAI } from '@langchain/openai';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

async function processRequest(input: string): Promise<string> {
  return await SessionBuilder.run({ intent: 'content_generation' }, async () => {
    const model = new ChatOpenAI({ model: 'gpt-4o' });
    const summary = await model.invoke([
      { role: 'user', content: `Summarise: ${input}` },
    ]);

    // Vercel AI SDK call inside the same session — spans appear as siblings
    const { text } = await generateText({
      model: openai('gpt-4o-mini'),
      prompt: `Analyse this summary: ${summary.content}`,
    });

    return text;
  });
}
```

.NET: Semantic Kernel + AutoGen

```csharp
[KernelFunction("delegate_to_agent_group")]
public async Task<string> DelegateAsync(string task, CancellationToken ct)
{
    // AutoGen spans produced here nest under the current Semantic Kernel function span
    return await _agentGroupChat.RunConversationAsync(task, ct);
}
```

Supported combinations

| Adapter combination | Language | Status |
| --- | --- | --- |
| LangGraph + CrewAI | Python | Full support |
| LangGraph + OpenAI Agents SDK | Python | Full support |
| OpenAI Agents SDK + LlamaIndex | Python | Full support |
| PydanticAI + CrewAI | Python | Full support (requires `asyncio.to_thread` for sync kickoff) |
| LangGraph + LlamaIndex | Python | Full support |
| CrewAI + Google ADK | Python | Partial — ADK spans may appear as siblings |
| LangChain.js + Vercel AI SDK | TypeScript | Full support |
| LangChain.js + LlamaIndex.ts | TypeScript | Full support |
| Next.js (App Router) + Vercel AI SDK | TypeScript | Full support |
| Semantic Kernel + AutoGen | .NET | Full support |
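The `asyncio.to_thread` requirement noted for PydanticAI + CrewAI can be sketched as follows. `kickoff` here is a hypothetical stand-in for a crew's synchronous entry point, not the real CrewAI API:

```python
import asyncio

# Hypothetical stand-in for a synchronous crew kickoff (not the real CrewAI call)
def kickoff(topic: str) -> str:
    return f"Report on {topic}"

async def research_tool(topic: str) -> str:
    # Running the sync kickoff in a worker thread keeps the event loop free;
    # asyncio.to_thread copies the current contextvars — which carry the active
    # OpenTelemetry context — into that thread.
    return await asyncio.to_thread(kickoff, topic)

result = asyncio.run(research_tool("LLM governance"))
```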

Known edge cases

LlamaIndex global Settings: The LlamaIndex instrumentor patches Settings.callback_manager once at startup. If your code replaces Settings.callback_manager later, call instrument_llamaindex() again to reattach.
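The reason a re-instrumentation call is needed can be shown with a stdlib-only sketch: the instrumentor wraps whatever manager is installed at the moment it runs, so a later replacement silently discards the wrapper. All names below are illustrative, not the real LlamaIndex or VeriProof APIs:

```python
# Illustrative sketch of the "patch once, re-patch after replacement" pattern
class Settings:
    callback_manager = None

captured = []

def instrument(settings):
    # Wrap whatever manager is currently installed
    inner = settings.callback_manager
    def traced(event):
        captured.append(event)      # record the event for tracing
        if inner:
            inner(event)            # then delegate to the original manager
    settings.callback_manager = traced

instrument(Settings)
Settings.callback_manager("first")   # traced

Settings.callback_manager = None     # app replaces the manager later — wrapper lost
instrument(Settings)                 # re-attach, as recommended above
Settings.callback_manager("second")  # traced again
```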

TypeScript: multiple configureVeriproof() calls: The first call wins. If you call it a second time, it logs a warning and is ignored. Ensure your telemetry bootstrap file loads before any adapter.

.NET with a second OTel exporter: If you also export to Datadog or Honeycomb, register both exporters on the same TracerProviderBuilder — do not call AddOpenTelemetry() twice:

```csharp
builder.Services
    .AddVeriproof(options => { ... })
    .AddOpenTelemetryTracing(tracing =>
        tracing.AddOtlpExporter(o => o.Endpoint = new Uri("https://api.honeycomb.io"))
    );
```

Testing multi-adapter pipelines

Use the built-in span collector to assert correct nesting without a live backend:

```python
import asyncio

from veriproof.testing import SpanCollector

def test_multi_adapter_nesting():
    with SpanCollector() as collector:
        asyncio.run(run_pipeline("Write a report on LLM governance"))

    spans = collector.spans
    crewai_spans = [s for s in spans if s.attributes.get("vp.framework") == "crewai"]
    langgraph_spans = [s for s in spans if s.attributes.get("vp.framework") == "langgraph"]

    # Every CrewAI span should have a LangGraph span (or the session root) as its parent
    langgraph_ids = {s.span_id for s in langgraph_spans}
    assert all(s.parent_id in langgraph_ids for s in crewai_spans)
```
