Framework Integrations

Cordum sits between your agent framework and the tools/data the agent acts on. For each supported framework, the integration is a thin adapter that:

  1. Wraps the framework's tool-invocation surface so every tool call routes through the Cordum MCP bridge.
  2. Lets the Safety Kernel evaluate the call before it executes.
  3. Surfaces approvals and denials back to the agent as tool-call results (so the loop continues cleanly).
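The three steps above can be sketched as a single guarded call. This is an illustrative toy, not the real adapter API: `Verdict`, `ToySafetyKernel`, and `guarded_tool_call` are stand-in names, and the only detail taken from the real behavior is the `[POLICY DENIED]` result string described below.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    """Stand-in for a Safety Kernel decision."""
    allowed: bool
    reason: str = ""

class ToySafetyKernel:
    """Toy kernel: denies any call whose arguments mention 'prod'."""
    def evaluate(self, tool_name, args):
        if "prod" in str(args.values()):
            return Verdict(False, "writes to prod require approval")
        return Verdict(True)

def guarded_tool_call(kernel, tool_fn, tool_name, args):
    """Evaluate before executing; surface a deny as an ordinary tool result
    so the agent loop continues cleanly."""
    verdict = kernel.evaluate(tool_name, args)
    if verdict.allowed:
        return tool_fn(**args)  # in the real adapter, forwarded over the MCP bridge
    return f"[POLICY DENIED] {verdict.reason}"
```

The key design point is the last line: a deny is returned as a tool result, not raised as an exception, so the LLM sees it on the next turn.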

Supported frameworks

| Framework | Adapter | Status | Tutorial |
| --- | --- | --- | --- |
| LangChain | cordum-adapters[langchain] | ✅ Shipped | LangChain guard |
| LangGraph | cordum-adapters[langchain] + graph wiring | ✅ Shipped | LangGraph 5-min |
| CrewAI | cordum-adapters[crewai] | ✅ Shipped | CrewAI safety gates |
| AutoGen / AG2 | cordum-adapters[autogen] | 🚧 In progress | Coming soon — see the epic |
| OpenAI Agents SDK | cordum-adapters[openai-agents] | 🚧 In progress | Coming soon |
| LlamaIndex | cordum-adapters[llama-index] | 📋 Planned | |
| Temporal | Cordum pack | 📋 Planned | |

Install the adapters

pip install "cordum-adapters[<framework>]"

Available extras: langchain, crewai, autogen, autogen-classic (pyautogen 0.2), openai-agents, all, dev.

Common wiring

Every adapter needs the same three inputs — the bridge command, the gateway URL, and an API key — bundled into an MCP client:

from cordum_agent_adapters.mcp_client import McpStdioClient

client = McpStdioClient(
    command=["cordum-mcp-bridge"],
    env={
        "CORDUM_GATEWAY_URL": "https://localhost:8081",
        "CORDUM_API_KEY": "<your-key>",
    },
)

Then hand client to the framework-specific builder:

  • LangChain: build_langchain_tools(client) → list of BaseTool.
  • CrewAI: build_crewai_tools(client) → list of BaseTool subclasses.
  • AutoGen (classic): build_autogen_tools(client) → (functions, function_map).
  • AutoGen (AG2 0.4+): build_ag2_tools(client) → list of FunctionTool.
  • OpenAI Agents: build_openai_agent_tools(client) → list of FunctionTool.
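All the builders share one shape: take the client, enumerate its tools, and wrap each one as a framework-native tool object. The sketch below shows that shape with plain callables and a fake client; `list_tools`/`call_tool` and `FakeMcpClient` are illustrative assumptions, not the real cordum-adapters interface.

```python
class FakeMcpClient:
    """Toy stand-in for McpStdioClient, exposing assumed list/call methods."""
    def list_tools(self):
        return [{"name": "search", "description": "Search the index"}]

    def call_tool(self, name, args):
        return f"ran {name} with {args}"

def build_plain_tools(client):
    """Wrap each MCP tool as a plain callable, the way the real builders
    wrap them as BaseTool / FunctionTool objects."""
    tools = []
    for spec in client.list_tools():
        # Bind the tool name via a default arg so each closure keeps its own.
        def tool(args, _name=spec["name"]):
            return client.call_tool(_name, args)
        tool.__name__ = spec["name"]
        tool.__doc__ = spec["description"]
        tools.append(tool)
    return tools
```

Because every call funnels through `client.call_tool`, the Safety Kernel sits on the path no matter which framework invoked the tool.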

What governance looks like in the loop

When Cordum denies a tool call (policy violation, scope filter, rate limit), the adapter translates the deny into a framework-native error:

  • LangChain / CrewAI / AutoGen: the tool returns a string prefixed [POLICY DENIED] … — the LLM sees the deny in the next turn and can try a different approach.
  • OpenAI Agents: same string format, returned from on_invoke_tool.
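Because a deny arrives as an ordinary string result, custom loop code (logging, metrics, fallbacks) can branch on the documented `[POLICY DENIED]` prefix. A minimal helper, with the prefix taken from the format above:

```python
DENY_PREFIX = "[POLICY DENIED]"

def is_policy_denial(tool_result) -> bool:
    """Detect a Cordum deny in a tool result so loop code can branch on it."""
    return isinstance(tool_result, str) and tool_result.startswith(DENY_PREFIX)
```

This keeps denial handling framework-agnostic: the same check works on LangChain, CrewAI, AutoGen, and OpenAI Agents results.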

When Cordum requires a human approval, the adapter:

  1. Returns an "approval pending" indicator to the LLM.
  2. Surfaces the pending approval in the Cordum dashboard.
  3. Lets the LLM retry after the human resolves the request (or after a configurable timeout).
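The retry step can be sketched as a polling loop. Everything below is a hedged illustration: the `[APPROVAL PENDING]` marker string, the `call_tool` signature, and the retry parameters are assumptions, not the adapter's actual indicator or API.

```python
import time

PENDING_PREFIX = "[APPROVAL PENDING]"  # illustrative marker; the real indicator may differ

def call_with_approval_retry(call_tool, name, args, retries=3, wait_s=2.0):
    """Re-issue a tool call while a human approval is pending.

    Returns the first non-pending result, or the last pending result
    once retries are exhausted (the configurable-timeout case).
    """
    result = call_tool(name, args)
    while retries > 0 and isinstance(result, str) and result.startswith(PENDING_PREFIX):
        time.sleep(wait_s)          # give the human time to resolve in the dashboard
        result = call_tool(name, args)
        retries -= 1
    return result
```

In practice the framework's own loop usually drives the retry (the LLM sees the pending indicator and tries again next turn); a helper like this is only needed for non-agent callers.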

Conversation audit

Every framework adapter integrates with the Cordum audit chain via CordumConversationLogger. Every tool call, every turn's metadata, every approval resolution lands in the tenant's SIEM event stream with hash-chained integrity.

from cordum_agent_adapters.audit import CordumConversationLogger

logger = CordumConversationLogger(client=client)
# pass `logger=logger` to any adapter builder
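"Hash-chained integrity" means each audit event carries a digest of its predecessor, so any tampering breaks every later link. The toy below illustrates the concept only; it is not the Cordum audit chain's wire format, and the helper names are invented.

```python
import hashlib
import json

GENESIS = "0" * 64  # assumed sentinel for the first link

def append_event(chain, event):
    """Append an event whose hash covers the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"event": event, "prev": prev, "hash": digest})

def verify(chain):
    """Recompute every link; any edited event or broken link fails."""
    prev = GENESIS
    for entry in chain:
        payload = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if entry["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```

The property this buys for a SIEM stream: an attacker who rewrites one tool-call record must also rewrite every subsequent hash, which the verifier detects.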

See also

  • Agent Protocol (CAP) — the wire protocol the adapters speak.
  • Safety Kernel — what the adapters ultimately call into.
  • cap — SDKs (Go, Python, Node, C++) if you want to build a framework adapter yourself.