# Framework Integrations
Cordum sits between your agent framework and the tools/data the agent acts on. For each supported framework, the integration is a thin adapter that:
- Wraps the framework's tool-invocation surface so every tool call routes through the Cordum MCP bridge.
- Lets the Safety Kernel evaluate the call before it executes.
- Surfaces approvals and denials back to the agent as tool-call results (so the loop continues cleanly).
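The gating pattern these bullets describe can be sketched in plain Python. Everything below (`Decision`, `evaluate_call`, the `/tmp` policy) is illustrative and not part of the Cordum SDK; only the `[POLICY DENIED]` result format reflects the adapters' documented behaviour.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    verdict: str          # "allow", "deny", or "pending"
    reason: str = ""

def governed_call(tool: Callable[[str], str], arg: str,
                  evaluate_call: Callable[[str], Decision]) -> str:
    """Route one tool call through a pre-execution policy check."""
    decision = evaluate_call(arg)
    if decision.verdict == "deny":
        # The denial comes back as a tool result, so the agent loop continues.
        return f"[POLICY DENIED] {decision.reason}"
    if decision.verdict == "pending":
        return "[APPROVAL PENDING] awaiting human review"
    return tool(arg)

# Hypothetical policy: block writes outside /tmp.
def evaluate_call(path: str) -> Decision:
    if path.startswith("/tmp/"):
        return Decision("allow")
    return Decision("deny", f"write outside /tmp: {path}")

print(governed_call(lambda p: f"wrote {p}", "/etc/passwd", evaluate_call))
# -> [POLICY DENIED] write outside /tmp: /etc/passwd
```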
## Supported frameworks

| Framework | Adapter | Status | Tutorial |
|---|---|---|---|
| LangChain | `cordum-adapters[langchain]` | ✅ Shipped | LangChain guard |
| LangGraph | `cordum-adapters[langchain]` + graph wiring | ✅ Shipped | LangGraph 5-min |
| CrewAI | `cordum-adapters[crewai]` | ✅ Shipped | CrewAI safety gates |
| AutoGen / AG2 | `cordum-adapters[autogen]` | 🚧 In progress | Coming soon — see the epic |
| OpenAI Agents SDK | `cordum-adapters[openai-agents]` | 🚧 In progress | Coming soon |
| LlamaIndex | `cordum-adapters[llama-index]` | 📋 Planned | — |
| Temporal | Cordum pack | 📋 Planned | — |
## Install the adapters

```shell
pip install "cordum-adapters[<framework>]"
```

Available extras: `langchain`, `crewai`, `autogen`, `autogen-classic` (pyautogen 0.2), `openai-agents`, `all`, `dev`. Quote the argument so your shell doesn't try to expand the brackets.
## Common wiring

Every adapter starts from the same MCP client, configured with three things: the bridge command, the gateway URL, and an API key.

```python
from cordum_agent_adapters.mcp_client import McpStdioClient

client = McpStdioClient(
    command=["cordum-mcp-bridge"],
    env={
        "CORDUM_GATEWAY_URL": "https://localhost:8081",
        "CORDUM_API_KEY": "<your-key>",
    },
)
```
Then hand `client` to the framework-specific builder:

- LangChain: `build_langchain_tools(client)` → list of `BaseTool`.
- CrewAI: `build_crewai_tools(client)` → list of `BaseTool` subclasses.
- AutoGen (classic): `build_autogen_tools(client)` → `(functions, function_map)`.
- AutoGen (AG2 0.4+): `build_ag2_tools(client)` → list of `FunctionTool`.
- OpenAI Agents: `build_openai_agent_tools(client)` → list of `FunctionTool`.
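The builders above share one shape: list the tool specs exposed by the bridge, then wrap each invocation in a framework-native callable. The sketch below shows that shape with an invented stub client; the real builders emit framework objects (`BaseTool`, `FunctionTool`, ...) rather than plain functions, and `StubMcpClient` is not a Cordum class.

```python
from typing import Any, Callable

class StubMcpClient:
    """Stand-in for McpStdioClient: lists tool specs and invokes them."""
    def list_tools(self) -> list[dict[str, Any]]:
        return [{"name": "read_file", "description": "Read a file"}]

    def call_tool(self, name: str, args: dict[str, Any]) -> str:
        return f"{name} called with {args}"

def build_plain_tools(client: StubMcpClient) -> dict[str, Callable[..., str]]:
    """Wrap every bridge tool as a callable keyed by tool name."""
    def make(name: str) -> Callable[..., str]:
        # Each wrapper forwards its kwargs through the bridge client,
        # which is where the Safety Kernel check happens in practice.
        return lambda **kwargs: client.call_tool(name, kwargs)
    return {spec["name"]: make(spec["name"]) for spec in client.list_tools()}

tools = build_plain_tools(StubMcpClient())
print(tools["read_file"](path="notes.txt"))
# -> read_file called with {'path': 'notes.txt'}
```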
## What governance looks like in the loop
When Cordum denies a tool call (policy violation, scope filter, rate limit), the adapter translates the denial into a framework-native error:

- LangChain / CrewAI / AutoGen: the tool returns a string prefixed `[POLICY DENIED] …`; the LLM sees the denial on the next turn and can try a different approach.
- OpenAI Agents: same string format, returned from `on_invoke_tool`.
When Cordum requires human approval, the adapter:

- Returns an approval-pending indicator to the LLM.
- Surfaces the pending approval in the Cordum dashboard.
- Lets the LLM retry after the human resolves the request (or after a configurable timeout).
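The pending/retry behaviour can be pictured as a poll loop against an approval status. `check_approval`, the status strings, and the timings below are invented for illustration; only the bracketed result prefixes mirror the formats described above.

```python
import time

def wait_for_approval(check_approval, poll_interval: float = 0.01,
                      timeout: float = 0.05) -> str:
    """Poll an approval status until it resolves or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = check_approval()  # "approved", "denied", or "pending"
        if status == "approved":
            return "proceed"
        if status == "denied":
            return "[POLICY DENIED] approval rejected by reviewer"
        time.sleep(poll_interval)
    # Timeout: hand a pending indicator back to the LLM so it can retry later.
    return "[APPROVAL PENDING] timed out; retry later"

# Hypothetical reviewer that approves on the second poll.
polls = iter(["pending", "approved"])
print(wait_for_approval(lambda: next(polls, "pending")))
# -> proceed
```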
## Conversation audit

Every framework adapter integrates with the Cordum audit chain via
`CordumConversationLogger`. Every tool call, every turn's metadata, and every
approval resolution lands in the tenant's SIEM event stream with hash-chained
integrity.
```python
from cordum_agent_adapters.audit import CordumConversationLogger

logger = CordumConversationLogger(client=client)
# pass `logger=logger` to any adapter builder
```
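"Hash-chained integrity" means each event's hash covers the previous event's hash, so editing any past event breaks every later link. This standalone sketch shows the idea; it is not the `CordumConversationLogger` implementation, and the event fields are made up.

```python
import hashlib
import json

def append_event(chain: list[dict], event: dict) -> None:
    """Link a new audit event to the hash of the previous one."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    chain.append({"prev": prev_hash, "event": event,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain: list[dict]) -> bool:
    """Recompute every link; any edited event changes its hash."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps({"prev": prev, "event": entry["event"]},
                             sort_keys=True)
        if (entry["prev"] != prev or
                hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]):
            return False
        prev = entry["hash"]
    return True

chain: list[dict] = []
append_event(chain, {"tool": "read_file", "decision": "allow"})
append_event(chain, {"tool": "send_email", "decision": "deny"})
print(verify(chain))                          # -> True
chain[0]["event"]["decision"] = "allow_all"   # tamper with history
print(verify(chain))                          # -> False
```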
## Tutorials
- LangChain guard — full end-to-end LangChain agent with Cordum governance.
- LangGraph 5-min — drop Cordum into a LangGraph state machine in five minutes.
- CrewAI safety gates — CrewAI crew running governed tool calls.
- AutoGen multi-agent — multi-agent conversation with governance at every handoff.
## See also
- Agent Protocol (CAP) — the wire protocol the adapters speak.
- Safety Kernel — what the adapters ultimately call into.
- `cap` SDKs (Go, Python, Node, C++), if you want to build a framework adapter yourself.