AutoGen quickstart — governed AG2 agent in 15 minutes
Route every AutoGen 0.4+ (AG2) tool call through Cordum's MCP bridge
for scope filtering, per-tool approval gates, and tamper-evident
audit — with the standard AssistantAgent API, no LLM-gateway in
the middle.
What you'll build
An AssistantAgent whose tools come from Cordum's MCP bridge. A
scripted ChatCompletionClient (no OpenAI API key required) drives
the agent's tool-caller loop so the quickstart is hermetic. After the
run, the Cordum dashboard shows one mcp.tool_invocation event per
dispatched tool call. We then seed a deny policy and watch the
adapter convert the gateway's JSON-RPC -32099 into
CordumPolicyDeniedError, which AG2 renders as a
ToolCallResultEvent the LLM sees — the loop keeps running.
Before you start
- Python 3.9–3.12 supported; 3.11+ recommended.
- A running Cordum instance (`cd cordum && make dev-up`).
- `CORDUM_API_KEY` exported.
- `cordum-mcp-bridge` on `$PATH`.
- Optional: `OPENAI_API_KEY` for a real model. This quickstart uses the scripted `FakeModel` from `cordum_agent_adapters.testing`, so you can complete it with no paid API calls.
:::warning AutoGen version matrix
`autogen-core` / `autogen-agentchat` (modern AG2 0.4+) and `pyautogen` (legacy 0.2) pin incompatible `openai` versions. Never install both extras in the same interpreter.
- Modern (this page): installs from source (see below).
- Legacy 0.2: see the pyautogen tutorial.
:::
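The version clash can be caught before it bites. A minimal preflight sketch (the helper is ours for illustration, not part of cordum-adapters) that flags an interpreter with both distributions installed:

```python
from importlib import metadata

def conflicting_autogen_installs() -> bool:
    """Return True if both legacy pyautogen and modern autogen-core are present."""
    def installed(name: str) -> bool:
        try:
            metadata.version(name)
            return True
        except metadata.PackageNotFoundError:
            return False
    # pyautogen (0.2) and autogen-core (0.4+) pin incompatible openai versions,
    # so having both in one environment is always a mistake.
    return installed("pyautogen") and installed("autogen-core")
```

Run this in CI or at interpreter startup and bail out early if it returns `True`.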
1. Install
cordum-adapters is not yet on PyPI (tracked in
task-386f52f4). Clone
the source repo and install from the local checkout:
git clone https://github.com/cordum-io/cordum-packs.git
pip install "./cordum-packs/integrations/agent-adapters[autogen]"
Pip treats the local directory as a project root, so extras resolve
against the checkout's pyproject.toml reliably across pip versions.
The `[autogen]` extra pulls in `autogen-core>=0.4`, `autogen-agentchat>=0.4`, and `pydantic>=2.0`. The base `cordum-adapters` package has no heavy dependencies; install only the extras you need.
Once the PyPI publish lands, this becomes:
pip install "cordum-adapters[autogen]"
2. Configure — connect to the bridge
import os
from cordum_agent_adapters.mcp_client import McpStdioClient
client = McpStdioClient(
command=["cordum-mcp-bridge"],
env={
**os.environ,
"CORDUM_GATEWAY_URL": "http://localhost:8081",
"CORDUM_API_KEY": os.environ["CORDUM_API_KEY"],
"CORDUM_NATS_URL": "nats://localhost:4222",
"CORDUM_REDIS_URL": "redis://localhost:6379",
},
)
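Because the bridge runs as a subprocess, a missing variable surfaces as an opaque startup failure. A small fail-fast sketch (`bridge_env` is a hypothetical helper, not adapter API; the defaults mirror the quickstart's local `dev-up` stack):

```python
import os

# Local-dev defaults; CORDUM_API_KEY has no sane default and must be exported.
DEFAULTS = {
    "CORDUM_GATEWAY_URL": "http://localhost:8081",
    "CORDUM_NATS_URL": "nats://localhost:4222",
    "CORDUM_REDIS_URL": "redis://localhost:6379",
}

def bridge_env() -> dict:
    """Build the bridge subprocess environment, failing fast on missing secrets."""
    if not os.environ.get("CORDUM_API_KEY"):
        raise RuntimeError("CORDUM_API_KEY must be exported before starting the bridge")
    env = {**os.environ}
    for key, default in DEFAULTS.items():
        env.setdefault(key, default)  # keep any operator override
    return env
```

You can then pass `env=bridge_env()` to `McpStdioClient` instead of building the dict inline.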
3. Build a governed AG2 agent
import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken
from cordum_agent_adapters.audit import CordumConversationLogger
from cordum_agent_adapters.autogen import (
build_ag2_tools, register_cordum_tools,
)
logger = CordumConversationLogger(client) # session_id auto-generated
tools = build_ag2_tools(client, logger=logger)
# Use your real model client here when you're ready:
# from autogen_ext.models.openai import OpenAIChatCompletionClient
# model = OpenAIChatCompletionClient(model="gpt-4.1-mini")
# For the hermetic quickstart, swap in the scripted FakeModel shipped
# with the public cordum_agent_adapters.testing module.
from cordum_agent_adapters.testing import FakeModel, ScriptedTurn
model = FakeModel(turns=[
ScriptedTurn(tool_calls=[{"name": "cordum.workflow.list", "arguments": {}}]),
ScriptedTurn(content="Listed workflows successfully."),
])
assistant = AssistantAgent(
name="governed_runner",
model_client=model,
tools=tools,
system_message="You operate Cordum governed workflows.",
)
# Or wire in one call:
# binding = register_cordum_tools(assistant, client, logger=logger)
async def main():
response = await assistant.on_messages(
[TextMessage(content="List available workflows.", source="user")],
CancellationToken(),
)
print(response.chat_message.to_text())
asyncio.run(main())
register_cordum_tools(assistant, client, logger=logger) also wraps
assistant.on_messages_stream, so pure-LLM turns (no tool call) land in
the audit trail, not just tool calls.
4. Run and verify governance
python autogen_quickstart.py
Open the Cordum dashboard → Policy Decision Log. Filter by the
logger's session_id and you'll see:
- One `mcp.tool_invocation` event for the scripted `cordum.workflow.list` call, with `decision=allow`, `latency_ms`, and `args_redacted`.
- One `cordum.audit.log_turn` entry per pure-LLM turn (the `on_messages_stream` tee).
Seed a deny policy
cordumctl policy bundle create --tenant=default --yaml - <<'YAML'
id: deny-dlq-retry
rules:
- when: {tool: "cordum.dlq.retry"}
decision: deny
reason: "manual review required"
YAML
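The same bundle format can express an approval gate rather than a hard deny, matching the per-tool approval gates mentioned at the top of this page. A sketch, assuming the rules schema accepts `decision: approve` alongside `deny` (check your Cordum policy reference for the exact vocabulary):

```yaml
id: gate-dlq-retry
rules:
  - when: {tool: "cordum.dlq.retry"}
    decision: approve        # hold the call until a human approves it
    reason: "manual review required"
```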
Change the scripted turn to call cordum.dlq.retry. Re-run. The
adapter converts the JSON-RPC -32099 error into a
CordumPolicyDeniedError; AG2 renders it as a
ToolCallResultEvent the assistant sees next turn:
ToolCallResultEvent(content="[POLICY DENIED] tool=cordum.dlq.retry
reason=manual review required. Try a different approach or request
approval.")
The assistant typically apologises and either tries a different tool or asks the user to escalate. No code change is needed on the AG2 side.
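The deny path reduces to a single error translation. Conceptually it looks like this (a simplified sketch; the real adapter carries more context on the exception):

```python
POLICY_DENIED_CODE = -32099  # JSON-RPC code the gateway uses for policy denials

class CordumPolicyDeniedError(Exception):
    """Raised when the gateway refuses a tool call; AG2 surfaces it as a
    ToolCallResultEvent instead of crashing the agent loop."""

def raise_for_rpc_error(error: dict) -> None:
    """Translate a gateway JSON-RPC error object into an adapter exception."""
    if error.get("code") == POLICY_DENIED_CODE:
        raise CordumPolicyDeniedError(
            f"[POLICY DENIED] {error.get('message', 'policy denied')}"
        )
    # Other error codes are handled elsewhere and pass through here.
```

Because the exception becomes ordinary tool output, the model can react to the denial in its next turn rather than the run aborting.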
What's next
- Deep AutoGen walkthrough — see the modern AG2 tutorial.
- On legacy `pyautogen` 0.2? See the classic tutorial.
- CrewAI quickstart — same governance surface, via the CrewAI `Agent.tools` API.
- OpenAI Agents quickstart — same governance surface, via the `openai-agents` SDK.
- AutoGen multi-agent narrative — a longer demo covering group-chat governance and per-turn redaction.