# OpenAI Agents quickstart — governed Runner in 15 minutes
Route every OpenAI Agents SDK
tool call through Cordum's MCP bridge. The adapter exposes MCP tools
as agents.FunctionTool instances with strict JSON-Schema, so the
Runner's tool-caller loop validates arguments before a call ever
touches the gateway.
## What you'll build
An agents.Agent wired to Cordum MCP tools, driven by
Runner.run_streamed under a run_governed wrapper that injects
the Cordum session id into the OpenAI trace metadata. A scripted
ChatCompletionClient (no OpenAI API key required) makes the
quickstart hermetic. After the run, the Cordum dashboard shows one
mcp.tool_invocation event per tool call plus
cordum.audit.log_turn entries for each streamed conversation turn.
## Before you start
- Python 3.9–3.12 (3.11 or newer recommended).
- A running Cordum instance (`cd cordum && make dev-up`).
- `CORDUM_API_KEY` exported.
- `cordum-mcp-bridge` on `$PATH`.
- Optional: `OPENAI_API_KEY` for a real model. The quickstart uses a scripted `FakeModel` so you can complete it without a paid API key.
:::note Strict JSON-Schema
`openai-agents>=0.14` rejects any tool whose input schema has
`additionalProperties: true`. The adapter's
`normalise_strict_schema` recursively forces it to `false` on every
nested object. If you need a tool that accepts open kwargs, pass
`strict=False` to `build_openai_agent_tools`.
:::
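To see what that normalisation means in practice, here is a standalone sketch of the idea (a toy function of ours, not the adapter's actual `normalise_strict_schema`): walk the schema recursively and pin `additionalProperties` to `false` on every object node.

```python
def force_strict(schema: dict) -> dict:
    """Toy recursive normaliser: pin additionalProperties to False everywhere."""
    if schema.get("type") == "object":
        schema["additionalProperties"] = False
        for sub in schema.get("properties", {}).values():
            force_strict(sub)
    elif schema.get("type") == "array" and isinstance(schema.get("items"), dict):
        force_strict(schema["items"])
    return schema

loose = {
    "type": "object",
    "additionalProperties": True,  # openai-agents>=0.14 would reject this
    "properties": {
        "filters": {"type": "object", "properties": {"tag": {"type": "string"}}},
    },
}
strict = force_strict(loose)
print(strict["additionalProperties"])                           # False
print(strict["properties"]["filters"]["additionalProperties"])  # False
```

Note that the nested `filters` object gets the flag even though the loose schema never declared it there; strict mode requires it on every object, not just the root.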
## 1. Install
cordum-adapters is not yet on PyPI (tracked in task-386f52f4).
Clone the source repo and install from the local checkout:
```shell
git clone https://github.com/cordum-io/cordum-packs.git
pip install "./cordum-packs/integrations/agent-adapters[openai-agents]"
```
Pip treats the local directory as a project root, so extras resolve
against the checkout's pyproject.toml across every supported pip
version.
Pulls `openai-agents>=0.1` and `pydantic>=2.0`. Base
`cordum-adapters` has zero heavy deps.

Once the PyPI publish lands, this becomes `pip install "cordum-adapters[openai-agents]"`.
## 2. Configure — connect to the bridge
```python
import os

from cordum_agent_adapters.mcp_client import McpStdioClient

client = McpStdioClient(
    command=["cordum-mcp-bridge"],
    env={
        **os.environ,
        "CORDUM_GATEWAY_URL": "http://localhost:8081",
        "CORDUM_API_KEY": os.environ["CORDUM_API_KEY"],
        "CORDUM_NATS_URL": "nats://localhost:4222",
        "CORDUM_REDIS_URL": "redis://localhost:6379",
    },
)
```
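Since the snippet above reads `CORDUM_API_KEY` with a hard lookup, it is worth failing fast when configuration is missing. The check below is plain Python with no adapter dependency; the required-variable tuple is just the one this quickstart needs:

```python
import os

def missing_env(env, required=("CORDUM_API_KEY",)):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not env.get(name)]

missing = missing_env(os.environ)
if missing:
    print("Set these environment variables first:", ", ".join(missing))
```

Run this before constructing `McpStdioClient` so a missing key produces a readable message instead of a `KeyError` deep in the client setup.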
## 3. Build a governed Agents SDK run
```python
import asyncio

from agents import Agent
from cordum_agent_adapters.audit import CordumConversationLogger
from cordum_agent_adapters.openai_agents import (
    build_openai_agent_tools,
    run_governed,
    tee_events,
)

logger = CordumConversationLogger(client)  # session_id auto-generated
tools = build_openai_agent_tools(client)   # list[agents.FunctionTool]

# Swap in a real model when you're ready:
# from agents.models.openai_provider import OpenAIProvider
# model = OpenAIProvider().get_model("gpt-4.1-mini")

# For the hermetic quickstart, use the scripted FakeModel shipped with
# the public cordum_agent_adapters.testing module:
from cordum_agent_adapters.testing import FakeModel, ScriptedTurn

model = FakeModel(turns=[
    ScriptedTurn(tool_calls=[{"name": "cordum.workflow.list", "arguments": {}}]),
    ScriptedTurn(content="Listed workflows successfully."),
])

agent = Agent(
    name="governed_runner",
    instructions="You operate Cordum governed workflows.",
    tools=tools,
    model=model,
)

async def main():
    result = await run_governed(
        agent, "List available workflows.", client=client, logger=logger,
    )
    async for event in tee_events(result, logger):
        item = getattr(event, "item", None)
        if getattr(item, "type", "") == "tool_call_output_item":
            print("tool output:", item.output)

asyncio.run(main())
```
`run_governed` injects `trace_metadata={"cordum_session_id": ...}`
into the Runner's trace, so the OpenAI trace UI and Cordum's
audit chain share a correlation key. `tee_events` relays each
stream event to the caller and also tees tool-call and
tool-call-output items into `logger.log_turn` — no manual audit
plumbing.
## 4. Run and verify governance
```shell
python openai_agents_quickstart.py
```
Open the Cordum dashboard → Policy Decision Log. Filter by the
logger's `session_id` to see the tool invocations with
`args_redacted`, `latency_ms`, and `decision=allow`.
### Seed a deny policy
```shell
cordumctl policy bundle create --tenant=default --yaml - <<'YAML'
id: deny-dlq-retry
rules:
  - when: {tool: "cordum.dlq.retry"}
    decision: deny
    reason: "manual review required"
YAML
```
Change the scripted turn to call `cordum.dlq.retry`. Re-run. The
adapter translates the gateway's JSON-RPC `-32099` policy-deny into
a `[POLICY DENIED]`-prefixed string; the Agents SDK surfaces it as
a tool-result message the LLM sees on the next turn. The Runner
never raises, so your agent code keeps running.
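Because the denial arrives as an ordinary tool result rather than an exception, your own stream-handling code can branch on it. A minimal sketch (the `[POLICY DENIED]` prefix is the adapter's, per the paragraph above; the helper name is ours):

```python
DENY_PREFIX = "[POLICY DENIED]"

def is_policy_denied(tool_output):
    """True when a tool result string carries the adapter's deny marker."""
    return isinstance(tool_output, str) and tool_output.startswith(DENY_PREFIX)

print(is_policy_denied("[POLICY DENIED] manual review required"))  # True
print(is_policy_denied('{"workflows": []}'))                       # False
```

Inside the streaming loop from step 3, you would call this on `item.output` for `tool_call_output_item` events, e.g. to alert an operator while letting the run continue.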
## What's next
- Deep OpenAI Agents walkthrough — see the adapter tutorial.
- CrewAI quickstart — same governance surface, CrewAI `Agent.tools` API.
- AutoGen quickstart — same governance surface, AG2 0.4+ API.
- Framework integrations overview — when to pick which adapter.