import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem';
# Govern Your LangGraph Agent in 5 Minutes
**Problem:** Your LangGraph agent calls external APIs and processes user queries. How do you prevent it from accessing unauthorized data or leaking PII?

**Solution:** Cordum sits between your users and your agent. Every job is evaluated by the Safety Kernel before it reaches your agent code. Denied jobs never execute.
```text
User → API Gateway → Safety Kernel → [ALLOW/DENY] → LangGraph Agent
                                          ↓
                                     Audit Trail
```
## Prerequisites

- Docker and Docker Compose installed
- `cordumctl` binary (install guide)
## Step 1: Scaffold the Project

```bash
cordumctl init --framework langchain my-langgraph-agent
cd my-langgraph-agent
```
This generates:
```text
my-langgraph-agent/
├── docker-compose.yml      # Cordum services + your worker
├── config/
│   └── safety.yaml         # Safety policy (deny-by-default)
├── worker/
│   ├── agent.py            # LangGraph agent with Cordum integration
│   ├── requirements.txt    # cap-sdk, langgraph, langchain-core
│   └── Dockerfile
└── README.md
```
## Step 2: Review the Safety Policy

Open `config/safety.yaml`. Cordum ships a deny-by-default policy:
```yaml
default_decision: deny

rules:
  - id: allow-research
    match:
      topics: ["job.default"]
    decision: allow

input_rules:
  - id: deny-pii-queries
    severity: high
    match:
      topics: ["job.default"]
      scanners: ["pii"]
    decision: deny
    reason: "Query contains PII (SSN or credit card number)"
```
This policy:

- **Denies all jobs by default** — only explicitly allowed topics pass through
- **Allows** jobs on the `job.default` topic
- **Blocks** any job whose input contains PII (SSNs, credit card numbers) detected by the built-in `pii` scanner
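To build intuition for what the `pii` scanner flags, here is a minimal regex-based sketch. The patterns below are illustrative assumptions, not Cordum's actual implementation — a production scanner would cover more SSN/card formats and validate card numbers (e.g. with a Luhn check):

```python
import re

# Illustrative patterns only: one common SSN layout and 16-digit card numbers
# written in 4-digit groups. The real scanner's coverage may differ.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b")

def contains_pii(text: str) -> bool:
    """Return True if the text looks like it contains an SSN or card number."""
    return bool(SSN_RE.search(text) or CARD_RE.search(text))

print(contains_pii("What are the best practices for API security?"))  # False
print(contains_pii("Look up account for card 4532-1234-5678-9012"))   # True
```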
## Step 3: Start the Stack

```bash
docker compose up -d
```

Wait for all services to be healthy:

```bash
docker compose ps
```
You should see `nats`, `redis`, `cordum-api-gateway`, `cordum-scheduler`, `cordum-safety-kernel`, and `cordum-langchain-worker` all running.
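If you'd rather check health programmatically, recent Docker releases support `docker compose ps --format json`, which emits one JSON object per line (this flag and the `Service`/`State`/`Health` field names are assumptions — verify against your Docker version). A small parser sketch:

```python
import json

def unhealthy_services(ps_output: str) -> list[str]:
    """Parse `docker compose ps --format json` output (assumed to be one JSON
    object per line) and return services that are not running and healthy."""
    bad = []
    for line in ps_output.splitlines():
        if not line.strip():
            continue
        svc = json.loads(line)
        state = svc.get("State", "")
        health = svc.get("Health", "")
        # An empty Health means the container defines no healthcheck.
        if state != "running" or health not in ("", "healthy"):
            bad.append(svc.get("Service", svc.get("Name", "?")))
    return bad

sample = (
    '{"Service": "nats", "State": "running", "Health": "healthy"}\n'
    '{"Service": "redis", "State": "restarting", "Health": ""}'
)
print(unhealthy_services(sample))  # ['redis']
```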
## Step 4: Submit an Allowed Job

Send a research query that passes the safety policy:

```bash
curl -s http://localhost:8080/api/v1/jobs \
  -H "Content-Type: application/json" \
  -H "X-Tenant-ID: default" \
  -d '{
    "topic": "job.default",
    "input": {"query": "What are the best practices for API security?"}
  }' | jq .
```
Expected response:
```json
{
  "id": "job-abc123",
  "status": "completed",
  "safety_decision": "ALLOW",
  "result": {
    "answer": "Research complete for: What are the best practices for API security?",
    "sources": ["internal-docs", "approved-apis"],
    "reviewed": true
  }
}
```
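The same request can be made from Python with the standard library, mirroring the curl example above (endpoint and headers are taken from it; `build_job` and `submit_job` are hypothetical helper names, not part of any Cordum SDK):

```python
import json
import urllib.request

GATEWAY = "http://localhost:8080/api/v1/jobs"  # same endpoint as the curl example

def build_job(query: str, topic: str = "job.default") -> tuple[dict, dict]:
    """Build the request body and headers used by the curl example above."""
    body = {"topic": topic, "input": {"query": query}}
    headers = {"Content-Type": "application/json", "X-Tenant-ID": "default"}
    return body, headers

def submit_job(query: str) -> dict:
    """POST the job to the API Gateway and return the decoded JSON response."""
    body, headers = build_job(query)
    req = urllib.request.Request(
        GATEWAY, data=json.dumps(body).encode(), headers=headers, method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Requires the stack from Step 3 to be running:
# submit_job("What are the best practices for API security?")
```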
The Safety Kernel evaluated the query, found no PII, and allowed it through to your LangGraph agent.
## Step 5: Submit a Blocked Job (PII Detection)

Now try sending a query that contains a credit card number:

```bash
curl -s http://localhost:8080/api/v1/jobs \
  -H "Content-Type: application/json" \
  -H "X-Tenant-ID: default" \
  -d '{
    "topic": "job.default",
    "input": {"query": "Look up account for card 4532-1234-5678-9012"}
  }' | jq .
```
Expected response:
```json
{
  "id": "job-def456",
  "status": "denied",
  "safety_decision": "DENY",
  "reason": "Query contains PII (SSN or credit card number)"
}
```
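Client code should branch on the decision rather than assume every job completes. A sketch of one way to handle both response shapes shown above (the field names come from those responses; `JobDenied` and `handle_response` are hypothetical names):

```python
class JobDenied(Exception):
    """Raised when the Safety Kernel denies a job."""

def handle_response(resp: dict) -> dict:
    """Return the agent result for allowed jobs; raise JobDenied otherwise."""
    if resp.get("safety_decision") == "DENY":
        raise JobDenied(resp.get("reason", "denied by safety policy"))
    return resp.get("result", {})

allowed = {"status": "completed", "safety_decision": "ALLOW",
           "result": {"answer": "Research complete", "reviewed": True}}
print(handle_response(allowed)["answer"])  # Research complete

denied = {"status": "denied", "safety_decision": "DENY",
          "reason": "Query contains PII (SSN or credit card number)"}
try:
    handle_response(denied)
except JobDenied as e:
    print(f"blocked: {e}")  # blocked: Query contains PII (SSN or credit card number)
```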
The Safety Kernel detected PII in the input and denied the job. Your agent code never ran. The PII never reached your LangGraph agent, your LLM provider, or any external API.
## Step 6: View the Audit Trail

Open the Cordum dashboard at http://localhost:8080 and navigate to the **Audit** page. You'll see:
| Time | Job ID | Topic | Decision | Reason |
|---|---|---|---|---|
| now | job-def456 | job.default | DENY | Query contains PII |
| now | job-abc123 | job.default | ALLOW | — |
Every decision is recorded — who submitted the job, what the Safety Kernel decided, and why.
## What's Happening Under the Hood

1. `curl POST /api/v1/jobs` → API Gateway receives the job
2. Gateway publishes to NATS: `sys.job.submit`
3. Scheduler picks up the job and calls the Safety Kernel
4. Safety Kernel evaluates rules:
   - Topic `job.default` matches `allow-research` → ALLOW candidate
   - Input scanner `pii` checks for SSN/CC patterns
   - If PII found → DENY (rule `deny-pii-queries` wins)
   - If clean → ALLOW
5. **ALLOW:** Scheduler routes to worker → LangGraph agent processes
6. **DENY:** Scheduler records denial → job never reaches worker
7. Dashboard reads the audit trail from Redis
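The decision order in step 4 can be sketched as a toy evaluator — a model of the precedence (deny-by-default, then topic allow-list, then input scanners, with a matching input rule overriding the allow), not the kernel's actual implementation; the PII regexes are illustrative:

```python
import re

# Illustrative SSN and 16-digit card patterns, standing in for the pii scanner.
PII_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b|\b(?:\d{4}[- ]?){3}\d{4}\b")

ALLOWED_TOPICS = {"job.default"}  # from the allow-research rule

def evaluate(topic: str, text: str) -> tuple[str, str]:
    """Return (decision, reason), mimicking the order in step 4."""
    # 4a: deny-by-default — only an allow rule's topic match produces a candidate
    if topic not in ALLOWED_TOPICS:
        return "DENY", "no allow rule matched (default_decision: deny)"
    # 4b/4c: input scanner runs; a matching input rule wins over the allow
    if PII_RE.search(text):
        return "DENY", "Query contains PII (SSN or credit card number)"
    # 4d: clean input on an allowed topic
    return "ALLOW", ""

print(evaluate("job.default", "best practices for API security"))  # ('ALLOW', '')
print(evaluate("job.default", "card 4532-1234-5678-9012")[0])      # DENY
print(evaluate("job.other", "hello")[0])                           # DENY
```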
## Next Steps

- **Add approval gates:** Change a rule's `decision` to `require_approval` to add human-in-the-loop review
- **Custom scanners:** Add your own content scanners for domain-specific patterns
- **Output policy:** Enable output scanning to redact sensitive data from agent responses
- See the Safety Kernel docs for the full policy reference
- Try the CrewAI tutorial for multi-agent safety gates
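For example, an approval gate on the research rule might look like the fragment below — a sketch extrapolated from the policy schema in Step 2; confirm the exact `require_approval` semantics and field names against the Safety Kernel docs:

```yaml
rules:
  - id: allow-research
    match:
      topics: ["job.default"]
    decision: require_approval   # pause for human review instead of running immediately
```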