
Requires: Cordum v2.9+ · CrewAI 0.80+ · Python 3.12

Add Safety Gates to CrewAI

Problem: Your CrewAI crew processes customer data. How do you enforce PII redaction and add approval gates for sensitive operations — without modifying your crew code?

Solution: Cordum enforces safety policies at the platform level. Your crew code stays clean — governance is handled before jobs reach your agents.

User → API Gateway → Safety Kernel → [ALLOW/DENY/APPROVE] → CrewAI Crew

PII scan, approval gates, audit trail

Prerequisites

  • Docker and Docker Compose installed
  • cordumctl binary (install guide)

Step 1: Scaffold the Project

cordumctl init --framework crewai my-crewai-project
cd my-crewai-project

This generates a project with a two-agent crew (Research Analyst + Content Writer) and a safety policy with PII detection and approval gates.

Step 2: Review the Safety Policy

Open config/safety.yaml:

default_decision: deny

rules:
  - id: allow-crew-tasks
    match:
      topics: ["job.default"]
    decision: allow

input_rules:
  - id: deny-pii-input
    severity: high
    match:
      topics: ["job.default"]
      scanners: ["pii"]
    decision: deny
    reason: "Input contains PII — redact before submitting"

  - id: require-approval-sensitive
    severity: high
    match:
      topics: ["job.default"]
      keywords: ["delete production", "drop table", "admin access"]
    decision: require_approval
    reason: "Task involves sensitive operations — human approval required"

Three safety gates:

  1. PII gate — Denies jobs containing SSNs or credit card numbers
  2. Approval gate — Holds jobs with sensitive keywords for human approval
  3. Allow rule — Clean jobs on job.default match allow-crew-tasks and pass through to your crew; anything that matches no rule falls back to default_decision: deny
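The gate logic above can be sketched in a few lines of Python. This is an illustrative model, not Cordum's actual scanner: the SSN and card regexes and the keyword list simply mirror the policy file, and real PII detection is far more thorough.

```python
import re

# Illustrative patterns only -- Cordum's real PII scanner is more thorough.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")
SENSITIVE_KEYWORDS = ["delete production", "drop table", "admin access"]

def evaluate(prompt: str) -> str:
    """Return the decision a policy like safety.yaml would make for this input."""
    if SSN_RE.search(prompt) or CARD_RE.search(prompt):
        return "DENY"                  # deny-pii-input
    lowered = prompt.lower()
    if any(kw in lowered for kw in SENSITIVE_KEYWORDS):
        return "REQUIRE_APPROVAL"      # require-approval-sensitive
    return "ALLOW"                     # allow-crew-tasks
```

Each of the three jobs submitted in the steps below exercises a different branch of this logic.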

Step 3: Start the Stack

docker compose up -d
docker compose ps # Wait for all services to be healthy

Step 4: Submit a Clean Job (Allowed)

curl -s http://localhost:8080/api/v1/jobs \
  -H "Content-Type: application/json" \
  -H "X-Tenant-ID: default" \
  -d '{
    "topic": "job.default",
    "input": {"prompt": "Research best practices for container security"}
  }' | jq .

Expected response:

{
  "id": "job-001",
  "status": "completed",
  "safety_decision": "ALLOW",
  "result": {
    "summary": "Research complete for: container security best practices",
    "agents_used": ["Research Analyst", "Content Writer"],
    "process": "sequential"
  }
}
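The same request can be made from Python. Below is a minimal standard-library sketch; it assumes the endpoint, headers, and payload shape shown in the curl example above.

```python
import json
import urllib.request

def make_job_payload(prompt: str, topic: str = "job.default") -> dict:
    """Build the same request body as the curl example."""
    return {"topic": topic, "input": {"prompt": prompt}}

def submit_job(prompt: str, base_url: str = "http://localhost:8080") -> dict:
    """POST a job to the Cordum gateway and return the decoded response."""
    req = urllib.request.Request(
        f"{base_url}/api/v1/jobs",
        data=json.dumps(make_job_payload(prompt)).encode(),
        headers={"Content-Type": "application/json", "X-Tenant-ID": "default"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```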

Step 5: Trigger the PII Gate (Denied)

curl -s http://localhost:8080/api/v1/jobs \
  -H "Content-Type: application/json" \
  -H "X-Tenant-ID: default" \
  -d '{
    "topic": "job.default",
    "input": {"prompt": "Summarize account for SSN 123-45-6789"}
  }' | jq .

Expected response:

{
  "id": "job-002",
  "status": "denied",
  "safety_decision": "DENY",
  "reason": "Input contains PII — redact before submitting"
}

Your crew never saw the SSN. The Safety Kernel blocked it at the gate.

Step 6: Trigger the Approval Gate (Held for Review)

curl -s http://localhost:8080/api/v1/jobs \
  -H "Content-Type: application/json" \
  -H "X-Tenant-ID: default" \
  -d '{
    "topic": "job.default",
    "input": {"prompt": "Review procedure to delete production database backups"}
  }' | jq .

Expected response:

{
  "id": "job-003",
  "status": "pending_approval",
  "safety_decision": "REQUIRE_APPROVAL",
  "reason": "Task involves sensitive operations — human approval required"
}

The job is held. Your crew won't process it until a human approves:

# Approve the job via CLI
cordumctl approve job-003

# Or deny it
cordumctl deny job-003 --reason "Not authorized for production deletions"

You can also approve/deny from the Cordum dashboard at http://localhost:8080 under Approvals.

Step 7: View the Audit Trail

Open the dashboard and navigate to the Audit page:

| Time | Job ID  | Decision         | Rule                       | Reason              |
|------|---------|------------------|----------------------------|---------------------|
| now  | job-003 | REQUIRE_APPROVAL | require-approval-sensitive | Sensitive operation |
| now  | job-002 | DENY             | deny-pii-input             | PII detected        |
| now  | job-001 | ALLOW            | allow-crew-tasks           |                     |

Every decision is auditable — who submitted, what was decided, which rule matched, and why.
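If you export audit entries (for example as JSON), filtering them programmatically is straightforward. The record shape below is a hypothetical mirror of the dashboard columns, not a documented Cordum schema:

```python
# Hypothetical audit records mirroring the dashboard columns above.
audit_log = [
    {"job_id": "job-003", "decision": "REQUIRE_APPROVAL",
     "rule": "require-approval-sensitive", "reason": "Sensitive operation"},
    {"job_id": "job-002", "decision": "DENY",
     "rule": "deny-pii-input", "reason": "PII detected"},
    {"job_id": "job-001", "decision": "ALLOW",
     "rule": "allow-crew-tasks", "reason": ""},
]

def decisions_by_rule(records: list[dict], rule_id: str) -> list[dict]:
    """All audit entries that matched a given policy rule."""
    return [r for r in records if r["rule"] == rule_id]
```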

What You Get Without Changing Crew Code

| Concern              | Handled by             | Your crew code |
|----------------------|------------------------|----------------|
| PII in input         | Safety Kernel scanner  | Unchanged      |
| Sensitive operations | Approval gate          | Unchanged      |
| Audit trail          | Cordum platform        | Unchanged      |
| Output redaction     | Output policy (opt-in) | Unchanged      |

Your crew.py stays focused on business logic. Governance is a platform concern.

Next Steps

  • Output scanning: Add output_policy.enabled: true to scan crew responses for sensitive data
  • Custom keywords: Add your own keywords to the approval gate (e.g., "transfer funds", "modify permissions")
  • Multi-tenant: Configure per-tenant policies for different security postures
  • See the Output Safety docs for output policy reference
  • Try the AutoGen tutorial for multi-agent governance
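As a starting point for output scanning, an output policy might look like the fragment below. The exact keys are an assumption modeled on the input rules in safety.yaml; consult the Output Safety docs for the real schema.

```yaml
# Hypothetical sketch -- verify key names against the Output Safety docs.
output_policy:
  enabled: true
  rules:
    - id: redact-pii-output
      severity: high
      match:
        topics: ["job.default"]
        scanners: ["pii"]
      decision: redact
      reason: "Crew output contained PII and was redacted"
```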