Runtime enforcement.
Not prompt prayers.
EnforceCore intercepts every agent action before execution, blocking policy violations, redacting PII, and producing tamper-proof audit trails. Works with LangChain, LangGraph, CrewAI, AutoGen, and plain Python. No model fine-tuning. No trust in the LLM.
from enforcecore import enforce
from langgraph.prebuilt import create_react_agent

@enforce(policy="policy.yaml")
def run_agent(user_input: str) -> str:
    agent = create_react_agent(model, tools)
    result = agent.invoke({"messages": user_input})
    return result["output"]

# EnforceCore fires on every tool call
# Policy violations → blocked before execution
# PII in output → redacted automatically
# Every action → Merkle-chained audit log
Works with any Python agent framework
Prompt guardrails can be bypassed.
Runtime enforcement cannot.
Instructing an LLM to "be safe" is not a security boundary. EnforceCore sits between your agent and the world, as code, not conversation.
- Model can be jailbroken
- No guarantee PII stays out of logs
- No audit trail you can verify
- Resource limits are suggestions
- Fails silently on adversarial prompts
- Policy enforced in Python, not in the LLM
- PII redacted before any tool call executes
- Merkle-chained, tamper-proof audit log
- Hard token, cost, and rate-limit caps
- 26 adversarial scenarios across 11 threat categories, all blocked
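The core idea behind "policy enforced in Python, not in the LLM" can be sketched in a few lines. This is an illustrative sketch only, not EnforceCore's internals; the denylist and function names are hypothetical. The point is that the check is ordinary code running before the tool, so a jailbroken model cannot talk its way past it.

```python
# Illustrative sketch -- not EnforceCore's implementation.
# The policy check runs in Python before the tool executes,
# so the model has no way to bypass it.
from typing import Any, Callable

DENIED_TOOLS = {"execute_shell", "send_email"}  # hypothetical policy

class PolicyViolation(Exception):
    pass

def enforced_call(tool: Callable[..., Any], *args: Any, **kwargs: Any) -> Any:
    """Evaluate policy in code, then execute -- or refuse."""
    if tool.__name__ in DENIED_TOOLS:
        raise PolicyViolation(f"{tool.__name__} is denied by policy")
    return tool(*args, **kwargs)

def search(query: str) -> str:
    return f"results for {query}"

def execute_shell(cmd: str) -> str:
    return "never reached"

print(enforced_call(search, "weather"))  # allowed, prints "results for weather"
# enforced_call(execute_shell, "rm -rf /") would raise PolicyViolation
```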
The enforcement stack. One decorator.
Every component works independently or together. Drop in what you need, from policy checks to full compliance reporting.
Policy Engine
Declarative YAML policy: allow/deny actions, require approvals, set confidence thresholds. Evaluated deterministically before every tool call.
policy = Policy.from_file("policy.yaml")
engine = PolicyEngine(policy)
result = engine.evaluate_pre_call(context)
# result.is_allowed | result.reason
PII Redactor
Detects and redacts 11 PII categories (email, phone, SSN, credit card, name, address, and more) using regex patterns and optional Presidio NER.
redactor = Redactor(strategy=PLACEHOLDER)
result = redactor.redact(agent_output)
# result.text → "[EMAIL]", "[PHONE]"
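To make the regex strategy concrete, here is a minimal sketch of placeholder redaction. It is not EnforceCore's detector, which covers 11 categories and can add Presidio NER; the two patterns below are just illustrative.

```python
# Minimal sketch of regex-based placeholder redaction --
# illustrative patterns, not EnforceCore's 11-category detector.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each match with its category placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Mail bob@example.com, SSN 123-45-6789"))
# -> Mail [EMAIL], SSN [SSN]
```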
Merkle Auditor
Every agent action is hashed and chained into a Merkle tree with external hash injection (Merkle Bridge). Tamper-evident. Exportable. Independently verifiable.
auditor = Auditor()
auditor.record(tool_name="search", decision="allowed")
# Verify: verify_trail("trail.jsonl")
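Why is a chained log tamper-evident? Each entry's hash covers the previous hash, so editing any earlier entry changes every hash after it. The sketch below shows that property with a plain hash chain; it is not the actual Merkle tree or Merkle Bridge implementation, and the entry fields are illustrative.

```python
# Sketch of why hash chaining is tamper-evident --
# not the actual Merkle tree / Merkle Bridge implementation.
import hashlib
import json

def chain(entries: list[dict]) -> list[str]:
    """Hash each entry together with the previous hash."""
    prev, hashes = "genesis", []
    for entry in entries:
        payload = prev + json.dumps(entry, sort_keys=True)
        prev = hashlib.sha256(payload.encode()).hexdigest()
        hashes.append(prev)
    return hashes

log = [
    {"tool": "search", "decision": "allowed"},
    {"tool": "send_email", "decision": "blocked"},
]
original = chain(log)

# Tampering with an earlier entry changes every later hash:
log[0]["decision"] = "blocked"
assert chain(log)[-1] != original[-1]
```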
Resource Guard
Hard limits on tokens per call, calls per minute, and cost per session. When limits are hit, calls are blocked: not throttled, not logged, blocked.
# Configured via policy YAML
enforcer = Enforcer.from_file("policy.yaml")
enforcer.record_cost(0.02)
# Raises CostLimitError if exceeded
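The "blocked, not throttled" behavior amounts to raising on the first call that crosses the budget. A minimal sketch, with illustrative class and parameter names (only `record_cost` and `CostLimitError` appear in the snippet above):

```python
# Sketch of a hard session cost cap -- names other than
# record_cost/CostLimitError are illustrative, not the Enforcer API.
class CostLimitError(Exception):
    pass

class CostGuard:
    def __init__(self, max_cost: float) -> None:
        self.max_cost = max_cost
        self.spent = 0.0

    def record_cost(self, amount: float) -> None:
        """Raise -- do not throttle -- once the budget is exceeded."""
        self.spent += amount
        if self.spent > self.max_cost:
            raise CostLimitError(
                f"spent ${self.spent:.2f} exceeds ${self.max_cost:.2f} cap"
            )

guard = CostGuard(max_cost=0.05)
guard.record_cost(0.02)  # fine
guard.record_cost(0.02)  # fine
# guard.record_cost(0.02) would now raise CostLimitError
```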
Network Control
Domain allowlisting with wildcard support. Agents can only reach endpoints you explicitly permit. All other traffic is denied.
network_control:
  allowed_domains:
    - "api.openai.com"
    - "*.internal.co"
# All other domains → blocked
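Default-deny allowlisting with wildcards can be sketched with the standard library's `fnmatch`; EnforceCore's own matcher may differ, and the domains below are the illustrative ones from the config above.

```python
# Sketch of default-deny wildcard matching via fnmatch --
# EnforceCore's actual matcher may differ.
from fnmatch import fnmatch

ALLOWED = ["api.openai.com", "*.internal.co"]

def is_allowed(domain: str) -> bool:
    """Permit only domains matching an allowlist pattern."""
    return any(fnmatch(domain, pattern) for pattern in ALLOWED)

assert is_allowed("api.openai.com")
assert is_allowed("db.internal.co")      # matches "*.internal.co"
assert not is_allowed("evil.example.com")  # default deny
```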
Streaming Enforcement
AsyncIO streaming enforcement with lookahead. PII redaction and policy checks run on streaming output in real time, not just at the end.
from enforcecore import stream_enforce

async for chunk in stream_enforce(gen):
    ...  # PII redacted in real time
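The lookahead matters because a PII pattern can be split across chunk boundaries. Here is a self-contained sketch of that technique: hold back a small tail of the buffer so a straddling match is still caught. It is illustrative only, not `stream_enforce`'s implementation, and handles a single pattern for brevity.

```python
# Sketch of lookahead redaction on a stream: hold back enough of
# the buffer that a pattern split across chunks is still caught.
# Illustrative only -- not stream_enforce's implementation.
import asyncio
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
MAX_LEN = 11  # longest text an SSN match can span

def split_safe(buffer: str) -> tuple[str, str]:
    """Return (safe-to-emit prefix, held-back tail)."""
    cut = max(len(buffer) - (MAX_LEN - 1), 0)
    for m in SSN.finditer(buffer):
        if m.start() < cut < m.end():
            cut = m.start()  # never cut inside a complete match
    return buffer[:cut], buffer[cut:]

async def redact_stream(chunks):
    buffer = ""
    async for chunk in chunks:
        buffer += chunk
        safe, buffer = split_safe(buffer)
        if safe:
            yield SSN.sub("[SSN]", safe)
    yield SSN.sub("[SSN]", buffer)  # flush the held-back tail

async def demo() -> str:
    async def chunks():
        # The SSN is deliberately split across two chunks.
        for part in ("My SSN is 123-4", "5-6789, ok?"):
            yield part
    return "".join([piece async for piece in redact_stream(chunks())])

print(asyncio.run(demo()))  # -> My SSN is [SSN], ok?
```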
Compliance Reporting
Export audit trails as EU AI Act, SOC2, or GDPR compliance reports. Built-in report generators with enforcecore audit export.
$ enforcecore audit export \
    --format eu-ai-act \
    --output report.html
# SOC2, GDPR also supported
Plugin Ecosystem
Extend enforcement with custom guards and redactors from PyPI. Discover and manage plugins with enforcecore plugin list.
$ enforcecore plugin list
$ enforcecore plugin install \
    my-custom-guard
# Extend via PyPI packages
26 adversarial scenarios.
0 breaches.
The test suite includes 26 adversarial scenarios across 11 threat categories โ prompt injection, PII leakage, resource exhaustion, policy bypass, and more. Plus 22 Hypothesis property-based tests. Every scenario is contained before the tool call executes.
View test methodology →

| Threat Category | Scenarios | Result |
|---|---|---|
| Prompt injection | 2/2 | ✓ Contained |
| PII leakage | 2/2 | ✓ Contained |
| Resource exhaustion | 2/2 | ✓ Contained |
| Unauthorized file access | 2/2 | ✓ Contained |
| Shell execution | 2/2 | ✓ Contained |
| Exfiltration via email | 2/2 | ✓ Contained |
| Audit log tampering | 2/2 | ✓ Contained |
| Policy bypass via jailbreak | 2/2 | ✓ Contained |
| Indirect injection via tool output | 2/2 | ✓ Contained |
| Cost limit bypass | 2/2 | ✓ Contained |
| Network exfiltration | 4/4 | ✓ Contained |
Native LangChain integration.
Since v1.13, EnforceCore ships a native EnforceCoreCallbackHandler for LangChain. PII redaction and audit logging on every LLM call, with zero code changes to your chain.
from enforcecore.integrations.langchain import (
    EnforceCoreCallbackHandler,
)
from langchain_openai import ChatOpenAI

# One line: PII redacted, audit logged
handler = EnforceCoreCallbackHandler(
    policy="policy.yaml",
)
llm = ChatOpenAI(callbacks=[handler])
result = llm.invoke("My SSN is 123-45-6789")
# SSN redacted before the LLM sees it
# Audit entry created automatically
Up in three steps.
No new infrastructure. No vendor account. Just Python.
Install
One package. Four runtime dependencies. Works in any virtualenv. Python 3.11+.
pip install enforcecore
Write a policy
Declare what your agent can and cannot do in plain YAML. Supports inheritance, multi-tenant, and remote policy servers.
name: "my-policy"
rules:
  denied_tools: [execute_shell, send_email]
pii_redaction:
  enabled: true
Add the decorator
One decorator wraps your agent. EnforceCore handles policy, PII, audit, and resource limits.
from enforcecore import enforce

@enforce(policy="policy.yaml")
def run_agent(user_input: str) -> str:
    ...
Common questions
Does EnforceCore add latency to my agent?
The measured overhead is 0.056 ms P50 for full E2E enforcement (Apple Silicon, Python 3.13). In any real agent workflow, network latency to the LLM dwarfs this by four orders of magnitude. PII detection with Presidio NER adds ~2 ms; regex-only mode stays under 0.1 ms.
Which agent frameworks are supported?
Any framework that uses Python functions as tools: LangChain (native callback handler), LangGraph, CrewAI, AutoGen, and plain Python. The @enforce decorator is framework-agnostic โ it wraps any callable.
Does EnforceCore send any data to external services?
No. EnforceCore runs entirely in your process. The policy engine, PII redactor, and Merkle auditor have zero network calls. Audit logs are written locally. Nothing leaves your environment unless you explicitly export it.
What is the license?
Apache 2.0. You can use, modify, and distribute EnforceCore in commercial products without restriction. No royalties, no usage limits, no GPL copyleft. Full terms in LICENSE.
Is this production-ready?
Yes. v1.14.0 is the latest stable release โ the 14th shipped version. It has 2,366 tests, 97% coverage, and a semantic versioning commitment: no breaking changes in the 1.x line without a major version bump.
Does it support compliance reporting?
Yes. Since v1.8, EnforceCore can export audit trails as EU AI Act, SOC2, and GDPR compliance reports via enforcecore audit export. These are structured reports designed for auditors โ not just raw logs.
Your agents should have hard limits,
not soft suggestions.
EnforceCore is free, open-source, and deploys in minutes. 14 releases shipped. Start enforcing today.