v1.14.0  ·  Stable  ·  Apache 2.0

Runtime enforcement.
Not prompt prayers.

EnforceCore intercepts every agent action before execution — blocking policy violations, redacting PII, and producing tamper-proof audit trails. Works with LangChain, LangGraph, CrewAI, AutoGen, and plain Python. No model fine-tuning. No trust in the LLM.

Install

pip install enforcecore
from enforcecore import enforce
from langgraph.prebuilt import create_react_agent

@enforce(policy="policy.yaml")
def run_agent(user_input: str) -> str:
    agent = create_react_agent(model, tools)
    result = agent.invoke({"messages": [("user", user_input)]})
    return result["messages"][-1].content

# EnforceCore fires on every tool call
# Policy violations → blocked before execution
# PII in output → redacted automatically
# Every action → Merkle-chained audit log
2,366 tests passing
97% coverage
26 adversarial scenarios contained
<0.06 ms E2E enforcement (P50)
4 runtime dependencies
14 releases shipped

Works with any Python agent framework

LangChain LangGraph CrewAI AutoGen Plain Python

Prompt guardrails can be bypassed.
Runtime enforcement cannot.

Instructing an LLM to "be safe" is not a security boundary. EnforceCore sits between your agent and the world — as code, not conversation.

โŒ Prompt Guardrails
  • Model can be jailbroken
  • No guarantee PII stays out of logs
  • No audit trail you can verify
  • Resource limits are suggestions
  • Fails silently on adversarial prompts
✅ EnforceCore
  • Policy enforced in Python, not in the LLM
  • PII redacted before any tool call executes
  • Merkle-chained, tamper-proof audit log
  • Hard token, cost, and rate-limit caps
  • 26 adversarial scenarios across 11 threat categories โ€” all blocked

The enforcement stack. One decorator.

Every component works independently or together. Drop in what you need — from policy checks to full compliance reporting.

⚙️

Policy Engine

Declarative YAML policy — allow/deny actions, require approvals, set confidence thresholds. Evaluated deterministically before every tool call.

policy = Policy.from_file("policy.yaml")
engine = PolicyEngine(policy)
result = engine.evaluate_pre_call(context)
# result.is_allowed | result.reason
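The shape of a deterministic pre-call check can be sketched with the standard library alone. Everything below (`SimplePolicy`, `evaluate_pre_call`, `Decision`) is illustrative naming, not EnforceCore's actual API — the real engine also handles approvals and confidence thresholds:

```python
from dataclasses import dataclass, field

# Hypothetical minimal policy check: deny-listed tools are refused
# before the call ever executes. Names are illustrative only.
@dataclass
class Decision:
    is_allowed: bool
    reason: str

@dataclass
class SimplePolicy:
    denied_tools: set[str] = field(default_factory=set)

def evaluate_pre_call(policy: SimplePolicy, tool_name: str) -> Decision:
    if tool_name in policy.denied_tools:
        return Decision(False, f"tool '{tool_name}' is denied by policy")
    return Decision(True, "allowed")

policy = SimplePolicy(denied_tools={"execute_shell", "send_email"})
print(evaluate_pre_call(policy, "execute_shell").is_allowed)  # False
print(evaluate_pre_call(policy, "search").is_allowed)         # True
```

Because the check is plain Python over a static rule set, the same input always yields the same decision — nothing depends on model behavior.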
🔒

PII Redactor

Detects and redacts 11 PII categories — email, phone, SSN, credit card, name, address, and more — using regex patterns and optional Presidio NER.

redactor = Redactor(strategy=PLACEHOLDER)
result = redactor.redact(agent_output)
# result.text → "[EMAIL]", "[PHONE]"
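The regex placeholder strategy itself is straightforward; here is a minimal stdlib sketch covering two of the categories. The patterns are deliberately simplified — production patterns (and Presidio NER) are far more robust:

```python
import re

# Simplified illustration of placeholder-style redaction.
# Real-world email/phone patterns handle many more formats.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach me at jane@example.com or 555-867-5309."))
# Reach me at [EMAIL] or [PHONE].
```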
🔗

Merkle Auditor

Every agent action is hashed and chained into a Merkle tree with external hash injection (Merkle Bridge). Tamper-evident. Exportable. Independently verifiable.

auditor = Auditor()
auditor.record(tool_name="search", decision="allowed")
# Verify: verify_trail("trail.jsonl")
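Why hash chaining makes tampering evident can be shown with a minimal sketch. This reduces the idea to a linear hash chain over JSON entries — EnforceCore builds a full Merkle tree with external hash injection, so treat this only as the underlying principle:

```python
import hashlib
import json

# Each entry commits to the previous entry's hash, so editing any
# record invalidates every hash after it. Simplified linear chain.
def append_entry(chain: list[dict], tool_name: str, decision: str) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"tool": tool_name, "decision": decision, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain: list[dict]) -> bool:
    prev_hash = "0" * 64
    for entry in chain:
        body = {"tool": entry["tool"], "decision": entry["decision"], "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = entry["hash"]
    return True

chain: list[dict] = []
append_entry(chain, "search", "allowed")
append_entry(chain, "send_email", "blocked")
print(verify(chain))              # True
chain[0]["decision"] = "allowed"  # forge the record
print(verify(chain))              # False: the edit breaks the chain
```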
🛡️

Resource Guard

Hard limits on tokens per call, calls per minute, and cost per session. When limits are hit, calls are blocked — not throttled, not logged, blocked.

# Configured via policy YAML
enforcer = Enforcer.from_file("policy.yaml")
enforcer.record_cost(0.02)
# Raises CostLimitError if exceeded
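The "blocked, not throttled" behavior amounts to refusing the call the moment the budget would be exceeded. A minimal sketch, assuming a hypothetical `CostGuard` class (the `CostLimitError` name mirrors the error above, but this implementation is ours, not EnforceCore's):

```python
# Hard cost cap: the offending call is refused outright rather than
# slowed down. Class and logic here are illustrative.
class CostLimitError(RuntimeError):
    pass

class CostGuard:
    def __init__(self, max_cost_usd: float) -> None:
        self.max_cost_usd = max_cost_usd
        self.spent = 0.0

    def record_cost(self, amount: float) -> None:
        if self.spent + amount > self.max_cost_usd:
            raise CostLimitError(
                f"session cost would reach {self.spent + amount:.2f}, "
                f"cap is {self.max_cost_usd:.2f}"
            )
        self.spent += amount

guard = CostGuard(max_cost_usd=0.05)
guard.record_cost(0.02)  # ok
guard.record_cost(0.02)  # ok
try:
    guard.record_cost(0.02)  # would exceed the cap → blocked
except CostLimitError as err:
    print(err)
```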
🌐

Network Control

Domain allowlisting with wildcard support. Agents can only reach endpoints you explicitly permit. All other traffic is denied.

network_control:
  allowed_domains:
    - "api.openai.com"
    - "*.internal.co"
# All other domains → blocked
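Deny-by-default wildcard matching can be pictured with the standard library's `fnmatch`; EnforceCore's matcher may differ in detail, but the shape is the same — anything not on the allowlist is refused:

```python
from fnmatch import fnmatch

# Illustrative deny-by-default allowlist check using shell-style
# wildcards. EnforceCore's actual matching rules may differ.
ALLOWED = ["api.openai.com", "*.internal.co"]

def is_allowed(domain: str) -> bool:
    return any(fnmatch(domain, pattern) for pattern in ALLOWED)

print(is_allowed("api.openai.com"))    # True: exact match
print(is_allowed("db.internal.co"))    # True: wildcard match
print(is_allowed("evil.example.com"))  # False: deny by default
```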
🌊

Streaming Enforcement

AsyncIO streaming enforcement with lookahead. PII redaction and policy checks run on streaming output in real time, not just at the end.

from enforcecore import stream_enforce

async for chunk in stream_enforce(gen):
    print(chunk, end="")  # PII redacted in real time
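The role of the lookahead buffer is to catch PII that straddles chunk boundaries: the redactor holds back a short tail of text until more input arrives. A simplified sketch with our own names (`stream_redact`, a single email pattern), not EnforceCore's implementation:

```python
import asyncio
import re
from collections.abc import AsyncIterator

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w-]+")

# Illustrative streaming redaction: hold back `lookahead` characters so
# a token split across chunks can still be matched once it completes.
async def stream_redact(
    source: AsyncIterator[str], lookahead: int = 32
) -> AsyncIterator[str]:
    buffer = ""
    async for chunk in source:
        buffer += chunk
        if len(buffer) > lookahead:
            safe, buffer = buffer[:-lookahead], buffer[-lookahead:]
            yield EMAIL.sub("[EMAIL]", safe)
    yield EMAIL.sub("[EMAIL]", buffer)  # flush the remaining tail

async def chunks() -> AsyncIterator[str]:
    # The email is deliberately split across three chunks.
    for part in ["Contact: ja", "ne@exampl", "e.com done"]:
        yield part

async def main() -> str:
    return "".join([piece async for piece in stream_redact(chunks())])

print(asyncio.run(main()))  # Contact: [EMAIL] done
```

Note the trade-off: a larger lookahead catches longer split tokens but delays output by that many characters.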
📋

Compliance Reporting

Export audit trails as EU AI Act, SOC2, or GDPR compliance reports. Built-in report generators with enforcecore audit export.

$ enforcecore audit export \
    --format eu-ai-act \
    --output report.html
# SOC2, GDPR also supported
🔌

Plugin Ecosystem

Extend enforcement with custom guards and redactors from PyPI. Discover and manage plugins with enforcecore plugin list.

$ enforcecore plugin list
$ enforcecore plugin install \
    my-custom-guard
# Extend via PyPI packages

26 adversarial scenarios.
0 breaches.

The test suite includes 26 adversarial scenarios across 11 threat categories — prompt injection, PII leakage, resource exhaustion, policy bypass, and more. Plus 22 Hypothesis property-based tests. Every scenario is contained before the tool call executes.

View test methodology →
Threat Category Scenarios Result
Prompt injection 2/2 ✓ Contained
PII leakage 2/2 ✓ Contained
Resource exhaustion 2/2 ✓ Contained
Unauthorized file access 2/2 ✓ Contained
Shell execution 2/2 ✓ Contained
Exfiltration via email 2/2 ✓ Contained
Audit log tampering 2/2 ✓ Contained
Policy bypass via jailbreak 2/2 ✓ Contained
Indirect injection via tool output 2/2 ✓ Contained
Cost limit bypass 2/2 ✓ Contained
Network exfiltration 4/4 ✓ Contained
New in v1.13

Native LangChain integration.

Since v1.13, EnforceCore ships a native EnforceCoreCallbackHandler for LangChain. PII redaction and audit logging on every LLM call — zero code changes to your chain.

See integration docs →
langchain_demo.py
from enforcecore.integrations.langchain import (
    EnforceCoreCallbackHandler
)
from langchain_openai import ChatOpenAI

# One line — PII redacted, audit logged
handler = EnforceCoreCallbackHandler(
    policy="policy.yaml"
)
llm = ChatOpenAI(callbacks=[handler])

result = llm.invoke("My SSN is 123-45-6789")
# SSN redacted before the LLM sees it
# Audit entry created automatically

Up in three steps.

No new infrastructure. No vendor account. Just Python.

1

Install

One package. Four runtime dependencies. Works in any virtualenv. Python 3.11+.

pip install enforcecore
2

Write a policy

Declare what your agent can and cannot do in plain YAML. Supports inheritance, multi-tenant, and remote policy servers.

name: "my-policy"
rules:
  denied_tools: [execute_shell, send_email]
  pii_redaction:
    enabled: true
3

Add the decorator

One decorator wraps your agent. EnforceCore handles policy, PII, audit, and resource limits.

from enforcecore import enforce

@enforce(policy="policy.yaml")
def run_agent(user_input: str) -> str:
    ...

Common questions

Does EnforceCore add latency to my agent?

The measured overhead is 0.056 ms P50 for full E2E enforcement (Apple Silicon, Python 3.13). In any real agent workflow, network latency to the LLM dwarfs this by four orders of magnitude. PII detection with Presidio NER adds ~2 ms; regex-only mode stays under 0.1 ms.

Which agent frameworks are supported?

Any framework that uses Python functions as tools: LangChain (native callback handler), LangGraph, CrewAI, AutoGen, and plain Python. The @enforce decorator is framework-agnostic — it wraps any callable.
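Why wrapping a callable is framework-agnostic can be shown with a plain decorator sketch — checks run before the function body, regardless of which framework calls it. The names below (`enforce_sketch`, `DENIED_SUBSTRINGS`) are ours, not EnforceCore's:

```python
import functools
from typing import Any, Callable

# Illustrative decorator: inspect string arguments before the wrapped
# callable runs; refuse the call if a denied pattern appears.
DENIED_SUBSTRINGS = ("rm -rf",)

def enforce_sketch(func: Callable[..., Any]) -> Callable[..., Any]:
    @functools.wraps(func)
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        for value in list(args) + list(kwargs.values()):
            if isinstance(value, str) and any(bad in value for bad in DENIED_SUBSTRINGS):
                raise PermissionError("blocked before execution")
        return func(*args, **kwargs)  # post-call hooks (redaction, audit) would go here
    return wrapper

@enforce_sketch
def run_tool(command: str) -> str:
    return f"ran: {command}"

print(run_tool("ls"))  # ran: ls
try:
    run_tool("rm -rf /")
except PermissionError as err:
    print(err)  # blocked before execution
```

Since the wrapper only needs a callable, the same pattern applies to a LangGraph tool, a CrewAI task, or a bare Python function.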

Does EnforceCore send any data to external services?

No. EnforceCore runs entirely in your process. The policy engine, PII redactor, and Merkle auditor have zero network calls. Audit logs are written locally. Nothing leaves your environment unless you explicitly export it.

What is the license?

Apache 2.0. You can use, modify, and distribute EnforceCore in commercial products without restriction. No royalties, no usage limits, no GPL copyleft. Full terms in LICENSE.

Is this production-ready?

Yes. v1.14.0 is the latest stable release โ€” the 14th shipped version. It has 2,366 tests, 97% coverage, and a semantic versioning commitment: no breaking changes in the 1.x line without a major version bump.

Does it support compliance reporting?

Yes. Since v1.8, EnforceCore can export audit trails as EU AI Act, SOC2, and GDPR compliance reports via enforcecore audit export. These are structured reports designed for auditors โ€” not just raw logs.

Your agents should have hard limits,
not soft suggestions.

EnforceCore is free, open-source, and deploys in minutes. 14 releases shipped. Start enforcing today.

Full reference docs at akios.ai/enforcecore  ·  Built by the AKIOS