EnforceCore
v1.0.1  ·  Stable  ·  Apache 2.0

Runtime enforcement.
Not prompt prayers.

EnforceCore intercepts every agent action before execution — blocking policy violations, redacting PII, and producing tamper-proof audit trails. No model fine-tuning. No trust in the LLM.

Install

pip install enforcecore
from enforcecore import enforce, PolicyEngine
from langgraph.prebuilt import create_react_agent

engine = PolicyEngine.from_file("policy.yaml")

@enforce(engine)
def run_agent(user_input: str) -> str:
    agent = create_react_agent(model, tools)  # your model and tool list
    result = agent.invoke({"messages": [("user", user_input)]})
    return result["messages"][-1].content

# EnforceCore fires on every tool call
# Policy violations → blocked before execution
# PII in output → redacted automatically
# Every action → Merkle-chained audit log
1,510 tests passing
95% coverage
20/20 adversarial scenarios contained
<0.06 ms enforcement overhead
4 runtime dependencies

Works with any Python agent framework

LangGraph  ·  CrewAI  ·  AutoGen  ·  LlamaIndex  ·  Raw Python

Prompt guardrails can be bypassed.
Runtime enforcement cannot.

Instructing an LLM to "be safe" is not a security boundary. EnforceCore sits between your agent and the world — as code, not conversation.

❌ Prompt Guardrails
  • Model can be jailbroken
  • No guarantee PII stays out of logs
  • No audit trail you can verify
  • Resource limits are suggestions
  • Fails silently on adversarial prompts
✅ EnforceCore
  • Policy enforced in Python, not in the LLM
  • PII redacted before any tool call executes
  • Merkle-chained, tamper-proof audit log
  • Hard token, cost, and rate-limit caps
  • 20/20 adversarial scenarios blocked

Four enforcement layers. One decorator.

Every component works independently or together. Drop in what you need.

⚙️

Policy Engine

Declarative YAML policy — allow/deny actions, require approvals, set confidence thresholds. Evaluated deterministically before every tool call.

engine = PolicyEngine.from_file("policy.yaml")
result = engine.evaluate(action, context)
# result.allowed | result.reason
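The card above mentions approvals and confidence thresholds as well as allow/deny lists. A policy using them might look like the sketch below; key names other than `actions` and `pii` are illustrative assumptions, not the confirmed schema.

```yaml
# Hypothetical policy sketch. The `actions.deny` and `pii` keys match
# the documented examples; `require_approval` and `confidence_threshold`
# are assumed names for illustration only.
actions:
  deny: [execute_shell, send_email]
  require_approval: [write_file]
pii:
  enabled: true
  mode: redact
confidence_threshold: 0.8
```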
🔒

PII Redactor

Detects and redacts 6 PII categories — email, phone, SSN, credit card, name, address — using both regex patterns and optional spaCy NER.

redactor = PIIRedactor(mode="redact")
clean = redactor.process(agent_output)
# "[EMAIL]", "[PHONE]", "[SSN]"...
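To make the regex side of this concrete, here is a minimal self-contained sketch of pattern-based redaction. The patterns and tag strings are illustrative assumptions, not EnforceCore's actual implementation, which covers more categories and edge cases.

```python
import re

# Illustrative patterns only: real-world PII detection needs broader
# coverage (international formats, validation, NER for names/addresses).
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    # Replace each match with its category tag, one pattern at a time.
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(tag, text)
    return text
```

Regex handles structured PII cheaply; unstructured PII such as names is why the optional spaCy NER mode exists.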
🔗

Merkle Auditor

Every agent action is hashed and chained into a Merkle tree. Tamper-evident. Exportable. Independently verifiable without trusting the service that generated it.

auditor = MerkleAuditor()
auditor.record(action, result)
proof = auditor.export_proof()
# Verify: auditor.verify(proof)
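The tamper-evidence property comes from hashing each entry together with its predecessor's hash. The sketch below shows that chaining idea in miniature, assuming a simplified log; the real MerkleAuditor builds a full Merkle tree with exportable proofs.

```python
import hashlib
import json

# Simplified hash-chain log: each entry commits to the previous entry's
# digest, so editing any record invalidates every later hash.
class ChainLog:
    def __init__(self):
        self.entries = []          # list of (payload, digest) pairs
        self._last = "0" * 64      # genesis hash

    def record(self, action: str, result: str) -> str:
        payload = json.dumps(
            {"action": action, "result": result, "prev": self._last},
            sort_keys=True,
        )
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append((payload, digest))
        self._last = digest
        return digest

    def verify(self) -> bool:
        # Re-walk the chain; any edited payload breaks the hash links.
        prev = "0" * 64
        for payload, digest in self.entries:
            if json.loads(payload)["prev"] != prev:
                return False
            if hashlib.sha256(payload.encode()).hexdigest() != digest:
                return False
            prev = digest
        return True
```

A Merkle tree extends this with logarithmic-size inclusion proofs, which is what makes the log independently verifiable without replaying every entry.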
🛡️

Resource Guard

Hard limits on tokens per call, calls per minute, and cost per session. When limits are hit, calls are blocked — not throttled, not logged, blocked.

guard = ResourceGuard(
  max_tokens=4000,
  max_cost_usd=1.00
)
# Raises ResourceLimitError if exceeded
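Blocking rather than throttling means the check happens before the spend, not after. A minimal sketch of that accounting, assuming per-call charges (the class body here is illustrative, not EnforceCore's internals):

```python
class ResourceLimitError(Exception):
    pass

# Illustrative cap accounting: raise *before* a call would exceed
# either limit, so the over-budget call never executes.
class Guard:
    def __init__(self, max_tokens: int, max_cost_usd: float):
        self.max_tokens = max_tokens
        self.max_cost_usd = max_cost_usd
        self.tokens_used = 0
        self.cost_used = 0.0

    def charge(self, tokens: int, cost_usd: float) -> None:
        if self.tokens_used + tokens > self.max_tokens:
            raise ResourceLimitError("token cap exceeded")
        if self.cost_used + cost_usd > self.max_cost_usd:
            raise ResourceLimitError("cost cap exceeded")
        self.tokens_used += tokens
        self.cost_used += cost_usd
```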

20/20 adversarial scenarios.
0 breaches.

The v1.0.1 test suite includes 20 adversarial scenarios across 10 threat categories — prompt injection, PII leakage, resource exhaustion, and more. Every scenario is contained before the tool call executes.

View test methodology →
Threat Category                       Scenarios   Result
Prompt injection                      2/2         ✓ Contained
PII leakage                           2/2         ✓ Contained
Resource exhaustion                   2/2         ✓ Contained
Unauthorized file access              2/2         ✓ Contained
Shell execution                       2/2         ✓ Contained
Exfiltration via email                2/2         ✓ Contained
Audit log tampering                   2/2         ✓ Contained
Policy bypass via jailbreak           2/2         ✓ Contained
Indirect injection via tool output    2/2         ✓ Contained
Cost limit bypass                     2/2         ✓ Contained

Up in three steps.

No new infrastructure. No vendor account. Just Python.

1

Install

One package. Four runtime dependencies. Works in any virtualenv.

pip install enforcecore
2

Write a policy

Declare what your agent can and cannot do in plain YAML.

actions:
  deny: [execute_shell, send_email]
pii:
  enabled: true
  mode: redact
3

Add the decorator

One decorator wraps your agent. EnforceCore handles the rest.

engine = PolicyEngine.from_file("policy.yaml")

@enforce(engine)
def run_agent(user_input: str) -> str:
    ...
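To see why a decorator is enough to intercept every call, here is a toy enforcement decorator over a plain deny-list. The deny-list check stands in for the real PolicyEngine; names like `PolicyViolation` are assumptions for this sketch.

```python
import functools

class PolicyViolation(Exception):
    pass

# Toy enforcement decorator: the policy check runs inside the wrapper,
# so it fires on every invocation, before the wrapped code executes.
def enforce(denied: set):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(action: str, *args, **kwargs):
            if action in denied:
                raise PolicyViolation(f"{action} denied by policy")
            return fn(action, *args, **kwargs)
        return wrapper
    return decorator

@enforce({"execute_shell", "send_email"})
def run_tool(action: str) -> str:
    return f"ran {action}"
```

Because the wrapper sits between caller and callee in plain Python, no framework cooperation is needed, which is what makes the approach framework-agnostic.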

Common questions

Does EnforceCore add latency to my agent?

The measured overhead is 0.056 ms per call (p99 on M2 MacBook Pro). In any real agent workflow, network latency to the LLM dwarfs this by four orders of magnitude. PII detection with the full spaCy NER model adds ~2 ms; regex-only mode stays under 0.1 ms.

Which agent frameworks are supported?

Any framework that uses Python functions as tools: LangGraph, CrewAI, AutoGen, LlamaIndex, and plain Python. The @enforce decorator is framework-agnostic — it wraps any callable.

Does EnforceCore send any data to external services?

No. EnforceCore runs entirely in your process. The policy engine, PII redactor, and Merkle auditor have zero network calls. Audit logs are written locally. Nothing leaves your environment unless you explicitly export it.

What is the license?

Apache 2.0. You can use, modify, and distribute EnforceCore in commercial products without restriction. No royalties, no usage limits, no GPL copyleft. Full terms in LICENSE.

Is this production-ready?

Yes. v1.0.1 is the first stable release. It has 1,510 tests, 95% coverage, and a semantic versioning commitment: no breaking changes in the 1.x line without a major version bump. The changelog is public at CHANGELOG.md.

Regex or NLP for PII detection?

Both. Regex patterns handle structured PII (email, phone, SSN, credit card) with near-zero overhead. Optional spaCy NER adds PERSON and GPE entity detection for names and addresses. Enable it with mode: "spacy" in your policy. Regex-only is the default.

Your agents should have hard limits,
not soft suggestions.

EnforceCore is free, open-source, and deploys in minutes. Start enforcing today.

Full reference docs at akios.ai/enforcecore  ·  Built by the AKIOS