# Quick Start

## Installation

```bash
pip install enforcecore
```

Or with all optional extras:

```bash
pip install enforcecore[all]
```

## Create a Policy
Create a file `policies/strict.yaml`:

```yaml
name: "strict-policy"
version: "1.0"

rules:
  allowed_tools:
    - search_web
    - calculator
    - translate
  denied_tools:
    - execute_shell
    - delete_file

pii_redaction:
  enabled: true
  categories: [email, phone, ssn, credit_card]
  strategy: placeholder

on_violation: block
```

## Enforce a Function
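Conceptually, decorator-based enforcement reduces to a membership check of the function name against the policy's allow/deny lists. Below is a minimal synchronous sketch with a stubbed `load_rules` helper and a stand-in `EnforcementViolation` exception; it is illustrative only, not enforcecore's actual implementation:

```python
# Illustrative sketch only; not enforcecore's actual code.
# load_rules is a stand-in for parsing the YAML policy above.
import functools


class EnforcementViolation(Exception):
    """Stand-in for the exception raised when a call is blocked by policy."""


def load_rules(policy_path: str) -> dict:
    # In a real implementation this would parse the YAML file at policy_path.
    return {
        "allowed_tools": {"search_web", "calculator", "translate"},
        "denied_tools": {"execute_shell", "delete_file"},
    }


def enforce_sketch(policy: str):
    rules = load_rules(policy)

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Deny list wins, then the allow list must contain the tool name.
            if func.__name__ in rules["denied_tools"]:
                raise EnforcementViolation(f"{func.__name__} is denied by policy")
            if func.__name__ not in rules["allowed_tools"]:
                raise EnforcementViolation(f"{func.__name__} is not in the allow list")
            return func(*args, **kwargs)

        return wrapper

    return decorator


@enforce_sketch(policy="policies/strict.yaml")
def calculator(expr: str) -> str:
    return str(eval(expr))  # demo only

print(calculator("2 + 2"))  # prints 4
```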
```python
from enforcecore import enforce

@enforce(policy="policies/strict.yaml")
async def search_web(query: str) -> str:
    """This function is now enforced by the policy."""
    return await api.search(query)

# Allowed — search_web is in the allowed list
result = await search_web("Python tutorials")

# Blocked — execute_shell is in the denied list
@enforce(policy="policies/strict.yaml")
async def execute_shell(cmd: str) -> str:
    return await shell.run(cmd)

await execute_shell("ls -la")  # raises EnforcementViolation
```

## Verify the Audit Trail
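In a hash-chained (Merkle-style) trail, each entry's digest covers both the event and the previous entry's digest, so editing any entry breaks every hash after it. The idea can be sketched with the standard library alone; this is illustrative, and enforcecore's on-disk entry format may differ:

```python
# Illustrative hash-chain sketch; enforcecore's actual entry format may differ.
import hashlib
import json


def append_entry(trail: list, event: dict) -> None:
    # Each entry records the previous entry's hash and a digest over both.
    prev = trail[-1]["hash"] if trail else "0" * 64
    payload = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    trail.append({"event": event, "prev": prev, "hash": digest})


def chain_intact(trail: list) -> bool:
    # Recompute every digest in order; any edit breaks the chain.
    prev = "0" * 64
    for entry in trail:
        payload = json.dumps({"event": entry["event"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev:
            return False
        if entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True


trail = []
append_entry(trail, {"tool": "search_web", "decision": "allow"})
append_entry(trail, {"tool": "execute_shell", "decision": "deny"})
print(chain_intact(trail))  # True

trail[0]["event"]["decision"] = "allow?"  # tamper with the first entry
print(chain_intact(trail))  # False
```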
```python
from enforcecore import verify_trail

result = verify_trail("audit_logs/trail.jsonl")
print(f"Valid: {result.is_valid}")
print(f"Entries: {result.total_entries}")
print(f"Chain intact: {result.chain_intact}")
```

## Use the CLI
```bash
# Validate a policy file
enforcecore validate policies/strict.yaml

# Verify an audit trail
enforcecore verify audit_logs/trail.jsonl

# Inspect policy decisions (dry run)
enforcecore dry-run policies/strict.yaml search_web

# Run the evaluation suite
enforcecore eval policies/strict.yaml
```

## LangChain Integration
Add PII redaction, policy enforcement, and audit to any LangChain LLM with a single callback — no changes to your chain topology required.
```bash
pip install langchain-core
```

```python
from enforcecore.integrations.langchain import EnforceCoreCallbackHandler

handler = EnforceCoreCallbackHandler(policy="policies/strict.yaml")

# Attach to any LangChain LLM
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(callbacks=[handler])
result = llm.invoke("Contact alice@example.com for details")
# Email is automatically redacted; audit entry written

# Or attach to an entire agent / chain
from langchain.agents import AgentExecutor

agent = AgentExecutor(agent=my_agent, tools=tools, callbacks=[handler])
```

What happens automatically on every call:
- `on_llm_start` — PII in prompts is redacted before the LLM sees them
- `on_llm_end` — PII in LLM responses is redacted before your code sees them
- `on_tool_start` — the tool name is checked against `allowed_tools`/`denied_tools`; raises `ToolDeniedError` if blocked
- `on_chain_start`/`on_chain_end` — PII in chain inputs/outputs is redacted
- Audit — every event is Merkle-chained to `audit_logs/trail.jsonl`
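The redaction steps above can be pictured as a find-and-replace pass over the text. Here is a rough email-only sketch using `re` with a placeholder strategy; enforcecore's real detectors presumably cover more categories and many edge cases this simple pattern misses:

```python
# Rough placeholder-style email redaction sketch; not enforcecore's detector.
import re

# Simplified email pattern; real detectors handle many more edge cases.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")


def redact_emails(text: str) -> str:
    # Replace each match with a placeholder token, per the "placeholder" strategy.
    return EMAIL_RE.sub("[EMAIL]", text)


print(redact_emails("Contact alice@example.com for details"))
# prints: Contact [EMAIL] for details
```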
See `examples/quickstart_langchain.py` for a fully runnable demo (no API key needed).
## Next Steps

- Read the Architecture to understand how enforcement works
- Browse the API Reference for detailed documentation
- See the Developer Guide to contribute