# Security Model
EnforceCore is designed to provide a robust security boundary for AI agents. This document details the attack surface, entry points, and the mitigations in place to protect your infrastructure.
## Trust Boundaries
EnforceCore operates on the principle that the agent is untrusted. Even if the code is written by your team, the LLM driving the agent can be manipulated via prompt injection to execute malicious actions.
EnforceCore establishes a trust boundary around the tool execution:
- Untrusted: The Agent, the LLM, and the inputs (prompts).
- Trusted: The Policy Engine, the Enforcer, and the Audit Log.
- Protected: The external tools, APIs, and filesystem.
## Entry Points & Attack Vectors
### 1. Enforcement API
The primary entry point is the `@enforce` decorator or direct calls to the `Enforcer`.
- Attack: An attacker attempts to bypass the decorator or call the underlying tool directly.
- Mitigation: EnforceCore cannot prevent direct code execution if the attacker has shell access to the server. However, within the application logic, the `Enforcer` acts as a mandatory gateway.

> **Danger:** Ensure that your agent code does not import the "raw" tool functions. Always import the decorated versions.
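The mandatory-gateway pattern can be sketched as follows. This is an illustrative sketch, not EnforceCore's implementation: `PolicyViolation`, `policy_allows`, and `ALLOWED_TOOLS` are hypothetical stand-ins for the real Policy Engine, and the decorator simply forces every call through a policy check before the tool body runs.

```python
import functools

class PolicyViolation(Exception):
    """Raised when a tool call is denied by policy (hypothetical name)."""

# A static allow-list stands in for the real Policy Engine.
ALLOWED_TOOLS = {"read_file"}

def policy_allows(tool_name, args, kwargs):
    """Placeholder policy check: allow only tools on the allow-list."""
    return tool_name in ALLOWED_TOOLS

def enforce(func):
    """Sketch of an @enforce-style decorator: the policy check is
    unconditional, so the decorated function is the only safe import."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if not policy_allows(func.__name__, args, kwargs):
            raise PolicyViolation(f"{func.__name__} denied by policy")
        return func(*args, **kwargs)
    return wrapper

@enforce
def read_file(path):
    return f"contents of {path}"

@enforce
def delete_file(path):
    raise RuntimeError("should never run under the read-only policy")
```

The key property is that importing `read_file` or `delete_file` gives the agent the wrapped version; only an import of the undecorated function would bypass the check, which is exactly what the danger note above warns against.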
### 2. Policy Loading
Policies are loaded from YAML files or dictionaries.
- Attack: Malformed YAML or path traversal in `extends`.
- Mitigation:
  - `yaml.safe_load()` is used to prevent code execution during deserialization.
  - Pydantic validation ensures the schema is correct.
- Unmitigated: There is currently no jail for file paths in `extends`. Ensure policy files are stored in a secure location.
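A minimal sketch of the hardened loading path, assuming PyYAML is available. `safe_load()` refuses YAML tags that construct arbitrary Python objects; the manual checks below stand in for the Pydantic schema validation described above, and the field names (`name`, `allowed_tools`) are hypothetical:

```python
import yaml  # PyYAML

def load_policy(text):
    """Sketch of hardened policy loading: safe_load() blocks object
    construction, then the shape of the document is validated before use."""
    data = yaml.safe_load(text)  # raises ConstructorError on unsafe tags
    if not isinstance(data, dict) or not isinstance(data.get("name"), str):
        raise ValueError("policy must be a mapping with a string 'name'")
    tools = data.get("allowed_tools", [])
    if not (isinstance(tools, list) and all(isinstance(t, str) for t in tools)):
        raise ValueError("allowed_tools must be a list of strings")
    return data
```

Using `yaml.load()` with the default (unsafe) loader on attacker-supplied policy files would allow payloads like `!!python/object/apply:os.system` to execute code during deserialization, which is exactly what `safe_load()` prevents.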
### 3. Configuration
Configuration is handled via environment variables and the `Settings` object.
- Attack: Disabling audit logs or redaction via environment variables (e.g., `ENFORCECORE_AUDIT_ENABLED=false`).
- Mitigation: These settings are intended for development. In production, ensure the environment is locked down.

> **Warning:** If an attacker can modify environment variables, they can disable enforcement. Secure your deployment environment (Kubernetes secrets, etc.).
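One way to harden this is a fail-closed read of the toggle. The sketch below is an illustrative hardening idea, not EnforceCore's actual behavior: `ENFORCECORE_AUDIT_ENABLED` comes from the docs above, while `ENFORCECORE_ENV` and the production guard are hypothetical.

```python
import os

def audit_enabled():
    """Fail-closed read of the audit toggle: auditing stays on unless
    explicitly disabled, and a production guard ignores the override.
    ENFORCECORE_ENV is a hypothetical variable for this sketch."""
    if os.environ.get("ENFORCECORE_ENV", "").lower() == "production":
        return True  # never let an env var disable auditing in production
    raw = os.environ.get("ENFORCECORE_AUDIT_ENABLED", "true").lower()
    return raw not in ("false", "0", "no")
```

The design choice is that an unset or garbled variable defaults to auditing *on*; an attacker must supply an exact falsy value, and even that is ignored in production.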
### 4. Audit Trail
The audit log is the source of truth for compliance.
- Attack: Modifying or deleting the audit log to cover tracks.
- Mitigation:
  - Tamper-Evidence: Logs are chained using a Merkle tree structure. Any modification breaks the chain validation.
  - Verification: The `verify_trail()` function detects integrity violations.
- Limitation: If an attacker deletes the entire file, history is lost. Use remote logging (S3, Splunk) for high-security environments.
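The tamper-evidence property can be illustrated with a simplified linear hash chain (EnforceCore uses a Merkle tree structure, per above; the entry fields and function names here are hypothetical):

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash for the first entry

def append_entry(trail, record):
    """Append a record whose hash covers the previous entry's hash,
    linking every entry to all history before it."""
    prev = trail[-1]["hash"] if trail else GENESIS
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    trail.append({"record": record, "prev": prev, "hash": digest})

def verify_trail(trail):
    """Recompute every link; editing any record (or reordering entries)
    breaks the hash of that entry and every entry after it."""
    prev = GENESIS
    for entry in trail:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Note what this does and does not give you: modifying or reordering entries is detected, but truncating the file from the end (or deleting it outright) is not, which is why the limitation above recommends shipping the trail to remote storage.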
## Threat Coverage
| Threat | EnforceCore Mitigation |
|---|---|
| Prompt Injection | Indirectly Mitigated. We don't stop the injection, but we stop the consequences (e.g., preventing `delete_file` even if the LLM is tricked into calling it). |
| PII Leakage | Mitigated. The Redactor scans and masks sensitive data in inputs and outputs. |
| Resource Exhaustion | Mitigated. Rate limits and resource quotas prevent runaway agents. |
| Dependency Confusion | Unmitigated. EnforceCore protects runtime behavior, not build-time supply chain attacks. |
| Kernel Exploits | Unmitigated. We rely on the OS and container runtime for lower-level isolation. See Defense in Depth. |
## Reporting Vulnerabilities
If you discover a security vulnerability in EnforceCore, please report it via our GitHub Security Advisory page or email security@akios.ai.