ThreeGate
ThreeGate is a security-first architecture for local, agent-assisted research and analysis.
It is designed to support useful agent behavior while strictly limiting blast radius by enforcing three independent security gates between reasoning, retrieval, and execution.
The system is intentionally conservative: capability is earned incrementally, audited at every boundary, and never co-located with reasoning.
Core Design Goal
Enable powerful local research assistance without granting an AI system uncontrolled execution, network access, or persistence.
ThreeGate is especially suited to:
- academic and technical research assistants
- policy analysis
- code review and synthesis
- data transformation and ranking tasks
It is not intended to run autonomous agents with self-directed persistence or open-ended tool use.
The Three Gates (Non-Negotiable)
Gate 1 — CORE (Reasoning & Synthesis)
- No network access
- No execution capability
- Consumes only validated artifacts
- Produces analysis, summaries, and Tool Requests for human approval
CORE is the only place where LLM reasoning occurs.
Gate 2 — FETCH (Controlled Retrieval)
- HTTPS only
- Strict domain allowlist
- Proxy-enforced egress
- Size-capped and content-typed retrieval
- Emits Research Packets only (never executable instructions)
FETCH treats all external content as hostile data.
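The FETCH rules above can be sketched as a single policy check. This is a minimal illustration, not the project's actual implementation: the domain set, size cap, and content types shown here are hypothetical placeholders for whatever the real policy/ rules specify.

```python
from urllib.parse import urlparse

# Hypothetical policy values -- the real allowlist and caps live in policy/.
ALLOWED_DOMAINS = {"api.crossref.org", "doi.org"}
MAX_BYTES = 2 * 1024 * 1024
ALLOWED_TYPES = {"application/json", "text/html"}

def check_fetch_policy(url: str, content_type: str, size: int) -> bool:
    """Return True only if the request satisfies every FETCH gate rule."""
    parsed = urlparse(url)
    if parsed.scheme != "https":                 # HTTPS only
        return False
    if parsed.hostname not in ALLOWED_DOMAINS:   # strict domain allowlist
        return False
    if size > MAX_BYTES:                         # size-capped retrieval
        return False
    media_type = content_type.split(";")[0].strip().lower()
    if media_type not in ALLOWED_TYPES:          # content-typed retrieval
        return False
    return True
```

In practice such a check would run inside the egress proxy, so that no retriever can bypass it; redirect targets would be passed through the same check before being followed.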
Gate 3 — TOOL-EXEC (Constrained Execution)
Execution is split into two distinct backends:
TOOL-EXEC-Lite (Monty)
- Python-subset interpreter
- No filesystem
- No environment
- No network
- No subprocess
- No external functions (by default)
- Stdio-only inputs/outputs
Used for:
- JSON transformations
- ranking/scoring
- small algorithms
- validation helpers
This is the default execution lane.
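A ranking task in this lane might look like the sketch below: pure compute, JSON in on stdin, JSON out on stdout, nothing else. The job shape and field names (`score`, `rank`) are illustrative, and the availability of `json`/`sys` for stdio handling is an assumption about what the Python subset permits.

```python
import json
import sys

def rank(records):
    """Sort records by descending score, attaching a 1-based rank."""
    ordered = sorted(records, key=lambda r: r.get("score", 0), reverse=True)
    return [{**r, "rank": i + 1} for i, r in enumerate(ordered)]

def main(stdin=sys.stdin, stdout=sys.stdout):
    # Stdio-only contract: the job never touches files, network, or env.
    json.dump(rank(json.load(stdin)), stdout)
```

Because the job is deterministic and side-effect free, its output can be captured, hashed, and handed back to CORE as an immutable Tool Result.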
TOOL-EXEC-Heavy (ERA microVM)
- Full isolation via microVM
- Used only when Monty cannot express the task
- Requires explicit justification
Both execution lanes:
- Require human-approved Tool Requests
- Emit Tool Results as immutable artifacts
- Are never allowed to feed results back into FETCH or execute recursively
Data Flow (One-Way, Audited)
External Sources
↓
FETCH
↓ (Research Packets)
handoff/
↓
CORE
↓ (Approved Tool Requests)
TOOL-EXEC
↓ (Tool Results)
handoff/
↓
CORE
No component both decides and acts.
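That separation of duties can be made mechanical by giving each gate a disjoint capability set and refusing any action outside it. The capability names below are illustrative, not the project's actual vocabulary:

```python
# Illustrative enforcement of "no component both decides and acts":
# each gate holds a disjoint set of capabilities, checked before any action.
CAPABILITIES = {
    "CORE": {"read_artifacts", "write_tool_requests"},
    "FETCH": {"network_egress", "write_research_packets"},
    "TOOL-EXEC": {"execute", "write_tool_results"},
}

def allowed(component: str, action: str) -> bool:
    """A component may perform only the actions its gate grants."""
    return action in CAPABILITIES.get(component, set())
```

Note that no component holds both a "decide" capability (reasoning over artifacts) and an "act" capability (egress or execution).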
Artifact Types
Research Packet
- Markdown + strict front matter
- Metadata, bounded excerpts, provenance
- No instructions, no executable content
- Validated before CORE ingestion
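A packet validator along these lines might run at the handoff/ boundary. The required field names and the excerpt bound here are hypothetical stand-ins for the project's actual front-matter schema:

```python
# Sketch of pre-ingestion validation for a Research Packet.
# Field names and bounds are illustrative, not the real schema.
REQUIRED_FIELDS = ("source_url", "retrieved_at", "content_hash")
MAX_EXCERPT_CHARS = 4000

def validate_packet(front_matter: dict, body: str) -> list:
    """Return a list of violations; an empty list means the packet is accepted."""
    errors = []
    for field in REQUIRED_FIELDS:
        if field not in front_matter:
            errors.append("missing front matter field: " + field)
    if len(body) > MAX_EXCERPT_CHARS:
        errors.append("excerpt exceeds size bound")
    if "<script" in body.lower() or body.lstrip().startswith("#!"):
        errors.append("executable content is not permitted")
    return errors
```

Rejection is fail-closed: CORE never sees a packet that produced any violation.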
Tool Request
- Human-approved
- Backend-specific (ERA or Monty)
- Declarative constraints
- No self-modifying behavior
Tool Result
- Immutable output
- Captured stdout/stderr
- Content hashes
- Treated as untrusted data by CORE
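Sealing a Tool Result could look like the sketch below: capture the output streams, hash them, and never mutate the record afterward. The field names are illustrative, not the project's actual artifact format:

```python
import hashlib

def seal_result(stdout: bytes, stderr: bytes) -> dict:
    """Bundle captured output with SHA-256 hashes; immutable once written."""
    return {
        "stdout_sha256": hashlib.sha256(stdout).hexdigest(),
        "stderr_sha256": hashlib.sha256(stderr).hexdigest(),
        "stdout": stdout.decode("utf-8", errors="replace"),
        "stderr": stderr.decode("utf-8", errors="replace"),
    }
```

The hashes let CORE (or an auditor) verify that the artifact it reads from handoff/ is byte-for-byte what TOOL-EXEC emitted, while still treating the content itself as untrusted data.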
Security Principles (Do Not Violate)
- No reasoning component may execute code
- No execution component may reason or fetch
- Network access is centralized and audited
- Redirects are never trusted without re-validation
- All cross-gate artifacts are hostile by default
- Escalation of capability is a security change
Repository Structure (Key Paths)
core/ # CORE consumers (read-only)
fetch/ # FETCH retrievers (proxy-bound)
tool-exec/
  monty/ # TOOL-EXEC-Lite (pure compute)
  era/ # TOOL-EXEC-Heavy (microVM)
tools/ # Validators and shared helpers
policy/ # Human-readable enforcement rules
infra/ # Docker, proxy, firewall scaffolding
docs/ # Architecture and operator guides
Optional OS Hardening
Monty can be further constrained with seccomp/AppArmor via the monty-hardened compose profile.
See docs/monty_container_hardening_runtime.md.
Status
This repository currently provides:
- Full FETCH scaffolding with allowlisted, size-capped retrieval
- Crossref DOI metadata ingestion
- Redirect-safe URL fetching
- Monty execution backend (pure compute)
- ERA execution stubs
- Validators enforcing backend-specific rules
It is suitable for local research workflows and controlled experimentation.
Philosophy
ThreeGate assumes:
- LLMs are powerful but non-deterministic
- External content is adversarial by default
- Execution is the highest-risk capability
- Separation of duties beats clever sandboxing
The goal is not to build an autonomous agent. The goal is to build a trustworthy assistant.