# RoleMesh Gateway
RoleMesh Gateway is a lightweight, OpenAI-compatible API gateway that routes chat-completion requests to multiple
locally hosted LLM backends (e.g., llama.cpp `llama-server`) by role (planner, writer, coder, reviewer, …).
It is designed for agentic workflows that benefit from using different models for different steps, and for deployments where different machines host different models (e.g., a GPU box for fast inference, a high-RAM CPU box for large models).
## What you get
- OpenAI-compatible endpoints: `GET /v1/models`, `POST /v1/chat/completions` (streaming and non-streaming), `GET /health`, and `GET /ready`
- Model registry from `configs/models.yaml`
- Optional node registration so remote machines can announce role backends to the gateway
- Robust proxying with explicit `httpx` timeouts (no “hang forever”)
- Structured logging with request IDs
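For example, once the gateway from the Quick Start below is running, the liveness and readiness probes can be hit directly (shown without an API key, on the assumption that the health endpoints are unauthenticated):

```bash
curl -sS http://127.0.0.1:8000/health
curl -sS http://127.0.0.1:8000/ready
```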
## Quick Start
This is the fastest path to a working local setup.
### 1. Install

```bash
python -m venv .venv
source .venv/bin/activate
pip install -e .
```
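A quick sanity check that the install worked (this assumes the package exposes the `rolemesh_gateway` module used by the `uvicorn` command in step 4):

```bash
python -c "import rolemesh_gateway; print('rolemesh_gateway OK')"
```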
### 2. Start two OpenAI-compatible backends
Any backend that exposes `GET /v1/models` and `POST /v1/chat/completions` will work.
One practical option is llamafile in server mode:
```bash
llamafile --server -m /path/to/planner-model.gguf --host 127.0.0.1 --port 8011 --nobrowser
llamafile --server -m /path/to/writer-model.gguf --host 127.0.0.1 --port 8012 --nobrowser
```
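Before moving on, you can confirm each backend answers the models endpoint:

```bash
curl -sS http://127.0.0.1:8011/v1/models
curl -sS http://127.0.0.1:8012/v1/models
```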
### 3. Create a gateway config
```yaml
version: 1
default_model: planner

auth:
  client_api_keys:
    - "change-me-client-key"

models:
  planner:
    type: proxy
    openai_model_name: planner
    proxy_url: http://127.0.0.1:8011
    defaults:
      temperature: 0
      max_tokens: 128

  writer:
    type: proxy
    openai_model_name: writer
    proxy_url: http://127.0.0.1:8012
    defaults:
      temperature: 0.6
      max_tokens: 256
```
Save that as `configs/models.yaml`.
### 4. Run the gateway

```bash
ROLE_MESH_CONFIG=configs/models.yaml uvicorn rolemesh_gateway.main:app --host 127.0.0.1 --port 8000
```
### 5. Verify it

```bash
curl -sS http://127.0.0.1:8000/v1/models \
  -H 'X-Api-Key: change-me-client-key'

curl -sS -X POST http://127.0.0.1:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'X-Api-Key: change-me-client-key' \
  -d '{
    "model": "planner",
    "messages": [{"role":"user","content":"Say hello in 3 words."}]
  }'
```
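Streaming goes through the same endpoint; a minimal sketch, assuming the standard OpenAI `"stream": true` request parameter:

```bash
curl -sS -N -X POST http://127.0.0.1:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'X-Api-Key: change-me-client-key' \
  -d '{
    "model": "writer",
    "stream": true,
    "messages": [{"role":"user","content":"Write one sentence about the sea."}]
  }'
```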
If you prefer the provided example file, copy `configs/models.example.yaml` and adjust the `proxy_url` values.
## Multi-host (node registration)
If you want remote machines to host backends and “register” them dynamically, run a tiny node agent on each backend host (or call the registration endpoint from your own tooling).
- Gateway endpoint: `POST /v1/nodes/register`
- The node payload describes which roles the node serves and the base URL of its OpenAI-compatible backend (see the sketch below).
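A minimal registration sketch. The exact schema lives in `docs/DEPLOYMENT.md`, so the field names (`node_id`, `roles`) and the node key header below are illustrative assumptions, not the documented contract:

```bash
# Assumed payload shape: announce that this host serves the "planner" role
# at the given OpenAI-compatible base URL. Field names and the API-key
# header are placeholders; see docs/DEPLOYMENT.md for the real schema.
curl -sS -X POST http://127.0.0.1:8000/v1/nodes/register \
  -H 'Content-Type: application/json' \
  -H 'X-Api-Key: change-me-node-key' \
  -d '{
    "node_id": "gpu-box-1",
    "roles": {"planner": "http://10.0.0.5:8011"}
  }'
```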
See `docs/DEPLOYMENT.md` and `docs/CONFIG.md`.
## Status
This repository is a preliminary scaffold:
- Proxying to OpenAI-compatible upstreams works.
- Node registration and backend selection are implemented (basic round-robin).
- API-key auth for clients and nodes is available.
- Persistence is basic JSON-backed state, not a full service registry.
## License
MIT. See `LICENSE`.
## Node Agent (per-host)
This repo also includes a RoleMesh Node Agent (`rolemesh-node-agent`) that can manage persistent llama.cpp servers (one per GPU) and report inventory/metrics back to the gateway.
- Sample config: `configs/node_agent.example.yaml`
- Docs: `docs/NODE_AGENT.md`
## Safe-by-default binding
The gateway and node agent bind to `127.0.0.1` by default to avoid accidental exposure. If you need remote access, bind only to private LAN or VPN interfaces and firewall the ports.
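For example, to expose the gateway on a private LAN address only (the IP below is a placeholder for one of your host's private interfaces):

```bash
# Bind to a private LAN IP instead of loopback; keep the port firewalled
# from anything outside the trusted network.
ROLE_MESH_CONFIG=configs/models.yaml uvicorn rolemesh_gateway.main:app --host 192.168.1.10 --port 8000
```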
