# RoleMesh Integration
RoleMesh Gateway is an appropriate dependency for local-LLM-backed Didactopus usage.
## Why it fits
The local RoleMesh codebase provides the core capabilities Didactopus needs for a local heterogeneous inference setup:
- OpenAI-compatible `POST /v1/chat/completions`
- role-based model routing
- local or multi-host upstream registration
- flexible model loading and switching through the gateway/node-agent split
- per-role defaults for temperature and other request settings
That means Didactopus can keep a simple provider abstraction while delegating model placement and routing to RoleMesh.
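Because the gateway speaks the OpenAI chat-completions protocol, a Didactopus-style request can be sketched in plain Python. The base URL, API key, and the convention of passing the role name in the `model` field are assumptions here; replace them with whatever your local RoleMesh deployment actually uses.

```python
import json
from urllib import request

BASE_URL = "http://localhost:8080"  # assumption: placeholder local gateway address
API_KEY = "sk-local"                # assumption: placeholder key

def build_chat_request(role: str, user_text: str) -> dict:
    # RoleMesh routes by role, so the role name is placed in the
    # standard OpenAI "model" field of the chat-completions payload.
    return {
        "model": role,
        "messages": [{"role": "user", "content": user_text}],
    }

def send(payload: dict) -> dict:
    # POST the payload to the gateway's OpenAI-compatible endpoint.
    req = request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request("planner", "Outline a study plan for entropy.")
# send(payload)  # uncomment once a gateway is actually running locally
```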
## Recommended architecture
- Run RoleMesh Gateway as the OpenAI-compatible front door.
- Point RoleMesh roles at local backends or discovered node agents.
- Configure Didactopus to use the `rolemesh` model provider.
- Let Didactopus send mentor/practice/project-advisor/evaluator requests by role.
## Didactopus-side config
Use `configs/config.rolemesh.example.yaml` as the starting point.
The important fields are:
- `model_provider.provider: rolemesh`
- `model_provider.rolemesh.base_url`
- `model_provider.rolemesh.api_key`
- `model_provider.rolemesh.default_model`
- `model_provider.rolemesh.role_to_model`
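Pulled together, those fields might look like this in YAML. All values below are placeholders, not defaults shipped by either project:

```yaml
model_provider:
  provider: rolemesh
  rolemesh:
    base_url: http://localhost:8080   # placeholder gateway URL
    api_key: sk-local                 # placeholder key
    default_model: planner            # assumed fallback when no role mapping matches
    role_to_model: {}                 # see the suggested mapping below
```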
## Suggested role mapping
With the sample RoleMesh gateway config, this is a good default mapping:
- `mentor` -> `planner`
- `practice` -> `writer`
- `project_advisor` -> `planner`
- `evaluator` -> `reviewer`
This keeps Didactopus prompts aligned with the role semantics RoleMesh already exposes.
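As a YAML fragment for the `role_to_model` field, that mapping would look roughly like this (the nesting is assumed from the field names listed above):

```yaml
model_provider:
  rolemesh:
    role_to_model:
      mentor: planner
      practice: writer
      project_advisor: planner
      evaluator: reviewer
```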
## Prompt layer
Didactopus now keeps its default RoleMesh-oriented prompts in `didactopus.role_prompts`.
These prompts are intentionally anti-offloading:
- mentor mode prefers Socratic questions and hints
- practice mode prefers reasoning-heavy tasks
- project-advisor mode prefers synthesis work
- evaluator mode prefers critique and explicit limitations
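The anti-offloading intent can be illustrated with a standalone sketch. The prompt strings and the `ROLE_PROMPTS` dict below are illustrative stand-ins, not the actual contents of `didactopus.role_prompts`:

```python
# Illustrative system prompts; the real ones live in didactopus.role_prompts.
ROLE_PROMPTS = {
    "mentor": "Guide with Socratic questions and hints; do not hand over final answers.",
    "practice": "Pose reasoning-heavy tasks; require the learner to show their work.",
    "project_advisor": "Push the learner toward synthesis across sources.",
    "evaluator": "Critique the work and state its limitations explicitly.",
}

def system_prompt_for(mode: str) -> str:
    # Fall back to mentor behavior for unrecognized modes.
    return ROLE_PROMPTS.get(mode, ROLE_PROMPTS["mentor"])
```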
## Demo commands
To exercise the integration path without a live RoleMesh gateway, run:
```shell
python -m didactopus.rolemesh_demo --config configs/config.example.yaml
```
That uses the stub provider path.
To point at a live RoleMesh deployment, start from:
```shell
python -m didactopus.rolemesh_demo --config configs/config.rolemesh.example.yaml
```
and replace the placeholder gateway URL/API key with your real local setup.
## Example transcript
The repository now includes a generated transcript of an AI learner using the local-LLM path to approach the MIT OCW Information and Entropy course:
`examples/ocw-information-entropy-rolemesh-transcript/rolemesh_transcript.md`
Generator command:
```shell
python -m didactopus.ocw_rolemesh_transcript_demo --config configs/config.rolemesh.example.yaml
```
If some RoleMesh aliases are unhealthy, the transcript demo automatically falls back to the healthy local alias and records that in the output metadata.
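That fallback amounts to something like the following sketch. The `pick_alias` helper is hypothetical; the demo's real implementation may differ:

```python
def pick_alias(preferred: str, healthy: set[str], fallback: str) -> tuple[str, bool]:
    # Return the alias to use, plus whether a fallback occurred,
    # so the choice can be recorded in the transcript's output metadata.
    if preferred in healthy:
        return preferred, False
    return fallback, True
```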
If local inference is slow, the transcript demo now emits pending notices such as “Didactopus is evaluating the work before replying” while each turn is still running. For a full manual capture, run:
```shell
python -u -m didactopus.ocw_rolemesh_transcript_demo \
  --config configs/config.rolemesh.example.yaml \
  --out-dir examples/ocw-information-entropy-rolemesh-transcript \
  2>&1 | tee examples/ocw-information-entropy-rolemesh-transcript/manual-run.log
```
That gives you three artifacts:
- `rolemesh_transcript.json`
- `rolemesh_transcript.md`
- `manual-run.log`, with the live “pending” status messages
For slower, larger models, expect the transcript run to take several minutes rather than seconds. The command above is the recommended way to capture the whole session outside Codex.
## Gateway-side note
This repository does not vendor RoleMesh; it assumes a local RoleMesh codebase or deployment exists separately. The referenced local codebase is suitable because it already provides the API and routing semantics Didactopus needs.