Model Provider Setup
Didactopus now supports three main model-provider paths:
- ollama: easiest local setup for most single users
- openai_compatible: simplest hosted setup when you want a common online API
- rolemesh: more flexible routing for technically oriented users, labs, and libraries
Recommended Order
For ease of adoption, use these in this order:
ollama, then openai_compatible, then rolemesh.
Option 1: Ollama
This is the easiest local path for most users.
Use:
configs/config.ollama.example.yaml
Minimal setup:
- Install Ollama.
- Pull a model you want to use.
- Start or verify the local Ollama service.
- Point Didactopus at
configs/config.ollama.example.yaml.
Example commands:
ollama pull llama3.2:3b
python -m didactopus.learner_session_demo --config configs/config.ollama.example.yaml
python -m didactopus.learner_session_demo --config configs/config.ollama.example.yaml --language es
If you want a different local model, change:
model_provider.ollama.default_model
model_provider.ollama.role_to_model
Use one model for every role at first. Split roles only if you have a reason to do so.
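As a rough illustration, the two keys named above might sit in the config like this. The key paths come from this document, but the exact nesting and the role names (tutor, grader) are assumptions; check configs/config.ollama.example.yaml for the real layout.

```yaml
model_provider:
  ollama:
    default_model: llama3.2:3b
    # Optional per-role overrides; omit this map to use default_model
    # for every role. Role names here are illustrative only.
    role_to_model:
      tutor: llama3.2:3b
      grader: llama3.2:3b
```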
Option 2: OpenAI-compatible hosted service
This is the easiest hosted path.
Use:
configs/config.openai-compatible.example.yaml
This works for:
- OpenAI itself
- any hosted service that accepts OpenAI-style
POST /v1/chat/completions
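For orientation, this is the general shape of an OpenAI-style chat-completions request that such services accept. The base URL, API key, and model name are placeholders, not values from this project; the sketch only builds the request and does not send it.

```python
import json
import urllib.request

# Placeholder endpoint and credentials; substitute your provider's values.
base_url = "https://api.example.com"
payload = {
    "model": "gpt-4o-mini",  # whatever default_model you configured
    "messages": [
        {"role": "system", "content": "You are a language tutor."},
        {"role": "user", "content": "Explain the subjunctive in one sentence."},
    ],
}
request = urllib.request.Request(
    url=base_url + "/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",
    },
    method="POST",
)
# urllib.request.urlopen(request) would send it; omitted here because the
# endpoint above is a placeholder.
```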
Typical setup:
- Create a local copy of configs/config.openai-compatible.example.yaml.
- Set base_url, api_key, and default_model.
- Keep one model for all roles to start with.
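A sketch of what those three settings might look like. The key names are the ones the steps above mention, but the nesting and the environment-variable expansion are assumptions; the shipped example file is authoritative.

```yaml
model_provider:
  openai_compatible:
    base_url: https://api.openai.com/v1
    api_key: ${OPENAI_API_KEY}   # env-var expansion is illustrative
    default_model: gpt-4o-mini
```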
Example:
python -m didactopus.learner_session_demo --config configs/config.openai-compatible.example.yaml
python -m didactopus.learner_session_demo --config configs/config.openai-compatible.example.yaml --language fr
Option 3: RoleMesh Gateway
RoleMesh is still useful, but it is no longer the easiest path to recommend to most users.
Choose it when you need:
- role-specific routing
- multiple local or remote backends
- heterogeneous compute placement
- a shared service for a library, lab, or multi-user setup
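To make "role-specific routing" concrete, here is a minimal sketch of the idea: each role resolves to a backend-and-model pair, with a shared default fallback. This is not RoleMesh's actual API (see docs/rolemesh-integration.md for that); every name below is hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Backend:
    base_url: str
    model: str

# Cheap roles stay on a local backend by default.
DEFAULT = Backend("http://localhost:11434", "llama3.2:3b")

ROLE_ROUTES = {
    # A heavier reasoning role routed to a bigger remote model.
    "planner": Backend("https://gpu-box.lab.internal:8000", "llama3.1:70b"),
    "grader": DEFAULT,
}

def route(role: str) -> Backend:
    """Pick the backend for a role, falling back to the default."""
    return ROLE_ROUTES.get(role, DEFAULT)
```

The fallback is the important design point: every role resolves somewhere, so adding a new role never breaks routing.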
See:
docs/rolemesh-integration.md
Which commands use the provider?
Any Didactopus path that calls the model provider can use these configurations, including:
python -m didactopus.learner_session_demo
python -m didactopus.rolemesh_demo
python -m didactopus.model_bench
python -m didactopus.ocw_rolemesh_transcript_demo
The transcript demo name still references RoleMesh because that was the original live-LLM path, but the general learner-session and benchmark flows are the easier places to start.
Practical Advice
- Start with one model for all roles.
- Prefer smaller fast models over bigger slow ones at first.
- Use the benchmark harness before trusting a model for learner-facing guidance.
- Use RoleMesh only when you actually need routing or multi-model orchestration.