Added dependency graph checks, artwork.
This commit is contained in:
parent c99eea4793
commit 4c1d66cda0

README.md | 180
@@ -4,116 +4,37 @@
**Tagline:** *Many arms, one goal — mastery.*

## This revision adds

- dependency and compatibility checks for domain packs
- version-range validation against the Didactopus core version
- local dependency resolution across installed packs
- richer pack manifests
- repository artwork under `artwork/`
- tests for dependency and compatibility behavior

## Vision

Didactopus treats AI as a **mentor, curriculum planner, critic, evaluator, and project guide** rather than an answer vending machine. The design goal is to produce capable practitioners who can explain, apply, test, and extend knowledge in real settings.

The platform is meant to support **AI-assisted autodidacts**: learners who pursue real expertise outside, alongside, or beyond traditional institutions.

## Core principles

- Active learning over passive consumption
- Socratic questioning over direct answer dumping
- Verification culture over uncritical acceptance
- Competency gates over time-based progression
- Project-based evidence of mastery
- Local-first model use when available
- Portable, shareable domain plans and learning artifacts

## Initial architecture

The initial prototype is organized around six core services:

1. **Domain Mapping Engine**
   Builds a concept graph for a target field, including prerequisites, competencies, canonical problem types, and artifact templates.
2. **Curriculum Generator**
   Produces a staged learning roadmap adapted to learner goals and prior knowledge.
3. **Mentor Agent**
   Conducts Socratic dialogue, reviews reasoning, and offers targeted critique.
4. **Practice Generator**
   Produces exercises aimed at specific concepts and skill gaps.
5. **Project Advisor**
   Proposes and scaffolds real projects that demonstrate competence.
6. **Evaluation System**
   Scores explanations, problem solutions, project outputs, and transfer tasks against explicit rubrics.

## Domain packs

Didactopus supports portable, versioned **domain packs** that can contain:

- domain plans
- concept maps
- curriculum templates
- exercise sets
- roadmap templates
- project blueprints
- evaluation rubrics
- benchmark packs
- exemplar portfolios
- resource guides

Packs can depend on other packs, enabling layered curricula and reusable foundations.

See:

- `docs/artifact-distribution.md`
- `docs/domain-pack-format.md`

## Distribution model for contributed learning content

Didactopus is designed to support distribution of contributed artifacts. These should be shareable as versioned packages or repositories so that contributors can publish reusable mastery paths for particular domains.

## Local model strategy

The codebase is designed to support a provider abstraction:

- **Local-first**: Ollama, llama.cpp server, vLLM, LM Studio, or other on-prem inference endpoints
- **Remote optional**: API-backed models only when configured
- **Hybrid mode**: local models for routine mentoring, remote models only for heavier synthesis or evaluation if explicitly allowed

## Repository layout

```text
didactopus/
├── README.md
├── LICENSE
├── pyproject.toml
├── Makefile
├── docker-compose.yml
├── Dockerfile
├── .gitignore
├── .github/workflows/ci.yml
├── configs/
│   └── config.example.yaml
├── docs/
│   ├── architecture.md
│   ├── repository-plan.md
│   ├── component-specs.md
│   ├── prototype-roadmap.md
│   ├── artifact-distribution.md
│   └── domain-pack-format.md
├── domain-packs/
│   └── example-statistics/
│       ├── pack.yaml
│       ├── concepts.yaml
│       ├── roadmap.yaml
│       ├── projects.yaml
│       └── rubrics.yaml
├── src/didactopus/
│   ├── __init__.py
│   ├── main.py
│   ├── config.py
│   ├── model_provider.py
│   ├── domain_map.py
│   ├── curriculum.py
│   ├── mentor.py
│   ├── practice.py
│   ├── project_advisor.py
│   ├── evaluation.py
│   └── artifact_registry.py
└── tests/
    ├── test_config.py
    ├── test_domain_map.py
    └── test_artifact_registry.py
```

## Artwork

The repository includes whimsical project art at:

- `artwork/didactopus-mascot.png`

Suggested future additions:

- `artwork/didactopus-banner.png`
- `artwork/didactopus-logo.png`

## Quick start
@@ -121,65 +42,6 @@
```bash
python -m venv .venv
source .venv/bin/activate
pip install -e .[dev]
cp configs/config.example.yaml configs/config.yaml
python -m didactopus.main --domain "statistics" --goal "practical mastery"
pytest
```

## Prototype capabilities in this scaffold

The current scaffold provides:

- a configuration model for local/remote provider selection
- a concept graph data structure for domain maps
- stubs for curriculum, mentor, practice, project, and evaluation services
- a simple artifact registry for local domain-pack discovery
- an example domain pack layout
- a CLI entry point to demonstrate end-to-end flow
- tests to validate configuration and artifact behavior

## Suggested first implementation milestones

### Milestone 1: Learner and domain modeling

- learner profile schema
- concept graph generation
- prerequisite traversal
- domain-pack schema validation
- local artifact discovery

### Milestone 2: Guided study loop

- Socratic mentor prompts
- explanation checking
- exercise generation by competency target
- evidence capture for learner work

### Milestone 3: Project-centered learning

- capstone generator
- milestone planning
- artifact review rubrics
- distributed project pack ingestion

### Milestone 4: Mastery evidence

- explanation scoring
- transfer tasks
- benchmark alignment
- progress dashboard
- artifact publication workflow

## Notes on evaluation design

A key design choice is that the assessment layer should look for:

- correct explanations in the learner's own words
- ability to solve novel problems
- detection of flawed reasoning
- evidence of successful project execution
- transfer across adjacent contexts

## Naming rationale

**Didactopus** combines *didactic* / *didact* with *octopus*: a central intelligence coordinating many arms of learning support.

## License

MIT
Binary file not shown.
After: Size: 800 KiB
@@ -0,0 +1,52 @@
# Dependency Resolution Plan

## Goals

Didactopus should support a pack ecosystem in which contributors can publish reusable foundations and specialized overlays.

Examples:

- a general statistics foundations pack
- a Bayesian statistics extension pack
- an electronics foundations pack
- a marine bioacoustics specialization pack

## Manifest fields

Each `pack.yaml` should include:

- `name`
- `version`
- `didactopus_min_version`
- `didactopus_max_version`
- `dependencies`

Dependencies use a compact form:

```yaml
dependencies:
  - name: statistics-foundations
    min_version: 1.0.0
    max_version: 2.0.0
```

## Validation stages

### Stage 1

- manifest fields exist
- content files exist
- required top-level keys exist

### Stage 2

- internal references are valid
- duplicate IDs are rejected

### Stage 3

- each dependency names an installed pack
- dependency versions satisfy declared ranges
- pack compatibility includes current core version

## Future work

- full semantic version parsing
- cycle detection in dependency graphs
- topological load ordering
- trust and signature policies
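The future-work items on cycle detection and topological load ordering can be sketched with Python's standard-library `graphlib`. The helper below is hypothetical, not part of the current codebase:

```python
# Hypothetical sketch: derive a pack load order from a dependency mapping,
# with cycle detection for free via graphlib.
from graphlib import TopologicalSorter, CycleError


def load_order(deps: dict[str, list[str]]) -> list[str]:
    """Return an order in which packs can be loaded, dependencies first.

    `deps` maps a pack name to the names of packs it depends on.
    Raises CycleError if the dependency graph contains a cycle.
    """
    return list(TopologicalSorter(deps).static_order())


order = load_order({
    "bayes-extension": ["foundations-statistics"],
    "foundations-statistics": [],
})
print(order)  # ['foundations-statistics', 'bayes-extension']

try:
    load_order({"a": ["b"], "b": ["a"]})
except CycleError:
    print("cycle detected between a and b")
```

`TopologicalSorter` treats the mapped values as predecessors, so dependencies always come out before their dependents.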
@@ -0,0 +1,6 @@
concepts:
  - id: prior
    title: Prior
    prerequisites: []
    mastery_signals:
      - explain a prior distribution
@@ -0,0 +1,13 @@
name: bayes-extension
display_name: Bayesian Extension Pack
version: 0.3.0
schema_version: "1"
didactopus_min_version: 0.1.0
didactopus_max_version: 0.9.99
description: Bayesian extension that depends on statistics foundations.
author: Wesley R. Elsberry
license: MIT
dependencies:
  - name: foundations-statistics
    min_version: 1.0.0
    max_version: 1.9.99
@@ -0,0 +1,8 @@
projects:
  - id: bayes-mini-project
    title: Bayesian Mini Project
    difficulty: intermediate
    prerequisites:
      - prior
    deliverables:
      - short report
@@ -0,0 +1,7 @@
stages:
  - id: stage-1
    title: Bayesian Basics
    concepts:
      - prior
    checkpoint:
      - compare priors
@@ -0,0 +1,6 @@
rubrics:
  - id: bayes-rubric
    title: Bayesian Rubric
    criteria:
      - correctness
      - interpretation
@@ -0,0 +1,6 @@
concepts:
  - id: descriptive-statistics
    title: Descriptive Statistics
    prerequisites: []
    mastery_signals:
      - explain central tendency
@@ -0,0 +1,10 @@
name: foundations-statistics
display_name: Foundations Statistics Pack
version: 1.2.0
schema_version: "1"
didactopus_min_version: 0.1.0
didactopus_max_version: 0.9.99
description: Shared foundations for statistics learning.
author: Wesley R. Elsberry
license: MIT
dependencies: []
@@ -0,0 +1,8 @@
projects:
  - id: summarize-data
    title: Summarize Local Data
    difficulty: introductory
    prerequisites:
      - descriptive-statistics
    deliverables:
      - summary report
@@ -0,0 +1,7 @@
stages:
  - id: stage-1
    title: Foundations
    concepts:
      - descriptive-statistics
    checkpoint:
      - summarize a dataset
@@ -0,0 +1,6 @@
rubrics:
  - id: basic-explanation
    title: Basic Explanation
    criteria:
      - correctness
      - clarity
@@ -0,0 +1,5 @@
concepts:
  - id: y
    title: Y
    prerequisites: []
    mastery_signals: []
@@ -0,0 +1,10 @@
name: incompatible-core
display_name: Incompatible Core Pack
version: 0.1.0
schema_version: "1"
didactopus_min_version: 9.0.0
didactopus_max_version: 10.0.0
description: Deliberately incompatible with current core.
author: Wesley R. Elsberry
license: MIT
dependencies: []
@@ -0,0 +1,6 @@
projects:
  - id: p
    title: P
    difficulty: introductory
    prerequisites: [y]
    deliverables: []
@@ -0,0 +1,5 @@
stages:
  - id: stage-1
    title: Y
    concepts: [y]
    checkpoint: []
@@ -0,0 +1,4 @@
rubrics:
  - id: r
    title: R
    criteria: []
@@ -0,0 +1,5 @@
concepts:
  - id: x
    title: X
    prerequisites: []
    mastery_signals: []
@@ -0,0 +1,13 @@
name: missing-dependency
display_name: Missing Dependency Pack
version: 0.1.0
schema_version: "1"
didactopus_min_version: 0.1.0
didactopus_max_version: 0.9.99
description: Depends on something not installed.
author: Wesley R. Elsberry
license: MIT
dependencies:
  - name: nonexistent-pack
    min_version: 1.0.0
    max_version: 2.0.0
@@ -0,0 +1,6 @@
projects:
  - id: p
    title: P
    difficulty: introductory
    prerequisites: [x]
    deliverables: []
@@ -0,0 +1,5 @@
stages:
  - id: stage-1
    title: X
    concepts: [x]
    checkpoint: []
@@ -0,0 +1,4 @@
rubrics:
  - id: r
    title: R
    criteria: []
@@ -9,29 +9,14 @@ description = "Didactopus: local-first AI-assisted autodidactic mastery platform
 readme = "README.md"
 requires-python = ">=3.10"
 license = {text = "MIT"}
-authors = [
-    {name = "Wesley R. Elsberry"}
-]
-dependencies = [
-    "pydantic>=2.7",
-    "pyyaml>=6.0",
-    "networkx>=3.2"
-]
+authors = [{name = "Wesley R. Elsberry"}]
+dependencies = ["pydantic>=2.7", "pyyaml>=6.0", "networkx>=3.2"]

 [project.optional-dependencies]
-dev = [
-    "pytest>=8.0",
-    "ruff>=0.6"
-]
+dev = ["pytest>=8.0", "ruff>=0.6"]

 [project.scripts]
 didactopus = "didactopus.main:main"

 [tool.setuptools.packages.find]
 where = ["src"]

 [tool.pytest.ini_options]
 testpaths = ["tests"]

 [tool.ruff]
 line-length = 100
@@ -1,11 +1,16 @@
__version__ = "0.1.0"

__all__ = [
    "__version__",
    "artifact_registry",
    "artifact_schemas",
    "config",
    "curriculum",
    "domain_map",
    "evaluation",
    "main",
    "mentor",
    "model_provider",
    "practice",
    "project_advisor",
]
@@ -1,28 +1,192 @@
from __future__ import annotations

from dataclasses import dataclass, field
from pathlib import Path
from typing import Any

import yaml

from . import __version__ as DIDACTOPUS_VERSION
from .artifact_schemas import (
    ConceptsFile,
    PackManifest,
    ProjectsFile,
    RoadmapFile,
    RubricsFile,
    validate_top_level_key,
)

REQUIRED_FILES = ["pack.yaml", "concepts.yaml", "roadmap.yaml", "projects.yaml", "rubrics.yaml"]


def _parse_version(version: str) -> tuple[int, ...]:
    parts = []
    for chunk in version.split("."):
        digits = "".join(ch for ch in chunk if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)


def _version_in_range(version: str, min_version: str, max_version: str) -> bool:
    v = _parse_version(version)
    vmin = _parse_version(min_version)
    vmax = _parse_version(max_version)
    return vmin <= v <= vmax


@dataclass
class PackValidationResult:
    pack_dir: Path
    manifest: PackManifest | None = None
    is_valid: bool = False
    errors: list[str] = field(default_factory=list)
    loaded_files: dict[str, Any] = field(default_factory=dict)


def _load_yaml(path: Path) -> dict[str, Any]:
    data = yaml.safe_load(path.read_text(encoding="utf-8"))
    if data is None:
        return {}
    if not isinstance(data, dict):
        raise ValueError(f"{path.name} must contain a YAML mapping at top level")
    return data


def _check_duplicate_ids(entries: list[Any], label: str) -> list[str]:
    errors: list[str] = []
    seen: set[str] = set()
    for entry in entries:
        entry_id = entry.id
        if entry_id in seen:
            errors.append(f"duplicate {label} id: {entry_id}")
        seen.add(entry_id)
    return errors


def _check_concept_references(
    concepts_file: ConceptsFile, roadmap_file: RoadmapFile, projects_file: ProjectsFile
) -> list[str]:
    errors: list[str] = []
    concept_ids = {c.id for c in concepts_file.concepts}

    for concept in concepts_file.concepts:
        for prereq in concept.prerequisites:
            if prereq not in concept_ids:
                errors.append(
                    f"unknown concept prerequisite '{prereq}' referenced by concept '{concept.id}'"
                )

    for stage in roadmap_file.stages:
        for concept_id in stage.concepts:
            if concept_id not in concept_ids:
                errors.append(
                    f"unknown concept '{concept_id}' referenced by roadmap stage '{stage.id}'"
                )

    for project in projects_file.projects:
        for prereq in project.prerequisites:
            if prereq not in concept_ids:
                errors.append(
                    f"unknown concept prerequisite '{prereq}' referenced by project '{project.id}'"
                )

    return errors


def _check_core_compatibility(manifest: PackManifest) -> list[str]:
    if _version_in_range(
        DIDACTOPUS_VERSION, manifest.didactopus_min_version, manifest.didactopus_max_version
    ):
        return []
    return [
        "incompatible with Didactopus core version "
        f"{DIDACTOPUS_VERSION}; supported range is "
        f"{manifest.didactopus_min_version}..{manifest.didactopus_max_version}"
    ]


def validate_pack(pack_dir: str | Path) -> PackValidationResult:
    pack_path = Path(pack_dir)
    result = PackValidationResult(pack_dir=pack_path)

    for filename in REQUIRED_FILES:
        if not (pack_path / filename).exists():
            result.errors.append(f"missing required file: {filename}")
    if result.errors:
        return result

    try:
        result.manifest = PackManifest.model_validate(_load_yaml(pack_path / "pack.yaml"))
        result.errors.extend(_check_core_compatibility(result.manifest))

        concepts_data = _load_yaml(pack_path / "concepts.yaml")
        result.errors.extend(validate_top_level_key(concepts_data, "concepts"))
        concepts_file = None
        if "concepts" in concepts_data:
            concepts_file = ConceptsFile.model_validate(concepts_data)
            result.loaded_files["concepts"] = concepts_file
            result.errors.extend(_check_duplicate_ids(concepts_file.concepts, "concept"))

        roadmap_data = _load_yaml(pack_path / "roadmap.yaml")
        result.errors.extend(validate_top_level_key(roadmap_data, "stages"))
        roadmap_file = None
        if "stages" in roadmap_data:
            roadmap_file = RoadmapFile.model_validate(roadmap_data)
            result.loaded_files["roadmap"] = roadmap_file
            result.errors.extend(_check_duplicate_ids(roadmap_file.stages, "roadmap stage"))

        projects_data = _load_yaml(pack_path / "projects.yaml")
        result.errors.extend(validate_top_level_key(projects_data, "projects"))
        projects_file = None
        if "projects" in projects_data:
            projects_file = ProjectsFile.model_validate(projects_data)
            result.loaded_files["projects"] = projects_file
            result.errors.extend(_check_duplicate_ids(projects_file.projects, "project"))

        rubrics_data = _load_yaml(pack_path / "rubrics.yaml")
        result.errors.extend(validate_top_level_key(rubrics_data, "rubrics"))
        if "rubrics" in rubrics_data:
            rubrics_file = RubricsFile.model_validate(rubrics_data)
            result.loaded_files["rubrics"] = rubrics_file
            result.errors.extend(_check_duplicate_ids(rubrics_file.rubrics, "rubric"))

        if concepts_file and roadmap_file and projects_file:
            result.errors.extend(
                _check_concept_references(concepts_file, roadmap_file, projects_file)
            )

    except Exception as exc:
        result.errors.append(str(exc))

    result.is_valid = not result.errors
    return result


def discover_domain_packs(base_dirs: list[str | Path]) -> list[PackValidationResult]:
    results: list[PackValidationResult] = []
    for base_dir in base_dirs:
        base = Path(base_dir)
        if not base.exists():
            continue
        for pack_dir in sorted(p for p in base.iterdir() if p.is_dir()):
            results.append(validate_pack(pack_dir))
    return results


def check_pack_dependencies(results: list[PackValidationResult]) -> list[str]:
    errors: list[str] = []
    manifest_by_name = {r.manifest.name: r.manifest for r in results if r.manifest is not None}

    for result in results:
        if result.manifest is None:
            continue
        for dep in result.manifest.dependencies:
            dep_manifest = manifest_by_name.get(dep.name)
            if dep_manifest is None:
                errors.append(
                    f"pack '{result.manifest.name}' depends on missing pack '{dep.name}'"
                )
                continue
            if not _version_in_range(dep_manifest.version, dep.min_version, dep.max_version):
                errors.append(
                    f"pack '{result.manifest.name}' requires '{dep.name}' version "
                    f"{dep.min_version}..{dep.max_version}, but found {dep_manifest.version}"
                )
    return errors
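The lenient `_parse_version` above drops non-digit characters in each dot-separated chunk rather than doing full semantic-version parsing (listed as future work). A standalone sketch of the same comparison logic, using free-standing names rather than the module's private helpers:

```python
# Standalone sketch of the lenient version-range comparison used by the
# artifact registry: non-digit characters in each chunk are stripped.
def parse_version(version: str) -> tuple[int, ...]:
    parts = []
    for chunk in version.split("."):
        digits = "".join(ch for ch in chunk if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)


def version_in_range(version: str, lo: str, hi: str) -> bool:
    return parse_version(lo) <= parse_version(version) <= parse_version(hi)


print(version_in_range("1.2.0-rc", "1.0.0", "1.9.9"))  # True: "0-rc" parses as 0
print(version_in_range("2.0.0", "1.0.0", "1.9.9"))     # False
```

Note the tradeoff: a tag like `"1.2.0-rc"` compares equal to `"1.2.0"`, which full semver parsing would rank as a pre-release.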
@@ -0,0 +1,69 @@
from typing import Any

from pydantic import BaseModel, Field


class DependencySpec(BaseModel):
    name: str
    min_version: str = "0.0.0"
    max_version: str = "9999.9999.9999"


class PackManifest(BaseModel):
    name: str
    display_name: str
    version: str
    schema_version: str
    didactopus_min_version: str
    didactopus_max_version: str
    description: str = ""
    author: str = ""
    license: str = "unspecified"
    dependencies: list[DependencySpec] = Field(default_factory=list)


class ConceptEntry(BaseModel):
    id: str
    title: str
    prerequisites: list[str] = Field(default_factory=list)
    mastery_signals: list[str] = Field(default_factory=list)


class ConceptsFile(BaseModel):
    concepts: list[ConceptEntry]


class RoadmapStageEntry(BaseModel):
    id: str
    title: str
    concepts: list[str] = Field(default_factory=list)
    checkpoint: list[str] = Field(default_factory=list)


class RoadmapFile(BaseModel):
    stages: list[RoadmapStageEntry]


class ProjectEntry(BaseModel):
    id: str
    title: str
    difficulty: str = ""
    prerequisites: list[str] = Field(default_factory=list)
    deliverables: list[str] = Field(default_factory=list)


class ProjectsFile(BaseModel):
    projects: list[ProjectEntry]


class RubricEntry(BaseModel):
    id: str
    title: str
    criteria: list[str] = Field(default_factory=list)


class RubricsFile(BaseModel):
    rubrics: list[RubricEntry]


def validate_top_level_key(data: dict[str, Any], required_key: str) -> list[str]:
    return [] if required_key in data else [f"missing required top-level key: {required_key}"]
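The schema classes above lean on pydantic's `model_validate` to reject malformed manifests and to fill in defaults. A minimal standalone sketch using a trimmed-down manifest model (only a subset of the real fields, so it runs without the package installed):

```python
# Sketch of manifest validation with pydantic v2; the models here mirror
# DependencySpec/PackManifest but carry only a subset of the real fields.
from pydantic import BaseModel, Field, ValidationError


class DependencySpec(BaseModel):
    name: str
    min_version: str = "0.0.0"
    max_version: str = "9999.9999.9999"


class PackManifest(BaseModel):
    name: str
    version: str
    dependencies: list[DependencySpec] = Field(default_factory=list)


m = PackManifest.model_validate(
    {
        "name": "bayes-extension",
        "version": "0.3.0",
        "dependencies": [{"name": "foundations-statistics"}],
    }
)
print(m.dependencies[0].min_version)  # default applied: 0.0.0

try:
    PackManifest.model_validate({"version": "0.1.0"})  # missing required "name"
except ValidationError:
    print("rejected manifest with missing 'name'")
```

Because nested dicts are coerced into `DependencySpec`, an unversioned dependency entry automatically gets the widest possible version range.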
@@ -10,12 +10,11 @@ class RoadmapStage:


 def generate_initial_roadmap(domain_map: DomainMap, goal: str) -> list[RoadmapStage]:
-    sequence = domain_map.topological_sequence()
     return [
         RoadmapStage(
-            title=f"Stage {idx + 1}: {concept.title()}",
+            title=f"Stage {i + 1}: {concept.title()}",
             concepts=[concept],
             mastery_goal=f"Demonstrate applied understanding of {concept} toward goal: {goal}",
         )
-        for idx, concept in enumerate(sequence)
+        for i, concept in enumerate(domain_map.topological_sequence())
     ]
@@ -7,7 +7,6 @@ class ConceptNode:
     name: str
     description: str = ""
     prerequisites: list[str] = field(default_factory=list)
-    representative_tasks: list[str] = field(default_factory=list)


 class DomainMap:

@@ -20,9 +19,6 @@ class DomainMap:
         for prereq in node.prerequisites:
             self.graph.add_edge(prereq, node.name)

-    def concepts(self) -> list[str]:
-        return list(self.graph.nodes)
-
     def prerequisites_for(self, concept: str) -> list[str]:
         return list(nx.ancestors(self.graph, concept))
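`prerequisites_for` relies on `networkx.ancestors`, which returns every node that can reach the given node along directed edges. A small standalone sketch with a hypothetical three-concept graph:

```python
import networkx as nx

# Edges point from prerequisite to dependent concept, as in DomainMap.
g = nx.DiGraph()
g.add_edge("descriptive-statistics", "probability")
g.add_edge("probability", "prior")

# All transitive prerequisites of "prior", not just the direct one.
print(sorted(nx.ancestors(g, "prior")))  # ['descriptive-statistics', 'probability']
```

This is why `prerequisites_for` returns transitive prerequisites; direct-only predecessors would be `graph.predecessors(concept)` instead.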
@@ -2,8 +2,6 @@ from .model_provider import ModelProvider


 def generate_rubric(provider: ModelProvider, concept: str) -> str:
-    prompt = (
-        f"Create a concise evaluation rubric for learner mastery of '{concept}'. "
-        f"Assess explanation quality, problem solving, and transfer."
-    )
-    return provider.generate(prompt).text
+    return provider.generate(
+        f"Create a concise evaluation rubric for mastery of '{concept}'."
+    ).text
@@ -2,25 +2,24 @@ import argparse
import os
from pathlib import Path

from .artifact_registry import check_pack_dependencies, discover_domain_packs
from .config import load_config
from .curriculum import generate_initial_roadmap
from .domain_map import build_demo_domain_map
from .evaluation import generate_rubric
from .mentor import generate_socratic_prompt
from .model_provider import ModelProvider
from .practice import generate_practice_task
from .project_advisor import suggest_capstone


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Didactopus mastery scaffold")
    parser.add_argument("--domain", required=True, help="Target domain of study")
    parser.add_argument("--goal", required=True, help="Learning goal")
    parser.add_argument(
        "--config",
        default=os.environ.get("DIDACTOPUS_CONFIG", "configs/config.example.yaml"),
        help="Path to configuration YAML",
    )
    return parser
@@ -30,44 +29,34 @@ def main() -> None:
    config = load_config(Path(args.config))
    provider = ModelProvider(config.model_provider)
    packs = discover_domain_packs(config.artifacts.local_pack_dirs)
    dependency_errors = check_pack_dependencies(packs)

    dmap = build_demo_domain_map(args.domain)
    roadmap = generate_initial_roadmap(dmap, args.goal)
    focus_concept = dmap.topological_sequence()[1]

    print("== Didactopus ==")
    print("Many arms, one goal — mastery.")
    print()
    print("== Provider ==")
    print(provider.describe())
    print()
    print("== Domain Pack Validation ==")
    for pack in packs:
        pack_name = pack.manifest.display_name if pack.manifest else pack.pack_dir.name
        print(f"- {pack_name}: {'valid' if pack.is_valid else 'INVALID'}")
        for err in pack.errors:
            print(f"  * {err}")
    print()
    print("== Dependency Resolution ==")
    if dependency_errors:
        for err in dependency_errors:
            print(f"- {err}")
    else:
        print("- all resolved")
    print()
    print("== Roadmap ==")
    for stage in roadmap:
        print(f"- {stage.title}: {stage.mastery_goal}")
    print()
    print("== Mentor Prompt ==")
    print(generate_socratic_prompt(provider, focus_concept))
    print()
    print("== Practice Task ==")
    print(generate_practice_task(provider, focus_concept))
    print()
    print("== Capstone Suggestion ==")
    print(suggest_capstone(provider, args.domain))
    print()
    print("== Evaluation Rubric ==")
    print(generate_rubric(provider, focus_concept))


if __name__ == "__main__":
    main()
@@ -2,8 +2,6 @@ from .model_provider import ModelProvider


 def generate_socratic_prompt(provider: ModelProvider, concept: str) -> str:
-    prompt = (
-        f"You are a Socratic mentor. Ask one probing question that tests whether a learner "
-        f"truly understands the concept '{concept}' and can explain it in their own words."
-    )
-    return provider.generate(prompt).text
+    return provider.generate(
+        f"You are a Socratic mentor. Ask one probing question about '{concept}'."
+    ).text
@@ -2,8 +2,6 @@ from .model_provider import ModelProvider


 def generate_practice_task(provider: ModelProvider, concept: str) -> str:
-    prompt = (
-        f"Generate one practice task for the concept '{concept}'. Require reasoning, "
-        f"not mere recall, and avoid giving the answer."
-    )
-    return provider.generate(prompt).text
+    return provider.generate(
+        f"Generate one reasoning-heavy practice task for '{concept}'."
+    ).text
@@ -2,8 +2,6 @@ from .model_provider import ModelProvider


 def suggest_capstone(provider: ModelProvider, domain: str) -> str:
-    prompt = (
-        f"Suggest one realistic capstone project for a learner pursuing mastery in {domain}. "
-        f"The project must require synthesis, verification, and original work."
-    )
-    return provider.generate(prompt).text
+    return provider.generate(
+        f"Suggest one realistic capstone project for mastery in {domain}."
+    ).text
@@ -1,9 +1,31 @@
from didactopus.artifact_registry import (
    _version_in_range,
    check_pack_dependencies,
    discover_domain_packs,
    validate_pack,
)


def test_version_range() -> None:
    assert _version_in_range("1.2.0", "1.0.0", "1.9.9") is True
    assert _version_in_range("2.0.0", "1.0.0", "1.9.9") is False


def test_foundations_pack_is_valid() -> None:
    result = validate_pack("domain-packs/foundations-statistics")
    assert result.is_valid is True
    assert result.manifest is not None
    assert result.manifest.name == "foundations-statistics"


def test_incompatible_core_pack_is_invalid() -> None:
    result = validate_pack("domain-packs/incompatible-core")
    assert result.is_valid is False
    assert any("incompatible with Didactopus core version" in err for err in result.errors)


def test_dependency_resolution() -> None:
    results = discover_domain_packs(["domain-packs"])
    errors = check_pack_dependencies(results)
    assert any("depends on missing pack 'nonexistent-pack'" in err for err in errors)
    assert not any("bayes-extension" in err and "foundations-statistics" in err for err in errors)