Added graph-aware planning layer.

parent 4ac65b6489
commit dd0cc9fd08

README.md | 67

@@ -1,22 +1,77 @@
# Didactopus
**Didactopus** is a local-first AI-assisted autodidactic mastery platform for building genuine expertise through concept graphs, adaptive curriculum planning, evidence-driven mastery, Socratic mentoring, and project-based learning.

**Tagline:** *Many arms, one goal — mastery.*

## This revision

This revision adds a **graph-aware planning layer** that connects the concept graph engine to the adaptive and evidence engines.
The new planner selects the next concepts to study using a utility function that considers:
- prerequisite readiness
- distance to learner target concepts
- weakness in competence dimensions
- project availability
- review priority for fragile concepts
- semantic neighborhood around learner goals

## Why this matters
Up to this point, Didactopus could:

- build concept graphs
- identify ready concepts
- infer mastery from evidence

But it still needed a better mechanism for choosing **what to do next**.

The graph-aware planner addresses this by ranking candidate concepts according to learner-specific utility rather than relying on unlocked prerequisites alone.

## Current architecture overview
Didactopus now includes:

- **Domain packs** for concepts, projects, rubrics, mastery profiles, templates, and cross-pack links
- **Dependency resolution** across packs
- **Merged learning graph** generation
- **Adaptive learner engine** for ready/blocked/mastered concept states
- **Evidence engine** with weighted, recency-aware, multi-dimensional mastery inference
- **Concept-specific mastery profiles** with template inheritance
- **Concept graph engine** for cross-pack prerequisite reasoning, concept linking, pathfinding, and graph export
- **Graph-aware planner** for utility-ranked next-step recommendations

## Planning utility
The current planner computes a score per candidate concept using:

- readiness bonus
- target-distance bonus
- weak-dimension bonus
- fragile-concept review bonus
- project-unlock bonus
- semantic-similarity bonus

These terms are transparent and configurable.

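Each of these terms maps to a field on `PlannerConfig` with the defaults shown later in this commit. Assuming the example config file nests them under a `planner` key with the same field names, a fragment might look like:

```yaml
# Hypothetical planner section for the app config;
# field names mirror PlannerConfig, values are its defaults.
planner:
  readiness_bonus: 2.0
  target_distance_weight: 1.0
  weak_dimension_bonus: 1.2
  fragile_review_bonus: 1.5
  project_unlock_bonus: 0.8
  semantic_similarity_weight: 1.0
```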
## Agentic AI students
This planner also strengthens the case for **AI student agents** that use Didactopus as a structured mastery environment.
An AI student could:

1. inspect the graph
2. choose the next concept via the planner
3. attempt tasks
4. generate evidence
5. update mastery state
6. repeat until a target expertise profile is reached

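The numbered loop above can be sketched in a few lines. This is a minimal sketch, not the shipped agent: the planner call, the task attempt, and the mastery update are all stubbed, since only the planner API exists in this commit.

```python
# Sketch of the agentic-student loop; the planner, task attempt, and
# mastery update are stand-ins for the real Didactopus components.

def rank_next_concepts_stub(mastered: set[str]) -> list[dict]:
    # Stand-in for didactopus.planner.rank_next_concepts: score the
    # not-yet-mastered candidates and sort best-first.
    candidates = {"prior": 2.5, "posterior": 2.0}
    return sorted(
        ({"concept": c, "score": s} for c, s in candidates.items() if c not in mastered),
        key=lambda item: item["score"],
        reverse=True,
    )

def attempt_tasks(concept: str) -> dict:
    # Hypothetical: would produce graded evidence for the concept.
    return {"concept": concept, "passed": True}

def run_student(targets: set[str]) -> set[str]:
    mastered: set[str] = set()
    while not targets <= mastered:                   # 6. repeat until target reached
        ranked = rank_next_concepts_stub(mastered)   # 1-2. inspect graph, plan
        if not ranked:
            break
        concept = ranked[0]["concept"]
        evidence = attempt_tasks(concept)            # 3-4. attempt tasks, generate evidence
        if evidence["passed"]:
            mastered.add(concept)                    # 5. update mastery state
    return mastered
```

Calling `run_student({"posterior"})` with these stubs masters `prior` first (higher score), then `posterior`, and stops once the target set is covered.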
This makes Didactopus useful as both:

- a learning platform
- a benchmark harness for agentic expertise growth

## Core philosophy

@@ -0,0 +1,29 @@
# Graph-Aware Planner
The graph-aware planner ranks next concepts using a transparent utility model.
## Inputs

- concept graph
- learner mastery state
- evidence summaries
- target concepts
- semantic similarity estimates
- project catalog

## Current scoring terms
- **readiness_bonus**: the concept is currently studyable
- **target_distance_weight**: concepts closer to a target score higher
- **weak_dimension_bonus**: concepts with known weakness signals are prioritized
- **fragile_review_bonus**: resurfaced or fragile concepts are review-prioritized
- **project_unlock_bonus**: concepts that unlock projects score higher
- **semantic_similarity_weight**: concepts semantically close to targets gain weight

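To make the arithmetic concrete, here is a worked score under the default weights for a hypothetical candidate: ready to study, one prerequisite hop from a target, one weak dimension, flagged fragile, unlocking one project, with semantic similarity 0.3 to the target. The `1 / (1 + distance)` decay matches `_distance_bonus` in this commit's planner.

```python
# Worked example of the planner's additive utility, using the default weights.
weights = {
    "readiness_bonus": 2.0,
    "target_distance_weight": 1.0,
    "weak_dimension_bonus": 1.2,
    "fragile_review_bonus": 1.5,
    "project_unlock_bonus": 0.8,
    "semantic_similarity_weight": 1.0,
}

# Hypothetical candidate signals.
distance_to_target = 1      # one prerequisite hop from the target
weak_dimensions = 1
is_fragile = True
projects_unlocked = 1
semantic_similarity = 0.3

score = (
    weights["readiness_bonus"]                                          # ready to study
    + weights["target_distance_weight"] * (1.0 / (1.0 + distance_to_target))
    + weights["weak_dimension_bonus"] * weak_dimensions
    + (weights["fragile_review_bonus"] if is_fragile else 0.0)
    + weights["project_unlock_bonus"] * projects_unlocked
    + weights["semantic_similarity_weight"] * semantic_similarity
)
print(round(score, 2))  # 2.0 + 0.5 + 1.2 + 1.5 + 0.8 + 0.3 = 6.3
```

Because every term is additive and reported separately, a learner (or agent) can see exactly why one concept outranked another.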
## Future work
- learner time budgets
- spaced repetition costs
- multi-objective planning
- planning across multiple targets
- reinforcement learning over curriculum policies

@@ -46,32 +46,25 @@ def validate_pack(pack_dir: str | Path) -> PackValidationResult:

```python
        result.errors.append(f"missing required file: (unknown)")
    if result.errors:
        return result

    try:
        result.manifest = PackManifest.model_validate(_load_yaml(pack_path / "pack.yaml"))
        if not _version_in_range(
            DIDACTOPUS_VERSION,
            result.manifest.didactopus_min_version,
            result.manifest.didactopus_max_version,
        ):
            result.errors.append(
                f"incompatible with Didactopus core version {DIDACTOPUS_VERSION}; "
                f"supported range is {result.manifest.didactopus_min_version}..{result.manifest.didactopus_max_version}"
            )

        result.loaded_files["concepts"] = ConceptsFile.model_validate(_load_yaml(pack_path / "concepts.yaml"))
        result.loaded_files["roadmap"] = RoadmapFile.model_validate(_load_yaml(pack_path / "roadmap.yaml"))
        result.loaded_files["projects"] = ProjectsFile.model_validate(_load_yaml(pack_path / "projects.yaml"))
        result.loaded_files["rubrics"] = RubricsFile.model_validate(_load_yaml(pack_path / "rubrics.yaml"))
    except Exception as exc:
        result.errors.append(str(exc))

    result.is_valid = not result.errors
    return result
```
```python
def discover_domain_packs(base_dirs: list[str | Path]) -> list[PackValidationResult]:
    results: list[PackValidationResult] = []
    for base_dir in base_dirs:
        base = Path(base_dir)
        if not base.exists():
```
@@ -82,7 +75,7 @@ def discover_domain_packs(base_dirs: list[str | Path]) -> list[PackValidationRes

```python
def check_pack_dependencies(results: list[PackValidationResult]) -> list[str]:
    errors: list[str] = []
    manifest_by_name = {r.manifest.name: r.manifest for r in results if r.manifest is not None}
    for result in results:
        if result.manifest is None:
```
@@ -2,6 +2,7 @@ from __future__ import annotations

```python
from dataclasses import dataclass, field
from typing import Any
from pathlib import Path
import json
import networkx as nx
```
@@ -3,17 +3,6 @@ from pydantic import BaseModel, Field

```python
import yaml


class ProviderEndpoint(BaseModel):
    backend: str = "ollama"
    endpoint: str = "http://localhost:11434"
    model_name: str = "llama3.1:8b"


class ModelProviderConfig(BaseModel):
    mode: str = Field(default="local_first")
    local: ProviderEndpoint = Field(default_factory=ProviderEndpoint)


class PlatformConfig(BaseModel):
    default_dimension_thresholds: dict[str, float] = Field(
        default_factory=lambda: {
```
@@ -26,14 +15,18 @@ class PlatformConfig(BaseModel):

```python
    )


class ArtifactConfig(BaseModel):
    local_pack_dirs: list[str] = Field(default_factory=lambda: ["domain-packs"])


class PlannerConfig(BaseModel):
    readiness_bonus: float = 2.0
    target_distance_weight: float = 1.0
    weak_dimension_bonus: float = 1.2
    fragile_review_bonus: float = 1.5
    project_unlock_bonus: float = 0.8
    semantic_similarity_weight: float = 1.0


class AppConfig(BaseModel):
    model_provider: ModelProviderConfig = Field(default_factory=ModelProviderConfig)
    platform: PlatformConfig = Field(default_factory=PlatformConfig)
    artifacts: ArtifactConfig = Field(default_factory=ArtifactConfig)
    planner: PlannerConfig = Field(default_factory=PlannerConfig)


def load_config(path: str | Path) -> AppConfig:
```
@@ -6,13 +6,10 @@ from .learning_graph import build_merged_learning_graph, namespaced_concept

```python
from .semantic_similarity import concept_similarity


def build_concept_graph(
    results: list[PackValidationResult],
    default_dimension_thresholds: dict[str, float],
) -> ConceptGraph:
    merged = build_merged_learning_graph(results, default_dimension_thresholds)

    graph = ConceptGraph()

    for concept_key, data in merged.concept_data.items():
        graph.add_concept(concept_key, data)
```
@@ -5,10 +5,11 @@ from pathlib import Path

```python
from .artifact_registry import check_pack_dependencies, detect_dependency_cycles, discover_domain_packs
from .config import load_config
from .graph_builder import build_concept_graph, suggest_semantic_links
from .planner import PlannerWeights, rank_next_concepts


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Didactopus graph-aware planner")
    parser.add_argument("--target", default="bayes-extension::posterior")
    parser.add_argument("--mastered", nargs="*", default=[])
    parser.add_argument("--export-dot", default="")
```
@@ -20,7 +21,7 @@ def build_parser() -> argparse.ArgumentParser:

```python
def main() -> None:
    args = build_parser().parse_args()
    config = load_config(Path(args.config))
    results = discover_domain_packs(config.artifacts.local_pack_dirs)
    dep_errors = check_pack_dependencies(results)
    cycles = detect_dependency_cycles(results)
```
@@ -37,14 +38,39 @@ def main() -> None:

```python
    graph = build_concept_graph(results, config.platform.default_dimension_thresholds)
    mastered = set(args.mastered)

    # Hardcoded demo signals for the CLI walkthrough.
    weak_dimensions_by_concept = {
        "bayes-extension::prior": ["explanation", "transfer"],
    }
    fragile_concepts = {"bayes-extension::prior"}

    ranked = rank_next_concepts(
        graph=graph,
        mastered=mastered,
        targets=[args.target],
        weak_dimensions_by_concept=weak_dimensions_by_concept,
        fragile_concepts=fragile_concepts,
        project_catalog=[
            {
                "id": "bayes-extension::bayes-mini-project",
                "prerequisites": ["bayes-extension::prior"],
            },
            {
                "id": "applied-inference::inference-project",
                "prerequisites": ["applied-inference::model-checking"],
            },
        ],
        weights=PlannerWeights(
            readiness_bonus=config.planner.readiness_bonus,
            target_distance_weight=config.planner.target_distance_weight,
            weak_dimension_bonus=config.planner.weak_dimension_bonus,
            fragile_review_bonus=config.planner.fragile_review_bonus,
            project_unlock_bonus=config.planner.project_unlock_bonus,
            semantic_similarity_weight=config.planner.semantic_similarity_weight,
        ),
    )

    print("== Didactopus Graph-Aware Planner ==")
    print(f"Target concept: {args.target}")
    print("Prerequisite chain:")
    for item in sorted(graph.prerequisite_chain(args.target)):
        print(f"- {item}")
    print()
    print("Curriculum path from current mastery:")
    for item in graph.curriculum_path_to_target(mastered, args.target):
```
@@ -54,9 +80,11 @@ def main() -> None:

```python
    for item in graph.ready_concepts(mastered):
        print(f"- {item}")
    print()
    print("Ranked next concepts:")
    for item in ranked:
        print(f"- {item['concept']}: {item['score']:.2f}")
        for name, value in item["components"].items():
            print(f"  * {name}: {value:.2f}")
    print()
    print("Suggested semantic links:")
    for a, b, score in suggest_semantic_links(graph, minimum_similarity=0.10)[:8]:
```
@@ -0,0 +1,100 @@

```python
from __future__ import annotations

from dataclasses import dataclass
from math import inf

import networkx as nx

from .concept_graph import ConceptGraph
from .semantic_similarity import concept_similarity


@dataclass
class PlannerWeights:
    readiness_bonus: float = 2.0
    target_distance_weight: float = 1.0
    weak_dimension_bonus: float = 1.2
    fragile_review_bonus: float = 1.5
    project_unlock_bonus: float = 0.8
    semantic_similarity_weight: float = 1.0


def _distance_bonus(graph: ConceptGraph, concept: str, targets: list[str]) -> float:
    """Reward concepts that sit close to a target on the prerequisite graph."""
    pg = graph.prerequisite_subgraph()
    best = inf
    for target in targets:
        try:
            dist = len(nx.shortest_path(pg, concept, target)) - 1
            best = min(best, dist)
        except (nx.NetworkXNoPath, nx.NodeNotFound):
            continue
    if best == inf:
        return 0.0
    return 1.0 / (1.0 + best)


def _project_unlock_bonus(concept: str, project_catalog: list[dict]) -> float:
    """Count the projects that list this concept as a prerequisite."""
    count = 0
    for project in project_catalog:
        if concept in project.get("prerequisites", []):
            count += 1
    return float(count)


def _semantic_bonus(graph: ConceptGraph, concept: str, targets: list[str]) -> float:
    """Best semantic similarity between this concept and any target."""
    data_a = graph.graph.nodes[concept]
    best = 0.0
    for target in targets:
        if target not in graph.graph.nodes:
            continue
        data_b = graph.graph.nodes[target]
        best = max(best, concept_similarity(data_a, data_b))
    return best


def rank_next_concepts(
    graph: ConceptGraph,
    mastered: set[str],
    targets: list[str],
    weak_dimensions_by_concept: dict[str, list[str]],
    fragile_concepts: set[str],
    project_catalog: list[dict],
    weights: PlannerWeights,
) -> list[dict]:
    ready = graph.ready_concepts(mastered)
    ranked: list[dict] = []

    for concept in ready:
        score = 0.0
        components: dict[str, float] = {}

        readiness = weights.readiness_bonus
        score += readiness
        components["readiness"] = readiness

        distance = weights.target_distance_weight * _distance_bonus(graph, concept, targets)
        score += distance
        components["target_distance"] = distance

        weak = weights.weak_dimension_bonus * len(weak_dimensions_by_concept.get(concept, []))
        score += weak
        components["weak_dimensions"] = weak

        fragile = weights.fragile_review_bonus if concept in fragile_concepts else 0.0
        score += fragile
        components["fragile_review"] = fragile

        project = weights.project_unlock_bonus * _project_unlock_bonus(concept, project_catalog)
        score += project
        components["project_unlock"] = project

        semantic = weights.semantic_similarity_weight * _semantic_bonus(graph, concept, targets)
        score += semantic
        components["semantic_similarity"] = semantic

        ranked.append({
            "concept": concept,
            "score": score,
            "components": components,
        })

    ranked.sort(key=lambda item: item["score"], reverse=True)
    return ranked
```
@@ -22,13 +22,10 @@ def resolve_mastery_profile(

```python
        }
    else:
        effective = dict(default_profile)

    if concept_profile.get("required_dimensions"):
        effective["required_dimensions"] = list(concept_profile["required_dimensions"])
    if concept_profile.get("dimension_threshold_overrides"):
        effective["dimension_threshold_overrides"].update(
            concept_profile["dimension_threshold_overrides"]
        )

    thresholds = dict(default_thresholds)
    thresholds.update(effective["dimension_threshold_overrides"])
```
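The override merge in `resolve_mastery_profile` can be illustrated standalone. A small sketch with hypothetical dimension names, assuming thresholds are floats keyed by dimension:

```python
# Defaults first, then per-concept overrides applied on top, mirroring
# the resolve_mastery_profile merge: copy, then update, so concept-specific
# values win without mutating the shared defaults.
default_thresholds = {"recall": 0.7, "explanation": 0.7, "transfer": 0.6}
concept_overrides = {"transfer": 0.8}

thresholds = dict(default_thresholds)   # copy so defaults stay untouched
thresholds.update(concept_overrides)    # concept-specific values win

print(thresholds)  # {'recall': 0.7, 'explanation': 0.7, 'transfer': 0.8}
```

The copy-before-update step matters: without it, the first concept resolved would silently rewrite the platform-wide defaults.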
@@ -24,14 +24,6 @@ def token_cosine_similarity(text_a: str, text_b: str) -> float:

```python
def concept_similarity(concept_a: dict, concept_b: dict) -> float:
    text_a = " ".join([
        concept_a.get("title", ""),
        concept_a.get("description", ""),
        " ".join(concept_a.get("mastery_signals", [])),
    ])
    text_b = " ".join([
        concept_b.get("title", ""),
        concept_b.get("description", ""),
        " ".join(concept_b.get("mastery_signals", [])),
    ])
    return token_cosine_similarity(text_a, text_b)
```
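The `token_cosine_similarity` implementation is outside this diff. For reference, a minimal sketch of what it could look like, assuming whitespace tokenization over token-count vectors:

```python
from collections import Counter
from math import sqrt


def token_cosine_similarity(text_a: str, text_b: str) -> float:
    # Cosine similarity over whitespace-token count vectors.
    # Assumed implementation; the real function is not shown in this diff.
    counts_a = Counter(text_a.lower().split())
    counts_b = Counter(text_b.lower().split())
    dot = sum(counts_a[token] * counts_b[token] for token in counts_a)
    norm_a = sqrt(sum(value * value for value in counts_a.values()))
    norm_b = sqrt(sum(value * value for value in counts_b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)
```

Identical texts score 1.0 and texts with no shared tokens score 0.0, which is why `suggest_semantic_links` can threshold on a `minimum_similarity` value.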
@@ -11,14 +11,6 @@ def test_concept_graph_builds() -> None:

```python
    assert "bayes-extension::posterior" in graph.graph.nodes


def test_prerequisite_path() -> None:
    config = load_config("configs/config.example.yaml")
    results = discover_domain_packs(["domain-packs"])
    graph = build_concept_graph(results, config.platform.default_dimension_thresholds)
    path = graph.learning_path("bayes-extension::prior", "bayes-extension::posterior")
    assert path == ["bayes-extension::prior", "bayes-extension::posterior"]


def test_curriculum_path_to_target() -> None:
    config = load_config("configs/config.example.yaml")
    results = discover_domain_packs(["domain-packs"])
```
@@ -0,0 +1,23 @@

```python
from didactopus.artifact_registry import discover_domain_packs
from didactopus.config import load_config
from didactopus.graph_builder import build_concept_graph
from didactopus.planner import PlannerWeights, rank_next_concepts


def test_rank_next_concepts() -> None:
    config = load_config("configs/config.example.yaml")
    results = discover_domain_packs(["domain-packs"])
    graph = build_concept_graph(results, config.platform.default_dimension_thresholds)

    ranked = rank_next_concepts(
        graph=graph,
        mastered=set(),
        targets=["bayes-extension::posterior"],
        weak_dimensions_by_concept={"bayes-extension::prior": ["transfer"]},
        fragile_concepts={"bayes-extension::prior"},
        project_catalog=[{"id": "p1", "prerequisites": ["bayes-extension::prior"]}],
        weights=PlannerWeights(),
    )

    assert len(ranked) >= 1
    assert ranked[0]["score"] >= ranked[-1]["score"]
```