Compare commits


No commits in common. "dd0cc9fd084b801f2e7f80891d0ac0263069f2e1" and "964aadd38223c12767a5a60a3e390a033f63ee90" have entirely different histories.

26 changed files with 520 additions and 833 deletions

README.md
View File

@ -1,203 +1,319 @@
# Didactopus
![Didactopus mascot](artwork/didactopus-mascot.png)
**Didactopus** is a local-first AI-assisted autodidactic mastery platform designed to help motivated learners achieve **true expertise** in a chosen domain.
The system combines:
• domain knowledge graphs
• mastery-based learning models
• evidence-driven assessment
• Socratic mentoring
• adaptive curriculum generation
• project-based evaluation
Didactopus is designed for **serious learning**, not shallow answer generation.
Its core philosophy is:
> AI should function as a mentor, evaluator, and guide — not a substitute for thinking.
---
# Project Goals
Didactopus aims to enable learners to:
• build deep conceptual understanding
• practice reasoning and explanation
• complete real projects demonstrating competence
• identify weak areas through evidence-based feedback
• progress through mastery rather than time spent
The platform is particularly suitable for:
• autodidacts
• researchers entering new fields
• students supplementing formal education
• interdisciplinary learners
• AI-assisted self-study programs
---
**Didactopus** is a local-first AI-assisted autodidactic mastery platform for building genuine expertise through concept graphs, adaptive curriculum planning, evidence-driven mastery, Socratic mentoring, and project-based learning.
**Tagline:** *Many arms, one goal — mastery.*
## This revision
This revision adds a **graph-aware planning layer** that connects the concept graph engine to the adaptive and evidence engines.
The new planner selects the next concepts to study using a utility function that considers:
- prerequisite readiness
- distance to learner target concepts
- weakness in competence dimensions
- project availability
- review priority for fragile concepts
- semantic neighborhood around learner goals
## Why this matters
Up to this point, Didactopus could:
- build concept graphs
- identify ready concepts
- infer mastery from evidence
But it still needed a better mechanism for choosing **what to do next**.
The graph-aware planner begins to solve that by ranking candidate concepts according to learner-specific utility instead of using unlocked prerequisites alone.
## Current architecture overview
Didactopus now includes:
- **Domain packs** for concepts, projects, rubrics, mastery profiles, templates, and cross-pack links
- **Dependency resolution** across packs
- **Merged learning graph** generation
- **Concept graph engine** with cross-pack links, similarity hooks, pathfinding, and visualization export
- **Adaptive learner engine** for ready/blocked/mastered concept states
- **Evidence engine** with weighted, recency-aware, multi-dimensional mastery inference
- **Concept-specific mastery profiles** with template inheritance
- **Graph-aware planner** for utility-ranked next-step recommendations
## Planning utility
The current planner computes a score per candidate concept using:
- readiness bonus
- target-distance bonus
- weak-dimension bonus
- fragile-concept review bonus
- project-unlock bonus
- semantic-similarity bonus
These terms are transparent and configurable.
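To make the combination concrete, here is a minimal sketch of how these terms can be summed for one candidate. The weight values mirror the `PlannerWeights` defaults that appear later in this diff; the candidate data is purely illustrative.
```python
from dataclasses import dataclass

@dataclass
class Weights:
    readiness: float = 2.0
    target_distance: float = 1.0
    weak_dimension: float = 1.2
    fragile_review: float = 1.5
    project_unlock: float = 0.8
    semantic_similarity: float = 1.0

def utility(candidate: dict, w: Weights = Weights()) -> float:
    """Sum the per-term bonuses for one ready candidate concept."""
    score = w.readiness  # the candidate is assumed to be currently studyable
    score += w.target_distance / (1.0 + candidate["hops_to_target"])
    score += w.weak_dimension * len(candidate["weak_dimensions"])
    score += w.fragile_review if candidate["is_fragile"] else 0.0
    score += w.project_unlock * candidate["projects_unlocked"]
    score += w.semantic_similarity * candidate["similarity_to_target"]
    return score

# Illustrative candidate: one hop from the target, one weak dimension, fragile.
print(round(utility({
    "hops_to_target": 1,
    "weak_dimensions": ["transfer"],
    "is_fragile": True,
    "projects_unlocked": 1,
    "similarity_to_target": 0.3,
}), 2))  # -> 6.3 with these weights
```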
## Agentic AI students
This planner also strengthens the case for **AI student agents** that use Didactopus as a structured mastery environment.
An AI student could:
1. inspect the graph
2. choose the next concept via the planner
3. attempt tasks
4. generate evidence
5. update mastery state
6. repeat until a target expertise profile is reached
This makes Didactopus useful as both:
- a learning platform
- a benchmark harness for agentic expertise growth
## Core philosophy
Didactopus assumes that real expertise is built through:
- explanation
- problem solving
- transfer
- critique
- project execution
The AI layer should function as a **mentor, evaluator, and curriculum partner**, not an answer vending machine.
## Domain packs
Knowledge enters the system through versioned, shareable **domain packs**. Each pack can contribute:
- concepts
- prerequisites
- learning stages
- concept definitions
- prerequisite graphs
- learning roadmaps
- projects
- rubrics
- mastery profiles
- profile templates
- cross-pack concept links
## Concept graph engine
This revision implements a concept graph engine with:
- prerequisite reasoning across packs
- cross-pack concept linking
- semantic concept similarity hooks
- automatic curriculum pathfinding
- visualization export for mastery graphs
Concepts are namespaced as `pack-name::concept-id`.
### Cross-pack links
Domain packs may declare conceptual links such as:
- `equivalent_to`
- `related_to`
- `extends`
- `depends_on`
These links enable Didactopus to reason across pack boundaries rather than treating each pack as an isolated island.
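As a minimal sketch against the `ConceptGraph` helper shown later in this diff (`src/didactopus/concept_graph.py`), a declared link can be added and queried like this; the concept keys come from the bundled packs.
```python
from didactopus.concept_graph import ConceptGraph

graph = ConceptGraph()
graph.add_concept("foundations-statistics::probability-basics", {"title": "Probability Basics"})
graph.add_concept("bayes-extension::prior", {"title": "Prior"})
graph.add_cross_link(
    "bayes-extension::prior",
    "foundations-statistics::probability-basics",
    "depends_on",
)

# Cross-pack relations are queried separately from strict prerequisites.
print(graph.related_concepts("bayes-extension::prior"))
# ['foundations-statistics::probability-basics']
```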
### Semantic similarity
A semantic similarity layer is included as a hook for:
- token overlap similarity
- future embedding-based similarity
- future ontology and LLM-assisted concept alignment
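A small usage sketch of the current token-overlap hook, using the `concept_similarity` helper shown later in this diff and two concept records in the shape the packs provide:
```python
from didactopus.semantic_similarity import concept_similarity

prior = {
    "title": "Prior",
    "description": "A probability distribution representing knowledge before evidence.",
    "mastery_signals": ["explain a prior distribution"],
}
posterior = {
    "title": "Posterior",
    "description": "Updated beliefs after conditioning on observed evidence.",
    "mastery_signals": ["explain updating beliefs"],
}

# Small but non-zero score, driven by the shared "evidence"/"explain" tokens.
print(concept_similarity(prior, posterior))
```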
### Curriculum pathfinding
The concept graph engine supports:
- prerequisite chains
- shortest dependency paths
- next-ready concept discovery
- reachability analysis
- curriculum path generation from a learner's mastery state to a target concept
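A usage sketch in the style of the repository's own tests, assuming the bundled example packs are installed under `domain-packs/`:
```python
from didactopus.artifact_registry import discover_domain_packs
from didactopus.config import load_config
from didactopus.graph_builder import build_concept_graph

# Merge the bundled packs into one concept graph (mirrors the repository tests).
config = load_config("configs/config.example.yaml")
packs = discover_domain_packs(["domain-packs"])
graph = build_concept_graph(packs, config.platform.default_dimension_thresholds)

# Remaining concepts, in prerequisite order, for a learner starting from scratch.
print(graph.curriculum_path_to_target(set(), "bayes-extension::posterior"))

# Concepts whose prerequisites are already satisfied for this mastery state.
print(graph.ready_concepts({"foundations-statistics::descriptive-statistics"}))
```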
### Visualization
Graphs can be exported to:
- Graphviz DOT
- Cytoscape-style JSON
## Evidence-driven mastery
Mastery is inferred from evidence such as:
- explanations
- problem solutions
- transfer tasks
- project artifacts
Evidence is:
- weighted by type
- optionally up-weighted for recency
- summarized by competence dimension
- compared against concept-specific mastery profiles
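The evidence engine itself is not displayed in this diff, so the following is only an illustrative sketch of how the type weights and recency multiplier from `configs/config.example.yaml` can combine into a single weighted score:
```python
# Illustrative only: the real logic lives in src/didactopus/evidence_engine.py.
# Weight values below match configs/config.example.yaml.
TYPE_WEIGHTS = {"explanation": 1.0, "problem": 1.5, "project": 2.5, "transfer": 2.0}
RECENT_MULTIPLIER = 1.35

def weighted_mean_score(items: list[dict]) -> float:
    """Combine evidence items into one recency- and type-weighted score."""
    num = den = 0.0
    for item in items:
        weight = TYPE_WEIGHTS[item["evidence_type"]]
        if item["is_recent"]:
            weight *= RECENT_MULTIPLIER
        num += weight * item["score"]
        den += weight
    return num / den if den else 0.0

items = [
    {"evidence_type": "problem", "score": 0.68, "is_recent": True},
    {"evidence_type": "explanation", "score": 0.80, "is_recent": False},
]
print(round(weighted_mean_score(items), 2))  # ≈ 0.72
```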
## Multi-dimensional mastery
Current dimensions include:
- `correctness`
- `explanation`
- `transfer`
- `project_execution`
- `critique`
Different concepts can require different subsets of these dimensions.
## Agentic AI students
Didactopus is also architecturally suitable for **AI learner agents**.
An agentic AI student could:
1. ingest domain packs
2. traverse the concept graph
3. generate explanations and answers
4. attempt practice tasks
5. critique model outputs
6. complete simulated projects
7. accumulate evidence
8. advance only when concept-specific mastery criteria are satisfied
## Repository structure
```text
didactopus/
├── README.md
├── artwork/
├── configs/
├── docs/
├── domain-packs/
├── src/didactopus/
└── tests/
```
# Key Architectural Concepts
## Domain Packs
Knowledge is distributed as **domain packs** contributed by the community.
Example packs:
```
domain-packs/
  statistics-foundations
  bayes-extension
  applied-inference
```
Domain packs are validated, dependency-checked, and merged into a **unified learning graph**.
---
# Learning Graph
Didactopus merges all installed packs into a directed concept graph:
```
Concept A → Concept B → Concept C
```
Edges represent prerequisites.
The system then generates:
• adaptive learning roadmaps
• next-best concepts to study
• projects unlocked by prerequisite completion
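A minimal sketch of that idea with plain networkx, using illustrative concept names:
```python
import networkx as nx

# Nodes are concepts, edges are prerequisites; "next-best" concepts are
# unmastered nodes whose prerequisites are all mastered.
graph = nx.DiGraph()
graph.add_edge("descriptive-statistics", "probability-basics")
graph.add_edge("probability-basics", "prior")
graph.add_edge("prior", "posterior")

mastered = {"descriptive-statistics"}
next_best = [
    node for node in graph.nodes
    if node not in mastered and set(graph.predecessors(node)) <= mastered
]
print(next_best)  # ['probability-basics']
```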
---
# Evidence-Driven Mastery
Concept mastery is **inferred from evidence**, not declared.
Evidence types include:
• explanations
• problem solutions
• transfer tasks
• project deliverables
Evidence contributes weighted scores that determine:
• mastery state
• learner confidence
• weak dimensions requiring further practice
---
# Multi-Dimensional Mastery
Didactopus tracks multiple competence dimensions:
| Dimension | Meaning |
|---|---|
| correctness | accurate reasoning |
| explanation | ability to explain clearly |
| transfer | ability to apply knowledge |
| project_execution | ability to build artifacts |
| critique | ability to detect errors and assumptions |
Different concepts can require different combinations of these dimensions.
---
# Concept Mastery Profiles
Concepts define **mastery profiles** specifying:
• required dimensions
• threshold overrides
Example:
```yaml
mastery_profile:
  required_dimensions:
    - correctness
    - transfer
    - critique
  dimension_threshold_overrides:
    transfer: 0.8
    critique: 0.8
```
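Illustratively, checking evidence against such a profile amounts to comparing per-dimension scores with the effective thresholds (overrides first, then the platform defaults from the example config). This sketch is not the evidence engine's actual code:
```python
# Platform defaults as in configs/config.example.yaml; overrides win.
DEFAULTS = {"correctness": 0.8, "explanation": 0.75, "transfer": 0.7}

def meets_profile(dimension_scores: dict[str, float], profile: dict) -> bool:
    thresholds = {**DEFAULTS, **profile.get("dimension_threshold_overrides", {})}
    return all(
        dimension_scores.get(dim, 0.0) >= thresholds.get(dim, 0.0)
        for dim in profile["required_dimensions"]
    )

profile = {
    "required_dimensions": ["correctness", "transfer", "critique"],
    "dimension_threshold_overrides": {"transfer": 0.8, "critique": 0.8},
}
print(meets_profile({"correctness": 0.9, "transfer": 0.82, "critique": 0.7}, profile))
# False: critique is below its 0.8 override
```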
---
# Mastery Profile Inheritance
This revision adds **profile templates** so packs can define reusable mastery models.
Example:
```yaml
profile_templates:
  foundation_concept:
    required_dimensions:
      - correctness
      - explanation
  critique_concept:
    required_dimensions:
      - correctness
      - transfer
      - critique
```
Concepts can reference templates:
```yaml
mastery_profile:
  template: critique_concept
```
This allows domain packs to remain concise while maintaining consistent evaluation standards.
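The resolution itself is straightforward. The sketch below is an illustrative stand-in for `didactopus.profile_templates.resolve_mastery_profile`, showing how a template's dimensions are inherited and then overridden by the concept's own profile:
```python
# Illustrative only; the project's own helper handles defaults and validation.
templates = {
    "critique_concept": {
        "required_dimensions": ["correctness", "transfer", "critique"],
        "dimension_threshold_overrides": {},
    }
}

def resolve(concept_profile: dict, templates: dict) -> dict:
    base = templates.get(concept_profile.get("template"), {})
    resolved = {
        "required_dimensions": list(base.get("required_dimensions", [])),
        "dimension_threshold_overrides": dict(base.get("dimension_threshold_overrides", {})),
    }
    # Concept-level settings take precedence over the template.
    if "required_dimensions" in concept_profile:
        resolved["required_dimensions"] = list(concept_profile["required_dimensions"])
    resolved["dimension_threshold_overrides"].update(
        concept_profile.get("dimension_threshold_overrides", {})
    )
    return resolved

print(resolve({"template": "critique_concept", "dimension_threshold_overrides": {"critique": 0.8}}, templates))
```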
---
# Adaptive Learning Engine
The adaptive engine computes:
• which concepts are ready to study
• which are blocked by prerequisites
• which are already mastered
• which projects are available
Output includes:
```
next_best_concepts
eligible_projects
adaptive_learning_roadmap
```
---
# Evidence Engine
The evidence engine:
• aggregates learner evidence
• computes weighted scores
• tracks confidence
• identifies weak competence dimensions
• updates mastery status
Weak performance can later **resurface concepts for review**.
---
# Socratic Mentor
Didactopus includes a mentor layer that:
• asks probing questions
• challenges reasoning
• generates practice tasks
• proposes projects
Models can run locally (recommended) or via remote APIs.
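The mentor module is not displayed in this diff; as an illustrative sketch only, a Socratic prompt can be assembled from a learner's weak dimensions like this:
```python
# Illustrative only: the real implementation is didactopus.mentor.generate_socratic_prompt.
def socratic_prompt(concept: str, weak_dimensions: list[str]) -> str:
    focus = ", ".join(weak_dimensions) or "general understanding"
    return (
        f"You are a Socratic mentor. The learner is studying '{concept}' and is "
        f"weakest in: {focus}. Ask one probing question at a time, never give the "
        f"final answer, and ask the learner to justify each step."
    )

print(socratic_prompt("bayes-extension::prior", ["transfer", "critique"]))
```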
---
# Agentic AI Students
Didactopus is also suitable for **AI-driven learning agents**.
A future architecture may include:
```
Didactopus Core
├─ Human Learner
└─ AI Student Agent
```
An AI student could:
1. read domain packs
2. attempt practice tasks
3. produce explanations
4. critique model outputs
5. complete simulated projects
6. accumulate evidence
7. progress through the mastery graph
Such agents could be used for:
• automated curriculum testing
• benchmarking AI reasoning
• synthetic expert generation
• evaluation of model capabilities
Didactopus therefore supports both:
• human learners
• agentic AI learners
---
# Project Structure
```
didactopus/
  adaptive_engine/
  artifact_registry/
  evidence_engine/
  learning_graph/
  mentor/
  practice/
  project_advisor/
```
Additional directories:
```
configs/
docs/
domain-packs/
tests/
artwork/
```
---
# Current Status
Implemented:
✓ domain pack validation
✓ dependency resolution
✓ learning graph merge
✓ adaptive roadmap generation
✓ evidence-driven mastery
✓ multi-dimensional competence tracking
✓ concept-specific mastery profiles
✓ profile template inheritance
Planned next phases:
• curriculum optimization algorithms
• active-learning task generation
• automated project evaluation
• distributed pack registry
• visualization tools for learning graphs
---
# Philosophy
Didactopus is built around a simple principle:
> Mastery requires thinking, explaining, testing, and building — not merely receiving answers.
AI can accelerate the process, but genuine learning remains an **active intellectual endeavor**.
---
**Didactopus — many arms, one goal: mastery.**

View File

@ -4,9 +4,25 @@ model_provider:
backend: ollama
endpoint: http://localhost:11434
model_name: llama3.1:8b
remote:
enabled: false
provider_name: none
endpoint: ""
model_name: ""
platform:
default_dimension_thresholds:
verification_required: true
require_learner_explanations: true
permit_direct_answers: false
resurfacing_threshold: 0.55
confidence_threshold: 0.8
evidence_weights:
explanation: 1.0
problem: 1.5
project: 2.5
transfer: 2.0
recent_evidence_multiplier: 1.35
dimension_thresholds:
correctness: 0.8
explanation: 0.75
transfer: 0.7
@ -16,3 +32,4 @@ platform:
artifacts:
local_pack_dirs:
- domain-packs
allow_third_party_packs: true
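These values are what the Python side reads back through `load_config`; a minimal sketch:
```python
from didactopus.config import load_config

config = load_config("configs/config.example.yaml")
print(config.platform.evidence_weights["project"])   # 2.5
print(config.platform.recent_evidence_multiplier)    # 1.35
print(config.artifacts.allow_third_party_packs)      # True
```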

View File

@ -1,22 +0,0 @@
# Concept Graph Engine
The concept graph engine provides the backbone for Didactopus.
## Features in this revision
- prerequisite reasoning across packs
- cross-pack concept linking
- semantic similarity scoring hook
- curriculum pathfinding
- visualization export
## Edge types
The engine distinguishes between:
- `prerequisite`
- `equivalent_to`
- `related_to`
- `extends`
- `depends_on`
Only prerequisite edges are used for strict learning-order pathfinding.
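A small sketch of that distinction using the ConceptGraph API from this revision, with hypothetical concept keys: the related_to link is stored, but only the prerequisite chain shapes the path.
```python
from didactopus.concept_graph import ConceptGraph

graph = ConceptGraph()
for key in ("stats::descriptive", "stats::probability", "bayes::prior"):
    graph.add_concept(key)
graph.add_prerequisite("stats::descriptive", "stats::probability")
graph.add_prerequisite("stats::probability", "bayes::prior")
graph.add_cross_link("bayes::prior", "stats::descriptive", "related_to")

# Pathfinding follows prerequisite edges only; the related_to link is ignored.
print(graph.learning_path("stats::descriptive", "bayes::prior"))
# ['stats::descriptive', 'stats::probability', 'bayes::prior']
```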

View File

@ -1,29 +0,0 @@
# Graph-Aware Planner
The graph-aware planner ranks next concepts using a transparent utility model.
## Inputs
- concept graph
- learner mastery state
- evidence summaries
- target concepts
- semantic similarity estimates
- project catalog
## Current scoring terms
- **readiness_bonus**: concept is currently studyable
- **target_distance_weight**: closer concepts to the target score higher
- **weak_dimension_bonus**: concepts with known weakness signals are prioritized
- **fragile_review_bonus**: resurfaced or fragile concepts are review-prioritized
- **project_unlock_bonus**: concepts that unlock projects score higher
- **semantic_similarity_weight**: concepts semantically close to targets gain weight
## Future work
- learner time budgets
- spaced repetition costs
- multi-objective planning
- planning across multiple targets
- reinforcement learning over curriculum policies

View File

@ -1,10 +1,6 @@
concepts:
- id: model-checking
title: Model Checking
description: Critiquing assumptions, fit, and implications of a probabilistic model.
prerequisites: []
mastery_signals:
- compare model assumptions
- critique a simple inference model
mastery_profile:
template: critique_heavy

View File

@ -4,7 +4,7 @@ version: 0.2.0
schema_version: "1"
didactopus_min_version: 0.1.0
didactopus_max_version: 0.9.99
description: Applied inference pack emphasizing transfer and critique.
description: Simple applied inference pack.
author: Wesley R. Elsberry
license: MIT
dependencies:
@ -12,19 +12,3 @@ dependencies:
min_version: 0.1.0
max_version: 1.0.0
overrides: []
profile_templates:
critique_heavy:
required_dimensions:
- correctness
- transfer
- critique
dimension_threshold_overrides:
transfer: 0.78
critique: 0.73
cross_pack_links:
- source_concept: model-checking
target_concept: bayes-extension::posterior
relation: extends
- source_concept: model-checking
target_concept: foundations-statistics::probability-basics
relation: related_to

View File

@ -1,27 +1,12 @@
concepts:
- id: prior
title: Prior
description: A probability distribution representing knowledge before evidence.
prerequisites: []
mastery_signals:
- explain a prior distribution
- compare reasonable priors
mastery_profile:
template: bayes_concept
- id: posterior
title: Posterior
description: Updated beliefs after conditioning on observed evidence.
prerequisites:
- prior
mastery_signals:
- explain updating beliefs
- compare prior and posterior distributions
mastery_profile:
required_dimensions:
- correctness
- explanation
- transfer
- critique
dimension_threshold_overrides:
critique: 0.78

View File

@ -12,18 +12,3 @@ dependencies:
min_version: 1.0.0
max_version: 1.9.99
overrides: []
profile_templates:
bayes_concept:
required_dimensions:
- correctness
- explanation
- transfer
dimension_threshold_overrides:
transfer: 0.74
cross_pack_links:
- source_concept: prior
target_concept: foundations-statistics::probability-basics
relation: depends_on
- source_concept: posterior
target_concept: applied-inference::model-checking
relation: related_to

View File

@ -1,26 +1,12 @@
concepts:
- id: descriptive-statistics
title: Descriptive Statistics
description: Core summaries of distributions, central tendency, and spread.
prerequisites: []
mastery_signals:
- explain mean median and variance
- summarize a simple dataset
mastery_profile:
template: foundation_concept
- explain central tendency
- id: probability-basics
title: Probability Basics
description: Basic event probability and conditional probability.
prerequisites:
- descriptive-statistics
mastery_signals:
- explain event probability
- calculate simple conditional probability
mastery_profile:
required_dimensions:
- correctness
- explanation
- transfer
dimension_threshold_overrides:
transfer: 0.72

View File

@ -9,11 +9,3 @@ author: Wesley R. Elsberry
license: MIT
dependencies: []
overrides: []
profile_templates:
foundation_concept:
required_dimensions:
- correctness
- explanation
dimension_threshold_overrides:
explanation: 0.75
cross_pack_links: []

View File

@ -10,11 +10,8 @@ readme = "README.md"
requires-python = ">=3.10"
license = {text = "MIT"}
authors = [{name = "Wesley R. Elsberry"}]
dependencies = [
"pydantic>=2.7",
"pyyaml>=6.0",
"networkx>=3.2",
]
dependencies = ["pydantic>=2.7", "pyyaml>=6.0", "networkx>=3.2"]
[project.optional-dependencies]
dev = ["pytest>=8.0", "ruff>=0.6"]

View File

@ -50,8 +50,8 @@ def validate_pack(pack_dir: str | Path) -> PackValidationResult:
result.manifest = PackManifest.model_validate(_load_yaml(pack_path / "pack.yaml"))
if not _version_in_range(DIDACTOPUS_VERSION, result.manifest.didactopus_min_version, result.manifest.didactopus_max_version):
result.errors.append(
f"incompatible with Didactopus core version {DIDACTOPUS_VERSION}; "
f"supported range is {result.manifest.didactopus_min_version}..{result.manifest.didactopus_max_version}"
f"incompatible with Didactopus core version {DIDACTOPUS_VERSION}; supported range is "
f"{result.manifest.didactopus_min_version}..{result.manifest.didactopus_max_version}"
)
result.loaded_files["concepts"] = ConceptsFile.model_validate(_load_yaml(pack_path / "concepts.yaml"))
result.loaded_files["roadmap"] = RoadmapFile.model_validate(_load_yaml(pack_path / "roadmap.yaml"))

View File

@ -8,23 +8,6 @@ class DependencySpec(BaseModel):
max_version: str = "9999.9999.9999"
class MasteryProfileSpec(BaseModel):
template: str | None = None
required_dimensions: list[str] = Field(default_factory=list)
dimension_threshold_overrides: dict[str, float] = Field(default_factory=dict)
class CrossPackLinkSpec(BaseModel):
source_concept: str
target_concept: str
relation: str
class ProfileTemplateSpec(BaseModel):
required_dimensions: list[str] = Field(default_factory=list)
dimension_threshold_overrides: dict[str, float] = Field(default_factory=dict)
class PackManifest(BaseModel):
name: str
display_name: str
@ -37,17 +20,13 @@ class PackManifest(BaseModel):
license: str = "unspecified"
dependencies: list[DependencySpec] = Field(default_factory=list)
overrides: list[str] = Field(default_factory=list)
profile_templates: dict[str, ProfileTemplateSpec] = Field(default_factory=dict)
cross_pack_links: list[CrossPackLinkSpec] = Field(default_factory=list)
class ConceptEntry(BaseModel):
id: str
title: str
description: str = ""
prerequisites: list[str] = Field(default_factory=list)
mastery_signals: list[str] = Field(default_factory=list)
mastery_profile: MasteryProfileSpec = Field(default_factory=MasteryProfileSpec)
class ConceptsFile(BaseModel):

View File

@ -1,94 +0,0 @@
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Any
from pathlib import Path
import json
import networkx as nx
REL_PREREQ = "prerequisite"
REL_EQUIVALENT = "equivalent_to"
REL_RELATED = "related_to"
REL_EXTENDS = "extends"
REL_DEPENDS = "depends_on"
@dataclass
class ConceptGraph:
graph: nx.MultiDiGraph = field(default_factory=nx.MultiDiGraph)
def add_concept(self, concept_key: str, metadata: dict[str, Any] | None = None) -> None:
self.graph.add_node(concept_key, **(metadata or {}))
def add_edge(self, source: str, target: str, relation: str) -> None:
self.graph.add_edge(source, target, relation=relation)
def add_prerequisite(self, prereq: str, concept: str) -> None:
self.add_edge(prereq, concept, REL_PREREQ)
def add_cross_link(self, source: str, target: str, relation: str) -> None:
self.add_edge(source, target, relation)
def prerequisite_subgraph(self) -> nx.DiGraph:
g = nx.DiGraph()
for node, data in self.graph.nodes(data=True):
g.add_node(node, **data)
for u, v, data in self.graph.edges(data=True):
if data.get("relation") == REL_PREREQ:
g.add_edge(u, v)
return g
def prerequisites(self, concept: str) -> list[str]:
return list(self.prerequisite_subgraph().predecessors(concept))
def prerequisite_chain(self, concept: str) -> list[str]:
return list(nx.ancestors(self.prerequisite_subgraph(), concept))
def dependents(self, concept: str) -> list[str]:
return list(self.prerequisite_subgraph().successors(concept))
def learning_path(self, start: str, target: str) -> list[str] | None:
try:
return nx.shortest_path(self.prerequisite_subgraph(), start, target)
except nx.NetworkXNoPath:
return None
def curriculum_path_to_target(self, mastered: set[str], target: str) -> list[str]:
pg = self.prerequisite_subgraph()
needed = set(nx.ancestors(pg, target)) | {target}
ordered = [n for n in nx.topological_sort(pg) if n in needed]
return [n for n in ordered if n not in mastered]
def ready_concepts(self, mastered: set[str]) -> list[str]:
pg = self.prerequisite_subgraph()
ready = []
for node in pg.nodes:
if node in mastered:
continue
if set(pg.predecessors(node)).issubset(mastered):
ready.append(node)
return ready
def related_concepts(self, concept: str, relation_types: set[str] | None = None) -> list[str]:
relation_types = relation_types or {REL_EQUIVALENT, REL_RELATED, REL_EXTENDS, REL_DEPENDS}
found = []
for _, v, data in self.graph.out_edges(concept, data=True):
if data.get("relation") in relation_types:
found.append(v)
return found
def export_graphviz(self, path: str) -> None:
lines = ["digraph Didactopus {"]
for node in self.graph.nodes:
lines.append(f' "{node}";')
for u, v, data in self.graph.edges(data=True):
lines.append(f' "{u}" -> "{v}" [label="{data.get("relation", "")}"];')
lines.append("}")
Path(path).write_text("\n".join(lines), encoding="utf-8")
def export_cytoscape_json(self, path: str) -> None:
data = {
"nodes": [{"data": {"id": n, **attrs}} for n, attrs in self.graph.nodes(data=True)],
"edges": [{"data": {"source": u, "target": v, **attrs}} for u, v, attrs in self.graph.edges(data=True)],
}
Path(path).write_text(json.dumps(data, indent=2), encoding="utf-8")

View File

@ -3,8 +3,41 @@ from pydantic import BaseModel, Field
import yaml
class ProviderEndpoint(BaseModel):
backend: str = "ollama"
endpoint: str = "http://localhost:11434"
model_name: str = "llama3.1:8b"
class RemoteProvider(BaseModel):
enabled: bool = False
provider_name: str = "none"
endpoint: str = ""
model_name: str = ""
class ModelProviderConfig(BaseModel):
mode: str = Field(default="local_first")
local: ProviderEndpoint = Field(default_factory=ProviderEndpoint)
remote: RemoteProvider = Field(default_factory=RemoteProvider)
class PlatformConfig(BaseModel):
default_dimension_thresholds: dict[str, float] = Field(
verification_required: bool = True
require_learner_explanations: bool = True
permit_direct_answers: bool = False
resurfacing_threshold: float = 0.55
confidence_threshold: float = 0.8
evidence_weights: dict[str, float] = Field(
default_factory=lambda: {
"explanation": 1.0,
"problem": 1.5,
"project": 2.5,
"transfer": 2.0,
}
)
recent_evidence_multiplier: float = 1.35
dimension_thresholds: dict[str, float] = Field(
default_factory=lambda: {
"correctness": 0.8,
"explanation": 0.75,
@ -15,18 +48,15 @@ class PlatformConfig(BaseModel):
)
class PlannerConfig(BaseModel):
readiness_bonus: float = 2.0
target_distance_weight: float = 1.0
weak_dimension_bonus: float = 1.2
fragile_review_bonus: float = 1.5
project_unlock_bonus: float = 0.8
semantic_similarity_weight: float = 1.0
class ArtifactConfig(BaseModel):
local_pack_dirs: list[str] = Field(default_factory=lambda: ["domain-packs"])
allow_third_party_packs: bool = True
class AppConfig(BaseModel):
model_provider: ModelProviderConfig = Field(default_factory=ModelProviderConfig)
platform: PlatformConfig = Field(default_factory=PlatformConfig)
planner: PlannerConfig = Field(default_factory=PlannerConfig)
artifacts: ArtifactConfig = Field(default_factory=ArtifactConfig)
def load_config(path: str | Path) -> AppConfig:

View File

@ -1,46 +0,0 @@
from __future__ import annotations
from .artifact_registry import PackValidationResult
from .concept_graph import ConceptGraph
from .learning_graph import build_merged_learning_graph, namespaced_concept
from .semantic_similarity import concept_similarity
def build_concept_graph(results: list[PackValidationResult], default_dimension_thresholds: dict[str, float]) -> ConceptGraph:
merged = build_merged_learning_graph(results, default_dimension_thresholds)
graph = ConceptGraph()
for concept_key, data in merged.concept_data.items():
graph.add_concept(concept_key, data)
for concept_key, data in merged.concept_data.items():
for prereq in data["prerequisites"]:
if prereq in merged.concept_data:
graph.add_prerequisite(prereq, concept_key)
for result in results:
if result.manifest is None or not result.is_valid:
continue
pack_name = result.manifest.name
for link in result.manifest.cross_pack_links:
source = link.source_concept if "::" in link.source_concept else namespaced_concept(pack_name, link.source_concept)
target = link.target_concept
if source in graph.graph.nodes and target in graph.graph.nodes:
graph.add_cross_link(source, target, link.relation)
return graph
def suggest_semantic_links(graph: ConceptGraph, minimum_similarity: float = 0.35) -> list[tuple[str, str, float]]:
concepts = list(graph.graph.nodes(data=True))
found = []
for i in range(len(concepts)):
key_a, data_a = concepts[i]
for j in range(i + 1, len(concepts)):
key_b, data_b = concepts[j]
if key_a.split("::")[0] == key_b.split("::")[0]:
continue
sim = concept_similarity(data_a, data_b)
if sim >= minimum_similarity:
found.append((key_a, key_b, sim))
return sorted(found, key=lambda x: x[2], reverse=True)

View File

@ -2,9 +2,9 @@ from __future__ import annotations
from dataclasses import dataclass, field
from typing import Any
import networkx as nx
from .artifact_registry import PackValidationResult, topological_pack_order
from .profile_templates import resolve_mastery_profile
def namespaced_concept(pack_name: str, concept_id: str) -> str:
@ -13,44 +13,38 @@ def namespaced_concept(pack_name: str, concept_id: str) -> str:
@dataclass
class MergedLearningGraph:
graph: nx.DiGraph = field(default_factory=nx.DiGraph)
concept_data: dict[str, dict[str, Any]] = field(default_factory=dict)
project_catalog: list[dict[str, Any]] = field(default_factory=list)
load_order: list[str] = field(default_factory=list)
def build_merged_learning_graph(
results: list[PackValidationResult],
default_dimension_thresholds: dict[str, float],
) -> MergedLearningGraph:
def build_merged_learning_graph(results: list[PackValidationResult]) -> MergedLearningGraph:
merged = MergedLearningGraph()
valid = {r.manifest.name: r for r in results if r.manifest is not None and r.is_valid}
merged.load_order = topological_pack_order(results)
for pack_name in merged.load_order:
result = valid[pack_name]
templates = {
name: {
"required_dimensions": list(spec.required_dimensions),
"dimension_threshold_overrides": dict(spec.dimension_threshold_overrides),
}
for name, spec in result.manifest.profile_templates.items()
}
for concept in result.loaded_files["concepts"].concepts:
key = namespaced_concept(pack_name, concept.id)
resolved_profile = resolve_mastery_profile(
concept.mastery_profile.model_dump(),
templates,
default_dimension_thresholds,
)
merged.concept_data[key] = {
"id": concept.id,
"title": concept.title,
"description": concept.description,
"pack": pack_name,
"prerequisites": [namespaced_concept(pack_name, p) for p in concept.prerequisites],
"prerequisites": list(concept.prerequisites),
"mastery_signals": list(concept.mastery_signals),
"mastery_profile": resolved_profile,
}
merged.graph.add_node(key)
for pack_name in merged.load_order:
result = valid[pack_name]
for concept in result.loaded_files["concepts"].concepts:
concept_key = namespaced_concept(pack_name, concept.id)
for prereq in concept.prerequisites:
prereq_key = namespaced_concept(pack_name, prereq)
if prereq_key in merged.graph:
merged.graph.add_edge(prereq_key, concept_key)
for project in result.loaded_files["projects"].projects:
merged.project_catalog.append({
"id": f"{pack_name}::{project.id}",

View File

@ -2,101 +2,141 @@ import argparse
import os
from pathlib import Path
from .artifact_registry import check_pack_dependencies, detect_dependency_cycles, discover_domain_packs
from .adaptive_engine import LearnerProfile, build_adaptive_plan
from .artifact_registry import (
check_pack_dependencies,
detect_dependency_cycles,
discover_domain_packs,
topological_pack_order,
)
from .config import load_config
from .graph_builder import build_concept_graph, suggest_semantic_links
from .planner import PlannerWeights, rank_next_concepts
from .evidence_engine import EvidenceItem, ingest_evidence_bundle
from .learning_graph import build_merged_learning_graph
from .mentor import generate_socratic_prompt
from .model_provider import ModelProvider
from .practice import generate_practice_task
from .project_advisor import suggest_capstone
def build_parser() -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(description="Didactopus graph-aware planner")
parser.add_argument("--target", default="bayes-extension::posterior")
parser.add_argument("--mastered", nargs="*", default=[])
parser.add_argument("--export-dot", default="")
parser.add_argument("--export-cytoscape", default="")
parser.add_argument("--config", default=os.environ.get("DIDACTOPUS_CONFIG", "configs/config.example.yaml"))
parser = argparse.ArgumentParser(description="Didactopus multi-dimensional mastery scaffold")
parser.add_argument("--domain", required=True)
parser.add_argument("--goal", required=True)
parser.add_argument(
"--config",
default=os.environ.get("DIDACTOPUS_CONFIG", "configs/config.example.yaml"),
)
return parser
def main() -> None:
args = build_parser().parse_args()
config = load_config(Path(args.config))
results = discover_domain_packs(["domain-packs"])
dep_errors = check_pack_dependencies(results)
cycles = detect_dependency_cycles(results)
provider = ModelProvider(config.model_provider)
packs = discover_domain_packs(config.artifacts.local_pack_dirs)
dependency_errors = check_pack_dependencies(packs)
cycles = detect_dependency_cycles(packs)
if dep_errors:
print("Dependency errors:")
for err in dep_errors:
print("== Didactopus ==")
print("Many arms, one goal — mastery.")
print()
if dependency_errors:
print("== Dependency Errors ==")
for err in dependency_errors:
print(f"- {err}")
print()
if cycles:
print("Dependency cycles:")
print("== Dependency Cycles ==")
for cycle in cycles:
print(f"- {' -> '.join(cycle)}")
print(f"- cycle: {' -> '.join(cycle)}")
return
graph = build_concept_graph(results, config.platform.default_dimension_thresholds)
mastered = set(args.mastered)
print("== Pack Load Order ==")
for name in topological_pack_order(packs):
print(f"- {name}")
print()
weak_dimensions_by_concept = {
"bayes-extension::prior": ["explanation", "transfer"],
}
fragile_concepts = {"bayes-extension::prior"}
ranked = rank_next_concepts(
graph=graph,
mastered=mastered,
targets=[args.target],
weak_dimensions_by_concept=weak_dimensions_by_concept,
fragile_concepts=fragile_concepts,
project_catalog=[
{
"id": "bayes-extension::bayes-mini-project",
"prerequisites": ["bayes-extension::prior"],
},
{
"id": "applied-inference::inference-project",
"prerequisites": ["applied-inference::model-checking"],
},
],
weights=PlannerWeights(
readiness_bonus=config.planner.readiness_bonus,
target_distance_weight=config.planner.target_distance_weight,
weak_dimension_bonus=config.planner.weak_dimension_bonus,
fragile_review_bonus=config.planner.fragile_review_bonus,
project_unlock_bonus=config.planner.project_unlock_bonus,
semantic_similarity_weight=config.planner.semantic_similarity_weight,
),
merged = build_merged_learning_graph(packs)
profile = LearnerProfile(
learner_id="demo-learner",
display_name="Demo Learner",
goals=[args.goal],
mastered_concepts=set(),
hide_mastered=True,
)
print("== Didactopus Graph-Aware Planner ==")
print(f"Target concept: {args.target}")
print()
print("Curriculum path from current mastery:")
for item in graph.curriculum_path_to_target(mastered, args.target):
print(f"- {item}")
print()
print("Ready concepts:")
for item in graph.ready_concepts(mastered):
print(f"- {item}")
print()
print("Ranked next concepts:")
for item in ranked:
print(f"- {item['concept']}: {item['score']:.2f}")
for name, value in item["components"].items():
print(f" * {name}: {value:.2f}")
print()
print("Suggested semantic links:")
for a, b, score in suggest_semantic_links(graph, minimum_similarity=0.10)[:8]:
print(f"- {a} <-> {b} : {score:.2f}")
evidence_items = [
EvidenceItem(
concept_key="foundations-statistics::descriptive-statistics",
evidence_type="project",
score=0.88,
is_recent=True,
rubric_dimensions={
"correctness": 0.9,
"explanation": 0.83,
"transfer": 0.79,
"project_execution": 0.88,
"critique": 0.74,
},
notes="Strong integrated performance.",
),
EvidenceItem(
concept_key="bayes-extension::prior",
evidence_type="problem",
score=0.68,
is_recent=True,
rubric_dimensions={
"correctness": 0.75,
"explanation": 0.62,
"transfer": 0.55,
"critique": 0.58,
},
notes="Knows some basics, weak transfer and critique.",
),
]
if args.export_dot:
graph.export_graphviz(args.export_dot)
print(f"Exported Graphviz DOT to {args.export_dot}")
if args.export_cytoscape:
graph.export_cytoscape_json(args.export_cytoscape)
print(f"Exported Cytoscape JSON to {args.export_cytoscape}")
evidence_state = ingest_evidence_bundle(
profile=profile,
items=evidence_items,
resurfacing_threshold=config.platform.resurfacing_threshold,
confidence_threshold=config.platform.confidence_threshold,
type_weights=config.platform.evidence_weights,
recent_multiplier=config.platform.recent_evidence_multiplier,
dimension_thresholds=config.platform.dimension_thresholds,
)
plan = build_adaptive_plan(merged, profile)
if __name__ == "__main__":
main()
print("== Multi-Dimensional Evidence Summary ==")
for concept_key, summary in evidence_state.summary_by_concept.items():
print(
f"- {concept_key}: weighted_mean={summary.weighted_mean_score:.2f}, "
f"confidence={summary.confidence:.2f}, mastered={summary.mastered}"
)
if summary.dimension_means:
dims = ", ".join(f"{k}={v:.2f}" for k, v in sorted(summary.dimension_means.items()))
print(f" * dimensions: {dims}")
if summary.weak_dimensions:
print(f" * weak dimensions: {', '.join(summary.weak_dimensions)}")
print()
print("== Mastered Concepts ==")
if profile.mastered_concepts:
for concept_key in sorted(profile.mastered_concepts):
print(f"- {concept_key}")
else:
print("- none yet")
print()
print("== Next Best Concepts ==")
for concept in plan.next_best_concepts:
print(f"- {concept}")
print()
focus_concept = "bayes-extension::prior"
weak_dims = evidence_state.summary_by_concept.get(focus_concept).weak_dimensions if focus_concept in evidence_state.summary_by_concept else []
print(generate_socratic_prompt(provider, focus_concept, weak_dims))
print(generate_practice_task(provider, focus_concept, weak_dims))
print(suggest_capstone(provider, args.domain))

View File

@ -13,6 +13,10 @@ class ModelProvider:
def __init__(self, config: ModelProviderConfig) -> None:
self.config = config
def describe(self) -> str:
local = self.config.local
return f"mode={self.config.mode}, local={local.backend}:{local.model_name}"
def generate(self, prompt: str) -> ModelResponse:
local = self.config.local
preview = prompt.strip().replace("\n", " ")[:120]

View File

@ -1,100 +0,0 @@
from __future__ import annotations
from dataclasses import dataclass
from math import inf
import networkx as nx
from .concept_graph import ConceptGraph
from .semantic_similarity import concept_similarity
@dataclass
class PlannerWeights:
readiness_bonus: float = 2.0
target_distance_weight: float = 1.0
weak_dimension_bonus: float = 1.2
fragile_review_bonus: float = 1.5
project_unlock_bonus: float = 0.8
semantic_similarity_weight: float = 1.0
def _distance_bonus(graph: ConceptGraph, concept: str, targets: list[str]) -> float:
pg = graph.prerequisite_subgraph()
best = inf
for target in targets:
try:
dist = len(nx.shortest_path(pg, concept, target)) - 1
best = min(best, dist)
except (nx.NetworkXNoPath, nx.NodeNotFound):
continue
if best is inf:
return 0.0
return 1.0 / (1.0 + best)
def _project_unlock_bonus(concept: str, project_catalog: list[dict]) -> float:
count = 0
for project in project_catalog:
if concept in project.get("prerequisites", []):
count += 1
return float(count)
def _semantic_bonus(graph: ConceptGraph, concept: str, targets: list[str]) -> float:
data_a = graph.graph.nodes[concept]
best = 0.0
for target in targets:
if target not in graph.graph.nodes:
continue
data_b = graph.graph.nodes[target]
best = max(best, concept_similarity(data_a, data_b))
return best
def rank_next_concepts(
graph: ConceptGraph,
mastered: set[str],
targets: list[str],
weak_dimensions_by_concept: dict[str, list[str]],
fragile_concepts: set[str],
project_catalog: list[dict],
weights: PlannerWeights,
) -> list[dict]:
ready = graph.ready_concepts(mastered)
ranked = []
for concept in ready:
score = 0.0
components = {}
readiness = weights.readiness_bonus
score += readiness
components["readiness"] = readiness
distance = weights.target_distance_weight * _distance_bonus(graph, concept, targets)
score += distance
components["target_distance"] = distance
weak = weights.weak_dimension_bonus * len(weak_dimensions_by_concept.get(concept, []))
score += weak
components["weak_dimensions"] = weak
fragile = weights.fragile_review_bonus if concept in fragile_concepts else 0.0
score += fragile
components["fragile_review"] = fragile
project = weights.project_unlock_bonus * _project_unlock_bonus(concept, project_catalog)
score += project
components["project_unlock"] = project
semantic = weights.semantic_similarity_weight * _semantic_bonus(graph, concept, targets)
score += semantic
components["semantic_similarity"] = semantic
ranked.append({
"concept": concept,
"score": score,
"components": components,
})
ranked.sort(key=lambda item: item["score"], reverse=True)
return ranked

View File

@ -1,36 +1,34 @@
from typing import Any
from dataclasses import dataclass
from typing import Dict, List
def resolve_mastery_profile(
concept_profile: dict[str, Any] | None,
templates: dict[str, dict[str, Any]],
default_thresholds: dict[str, float],
) -> dict[str, Any]:
default_profile = {
"required_dimensions": list(default_thresholds.keys()),
"dimension_threshold_overrides": {},
}
if not concept_profile:
effective = dict(default_profile)
@dataclass
class ProfileTemplate:
name: str
required_dimensions: List[str]
dimension_threshold_overrides: Dict[str, float]
def resolve_mastery_profile(concept_profile, templates, default_profile):
if concept_profile is None:
return default_profile
template_name = concept_profile.get("template")
if template_name:
base = templates.get(template_name, default_profile)
profile = {
"required_dimensions": list(base.required_dimensions),
"dimension_threshold_overrides": dict(base.dimension_threshold_overrides),
}
else:
template_name = concept_profile.get("template")
if template_name and template_name in templates:
tmpl = templates[template_name]
effective = {
"required_dimensions": list(tmpl.get("required_dimensions", default_profile["required_dimensions"])),
"dimension_threshold_overrides": dict(tmpl.get("dimension_threshold_overrides", {})),
}
else:
effective = dict(default_profile)
if concept_profile.get("required_dimensions"):
effective["required_dimensions"] = list(concept_profile["required_dimensions"])
if concept_profile.get("dimension_threshold_overrides"):
effective["dimension_threshold_overrides"].update(concept_profile["dimension_threshold_overrides"])
profile = default_profile.copy()
thresholds = dict(default_thresholds)
thresholds.update(effective["dimension_threshold_overrides"])
return {
"required_dimensions": effective["required_dimensions"],
"dimension_threshold_overrides": dict(effective["dimension_threshold_overrides"]),
"effective_thresholds": {dim: thresholds[dim] for dim in effective["required_dimensions"] if dim in thresholds},
}
if "required_dimensions" in concept_profile:
profile["required_dimensions"] = concept_profile["required_dimensions"]
if "dimension_threshold_overrides" in concept_profile:
profile["dimension_threshold_overrides"].update(
concept_profile["dimension_threshold_overrides"]
)
return profile

View File

@ -1,29 +0,0 @@
from collections import Counter
import math
def _tokenize(text: str) -> list[str]:
cleaned = "".join(ch.lower() if ch.isalnum() else " " for ch in text)
return [tok for tok in cleaned.split() if tok]
def token_cosine_similarity(text_a: str, text_b: str) -> float:
tokens_a = _tokenize(text_a)
tokens_b = _tokenize(text_b)
if not tokens_a or not tokens_b:
return 0.0
ca = Counter(tokens_a)
cb = Counter(tokens_b)
shared = set(ca) & set(cb)
dot = sum(ca[t] * cb[t] for t in shared)
na = math.sqrt(sum(v * v for v in ca.values()))
nb = math.sqrt(sum(v * v for v in cb.values()))
if na == 0 or nb == 0:
return 0.0
return dot / (na * nb)
def concept_similarity(concept_a: dict, concept_b: dict) -> float:
text_a = " ".join([concept_a.get("title", ""), concept_a.get("description", ""), " ".join(concept_a.get("mastery_signals", []))])
text_b = " ".join([concept_b.get("title", ""), concept_b.get("description", ""), " ".join(concept_b.get("mastery_signals", []))])
return token_cosine_similarity(text_a, text_b)

View File

@ -1,36 +0,0 @@
from didactopus.artifact_registry import discover_domain_packs
from didactopus.config import load_config
from didactopus.graph_builder import build_concept_graph, suggest_semantic_links
def test_concept_graph_builds() -> None:
config = load_config("configs/config.example.yaml")
results = discover_domain_packs(["domain-packs"])
graph = build_concept_graph(results, config.platform.default_dimension_thresholds)
assert "foundations-statistics::probability-basics" in graph.graph.nodes
assert "bayes-extension::posterior" in graph.graph.nodes
def test_curriculum_path_to_target() -> None:
config = load_config("configs/config.example.yaml")
results = discover_domain_packs(["domain-packs"])
graph = build_concept_graph(results, config.platform.default_dimension_thresholds)
path = graph.curriculum_path_to_target(set(), "bayes-extension::posterior")
assert "bayes-extension::prior" in path
assert "bayes-extension::posterior" in path
def test_declared_cross_pack_links_exist() -> None:
config = load_config("configs/config.example.yaml")
results = discover_domain_packs(["domain-packs"])
graph = build_concept_graph(results, config.platform.default_dimension_thresholds)
related = graph.related_concepts("bayes-extension::posterior")
assert "applied-inference::model-checking" in related
def test_semantic_link_suggestions() -> None:
config = load_config("configs/config.example.yaml")
results = discover_domain_packs(["domain-packs"])
graph = build_concept_graph(results, config.platform.default_dimension_thresholds)
suggestions = suggest_semantic_links(graph, minimum_similarity=0.10)
assert len(suggestions) >= 1

View File

@ -1,19 +0,0 @@
from pathlib import Path
from didactopus.artifact_registry import discover_domain_packs
from didactopus.config import load_config
from didactopus.graph_builder import build_concept_graph
def test_exports(tmp_path: Path) -> None:
config = load_config("configs/config.example.yaml")
results = discover_domain_packs(["domain-packs"])
graph = build_concept_graph(results, config.platform.default_dimension_thresholds)
dot_path = tmp_path / "graph.dot"
json_path = tmp_path / "graph.json"
graph.export_graphviz(str(dot_path))
graph.export_cytoscape_json(str(json_path))
assert dot_path.exists()
assert json_path.exists()

View File

@ -1,23 +0,0 @@
from didactopus.artifact_registry import discover_domain_packs
from didactopus.config import load_config
from didactopus.graph_builder import build_concept_graph
from didactopus.planner import PlannerWeights, rank_next_concepts
def test_rank_next_concepts() -> None:
config = load_config("configs/config.example.yaml")
results = discover_domain_packs(["domain-packs"])
graph = build_concept_graph(results, config.platform.default_dimension_thresholds)
ranked = rank_next_concepts(
graph=graph,
mastered=set(),
targets=["bayes-extension::posterior"],
weak_dimensions_by_concept={"bayes-extension::prior": ["transfer"]},
fragile_concepts={"bayes-extension::prior"},
project_catalog=[{"id": "p1", "prerequisites": ["bayes-extension::prior"]}],
weights=PlannerWeights(),
)
assert len(ranked) >= 1
assert ranked[0]["score"] >= ranked[-1]["score"]

View File

@ -1,18 +0,0 @@
from didactopus.profile_templates import resolve_mastery_profile
def test_template_resolution() -> None:
templates = {
"foundation": {
"required_dimensions": ["correctness", "explanation"],
"dimension_threshold_overrides": {"explanation": 0.8},
}
}
resolved = resolve_mastery_profile(
{"template": "foundation"},
templates,
{"correctness": 0.8, "explanation": 0.75, "transfer": 0.7},
)
assert resolved["required_dimensions"] == ["correctness", "explanation"]
assert resolved["effective_thresholds"]["correctness"] == 0.8
assert resolved["effective_thresholds"]["explanation"] == 0.8