Compare commits: 964aadd382...dd0cc9fd08 (2 commits)

| Author | SHA1 | Date |
|---|---|---|
| | dd0cc9fd08 | |
| | 4ac65b6489 | |

README.md (454)
@@ -1,319 +1,203 @@
 # Didactopus

-**Didactopus** is a local‑first AI‑assisted autodidactic mastery platform designed to help motivated learners achieve **true expertise** in a chosen domain.
-
-The system combines:
-
-• domain knowledge graphs
-• mastery‑based learning models
-• evidence‑driven assessment
-• Socratic mentoring
-• adaptive curriculum generation
-• project‑based evaluation
-
-Didactopus is designed for **serious learning**, not shallow answer generation.
-
-Its core philosophy is:
-
-> AI should function as a mentor, evaluator, and guide — not a substitute for thinking.
-
----
-
-# Project Goals
-
-Didactopus aims to enable learners to:
-
-• build deep conceptual understanding
-• practice reasoning and explanation
-• complete real projects demonstrating competence
-• identify weak areas through evidence‑based feedback
-• progress through mastery rather than time spent
-
-The platform is particularly suitable for:
-
-• autodidacts
-• researchers entering new fields
-• students supplementing formal education
-• interdisciplinary learners
-• AI‑assisted self‑study programs
-
----
-
-# Key Architectural Concepts
-
-## Domain Packs
-
-Knowledge is distributed as **domain packs** contributed by the community.
-
-Each pack can include:
-
-- concept definitions
-- prerequisite graphs
-- learning roadmaps
-- projects
-- rubrics
-- mastery profiles
-
-Example packs:
-
-```
-domain-packs/
-  statistics-foundations
-  bayes-extension
-  applied-inference
-```
-
-Domain packs are validated, dependency‑checked, and merged into a **unified learning graph**.
-
----
-
-# Learning Graph
-
-Didactopus merges all installed packs into a directed concept graph:
-
-```
-Concept A → Concept B → Concept C
-```
-
-Edges represent prerequisites.
-
-The system then generates:
-
-• adaptive learning roadmaps
-• next-best concepts to study
-• projects unlocked by prerequisite completion
-
----
-
-# Evidence‑Driven Mastery
-
-Concept mastery is **inferred from evidence**, not declared.
-
-Evidence types include:
-
-• explanations
-• problem solutions
-• transfer tasks
-• project deliverables
-
-Evidence contributes weighted scores that determine:
-
-• mastery state
-• learner confidence
-• weak dimensions requiring further practice
-
----
-
-# Multi‑Dimensional Mastery
-
-Didactopus tracks multiple competence dimensions:
-
-| Dimension | Meaning |
-|---|---|
-| correctness | accurate reasoning |
-| explanation | ability to explain clearly |
-| transfer | ability to apply knowledge |
-| project_execution | ability to build artifacts |
-| critique | ability to detect errors and assumptions |
-
-Different concepts can require different combinations of these dimensions.
-
----
-
-# Concept Mastery Profiles
-
-Concepts define **mastery profiles** specifying:
-
-• required dimensions
-• threshold overrides
-
-Example:
-
-```yaml
-mastery_profile:
-  required_dimensions:
-    - correctness
-    - transfer
-    - critique
-  dimension_threshold_overrides:
-    transfer: 0.8
-    critique: 0.8
-```
-
----
-
-# Mastery Profile Inheritance
-
-This revision adds **profile templates** so packs can define reusable mastery models.
-
-Example:
-
-```yaml
-profile_templates:
-  foundation_concept:
-    required_dimensions:
-      - correctness
-      - explanation
-
-  critique_concept:
-    required_dimensions:
-      - correctness
-      - transfer
-      - critique
-```
-
-Concepts can reference templates:
-
-```yaml
-mastery_profile:
-  template: critique_concept
-```
-
-This allows domain packs to remain concise while maintaining consistent evaluation standards.
-
----
-
-# Adaptive Learning Engine
-
-The adaptive engine computes:
-
-• which concepts are ready to study
-• which are blocked by prerequisites
-• which are already mastered
-• which projects are available
-
-Output includes:
-
-```
-next_best_concepts
-eligible_projects
-adaptive_learning_roadmap
-```
-
----
-
-# Evidence Engine
-
-The evidence engine:
-
-• aggregates learner evidence
-• computes weighted scores
-• tracks confidence
-• identifies weak competence dimensions
-• updates mastery status
-
-Later weak performance can **resurface concepts for review**.
-
----
-
-# Socratic Mentor
-
-Didactopus includes a mentor layer that:
-
-• asks probing questions
-• challenges reasoning
-• generates practice tasks
-• proposes projects
-
-Models can run locally (recommended) or via remote APIs.
-
----
-
-# Agentic AI Students
-
-Didactopus is also suitable for **AI‑driven learning agents**.
-
-A future architecture may include:
-
-```
-Didactopus Core
-│
-├─ Human Learner
-└─ AI Student Agent
-```
-
+**Didactopus** is a local-first AI-assisted autodidactic mastery platform for building genuine expertise through concept graphs, adaptive curriculum planning, evidence-driven mastery, Socratic mentoring, and project-based learning.
+
+**Tagline:** *Many arms, one goal — mastery.*
+
+## This revision
+
+This revision adds a **graph-aware planning layer** that connects the concept graph engine to the adaptive and evidence engines.
+
+The new planner selects the next concepts to study using a utility function that considers:
+
+- prerequisite readiness
+- distance to learner target concepts
+- weakness in competence dimensions
+- project availability
+- review priority for fragile concepts
+- semantic neighborhood around learner goals
+
+## Why this matters
+
+Up to this point, Didactopus could:
+
+- build concept graphs
+- identify ready concepts
+- infer mastery from evidence
+
+But it still needed a better mechanism for choosing **what to do next**.
+
+The graph-aware planner begins to solve that by ranking candidate concepts according to learner-specific utility instead of using unlocked prerequisites alone.
+
+## Current architecture overview
+
+Didactopus now includes:
+
+- **Domain packs** for concepts, projects, rubrics, mastery profiles, templates, and cross-pack links
+- **Dependency resolution** across packs
+- **Merged learning graph** generation
+- **Concept graph engine** with cross-pack links, similarity hooks, pathfinding, and visualization export
+- **Adaptive learner engine** for ready/blocked/mastered concept states
+- **Evidence engine** with weighted, recency-aware, multi-dimensional mastery inference
+- **Concept-specific mastery profiles** with template inheritance
+- **Graph-aware planner** for utility-ranked next-step recommendations
+
+## Planning utility
+
+The current planner computes a score per candidate concept using:
+
+- readiness bonus
+- target-distance bonus
+- weak-dimension bonus
+- fragile-concept review bonus
+- project-unlock bonus
+- semantic-similarity bonus
+
+These terms are transparent and configurable.
+
+## Agentic AI students
+
+This planner also strengthens the case for **AI student agents** that use Didactopus as a structured mastery environment.
+
 An AI student could:

-1. read domain packs
-2. attempt practice tasks
-3. produce explanations
-4. critique model outputs
-5. complete simulated projects
-6. accumulate evidence
-7. progress through the mastery graph
-
-Such agents could be used for:
-
-• automated curriculum testing
-• benchmarking AI reasoning
-• synthetic expert generation
-• evaluation of model capabilities
-
-Didactopus therefore supports both:
-
-• human learners
-• agentic AI learners
-
----
-
-# Project Structure
-
-```
+1. inspect the graph
+2. choose the next concept via the planner
+3. attempt tasks
+4. generate evidence
+5. update mastery state
+6. repeat until a target expertise profile is reached
+
+This makes Didactopus useful as both:
+
+- a learning platform
+- a benchmark harness for agentic expertise growth
+
+## Core philosophy
+
+Didactopus assumes that real expertise is built through:
+
+- explanation
+- problem solving
+- transfer
+- critique
+- project execution
+
+The AI layer should function as a **mentor, evaluator, and curriculum partner**, not an answer vending machine.
+
+## Domain packs
+
+Knowledge enters the system through versioned, shareable **domain packs**. Each pack can contribute:
+
+- concepts
+- prerequisites
+- learning stages
+- projects
+- rubrics
+- mastery profiles
+- profile templates
+- cross-pack concept links
+
+## Concept graph engine
+
+This revision implements a concept graph engine with:
+
+- prerequisite reasoning across packs
+- cross-pack concept linking
+- semantic concept similarity hooks
+- automatic curriculum pathfinding
+- visualization export for mastery graphs
+
+Concepts are namespaced as `pack-name::concept-id`.
+
+### Cross-pack links
+
+Domain packs may declare conceptual links such as:
+
+- `equivalent_to`
+- `related_to`
+- `extends`
+- `depends_on`
+
+These links enable Didactopus to reason across pack boundaries rather than treating each pack as an isolated island.
+
+### Semantic similarity
+
+A semantic similarity layer is included as a hook for:
+
+- token overlap similarity
+- future embedding-based similarity
+- future ontology and LLM-assisted concept alignment
+
+### Curriculum pathfinding
+
+The concept graph engine supports:
+
+- prerequisite chains
+- shortest dependency paths
+- next-ready concept discovery
+- reachability analysis
+- curriculum path generation from a learner’s mastery state to a target concept
+
+### Visualization
+
+Graphs can be exported to:
+
+- Graphviz DOT
+- Cytoscape-style JSON
+
+## Evidence-driven mastery
+
+Mastery is inferred from evidence such as:
+
+- explanations
+- problem solutions
+- transfer tasks
+- project artifacts
+
+Evidence is:
+
+- weighted by type
+- optionally up-weighted for recency
+- summarized by competence dimension
+- compared against concept-specific mastery profiles
+
+## Multi-dimensional mastery
+
+Current dimensions include:
+
+- `correctness`
+- `explanation`
+- `transfer`
+- `project_execution`
+- `critique`
+
+Different concepts can require different subsets of these dimensions.
+
+## Agentic AI students
+
+Didactopus is also architecturally suitable for **AI learner agents**.
+
+An agentic AI student could:
+
+1. ingest domain packs
+2. traverse the concept graph
+3. generate explanations and answers
+4. attempt practice tasks
+5. critique model outputs
+6. complete simulated projects
+7. accumulate evidence
+8. advance only when concept-specific mastery criteria are satisfied
+
+## Repository structure
+
-```
+```text
 didactopus/
-  adaptive_engine/
-  artifact_registry/
-  evidence_engine/
-  learning_graph/
-  mentor/
-  practice/
-  project_advisor/
+├── README.md
+├── artwork/
+├── configs/
+├── docs/
+├── domain-packs/
+├── src/didactopus/
+└── tests/
 ```
-
-Additional directories:
-
-```
-configs/
-docs/
-domain-packs/
-tests/
-artwork/
-```
-
----
-
-# Current Status
-
-Implemented:
-
-✓ domain pack validation
-✓ dependency resolution
-✓ learning graph merge
-✓ adaptive roadmap generation
-✓ evidence‑driven mastery
-✓ multi‑dimensional competence tracking
-✓ concept‑specific mastery profiles
-✓ profile template inheritance
-
-Planned next phases:
-
-• curriculum optimization algorithms
-• active‑learning task generation
-• automated project evaluation
-• distributed pack registry
-• visualization tools for learning graphs
-
----
-
-# Philosophy
-
-Didactopus is built around a simple principle:
-
-> Mastery requires thinking, explaining, testing, and building — not merely receiving answers.
-
-AI can accelerate the process, but genuine learning remains an **active intellectual endeavor**.
-
----
-
-**Didactopus — many arms, one goal: mastery.**
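The template-inheritance rules in the README diff above (a concept's `mastery_profile` may reference a pack-level template and locally override its thresholds) can be sketched as a small resolver. The function name `resolve_profile` and the dict shapes are illustrative assumptions, not the project's actual API:

```python
# Hypothetical sketch of mastery-profile template inheritance.
# A profile may name a template; local fields win over template fields.
def resolve_profile(profile: dict, templates: dict) -> dict:
    base = templates.get(profile.get("template"), {})
    dims = profile.get("required_dimensions") or base.get("required_dimensions", [])
    overrides = {
        **base.get("dimension_threshold_overrides", {}),
        **profile.get("dimension_threshold_overrides", {}),
    }
    return {"required_dimensions": dims, "dimension_threshold_overrides": overrides}


templates = {
    "critique_concept": {
        "required_dimensions": ["correctness", "transfer", "critique"],
        "dimension_threshold_overrides": {"critique": 0.8},
    }
}
resolved = resolve_profile({"template": "critique_concept"}, templates)
```

With this merge order, a pack stays concise (concepts carry only a `template:` line) while still allowing per-concept tightening of individual thresholds.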
@@ -4,25 +4,9 @@ model_provider:
   backend: ollama
   endpoint: http://localhost:11434
   model_name: llama3.1:8b
-  remote:
-    enabled: false
-    provider_name: none
-    endpoint: ""
-    model_name: ""

 platform:
-  verification_required: true
-  require_learner_explanations: true
-  permit_direct_answers: false
-  resurfacing_threshold: 0.55
-  confidence_threshold: 0.8
-  evidence_weights:
-    explanation: 1.0
-    problem: 1.5
-    project: 2.5
-    transfer: 2.0
-  recent_evidence_multiplier: 1.35
-  dimension_thresholds:
+  default_dimension_thresholds:
     correctness: 0.8
     explanation: 0.75
     transfer: 0.7
@@ -32,4 +16,3 @@ platform:
 artifacts:
   local_pack_dirs:
     - domain-packs
-  allow_third_party_packs: true
@@ -0,0 +1,22 @@
+# Concept Graph Engine
+
+The concept graph engine provides the backbone for Didactopus.
+
+## Features in this revision
+
+- prerequisite reasoning across packs
+- cross-pack concept linking
+- semantic similarity scoring hook
+- curriculum pathfinding
+- visualization export
+
+## Edge types
+
+The engine distinguishes between:
+- `prerequisite`
+- `equivalent_to`
+- `related_to`
+- `extends`
+- `depends_on`
+
+Only prerequisite edges are used for strict learning-order pathfinding.
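The edge-type rule above ("only prerequisite edges are used for strict learning-order pathfinding") can be illustrated without the real engine. This dependency-free sketch (the names `edges` and `prerequisite_order` are assumptions) filters to prerequisite edges and topologically sorts them with Kahn's algorithm:

```python
# Sketch: keep only "prerequisite" edges when computing a strict learning order;
# softer relations like related_to never constrain ordering.
edges = [
    ("prior", "posterior", "prerequisite"),
    ("posterior", "model-checking", "related_to"),  # ignored for ordering
    ("descriptive-statistics", "probability-basics", "prerequisite"),
]


def prerequisite_order(edges):
    # Kahn's algorithm over the prerequisite-only subgraph.
    prereq = [(u, v) for u, v, rel in edges if rel == "prerequisite"]
    nodes = {n for e in prereq for n in e}
    indeg = {n: 0 for n in nodes}
    for _, v in prereq:
        indeg[v] += 1
    order = []
    frontier = sorted(n for n in nodes if indeg[n] == 0)
    while frontier:
        n = frontier.pop(0)
        order.append(n)
        for u, v in prereq:
            if u == n:
                indeg[v] -= 1
                if indeg[v] == 0:
                    frontier.append(v)
    return order
```

Note that `model-checking` drops out entirely: it is reachable only through a `related_to` edge, so it places no ordering constraint on the curriculum.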
@@ -0,0 +1,29 @@
+# Graph-Aware Planner
+
+The graph-aware planner ranks next concepts using a transparent utility model.
+
+## Inputs
+
+- concept graph
+- learner mastery state
+- evidence summaries
+- target concepts
+- semantic similarity estimates
+- project catalog
+
+## Current scoring terms
+
+- **readiness_bonus**: concept is currently studyable
+- **target_distance_weight**: concepts closer to the target score higher
+- **weak_dimension_bonus**: concepts with known weakness signals are prioritized
+- **fragile_review_bonus**: resurfaced or fragile concepts are review-prioritized
+- **project_unlock_bonus**: concepts that unlock projects score higher
+- **semantic_similarity_weight**: concepts semantically close to targets gain weight
+
+## Future work
+
+- learner time budgets
+- spaced repetition costs
+- multi-objective planning
+- planning across multiple targets
+- reinforcement learning over curriculum policies
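The scoring terms in the planner document above combine naturally as a weighted linear utility. The weights and signal names below are illustrative assumptions (the diff does not show the actual coefficients), but the shape matches the "transparent and configurable" description:

```python
# Hypothetical linear utility over the planner's documented scoring terms.
# Weights are placeholders; each signal is assumed normalized to [0, 1].
WEIGHTS = {
    "readiness_bonus": 1.0,
    "target_distance_weight": 0.5,
    "weak_dimension_bonus": 0.75,
    "fragile_review_bonus": 0.6,
    "project_unlock_bonus": 0.4,
    "semantic_similarity_weight": 0.3,
}


def utility(signals: dict[str, float]) -> float:
    # Missing signals contribute 0, so unknown terms never penalize a concept.
    return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)


def rank(candidates: dict[str, dict[str, float]]) -> list[str]:
    return sorted(candidates, key=lambda c: utility(candidates[c]), reverse=True)
```

Because every term is a named weight, the ranking stays inspectable: a learner (or test harness) can ask which term pushed a concept to the top.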
@@ -1,6 +1,10 @@
 concepts:
   - id: model-checking
     title: Model Checking
+    description: Critiquing assumptions, fit, and implications of a probabilistic model.
     prerequisites: []
     mastery_signals:
       - compare model assumptions
+      - critique a simple inference model
+    mastery_profile:
+      template: critique_heavy
@@ -4,7 +4,7 @@ version: 0.2.0
 schema_version: "1"
 didactopus_min_version: 0.1.0
 didactopus_max_version: 0.9.99
-description: Simple applied inference pack.
+description: Applied inference pack emphasizing transfer and critique.
 author: Wesley R. Elsberry
 license: MIT
 dependencies:
@@ -12,3 +12,19 @@ dependencies:
     min_version: 0.1.0
     max_version: 1.0.0
 overrides: []
+profile_templates:
+  critique_heavy:
+    required_dimensions:
+      - correctness
+      - transfer
+      - critique
+    dimension_threshold_overrides:
+      transfer: 0.78
+      critique: 0.73
+cross_pack_links:
+  - source_concept: model-checking
+    target_concept: bayes-extension::posterior
+    relation: extends
+  - source_concept: model-checking
+    target_concept: foundations-statistics::probability-basics
+    relation: related_to
@@ -1,12 +1,27 @@
 concepts:
   - id: prior
     title: Prior
+    description: A probability distribution representing knowledge before evidence.
     prerequisites: []
     mastery_signals:
       - explain a prior distribution
+      - compare reasonable priors
+    mastery_profile:
+      template: bayes_concept

   - id: posterior
     title: Posterior
+    description: Updated beliefs after conditioning on observed evidence.
     prerequisites:
       - prior
     mastery_signals:
       - explain updating beliefs
+      - compare prior and posterior distributions
+    mastery_profile:
+      required_dimensions:
+        - correctness
+        - explanation
+        - transfer
+        - critique
+      dimension_threshold_overrides:
+        critique: 0.78
@@ -12,3 +12,18 @@ dependencies:
     min_version: 1.0.0
     max_version: 1.9.99
 overrides: []
+profile_templates:
+  bayes_concept:
+    required_dimensions:
+      - correctness
+      - explanation
+      - transfer
+    dimension_threshold_overrides:
+      transfer: 0.74
+cross_pack_links:
+  - source_concept: prior
+    target_concept: foundations-statistics::probability-basics
+    relation: depends_on
+  - source_concept: posterior
+    target_concept: applied-inference::model-checking
+    relation: related_to
@@ -1,12 +1,26 @@
 concepts:
   - id: descriptive-statistics
     title: Descriptive Statistics
+    description: Core summaries of distributions, central tendency, and spread.
     prerequisites: []
     mastery_signals:
-      - explain central tendency
+      - explain mean median and variance
+      - summarize a simple dataset
+    mastery_profile:
+      template: foundation_concept

   - id: probability-basics
     title: Probability Basics
+    description: Basic event probability and conditional probability.
     prerequisites:
       - descriptive-statistics
     mastery_signals:
       - explain event probability
+      - calculate simple conditional probability
+    mastery_profile:
+      required_dimensions:
+        - correctness
+        - explanation
+        - transfer
+      dimension_threshold_overrides:
+        transfer: 0.72
@@ -9,3 +9,11 @@ author: Wesley R. Elsberry
 license: MIT
 dependencies: []
 overrides: []
+profile_templates:
+  foundation_concept:
+    required_dimensions:
+      - correctness
+      - explanation
+    dimension_threshold_overrides:
+      explanation: 0.75
+cross_pack_links: []
@@ -10,8 +10,11 @@ readme = "README.md"
 requires-python = ">=3.10"
 license = {text = "MIT"}
 authors = [{name = "Wesley R. Elsberry"}]
-dependencies = ["pydantic>=2.7", "pyyaml>=6.0", "networkx>=3.2"]
+dependencies = [
+    "pydantic>=2.7",
+    "pyyaml>=6.0",
+    "networkx>=3.2",
+]
 [project.optional-dependencies]
 dev = ["pytest>=8.0", "ruff>=0.6"]
@@ -50,8 +50,8 @@ def validate_pack(pack_dir: str | Path) -> PackValidationResult:
     result.manifest = PackManifest.model_validate(_load_yaml(pack_path / "pack.yaml"))
     if not _version_in_range(DIDACTOPUS_VERSION, result.manifest.didactopus_min_version, result.manifest.didactopus_max_version):
         result.errors.append(
-            f"incompatible with Didactopus core version {DIDACTOPUS_VERSION}; supported range is "
-            f"{result.manifest.didactopus_min_version}..{result.manifest.didactopus_max_version}"
+            f"incompatible with Didactopus core version {DIDACTOPUS_VERSION}; "
+            f"supported range is {result.manifest.didactopus_min_version}..{result.manifest.didactopus_max_version}"
         )
     result.loaded_files["concepts"] = ConceptsFile.model_validate(_load_yaml(pack_path / "concepts.yaml"))
     result.loaded_files["roadmap"] = RoadmapFile.model_validate(_load_yaml(pack_path / "roadmap.yaml"))
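The `_version_in_range` helper called in the hunk above is not shown in this diff. A plausible stand-in (an assumption, not the project's code) compares dotted versions as integer tuples, which matches how `0.9.99`-style bounds behave in the manifests:

```python
# Hypothetical stand-in for the _version_in_range helper used by validate_pack.
# Assumes purely numeric dotted versions like "0.2.0" or "9999.9999.9999".
def version_in_range(version: str, minimum: str, maximum: str) -> bool:
    def parse(v: str) -> tuple[int, ...]:
        return tuple(int(part) for part in v.split("."))
    return parse(minimum) <= parse(version) <= parse(maximum)
```

Tuple comparison gives correct ordering where naive string comparison fails (for example `"0.10.0" < "0.9.0"` as strings but not as versions).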
@@ -8,6 +8,23 @@ class DependencySpec(BaseModel):
     max_version: str = "9999.9999.9999"


+class MasteryProfileSpec(BaseModel):
+    template: str | None = None
+    required_dimensions: list[str] = Field(default_factory=list)
+    dimension_threshold_overrides: dict[str, float] = Field(default_factory=dict)
+
+
+class CrossPackLinkSpec(BaseModel):
+    source_concept: str
+    target_concept: str
+    relation: str
+
+
+class ProfileTemplateSpec(BaseModel):
+    required_dimensions: list[str] = Field(default_factory=list)
+    dimension_threshold_overrides: dict[str, float] = Field(default_factory=dict)
+
+
 class PackManifest(BaseModel):
     name: str
     display_name: str

@@ -20,13 +37,17 @@ class PackManifest(BaseModel):
     license: str = "unspecified"
     dependencies: list[DependencySpec] = Field(default_factory=list)
     overrides: list[str] = Field(default_factory=list)
+    profile_templates: dict[str, ProfileTemplateSpec] = Field(default_factory=dict)
+    cross_pack_links: list[CrossPackLinkSpec] = Field(default_factory=list)


 class ConceptEntry(BaseModel):
     id: str
     title: str
+    description: str = ""
     prerequisites: list[str] = Field(default_factory=list)
     mastery_signals: list[str] = Field(default_factory=list)
+    mastery_profile: MasteryProfileSpec = Field(default_factory=MasteryProfileSpec)


 class ConceptsFile(BaseModel):
@ -0,0 +1,94 @@
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
from dataclasses import dataclass, field
|
||||||
|
from typing import Any
|
||||||
|
from pathlib import Path
|
||||||
|
import json
|
||||||
|
import networkx as nx
|
||||||
|
|
||||||
|
REL_PREREQ = "prerequisite"
|
||||||
|
REL_EQUIVALENT = "equivalent_to"
|
||||||
|
REL_RELATED = "related_to"
|
||||||
|
REL_EXTENDS = "extends"
|
||||||
|
REL_DEPENDS = "depends_on"
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class ConceptGraph:
|
||||||
|
graph: nx.MultiDiGraph = field(default_factory=nx.MultiDiGraph)
|
||||||
|
|
||||||
|
def add_concept(self, concept_key: str, metadata: dict[str, Any] | None = None) -> None:
|
||||||
|
self.graph.add_node(concept_key, **(metadata or {}))
|
||||||
|
|
||||||
|
def add_edge(self, source: str, target: str, relation: str) -> None:
|
||||||
|
self.graph.add_edge(source, target, relation=relation)
|
||||||
|
|
||||||
|
def add_prerequisite(self, prereq: str, concept: str) -> None:
|
||||||
|
self.add_edge(prereq, concept, REL_PREREQ)
|
||||||
|
|
||||||
|
def add_cross_link(self, source: str, target: str, relation: str) -> None:
|
||||||
|
self.add_edge(source, target, relation)
|
||||||
|
|
||||||
|
def prerequisite_subgraph(self) -> nx.DiGraph:
|
||||||
|
g = nx.DiGraph()
|
||||||
|
+        for node, data in self.graph.nodes(data=True):
+            g.add_node(node, **data)
+        for u, v, data in self.graph.edges(data=True):
+            if data.get("relation") == REL_PREREQ:
+                g.add_edge(u, v)
+        return g
+
+    def prerequisites(self, concept: str) -> list[str]:
+        return list(self.prerequisite_subgraph().predecessors(concept))
+
+    def prerequisite_chain(self, concept: str) -> list[str]:
+        return list(nx.ancestors(self.prerequisite_subgraph(), concept))
+
+    def dependents(self, concept: str) -> list[str]:
+        return list(self.prerequisite_subgraph().successors(concept))
+
+    def learning_path(self, start: str, target: str) -> list[str] | None:
+        try:
+            return nx.shortest_path(self.prerequisite_subgraph(), start, target)
+        except nx.NetworkXNoPath:
+            return None
+
+    def curriculum_path_to_target(self, mastered: set[str], target: str) -> list[str]:
+        pg = self.prerequisite_subgraph()
+        needed = set(nx.ancestors(pg, target)) | {target}
+        ordered = [n for n in nx.topological_sort(pg) if n in needed]
+        return [n for n in ordered if n not in mastered]
+
+    def ready_concepts(self, mastered: set[str]) -> list[str]:
+        pg = self.prerequisite_subgraph()
+        ready = []
+        for node in pg.nodes:
+            if node in mastered:
+                continue
+            if set(pg.predecessors(node)).issubset(mastered):
+                ready.append(node)
+        return ready
+
+    def related_concepts(self, concept: str, relation_types: set[str] | None = None) -> list[str]:
+        relation_types = relation_types or {REL_EQUIVALENT, REL_RELATED, REL_EXTENDS, REL_DEPENDS}
+        found = []
+        for _, v, data in self.graph.out_edges(concept, data=True):
+            if data.get("relation") in relation_types:
+                found.append(v)
+        return found
+
+    def export_graphviz(self, path: str) -> None:
+        lines = ["digraph Didactopus {"]
+        for node in self.graph.nodes:
+            lines.append(f'  "{node}";')
+        for u, v, data in self.graph.edges(data=True):
+            lines.append(f'  "{u}" -> "{v}" [label="{data.get("relation", "")}"];')
+        lines.append("}")
+        Path(path).write_text("\n".join(lines), encoding="utf-8")
+
+    def export_cytoscape_json(self, path: str) -> None:
+        data = {
+            "nodes": [{"data": {"id": n, **attrs}} for n, attrs in self.graph.nodes(data=True)],
+            "edges": [{"data": {"source": u, "target": v, **attrs}} for u, v, attrs in self.graph.edges(data=True)],
+        }
+        Path(path).write_text(json.dumps(data, indent=2), encoding="utf-8")
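The readiness rule behind `ready_concepts` above can be sketched without the rest of the class — a minimal pure-Python version, assuming only a concept-to-prerequisites mapping (the concept names here are illustrative, not part of the commit):

```python
# A concept is "ready" when it is not yet mastered and every one of its
# prerequisites is mastered. This mirrors ready_concepts in the diff above,
# but uses a plain dict instead of a networkx prerequisite subgraph.
prereqs = {
    "probability-basics": [],
    "prior": ["probability-basics"],
    "posterior": ["prior"],
}

def ready_concepts(prereqs: dict[str, list[str]], mastered: set[str]) -> list[str]:
    return [
        concept
        for concept, reqs in prereqs.items()
        if concept not in mastered and set(reqs).issubset(mastered)
    ]

print(ready_concepts(prereqs, set()))                   # only root concepts are ready
print(ready_concepts(prereqs, {"probability-basics"}))  # mastering the root unlocks "prior"
```

Mastering a concept removes it from the ready set and may unlock its dependents, which is exactly how the planner's frontier advances.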
@@ -3,41 +3,8 @@ from pydantic import BaseModel, Field
 import yaml
 
 
-class ProviderEndpoint(BaseModel):
-    backend: str = "ollama"
-    endpoint: str = "http://localhost:11434"
-    model_name: str = "llama3.1:8b"
-
-
-class RemoteProvider(BaseModel):
-    enabled: bool = False
-    provider_name: str = "none"
-    endpoint: str = ""
-    model_name: str = ""
-
-
-class ModelProviderConfig(BaseModel):
-    mode: str = Field(default="local_first")
-    local: ProviderEndpoint = Field(default_factory=ProviderEndpoint)
-    remote: RemoteProvider = Field(default_factory=RemoteProvider)
-
-
 class PlatformConfig(BaseModel):
-    verification_required: bool = True
-    require_learner_explanations: bool = True
-    permit_direct_answers: bool = False
-    resurfacing_threshold: float = 0.55
-    confidence_threshold: float = 0.8
-    evidence_weights: dict[str, float] = Field(
-        default_factory=lambda: {
-            "explanation": 1.0,
-            "problem": 1.5,
-            "project": 2.5,
-            "transfer": 2.0,
-        }
-    )
-    recent_evidence_multiplier: float = 1.35
-    dimension_thresholds: dict[str, float] = Field(
+    default_dimension_thresholds: dict[str, float] = Field(
         default_factory=lambda: {
             "correctness": 0.8,
             "explanation": 0.75,
@@ -48,15 +15,18 @@ class PlatformConfig(BaseModel):
     )
 
 
-class ArtifactConfig(BaseModel):
-    local_pack_dirs: list[str] = Field(default_factory=lambda: ["domain-packs"])
-    allow_third_party_packs: bool = True
+class PlannerConfig(BaseModel):
+    readiness_bonus: float = 2.0
+    target_distance_weight: float = 1.0
+    weak_dimension_bonus: float = 1.2
+    fragile_review_bonus: float = 1.5
+    project_unlock_bonus: float = 0.8
+    semantic_similarity_weight: float = 1.0
 
 
 class AppConfig(BaseModel):
-    model_provider: ModelProviderConfig = Field(default_factory=ModelProviderConfig)
     platform: PlatformConfig = Field(default_factory=PlatformConfig)
-    artifacts: ArtifactConfig = Field(default_factory=ArtifactConfig)
+    planner: PlannerConfig = Field(default_factory=PlannerConfig)
 
 
 def load_config(path: str | Path) -> AppConfig:
@@ -0,0 +1,46 @@
+from __future__ import annotations
+
+from .artifact_registry import PackValidationResult
+from .concept_graph import ConceptGraph
+from .learning_graph import build_merged_learning_graph, namespaced_concept
+from .semantic_similarity import concept_similarity
+
+
+def build_concept_graph(results: list[PackValidationResult], default_dimension_thresholds: dict[str, float]) -> ConceptGraph:
+    merged = build_merged_learning_graph(results, default_dimension_thresholds)
+    graph = ConceptGraph()
+
+    for concept_key, data in merged.concept_data.items():
+        graph.add_concept(concept_key, data)
+
+    for concept_key, data in merged.concept_data.items():
+        for prereq in data["prerequisites"]:
+            if prereq in merged.concept_data:
+                graph.add_prerequisite(prereq, concept_key)
+
+    for result in results:
+        if result.manifest is None or not result.is_valid:
+            continue
+        pack_name = result.manifest.name
+        for link in result.manifest.cross_pack_links:
+            source = link.source_concept if "::" in link.source_concept else namespaced_concept(pack_name, link.source_concept)
+            target = link.target_concept
+            if source in graph.graph.nodes and target in graph.graph.nodes:
+                graph.add_cross_link(source, target, link.relation)
+
+    return graph
+
+
+def suggest_semantic_links(graph: ConceptGraph, minimum_similarity: float = 0.35) -> list[tuple[str, str, float]]:
+    concepts = list(graph.graph.nodes(data=True))
+    found = []
+    for i in range(len(concepts)):
+        key_a, data_a = concepts[i]
+        for j in range(i + 1, len(concepts)):
+            key_b, data_b = concepts[j]
+            if key_a.split("::")[0] == key_b.split("::")[0]:
+                continue
+            sim = concept_similarity(data_a, data_b)
+            if sim >= minimum_similarity:
+                found.append((key_a, key_b, sim))
+    return sorted(found, key=lambda x: x[2], reverse=True)
@@ -2,9 +2,9 @@ from __future__ import annotations
 
 from dataclasses import dataclass, field
 from typing import Any
-import networkx as nx
 
 from .artifact_registry import PackValidationResult, topological_pack_order
+from .profile_templates import resolve_mastery_profile
 
 
 def namespaced_concept(pack_name: str, concept_id: str) -> str:
@@ -13,38 +13,44 @@ def namespaced_concept(pack_name: str, concept_id: str) -> str:
 
 @dataclass
 class MergedLearningGraph:
-    graph: nx.DiGraph = field(default_factory=nx.DiGraph)
     concept_data: dict[str, dict[str, Any]] = field(default_factory=dict)
     project_catalog: list[dict[str, Any]] = field(default_factory=list)
     load_order: list[str] = field(default_factory=list)
 
 
-def build_merged_learning_graph(results: list[PackValidationResult]) -> MergedLearningGraph:
+def build_merged_learning_graph(
+    results: list[PackValidationResult],
+    default_dimension_thresholds: dict[str, float],
+) -> MergedLearningGraph:
     merged = MergedLearningGraph()
     valid = {r.manifest.name: r for r in results if r.manifest is not None and r.is_valid}
     merged.load_order = topological_pack_order(results)
 
     for pack_name in merged.load_order:
         result = valid[pack_name]
+        templates = {
+            name: {
+                "required_dimensions": list(spec.required_dimensions),
+                "dimension_threshold_overrides": dict(spec.dimension_threshold_overrides),
+            }
+            for name, spec in result.manifest.profile_templates.items()
+        }
         for concept in result.loaded_files["concepts"].concepts:
             key = namespaced_concept(pack_name, concept.id)
+            resolved_profile = resolve_mastery_profile(
+                concept.mastery_profile.model_dump(),
+                templates,
+                default_dimension_thresholds,
+            )
             merged.concept_data[key] = {
                 "id": concept.id,
                 "title": concept.title,
+                "description": concept.description,
                 "pack": pack_name,
-                "prerequisites": list(concept.prerequisites),
+                "prerequisites": [namespaced_concept(pack_name, p) for p in concept.prerequisites],
                 "mastery_signals": list(concept.mastery_signals),
+                "mastery_profile": resolved_profile,
             }
-            merged.graph.add_node(key)
-
-    for pack_name in merged.load_order:
-        result = valid[pack_name]
-        for concept in result.loaded_files["concepts"].concepts:
-            concept_key = namespaced_concept(pack_name, concept.id)
-            for prereq in concept.prerequisites:
-                prereq_key = namespaced_concept(pack_name, prereq)
-                if prereq_key in merged.graph:
-                    merged.graph.add_edge(prereq_key, concept_key)
         for project in result.loaded_files["projects"].projects:
             merged.project_catalog.append({
                 "id": f"{pack_name}::{project.id}",
@@ -2,141 +2,101 @@ import argparse
 import os
 from pathlib import Path
 
-from .adaptive_engine import LearnerProfile, build_adaptive_plan
-from .artifact_registry import (
-    check_pack_dependencies,
-    detect_dependency_cycles,
-    discover_domain_packs,
-    topological_pack_order,
-)
+from .artifact_registry import check_pack_dependencies, detect_dependency_cycles, discover_domain_packs
 from .config import load_config
-from .evidence_engine import EvidenceItem, ingest_evidence_bundle
-from .learning_graph import build_merged_learning_graph
-from .mentor import generate_socratic_prompt
-from .model_provider import ModelProvider
-from .practice import generate_practice_task
-from .project_advisor import suggest_capstone
+from .graph_builder import build_concept_graph, suggest_semantic_links
+from .planner import PlannerWeights, rank_next_concepts
 
 
 def build_parser() -> argparse.ArgumentParser:
-    parser = argparse.ArgumentParser(description="Didactopus multi-dimensional mastery scaffold")
-    parser.add_argument("--domain", required=True)
-    parser.add_argument("--goal", required=True)
-    parser.add_argument(
-        "--config",
-        default=os.environ.get("DIDACTOPUS_CONFIG", "configs/config.example.yaml"),
-    )
+    parser = argparse.ArgumentParser(description="Didactopus graph-aware planner")
+    parser.add_argument("--target", default="bayes-extension::posterior")
+    parser.add_argument("--mastered", nargs="*", default=[])
+    parser.add_argument("--export-dot", default="")
+    parser.add_argument("--export-cytoscape", default="")
+    parser.add_argument("--config", default=os.environ.get("DIDACTOPUS_CONFIG", "configs/config.example.yaml"))
     return parser
 
 
 def main() -> None:
     args = build_parser().parse_args()
     config = load_config(Path(args.config))
-    provider = ModelProvider(config.model_provider)
-    packs = discover_domain_packs(config.artifacts.local_pack_dirs)
-    dependency_errors = check_pack_dependencies(packs)
-    cycles = detect_dependency_cycles(packs)
+    results = discover_domain_packs(["domain-packs"])
+    dep_errors = check_pack_dependencies(results)
+    cycles = detect_dependency_cycles(results)
 
-    print("== Didactopus ==")
-    print("Many arms, one goal — mastery.")
-    print()
-
-    if dependency_errors:
-        print("== Dependency Errors ==")
-        for err in dependency_errors:
+    if dep_errors:
+        print("Dependency errors:")
+        for err in dep_errors:
             print(f"- {err}")
-        print()
 
     if cycles:
-        print("== Dependency Cycles ==")
+        print("Dependency cycles:")
         for cycle in cycles:
-            print(f"- cycle: {' -> '.join(cycle)}")
+            print(f"- {' -> '.join(cycle)}")
         return
 
-    print("== Pack Load Order ==")
-    for name in topological_pack_order(packs):
-        print(f"- {name}")
-    print()
-
-    merged = build_merged_learning_graph(packs)
-    profile = LearnerProfile(
-        learner_id="demo-learner",
-        display_name="Demo Learner",
-        goals=[args.goal],
-        mastered_concepts=set(),
-        hide_mastered=True,
-    )
-
-    evidence_items = [
-        EvidenceItem(
-            concept_key="foundations-statistics::descriptive-statistics",
-            evidence_type="project",
-            score=0.88,
-            is_recent=True,
-            rubric_dimensions={
-                "correctness": 0.9,
-                "explanation": 0.83,
-                "transfer": 0.79,
-                "project_execution": 0.88,
-                "critique": 0.74,
-            },
-            notes="Strong integrated performance.",
-        ),
-        EvidenceItem(
-            concept_key="bayes-extension::prior",
-            evidence_type="problem",
-            score=0.68,
-            is_recent=True,
-            rubric_dimensions={
-                "correctness": 0.75,
-                "explanation": 0.62,
-                "transfer": 0.55,
-                "critique": 0.58,
-            },
-            notes="Knows some basics, weak transfer and critique.",
-        ),
-    ]
-
-    evidence_state = ingest_evidence_bundle(
-        profile=profile,
-        items=evidence_items,
-        resurfacing_threshold=config.platform.resurfacing_threshold,
-        confidence_threshold=config.platform.confidence_threshold,
-        type_weights=config.platform.evidence_weights,
-        recent_multiplier=config.platform.recent_evidence_multiplier,
-        dimension_thresholds=config.platform.dimension_thresholds,
-    )
-
-    plan = build_adaptive_plan(merged, profile)
-
-    print("== Multi-Dimensional Evidence Summary ==")
-    for concept_key, summary in evidence_state.summary_by_concept.items():
-        print(
-            f"- {concept_key}: weighted_mean={summary.weighted_mean_score:.2f}, "
-            f"confidence={summary.confidence:.2f}, mastered={summary.mastered}"
-        )
-        if summary.dimension_means:
-            dims = ", ".join(f"{k}={v:.2f}" for k, v in sorted(summary.dimension_means.items()))
-            print(f"  * dimensions: {dims}")
-        if summary.weak_dimensions:
-            print(f"  * weak dimensions: {', '.join(summary.weak_dimensions)}")
+    graph = build_concept_graph(results, config.platform.default_dimension_thresholds)
+    mastered = set(args.mastered)
+
+    weak_dimensions_by_concept = {
+        "bayes-extension::prior": ["explanation", "transfer"],
+    }
+    fragile_concepts = {"bayes-extension::prior"}
+
+    ranked = rank_next_concepts(
+        graph=graph,
+        mastered=mastered,
+        targets=[args.target],
+        weak_dimensions_by_concept=weak_dimensions_by_concept,
+        fragile_concepts=fragile_concepts,
+        project_catalog=[
+            {
+                "id": "bayes-extension::bayes-mini-project",
+                "prerequisites": ["bayes-extension::prior"],
+            },
+            {
+                "id": "applied-inference::inference-project",
+                "prerequisites": ["applied-inference::model-checking"],
+            },
+        ],
+        weights=PlannerWeights(
+            readiness_bonus=config.planner.readiness_bonus,
+            target_distance_weight=config.planner.target_distance_weight,
+            weak_dimension_bonus=config.planner.weak_dimension_bonus,
+            fragile_review_bonus=config.planner.fragile_review_bonus,
+            project_unlock_bonus=config.planner.project_unlock_bonus,
+            semantic_similarity_weight=config.planner.semantic_similarity_weight,
+        ),
+    )
+
+    print("== Didactopus Graph-Aware Planner ==")
+    print(f"Target concept: {args.target}")
     print()
 
-    print("== Mastered Concepts ==")
-    if profile.mastered_concepts:
-        for concept_key in sorted(profile.mastered_concepts):
-            print(f"- {concept_key}")
-    else:
-        print("- none yet")
+    print("Curriculum path from current mastery:")
+    for item in graph.curriculum_path_to_target(mastered, args.target):
+        print(f"- {item}")
     print()
 
-    print("== Next Best Concepts ==")
-    for concept in plan.next_best_concepts:
-        print(f"- {concept}")
+    print("Ready concepts:")
+    for item in graph.ready_concepts(mastered):
+        print(f"- {item}")
     print()
 
-    focus_concept = "bayes-extension::prior"
-    weak_dims = evidence_state.summary_by_concept.get(focus_concept).weak_dimensions if focus_concept in evidence_state.summary_by_concept else []
-    print(generate_socratic_prompt(provider, focus_concept, weak_dims))
-    print(generate_practice_task(provider, focus_concept, weak_dims))
-    print(suggest_capstone(provider, args.domain))
+    print("Ranked next concepts:")
+    for item in ranked:
+        print(f"- {item['concept']}: {item['score']:.2f}")
+        for name, value in item["components"].items():
+            print(f"  * {name}: {value:.2f}")
+    print()
+
+    print("Suggested semantic links:")
+    for a, b, score in suggest_semantic_links(graph, minimum_similarity=0.10)[:8]:
+        print(f"- {a} <-> {b} : {score:.2f}")
+
+    if args.export_dot:
+        graph.export_graphviz(args.export_dot)
+        print(f"Exported Graphviz DOT to {args.export_dot}")
+    if args.export_cytoscape:
+        graph.export_cytoscape_json(args.export_cytoscape)
+        print(f"Exported Cytoscape JSON to {args.export_cytoscape}")
+
+
+if __name__ == "__main__":
+    main()
@@ -13,10 +13,6 @@ class ModelProvider:
     def __init__(self, config: ModelProviderConfig) -> None:
         self.config = config
 
-    def describe(self) -> str:
-        local = self.config.local
-        return f"mode={self.config.mode}, local={local.backend}:{local.model_name}"
-
     def generate(self, prompt: str) -> ModelResponse:
         local = self.config.local
         preview = prompt.strip().replace("\n", " ")[:120]
@@ -0,0 +1,100 @@
+from __future__ import annotations
+
+from dataclasses import dataclass
+from math import inf
+
+import networkx as nx
+
+from .concept_graph import ConceptGraph
+from .semantic_similarity import concept_similarity
+
+
+@dataclass
+class PlannerWeights:
+    readiness_bonus: float = 2.0
+    target_distance_weight: float = 1.0
+    weak_dimension_bonus: float = 1.2
+    fragile_review_bonus: float = 1.5
+    project_unlock_bonus: float = 0.8
+    semantic_similarity_weight: float = 1.0
+
+
+def _distance_bonus(graph: ConceptGraph, concept: str, targets: list[str]) -> float:
+    # Fewer prerequisite hops to a target concept -> larger bonus.
+    pg = graph.prerequisite_subgraph()
+    best = inf
+    for target in targets:
+        try:
+            dist = len(nx.shortest_path(pg, concept, target)) - 1
+            best = min(best, dist)
+        except (nx.NetworkXNoPath, nx.NodeNotFound):
+            continue
+    if best == inf:
+        return 0.0
+    return 1.0 / (1.0 + best)
+
+
+def _project_unlock_bonus(concept: str, project_catalog: list[dict]) -> float:
+    count = 0
+    for project in project_catalog:
+        if concept in project.get("prerequisites", []):
+            count += 1
+    return float(count)
+
+
+def _semantic_bonus(graph: ConceptGraph, concept: str, targets: list[str]) -> float:
+    data_a = graph.graph.nodes[concept]
+    best = 0.0
+    for target in targets:
+        if target not in graph.graph.nodes:
+            continue
+        data_b = graph.graph.nodes[target]
+        best = max(best, concept_similarity(data_a, data_b))
+    return best
+
+
+def rank_next_concepts(
+    graph: ConceptGraph,
+    mastered: set[str],
+    targets: list[str],
+    weak_dimensions_by_concept: dict[str, list[str]],
+    fragile_concepts: set[str],
+    project_catalog: list[dict],
+    weights: PlannerWeights,
+) -> list[dict]:
+    ready = graph.ready_concepts(mastered)
+    ranked = []
+
+    for concept in ready:
+        score = 0.0
+        components = {}
+
+        readiness = weights.readiness_bonus
+        score += readiness
+        components["readiness"] = readiness
+
+        distance = weights.target_distance_weight * _distance_bonus(graph, concept, targets)
+        score += distance
+        components["target_distance"] = distance
+
+        weak = weights.weak_dimension_bonus * len(weak_dimensions_by_concept.get(concept, []))
+        score += weak
+        components["weak_dimensions"] = weak
+
+        fragile = weights.fragile_review_bonus if concept in fragile_concepts else 0.0
+        score += fragile
+        components["fragile_review"] = fragile
+
+        project = weights.project_unlock_bonus * _project_unlock_bonus(concept, project_catalog)
+        score += project
+        components["project_unlock"] = project
+
+        semantic = weights.semantic_similarity_weight * _semantic_bonus(graph, concept, targets)
+        score += semantic
+        components["semantic_similarity"] = semantic
+
+        ranked.append({
+            "concept": concept,
+            "score": score,
+            "components": components,
+        })
+
+    ranked.sort(key=lambda item: item["score"], reverse=True)
+    return ranked
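The utility function above is purely additive, so a score can be checked by hand. A minimal standalone sketch, assuming the default `PlannerWeights` values from the diff (the `Weights` class and field names below are illustrative shorthand, not the repo's API):

```python
from dataclasses import dataclass

# Mirrors the default PlannerWeights values from the planner module above.
@dataclass
class Weights:
    readiness: float = 2.0
    distance: float = 1.0
    weak_dim: float = 1.2
    fragile: float = 1.5
    project: float = 0.8
    semantic: float = 1.0

def score(w: Weights, hops_to_target: int, weak_dims: int,
          is_fragile: bool, projects_unlocked: int, similarity: float) -> float:
    total = w.readiness                                   # every ready concept gets the base bonus
    total += w.distance * (1.0 / (1.0 + hops_to_target))  # closer to a target -> larger bonus
    total += w.weak_dim * weak_dims                       # one bonus per weak competence dimension
    total += w.fragile if is_fragile else 0.0             # review priority for fragile concepts
    total += w.project * projects_unlocked                # concepts that unlock projects score higher
    total += w.semantic * similarity                      # semantic neighborhood around learner goals
    return total

# One hop from a target, two weak dimensions, fragile, unlocks one project, similarity 0.5:
# 2.0 + 1.0*0.5 + 1.2*2 + 1.5 + 0.8*1 + 1.0*0.5 = 7.7
print(score(Weights(), 1, 2, True, 1, 0.5))
```

Because every component is non-negative, the readiness bonus acts as a floor and the other terms only reorder concepts within the ready frontier.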
@@ -1,34 +1,36 @@
-from dataclasses import dataclass
-from typing import Dict, List
+from typing import Any
 
 
-@dataclass
-class ProfileTemplate:
-    name: str
-    required_dimensions: List[str]
-    dimension_threshold_overrides: Dict[str, float]
-
-
-def resolve_mastery_profile(concept_profile, templates, default_profile):
-    if concept_profile is None:
-        return default_profile
-
-    template_name = concept_profile.get("template")
-    if template_name:
-        base = templates.get(template_name, default_profile)
-        profile = {
-            "required_dimensions": list(base.required_dimensions),
-            "dimension_threshold_overrides": dict(base.dimension_threshold_overrides),
-        }
-    else:
-        profile = default_profile.copy()
-
-    if "required_dimensions" in concept_profile:
-        profile["required_dimensions"] = concept_profile["required_dimensions"]
-
-    if "dimension_threshold_overrides" in concept_profile:
-        profile["dimension_threshold_overrides"].update(
-            concept_profile["dimension_threshold_overrides"]
-        )
-
-    return profile
+def resolve_mastery_profile(
+    concept_profile: dict[str, Any] | None,
+    templates: dict[str, dict[str, Any]],
+    default_thresholds: dict[str, float],
+) -> dict[str, Any]:
+    default_profile = {
+        "required_dimensions": list(default_thresholds.keys()),
+        "dimension_threshold_overrides": {},
+    }
+    if not concept_profile:
+        effective = dict(default_profile)
+    else:
+        template_name = concept_profile.get("template")
+        if template_name and template_name in templates:
+            tmpl = templates[template_name]
+            effective = {
+                "required_dimensions": list(tmpl.get("required_dimensions", default_profile["required_dimensions"])),
+                "dimension_threshold_overrides": dict(tmpl.get("dimension_threshold_overrides", {})),
+            }
+        else:
+            effective = dict(default_profile)
+        if concept_profile.get("required_dimensions"):
+            effective["required_dimensions"] = list(concept_profile["required_dimensions"])
+        if concept_profile.get("dimension_threshold_overrides"):
+            effective["dimension_threshold_overrides"].update(concept_profile["dimension_threshold_overrides"])
+
+    thresholds = dict(default_thresholds)
+    thresholds.update(effective["dimension_threshold_overrides"])
+    return {
+        "required_dimensions": effective["required_dimensions"],
+        "dimension_threshold_overrides": dict(effective["dimension_threshold_overrides"]),
+        "effective_thresholds": {dim: thresholds[dim] for dim in effective["required_dimensions"] if dim in thresholds},
+    }
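The layering order in `resolve_mastery_profile` above reduces to three dict updates. A minimal sketch with hypothetical threshold values (the numbers below are illustrative, not taken from the repo's config):

```python
# Threshold resolution order: platform defaults, then template overrides,
# then per-concept overrides -- later layers win.
defaults = {"correctness": 0.8, "explanation": 0.75, "transfer": 0.7}
template_overrides = {"explanation": 0.8}
concept_overrides = {"transfer": 0.85}

thresholds = dict(defaults)            # start from platform-wide defaults
thresholds.update(template_overrides)  # a profile template overrides defaults
thresholds.update(concept_overrides)   # the concept itself overrides the template

print(thresholds)
```

This is why `effective_thresholds` in the returned profile can differ per concept even when two concepts share a template.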
@@ -0,0 +1,29 @@
+from collections import Counter
+import math
+
+
+def _tokenize(text: str) -> list[str]:
+    cleaned = "".join(ch.lower() if ch.isalnum() else " " for ch in text)
+    return [tok for tok in cleaned.split() if tok]
+
+
+def token_cosine_similarity(text_a: str, text_b: str) -> float:
+    tokens_a = _tokenize(text_a)
+    tokens_b = _tokenize(text_b)
+    if not tokens_a or not tokens_b:
+        return 0.0
+    ca = Counter(tokens_a)
+    cb = Counter(tokens_b)
+    shared = set(ca) & set(cb)
+    dot = sum(ca[t] * cb[t] for t in shared)
+    na = math.sqrt(sum(v * v for v in ca.values()))
+    nb = math.sqrt(sum(v * v for v in cb.values()))
+    if na == 0 or nb == 0:
+        return 0.0
+    return dot / (na * nb)
+
+
+def concept_similarity(concept_a: dict, concept_b: dict) -> float:
+    text_a = " ".join([concept_a.get("title", ""), concept_a.get("description", ""), " ".join(concept_a.get("mastery_signals", []))])
+    text_b = " ".join([concept_b.get("title", ""), concept_b.get("description", ""), " ".join(concept_b.get("mastery_signals", []))])
+    return token_cosine_similarity(text_a, text_b)
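A worked example of the bag-of-words cosine similarity that `token_cosine_similarity` implements, on two hypothetical concept descriptions (the texts are made up for illustration; the helper below inlines the same math rather than importing the module):

```python
from collections import Counter
import math

def cosine(text_a: str, text_b: str) -> float:
    # Count token frequencies, then take the cosine of the two count vectors.
    ca = Counter(text_a.lower().split())
    cb = Counter(text_b.lower().split())
    dot = sum(ca[t] * cb[t] for t in set(ca) & set(cb))
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Two of three tokens shared ("belief", "update"), each vector has norm sqrt(3),
# so the similarity is 2 / (sqrt(3) * sqrt(3)) = 2/3.
print(cosine("prior belief update", "posterior belief update"))
```

With the default `minimum_similarity` of 0.35, a pair like this would qualify as a suggested semantic link; completely disjoint texts score 0.0 and are never suggested.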
@@ -0,0 +1,36 @@
+from didactopus.artifact_registry import discover_domain_packs
+from didactopus.config import load_config
+from didactopus.graph_builder import build_concept_graph, suggest_semantic_links
+
+
+def test_concept_graph_builds() -> None:
+    config = load_config("configs/config.example.yaml")
+    results = discover_domain_packs(["domain-packs"])
+    graph = build_concept_graph(results, config.platform.default_dimension_thresholds)
+    assert "foundations-statistics::probability-basics" in graph.graph.nodes
+    assert "bayes-extension::posterior" in graph.graph.nodes
+
+
+def test_curriculum_path_to_target() -> None:
+    config = load_config("configs/config.example.yaml")
+    results = discover_domain_packs(["domain-packs"])
+    graph = build_concept_graph(results, config.platform.default_dimension_thresholds)
+    path = graph.curriculum_path_to_target(set(), "bayes-extension::posterior")
+    assert "bayes-extension::prior" in path
+    assert "bayes-extension::posterior" in path
+
+
+def test_declared_cross_pack_links_exist() -> None:
+    config = load_config("configs/config.example.yaml")
+    results = discover_domain_packs(["domain-packs"])
+    graph = build_concept_graph(results, config.platform.default_dimension_thresholds)
+    related = graph.related_concepts("bayes-extension::posterior")
+    assert "applied-inference::model-checking" in related
+
+
+def test_semantic_link_suggestions() -> None:
+    config = load_config("configs/config.example.yaml")
+    results = discover_domain_packs(["domain-packs"])
+    graph = build_concept_graph(results, config.platform.default_dimension_thresholds)
+    suggestions = suggest_semantic_links(graph, minimum_similarity=0.10)
+    assert len(suggestions) >= 1
@@ -0,0 +1,19 @@
+from pathlib import Path
+
+from didactopus.artifact_registry import discover_domain_packs
+from didactopus.config import load_config
+from didactopus.graph_builder import build_concept_graph
+
+
+def test_exports(tmp_path: Path) -> None:
+    config = load_config("configs/config.example.yaml")
+    results = discover_domain_packs(["domain-packs"])
+    graph = build_concept_graph(results, config.platform.default_dimension_thresholds)
+
+    dot_path = tmp_path / "graph.dot"
+    json_path = tmp_path / "graph.json"
+
+    graph.export_graphviz(str(dot_path))
+    graph.export_cytoscape_json(str(json_path))
+
+    assert dot_path.exists()
+    assert json_path.exists()
@@ -0,0 +1,23 @@
+from didactopus.artifact_registry import discover_domain_packs
+from didactopus.config import load_config
+from didactopus.graph_builder import build_concept_graph
+from didactopus.planner import PlannerWeights, rank_next_concepts
+
+
+def test_rank_next_concepts() -> None:
+    config = load_config("configs/config.example.yaml")
+    results = discover_domain_packs(["domain-packs"])
+    graph = build_concept_graph(results, config.platform.default_dimension_thresholds)
+
+    ranked = rank_next_concepts(
+        graph=graph,
+        mastered=set(),
+        targets=["bayes-extension::posterior"],
+        weak_dimensions_by_concept={"bayes-extension::prior": ["transfer"]},
+        fragile_concepts={"bayes-extension::prior"},
+        project_catalog=[{"id": "p1", "prerequisites": ["bayes-extension::prior"]}],
+        weights=PlannerWeights(),
+    )
+
+    assert len(ranked) >= 1
+    assert ranked[0]["score"] >= ranked[-1]["score"]
@@ -0,0 +1,18 @@
+from didactopus.profile_templates import resolve_mastery_profile
+
+
+def test_template_resolution() -> None:
+    templates = {
+        "foundation": {
+            "required_dimensions": ["correctness", "explanation"],
+            "dimension_threshold_overrides": {"explanation": 0.8},
+        }
+    }
+    resolved = resolve_mastery_profile(
+        {"template": "foundation"},
+        templates,
+        {"correctness": 0.8, "explanation": 0.75, "transfer": 0.7},
+    )
+    assert resolved["required_dimensions"] == ["correctness", "explanation"]
+    assert resolved["effective_thresholds"]["correctness"] == 0.8
+    assert resolved["effective_thresholds"]["explanation"] == 0.8