Apply ZIP update: 085-didactopus-workspace-manager-update.zip [2026-03-14T13:18:42]

Commit 55b170f918 (parent f608aa692b)
Author: welsberr, 2026-03-14 13:29:55 -04:00
188 changed files with 15208 additions and 469 deletions

@@ -0,0 +1,28 @@
# Didactopus
Didactopus is a local-first AI-assisted autodidactic mastery platform built around
concept graphs, evaluator-driven evidence, adaptive planning, mastery ledgers,
curriculum ingestion, and human review of generated draft packs.
## This revision
This revision adds a **workspace manager** on top of the local review bridge.
The goal is to reduce the remaining friction in getting from:
- raw or ingested course materials
- to a draft pack
- to an actively curated review session
- to a promoted reviewed pack
## Why this matters
A major design goal of Didactopus is lowering the **activation-energy hump**.
Even when good online course content exists, a learner or curator may still stall because:
- materials are scattered
- multiple draft packs accumulate
- there is no single place to organize review work
- switching between projects is awkward
The workspace manager addresses that by making Didactopus feel more like a practical
local tool and less like a pile of disconnected artifacts.

@@ -0,0 +1,48 @@
# Didactopus
Didactopus is a local-first AI-assisted autodidactic mastery platform built around
concept graphs, evaluator-driven evidence, adaptive planning, mastery ledgers,
curriculum ingestion, and human review of generated draft packs.
## This revision
This revision adds a **draft-pack import workflow** on top of the workspace manager.
The goal is to let a user take a newly generated draft pack from the ingestion
pipeline and bring it into a managed review workspace in one step.
## Why this matters
A major source of friction in turning online course content into usable study
domains is not extraction difficulty alone, but also the messy handoff between:
- generated draft artifacts
- review workspaces
- ongoing curation
- promoted reviewed packs
That handoff can easily become another activation-energy barrier.
This import workflow reduces that barrier by making it straightforward to:
1. choose a draft pack directory
2. create or target a workspace
3. copy/import the draft pack into that workspace
4. begin review immediately in the UI
## What is included
- workspace import operation
- local API endpoint for importing a draft pack into a workspace
- React UI controls for import
- preservation of imported draft-pack files
- sample import source directory
- sample workspace with imported draft pack
## Core workflow
1. generate a draft pack via ingestion
2. create a workspace or choose an existing one
3. import the draft pack into that workspace
4. open the workspace in the review UI
5. curate and promote it
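The import step in the workflow above can be sketched in a few lines. This is a minimal sketch, not the actual Didactopus API: the `drafts/` workspace layout, the function name, and the refuse-on-overwrite behavior are all illustrative assumptions.

```python
"""Illustrative sketch of step 3: copying a draft pack into a workspace."""
import shutil
from pathlib import Path


def import_draft_pack(source_dir: Path, workspace_dir: Path) -> Path:
    """Copy a draft-pack directory into a workspace's drafts/ area.

    Assumes (illustratively) that a workspace is a directory with a
    `drafts/` subdirectory, one imported pack per subdirectory.
    """
    if not source_dir.is_dir():
        raise FileNotFoundError(f"draft pack not found: {source_dir}")
    dest = workspace_dir / "drafts" / source_dir.name
    dest.parent.mkdir(parents=True, exist_ok=True)
    # Preserve imported files verbatim; refuse to silently overwrite.
    if dest.exists():
        raise FileExistsError(f"pack already imported: {dest}")
    shutil.copytree(source_dir, dest)
    return dest
```

After the copy, the workspace can be opened in the review UI with the imported files intact.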

@@ -0,0 +1,48 @@
# Didactopus
Didactopus is a local-first AI-assisted autodidactic mastery platform built around
concept graphs, evaluator-driven evidence, adaptive planning, mastery ledgers,
curriculum ingestion, and human review of generated draft packs.
## This revision
This revision adds an **import validation and safety layer** to the draft-pack
import workflow.
The goal is to make importing generated packs into review workspaces safer,
clearer, and easier to trust.
## Why this matters
If the draft-pack import step is risky or opaque, it becomes another point where
a user may hesitate or stall. That would undercut the broader goal of helping
users get over the activation-energy hump of turning online course content into
usable Didactopus learning domains.
This layer reduces that risk by adding:
- required-file validation
- schema/version summary inspection
- overwrite warnings
- import preview endpoint
- import error reporting
- basic pack-health reporting before copy/import
## What is included
- draft-pack validator
- import preview model
- overwrite-safety checks
- preview and import API endpoints
- updated React UI for preview-before-import
- sample valid and invalid draft packs
- tests for validation and safety behavior
## Core workflow
1. point the UI at a source draft-pack directory
2. preview validation results
3. review warnings or blocking errors
4. choose whether overwrite is allowed
5. import into workspace
6. continue directly into review
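The preview step (required-file validation plus an overwrite warning) can be sketched as follows. The required-file list follows the pack file names used elsewhere in this changelog; the function name and result shape are illustrative, not the real endpoint contract.

```python
"""Illustrative sketch of preview-before-import: required files + overwrite check."""
from pathlib import Path

# Key pack files named elsewhere in this changelog.
REQUIRED_FILES = ["pack.yaml", "concepts.yaml", "roadmap.yaml",
                  "projects.yaml", "rubrics.yaml"]


def preview_import(source_dir: Path, workspace_dir: Path) -> dict:
    """Return a preview report: blocking errors and non-blocking warnings."""
    errors = [f"missing required file: {name}"
              for name in REQUIRED_FILES
              if not (source_dir / name).is_file()]
    warnings = []
    dest = workspace_dir / "drafts" / source_dir.name
    if dest.exists():
        warnings.append(f"import would overwrite {dest}")
    return {"ok": not errors, "errors": errors, "warnings": warnings}
```

The UI can then render `errors` as blocking and `warnings` as requiring explicit confirmation.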

@@ -0,0 +1,62 @@
# Didactopus
Didactopus is a local-first AI-assisted autodidactic mastery platform built around
concept graphs, evaluator-driven evidence, adaptive planning, mastery ledgers,
curriculum ingestion, and human review of generated draft packs.
## This revision
This revision adds a **full pack-validation layer** that checks cross-file coherence
for Didactopus draft packs before import and during review.
The goal is to move beyond “does the directory exist and parse?” toward a more
Didactopus-native notion of whether a pack is structurally coherent enough to use.
## Why this matters
A generated pack may look fine at first glance and still contain internal problems:
- roadmap stages referencing missing concepts
- projects depending on nonexistent concepts
- duplicate concept ids
- rubrics with malformed structure
- empty or weak metadata
- inconsistent pack identity information
Those issues can become another activation-energy barrier. A user who has already
done the hard work of finding course materials and generating a draft pack should
not have to manually discover every structural issue one file at a time.
## What is included
- full pack validator
- cross-file validation across:
- `pack.yaml`
- `concepts.yaml`
- `roadmap.yaml`
- `projects.yaml`
- `rubrics.yaml`
- validation summary model
- import preview now includes pack-validation findings
- review UI panels for validation errors and warnings
- sample valid and invalid packs
- tests for coherence checks
## Core checks
Current scaffold validates:
- required files exist
- YAML parsing for all key files
- pack metadata presence
- duplicate concept ids
- roadmap concepts exist in `concepts.yaml`
- project prerequisites exist in `concepts.yaml`
- rubric structure presence
- empty or suspiciously weak concept entries
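Two of the coherence checks above can be sketched concretely: duplicate concept ids, and roadmap stages referencing concepts that do not exist. The input shapes here are simplified stand-ins for the parsed YAML, not the validator's actual models.

```python
"""Illustrative cross-file coherence checks over parsed pack data."""
from collections import Counter


def check_coherence(concepts: list[dict], roadmap: list[dict]) -> list[str]:
    """Return human-readable findings for two coherence rules."""
    findings = []
    ids = [c["id"] for c in concepts]
    # Rule 1: duplicate concept ids.
    for cid, count in Counter(ids).items():
        if count > 1:
            findings.append(f"duplicate concept id: {cid}")
    # Rule 2: roadmap stages must reference existing concepts.
    known = set(ids)
    for stage in roadmap:
        for cid in stage.get("concepts", []):
            if cid not in known:
                findings.append(
                    f"stage {stage['id']} references missing concept: {cid}")
    return findings
```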
## Design stance
This is a structural coherence layer, not a guarantee of pedagogical quality.
It makes the import path safer and clearer, while still leaving room for later
semantic and domain-specific validation.

@@ -0,0 +1,15 @@
# Didactopus
This update adds a **coverage-and-alignment QA layer**.
It checks whether concepts, mastery signals, checkpoints, projects, and rubrics
actually line up well enough to support a credible mastery path.
Current checks:
- concepts absent from roadmap stages
- concepts absent from checkpoint language
- concepts absent from project prerequisites
- concepts never covered by either checkpoints or projects
- mastery signals not reflected in checkpoints or deliverables
- rubric criteria with weak overlap to mastery/project language
- projects that cover too little of the concept set
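The "never covered" check above reduces to a set difference. A minimal sketch, assuming checkpoints and projects each carry a `concepts` list (the field name is illustrative):

```python
def uncovered_concepts(concept_ids, checkpoints, projects):
    """Concepts covered by neither a checkpoint nor a project."""
    covered = set()
    for checkpoint in checkpoints:
        covered.update(checkpoint.get("concepts", []))
    for project in projects:
        covered.update(project.get("concepts", []))
    return sorted(set(concept_ids) - covered)
```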

@@ -0,0 +1,31 @@
# Didactopus
Didactopus is a local-first AI-assisted autodidactic mastery platform built around
concept graphs, evaluator-driven evidence, adaptive planning, mastery ledgers,
curriculum ingestion, and human review of generated draft packs.
## This revision
This revision adds a **curriculum path quality layer**.
The goal is to analyze whether a pack's roadmap looks like a sensible learner
progression rather than merely a list of stages.
## What is included
- curriculum path quality analysis module
- heuristic checks for stage progression quality
- path-quality findings included in import preview
- UI display for curriculum path warnings
- sample packs and tests
## Current path-quality checks
This scaffold includes checks for:
- empty stages
- stages with no checkpoint activity
- concepts never referenced in checkpoints or projects
- capstones/projects placed very early
- dead-end late stages with no assessment density
- suspicious stage-size imbalance
- abrupt prerequisite-load jumps across stages
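Two of the checks above (empty stages, stage-size imbalance) can be sketched as one heuristic pass. The imbalance ratio of 4.0 is an illustrative threshold, not a documented Didactopus constant:

```python
def stage_size_findings(stages: list[dict], ratio: float = 4.0) -> list[str]:
    """Flag empty stages and suspicious stage-size imbalance.

    A stage is flagged as imbalanced if it holds more than `ratio`
    times as many concepts as the smallest non-empty stage.
    """
    findings = []
    sizes = {s["id"]: len(s.get("concepts", [])) for s in stages}
    for sid, n in sizes.items():
        if n == 0:
            findings.append(f"empty stage: {sid}")
    nonzero = [n for n in sizes.values() if n > 0]
    if nonzero:
        smallest = min(nonzero)
        for sid, n in sizes.items():
            if n > ratio * smallest:
                findings.append(
                    f"stage {sid} has {n} concepts vs smallest stage's {smallest}")
    return findings
```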

@@ -0,0 +1,3 @@
# Didactopus
This update adds an evaluator-to-pack alignment QA layer.

@@ -0,0 +1,6 @@
# Didactopus
This update adds an **evidence-flow and mastery-ledger QA layer**.
It checks whether evaluator outputs, evidence types, and assessment artifacts can
be translated into learner mastery-ledger records in a coherent way.

@@ -0,0 +1,38 @@
# Didactopus
Didactopus is a local-first AI-assisted autodidactic mastery platform built around
concept graphs, evaluator-driven evidence, adaptive planning, mastery ledgers,
curriculum ingestion, and human review of generated draft packs.
## This revision
This revision adds a **graph-aware prerequisite analysis layer**.
The goal is to inspect a pack not just as a set of files or even as a semantically
plausible curriculum draft, but as an actual dependency graph whose structure may
reveal deeper curation problems.
## Why this matters
A pack can be syntactically valid, cross-file coherent, and even semantically plausible,
yet still have a concept graph that is hard to learn from or maintain. Typical examples:
- prerequisite cycles
- isolated concepts with no curricular integration
- bottleneck concepts with too many downstream dependencies
- suspiciously flat domains with almost no dependency structure
- suspiciously deep chains suggesting over-fragmentation
Those graph problems can still raise the activation-energy cost of using a pack,
because they make learning paths harder to trust and revise.
## What is included
- prerequisite graph analysis module
- cycle detection
- isolated concept detection
- bottleneck concept detection
- flatness and chain-depth heuristics
- graph findings included in import preview
- UI panel for graph-analysis warnings
- sample packs and tests

@@ -0,0 +1,52 @@
# Didactopus
Didactopus is a local-first AI-assisted autodidactic mastery platform built around
concept graphs, evaluator-driven evidence, adaptive planning, mastery ledgers,
curriculum ingestion, and human review of generated draft packs.
## This revision
This revision adds a **domain-pack semantic QA layer**.
The goal is to go beyond file integrity and cross-file coherence, and start asking
whether a generated Didactopus pack looks semantically plausible as a learning domain.
## Why this matters
A pack may pass structural validation and still have higher-level weaknesses such as:
- near-duplicate concepts with different wording
- prerequisites that look suspiciously thin or over-compressed
- missing bridge concepts between stages
- concepts that are probably too broad and should be split
- concepts with names that imply overlap or ambiguity
Those problems can still slow a learner or curator down, which means they still
contribute to the activation-energy hump Didactopus is meant to reduce.
## What is included
- semantic QA analysis module
- heuristic semantic checks
- semantic QA findings included in import preview
- UI panel for semantic QA warnings
- sample packs showing semantic QA output
- tests for semantic QA behavior
## Current semantic QA checks
This scaffold includes heuristic checks for:
- near-duplicate concept titles
- over-broad concept titles
- suspiciously thin prerequisite chains
- missing bridge concepts between roadmap stages
- concepts with very similar descriptions
- singleton advanced stages with no visible bridge support
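The near-duplicate title check at the top of the list can be sketched with a stdlib string-similarity ratio. The 0.9 threshold and normalization choices are illustrative assumptions, not the scaffold's actual heuristic:

```python
import difflib


def near_duplicate_titles(titles: list[str], threshold: float = 0.9):
    """Pairs of concept titles whose normalized similarity exceeds threshold."""
    pairs = []
    normalized = [t.strip().lower() for t in titles]
    for i in range(len(normalized)):
        for j in range(i + 1, len(normalized)):
            ratio = difflib.SequenceMatcher(
                None, normalized[i], normalized[j]).ratio()
            if ratio >= threshold:
                pairs.append((titles[i], titles[j]))
    return pairs
```

A reviewer would then decide whether flagged pairs are true duplicates to merge or genuinely distinct concepts.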
## Design stance
This is still a heuristic layer, not a final semantic truth engine.
Its purpose is to surface likely curation issues early enough that a reviewer can
correct them before those issues turn into confusion or wasted effort.

@@ -0,0 +1,12 @@
# Didactopus Admin Curation Layer
This update extends the previous admin/learner workflow scaffold with a deeper
admin and curation layer.
## Added in this scaffold
- pack validation review surfaces in the admin UI
- attribution / provenance inspection surfaces in the admin UI
- evaluator trace inspection surfaces
- richer pack authoring forms instead of raw JSON-only editing
- backend endpoints for validation summaries, provenance inspection, and evaluator traces

@@ -0,0 +1,37 @@
# Didactopus Admin + Learner UI Workflows
This update builds the next layer on top of the productionization scaffold by wiring
the frontend toward real workflow surfaces:
- login with token storage
- token refresh handling
- learner dashboard flow
- evaluator-history view
- learner-management view
- admin pack creation / publication view
## Included
### Frontend
- login screen
- auth context with token refresh scaffold
- learner dashboard
- evaluator history panel
- learner management panel
- admin pack editor / publisher panel
- shared API client
### Backend additions
- learner listing endpoint
- admin pack listing endpoint
- admin pack publication toggle endpoint
## Scope
This remains a scaffold intended to connect the architectural pieces and establish
usable interaction flows. It is not yet a polished production UI.
## Intended next step
- integrate richer form validation
- add pack schema editing tools
- connect evaluator traces and rubric results
- add paginated audit history

@@ -0,0 +1,27 @@
# Didactopus Agent Audit Logging + Key Rotation Layer
This update extends the service-account scaffold with two operational controls:
- **audit logging** for machine-initiated activity
- **key rotation / revocation scaffolding** for service accounts
## Added in this scaffold
- audit log records for service-account actions
- request-level audit helper for agent operations
- service-account secret rotation endpoint
- service-account enable/disable endpoint
- admin UI for viewing audit events and rotating credentials
## Why this matters
A serious AI learner deployment needs more than scoped credentials.
It also needs to answer:
- which service account did what?
- when did it do it?
- what endpoint or workflow did it invoke?
- can we replace or revoke a compromised credential?
This layer makes service-account usage more accountable and more maintainable.
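Rotation and revocation can be sketched with stdlib primitives: issue a fresh secret, store only its hash, and honor the enable/disable flag on verification. The record shape here is illustrative, not the scaffold's actual model:

```python
import hashlib
import secrets


def rotate_secret(account: dict) -> str:
    """Issue a new secret for a service account, storing only its hash.

    Returns the plaintext secret exactly once, for the caller to record.
    """
    new_secret = secrets.token_urlsafe(32)
    account["secret_hash"] = hashlib.sha256(new_secret.encode()).hexdigest()
    return new_secret


def verify_secret(account: dict, presented: str) -> bool:
    """Check a presented secret; disabled accounts always fail."""
    digest = hashlib.sha256(presented.encode()).hexdigest()
    return account.get("enabled", True) and secrets.compare_digest(
        digest, account.get("secret_hash", ""))
```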

@@ -0,0 +1,43 @@
# Didactopus Agent Service Account Layer
This update extends the deployment-policy and agent-hooks scaffold with a
**first-class service-account model for AI learners and other non-human agents**.
## Added in this scaffold
- service-account records
- scoped API tokens for agents
- capability scopes for learner workflows
- direct agent authentication endpoint
- scope checks for agent operations
- admin UI for viewing service accounts and their scopes
## Why this matters
An AI learner should not need to masquerade as a human user session.
With this layer, an installation can:
- create a dedicated machine identity
- give it only the scopes it needs
- allow it to operate through the same API surfaces as the UI
- keep agent permissions narrower than full admin access when appropriate
## Example scopes
- `packs:read`
- `packs:write_personal`
- `contributions:submit`
- `learners:read`
- `learners:write`
- `recommendations:read`
- `evaluators:submit`
- `evaluators:read`
- `governance:read`
- `governance:write`
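A scope check over these strings is small enough to sketch directly. The exception name and function signature are illustrative; only the scope strings come from the list above:

```python
class ScopeError(PermissionError):
    """Raised when a service account lacks a required capability scope."""


def require_scope(granted: set[str], needed: str) -> None:
    """Raise ScopeError unless the service account holds the needed scope."""
    if needed not in granted:
        raise ScopeError(f"missing scope: {needed}")
```

An endpoint handler would call `require_scope(token_scopes, "evaluators:submit")` before doing any work.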
## Strong next step
- key rotation and revocation UI
- service-account ownership and audit trails
- structured workflow schema export for agents
- explicit agent-run logs tied to service-account identity

@@ -0,0 +1,41 @@
# Didactopus Animated Concept Graph Layer
This update extends the learning-animation scaffold with an **animated concept-graph view**.
## What it adds
- concept-graph playback frames
- node state transitions over time
- prerequisite edge rendering data
- API endpoint for graph animation payloads
- UI prototype for animated concept graph playback
## Why this matters
A bar-chart timeline is useful, but a concept graph better matches how Didactopus
represents mastery structure:
- concepts as nodes
- prerequisites as directed edges
- mastery progression as node color/size change
- availability/unlock state as a visible transition
This makes learning progression easier to interpret for:
- human learners
- AI-learner debugging
- curriculum designers
- reviewers comparing different runs
## Animation model
Each frame includes:
- node scores
- node status (`locked`, `available`, `active`, `mastered`)
- simple node size hints derived from score
- static prerequisite edges
Later versions could add:
- force-directed layouts
- semantic cross-pack links
- edge highlighting when prerequisite satisfaction changes
- side-by-side learner comparison

@@ -0,0 +1,24 @@
# Didactopus Worker-Backed Artifact Registry Layer
This update extends the media-rendering pipeline with a **worker-backed artifact registry**.
## What it adds
- artifact registry records
- render job records
- worker-oriented job lifecycle states
- artifact listing and lookup endpoints
- bundle registration into a persistent catalog
- UI prototype for browsing render jobs and produced artifacts
## Why this matters
The previous layer could create render bundles, but the outputs were still basically
filesystem-level side effects. This layer promotes artifacts into first-class Didactopus
objects so the system can:
- track render requests over time
- associate artifacts with learners and packs
- record job status (`queued`, `running`, `completed`, `failed`)
- expose artifacts in the UI and API
- support future download, retention, and publication policies
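The job-status lifecycle named above (`queued`, `running`, `completed`, `failed`) implies a small transition table. A minimal sketch, with the allowed transitions assumed rather than taken from the scaffold:

```python
# Assumed legal transitions for a render job's lifecycle.
ALLOWED_TRANSITIONS = {
    "queued": {"running"},
    "running": {"completed", "failed"},
    "completed": set(),
    "failed": set(),
}


def advance(job: dict, new_status: str) -> dict:
    """Move a render job to a new lifecycle state, rejecting illegal jumps."""
    current = job["status"]
    if new_status not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current} -> {new_status}")
    job["status"] = new_status
    return job
```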

@@ -0,0 +1,37 @@
# Didactopus
This update adds an **attribution, provenance, and license-compliance scaffold** for domain packs.
It is designed for open-courseware ingestion workflows, including sources such as MIT OpenCourseWare,
where downstream reuse may be allowed but requires preserving provenance and meeting license terms.
## Why this matters
A Didactopus domain pack should not be a black box. If source materials contributed to the pack,
the pack should carry machine-readable and human-readable provenance so that:
- attribution can be generated automatically
- remix/adaptation status can be recorded
- excluded third-party content can be flagged
- downstream redistribution can be audited more safely
- human learners and maintainers can inspect where content came from
## Included in this update
- source provenance models
- attribution bundle generator
- attribution QA checks
- sample `sources.yaml`
- sample `ATTRIBUTION.md`
- pack-level provenance manifest
- MIT OCW-oriented notes for compliance-aware ingestion
## Pack artifacts introduced here
- `sources.yaml` — source inventory and licensing metadata
- `ATTRIBUTION.md` — human-readable attribution report
- `provenance_manifest.json` — machine-readable normalized provenance output
## Important note
This scaffold helps operationalize attribution and provenance handling.
It is **not** legal advice.

@@ -0,0 +1,57 @@
# Didactopus Auth + DB + Async Evaluator Prototype
This update extends the backend API prototype with scaffolding for:
- authentication and multi-user separation
- a real database backend (SQLite via SQLAlchemy)
- evaluator job submission
- asynchronous result ingestion into learner mastery records
## What is included
### Authentication and user separation
This prototype introduces:
- user records
- simple token-based auth scaffold
- learner-state ownership checks
- per-user learner records
This is intentionally minimal and suitable for local development, not production hardening.
### Database backend
The file-backed JSON store is replaced here with a relational persistence scaffold:
- SQLite database by default
- SQLAlchemy ORM models
- tables for users, packs, learners, mastery records, evidence events, evaluator jobs
### Async evaluator jobs
This prototype adds:
- evaluator job submission endpoint
- background worker scaffold
- evaluator results persisted to the database
- resulting evidence events applied into learner mastery records
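The last step above (folding an evaluator result into mastery records) can be sketched independently of the queue and ORM. The record shape and the running-mean aggregation are illustrative, not the prototype's production rule:

```python
def apply_evaluator_result(mastery: dict, result: dict) -> dict:
    """Fold one evaluator result into a learner's mastery records.

    Illustrative shapes: mastery[concept_id] holds a score and an
    evidence count; each result carries a concept id and a 0..1 score.
    A running mean stands in for whatever aggregation the real
    progression engine uses.
    """
    cid = result["concept_id"]
    record = mastery.setdefault(cid, {"score": 0.0, "evidence_count": 0})
    n = record["evidence_count"]
    record["score"] = (record["score"] * n + result["score"]) / (n + 1)
    record["evidence_count"] = n + 1
    return mastery
```

Because this step is pure data transformation, it works the same whether the result arrives synchronously or from a background worker.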
## Why this matters
This is the first version that structurally supports:
- multiple users
- persistent learner history in a real database
- evaluator-driven evidence arriving later than the UI request that triggered it
That is the correct shape for turning Didactopus into a genuine multi-user learning platform.
## Important note
This remains a prototype scaffold:
- auth is deliberately simple
- SQLite is used for ease of inspection
- background job execution uses FastAPI background tasks rather than a production queue
- secrets, password hardening, and deployment concerns still need a later pass
## Next likely step
- replace simple token auth with stronger session/JWT handling
- migrate from SQLite to PostgreSQL
- add role-based authorization
- move evaluator jobs to a real queue such as Celery/RQ/Arq
- expose evaluator traces and job history in the learner UI

@@ -0,0 +1,46 @@
# Didactopus Backend API Prototype
This update adds a small real backend API scaffold for:
- pack registry listing
- learner-state persistence outside the browser
- evaluator-result ingestion into mastery records
## What is included
### Backend
A lightweight FastAPI-style scaffold with:
- `GET /api/packs`
- `GET /api/packs/{pack_id}`
- `GET /api/learners/{learner_id}/state`
- `PUT /api/learners/{learner_id}/state`
- `POST /api/learners/{learner_id}/evidence`
- `GET /api/learners/{learner_id}/recommendations/{pack_id}`
The backend uses simple file-backed JSON storage so the prototype remains easy to inspect and modify.
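A file-backed store of that kind can be sketched in a few lines; the class name and one-file-per-learner layout are illustrative assumptions:

```python
import json
from pathlib import Path


class JsonStore:
    """Minimal file-backed learner-state store: one JSON file per learner."""

    def __init__(self, root: Path):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def _path(self, learner_id: str) -> Path:
        return self.root / f"{learner_id}.json"

    def load_state(self, learner_id: str) -> dict:
        path = self._path(learner_id)
        return json.loads(path.read_text()) if path.exists() else {}

    def save_state(self, learner_id: str, state: dict) -> None:
        # Pretty-printed so the prototype stays easy to inspect by hand.
        self._path(learner_id).write_text(json.dumps(state, indent=2))
```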
### Frontend
The learner UI is updated to:
- load pack registry from the backend
- load learner state from the backend
- persist learner state through the backend
- submit simulated evidence events to the backend
- render recommendations returned by the backend
## Why this matters
This is the first step from a single-browser prototype toward a genuinely multi-session, multi-user system:
- learner state no longer has to live only in local storage
- recommendations can be centralized
- evaluator output can enter the same evidence pathway as UI-generated events
- future real evaluators can update learner state without changing the learner UI architecture
## Prototype scope
This remains intentionally small:
- file-backed storage
- no authentication yet
- no database dependency yet
- simulated evaluator/evidence flow still simple
That makes it appropriate for rapid iteration while preserving a clean path to later migration.

@@ -0,0 +1,35 @@
# Didactopus Contribution Management Layer
This update extends the review-governance scaffold with a **contribution management layer**.
## Added in this scaffold
- contributor submission records
- pack diffs between versions
- approval gates tied to validation/provenance status
- reviewer task queue / notification scaffold
- admin UI views for submissions, diffs, and review tasks
## Why this matters
Once Didactopus supports outside contributors, maintainers need a structured way to:
- receive submissions
- compare proposed revisions against current versions
- see whether QA/provenance gates are satisfied
- assign or at least surface review work
- keep an audit trail of what was submitted and why it was accepted or rejected
## Scope
This remains a scaffold:
- diffs are summary-oriented rather than line-perfect
- notification/task queues are prototype records
- gate checks are simple but explicit
- contributor identity is tied to existing users rather than a separate contributor model
## Strong next step
- richer semantic diffs for concepts/onboarding/compliance
- required reviewer assignment rules
- notifications via email/chat connectors
- policy engine for gating approvals

@@ -0,0 +1,48 @@
# Didactopus
This package adds two substantial prototype layers:
1. a **course-ingestion compliance layer**
2. a **real learner-facing UI prototype**
## What this update covers
### Course-ingestion compliance
The compliance layer is designed for domain packs created from open courseware and other external instructional sources.
It includes:
- source inventory handling
- attribution and provenance records
- pack-level license flags
- compliance QA checks
- exclusion tracking for third-party content
- redistribution-risk signaling
### Learner-facing UI prototype
The prototype UI is designed to be usable by humans approaching a new topic.
It implements:
- topic/domain selection
- first-session onboarding
- “what should I do next?” cards
- visible mastery-map progress
- milestone/reward feedback
- transparent “why the system recommends this” explanations
## UX stance
Didactopus should help a novice get moving quickly, not present a second subject to learn first.
The first session should:
- make the next action obvious
- give quick feedback
- show visible progress
- feel encouraging rather than bureaucratic
## Prototype scope
This is still a prototype scaffold, but the UI and compliance pieces are concrete enough to:
- test interaction patterns
- validate data shapes
- demonstrate provenance-aware ingestion
- serve as a starting point for a fuller implementation

@@ -0,0 +1,46 @@
# Didactopus Deployment Policy + Agent Hooks Layer
This update extends the dual-lane policy scaffold with two related concerns:
1. **Deployment policy settings**
- single-user / private-first
- team / lab
- community repository
2. **AI learner / agent hook parity**
- explicit API surfaces for agentic learners
- capability discovery endpoints
- task-oriented endpoints parallel to the UI workflows
- access to pack, learner, evaluator, and recommendation workflows without relying on the UI
## Why this matters
Didactopus should remain usable in two modes:
- a human using the UI directly
- an AI learner or agentic orchestrator using the API directly
The AI learner should not lose capability simply because a human-facing UI exists.
Instead, the UI should be understood as a thin client over API functionality.
## What is added
- deployment policy profile model and endpoint
- policy-aware defaults for pack lane behavior
- agent capability manifest endpoint
- agent learner workflow endpoints
- explicit notes documenting API parity with UI workflows
## AI learner capability check
This scaffold makes the AI-learner situation clearer:
- yes, the API still exposes the essential learner operations
- yes, pack access, recommendations, evaluator job submission, and learner-state access remain directly callable
- yes, there is now an explicit capability-discovery endpoint so an agent can inspect what the installation supports
## Strong next step
- add service-account / non-human agent credentials
- formalize machine-usable schemas for workflows and actions
- add structured action planning endpoint for agentic learners

@@ -0,0 +1,47 @@
# Didactopus Dual-Lane Policy Layer
This update extends the contribution-management scaffold with a **dual-lane policy model**:
- a **personal lane** for individuals building domain packs for their own use
- a **community lane** for contributed packs that enter shared review and publication workflows
## Design intent
A single user working privately with Didactopus should **not** be blocked by governance overhead
when constructing packs for their own purposes.
At the same time, community-shared packs should still be subject to:
- contribution intake
- validation and provenance gates
- reviewer workflows
- approval before publication
## Added in this scaffold
- pack policy lane metadata (`personal`, `community`)
- bypass rules for personal packs
- community-only gate enforcement for publication workflows
- UI distinction between personal-authoring and community-submission flows
- reviewer-assignment and approval-policy scaffolding for community packs only
## Resulting behavior
### Personal lane
A user can:
- create and revise packs directly
- publish locally for their own use
- bypass reviewer task queues
- inspect validation/provenance without being blocked by them
### Community lane
A contributor can:
- submit a pack or revision for review
- see gate summaries and diffs
- enter reviewer assignment and approval workflow
- require policy satisfaction before publish
## Strong next step
- per-installation policy settings
- optional stricter local policies for teams or labs
- semantic diffing and structured reviewer checklists

@@ -0,0 +1,45 @@
# Didactopus Layout-Aware Graph Engine Layer
This update extends the animated concept-graph scaffold with a **layout-aware graph engine**.
## What it adds
- stable node positioning
- pack-authored coordinates
- automatic layered layout fallback
- cross-pack concept links
- SVG frame export scaffolding
- UI prototype with stable animated graph playback
## Why this matters
Animated concept graphs are much more readable when node positions do not jump around.
This layer makes the graph a more faithful representation of a mastery ecosystem by adding:
- deterministic coordinates
- prerequisite layering
- optional author-specified placement
- cross-pack links for broader learning pathways
- export-ready frame generation for later GIF/MP4 pipelines
## Layout model
The engine uses this priority order:
1. explicit pack-authored coordinates
2. automatic layered layout from prerequisite depth
3. deterministic horizontal spacing inside each depth layer
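The fallback layers (priorities 2 and 3 above) can be sketched as one pass: compute each node's depth as its longest prerequisite chain, then assign deterministic x positions within each depth layer. The graph input shape is a simplified stand-in:

```python
def layered_layout(prereqs: dict[str, list[str]]) -> dict[str, tuple[int, int]]:
    """Assign (x, depth) positions from prerequisite structure.

    depth = length of the longest prerequisite chain below a node;
    x = deterministic index within the depth layer (sorted by id).
    Assumes the graph is acyclic (cycle detection runs earlier).
    """
    depth_cache: dict[str, int] = {}

    def depth(node: str) -> int:
        if node not in depth_cache:
            deps = prereqs.get(node, [])
            depth_cache[node] = 1 + max((depth(d) for d in deps), default=-1)
        return depth_cache[node]

    layers: dict[int, list[str]] = {}
    for node in prereqs:
        layers.setdefault(depth(node), []).append(node)
    positions = {}
    for d, nodes in layers.items():
        for x, node in enumerate(sorted(nodes)):
            positions[node] = (x, d)
    return positions
```

Sorting by id inside each layer is what makes the layout stable across frames, so nodes do not jump around during playback.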
## Export model
This scaffold includes:
- graph frame payloads from the API
- SVG frame export helper script
- one SVG per frame for later conversion to GIF/MP4 with external tools
## Strong next step
- force-directed refinement
- edge highlighting on unlock transitions
- cross-pack supergraph views
- direct GIF/MP4 rendering pipeline

@@ -0,0 +1,17 @@
# Didactopus
This update adds a **learner-state progression engine scaffold**.
It models how mastery records can evolve over time from repeated evidence, with:
- score aggregation
- confidence reinforcement and decay
- prerequisite-gated advancement
- next-step recommendations
Current components:
- learner state model
- evidence application engine
- confidence update logic
- prerequisite-gated readiness checks
- recommendation engine
- tests and sample data
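The reinforcement-and-decay idea can be sketched as an exponential-style update; the rate constants here are illustrative defaults, not the engine's tuned values:

```python
def update_confidence(confidence: float, success: bool,
                      reinforce: float = 0.2, decay: float = 0.3) -> float:
    """Move confidence toward 1 on success, toward 0 on failure.

    Rates are illustrative: reinforcement closes a fraction of the gap
    to full confidence; decay removes a fraction of current confidence.
    """
    if success:
        return confidence + reinforce * (1.0 - confidence)
    return confidence * (1.0 - decay)
```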

@@ -0,0 +1,46 @@
# Didactopus Run/Session Correlation + Learning Animation Layer
This update extends the agent audit / key rotation scaffold with:
- **run/session correlation** for learner episodes
- **workflow logs** tied to learner runs
- **animation data endpoints** for replaying learning progress
- a **UI prototype** that can animate a learner's mastery changes over time
## Why this matters
A single audit event is useful, but it does not tell the full story of a learning episode.
For both human learners and AI learners, Didactopus should be able to represent:
- when a learning run began
- what sequence of actions happened
- how mastery estimates changed during the run
- how recommendations shifted as competence improved
That makes it possible to:
- inspect learner trajectories
- debug agentic learning behavior
- demonstrate the learning process to users, reviewers, or researchers
- create visualizations and animations of learning over time
## Added in this scaffold
- learner run/session records
- workflow event log records
- animation frame generation from learner history
- API endpoints for run creation, workflow-event logging, and animation playback data
- UI prototype for replaying learning progression as an animation
## Animation concept
This scaffold uses a simple time-series animation model:
- each frame corresponds to a learner-history event
- each concept's mastery score is shown per frame
- the UI can replay those frames with a timer
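That frame model can be sketched as a cumulative fold over the event history. The event field names (`t`, `concept_id`, `score`) are illustrative stand-ins for the scaffold's actual records:

```python
def build_frames(history: list[dict]) -> list[dict]:
    """Turn a learner-history event list into cumulative animation frames.

    Each frame snapshots every concept's latest mastery score as of that
    event, so the UI can step through frames on a timer.
    """
    frames = []
    scores: dict[str, float] = {}
    for event in sorted(history, key=lambda e: e["t"]):
        scores[event["concept_id"]] = event["score"]
        frames.append({"t": event["t"], "scores": dict(scores)})
    return frames
```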
Later implementations could support:
- graph/network animation
- concept unlock transitions
- recommendation timeline overlays
- side-by-side human vs AI learner comparison

@@ -0,0 +1,54 @@
# Didactopus Live Learner UI Prototype
This update connects the learner-facing UI prototype to a live in-browser learner-state
and orchestration loop.
## What this prototype does
It now drives the interface from live state rather than static cards:
- topic/domain selection
- first-session onboarding
- recommendation generation from learner state
- visible mastery-map progress from mastery records
- milestone / reward feedback
- transparent "why this is recommended" explanations
- simulated evidence application that updates learner mastery live
- source attribution / compliance panel for provenance-sensitive packs
## Architecture
### Frontend
A React/Vite single-page prototype that manages:
- learner profile selection
- domain pack selection
- learner mastery records
- recommendation cards
- mastery map rendering
- milestone log
- attribution/compliance display
### State engine
A lightweight JS orchestration layer mirrors the Didactopus Python scaffolds:
- evidence application
- score aggregation
- confidence updates
- prerequisite-gated unlocking
- next-step recommendation generation
- reinforcement targeting
- simple claim-readiness estimation
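The state engine itself is JavaScript, but its core loop can be mirrored in a short Python sketch. The record shapes, the blending weight, and the 0.7 unlock threshold are assumptions, not the engine's actual parameters:

```python
def apply_evidence(mastery, concept_id, score, weight=0.3):
    """Blend new evidence into the running mastery estimate."""
    record = mastery.setdefault(concept_id, {"score": 0.0, "evidence": 0})
    record["score"] = (1 - weight) * record["score"] + weight * score
    record["evidence"] += 1
    return record

def unlocked(concept, mastery, threshold=0.7):
    """A concept unlocks once every prerequisite clears the threshold."""
    return all(
        mastery.get(p, {}).get("score", 0.0) >= threshold
        for p in concept["prerequisites"]
    )

def recommend_next(concepts, mastery, threshold=0.7):
    """Recommend unlocked concepts that are not yet mastered."""
    return [
        c["id"] for c in concepts
        if unlocked(c, mastery, threshold)
        and mastery.get(c["id"], {}).get("score", 0.0) < threshold
    ]
```

Applying evidence to a prerequisite eventually unlocks its dependents, which is the live behavior the prototype demonstrates.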
## Why this matters
This is closer to a human-usable experience:
- the learner can see the effect of actions immediately
- the "why next?" logic is inspectable
- progress feels visible and rewarding
- the system remains simple enough for a novice to approach
## Next likely step
Wire this prototype to a real backend so that:
- domain packs are loaded from pack files
- learner state persists across sessions
- evaluator results update mastery records automatically
- attribution/compliance artifacts are derived from actual ingested sources

View File

@ -0,0 +1,46 @@
# Didactopus Direct Media Rendering Pipeline Layer
This update extends the layout-aware graph engine with a **direct media-rendering pipeline**
for turning learning animations into shareable artifacts.
## What it adds
- SVG frame export integration
- GIF manifest generation
- MP4 manifest generation
- FFmpeg-oriented render script scaffolding
- API endpoint for media render jobs
- UI prototype for creating export bundles
## Why this matters
Didactopus should not stop at interactive playback. It should also be able to produce
portable visual artifacts for:
- research presentations
- learner progress sharing
- curriculum review
- AI learner debugging
- repository documentation
This layer provides a structured path from graph animation payloads to:
- frame directories
- render manifests
- GIF/MP4-ready job bundles
## Scope
This scaffold produces:
- exported SVG frames
- JSON render manifests
- shell script scaffolding for FFmpeg conversion
It does **not** embed FFmpeg execution into the API server itself.
That is a deliberate separation so rendering can be delegated to a worker or offline job.
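A minimal sketch of that separation: the API side writes a render manifest and a shell script, and a worker or offline job runs the script later. The manifest fields, file names, and the frame-naming pattern are assumptions; note that FFmpeg consumes raster frames, so exported SVGs would need rasterizing first:

```python
import json
from pathlib import Path

def write_render_job(frames_dir, out_dir, fps=10):
    """Emit a JSON render manifest plus an FFmpeg command for a worker."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    manifest = {
        "frames_dir": str(frames_dir),
        "fps": fps,
        "outputs": ["learning.gif", "learning.mp4"],
    }
    (out / "render_manifest.json").write_text(json.dumps(manifest, indent=2))
    # Assumes frames were already rasterized to PNG with zero-padded names.
    cmd = (
        f"ffmpeg -framerate {fps} -i {frames_dir}/frame_%04d.png "
        f"-pix_fmt yuv420p {out}/learning.mp4"
    )
    (out / "render.sh").write_text("#!/bin/sh\n" + cmd + "\n")
    return manifest
```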
## Strong next step
- actual worker-backed render execution
- render status tracking
- downloadable media artifact registry
- parameterized themes, sizes, and captions

View File

@ -0,0 +1,63 @@
# Didactopus
This update adds a **learner-run orchestration layer scaffold** with explicit **UX design guidance**.
The goal is to tie together:
- domain-pack selection
- learner onboarding
- recommendation generation
- evaluator invocation
- mastery-ledger updates
- stopping criteria for usable expertise
- humane, low-friction user experience
## UX stance
Didactopus should not require the learner to first master Didactopus.
A person approaching a new topic should be able to:
- choose a topic
- understand what to do next
- get feedback quickly
- see progress clearly
- recover easily from mistakes or uncertainty
- experience the process as rewarding rather than bureaucratic
## UX principles
### 1. Low activation energy
The first session should produce visible progress quickly.
### 2. Clear next action
At every point, the learner should know what to do next.
### 3. Gentle structure
The system should scaffold without becoming oppressive or confusing.
### 4. Reward loops
Progress should feel visible and meaningful:
- concept unlocks
- streaks or milestones
- mastery-map filling
- capstone readiness indicators
- “you can now do X” style feedback
### 5. Human-readable state
The learner should be able to inspect:
- what the system thinks they know
- why it thinks that
- what evidence changed the estimate
- what is blocking advancement
### 6. Graceful fallback
When the system is uncertain, it should degrade into simple guidance, not inscrutable failure.
## Included in this update
- orchestration state models
- onboarding/session planning scaffold
- learner run-loop scaffold
- stop/claim-readiness criteria scaffold
- UX-oriented recommendation formatting
- sample CLI flow
- UX notes for future web UI work

View File

@ -0,0 +1,43 @@
# Didactopus Pack + Persistence Prototype
This update connects the learner-facing prototype to:
- **real pack-shaped data files**
- **pack compliance / attribution manifests**
- **persistent learner state** via browser local storage
- a small **Python pack export utility** that converts a Didactopus-style pack directory
into a frontend-consumable JSON bundle
## What is included
### Frontend
- topic/domain selection from real pack files
- first-session onboarding from pack metadata
- recommendation cards driven by live learner state
- mastery-map progress from pack concepts and persisted learner records
- milestone/reward feedback
- transparent "why this is recommended" explanations
- compliance/provenance display from pack manifest
- persistent learner state across reloads via local storage
### Backend-adjacent tooling
- `pack_to_frontend.py` converts:
- `pack.yaml`
- `concepts.yaml`
- `pack_compliance_manifest.json`
into a bundle suitable for the learner UI
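The heart of that conversion can be sketched as a pure function over the already-parsed files. The real `pack_to_frontend.py` would parse `pack.yaml` and `concepts.yaml` first (e.g. with PyYAML); the output field names here are illustrative, not the utility's actual schema:

```python
def build_frontend_bundle(pack, concepts, compliance):
    """Assemble a learner-UI JSON bundle from parsed pack files."""
    return {
        "id": pack.get("name"),
        "title": pack.get("display_name"),
        "version": pack.get("version"),
        "concepts": concepts.get("concepts", []),
        "compliance": compliance,
    }
```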
## Why this matters
This gets Didactopus closer to a usable human-facing system:
- the UI is no longer a static mock
- packs are loadable artifacts
- learner progress persists between sessions
- provenance/compliance data can be shown from real manifests
## Next likely step
Add a real API layer so that:
- learner state is persisted outside the browser
- evaluator runs produce evidence automatically
- multiple users can work against the same pack registry

View File

@ -0,0 +1,30 @@
# Didactopus Productionization Scaffold
This update takes the prior authenticated API/database prototype one step closer to a
production-ready shape.
## Added in this scaffold
- PostgreSQL-first configuration
- JWT-style auth scaffold with access/refresh token concepts
- role-based authorization model
- background worker queue scaffold
- evaluator history endpoints
- learner-management endpoints
- pack-administration endpoints
- Docker Compose layout for API + worker + PostgreSQL
## Important note
This is still a scaffold, not a hardened deployment:
- JWT signing secrets are placeholder-driven
- queue processing is still simplified
- no TLS termination is included here
- migrations are not fully implemented
## Intended next steps
- replace placeholders with deployment secrets
- add Alembic migrations
- add Redis-backed queue or a more robust worker setup
- connect the learner UI to the new admin/evaluator-history endpoints

View File

@ -0,0 +1,39 @@
# Didactopus Review Governance Layer
This update extends the admin-curation scaffold with a **review and governance layer**
for contributed and curated packs.
## Added in this scaffold
- pack versioning records
- draft / in_review / approved / rejected publication states
- reviewer comments and sign-off records
- moderation workflow for contributed packs
- admin UI views for governance and review history
## Why this matters
Once Didactopus accepts contributed packs or substantial revisions, it needs more than
editing and inspection. It needs process.
A governance-capable system should let maintainers:
- see what version of a pack is current
- review proposed updates before publication
- record reviewer comments
- approve or reject submissions explicitly
- preserve an audit trail of those actions
## Scope
This remains a scaffold:
- versioning is simple and linear
- moderation states are explicit but minimal
- audit history is prototype-level
- approval logic is not yet policy-driven
## Strong next step
- connect governance to the full QA pipeline
- require validation/provenance checks before approval
- add multi-reviewer policies and required approvals
- support diff views between pack versions

View File

@ -0,0 +1,27 @@
# Didactopus Agent Audit Logging + Key Rotation Layer
This update extends the service-account scaffold with two operational controls:
- **audit logging** for machine-initiated activity
- **key rotation / revocation scaffolding** for service accounts
## Added in this scaffold
- audit log records for service-account actions
- request-level audit helper for agent operations
- service-account secret rotation endpoint
- service-account enable/disable endpoint
- admin UI for viewing audit events and rotating credentials
## Why this matters
A serious AI learner deployment needs more than scoped credentials.
It also needs to answer:
- which service account did what?
- when did it do it?
- what endpoint or workflow did it invoke?
- can we replace or revoke a compromised credential?
This layer makes service-account usage more accountable and more maintainable.
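A minimal audit-event shape that answers those four questions might look like the following; the field names are illustrative, not the scaffold's actual schema:

```python
import time
from dataclasses import dataclass, field

@dataclass
class AuditEvent:
    service_account_id: str   # which service account did it
    endpoint: str             # what endpoint or workflow it invoked
    action: str               # what it did
    timestamp: float = field(default_factory=time.time)  # when

def record_event(log, service_account_id, endpoint, action):
    """Append one machine-initiated action to an audit log."""
    event = AuditEvent(service_account_id, endpoint, action)
    log.append(event)
    return event
```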

View File

@ -0,0 +1,62 @@
# Didactopus Artifact Lifecycle + Knowledge Export Layer
This update extends the worker-backed artifact registry with:
- artifact download support
- retention policy support
- artifact expiration metadata
- lifecycle management endpoints
- learner knowledge export paths
## What it adds
- artifact download API
- retention policy fields on artifacts
- expiry / purge metadata
- artifact state transitions
- knowledge export scaffolding for reuse beyond Didactopus
- guidance for improving packs, producing curriculum outputs, and generating agent skills
## Why this matters
Didactopus should not merely *track* artifacts. It should help manage their lifecycle
and make the knowledge represented by learner activity reusable.
This layer supports two complementary goals:
### 1. Artifact lifecycle management
Artifacts can now be:
- registered
- listed
- downloaded
- marked for retention or expiry
- reviewed for later cleanup
### 2. Knowledge export and reuse
Learner progress can be rendered into structured outputs that may be useful for:
- improving Didactopus domain packs
- drafting traditional curriculum materials
- producing AI-oriented skill packages
- documenting surprising learner discoveries
- supporting mentor review and knowledge capture
## Knowledge export philosophy
A learner should not only consume domain packs; sometimes the learner contributes new
understanding, better examples, clearer misconceptions, or unexpected conceptual links.
Didactopus therefore needs a path from learner activity to reusable artifacts such as:
- concept observations
- misconception notes
- pack-improvement suggestions
- curriculum outlines
- skill manifests
- structured knowledge snapshots
## Strong next step
- true scheduled retention cleanup worker
- signed or permission-checked download tokens
- richer learner-knowledge synthesis pipeline
- export templates for curriculum and skill packages

View File

@ -0,0 +1,181 @@
# Didactopus
Didactopus is an experimental learning infrastructure designed to support **human learners, AI learners, and hybrid learning ecosystems**. It focuses on representing knowledge structures, learner progress, and the evolution of understanding in ways that are inspectable, reproducible, and reusable.
The system treats learning as an **observable graph process** rather than a sequence of isolated exercises. Concept nodes, prerequisite edges, and learner evidence events together produce a dynamic knowledge trajectory.
Didactopus aims to support:
- individual mastery learning
- curriculum authoring
- discovery of new conceptual connections
- AI-assisted autodidactic learning
- generation of reusable educational artifacts
---
# Core Concepts
## Domain Packs
A **domain pack** represents a structured set of concepts and relationships.
Concepts form nodes in a graph and may include:
- prerequisites
- cross-pack links
- exercises or learning activities
- conceptual metadata
Domain packs can be:
- private (learner-owned)
- community-shared
- curated / mentor-reviewed
---
## Learner State
Each learner accumulates **evidence events** that update mastery estimates for concepts.
Evidence events can include:
- exercises
- reviews
- projects
- observations
- mentor evaluation
Mastery records track:
- score
- confidence
- evidence count
- update history
The system stores full evidence history so that learning trajectories can be reconstructed.
---
## Artifact System
Didactopus produces **artifacts** that document learner knowledge and learning trajectories.
Artifacts may include:
- animation bundles
- graph visualizations
- knowledge exports
- curriculum drafts
- derived skill descriptions
Artifacts are tracked using an **artifact registry** with lifecycle metadata.
Artifact lifecycle states include:
- created
- retained
- expired
- deleted
Retention policies allow systems to manage storage while preserving important learner discoveries.
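One way to enforce those lifecycle states is a small allowed-transition table; the table below is an assumption inferred from the states listed above, not a documented policy:

```python
# Assumed legal transitions between the lifecycle states listed above.
ALLOWED = {
    "created": {"retained", "expired", "deleted"},
    "retained": {"expired", "deleted"},
    "expired": {"deleted"},
    "deleted": set(),
}

def transition(artifact, new_state):
    """Move an artifact record to a new lifecycle state, or refuse."""
    if new_state not in ALLOWED[artifact["state"]]:
        raise ValueError(
            f"cannot move artifact from {artifact['state']!r} to {new_state!r}"
        )
    artifact["state"] = new_state
    return artifact
```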
---
# Worker Rendering System
Rendering jobs transform learner knowledge into visual or structured outputs.
Typical workflow:
1. Learner state + pack graph → animation frames
2. Frames exported as SVG
3. Render bundle created
4. Optional FFmpeg render to GIF/MP4
Outputs are registered as artifacts so they can be downloaded or reused.
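Step 2 of the workflow can be sketched as a toy SVG frame writer; real frames are far richer, and the bar-chart layout here is purely illustrative:

```python
def frame_to_svg(mastery, width=300, bar_height=20):
    """Render one animation frame as a simple SVG bar chart of mastery."""
    rows = []
    for i, (concept_id, score) in enumerate(sorted(mastery.items())):
        y = i * (bar_height + 4)
        rows.append(
            f'<rect x="0" y="{y}" width="{score * width:.0f}" '
            f'height="{bar_height}" fill="steelblue"/>'
            f'<text x="4" y="{y + 14}" font-size="12">{concept_id}</text>'
        )
    height = len(mastery) * (bar_height + 4)
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" '
        f'height="{height}">' + "".join(rows) + "</svg>"
    )
```

Writing one such file per frame yields the frame directory the render bundle step consumes.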
---
# Knowledge Export
Didactopus supports exporting structured learner knowledge for reuse.
Export targets include:
- improved domain packs
- curriculum material
- AI training data
- agent skill definitions
- research on learning processes
Exports are **candidate knowledge**, not automatically validated truth.
Human mentors or automated validation pipelines can review them before promotion.
---
# Philosophy: Synthesis and Discovery
Didactopus places strong emphasis on **synthesis**.
Many important discoveries occur not within a single domain, but at the **intersection of domains**.
Examples include:
- mathematics applied to biology
- information theory applied to neuroscience
- physics concepts applied to ecological models
Domain packs therefore support:
- cross-pack links
- relationship annotations
- visualization of conceptual overlap
These connections help learners discover:
- analogies
- transferable skills
- deeper structural patterns across knowledge fields
The goal is not merely to learn isolated facts, but to build a **network of understanding**.
---
# Learners as Discoverers
Learners sometimes discover insights that mentors did not anticipate.
Didactopus is designed so that learner output can contribute back into the system through:
- knowledge export
- artifact review workflows
- pack improvement suggestions
This creates a **feedback loop** where learning activity improves the curriculum itself.
---
# Intended Uses
Didactopus supports several categories of use:
**Human learning**
- self-directed study
- classroom support
- mastery-based curricula

**Research**
- studying learning trajectories
- analyzing conceptual difficulty

**AI systems**
- training agent skill graphs
- evaluating reasoning development

**Educational publishing**
- curriculum drafts
- visualization tools
- learning progress reports

View File

@ -0,0 +1,33 @@
# Didactopus Object Editing, Versioning, Merge/Apply, and Export Layer
This layer extends promotion target objects with:
- editable downstream objects
- version history for promoted objects
- merge/apply flow for pack patch proposals
- export formats for curriculum drafts and skill bundles
It adds concrete scaffolding for turning promoted outputs into maintainable assets
rather than one-off records.
## Added capabilities
- versioned pack patch proposals
- versioned curriculum drafts
- versioned skill bundles
- patch-apply endpoint for updating pack JSON
- markdown/json export for curriculum drafts
- json/yaml-style manifest export for skill bundles
- reviewer UI prototype for editing and exporting target objects
## Why this matters
A promotion target is only the start of the lifecycle. Real use requires:
- revision
- comparison
- approval
- application
- export
This scaffold establishes those mechanisms in a minimal but extensible form.

View File

@ -0,0 +1,21 @@
# Didactopus Promotion Target Objects Layer
This layer extends the review workbench and synthesis scaffold by making promotion
targets concrete. Promotions no longer stop at metadata records; they now create
first-class downstream objects.
Added target object families:
- pack patch proposals
- curriculum drafts
- skill bundles
This scaffold includes:
- ORM models for concrete promotion targets
- repository helpers to create and list them
- promotion logic that materializes target objects
- API endpoints for browsing created target objects
- a UI prototype showing promoted outputs
This is the bridge between "interesting candidate" and "usable Didactopus asset."

View File

@ -0,0 +1,104 @@
# Didactopus
Didactopus is an AI-assisted learning and knowledge-graph platform for representing
how understanding develops, how concepts relate, and how learner output can be
reused to improve packs, curricula, and downstream agent skills.
It is designed for:
- human learners
- AI learners
- human/AI collaborative learning workflows
- curriculum designers
- mentors and reviewers
- researchers studying learning trajectories
The system treats learning as a graph process rather than as a sequence of isolated
quiz events. Domain packs define concepts, prerequisites, and cross-pack
relationships. Learner evidence updates mastery estimates and produces reusable
artifacts.
## Major capabilities
### Domain packs
Domain packs define concept graphs, prerequisite relationships, and optional
cross-pack links. Packs may be private, shared, reviewed, or published.
### Learner state
Learners accumulate evidence events, mastery records, evaluation outcomes, and
trajectory histories.
### Animated graph views
Learning progress can be rendered as stable animated concept graphs and exported
as frame bundles for GIF/MP4 production.
### Artifact registry
Render bundles, knowledge exports, and derivative outputs are managed as
first-class artifacts with retention metadata and lifecycle controls.
### Knowledge export
Learner output can be exported as candidate structured knowledge, including:
- pack-improvement suggestions
- curriculum draft material
- skill-bundle candidates
- archived observations and discovery notes
### Review and promotion workflow
Learner-derived knowledge is not treated as automatically correct. It enters a
triage and review pipeline where it may be promoted into accepted Didactopus
assets.
### Synthesis engine
Didactopus emphasizes synthesis: discovering helpful overlaps and structural
analogies between distinct topics. The synthesis engine proposes candidate links,
analogy clusters, and cross-pack insights.
---
## Philosophy
### Learning as visible structure
The system should make it possible to inspect not just outcomes, but how those
outcomes emerge.
### Learners as discoverers
Learners sometimes find gaps, hidden prerequisites, better examples, or novel
connections that mentors did not anticipate. Didactopus is designed to capture
that productively.
### Synthesis matters
Some of the most valuable understanding comes from linking apparently disparate
topics. Didactopus explicitly supports this through:
- cross-pack links
- similarity scoring
- synthesis proposals
- reusable exports for pack revision and curriculum design
### Reuse beyond Didactopus
Learner knowledge should be renderable into forms useful for:
- improved domain packs
- traditional curriculum products
- agentic AI skills
- mentor notes
- research artifacts
---
## New additions in this update
This update adds design material for:
- review-and-promotion workflow for learner-derived knowledge
- synthesis engine architecture
- updated README and FAQ language reflecting synthesis and knowledge reuse
See:
- `docs/review_and_promotion_workflow.md`
- `docs/synthesis_engine_architecture.md`
- `docs/api_outline.md`
- `docs/data_models.md`
- `FAQ.md`

View File

@ -0,0 +1,15 @@
# Didactopus Review Workbench + Synthesis Scaffold
This scaffold turns the review-and-promotion workflow and synthesis-engine design
into a concrete repository layer.
It adds:
- ORM models for knowledge candidates, reviews, promotions, and synthesis candidates
- repository helpers for triage, review, promotion, and archival
- API endpoints for the review workflow
- API endpoints for synthesis candidate generation and browsing
- a React UI prototype for a reviewer workbench
This is still a scaffold rather than a finished production implementation, but it
provides the structural backbone for the next stage of Didactopus.

8234
.zip_update_manifest.json Normal file

File diff suppressed because it is too large

View File

@ -1,9 +1,6 @@
 FROM python:3.11-slim
 WORKDIR /app
-COPY pyproject.toml README.md /app/
+COPY pyproject.toml /app/pyproject.toml
 COPY src /app/src
-COPY configs /app/configs
-COPY domain-packs /app/domain-packs
-RUN pip install --no-cache-dir -e .
-CMD ["python", "-m", "didactopus.main", "--domain", "statistics", "--goal", "practical mastery"]
+RUN pip install --no-cache-dir .
+CMD ["didactopus-api"]

88
FAQ.md Normal file
View File

@ -0,0 +1,88 @@
# Didactopus FAQ
## What is Didactopus for?
Didactopus helps represent learning as a knowledge graph with evidence, mastery,
artifacts, and reusable outputs. It supports both learners and the systems that
author, review, and improve learning materials.
## Is it only for AI learners?
No. It is built for:
- human learners
- AI learners
- hybrid workflows where AI and humans both contribute
## Why emphasize synthesis?
Because understanding often improves when learners recognize structural overlap
between different domains. Transfer, analogy, and conceptual reuse are central to
real intellectual progress.
Examples include:
- entropy in thermodynamics and information theory
- drift in population genetics and random walks
- feedback in engineering, biology, and machine learning
Didactopus tries to surface these overlaps rather than treating subjects as sealed
containers.
## Why not automatically trust learner-derived knowledge?
Learner-derived knowledge can be valuable, but it still needs review,
validation, and provenance. A learner may discover something surprising and
useful, but the system should preserve both usefulness and caution.
## What can learner-derived knowledge become?
Depending on review outcome, it can be promoted into:
- accepted pack improvements
- curriculum drafts
- reusable skill bundles
- archived but unadopted suggestions
## What is the review-and-promotion workflow?
It is the process by which exported learner observations are triaged, reviewed,
validated, and either promoted or archived.
## What is the synthesis engine?
The synthesis engine analyzes concept graphs and learner evidence to identify
candidate conceptual overlaps, analogies, and transferable structures across
packs.
## Can Didactopus produce traditional educational outputs?
Yes. Knowledge exports can seed:
- lesson outlines
- study guides
- exercise sets
- instructor notes
- curriculum maps
## Can Didactopus produce AI skill-like outputs?
Yes. Structured exports can support:
- skill manifests
- evaluation checklists
- failure-mode notes
- canonical examples
- prerequisite maps
## What happens to artifacts over time?
Artifacts can be:
- retained
- archived
- expired
- soft-deleted
Retention policy support is included so temporary debugging products and durable
portfolio artifacts can be treated differently.

View File

@ -0,0 +1,13 @@
concepts:
  - id: prior-and-posterior
    title: Prior and Posterior
    description: Beliefs before and after evidence.
    prerequisites: []
  - id: posterior-analysis
    title: Posterior Analysis
    description: Beliefs before and after evidence.
    prerequisites: []
  - id: statistics-and-probability
    title: Statistics and Probability
    description: General overview.
    prerequisites: []

View File

@ -0,0 +1,5 @@
dimensions:
  - name: typography
    description: visual polish and typesetting
    evidence_types:
      - page layout

View File

@ -0,0 +1,5 @@
entry_schema:
  concept_id: str
  score: float
  dimension_mappings: {}
  evidence_type_mappings: {}

View File

@ -0,0 +1,3 @@
name: broken-pack
display_name: Broken Pack
version: 0.1.0-draft

View File

@ -0,0 +1 @@
projects: []

View File

@ -0,0 +1,9 @@
stages:
  - id: stage-1
    title: Foundations
    concepts:
      - statistics-and-probability
  - id: stage-2
    title: Advanced Inference
    concepts:
      - posterior-analysis

View File

@ -0,0 +1,5 @@
rubrics:
  - id: basic-rubric
    title: Basic Rubric
    criteria:
      - correctness

View File

@ -1,5 +1,8 @@
 review:
   default_reviewer: "Wesley R. Elsberry"
   allow_provisional_concepts: true
-  write_promoted_pack: true
-  write_review_ledger: true
+bridge:
+  host: "127.0.0.1"
+  port: 8765
+  registry_path: "workspace_registry.json"
+  default_workspace_root: "workspaces"

View File

@ -0,0 +1,53 @@
{
  "id": "bayes-pack",
  "title": "Bayesian Reasoning",
  "subtitle": "Probability, evidence, updating, and model criticism.",
  "level": "novice-friendly",
  "concepts": [
    {
      "id": "prior",
      "title": "Prior",
      "prerequisites": [],
      "masteryDimension": "mastery",
      "exerciseReward": "Prior badge earned"
    },
    {
      "id": "posterior",
      "title": "Posterior",
      "prerequisites": ["prior"],
      "masteryDimension": "mastery",
      "exerciseReward": "Posterior path opened"
    },
    {
      "id": "model-checking",
      "title": "Model Checking",
      "prerequisites": ["posterior"],
      "masteryDimension": "mastery",
      "exerciseReward": "Model-checking unlocked"
    }
  ],
  "onboarding": {
    "headline": "Start with a fast visible win",
    "body": "Read one short orientation, answer one guided question, and leave with your first mastery marker.",
    "checklist": [
      "Read the one-screen topic orientation",
      "Answer one guided exercise",
      "Write one explanation in your own words"
    ]
  },
  "compliance": {
    "sources": 2,
    "attributionRequired": true,
    "shareAlikeRequired": true,
    "noncommercialOnly": true,
    "flags": [
      "share-alike",
      "noncommercial",
      "excluded-third-party-content"
    ]
  }
}

View File

@ -0,0 +1,49 @@
{
  "id": "stats-pack",
  "title": "Introductory Statistics",
  "subtitle": "Descriptive statistics, sampling, and inference.",
  "level": "novice-friendly",
  "concepts": [
    {
      "id": "descriptive",
      "title": "Descriptive Statistics",
      "prerequisites": [],
      "masteryDimension": "mastery",
      "exerciseReward": "Descriptive tools unlocked"
    },
    {
      "id": "sampling",
      "title": "Sampling",
      "prerequisites": ["descriptive"],
      "masteryDimension": "mastery",
      "exerciseReward": "Sampling pathway opened"
    },
    {
      "id": "inference",
      "title": "Inference",
      "prerequisites": ["sampling"],
      "masteryDimension": "mastery",
      "exerciseReward": "Inference challenge unlocked"
    }
  ],
  "onboarding": {
    "headline": "Build your first useful data skill",
    "body": "You will learn one concept that immediately helps you summarize real data.",
    "checklist": [
      "See one worked example",
      "Compute one short example yourself",
      "Explain what the result means"
    ]
  },
  "compliance": {
    "sources": 1,
    "attributionRequired": true,
    "shareAlikeRequired": false,
    "noncommercialOnly": false,
    "flags": []
  }
}

View File

@ -1,21 +1,37 @@
 version: "3.9"
 services:
-  didactopus:
+  postgres:
+    image: postgres:16
+    environment:
+      POSTGRES_DB: didactopus
+      POSTGRES_USER: didactopus
+      POSTGRES_PASSWORD: didactopus-dev-password
+    ports:
+      - "5432:5432"
+    volumes:
+      - ./ops/postgres-data:/var/lib/postgresql/data
+  api:
     build: .
-    image: didactopus:dev
+    command: didactopus-api
     environment:
-      DIDACTOPUS_CONFIG: /app/configs/config.yaml
+      DIDACTOPUS_DATABASE_URL: postgresql+psycopg://didactopus:didactopus-dev-password@postgres:5432/didactopus
+      DIDACTOPUS_JWT_SECRET: change-me
+    ports:
+      - "8011:8011"
+    depends_on:
+      - postgres
     volumes:
       - ./:/app
-    working_dir: /app
-    command: python -m didactopus.main --domain "statistics" --goal "practical mastery"
-  ollama:
-    image: ollama/ollama:latest
-    profiles: ["local-llm"]
-    ports:
-      - "11434:11434"
-    volumes:
-      - ollama-data:/root/.ollama
-volumes:
-  ollama-data:
+  worker:
+    build: .
+    command: didactopus-worker
+    environment:
+      DIDACTOPUS_DATABASE_URL: postgresql+psycopg://didactopus:didactopus-dev-password@postgres:5432/didactopus
+      DIDACTOPUS_JWT_SECRET: change-me
+    depends_on:
+      - postgres
+    volumes:
+      - ./:/app

43
docs/api_outline.md Normal file
View File

@ -0,0 +1,43 @@
# API Outline
## Review-and-promotion workflow
### Candidate intake
- `POST /api/knowledge-candidates`
- `GET /api/knowledge-candidates`
- `GET /api/knowledge-candidates/{candidate_id}`
### Review
- `POST /api/knowledge-candidates/{candidate_id}/reviews`
- `GET /api/knowledge-candidates/{candidate_id}/reviews`
### Promotion
- `POST /api/knowledge-candidates/{candidate_id}/promote`
- `GET /api/promotions`
- `GET /api/promotions/{promotion_id}`
### Archive / reject
- `POST /api/knowledge-candidates/{candidate_id}/archive`
- `POST /api/knowledge-candidates/{candidate_id}/reject`
## Synthesis engine
### Candidate generation
- `POST /api/synthesis/run`
- `GET /api/synthesis/candidates`
- `GET /api/synthesis/candidates/{synthesis_id}`
### Clusters
- `GET /api/synthesis/clusters`
- `GET /api/synthesis/clusters/{cluster_id}`
### Promotion path
- `POST /api/synthesis/candidates/{synthesis_id}/promote`
## Artifact lifecycle additions
- `GET /api/artifacts/{artifact_id}/download`
- `POST /api/artifacts/{artifact_id}/retention`
- `DELETE /api/artifacts/{artifact_id}`
## Learner knowledge export
- `POST /api/learners/{learner_id}/knowledge-export/{pack_id}`

View File

@ -0,0 +1,19 @@
{
  "new_workstreams": [
    "review_and_promotion_workflow",
    "synthesis_engine"
  ],
  "promotion_targets": [
    "accepted_pack_improvements",
    "curriculum_drafts",
    "reusable_skill_bundles",
    "archived_unadopted_suggestions"
  ],
  "synthesis_signals": [
    "semantic_similarity",
    "structural_similarity",
    "learner_trajectory",
    "review_history",
    "novelty"
  ]
}

View File

@ -0,0 +1,35 @@
# Attribution and Provenance in Didactopus
A Didactopus pack that is built from external educational material should carry:
- source identity
- source URL
- creator / publisher
- license identifier
- license URL
- adaptation status
- attribution text
- exclusion notes
- retrieval date
## Why both machine-readable and human-readable artifacts?
Machine-readable provenance supports:
- validation
- export pipelines
- automated NOTICE/ATTRIBUTION generation
- future audit tools
Human-readable attribution supports:
- repository inspection
- redistribution review
- transparency for maintainers and learners
## Recommended policy
Every ingested source record should answer:
1. What is the source?
2. Who published it?
3. Under what license?
4. Was the source adapted, excerpted, transformed, or only referenced?
5. Are any subcomponents excluded from the main license?
6. What attribution text should be shown downstream?

View File

@ -0,0 +1,3 @@
# Coverage and Alignment QA
This layer asks whether a domain pack's instructional elements actually line up.

View File

@ -0,0 +1,13 @@
# Curriculum Path Quality Layer
This layer analyzes roadmap and project structure as a learner-facing progression.
## Current checks
- empty stages
- missing checkpoints
- unassessed concepts
- early capstone placement
- dead-end late stages
- stage-size imbalance
- abrupt prerequisite-load jumps

65
docs/data_models.md Normal file
View File

@ -0,0 +1,65 @@
# Data Model Outline
## New core entities
### KnowledgeCandidate
```json
{
  "candidate_id": "kc_001",
  "source_type": "learner_export",
  "source_artifact_id": 42,
  "learner_id": "learner_a",
  "pack_id": "stats_intro",
  "candidate_kind": "hidden_prerequisite",
  "title": "Variance may be an unstated prerequisite for standard deviation",
  "summary": "Learner evidence suggests an implicit conceptual dependency.",
  "structured_payload": {},
  "evidence_summary": "Repeated low-confidence performance.",
  "confidence_hint": 0.72,
  "novelty_score": 0.61,
  "synthesis_score": 0.58,
  "triage_lane": "pack_improvement",
  "current_status": "triaged"
}
```
### ReviewRecord
```json
{
  "review_id": "rv_001",
  "candidate_id": "kc_001",
  "reviewer_id": "mentor_1",
  "review_kind": "human_review",
  "verdict": "accept_pack_improvement",
  "rationale": "Supported by learner evidence and pack topology."
}
```
### PromotionRecord
```json
{
"promotion_id": "pr_001",
"candidate_id": "kc_001",
"promotion_target": "pack_improvement",
"target_object_id": "patch_014",
"promotion_status": "approved"
}
```
### SynthesisCandidate
```json
{
"synthesis_id": "syn_001",
"source_concept_id": "entropy_info",
"target_concept_id": "entropy_thermo",
"source_pack_id": "information_theory",
"target_pack_id": "thermodynamics",
"synthesis_kind": "cross_pack_similarity",
"score_total": 0.84,
"score_semantic": 0.88,
"score_structural": 0.71,
"score_trajectory": 0.55,
"score_review_history": 0.60,
"explanation": "These concepts share terminology and play analogous explanatory roles."
}
```

36
docs/draft-pack-import.md Normal file
View File

@ -0,0 +1,36 @@
# Draft-Pack Import Workflow
The draft-pack import workflow bridges ingestion output and review workspace setup.
## Why it exists
Without import support, users still have to manually:
- locate a generated draft pack
- create a workspace
- copy files into the right directory
- reopen the review tool
That is exactly the kind of startup friction Didactopus is supposed to reduce.
## Current scaffold
This revision adds:
- import API endpoint
- workspace-manager copy/import operation
- UI controls for creating a workspace and importing a draft pack path
## Import behavior
The current scaffold:
- creates the target workspace if needed
- copies the source draft-pack directory into `workspace/draft_pack/`
- updates workspace metadata
- allows the workspace to be opened immediately afterward
## Future work
- file picker integration
- import validation
- overwrite protection / confirmation
- pack schema validation before import
- duplicate import detection

View File

@ -1,55 +1,19 @@
# FAQ
-## Why does Didactopus need ingestion and review tools?
-Because useful course material often exists in forms that are difficult to activate for
-serious self-directed learning. The issue is not just availability of information; it is
-the effort required to transform that information into a usable learning domain.
+## Why add a workspace manager?
+Because the activation-energy problem is not just parsing content. It is also
+staying organized once several candidate domains and draft packs exist.
-## What problem is this trying to solve?
-A common problem is the **activation energy hump**:
-- the course exists
-- the notes exist
-- the syllabus exists
-- the learner is motivated
-- but the path from raw material to usable study structure is still too hard
-Didactopus is meant to reduce that hump.
+## What problem does this solve?
+It reduces the friction of:
+- tracking multiple projects
+- reopening previous review sessions
+- switching among draft packs
+- treating Didactopus like an actual working environment
-## Why not just read course webpages directly?
-Because mastery-oriented use needs structure:
-- concepts
-- prerequisites
-- projects
-- rubrics
-- review decisions
-- trust statuses
-Raw course pages do not usually provide these in a directly reusable form.
-## Why have a review UI?
-Because automated ingestion creates drafts, not final trusted packs. A reviewer still needs
-to make explicit curation decisions.
-## What can the SPA review UI do in this scaffold?
-- inspect concepts
-- edit trust status
-- edit notes
-- edit prerequisites
-- resolve conflicts
-- export a promoted reviewed pack
-## Is this already a full production UI?
-No. It is a local-first interactive scaffold with stable data contracts, suitable for
-growing into a stronger production interface.
-## Does Didactopus eliminate the need to think?
-No. The goal is to reduce startup friction and organizational overhead, not to replace
-judgment. The user or curator still decides what is trustworthy and how the domain should
-be shaped.
+## Does this help with online course ingestion?
+Yes. One of the barriers to using online course contents is that the setup work
+quickly becomes messy. A workspace manager helps turn that mess into a manageable process.

View File

@ -0,0 +1,63 @@
# Full Pack Validation
The full pack validator inspects the main Didactopus pack artifacts together.
## Purpose
Basic file checks are not enough. A pack can parse successfully while still being
internally inconsistent.
## Files checked
- `pack.yaml`
- `concepts.yaml`
- `roadmap.yaml`
- `projects.yaml`
- `rubrics.yaml`
## Current validation categories
### File and parse checks
- required files present
- YAML parseable
### Pack metadata checks
- `name`
- `display_name`
- `version`
### Concept checks
- concept list exists
- duplicate concept ids
- missing titles
- missing or very thin descriptions
### Roadmap checks
- stage list exists
- stage concepts refer to known concepts
### Project checks
- project list exists
- project prerequisite concepts refer to known concepts
### Rubric checks
- rubric list exists
- each rubric has `id`
- rubric has at least one criterion when present
## Output
Validation returns:
- blocking errors
- warnings
- structured summary counts
- import readiness
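The checks and output shape above can be sketched on pre-parsed structures. This is a reduced illustration (a few of the listed checks, on already-loaded data); real validation would also handle missing files and YAML parse failures.

```python
REQUIRED_FILES = ["pack.yaml", "concepts.yaml", "roadmap.yaml",
                  "projects.yaml", "rubrics.yaml"]
REQUIRED_PACK_FIELDS = ["name", "display_name", "version"]

def validate_pack(present_files, pack_meta, concepts):
    """Run a subset of the file, metadata, and concept checks listed above."""
    errors, warnings = [], []
    for f in REQUIRED_FILES:
        if f not in present_files:
            errors.append(f"missing file: {f}")
    for key in REQUIRED_PACK_FIELDS:
        if not pack_meta.get(key):
            errors.append(f"missing pack field: {key}")
    seen = set()
    for c in concepts:
        if c["id"] in seen:
            errors.append(f"duplicate concept id: {c['id']}")
        seen.add(c["id"])
        if len(c.get("description", "")) < 20:  # illustrative "thin" threshold
            warnings.append(f"thin description: {c['id']}")
    return {"errors": errors, "warnings": warnings,
            "summary": {"concepts": len(concepts)},
            "import_ready": not errors}  # blocking errors gate import readiness
```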
## Future work
- cross-pack dependency validation
- mastery-profile validation
- stronger rubric schema
- semantic duplicate detection
- prerequisite cycle detection
- version compatibility checks

View File

@ -0,0 +1,41 @@
# Graph-Aware Prerequisite Analysis
This layer analyzes Didactopus packs as directed graphs over concept dependencies.
## Purpose
File validation asks whether a pack parses.
Structural validation asks whether pack artifacts agree.
Semantic QA asks whether a pack looks educationally plausible.
Graph-aware analysis asks whether the concept dependency structure itself looks healthy.
## Current checks
### Cycle detection
Flags direct or indirect prerequisite cycles.
### Isolated concept detection
Flags concepts with no incoming and no outgoing prerequisite edges.
### Bottleneck detection
Flags concepts with unusually many downstream dependents.
### Flat-domain heuristic
Flags packs where there are too few prerequisite edges relative to concept count.
### Deep-chain heuristic
Flags long prerequisite chains that may indicate over-fragmentation.
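The first of these checks, cycle detection, can be sketched with a depth-first search over the prerequisite edges; the other heuristics need separate passes.

```python
def find_cycles(prereqs):
    """Return concepts that participate in a prerequisite cycle.

    `prereqs` maps concept id -> list of prerequisite ids.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {c: WHITE for c in prereqs}
    in_cycle = set()

    def visit(node, stack):
        color[node] = GRAY
        stack.append(node)
        for dep in prereqs.get(node, []):
            if color.get(dep, WHITE) == GRAY:
                # Back edge: everything from dep onward in the stack is cyclic.
                in_cycle.update(stack[stack.index(dep):])
            elif color.get(dep, WHITE) == WHITE:
                color.setdefault(dep, WHITE)
                visit(dep, stack)
        stack.pop()
        color[node] = BLACK

    for c in list(prereqs):
        if color[c] == WHITE:
            visit(c, [])
    return sorted(in_cycle)
```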
## Output
Returns:
- graph warnings
- summary counts
- structural graph metrics
## Future work
- weighted edge confidence
- strongly connected component summaries
- pack-to-pack dependency overlays
- learner-profile-aware path complexity scoring

37
docs/import-validation.md Normal file
View File

@ -0,0 +1,37 @@
# Import Validation and Safety
The import validation layer sits between generated draft packs and managed
review workspaces.
## Why it exists
Importing should not be a blind file copy. Users need to know whether a draft
pack appears structurally usable before it is brought into a workspace.
## Current checks
The scaffold validates:
- presence of required files
- parseability of `pack.yaml`
- parseability of `concepts.yaml`
- basic pack metadata fields
- concept count
- overwrite risk for target workspace
## Current outputs
The preview step returns:
- `ok`
- blocking errors
- warnings
- pack summary
- overwrite warning
- import readiness flag
## Future work
- stronger schema validation
- version compatibility checks against Didactopus core
- validation of roadmap/projects/rubrics coherence
- file diff preview when overwriting
- conflict-aware import merge rather than replacement copy

View File

@ -0,0 +1,45 @@
# Course-Ingestion Compliance Notes
Didactopus domain packs may be derived from licensed educational sources.
That means the ingestion pipeline should preserve enough information to support:
- attribution
- license URL retention
- adaptation status
- share-alike / noncommercial flags
- explicit exclusion handling for third-party content
- downstream auditability
## Recommended source record fields
Each ingested source should carry:
- source ID
- title
- URL
- publisher
- creator
- license ID
- license URL
- retrieval date
- adaptation flag
- attribution text
- exclusion flag
- exclusion notes
## Pack-level compliance fields
A derived pack should carry:
- derived_from_sources
- restrictive_flags
- redistribution_notes
- attribution_required
- share_alike_required
- noncommercial_only
## MIT OCW-specific pattern
For MIT OpenCourseWare-derived packs, treat the course material as CC BY-NC-SA-licensed content while separately recording:
- third-party exclusions
- image/video exceptions
- linked-content exceptions
- any asset not safely covered by the course-level reuse assumption

14
docs/mit-ocw-notes.md Normal file
View File

@ -0,0 +1,14 @@
# MIT OpenCourseWare Notes
MIT OpenCourseWare publishes material under CC BY-NC-SA 4.0 on its terms page, while also warning
that some external or third-party linked content may be excluded from that license.
That means a Didactopus ingestion pipeline should not simply mark an entire pack as reusable without nuance.
Recommended handling:
- record MIT OCW course pages as licensed sources
- record individual excluded items explicitly when identified
- preserve the license URL in source metadata
- record whether Didactopus generated an adaptation
- generate an attribution artifact automatically
- propagate a noncommercial/sharealike flag in pack metadata when derived content is redistributed
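Generating the attribution artifact can be sketched directly from source records shaped like `samples/sources.yaml`. The output layout mirrors `samples/ATTRIBUTION.md`; the field names are assumptions drawn from those samples.

```python
def render_attribution(sources):
    """Render a human-readable attribution document from source records."""
    lines = ["# Attribution"]
    for s in sources:
        lines.append(f"## {s['title']}")
        lines.append(f"- License: {s['license_id']}")
        if s.get("license_url"):
            lines.append(f"- License URL: {s['license_url']}")
        if s.get("excluded_from_upstream_license"):
            # Excluded items are tracked for audit, never reused as pack content.
            lines.append("- Excluded from upstream course license: yes")
        lines.append(f"- Attribution text: {s['attribution_text']}")
    return "\n".join(lines)
```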

View File

@ -0,0 +1,332 @@
# Review-and-Promotion Workflow for Learner-Derived Knowledge
## Purpose
Learner-derived knowledge should move through a controlled path from raw
observation to reusable system asset. This workflow is designed to turn exports
into reviewed candidates that can become:
- accepted pack improvements
- curriculum drafts
- reusable skill bundles
- archived but unadopted suggestions
## Design goals
- preserve learner discoveries without assuming correctness
- support reviewer triage and provenance
- separate candidate knowledge from accepted knowledge
- allow multiple promotion targets
- keep enough traceability to understand why a candidate was accepted or rejected
---
## Workflow stages
### 1. Capture
Input sources include:
- learner knowledge exports
- mentor observations
- evaluator traces
- synthesis-engine proposals
- artifact-derived observations
Output:
- one or more **knowledge candidates**
### 2. Normalize
Convert raw export text and metadata into structured candidate records, such as:
- concept observation
- hidden prerequisite suggestion
- misconception note
- analogy / cross-pack link suggestion
- curriculum draft fragment
- skill-bundle candidate
### 3. Triage
Each candidate is routed into a review lane:
- pack improvement
- curriculum draft
- skill bundle
- archive / backlog
Triage criteria:
- relevance to existing packs
- novelty
- evidence quality
- reviewer priority
- confidence / ambiguity
### 4. Review
Human or automated reviewers inspect the candidate.
Reviewer questions:
- is the claim coherent?
- is it genuinely new or just a restatement?
- does evidence support it?
- does it fit one or more promotion targets?
- what are the risks if promoted?
### 5. Decision
Possible outcomes:
- accept into pack improvement queue
- promote to curriculum draft
- promote to skill bundle draft
- archive but keep discoverable
- reject as invalid / duplicate / unsupported
### 6. Promotion
Accepted items are transformed into target-specific assets:
- pack patch proposal
- curriculum draft object
- skill bundle object
### 7. Feedback and provenance
Every decision stores:
- source export
- source learner
- source pack
- reviewer identity
- rationale
- timestamps
- superseding links if a later decision replaces an earlier one
---
## Target lanes
### A. Accepted pack improvements
Typical promoted items:
- missing prerequisite
- poor concept ordering
- missing example
- misleading terminology
- clearer analogy
- cross-pack link worth formalizing
Output objects:
- patch proposals
- revised concept metadata
- candidate new edges
- explanation replacement suggestions
Recommended fields:
- pack_id
- concept_ids_affected
- patch_type
- proposed_change
- evidence_summary
- reviewer_notes
- promotion_status
### B. Curriculum drafts
Typical promoted items:
- lesson outline
- concept progression plan
- exercise cluster
- misconceptions guide
- capstone prompt
- study guide segment
Output objects:
- draft lessons
- outline sections
- teacher notes
- question banks
Recommended fields:
- curriculum_product_type
- topic_focus
- target_audience
- prerequisite_level
- source_concepts
- generated_draft
- editorial_notes
### C. Reusable skill bundles
Typical promoted items:
- concept mastery checklist
- canonical examples
- error patterns
- prerequisite structure
- evaluation rubrics
- recommended actions
Output objects:
- skill manifest
- skill tests
- skill examples
- operational notes
Recommended fields:
- skill_name
- target_domain
- prerequisites
- expected_inputs
- failure_modes
- validation_checks
- source_pack_links
### D. Archived but unadopted suggestions
Some observations should remain searchable even if not promoted.
Use this lane when:
- evidence is interesting but incomplete
- idea is plausible but low priority
- reviewer is uncertain
- concept does not fit a current roadmap
- duplication risk exists but insight might still help later
Recommended fields:
- archive_reason
- potential_future_use
- reviewer_notes
- related_packs
- revisit_after
---
## Core data model
### KnowledgeCandidate
- candidate_id
- source_type
- source_artifact_id
- learner_id
- pack_id
- candidate_kind
- title
- summary
- structured_payload
- evidence_summary
- confidence_hint
- novelty_score
- synthesis_score
- triage_lane
- current_status
- created_at
### ReviewRecord
- review_id
- candidate_id
- reviewer_id
- review_kind
- verdict
- rationale
- requested_changes
- created_at
### PromotionRecord
- promotion_id
- candidate_id
- promotion_target
- target_object_id
- promotion_status
- promoted_by
- created_at
### CandidateLink
- link_id
- candidate_id
- related_candidate_id
- relation_kind
- note
---
## Suggested states
Candidate states:
- captured
- normalized
- triaged
- under_review
- accepted
- promoted
- archived
- rejected
Pack improvement states:
- proposed
- approved
- merged
- superseded
Curriculum draft states:
- draft
- editorial_review
- approved
- published
Skill bundle states:
- draft
- validation
- approved
- deployed
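The candidate states above can be enforced with a small transition check. The document lists states but not legal transitions, so the edges below are assumptions following the workflow-stage order.

```python
# Assumed legal transitions between candidate states; the doc specifies the
# states themselves but leaves the transition graph implicit.
CANDIDATE_TRANSITIONS = {
    "captured": {"normalized"},
    "normalized": {"triaged"},
    "triaged": {"under_review"},
    "under_review": {"accepted", "archived", "rejected"},
    "accepted": {"promoted"},
    "promoted": set(),
    "archived": set(),
    "rejected": set(),
}

def advance(state, new_state):
    """Validate a candidate state transition before recording it."""
    if new_state not in CANDIDATE_TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state
```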
---
## Promotion rules
### Pack improvements
Promote when:
- directly improves pack clarity or structure
- supported by evidence or synthesis signal
- low risk of destabilizing pack semantics
### Curriculum drafts
Promote when:
- pedagogically useful even if not strictly a pack change
- enough material exists to support a lesson, guide, or exercise group
### Skill bundles
Promote when:
- insight can be operationalized into a reusable structured behavior package
- prerequisites, examples, and evaluation logic are sufficiently clear
### Archive
Use when:
- the idea is promising but under-evidenced
- better future context may make it valuable
- reviewer wants traceability without immediate adoption
---
## Review UX recommendations
Reviewer interface should show:
- candidate summary
- source artifact and export trace
- related concepts and packs
- novelty score
- synthesis score
- suggested promotion targets
- side-by-side comparison with current pack text
- one-click actions for:
- accept as pack improvement
- promote to curriculum draft
- promote to skill bundle
- archive
- reject
---
## Integration with synthesis engine
Synthesis proposals should enter the same workflow as learner-derived candidates.
This creates a unified promotion pipeline for:
- human observations
- AI learner observations
- automated synthesis discoveries

44
docs/semantic-qa.md Normal file
View File

@ -0,0 +1,44 @@
# Semantic QA Layer
The semantic QA layer sits above structural validation.
## Purpose
Structural validation tells us whether a pack is syntactically and referentially
coherent. Semantic QA asks whether it also looks *educationally plausible*.
## Current checks
### Near-duplicate title check
Flags concept titles that are lexically very similar.
### Over-broad concept check
Flags titles that look unusually broad or compound, such as:
- "Prior and Posterior"
- "Statistics and Probability"
- "Modeling and Inference"
### Description similarity check
Flags concepts whose descriptions appear highly similar.
### Missing bridge concept check
Looks at successive roadmap stages and flags abrupt jumps where later stages do
not seem to share enough semantic continuity with earlier stages.
### Thin prerequisite chain check
Flags advanced-sounding concepts that have zero or very few prerequisites.
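The near-duplicate title check above can be sketched with stdlib tooling. `SequenceMatcher` is a stand-in for whatever lexical similarity the scaffold actually uses (embedding-backed similarity is listed as future work), and the threshold is illustrative.

```python
from difflib import SequenceMatcher

def near_duplicate_titles(titles, threshold=0.85):
    """Flag pairs of concept titles that are lexically very similar."""
    flagged = []
    normed = [(t, t.lower().strip()) for t in titles]
    for i in range(len(normed)):
        for j in range(i + 1, len(normed)):
            ratio = SequenceMatcher(None, normed[i][1], normed[j][1]).ratio()
            if ratio >= threshold:
                flagged.append((normed[i][0], normed[j][0]))
    return flagged
```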
## Output
Semantic QA returns:
- warnings
- summary counts
- finding categories
## Future work
- embedding-backed similarity
- prerequisite cycle suspicion scoring
- topic-cluster coherence
- cross-pack semantic overlap
- domain-specific semantic QA plugins

View File

@ -0,0 +1,291 @@
# Synthesis Engine Architecture
## Purpose
The synthesis engine identifies potentially useful conceptual overlaps across
packs, topics, and learning trajectories. Its goal is to help learners and
maintainers discover connections that improve understanding of the topic of
interest.
This is not merely a recommendation engine. It is a **cross-domain structural
discovery system**.
---
## Design goals
- identify meaningful connections across packs
- support analogy, transfer, and hidden-prerequisite discovery
- generate reviewer-friendly candidate proposals
- improve pack quality and curriculum design
- capture surprising learner or AI discoveries
- expose synthesis to users visually and operationally
---
## Kinds of synthesis targets
### 1. Cross-pack concept similarity
Examples:
- entropy ↔ entropy
- drift ↔ random walk
- selection pressure ↔ optimization pressure
### 2. Structural analogy
Examples:
- feedback loops in control theory and ecology
- graph search and evolutionary exploration
- signal detection in acoustics and statistical inference
### 3. Hidden prerequisite discovery
If learners repeatedly fail on a concept despite nominal prerequisites, a
missing dependency may exist.
### 4. Example transfer
A concept may become easier to understand when illustrated by examples from
another pack.
### 5. Skill transfer
A skill bundle from one domain may partially apply in another domain.
---
## Data model
### ConceptNode
- concept_id
- pack_id
- title
- description
- prerequisites
- tags
- examples
- glossary terms
- vector embedding
- graph neighborhood signature
### SynthesisCandidate
- synthesis_id
- source_concept_id
- target_concept_id
- source_pack_id
- target_pack_id
- synthesis_kind
- score_total
- score_semantic
- score_structural
- score_trajectory
- score_review_history
- explanation
- evidence
- current_status
### SynthesisCluster
Represents a small group of mutually related concepts across packs.
Fields:
- cluster_id
- member_concepts
- centroid_embedding
- theme_label
- notes
### HiddenPrerequisiteCandidate
- source_concept_id
- suspected_missing_prerequisite_id
- signal_strength
- supporting_fail_patterns
- reviewer_status
---
## Scoring methods
The engine should combine multiple signals.
### A. Semantic similarity score
Source:
- concept text
- glossary
- examples
- descriptions
- optional embeddings
Methods:
- cosine similarity on embeddings
- term overlap
- phrase normalization
- ontology-aware synonyms if available
### B. Structural similarity score
Source:
- prerequisite neighborhoods
- downstream dependencies
- graph motif similarity
- role in pack topology
Examples:
- concepts that sit in similar graph positions
- concepts that unlock similar kinds of later work
### C. Learner trajectory score
Source:
- shared error patterns
- similar mastery progression
- evidence timing
- co-improvement patterns across learners
Examples:
- learners who master A often learn B faster
- failure on X predicts later trouble on Y
### D. Reviewer history score
Source:
- accepted past synthesis suggestions
- rejected patterns
- reviewer preference patterns
Use:
- prioritize candidate types with strong track record
### E. Novelty score
Purpose:
- avoid flooding reviewers with obvious or duplicate links
Methods:
- de-duplicate against existing pack links
- penalize near-duplicate proposals
- boost under-explored high-signal regions
---
## Composite score
Suggested first composite:
score_total =
0.35 * semantic_similarity
+ 0.25 * structural_similarity
+ 0.20 * trajectory_signal
+ 0.10 * review_prior
+ 0.10 * novelty
This weighting should remain configurable.
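A configurable version of the composite above is a small weighted sum; missing signals default to zero so partial scoring still works.

```python
# Weights from the suggested first composite; overridable per deployment.
DEFAULT_WEIGHTS = {
    "semantic": 0.35,
    "structural": 0.25,
    "trajectory": 0.20,
    "review_prior": 0.10,
    "novelty": 0.10,
}

def composite_score(signals, weights=None):
    """Compute score_total from component signals in [0, 1]."""
    weights = weights or DEFAULT_WEIGHTS
    return sum(w * signals.get(k, 0.0) for k, w in weights.items())
```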
---
## Discovery pipeline
### Step 1. Ingest graph and learner data
Inputs:
- packs
- concepts
- pack metadata
- learner states
- evidence histories
- artifacts
- knowledge exports
### Step 2. Compute concept features
For each concept:
- embedding
- prerequisite signature
- downstream signature
- learner-error signature
- example signature
### Step 3. Generate candidate pairs
Possible approaches:
- nearest neighbors in embedding space
- shared tag neighborhoods
- prerequisite motif matches
- frequent learner co-patterns
### Step 4. Re-rank candidates
Combine semantic, structural, and trajectory scores.
### Step 5. Group into synthesis clusters
Cluster related candidate pairs into themes such as:
- uncertainty
- feedback
- optimization
- conservation
- branching processes
### Step 6. Produce explanations
Each candidate should include a compact explanation, for example:
- “These concepts occupy similar prerequisite roles.”
- “Learner error patterns suggest a hidden shared dependency.”
- “Examples in pack A may clarify this concept in pack B.”
### Step 7. Send to review-and-promotion workflow
All candidates become reviewable objects rather than immediately modifying packs.
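Step 3's nearest-neighbor approach can be sketched with plain cosine similarity over concept embeddings. The brute-force pairwise loop and threshold are illustrative; a real pipeline would use an approximate nearest-neighbor index at scale.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def candidate_pairs(embeddings, min_similarity=0.8):
    """Propose cross-pack concept pairs by embedding similarity.

    `embeddings` maps (pack_id, concept_id) -> vector. Same-pack pairs are
    skipped because synthesis targets cross-pack links.
    """
    keys = list(embeddings)
    pairs = []
    for i in range(len(keys)):
        for j in range(i + 1, len(keys)):
            (pack_a, _), (pack_b, _) = keys[i], keys[j]
            if pack_a == pack_b:
                continue  # cross-pack synthesis only
            sim = cosine(embeddings[keys[i]], embeddings[keys[j]])
            if sim >= min_similarity:
                pairs.append((keys[i], keys[j], round(sim, 3)))
    return pairs
```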
---
## Outputs
The engine should emit candidate objects suitable for promotion into:
- cross-pack links
- pack improvement suggestions
- curriculum draft notes
- skill-bundle drafts
- archived synthesis notes
---
## UI visualization
### 1. Synthesis map
Graph overlay showing:
- existing cross-pack links
- proposed synthesis links
- confidence levels
- accepted vs candidate status
### 2. Candidate explanation panel
For a selected proposed link:
- why it was suggested
- component scores
- source evidence
- similar accepted proposals
- reviewer actions
### 3. Cluster view
Shows higher-level themes connecting multiple packs.
### 4. Learner pathway overlay
Allows a maintainer to see where synthesis would help a learner currently stuck in
one pack by borrowing examples or structures from another.
### 5. Promotion workflow integration
Every synthesis candidate can be:
- accepted as pack improvement
- converted to curriculum draft
- converted to skill bundle
- archived
- rejected
---
## Appropriate uses
The synthesis engine is especially useful for:
- interdisciplinary education
- transfer learning support
- AI learner introspection
- pack maintenance
- curriculum design
- discovery of hidden structure
---
## Cautions
- synthesis suggestions are candidate aids, not guaranteed truths
- semantic similarity alone is not enough
- over-linking can confuse learners
- reviewers need concise explanation and provenance
- accepted synthesis should be visible as intentional structure, not accidental clutter

View File

@ -0,0 +1,33 @@
# UI Visualization Notes
## Review workbench
Main panes:
1. Candidate queue
2. Candidate detail
3. Evidence/provenance panel
4. Promotion actions
5. Related synthesis suggestions
## Synthesis map
Features:
- zoomable concept supergraph
- accepted vs proposed links
- cross-pack color coding
- cluster highlighting
- filter by score, pack, theme
## Promotion dashboard
Views:
- pack improvement queue
- curriculum draft queue
- skill bundle queue
- archive browser
## Learner-facing synthesis hints
The learner view should be selective and helpful, not noisy.
Good uses:
- “This concept may connect to another pack you know.”
- “An analogy from another topic may help here.”
- “Learners like you often benefit from this bridge concept.”

34
docs/ux-notes.md Normal file
View File

@ -0,0 +1,34 @@
# UX Notes for Human Learners
## First-session design
The first session should:
- fit on one screen when possible
- ask for one meaningful action
- generate one visible success marker
## Recommended interface pattern
A learner-facing UI should emphasize:
- one main recommendation card
- one secondary reinforcement task
- a small visible mastery map
- a plain-language "why this next?" explanation
## Fun / rewarding elements
Good candidates:
- concept unlock animations
- progress rings
- milestone badges
- encouraging plain-language summaries
- capstone readiness meter
Avoid:
- gamification that obscures meaning
- complicated dashboards on first use
- forcing users to interpret opaque confidence math
## Human-readable learner state
Always expose:
- what changed
- why it changed
- what is still missing
- what the next sensible step is

22
docs/workspace-manager.md Normal file
View File

@ -0,0 +1,22 @@
# Workspace Manager
The workspace manager provides project-level organization for Didactopus review work.
## Why it exists
Without a workspace layer, users still have to manually track:
- which draft packs exist
- where they live
- which one is currently being reviewed
- which ones have promoted outputs
That creates unnecessary friction.
## Features in this scaffold
- workspace registry file
- create workspace
- list workspaces
- open a specific workspace
- track recent workspaces
- expose these through a local bridge API

View File

@ -0,0 +1,10 @@
concepts:
- id: prior
title: Prior
prerequisites: []
- id: posterior
title: Posterior
prerequisites: [prior]
- id: model-checking
title: Model Checking
prerequisites: [posterior]

10
example-pack/pack.yaml Normal file
View File

@ -0,0 +1,10 @@
name: bayes-pack
display_name: Bayesian Reasoning
description: Probability, evidence, updating, and model criticism.
audience_level: novice-friendly
first_session_headline: Start with a fast visible win
first_session_body: Read one short orientation, answer one guided question, and leave with your first mastery marker.
first_session_checklist:
- Read the one-screen topic orientation
- Answer one guided exercise
- Write one explanation in your own words

View File

@ -0,0 +1,20 @@
{
"pack_id": "bayes-pack",
"display_name": "Bayesian Reasoning",
"derived_from_sources": [
"mit-ocw-bayes",
"excluded-figure"
],
"attribution_required": true,
"share_alike_required": true,
"noncommercial_only": true,
"restrictive_flags": [
"share-alike",
"noncommercial",
"excluded-third-party-content"
],
"redistribution_notes": [
"Derived redistributable material may need to remain under the same license family.",
"Derived redistributable material may be limited to noncommercial use."
]
}

View File

@ -0,0 +1,21 @@
concepts:
- id: bayes-prior
title: Bayes Prior
description: Prior beliefs before evidence in a probabilistic model.
prerequisites: []
mastery_signals:
- Explain a prior distribution.
- id: bayes-posterior
title: Bayes Posterior
description: Updated beliefs after evidence in a probabilistic model.
prerequisites:
- bayes-prior
mastery_signals:
- Compare prior and posterior beliefs.
- id: model-checking
title: Model Checking
description: Evaluate whether model assumptions and fit remain plausible.
prerequisites:
- bayes-posterior
mastery_signals:
- Critique a model fit.

View File

@ -0,0 +1,3 @@
# Conflict Report
- Example imported conflict.

View File

@ -0,0 +1,8 @@
dimensions:
- name: explanation
description: quality of explanation
- name: comparison
description: quality of comparison
evidence_types:
- explanation
- comparison report

View File

@ -0,0 +1,15 @@
entry_schema:
concept_id: str
dimension: str
score: float
confidence: float
last_updated: datetime
dimension_mappings:
explanation: explanation
comparison: comparison
evidence_type_mappings:
explanation: text_artifact
comparison report: project_artifact
confidence_update:
method: weighted_average
decay: 0.05
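One plausible reading of the `weighted_average` update with `decay: 0.05` is sketched below. The exact semantics are not specified in the schema, so treat this as an assumption: prior confidence decays before new evidence is blended in by confidence weight.

```python
def update_entry(entry, new_score, new_confidence, decay=0.05):
    """Blend a new observation into a ledger entry by confidence weight.

    Assumed semantics: decay the stored confidence, then take a weighted
    average of old and new scores, capping combined confidence at 1.0.
    """
    old_conf = max(entry["confidence"] - decay, 0.0)
    total = old_conf + new_confidence
    if total == 0:
        return entry  # no usable evidence either side; leave unchanged
    score = (entry["score"] * old_conf + new_score * new_confidence) / total
    return {**entry, "score": round(score, 4), "confidence": min(total, 1.0)}
```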

3
generated-pack/pack.yaml Normal file
View File

@ -0,0 +1,3 @@
name: imported-pack
display_name: Imported Pack
version: 0.1.0-draft

View File

@ -0,0 +1,8 @@
projects:
- id: compare-beliefs
title: Compare Prior and Posterior
prerequisites:
- bayes-prior
- bayes-posterior
deliverables:
- short report

View File

@ -0,0 +1,3 @@
# Review Report
- Example imported review flag.

View File

@ -0,0 +1,13 @@
stages:
- id: stage-1
title: Prior Beliefs
concepts:
- bayes-prior
- id: stage-2
title: Posterior Updating
concepts:
- bayes-posterior
- id: stage-3
title: Model Checking
concepts:
- model-checking

View File

@ -0,0 +1,6 @@
rubrics:
- id: basic-rubric
title: Basic Rubric
criteria:
- correctness
- explanation

View File

@ -5,7 +5,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "didactopus"
version = "0.1.0"
-description = "Didactopus: interactive review UI scaffold"
+description = "Didactopus: workspace manager for local review UI"
readme = "README.md"
requires-python = ">=3.10"
license = {text = "MIT"}
@ -16,7 +16,7 @@ dependencies = ["pydantic>=2.7", "pyyaml>=6.0"]
dev = ["pytest>=8.0", "ruff>=0.6"]
[project.scripts]
-didactopus-review = "didactopus.main:main"
+didactopus-review-bridge = "didactopus.review_bridge_server:main"
[tool.setuptools.packages.find]
where = ["src"]

23
samples/ATTRIBUTION.md Normal file
View File

@ -0,0 +1,23 @@
# Attribution
## Example MIT OpenCourseWare Course Page
- Source ID: mit-ocw-bayes-demo
- URL: https://ocw.mit.edu/courses/example-course/
- Creator: MIT OpenCourseWare
- Publisher: Massachusetts Institute of Technology
- License: CC BY-NC-SA 4.0
- License URL: https://creativecommons.org/licenses/by-nc-sa/4.0/
- Adapted: yes
- Adaptation notes: Didactopus extracted topic structure, concepts, and exercise prompts into a derived domain pack.
- Attribution text: Derived in part from MIT OpenCourseWare material, used under CC BY-NC-SA 4.0.
## Example Excluded Third-Party Item
- Source ID: mit-ocw-third-party-note
- URL: https://ocw.mit.edu/courses/example-course/pages/lecture-videos/
- Creator: Third-party rights holder
- Publisher: Massachusetts Institute of Technology
- License: third-party-excluded
- Adapted: no
- Attribution text: Referenced only for exclusion tracking; not reused in redistributed Didactopus artifacts.
- Excluded from upstream course license: yes
- Exclusion notes: This item was flagged as excluded from the OCW Creative Commons license and should not be redistributed as pack content.

10
samples/concepts.yaml Normal file
View File

@ -0,0 +1,10 @@
concepts:
- id: bayes-prior
title: Bayes Prior
prerequisites: []
- id: bayes-posterior
title: Bayes Posterior
prerequisites: [bayes-prior]
- id: model-checking
title: Model Checking
prerequisites: [bayes-posterior]

View File

@ -0,0 +1,14 @@
{
"learner_id": "demo-learner",
"records": [
{
"concept_id": "bayes-prior",
"dimension": "mastery",
"score": 0.81,
"confidence": 0.72,
"evidence_count": 3,
"last_updated": "2026-03-13T12:00:00+00:00"
}
],
"history": []
}

View File

@ -0,0 +1,9 @@
{
"source_count": 2,
"licenses_present": [
"CC BY-NC-SA 4.0",
"third-party-excluded"
],
"excluded_source_count": 1,
"adapted_source_count": 1
}

26
samples/sources.yaml Normal file
View File

@ -0,0 +1,26 @@
sources:
- source_id: mit-ocw-bayes
title: Example MIT OpenCourseWare Bayesian Materials
url: https://ocw.mit.edu/courses/example-course/
publisher: Massachusetts Institute of Technology
creator: MIT OpenCourseWare
license_id: CC BY-NC-SA 4.0
license_url: https://creativecommons.org/licenses/by-nc-sa/4.0/
retrieved_at: 2026-03-13
adapted: true
attribution_text: Derived in part from MIT OpenCourseWare material used under CC BY-NC-SA 4.0.
excluded_from_upstream_license: false
exclusion_notes: ""
- source_id: excluded-figure
title: Example Third-Party Figure
url: https://ocw.mit.edu/courses/example-course/pages/lecture-videos/
publisher: Massachusetts Institute of Technology
creator: Third-party rights holder
license_id: third-party-excluded
license_url: ""
retrieved_at: 2026-03-13
adapted: false
attribution_text: Tracked for exclusion; not reused in redistributed pack content.
excluded_from_upstream_license: true
exclusion_notes: Figure flagged as excluded from the course-level Creative Commons license.

src/didactopus/api.py Normal file

@ -0,0 +1,150 @@
from __future__ import annotations
from fastapi import FastAPI, HTTPException, Header, Depends
from fastapi.middleware.cors import CORSMiddleware
import uvicorn
from .db import Base, engine
from .models import (
    LoginRequest, TokenPair, KnowledgeCandidateCreate, KnowledgeCandidateUpdate,
    ReviewCreate, PromoteRequest, SynthesisRunRequest, SynthesisPromoteRequest,
    CreateLearnerRequest
)
from .repository import (
    authenticate_user, get_user_by_id, create_learner, learner_owned_by_user,
    create_candidate, list_candidates, get_candidate, update_candidate,
    create_review, list_reviews, create_promotion, list_promotions,
    list_synthesis_candidates, get_synthesis_candidate
)
from .auth import issue_access_token, issue_refresh_token, decode_token, new_token_id
from .synthesis import generate_synthesis_candidates

Base.metadata.create_all(bind=engine)
app = FastAPI(title="Didactopus Review Workbench API")
app.add_middleware(CORSMiddleware, allow_origins=["*"], allow_credentials=True, allow_methods=["*"], allow_headers=["*"])

# In-memory map of refresh-token id (jti) -> user id; cleared on process restart.
_refresh_tokens = {}

def current_user(authorization: str = Header(default="")):
    token = authorization.removeprefix("Bearer ").strip()
    payload = decode_token(token) if token else None
    if not payload or payload.get("kind") != "access":
        raise HTTPException(status_code=401, detail="Unauthorized")
    user = get_user_by_id(int(payload["sub"]))
    if user is None or not user.is_active:
        raise HTTPException(status_code=401, detail="Unauthorized")
    return user

def require_reviewer(user=Depends(current_user)):
    if user.role not in {"admin", "reviewer"}:
        raise HTTPException(status_code=403, detail="Reviewer role required")
    return user

@app.post("/api/login", response_model=TokenPair)
def login(payload: LoginRequest):
    user = authenticate_user(payload.username, payload.password)
    if user is None:
        raise HTTPException(status_code=401, detail="Invalid credentials")
    token_id = new_token_id()
    _refresh_tokens[token_id] = user.id
    return TokenPair(
        access_token=issue_access_token(user.id, user.username, user.role),
        refresh_token=issue_refresh_token(user.id, user.username, user.role, token_id),
        username=user.username,
        role=user.role,
    )

@app.post("/api/learners")
def api_create_learner(payload: CreateLearnerRequest, user=Depends(current_user)):
    create_learner(user.id, payload.learner_id, payload.display_name)
    return {"ok": True, "learner_id": payload.learner_id}

@app.post("/api/knowledge-candidates")
def api_create_candidate(payload: KnowledgeCandidateCreate, reviewer=Depends(require_reviewer)):
    candidate_id = create_candidate(payload)
    return {"candidate_id": candidate_id}

@app.get("/api/knowledge-candidates")
def api_list_candidates(reviewer=Depends(require_reviewer)):
    return list_candidates()

@app.get("/api/knowledge-candidates/{candidate_id}")
def api_get_candidate(candidate_id: int, reviewer=Depends(require_reviewer)):
    row = get_candidate(candidate_id)
    if row is None:
        raise HTTPException(status_code=404, detail="Candidate not found")
    return row

@app.post("/api/knowledge-candidates/{candidate_id}/update")
def api_update_candidate(candidate_id: int, payload: KnowledgeCandidateUpdate, reviewer=Depends(require_reviewer)):
    row = update_candidate(candidate_id, triage_lane=payload.triage_lane, current_status=payload.current_status)
    if row is None:
        raise HTTPException(status_code=404, detail="Candidate not found")
    return {"candidate_id": row.id, "triage_lane": row.triage_lane, "current_status": row.current_status}

@app.post("/api/knowledge-candidates/{candidate_id}/reviews")
def api_create_review(candidate_id: int, payload: ReviewCreate, reviewer=Depends(require_reviewer)):
    if get_candidate(candidate_id) is None:
        raise HTTPException(status_code=404, detail="Candidate not found")
    review_id = create_review(candidate_id, reviewer.id, payload)
    return {"review_id": review_id}

@app.get("/api/knowledge-candidates/{candidate_id}/reviews")
def api_list_reviews(candidate_id: int, reviewer=Depends(require_reviewer)):
    return list_reviews(candidate_id)

@app.post("/api/knowledge-candidates/{candidate_id}/promote")
def api_promote_candidate(candidate_id: int, payload: PromoteRequest, reviewer=Depends(require_reviewer)):
    if get_candidate(candidate_id) is None:
        raise HTTPException(status_code=404, detail="Candidate not found")
    promotion_id = create_promotion(candidate_id, reviewer.id, payload)
    return {"promotion_id": promotion_id}

@app.get("/api/promotions")
def api_list_promotions(reviewer=Depends(require_reviewer)):
    return list_promotions()

@app.post("/api/synthesis/run")
def api_run_synthesis(payload: SynthesisRunRequest, reviewer=Depends(require_reviewer)):
    created = generate_synthesis_candidates(payload.source_pack_id, payload.target_pack_id, payload.limit)
    return {"created_count": len(created), "synthesis_ids": created}

@app.get("/api/synthesis/candidates")
def api_list_synthesis(reviewer=Depends(require_reviewer)):
    return list_synthesis_candidates()

@app.get("/api/synthesis/candidates/{synthesis_id}")
def api_get_synthesis(synthesis_id: int, reviewer=Depends(require_reviewer)):
    row = get_synthesis_candidate(synthesis_id)
    if row is None:
        raise HTTPException(status_code=404, detail="Synthesis candidate not found")
    return row

@app.post("/api/synthesis/candidates/{synthesis_id}/promote")
def api_promote_synthesis(synthesis_id: int, payload: SynthesisPromoteRequest, reviewer=Depends(require_reviewer)):
    syn = get_synthesis_candidate(synthesis_id)
    if syn is None:
        raise HTTPException(status_code=404, detail="Synthesis candidate not found")
    candidate_id = create_candidate(KnowledgeCandidateCreate(
        source_type="synthesis_engine",
        source_artifact_id=None,
        learner_id="system",
        pack_id=syn["source_pack_id"],
        candidate_kind="synthesis_proposal",
        title=f"Synthesis: {syn['source_concept_id']} → {syn['target_concept_id']}",
        summary=syn["explanation"],
        structured_payload=syn,
        evidence_summary="Promoted from synthesis engine candidate",
        confidence_hint=syn["score_total"],
        novelty_score=syn["evidence"].get("novelty", 0.0),
        synthesis_score=syn["score_total"],
        triage_lane=payload.promotion_target,
    ))
    promotion_id = create_promotion(candidate_id, reviewer.id, PromoteRequest(
        promotion_target=payload.promotion_target,
        target_object_id="",
        promotion_status="approved",
    ))
    return {"candidate_id": candidate_id, "promotion_id": promotion_id}

def main():
    uvicorn.run(app, host="127.0.0.1", port=8011)


@ -0,0 +1,47 @@
from __future__ import annotations
from pathlib import Path
import argparse
from .provenance import load_sources, write_provenance_manifest

def render_attribution_markdown(sources_path: str | Path) -> str:
    inventory = load_sources(sources_path)
    lines = ["# Attribution", ""]
    for src in inventory.sources:
        lines.append(f"## {src.title}")
        lines.append(f"- Source ID: {src.source_id}")
        lines.append(f"- URL: {src.url}")
        if src.creator:
            lines.append(f"- Creator: {src.creator}")
        if src.publisher:
            lines.append(f"- Publisher: {src.publisher}")
        if src.license_id:
            lines.append(f"- License: {src.license_id}")
        if src.license_url:
            lines.append(f"- License URL: {src.license_url}")
        lines.append(f"- Adapted: {'yes' if src.adapted else 'no'}")
        if src.adaptation_notes:
            lines.append(f"- Adaptation notes: {src.adaptation_notes}")
        if src.attribution_text:
            lines.append(f"- Attribution text: {src.attribution_text}")
        if src.excluded_from_upstream_license:
            lines.append("- Excluded from upstream course license: yes")
        if src.exclusion_notes:
            lines.append(f"- Exclusion notes: {src.exclusion_notes}")
        lines.append("")
    return "\n".join(lines)

def build_artifacts(sources_path: str | Path, attribution_out: str | Path, manifest_out: str | Path) -> None:
    Path(attribution_out).write_text(render_attribution_markdown(sources_path), encoding="utf-8")
    inventory = load_sources(sources_path)
    write_provenance_manifest(inventory, manifest_out)

def main() -> None:
    parser = argparse.ArgumentParser(description="Build Didactopus attribution artifacts from sources.yaml")
    parser.add_argument("sources")
    parser.add_argument("--attribution-out", default="ATTRIBUTION.md")
    parser.add_argument("--manifest-out", default="provenance_manifest.json")
    args = parser.parse_args()
    build_artifacts(args.sources, args.attribution_out, args.manifest_out)

if __name__ == "__main__":
    main()
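For one source record, the renderer above emits a heading plus bullet lines. A trimmed-down standalone sketch (dict-based, whereas the real code reads SourceRecord models via load_sources; `render_source_lines` is a hypothetical name):

```python
def render_source_lines(src: dict) -> list[str]:
    """Minimal version of the per-source section of render_attribution_markdown."""
    lines = [
        f"## {src['title']}",
        f"- Source ID: {src['source_id']}",
        f"- URL: {src['url']}",
    ]
    if src.get("license_id"):
        lines.append(f"- License: {src['license_id']}")
    lines.append(f"- Adapted: {'yes' if src.get('adapted') else 'no'}")
    return lines

src = {
    "source_id": "mit-ocw-bayes",
    "title": "Example MIT OpenCourseWare Bayesian Materials",
    "url": "https://ocw.mit.edu/courses/example-course/",
    "license_id": "CC BY-NC-SA 4.0",
    "adapted": True,
}
print("\n".join(render_source_lines(src)))
```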


@ -0,0 +1,29 @@
from __future__ import annotations
from pathlib import Path
from .provenance import load_sources

def attribution_qa(sources_path: str | Path) -> dict:
    inv = load_sources(sources_path)
    warnings: list[str] = []
    for src in inv.sources:
        if not src.license_id:
            warnings.append(f"Source '{src.source_id}' is missing a license identifier.")
        if src.license_id and not src.license_url:
            warnings.append(f"Source '{src.source_id}' is missing a license URL.")
        if not src.attribution_text:
            warnings.append(f"Source '{src.source_id}' is missing attribution text.")
        if not src.url:
            warnings.append(f"Source '{src.source_id}' is missing a source URL.")
        if src.adapted and not src.adaptation_notes:
            warnings.append(f"Source '{src.source_id}' is marked adapted but has no adaptation notes.")
        if src.excluded_from_upstream_license and not src.exclusion_notes:
            warnings.append(f"Source '{src.source_id}' is marked excluded but has no exclusion notes.")
    summary = {
        "warning_count": len(warnings),
        "source_count": len(inv.sources),
        "adapted_source_count": sum(1 for s in inv.sources if s.adapted),
        "excluded_source_count": sum(1 for s in inv.sources if s.excluded_from_upstream_license),
    }
    return {"warnings": warnings, "summary": summary}

src/didactopus/auth.py Normal file

@ -0,0 +1,35 @@
from __future__ import annotations
from datetime import datetime, timedelta, timezone
from jose import jwt, JWTError
from passlib.context import CryptContext
import secrets
from .config import load_settings

pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
settings = load_settings()

def hash_password(password: str) -> str:
    return pwd_context.hash(password)

def verify_password(password: str, password_hash: str) -> bool:
    return pwd_context.verify(password, password_hash)

def _encode_token(payload: dict, expires_delta: timedelta) -> str:
    to_encode = dict(payload)
    to_encode["exp"] = datetime.now(timezone.utc) + expires_delta
    return jwt.encode(to_encode, settings.jwt_secret, algorithm=settings.jwt_algorithm)

def issue_access_token(user_id: int, username: str, role: str) -> str:
    return _encode_token({"sub": str(user_id), "username": username, "role": role, "kind": "access"}, timedelta(minutes=30))

def issue_refresh_token(user_id: int, username: str, role: str, token_id: str) -> str:
    return _encode_token({"sub": str(user_id), "username": username, "role": role, "kind": "refresh", "jti": token_id}, timedelta(days=14))

def decode_token(token: str) -> dict | None:
    try:
        return jwt.decode(token, settings.jwt_secret, algorithms=[settings.jwt_algorithm])
    except JWTError:
        return None

def new_token_id() -> str:
    return secrets.token_urlsafe(24)


@ -0,0 +1,29 @@
from __future__ import annotations
from pydantic import BaseModel, Field

class SourceRecord(BaseModel):
    source_id: str
    title: str
    url: str
    publisher: str = ""
    creator: str = ""
    license_id: str = ""
    license_url: str = ""
    retrieved_at: str = ""
    adapted: bool = False
    adaptation_notes: str = ""  # read by the attribution builder and QA checks
    attribution_text: str = ""
    excluded_from_upstream_license: bool = False
    exclusion_notes: str = ""

class PackComplianceManifest(BaseModel):
    pack_id: str
    display_name: str
    derived_from_sources: list[str] = Field(default_factory=list)
    attribution_required: bool = True
    share_alike_required: bool = False
    noncommercial_only: bool = False
    restrictive_flags: list[str] = Field(default_factory=list)
    redistribution_notes: list[str] = Field(default_factory=list)

class SourceInventory(BaseModel):
    sources: list[SourceRecord] = Field(default_factory=list)


@ -5,13 +5,19 @@ import yaml
class ReviewConfig(BaseModel):
    default_reviewer: str = "Unknown Reviewer"
    allow_provisional_concepts: bool = True
    write_promoted_pack: bool = True
    write_review_ledger: bool = True

class BridgeConfig(BaseModel):
    host: str = "127.0.0.1"
    port: int = 8765
    registry_path: str = "workspace_registry.json"
    default_workspace_root: str = "workspaces"

class AppConfig(BaseModel):
    review: ReviewConfig = Field(default_factory=ReviewConfig)
    bridge: BridgeConfig = Field(default_factory=BridgeConfig)

def load_config(path: str | Path) -> AppConfig:


@ -0,0 +1,95 @@
from __future__ import annotations
from pathlib import Path
import argparse, json, yaml
from .compliance_models import SourceInventory, PackComplianceManifest

def load_sources(path: str | Path) -> SourceInventory:
    data = yaml.safe_load(Path(path).read_text(encoding="utf-8")) or {}
    return SourceInventory.model_validate(data)

def build_pack_compliance_manifest(
    pack_id: str,
    display_name: str,
    inventory: SourceInventory,
) -> PackComplianceManifest:
    licenses = {s.license_id for s in inventory.sources if s.license_id}
    restrictive_flags: list[str] = []
    redistribution_notes: list[str] = []
    share_alike_required = any("SA" in lic for lic in licenses)
    noncommercial_only = any("NC" in lic for lic in licenses)
    if share_alike_required:
        restrictive_flags.append("share-alike")
        redistribution_notes.append("Derived redistributable material may need to remain under the same license family.")
    if noncommercial_only:
        restrictive_flags.append("noncommercial")
        redistribution_notes.append("Derived redistributable material may be limited to noncommercial use.")
    if any(s.excluded_from_upstream_license for s in inventory.sources):
        restrictive_flags.append("excluded-third-party-content")
        redistribution_notes.append("Some source-linked assets were flagged as excluded from the upstream course license.")
    return PackComplianceManifest(
        pack_id=pack_id,
        display_name=display_name,
        derived_from_sources=[s.source_id for s in inventory.sources],
        attribution_required=True,
        share_alike_required=share_alike_required,
        noncommercial_only=noncommercial_only,
        restrictive_flags=restrictive_flags,
        redistribution_notes=redistribution_notes,
    )

def compliance_qa(inventory: SourceInventory, manifest: PackComplianceManifest) -> dict:
    warnings: list[str] = []
    for src in inventory.sources:
        if not src.url:
            warnings.append(f"Source '{src.source_id}' is missing a URL.")
        if not src.license_id:
            warnings.append(f"Source '{src.source_id}' is missing a license identifier.")
        if src.license_id and not src.license_url:
            warnings.append(f"Source '{src.source_id}' is missing a license URL.")
        if not src.attribution_text:
            warnings.append(f"Source '{src.source_id}' is missing attribution text.")
        if src.excluded_from_upstream_license and not src.exclusion_notes:
            warnings.append(f"Source '{src.source_id}' is marked excluded but has no exclusion notes.")
    if manifest.attribution_required and not inventory.sources:
        warnings.append("Manifest requires attribution but the source inventory is empty.")
    if manifest.share_alike_required and "share-alike" not in manifest.restrictive_flags:
        warnings.append("Manifest indicates share-alike but restrictive flags are incomplete.")
    if manifest.noncommercial_only and "noncommercial" not in manifest.restrictive_flags:
        warnings.append("Manifest indicates noncommercial-only but restrictive flags are incomplete.")
    return {
        "warnings": warnings,
        "summary": {
            "warning_count": len(warnings),
            "source_count": len(inventory.sources),
            "share_alike_required": manifest.share_alike_required,
            "noncommercial_only": manifest.noncommercial_only,
        },
    }

def write_manifest(manifest: PackComplianceManifest, outpath: str | Path) -> None:
    Path(outpath).write_text(json.dumps(manifest.model_dump(), indent=2), encoding="utf-8")

def main() -> None:
    parser = argparse.ArgumentParser(description="Build and QA Didactopus course-ingestion compliance artifacts.")
    parser.add_argument("sources")
    parser.add_argument("--pack-id", default="demo-pack")
    parser.add_argument("--display-name", default="Demo Pack")
    parser.add_argument("--out", default="pack_compliance_manifest.json")
    args = parser.parse_args()
    inventory = load_sources(args.sources)
    manifest = build_pack_compliance_manifest(args.pack_id, args.display_name, inventory)
    qa = compliance_qa(inventory, manifest)
    write_manifest(manifest, args.out)
    print(json.dumps({"manifest": manifest.model_dump(), "qa": qa}, indent=2))

if __name__ == "__main__":
    main()
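The license scan in `build_pack_compliance_manifest` keys off substring checks, so a single id like "CC BY-NC-SA 4.0" trips both flags at once. A minimal standalone sketch (the `derive_flags` helper is hypothetical; note the substring test is deliberately coarse — any license id containing "SA" or "NC" would match):

```python
def derive_flags(license_ids):
    """Mirror of the manifest builder's license scan over distinct ids."""
    licenses = {lic for lic in license_ids if lic}
    return {
        "share_alike_required": any("SA" in lic for lic in licenses),
        "noncommercial_only": any("NC" in lic for lic in licenses),
    }

print(derive_flags(["CC BY-NC-SA 4.0", "third-party-excluded"]))
print(derive_flags(["CC BY 4.0"]))
```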


@ -0,0 +1,2 @@
def coverage_alignment_for_pack(source_dir):
    """Placeholder: coverage/alignment checks are not implemented yet, so report no warnings."""
    return {"warnings": [], "summary": {"coverage_warning_count": 0}}

src/didactopus/db.py Normal file

@ -0,0 +1,8 @@
from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base, sessionmaker
from .config import load_settings
settings = load_settings()
engine = create_engine(settings.database_url, future=True)
SessionLocal = sessionmaker(bind=engine, autoflush=False, autocommit=False, future=True)
Base = declarative_base()


@ -0,0 +1,47 @@
from __future__ import annotations
from pathlib import Path
import json, yaml
from .learner_state import LearnerState
from .orchestration_models import LearnerProfile, StopCriteria
from .onboarding import build_initial_run_state, build_first_session_plan
from .orchestrator import run_learning_cycle, apply_demo_evidence

def load_concepts(path: str | Path) -> list[dict]:
    data = yaml.safe_load(Path(path).read_text(encoding="utf-8")) or {}
    return list(data.get("concepts", []) or [])

def main():
    base = Path(__file__).resolve().parents[2] / "samples"
    concepts = load_concepts(base / "concepts.yaml")
    profile = LearnerProfile(
        learner_id="demo-learner",
        display_name="Demo Learner",
        target_domain="Bayesian reasoning",
        prior_experience="novice",
        preferred_session_minutes=20,
        motivation_notes="Curious and wants quick visible progress.",
    )
    run_state = build_initial_run_state(profile)
    plan = build_first_session_plan(profile, concepts)
    learner_state = LearnerState(learner_id=profile.learner_id)
    learner_state = apply_demo_evidence(learner_state, "bayes-prior", "2026-03-13T12:00:00+00:00")
    stop = StopCriteria(
        min_mastered_concepts=1,
        min_average_score=0.70,
        min_average_confidence=0.20,
        required_capstones=[],
    )
    result = run_learning_cycle(learner_state, run_state, concepts, stop)
    payload = {
        "first_session_plan": plan.model_dump(),
        "cycle_result": result,
        "records": [r.model_dump() for r in learner_state.records],
    }
    print(json.dumps(payload, indent=2))

if __name__ == "__main__":
    main()

Some files were not shown because too many files have changed in this diff.