Course Review Workflow.

This commit is contained in:
welsberr 2026-03-13 08:04:26 -04:00
parent 1d0de94025
commit f608aa692b
19 changed files with 998 additions and 92 deletions

075-README.md Normal file

@ -0,0 +1,350 @@
# Didactopus
![Didactopus mascot](artwork/didactopus-mascot.png)
**Didactopus** is a local-first AI-assisted autodidactic mastery platform for building genuine expertise through concept graphs, adaptive curriculum planning, evidence-driven mastery, Socratic mentoring, and project-based learning.
**Tagline:** *Many arms, one goal — mastery.*
## Recent revisions
### Interactive Domain Review
This revision upgrades the earlier static review scaffold into an **interactive local SPA review UI**.
The new review layer is meant to help a human curator work through draft packs created
by the ingestion pipeline and promote them into more trusted reviewed packs.
## Why this matters
One of the practical problems with using open online course content is that the material
is often scattered, inconsistently structured, awkward to reuse, and cognitively expensive
to turn into something actionable.
Even when excellent course material exists, there is often a real **activation energy hump**
across these steps:
- finding useful content
- extracting the structure
- organizing the concepts
- deciding what to trust
- getting a usable learning domain set up
Didactopus is meant to help overcome that hump.
Its ingestion and review pipeline should let a motivated learner or curator get from
"here is a pile of course material" to "here is a usable reviewed domain pack" with
substantially less friction.
## What is included
- interactive React SPA review UI
- JSON-backed review state model
- curation action application
- promoted-pack export
- reviewer notes and trust-status editing
- conflict resolution support
- README and FAQ updates reflecting the activation-energy goal
- sample review data and promoted pack output
## Core workflow
1. ingest course or topic materials into a draft pack
2. open the review UI
3. inspect concepts, conflicts, and review flags
4. edit statuses, notes, titles, descriptions, and prerequisites
5. resolve conflicts
6. export a promoted reviewed pack
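Steps 4 through 6 can be sketched in miniature. This is an illustrative stand-in, not the actual Didactopus API: the real types live in the review-schema module, and the class and field names below are simplified assumptions.

```python
from dataclasses import dataclass, field

# Simplified stand-ins for the review-schema types (illustrative only).
@dataclass
class Concept:
    concept_id: str
    title: str
    status: str = "needs_review"
    notes: list = field(default_factory=list)

@dataclass
class ReviewSession:
    reviewer: str
    concepts: list
    ledger: list = field(default_factory=list)

def set_status(session, concept_id, status, rationale):
    """Record an explicit curation decision and log it to the ledger."""
    for c in session.concepts:
        if c.concept_id == concept_id:
            c.status = status
    session.ledger.append({"reviewer": session.reviewer, "action": "set_status",
                           "target": concept_id, "rationale": rationale})

def promote(session):
    """A promoted pack keeps only concepts the reviewer did not reject."""
    return [c for c in session.concepts if c.status != "rejected"]

session = ReviewSession("R", [Concept("probability-basics", "Probability Basics"),
                              Concept("stale-topic", "Stale Topic")])
set_status(session, "probability-basics", "trusted", "Verified against syllabus")
set_status(session, "stale-topic", "rejected", "Duplicate of another concept")
print([c.concept_id for c in promote(session)], len(session.ledger))
```

Every edit lands in the ledger, which is what makes the later promotion step auditable.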
## Why the review UI matters for course ingestion
In practice, course ingestion is not only a parsing problem. It is a **startup friction**
problem. A person may know what they want to study, and even know that good material exists,
but still fail to start because turning raw educational material into a coherent mastery
domain is too much work.
Didactopus should reduce that work enough that getting started becomes realistic.
### Review workflow
This revision adds a **review UI / curation workflow scaffold** for generated draft packs.
The purpose is to let a human reviewer inspect draft outputs from the course/topic
ingestion pipeline, make explicit curation decisions, and promote a reviewed draft
into a more trusted domain pack.
#### What is included
- review-state schema
- draft-pack loader
- curation action model
- review decision ledger
- promoted-pack writer
- static HTML review UI scaffold
- JSON data export for the UI
- sample curated review session
- sample promoted pack output
#### Core idea
Draft packs should not move directly into trusted use.
Instead, they should pass through a curation workflow where a reviewer can:
- merge concepts
- split concepts
- edit prerequisites
- mark concepts as trusted / provisional / rejected
- resolve conflict flags
- annotate rationale
- promote a curated pack into a reviewed pack
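The merge action above follows the semantics in the scaffold's `review_actions` module: the source concept's prerequisites, mastery signals, and notes are folded into the destination, and the source is marked rejected with a pointer note. A dict-based sketch (the real code operates on schema objects):

```python
def merge_concepts(source, dest):
    # Fold prerequisites, signals, and notes into the destination,
    # deduplicating as the scaffold's review_actions module does.
    for prereq in source["prerequisites"]:
        if prereq not in dest["prerequisites"]:
            dest["prerequisites"].append(prereq)
    for sig in source["mastery_signals"]:
        if sig not in dest["mastery_signals"]:
            dest["mastery_signals"].append(sig)
    for note in source["notes"]:
        if note not in dest["notes"]:
            dest["notes"].append(note)
    # The source survives as a rejected tombstone pointing at the merge target.
    source["status"] = "rejected"
    source["notes"].append(f"Merged into {dest['id']}")

a = {"id": "bayes-rule", "prerequisites": ["probability-basics"],
     "mastery_signals": ["State Bayes' theorem."], "status": "provisional", "notes": []}
b = {"id": "bayesian-updating", "prerequisites": [],
     "mastery_signals": ["Update a prior given evidence."], "status": "provisional", "notes": []}
merge_concepts(a, b)
print(b["prerequisites"], a["status"])
```

Keeping the rejected tombstone (rather than deleting the source) preserves provenance for the review ledger.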
#### Status
This is a scaffold for a local-first workflow.
The HTML UI is static but wired to a concrete JSON review-state model so it can
later be upgraded into a richer SPA or desktop app without changing the data contracts.
### Course-to-course merger
This revision adds two major capabilities:
- **real document adapter scaffolds** for PDF, DOCX, PPTX, and HTML
- a **cross-course merger** for combining multiple course-derived packs into one stronger domain draft
These additions extend the earlier multi-source ingestion layer from "multiple files for one course"
to "multiple courses or course-like sources for one topic domain."
## What is included
- adapter registry for:
- PDF
- DOCX
- PPTX
- HTML
- Markdown
- text
- normalized document extraction interface
- course bundle ingestion across multiple source documents
- cross-course terminology and overlap analysis
- merged topic-pack emitter
- cross-course conflict report
- example source files and example merged output
## Design stance
This is still scaffold-level extraction. The purpose is to define stable interfaces and emitted artifacts,
not to claim perfect semantic parsing of every teaching document.
The implementation is designed so stronger parsers can later replace the stub extractors without changing
the surrounding pipeline.
### Multi-Source Course Ingestion
This revision adds a **Multi-Source Course Ingestion Layer**.
The pipeline can now accept multiple source files representing the same course or
topic domain, normalize them into a shared intermediate representation, merge them,
and emit a single draft Didactopus pack plus a conflict report.
#### Supported scaffold source types
Current scaffold adapters:
- Markdown (`.md`)
- Plain text (`.txt`)
- HTML-ish text (`.html`, `.htm`)
- Transcript text (`.transcript.txt`)
- Syllabus text (`.syllabus.txt`)
This revision is intentionally adapter-oriented, so future PDF, slide, and DOCX
adapters can be added behind the same interface.
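The adapter idea can be sketched as a registry keyed by file suffix, with every adapter returning the same normalized record shape. The function and field names here are hypothetical, not the scaffold's actual interface:

```python
import tempfile
from pathlib import Path

def extract_markdown(text):
    # Toy extractor: treat markdown headings as structure candidates.
    return {"kind": "markdown",
            "headings": [line.lstrip("# ").strip()
                         for line in text.splitlines() if line.startswith("#")]}

def extract_plain(text):
    return {"kind": "text", "headings": []}

# Registry: suffix -> adapter. New source types plug in behind the same shape.
ADAPTERS = {".md": extract_markdown, ".txt": extract_plain}

def ingest(path: Path):
    adapter = ADAPTERS.get(path.suffix.lower(), extract_plain)  # plain-text fallback
    record = adapter(path.read_text(encoding="utf-8"))
    record["source"] = path.name  # attribution travels with the record
    return record

tmp = Path(tempfile.mkdtemp()) / "lesson.md"
tmp.write_text("# Priors\nBeliefs before evidence.\n", encoding="utf-8")
record = ingest(tmp)
print(record)
```

Because every adapter emits the same record shape, a future PDF or DOCX extractor only needs a new registry entry, not pipeline changes.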
#### What is included
- multi-source adapter dispatch
- normalized source records
- source merge logic
- cross-source terminology conflict report
- duplicate lesson/title detection
- merged draft pack emission
- merged attribution manifest
- sample multi-source inputs
- sample merged output pack
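Duplicate lesson/title detection across sources can be as simple as counting normalized titles; anything seen in more than one source becomes a conflict-report line. A minimal sketch (the helper name and report wording are illustrative):

```python
from collections import Counter

def duplicate_titles(sources):
    # Normalize titles (trim, lowercase) and flag any title seen more than once.
    counts = Counter(title.strip().lower()
                     for lessons in sources.values() for title in lessons)
    return [f"Duplicate lesson title across sources: '{t}'"
            for t, n in counts.items() if n > 1]

sources = {
    "course-a-notes.md": ["Bayes' Theorem", "Priors and Posteriors"],
    "course-b-syllabus.txt": ["Bayes' theorem", "Likelihoods"],
}
conflicts = duplicate_titles(sources)
print(conflicts)
```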
### Course Ingestion Pipeline
This revision adds a **Course-to-Pack Ingestion Pipeline** plus a **stable rule-policy adapter layer**.
The design goal is to turn open or user-supplied course materials into draft
Didactopus domain packs without introducing a brittle external rule-engine dependency.
#### Why no third-party rule engine here?
To minimize dependency risk, this scaffold uses a small declarative rule-policy
adapter implemented in pure Python and standard-library data structures.
That gives Didactopus:
- portable rules
- inspectable rule definitions
- deterministic behavior
- zero extra runtime dependency for policy evaluation
If a stronger rule engine is needed later, this adapter can remain the stable API surface.
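A rule policy in this style can be expressed as plain data evaluated in listed order, which is what makes it portable, inspectable, and deterministic. The rule names and thresholds below are invented for illustration:

```python
# Rules are plain data: a name, a predicate, and the review flag to emit.
RULES = [
    {"name": "flag-short-description",
     "when": lambda c: len(c.get("description", "")) < 20,
     "emit": "review: description may be too thin"},
    {"name": "flag-missing-signals",
     "when": lambda c: not c.get("mastery_signals"),
     "emit": "review: no mastery signals extracted"},
]

def evaluate(concept):
    # Deterministic: rules fire in declaration order, no hidden state.
    return [rule["emit"] for rule in RULES if rule["when"](concept)]

flags = evaluate({"description": "Short.", "mastery_signals": []})
print(flags)
```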
#### What is included
- normalized course schema
- Markdown/HTML-ish text ingestion adapter
- module / lesson / objective extraction
- concept candidate extraction
- prerequisite guess generation
- rule-policy adapter
- draft pack emitter
- review report generation
- sample course input
- sample generated pack outputs
### Mastery Ledger
This revision adds a **Mastery Ledger + Capability Export** layer.
The main purpose is to let Didactopus turn accumulated learner state into
portable, inspectable artifacts that can support downstream deployment,
review, orchestration, or certification-like workflows.
#### What is new
- mastery ledger data model
- capability profile export
- JSON export of mastered concepts and evaluator summaries
- Markdown export of a readable capability report
- artifact manifest for produced deliverables
- demo CLI for generating exports for an AI student or human learner
- FAQ covering how learned mastery is represented and put to work
#### Why this matters
Didactopus can now do more than guide learning. It can also emit a structured
statement of what a learner appears able to do, based on explicit concepts,
evidence, and artifacts.
That makes it easier to use Didactopus as:
- a mastery tracker
- a portfolio generator
- a deployment-readiness aid
- an orchestration input for agent routing
#### Mastery representation
A learner's mastery is represented as structured operational state, including:
- mastered concepts
- evaluator results
- evidence summaries
- weak dimensions
- attempt history
- produced artifacts
- capability export
This is stricter than a normal chat transcript or self-description.
#### Future direction
A later revision should connect the capability export with:
- formal evaluator outputs
- signed evidence ledgers
- domain-specific capability schemas
- deployment policies for agent routing
### Evaluator Pipeline
This revision introduces a **pluggable evaluator pipeline** that converts
learner attempts into structured mastery evidence.
### Agentic Learner Loop
This revision adds an **agentic learner loop** that turns Didactopus into a closed-loop mastery system prototype.
The loop can now:
- choose the next concept via the graph-aware planner
- generate a synthetic learner attempt
- score the attempt into evidence
- update mastery state
- repeat toward a target concept
This is still scaffold-level, but it is the first explicit implementation of the idea that **Didactopus can supervise not only human learners, but also AI student agents**.
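The loop shape can be sketched as follows. Scoring here is random and the pass threshold is invented; in the real system the evaluator pipeline produces the evidence:

```python
import random

# Tiny prerequisite graph: concept -> prerequisites.
GRAPH = {"descriptive-statistics": [],
         "probability-basics": ["descriptive-statistics"],
         "prior-and-posterior": ["probability-basics"]}

def ready(mastered):
    # Planner step: concepts whose prerequisites are all mastered.
    return [c for c, pre in GRAPH.items()
            if c not in mastered and all(p in mastered for p in pre)]

def run_loop(target, rng):
    mastered, evidence = set(), []
    while target not in mastered:
        concept = ready(mastered)[0]   # choose the next unblocked concept
        score = rng.random()           # synthetic learner attempt (stand-in)
        evidence.append((concept, round(score, 2)))
        if score > 0.3:                # illustrative evaluator threshold
            mastered.add(concept)      # update mastery state, then repeat
    return mastered, evidence

mastered, evidence = run_loop("prior-and-posterior", random.Random(0))
print(sorted(mastered))
```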
## Complete overview to this point
Didactopus currently includes:
- **Domain packs** for concepts, projects, rubrics, mastery profiles, templates, and cross-pack links
- **Dependency resolution** across packs
- **Merged learning graph** generation
- **Concept graph engine** for cross-pack prerequisite reasoning, linking, pathfinding, and export
- **Adaptive learner engine** for ready, blocked, and mastered concepts
- **Evidence engine** with weighted, recency-aware, multi-dimensional mastery inference
- **Concept-specific mastery profiles** with template inheritance
- **Graph-aware planner** for utility-ranked next-step recommendations
- **Agentic learner loop** for iterative goal-directed mastery acquisition
## Agentic AI students
An AI student under Didactopus is modeled as an **agent that accumulates evidence against concept mastery criteria**.
It does not “learn” in the sense of having model weights retrained inside Didactopus. Instead, its learned mastery is represented as:
- current mastered concept set
- evidence history
- dimension-level competence summaries
- concept-specific weak dimensions
- adaptive plan state
- optional artifacts, explanations, project outputs, and critiques it has produced
In other words, Didactopus represents mastery as a **structured operational state**, not merely a chat transcript.
That state can be put to work by:
- selecting tasks the agent is now qualified to attempt
- routing domain-relevant problems to the agent
- exposing mastered concept profiles to orchestration logic
- using evidence summaries to decide whether the agent should act, defer, or review
- exporting a mastery portfolio for downstream use
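A routing decision on top of an exported capability profile might look like the sketch below; the profile field names and threshold are assumptions, not the export schema:

```python
# Hypothetical capability profile derived from the mastery ledger export.
capability = {
    "mastered": {"probability-basics": 0.9, "descriptive-statistics": 0.8},
    "weak_dimensions": {"probability-basics": ["proof-writing"]},
}

def route(task_requirements, profile, threshold=0.75):
    # Act only if every required concept is mastered with enough evidence;
    # otherwise defer back to the learning/review loop.
    for concept in task_requirements:
        if profile["mastered"].get(concept, 0.0) < threshold:
            return "defer"
    return "act"

print(route(["probability-basics"], capability))
print(route(["prior-and-posterior"], capability))
```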
## FAQ
See:
- `docs/faq.md`
## Correctness and formal knowledge components
See:
- `docs/correctness-and-knowledge-engine.md`
Short version: yes, there is a strong argument that Didactopus will eventually benefit from a more formal knowledge-engine layer, especially for domains where correctness can be stated in symbolic, logical, computational, or rule-governed terms.
A good future architecture is likely **hybrid**:
- LLM/agentic layer for explanation, synthesis, critique, and exploration
- formal knowledge engine for rule checking, constraint satisfaction, proof support, symbolic validation, and executable correctness checks
## Repository structure
```text
didactopus/
├── README.md
├── artwork/
├── configs/
├── docs/
├── examples/
├── src/didactopus/
├── tests/
└── webui/
```

@ -277,13 +336,15 @@ A good future architecture is likely **hybrid**:
 ## Repository structure
 ```text
 didactopus/
 ├── README.md
 ├── artwork/
 ├── configs/
 ├── docs/
-├── domain-packs/
+├── examples/
 ├── src/didactopus/
-└── tests/
+├── tests/
+└── webui/
 ```


@ -1,27 +1,55 @@
 # FAQ
-## Why add a review UI?
-Because automatically generated packs are draft assets, not final trusted assets.
-## What can a reviewer change?
-In this scaffold:
-- concept trust status
-- prerequisites
-- titles
-- descriptions
-- merge/split intent records
-- conflict resolution notes
-## Is the UI fully interactive?
-Not yet. The current version is a static HTML scaffold backed by real JSON data models.
-## Why keep a review ledger?
-To preserve provenance and make curation decisions auditable.
-## Does promotion mean certification?
-No. Promotion means "reviewed and improved for Didactopus use," not formal certification.
+## Why does Didactopus need ingestion and review tools?
+Because useful course material often exists in forms that are difficult to activate for
+serious self-directed learning. The issue is not just availability of information; it is
+the effort required to transform that information into a usable learning domain.
+## What problem is this trying to solve?
+A common problem is the **activation energy hump**:
+- the course exists
+- the notes exist
+- the syllabus exists
+- the learner is motivated
+- but the path from raw material to usable study structure is still too hard
+Didactopus is meant to reduce that hump.
+## Why not just read course webpages directly?
+Because mastery-oriented use needs structure:
+- concepts
+- prerequisites
+- projects
+- rubrics
+- review decisions
+- trust statuses
+Raw course pages do not usually provide these in a directly reusable form.
+## Why have a review UI?
+Because automated ingestion creates drafts, not final trusted packs. A reviewer still needs
+to make explicit curation decisions.
+## What can the SPA review UI do in this scaffold?
+- inspect concepts
+- edit trust status
+- edit notes
+- edit prerequisites
+- resolve conflicts
+- export a promoted reviewed pack
+## Is this already a full production UI?
+No. It is a local-first interactive scaffold with stable data contracts, suitable for
+growing into a stronger production interface.
+## Does Didactopus eliminate the need to think?
+No. The goal is to reduce startup friction and organizational overhead, not to replace
+judgment. The user or curator still decides what is trustworthy and how the domain should
+be shaped.


@ -0,0 +1,34 @@
# Interactive Review UI
This revision introduces a React-based local SPA for reviewing draft packs.
## Goals
- reduce curation friction
- make review decisions explicit
- allow pack promotion after inspection
- preserve provenance and review rationale
## Features in this scaffold
- concept list with editable fields
- trust status editing
- concept notes editing
- prerequisite editing
- conflict visibility and resolution
- in-browser promoted-pack export generation
## Data model
The SPA loads `review_data.json` and can emit:
- updated review state
- review ledger entries
- promoted concepts payload
## Next steps
- file open/save integration
- conflict filtering
- merge/split concept actions in UI
- richer diff views
- domain-pack validation from the UI


@ -6,7 +6,6 @@ concepts:
   mastery_signals:
   - Explain mean, median, and variance.
   mastery_profile: {}
-
 - id: probability-basics
   title: Probability Basics
   description: Basic event probability and conditional probability.
@ -15,7 +14,6 @@ concepts:
   mastery_signals:
   - Compute a simple conditional probability.
   mastery_profile: {}
-
 - id: prior-and-posterior
   title: Prior and Posterior
   description: Beliefs before and after evidence.


@ -0,0 +1,21 @@
concepts:
- id: descriptive-statistics
title: Descriptive Statistics
description: Measures of center and spread.
prerequisites: []
mastery_signals:
- Explain mean, median, and variance.
status: trusted
notes:
- Reviewed in initial curation pass.
mastery_profile: {}
- id: probability-basics
title: Probability Basics
description: Basic event probability and conditional probability.
prerequisites:
- descriptive-statistics
mastery_signals:
- Compute a simple conditional probability.
status: provisional
notes: []
mastery_profile: {}


@ -0,0 +1,6 @@
name: introductory-bayesian-inference
display_name: Introductory Bayesian Inference
version: 0.1.0-reviewed
curation:
reviewer: Wesley R. Elsberry
ledger_entries: 2


@ -5,7 +5,7 @@ build-backend = "setuptools.build_meta"
 [project]
 name = "didactopus"
 version = "0.1.0"
-description = "Didactopus: draft-pack review workflow scaffold"
+description = "Didactopus: interactive review UI scaffold"
 readme = "README.md"
 requires-python = ">=3.10"
 license = {text = "MIT"}


@ -8,11 +8,10 @@ from .review_loader import load_draft_pack
 from .review_schema import ReviewSession, ReviewAction
 from .review_actions import apply_action
 from .review_export import export_review_state_json, export_promoted_pack, export_review_ui_data
-from .ui_scaffold import write_review_ui

 def build_parser() -> argparse.ArgumentParser:
-    parser = argparse.ArgumentParser(description="Didactopus review workflow scaffold")
+    parser = argparse.ArgumentParser(description="Didactopus interactive review workflow scaffold")
     parser.add_argument("--draft-pack", required=True, help="Path to draft pack directory")
     parser.add_argument("--output-dir", default="review-output")
     parser.add_argument("--config", default="configs/config.example.yaml")
@ -25,7 +24,6 @@ def main() -> None:
     draft = load_draft_pack(args.draft_pack)
     session = ReviewSession(reviewer=config.review.default_reviewer, draft_pack=draft)
-    # Demo curation actions
     if session.draft_pack.concepts:
         first = session.draft_pack.concepts[0].concept_id
         apply_action(session, session.reviewer, ReviewAction(
@ -41,37 +39,17 @@ def main() -> None:
             rationale="Record reviewer note.",
         ))
-        if len(session.draft_pack.concepts) > 1:
-            second = session.draft_pack.concepts[1].concept_id
-            apply_action(session, session.reviewer, ReviewAction(
-                action_type="set_status",
-                target=second,
-                payload={"status": "provisional"},
-                rationale="Keep provisional pending further review.",
-            ))
-    if session.draft_pack.conflicts:
-        apply_action(session, session.reviewer, ReviewAction(
-            action_type="resolve_conflict",
-            target="",
-            payload={"conflict": session.draft_pack.conflicts[0]},
-            rationale="Resolved first conflict in demo workflow.",
-        ))
     outdir = Path(args.output_dir)
     outdir.mkdir(parents=True, exist_ok=True)
     export_review_state_json(session, outdir / "review_session.json")
     export_review_ui_data(session, outdir)
-    write_review_ui(outdir)
     if config.review.write_promoted_pack:
         export_promoted_pack(session, outdir / "promoted_pack")
-    print("== Didactopus Review Workflow ==")
+    print("== Didactopus Interactive Review Workflow ==")
     print(f"Draft pack: {args.draft_pack}")
     print(f"Reviewer: {session.reviewer}")
     print(f"Concepts: {len(session.draft_pack.concepts)}")
     print(f"Ledger entries: {len(session.ledger)}")
-    print(f"Remaining conflicts: {len(session.draft_pack.conflicts)}")
     print(f"Output dir: {outdir}")


@ -29,22 +29,5 @@ def apply_action(session: ReviewSession, reviewer: str, action: ReviewAction) ->
         note = action.payload.get("note", "")
         if note:
             target.notes.append(note)
-    elif action.action_type == "merge_concepts":
-        source = _find_concept(session, action.payload.get("source", ""))
-        dest = _find_concept(session, action.payload.get("destination", ""))
-        if source is not None and dest is not None and source is not dest:
-            for prereq in source.prerequisites:
-                if prereq not in dest.prerequisites:
-                    dest.prerequisites.append(prereq)
-            for sig in source.mastery_signals:
-                if sig not in dest.mastery_signals:
-                    dest.mastery_signals.append(sig)
-            for note in source.notes:
-                if note not in dest.notes:
-                    dest.notes.append(note)
-            source.status = "rejected"
-            source.notes.append(f"Merged into {dest.concept_id}")
-    elif action.action_type == "split_concept" and target is not None:
-        target.notes.append("Split requested; manual follow-up required.")
     session.ledger.append(ReviewLedgerEntry(reviewer=reviewer, action=action))


@ -22,30 +22,21 @@ def load_draft_pack(pack_dir: str | Path) -> DraftPackData:
         )
     )
-    conflicts_path = pack_dir / "conflict_report.md"
-    review_path = pack_dir / "review_report.md"
-    attribution_path = pack_dir / "license_attribution.json"
-    pack_path = pack_dir / "pack.yaml"
-
-    conflicts = []
-    if conflicts_path.exists():
-        conflicts = [
-            line[2:] for line in conflicts_path.read_text(encoding="utf-8").splitlines()
-            if line.startswith("- ")
-        ]
-    review_flags = []
-    if review_path.exists():
-        review_flags = [
-            line[2:] for line in review_path.read_text(encoding="utf-8").splitlines()
-            if line.startswith("- ")
-        ]
+    def bullet_lines(path: Path) -> list[str]:
+        if not path.exists():
+            return []
+        return [line[2:] for line in path.read_text(encoding="utf-8").splitlines() if line.startswith("- ")]
+
+    conflicts = bullet_lines(pack_dir / "conflict_report.md")
+    review_flags = bullet_lines(pack_dir / "review_report.md")

     attribution = {}
+    attribution_path = pack_dir / "license_attribution.json"
     if attribution_path.exists():
         attribution = json.loads(attribution_path.read_text(encoding="utf-8"))

     pack = {}
+    pack_path = pack_dir / "pack.yaml"
     if pack_path.exists():
         pack = yaml.safe_load(pack_path.read_text(encoding="utf-8")) or {}


@ -12,17 +12,3 @@ def test_apply_status_action() -> None:
     apply_action(session, "R", ReviewAction(action_type="set_status", target="c1", payload={"status": "trusted"}))
     assert session.draft_pack.concepts[0].status == "trusted"
     assert len(session.ledger) == 1
-
-def test_merge_action() -> None:
-    session = ReviewSession(
-        reviewer="R",
-        draft_pack=DraftPackData(
-            concepts=[
-                ConceptReviewEntry(concept_id="a", title="A"),
-                ConceptReviewEntry(concept_id="b", title="B"),
-            ]
-        ),
-    )
-    apply_action(session, "R", ReviewAction(action_type="merge_concepts", target="", payload={"source": "a", "destination": "b"}))
-    assert session.draft_pack.concepts[0].status == "rejected"


@ -0,0 +1,6 @@
from pathlib import Path
def test_webui_scaffold_exists() -> None:
assert Path("webui/src/App.jsx").exists()
assert Path("webui/sample/review_data.json").exists()

webui/index.html Normal file

@ -0,0 +1,12 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Didactopus Review UI</title>
<script type="module" src="/src/main.jsx"></script>
</head>
<body>
<div id="root"></div>
</body>
</html>

webui/package.json Normal file

@ -0,0 +1,17 @@
{
"name": "didactopus-review-ui",
"private": true,
"version": "0.1.0",
"type": "module",
"scripts": {
"dev": "vite",
"build": "vite build"
},
"dependencies": {
"react": "^18.3.1",
"react-dom": "^18.3.1"
},
"devDependencies": {
"vite": "^5.4.0"
}
}


@ -0,0 +1,60 @@
{
"reviewer": "Wesley R. Elsberry",
"pack": {
"name": "introductory-bayesian-inference",
"display_name": "Introductory Bayesian Inference",
"version": "0.1.0-draft"
},
"concepts": [
{
"concept_id": "descriptive-statistics",
"title": "Descriptive Statistics",
"description": "Measures of center and spread.",
"prerequisites": [],
"mastery_signals": [
"Explain mean, median, and variance."
],
"status": "trusted",
"notes": [
"Reviewed in initial curation pass."
]
},
{
"concept_id": "probability-basics",
"title": "Probability Basics",
"description": "Basic event probability and conditional probability.",
"prerequisites": [
"descriptive-statistics"
],
"mastery_signals": [
"Compute a simple conditional probability."
],
"status": "provisional",
"notes": []
},
{
"concept_id": "prior-and-posterior",
"title": "Prior and Posterior",
"description": "Beliefs before and after evidence.",
"prerequisites": [
"probability-basics"
],
"mastery_signals": [
"Compare prior and posterior beliefs."
],
"status": "needs_review",
"notes": [
"May be too broad and may need splitting."
]
}
],
"conflicts": [
"Key term 'prior' appears in multiple lesson contexts.",
"Lesson 'prior and posterior' was merged from multiple sources; review ordering assumptions."
],
"review_flags": [
"Module 'Bayesian Updating' appears to contain project-like material; review project extraction.",
"Concept 'Prior and Posterior' may be too broad and may need splitting."
],
"ledger": []
}

webui/src/App.jsx Normal file

@ -0,0 +1,254 @@
import React, { useMemo, useState } from "react";
import reviewData from "../sample/review_data.json";
const statuses = ["needs_review", "trusted", "provisional", "rejected"];
function downloadJson(filename, data) {
const blob = new Blob([JSON.stringify(data, null, 2)], { type: "application/json" });
const url = URL.createObjectURL(blob);
const a = document.createElement("a");
a.href = url;
a.download = filename;
a.click();
URL.revokeObjectURL(url);
}
function promotedPackFromState(state) {
return {
pack: {
...state.pack,
version: String(state.pack.version || "0.1.0-draft").replace("-draft", "-reviewed"),
curation: {
reviewer: state.reviewer,
ledger_entries: state.ledger.length
}
},
concepts: state.concepts
.filter((c) => c.status !== "rejected")
.map((c) => ({
id: c.concept_id,
title: c.title,
description: c.description,
prerequisites: c.prerequisites,
mastery_signals: c.mastery_signals,
status: c.status,
notes: c.notes,
mastery_profile: {}
})),
conflicts: state.conflicts,
review_flags: state.review_flags
};
}
export default function App() {
const [state, setState] = useState(reviewData);
const [selectedId, setSelectedId] = useState(reviewData.concepts[0]?.concept_id || "");
const selected = useMemo(
() => state.concepts.find((c) => c.concept_id === selectedId) || null,
[state, selectedId]
);
function updateConcept(conceptId, patch, rationale) {
setState((prev) => {
const concepts = prev.concepts.map((c) =>
c.concept_id === conceptId ? { ...c, ...patch } : c
);
const ledger = [
...prev.ledger,
{
reviewer: prev.reviewer,
action: {
action_type: "note",
target: conceptId,
payload: patch,
rationale: rationale || "UI edit"
}
}
];
return { ...prev, concepts, ledger };
});
}
function resolveConflict(conflict) {
setState((prev) => ({
...prev,
conflicts: prev.conflicts.filter((c) => c !== conflict),
ledger: [
...prev.ledger,
{
reviewer: prev.reviewer,
action: {
action_type: "resolve_conflict",
target: "",
payload: { conflict },
rationale: "Resolved in UI"
}
}
]
}));
}
const promoted = promotedPackFromState(state);
return (
<div className="page">
<header className="hero">
<div>
<h1>Didactopus Review UI</h1>
<p>
Reduce the activation-energy hump: move from raw course-derived draft pack
to curated reviewed domain pack with less friction.
</p>
</div>
<div className="hero-actions">
<button onClick={() => downloadJson("review_data.edited.json", state)}>Export Review State</button>
<button onClick={() => downloadJson("promoted_pack.json", promoted)}>Export Promoted Pack</button>
</div>
</header>
<section className="summary-grid">
<div className="card">
<h2>Pack</h2>
<div className="small">{state.pack.display_name || state.pack.name}</div>
<div className="small">Reviewer: {state.reviewer}</div>
<div className="small">Concepts: {state.concepts.length}</div>
</div>
<div className="card">
<h2>Conflicts</h2>
<div className="big">{state.conflicts.length}</div>
</div>
<div className="card">
<h2>Flags</h2>
<div className="big">{state.review_flags.length}</div>
</div>
<div className="card">
<h2>Ledger</h2>
<div className="big">{state.ledger.length}</div>
</div>
</section>
<main className="layout">
<aside className="sidebar">
<h2>Concepts</h2>
{state.concepts.map((c) => (
<button
key={c.concept_id}
className={`concept-btn ${c.concept_id === selectedId ? "active" : ""}`}
onClick={() => setSelectedId(c.concept_id)}
>
<span>{c.title}</span>
<span className={`status-pill status-${c.status}`}>{c.status}</span>
</button>
))}
</aside>
<section className="content">
{selected ? (
<>
<div className="card">
<h2>Concept Editor</h2>
<label>
Title
<input
value={selected.title}
onChange={(e) => updateConcept(selected.concept_id, { title: e.target.value }, "Edited title")}
/>
</label>
<label>
Status
<select
value={selected.status}
onChange={(e) => updateConcept(selected.concept_id, { status: e.target.value }, "Changed trust status")}
>
{statuses.map((s) => (
<option value={s} key={s}>{s}</option>
))}
</select>
</label>
<label>
Description
<textarea
rows="6"
value={selected.description}
onChange={(e) => updateConcept(selected.concept_id, { description: e.target.value }, "Edited description")}
/>
</label>
<label>
Prerequisites (comma-separated ids)
<input
value={(selected.prerequisites || []).join(", ")}
onChange={(e) =>
updateConcept(
selected.concept_id,
{
prerequisites: e.target.value
.split(",")
.map((x) => x.trim())
.filter(Boolean)
},
"Edited prerequisites"
)
}
/>
</label>
<label>
Notes
<textarea
rows="4"
value={(selected.notes || []).join("\n")}
onChange={(e) =>
updateConcept(
selected.concept_id,
{ notes: e.target.value.split("\n").filter(Boolean) },
"Edited notes"
)
}
/>
</label>
</div>
<div className="card">
<h2>Mastery Signals</h2>
<ul>
{(selected.mastery_signals || []).map((signal, idx) => (
<li key={idx}>{signal}</li>
))}
</ul>
</div>
</>
) : (
<div className="card">No concept selected.</div>
)}
</section>
<section className="rightbar">
<div className="card">
<h2>Conflicts</h2>
{state.conflicts.length ? state.conflicts.map((conflict, idx) => (
<div key={idx} className="conflict">
<div>{conflict}</div>
<button onClick={() => resolveConflict(conflict)}>Resolve</button>
</div>
)) : <div className="small">No remaining conflicts.</div>}
</div>
<div className="card">
<h2>Review Flags</h2>
<ul>
{state.review_flags.map((flag, idx) => <li key={idx}>{flag}</li>)}
</ul>
</div>
<div className="card">
<h2>Why this exists</h2>
<p className="small">
Online course material can be excellent and still be hard to activate.
Didactopus aims to reduce the setup burden of turning useful but messy
course content into a usable, reviewed learning domain.
</p>
</div>
</section>
</main>
</div>
);
}
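The component above calls `promotedPackFromState(state)`, which is defined earlier in App.jsx and does not appear in this excerpt. As a rough sketch of what that promotion step might look like (the output field names `promoted_by` and the rejected-concept filter are assumptions here, not the repo's actual implementation):

```javascript
// Hypothetical sketch: derive a promoted pack from the review state.
// The input shape (pack, reviewer, concepts, ledger) mirrors the state
// used by the component above; the output schema is an assumption.
function promotedPackFromState(state) {
  return {
    name: state.pack.name,
    display_name: state.pack.display_name || state.pack.name,
    promoted_by: state.reviewer,
    // Only concepts the reviewer has not rejected are promoted.
    concepts: state.concepts.filter((c) => c.status !== "rejected"),
    // Carry the curation ledger along for provenance.
    ledger: state.ledger
  };
}
```

Whatever the real shape is, keeping the ledger in the exported pack preserves a provenance trail from draft to reviewed content.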

webui/src/main.jsx
import React from "react";
import { createRoot } from "react-dom/client";
import App from "./App";
import "./styles.css";
createRoot(document.getElementById("root")).render(<App />);
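The export buttons in App.jsx call a `downloadJson(filename, data)` helper that is also defined outside this excerpt. A minimal browser-side sketch of such a helper (the repo's actual version may differ):

```javascript
// Hypothetical sketch of a downloadJson helper: serialize to pretty
// JSON, then trigger a client-side file download via a temporary link.
function serializeJson(data) {
  return JSON.stringify(data, null, 2);
}

function downloadJson(filename, data) {
  const blob = new Blob([serializeJson(data)], { type: "application/json" });
  const url = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = url;
  a.download = filename;
  a.click();
  URL.revokeObjectURL(url);
}
```

Because everything is serialized client-side, exports work without any server: the review state and the promoted pack are plain JSON files the curator can commit back into the repo.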

webui/src/styles.css
:root {
--bg: #f7f8fb;
--card: #ffffff;
--text: #1f2430;
--muted: #5d6678;
--border: #d7dce5;
--accent: #2d6cdf;
}
* { box-sizing: border-box; }
body {
margin: 0;
font-family: Arial, Helvetica, sans-serif;
background: var(--bg);
color: var(--text);
}
.page {
max-width: 1500px;
margin: 0 auto;
padding: 20px;
}
.hero {
background: var(--card);
border: 1px solid var(--border);
border-radius: 20px;
padding: 20px;
display: flex;
justify-content: space-between;
gap: 20px;
align-items: flex-start;
}
.hero h1 { margin-top: 0; }
.hero-actions {
display: flex;
gap: 10px;
flex-wrap: wrap;
}
button {
border: 1px solid var(--border);
background: white;
border-radius: 12px;
padding: 10px 14px;
cursor: pointer;
}
button:hover { border-color: var(--accent); }
.summary-grid {
margin-top: 16px;
display: grid;
grid-template-columns: repeat(4, 1fr);
gap: 16px;
}
.layout {
margin-top: 16px;
display: grid;
grid-template-columns: 290px 1fr 360px;
gap: 16px;
}
.card {
background: var(--card);
border: 1px solid var(--border);
border-radius: 18px;
padding: 16px;
}
.sidebar, .content, .rightbar {
display: flex;
flex-direction: column;
gap: 16px;
}
.concept-btn {
width: 100%;
text-align: left;
display: flex;
justify-content: space-between;
gap: 8px;
margin-bottom: 10px;
}
.concept-btn.active {
border-color: var(--accent);
box-shadow: 0 0 0 2px rgba(45,108,223,0.08);
}
.status-pill {
font-size: 12px;
padding: 4px 8px;
border-radius: 999px;
border: 1px solid var(--border);
white-space: nowrap;
}
.status-trusted { background: #e7f7ec; }
.status-provisional { background: #fff6df; }
.status-rejected { background: #fde9e9; }
.status-needs_review { background: #eef2f7; }
label {
display: block;
font-weight: 600;
margin-bottom: 12px;
}
input, textarea, select {
width: 100%;
margin-top: 6px;
border: 1px solid var(--border);
border-radius: 10px;
padding: 10px;
font: inherit;
background: white;
}
.small { color: var(--muted); }
.big { font-size: 34px; font-weight: 700; }
.conflict {
border-top: 1px solid var(--border);
padding-top: 12px;
margin-top: 12px;
}
@media (max-width: 1100px) {
.summary-grid { grid-template-columns: repeat(2, 1fr); }
.layout { grid-template-columns: 1fr; }
}
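The `statuses` array used by the status `<select>` in App.jsx is defined outside this excerpt; the `.status-*` rules above suggest it covers at least the four values styled there. A sketch of the assumed list and the class mapping the JSX template (`status-pill status-${c.status}`) produces — the actual array in the repo may differ:

```javascript
// Assumed trust statuses, inferred from the .status-* CSS rules above.
const statuses = ["trusted", "provisional", "rejected", "needs_review"];

// Mirrors the JSX template literal used for the sidebar pills.
function statusPillClass(status) {
  return `status-pill status-${status}`;
}
```

Note that the CSS uses the status value verbatim as a class suffix, so any new status added to the array needs a matching `.status-*` rule to get a background color.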