diff --git a/075-README.md b/075-README.md
new file mode 100644
index 0000000..e3aaf4d
--- /dev/null
+++ b/075-README.md
@@ -0,0 +1,350 @@
+# Didactopus
+
+
+
+**Didactopus** is a local-first AI-assisted autodidactic mastery platform for building genuine expertise through concept graphs, adaptive curriculum planning, evidence-driven mastery, Socratic mentoring, and project-based learning.
+
+**Tagline:** *Many arms, one goal — mastery.*
+
+## Recent revisions
+
+### Interactive Domain Review
+
+This revision upgrades the earlier static review scaffold into an **interactive local SPA review UI**.
+
+The new review layer is meant to help a human curator work through draft packs created
+by the ingestion pipeline and promote them into more trusted reviewed packs.
+
+#### Why this matters
+
+One of the practical problems with using open online course content is that the material
+is often scattered, inconsistently structured, awkward to reuse, and cognitively expensive
+to turn into something actionable.
+
+Even when excellent course material exists, there is often a real **activation energy hump**
+spread across:
+
+- finding useful content
+- extracting the structure
+- organizing the concepts
+- deciding what to trust
+- getting a usable learning domain set up
+
+Didactopus is meant to help overcome that hump.
+
+Its ingestion and review pipeline should let a motivated learner or curator get from
+"here is a pile of course material" to "here is a usable reviewed domain pack" with
+substantially less friction.
+
+#### What is included
+
+- interactive React SPA review UI
+- JSON-backed review state model
+- curation action application
+- promoted-pack export
+- reviewer notes and trust-status editing
+- conflict resolution support
+- README and FAQ updates reflecting the activation-energy goal
+- sample review data and promoted pack output
+
+#### Core workflow
+
+1. ingest course or topic materials into a draft pack
+2. open the review UI
+3. inspect concepts, conflicts, and review flags
+4. edit statuses, notes, titles, descriptions, and prerequisites
+5. resolve conflicts
+6. export a promoted reviewed pack
+
+#### Why the review UI matters for course ingestion
+
+In practice, course ingestion is not only a parsing problem. It is a **startup friction**
+problem. A person may know what they want to study, and even know that good material exists,
+but still fail to start because turning raw educational material into a coherent mastery
+domain is too much work.
+
+Didactopus should reduce that work enough that getting started becomes realistic.
+
+
+
+### Review Workflow
+
+This revision adds a **review UI / curation workflow scaffold** for generated draft packs.
+
+The purpose is to let a human reviewer inspect draft outputs from the course/topic
+ingestion pipeline, make explicit curation decisions, and promote a reviewed draft
+into a more trusted domain pack.
+
+#### What is included
+
+- review-state schema
+- draft-pack loader
+- curation action model
+- review decision ledger
+- promoted-pack writer
+- static HTML review UI scaffold
+- JSON data export for the UI
+- sample curated review session
+- sample promoted pack output
+
+#### Core idea
+
+Draft packs should not move directly into trusted use.
+Instead, they should pass through a curation workflow where a reviewer can:
+
+- merge concepts
+- split concepts
+- edit prerequisites
+- mark concepts as trusted / provisional / rejected
+- resolve conflict flags
+- annotate rationale
+- promote a curated pack into a reviewed pack
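The curation flow above can be sketched as a tiny in-memory model. This is illustrative only: the real schema lives in `src/didactopus/review_schema.py`, and the class and field names below are assumptions, not the actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ConceptDraft:
    concept_id: str
    status: str = "draft"  # draft -> trusted / provisional / rejected
    notes: list[str] = field(default_factory=list)

@dataclass
class CurationSession:
    concepts: dict[str, ConceptDraft]
    # (action, target, rationale) tuples; every decision stays auditable.
    ledger: list[tuple[str, str, str]] = field(default_factory=list)

    def set_status(self, concept_id: str, status: str, rationale: str) -> None:
        self.concepts[concept_id].status = status
        self.ledger.append(("set_status", concept_id, rationale))

session = CurationSession(concepts={"bayes-rule": ConceptDraft("bayes-rule")})
session.set_status("bayes-rule", "trusted", "Definition matches standard usage.")
print(session.concepts["bayes-rule"].status)  # trusted
```

The point of the ledger is that promotion is never silent: each status change carries a rationale that can survive into the promoted pack.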
+
+#### Status
+
+This is a scaffold for a local-first workflow.
+The HTML UI is static but wired to a concrete JSON review-state model so it can
+later be upgraded into a richer SPA or desktop app without changing the data contracts.
+
+### Course-to-Course Merger
+
+This revision adds two major capabilities:
+
+- **real document adapter scaffolds** for PDF, DOCX, PPTX, and HTML
+- a **cross-course merger** for combining multiple course-derived packs into one stronger domain draft
+
+These additions extend the earlier multi-source ingestion layer from "multiple files for one course"
+to "multiple courses or course-like sources for one topic domain."
+
+#### What is included
+
+- adapter registry for:
+ - PDF
+ - DOCX
+ - PPTX
+ - HTML
+ - Markdown
+ - text
+- normalized document extraction interface
+- course bundle ingestion across multiple source documents
+- cross-course terminology and overlap analysis
+- merged topic-pack emitter
+- cross-course conflict report
+- example source files and example merged output
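Cross-course terminology analysis can be pictured as a set comparison over concept identifiers, with diverging definitions flagged for the conflict report. This is a sketch over assumed data shapes, not the merger's actual logic:

```python
# Two course-derived packs, keyed by concept id (illustrative data).
course_a = {"bayes-theorem": "Updating beliefs with evidence.",
            "prior": "Belief held before seeing data."}
course_b = {"bayes-theorem": "A formula relating conditional probabilities.",
            "likelihood": "Probability of the data given a hypothesis."}

# Concepts both courses teach, and those they describe differently.
shared = sorted(set(course_a) & set(course_b))
conflicts = [cid for cid in shared if course_a[cid] != course_b[cid]]

print(shared)     # ['bayes-theorem']
print(conflicts)  # ['bayes-theorem'] -> goes into the conflict report
```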
+
+#### Design stance
+
+This is still scaffold-level extraction. The purpose is to define stable interfaces and emitted artifacts,
+not to claim perfect semantic parsing of every teaching document.
+
+The implementation is designed so stronger parsers can later replace the stub extractors without changing
+the surrounding pipeline.
+
+
+### Multi-Source Course Ingestion
+
+This revision adds a **Multi-Source Course Ingestion Layer**.
+
+The pipeline can now accept multiple source files representing the same course or
+topic domain, normalize them into a shared intermediate representation, merge them,
+and emit a single draft Didactopus pack plus a conflict report.
+
+#### Supported scaffold source types
+
+Current scaffold adapters:
+- Markdown (`.md`)
+- Plain text (`.txt`)
+- HTML-ish text (`.html`, `.htm`)
+- Transcript text (`.transcript.txt`)
+- Syllabus text (`.syllabus.txt`)
+
+This revision is intentionally adapter-oriented, so future PDF, slide, and DOCX
+adapters can be added behind the same interface.
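A common way to keep ingestion adapter-oriented is an extension-keyed registry, so new formats plug in behind one dispatch function. The sketch below is hypothetical; the project's actual adapter interface may differ:

```python
from pathlib import Path
from typing import Callable

# Maps file extension -> adapter returning a normalized record.
ADAPTERS: dict[str, Callable[[str], dict]] = {}

def register(ext: str):
    def wrap(fn: Callable[[str], dict]) -> Callable[[str], dict]:
        ADAPTERS[ext] = fn
        return fn
    return wrap

@register(".md")
def parse_markdown(text: str) -> dict:
    # Headings become candidate lesson titles.
    titles = [ln.lstrip("# ").strip() for ln in text.splitlines() if ln.startswith("#")]
    return {"kind": "markdown", "titles": titles}

@register(".txt")
def parse_text(text: str) -> dict:
    return {"kind": "text", "titles": text.splitlines()[:1]}

def ingest(name: str, text: str) -> dict:
    ext = Path(name).suffix
    if ext not in ADAPTERS:
        raise ValueError(f"no adapter registered for {ext!r}")
    return ADAPTERS[ext](text)

print(ingest("week1.md", "# Probability Basics\nSome prose."))
# {'kind': 'markdown', 'titles': ['Probability Basics']}
```

A future PDF or DOCX adapter only needs to register its extension; the surrounding pipeline calls `ingest` unchanged.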
+
+#### What is included
+
+- multi-source adapter dispatch
+- normalized source records
+- source merge logic
+- cross-source terminology conflict report
+- duplicate lesson/title detection
+- merged draft pack emission
+- merged attribution manifest
+- sample multi-source inputs
+- sample merged output pack
+
+
+### Course Ingestion Pipeline
+
+This revision adds a **Course-to-Pack Ingestion Pipeline** plus a **stable rule-policy adapter layer**.
+
+The design goal is to turn open or user-supplied course materials into draft
+Didactopus domain packs without introducing a brittle external rule-engine dependency.
+
+#### Why no third-party rule engine here?
+
+To minimize dependency risk, this scaffold uses a small declarative rule-policy
+adapter implemented in pure Python and standard-library data structures.
+
+That gives Didactopus:
+- portable rules
+- inspectable rule definitions
+- deterministic behavior
+- zero extra runtime dependency for policy evaluation
+
+If a stronger rule engine is needed later, this adapter can remain the stable API surface.
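A pure-Python rule policy can be as small as a list of data-only rules plus a deterministic evaluator. The rule vocabulary below (`max_len`, `min_len`) is invented for illustration; the scaffold's real rule definitions live in the repository:

```python
# Rules are plain data: portable, inspectable, and dependency-free.
RULES = [
    {"field": "prerequisites", "op": "max_len", "value": 4,
     "flag": "too-many-prerequisites"},
    {"field": "description", "op": "min_len", "value": 10,
     "flag": "description-too-short"},
]

def evaluate(concept: dict) -> list[str]:
    """Return policy flags for one concept candidate, in rule order."""
    flags = []
    for rule in RULES:
        size = len(concept.get(rule["field"], ""))
        if rule["op"] == "max_len" and size > rule["value"]:
            flags.append(rule["flag"])
        elif rule["op"] == "min_len" and size < rule["value"]:
            flags.append(rule["flag"])
    return flags

print(evaluate({"prerequisites": ["a", "b"], "description": "Bad."}))
# ['description-too-short']
```

Because evaluation is ordinary data traversal, the same rule definitions could later be compiled for a stronger engine without changing callers.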
+
+#### What is included
+
+- normalized course schema
+- Markdown/HTML-ish text ingestion adapter
+- module / lesson / objective extraction
+- concept candidate extraction
+- prerequisite guess generation
+- rule-policy adapter
+- draft pack emitter
+- review report generation
+- sample course input
+- sample generated pack outputs
+
+
+### Mastery Ledger
+
+This revision adds a **Mastery Ledger + Capability Export** layer.
+
+The main purpose is to let Didactopus turn accumulated learner state into
+portable, inspectable artifacts that can support downstream deployment,
+review, orchestration, or certification-like workflows.
+
+#### What is new
+
+- mastery ledger data model
+- capability profile export
+- JSON export of mastered concepts and evaluator summaries
+- Markdown export of a readable capability report
+- artifact manifest for produced deliverables
+- demo CLI for generating exports for an AI student or human learner
+- FAQ covering how learned mastery is represented and put to work
+
+#### Why this matters
+
+Didactopus can now do more than guide learning. It can also emit a structured
+statement of what a learner appears able to do, based on explicit concepts,
+evidence, and artifacts.
+
+That makes it easier to use Didactopus as:
+- a mastery tracker
+- a portfolio generator
+- a deployment-readiness aid
+- an orchestration input for agent routing
+
+#### Mastery representation
+
+A learner's mastery is represented as structured operational state, including:
+
+- mastered concepts
+- evaluator results
+- evidence summaries
+- weak dimensions
+- attempt history
+- produced artifacts
+- capability export
+
+This is stricter than a normal chat transcript or self-description.
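A capability export can then be a plain JSON document derived from that state. The field names here are assumptions for illustration, not the exporter's actual schema:

```python
import json

# Minimal mastery state (illustrative).
mastery_state = {
    "learner": "demo-learner",
    "mastered": ["descriptive-statistics", "probability-basics"],
    "weak_dimensions": {"probability-basics": ["application"]},
}

# Derive a portable capability profile: one entry per mastered concept,
# carrying any known weak dimensions alongside it.
profile = {
    "learner": mastery_state["learner"],
    "capabilities": [
        {"concept": cid, "weak": mastery_state["weak_dimensions"].get(cid, [])}
        for cid in mastery_state["mastered"]
    ],
}

exported = json.dumps(profile, indent=2)
print(exported)
```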
+
+#### Future direction
+
+A later revision should connect the capability export with:
+- formal evaluator outputs
+- signed evidence ledgers
+- domain-specific capability schemas
+- deployment policies for agent routing
+
+
+### Evaluator Pipeline
+
+This revision introduces a **pluggable evaluator pipeline** that converts
+learner attempts into structured mastery evidence.
+
+### Agentic Learner Loop
+
+This revision adds an **agentic learner loop** that turns Didactopus into a closed-loop mastery system prototype.
+
+The loop can now:
+
+- choose the next concept via the graph-aware planner
+- generate a synthetic learner attempt
+- score the attempt into evidence
+- update mastery state
+- repeat toward a target concept
+
+This is still scaffold-level, but it is the first explicit implementation of the idea that **Didactopus can supervise not only human learners, but also AI student agents**.
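The loop can be pictured in a few lines over a toy prerequisite graph. Everything here is a stand-in: the concept graph, the attempt scorer, and the evidence threshold are invented for the sketch, not taken from the planner:

```python
import random

# Toy prerequisite graph: concept -> prerequisites.
GRAPH = {"stats": [], "probability": ["stats"], "bayes": ["probability"]}

def ready(mastered: set[str]) -> list[str]:
    """Concepts not yet mastered whose prerequisites are all mastered."""
    return [c for c, pre in GRAPH.items()
            if c not in mastered and all(p in mastered for p in pre)]

def run_loop(target: str, seed: int = 0) -> list[str]:
    rng = random.Random(seed)          # deterministic synthetic learner
    mastered: set[str] = set()
    history = []
    while target not in mastered:
        concept = ready(mastered)[0]   # planner stand-in: first ready concept
        score = rng.uniform(0.5, 1.0)  # synthetic attempt score
        if score >= 0.6:               # evidence-threshold stand-in
            mastered.add(concept)
        history.append(f"{concept}:{score:.2f}")
    return history

print(run_loop("bayes"))
```

The real loop swaps each stand-in for a component that already exists: the graph-aware planner, attempt generation, and the evaluator pipeline.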
+
+## Complete overview to this point
+
+Didactopus currently includes:
+
+- **Domain packs** for concepts, projects, rubrics, mastery profiles, templates, and cross-pack links
+- **Dependency resolution** across packs
+- **Merged learning graph** generation
+- **Concept graph engine** for cross-pack prerequisite reasoning, linking, pathfinding, and export
+- **Adaptive learner engine** for ready, blocked, and mastered concepts
+- **Evidence engine** with weighted, recency-aware, multi-dimensional mastery inference
+- **Concept-specific mastery profiles** with template inheritance
+- **Graph-aware planner** for utility-ranked next-step recommendations
+- **Agentic learner loop** for iterative goal-directed mastery acquisition
+
+## Agentic AI students
+
+An AI student under Didactopus is modeled as an **agent that accumulates evidence against concept mastery criteria**.
+
+It does not “learn” in the sense of having its model weights retrained inside Didactopus. Instead, its learned mastery is represented as:
+
+- current mastered concept set
+- evidence history
+- dimension-level competence summaries
+- concept-specific weak dimensions
+- adaptive plan state
+- optional artifacts, explanations, project outputs, and critiques it has produced
+
+In other words, Didactopus represents mastery as a **structured operational state**, not merely a chat transcript.
+
+That state can be put to work by:
+
+- selecting tasks the agent is now qualified to attempt
+- routing domain-relevant problems to the agent
+- exposing mastered concept profiles to orchestration logic
+- using evidence summaries to decide whether the agent should act, defer, or review
+- exporting a mastery portfolio for downstream use
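The act/defer/review decision in particular can be sketched as a simple gate over evidence strength. The threshold and the evidence map below are hypothetical:

```python
def decide(task_requirements: list[str],
           evidence: dict[str, float],
           threshold: float = 0.8) -> str:
    """Route a task based on per-concept evidence strength."""
    missing = [c for c in task_requirements if c not in evidence]
    if missing:
        return "defer"    # never attempted: do not act yet
    weak = [c for c in task_requirements if evidence[c] < threshold]
    return "review" if weak else "act"

evidence = {"probability-basics": 0.92, "bayes-theorem": 0.65}
print(decide(["probability-basics"], evidence))                   # act
print(decide(["probability-basics", "bayes-theorem"], evidence))  # review
print(decide(["causal-inference"], evidence))                     # defer
```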
+
+## FAQ
+
+See:
+- `docs/faq.md`
+
+## Correctness and formal knowledge components
+
+See:
+- `docs/correctness-and-knowledge-engine.md`
+
+Short version: yes, there is a strong argument that Didactopus will eventually benefit from a more formal knowledge-engine layer, especially for domains where correctness can be stated in symbolic, logical, computational, or rule-governed terms.
+
+A good future architecture is likely **hybrid**:
+
+- LLM/agentic layer for explanation, synthesis, critique, and exploration
+- formal knowledge engine for rule checking, constraint satisfaction, proof support, symbolic validation, and executable correctness checks
+
+## Repository structure
+
+
+```text
+didactopus/
+├── README.md
+├── artwork/
+├── configs/
+├── docs/
+├── examples/
+├── src/didactopus/
+├── tests/
+└── webui/
+```
diff --git a/README.md b/README.md
index 7e24f8a..e3aaf4d 100644
--- a/README.md
+++ b/README.md
@@ -8,6 +8,65 @@
## Recent revisions
+### Interactive Domain Review
+
+This revision upgrades the earlier static review scaffold into an **interactive local SPA review UI**.
+
+The new review layer is meant to help a human curator work through draft packs created
+by the ingestion pipeline and promote them into more trusted reviewed packs.
+
+#### Why this matters
+
+One of the practical problems with using open online course content is that the material
+is often scattered, inconsistently structured, awkward to reuse, and cognitively expensive
+to turn into something actionable.
+
+Even when excellent course material exists, there is often a real **activation energy hump**
+spread across:
+
+- finding useful content
+- extracting the structure
+- organizing the concepts
+- deciding what to trust
+- getting a usable learning domain set up
+
+Didactopus is meant to help overcome that hump.
+
+Its ingestion and review pipeline should let a motivated learner or curator get from
+"here is a pile of course material" to "here is a usable reviewed domain pack" with
+substantially less friction.
+
+#### What is included
+
+- interactive React SPA review UI
+- JSON-backed review state model
+- curation action application
+- promoted-pack export
+- reviewer notes and trust-status editing
+- conflict resolution support
+- README and FAQ updates reflecting the activation-energy goal
+- sample review data and promoted pack output
+
+#### Core workflow
+
+1. ingest course or topic materials into a draft pack
+2. open the review UI
+3. inspect concepts, conflicts, and review flags
+4. edit statuses, notes, titles, descriptions, and prerequisites
+5. resolve conflicts
+6. export a promoted reviewed pack
+
+#### Why the review UI matters for course ingestion
+
+In practice, course ingestion is not only a parsing problem. It is a **startup friction**
+problem. A person may know what they want to study, and even know that good material exists,
+but still fail to start because turning raw educational material into a coherent mastery
+domain is too much work.
+
+Didactopus should reduce that work enough that getting started becomes realistic.
+
+
+
+### Review Workflow
This revision adds a **review UI / curation workflow scaffold** for generated draft packs.
@@ -277,13 +336,15 @@ A good future architecture is likely **hybrid**:
## Repository structure
+
```text
didactopus/
├── README.md
├── artwork/
├── configs/
├── docs/
-├── domain-packs/
+├── examples/
├── src/didactopus/
-└── tests/
+├── tests/
+└── webui/
```
diff --git a/docs/faq.md b/docs/faq.md
index 607cd01..c4005a6 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -1,27 +1,55 @@
# FAQ
-## Why add a review UI?
+## Why does Didactopus need ingestion and review tools?
-Because automatically generated packs are draft assets, not final trusted assets.
+Because useful course material often exists in forms that are difficult to activate for
+serious self-directed learning. The issue is not just availability of information; it is
+the effort required to transform that information into a usable learning domain.
-## What can a reviewer change?
+## What problem is this trying to solve?
-In this scaffold:
-- concept trust status
+A common problem is the **activation energy hump**:
+- the course exists
+- the notes exist
+- the syllabus exists
+- the learner is motivated
+- but the path from raw material to usable study structure is still too hard
+
+Didactopus is meant to reduce that hump.
+
+## Why not just read course webpages directly?
+
+Because mastery-oriented use needs structure:
+- concepts
- prerequisites
-- titles
-- descriptions
-- merge/split intent records
-- conflict resolution notes
+- projects
+- rubrics
+- review decisions
+- trust statuses
-## Is the UI fully interactive?
+Raw course pages do not usually provide these in a directly reusable form.
-Not yet. The current version is a static HTML scaffold backed by real JSON data models.
+## Why have a review UI?
-## Why keep a review ledger?
+Because automated ingestion creates drafts, not final trusted packs. A reviewer still needs
+to make explicit curation decisions.
-To preserve provenance and make curation decisions auditable.
+## What can the SPA review UI do in this scaffold?
-## Does promotion mean certification?
+- inspect concepts
+- edit trust status
+- edit notes
+- edit prerequisites
+- resolve conflicts
+- export a promoted reviewed pack
-No. Promotion means "reviewed and improved for Didactopus use," not formal certification.
+## Is this already a full production UI?
+
+No. It is a local-first interactive scaffold with stable data contracts, suitable for
+growing into a stronger production interface.
+
+## Does Didactopus eliminate the need to think?
+
+No. The goal is to reduce startup friction and organizational overhead, not to replace
+judgment. The user or curator still decides what is trustworthy and how the domain should
+be shaped.
diff --git a/docs/interactive-review-ui.md b/docs/interactive-review-ui.md
new file mode 100644
index 0000000..743fef9
--- /dev/null
+++ b/docs/interactive-review-ui.md
@@ -0,0 +1,34 @@
+# Interactive Review UI
+
+This revision introduces a React-based local SPA for reviewing draft packs.
+
+## Goals
+
+- reduce curation friction
+- make review decisions explicit
+- allow pack promotion after inspection
+- preserve provenance and review rationale
+
+## Features in this scaffold
+
+- concept list with editable fields
+- trust status editing
+- concept notes editing
+- prerequisite editing
+- conflict visibility and resolution
+- in-browser promoted-pack export generation
+
+## Data model
+
+The SPA loads `review_data.json` and can emit:
+- updated review state
+- review ledger entries
+- promoted concepts payload
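The exact schema of `review_data.json` is defined by `export_review_ui_data`; a plausible minimal shape, shown here purely as an assumption, looks like:

```python
import json

# Assumed shape only; the authoritative schema is whatever
# export_review_ui_data in src/didactopus/review_export.py emits.
review_data = {
    "reviewer": "demo-reviewer",
    "concepts": [
        {"id": "prior-and-posterior",
         "title": "Prior and Posterior",
         "status": "draft",
         "prerequisites": ["probability-basics"],
         "notes": []},
    ],
    "conflicts": [],
    "review_flags": [],
}

print(sorted(review_data))
```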
+
+## Next steps
+
+- file open/save integration
+- conflict filtering
+- merge/split concept actions in UI
+- richer diff views
+- domain-pack validation from the UI
diff --git a/examples/draft_pack/concepts.yaml b/examples/draft_pack/concepts.yaml
index d94fa8d..72e5cc1 100644
--- a/examples/draft_pack/concepts.yaml
+++ b/examples/draft_pack/concepts.yaml
@@ -6,7 +6,6 @@ concepts:
mastery_signals:
- Explain mean, median, and variance.
mastery_profile: {}
-
- id: probability-basics
title: Probability Basics
description: Basic event probability and conditional probability.
@@ -15,7 +14,6 @@ concepts:
mastery_signals:
- Compute a simple conditional probability.
mastery_profile: {}
-
- id: prior-and-posterior
title: Prior and Posterior
description: Beliefs before and after evidence.
diff --git a/examples/promoted_pack/concepts.yaml b/examples/promoted_pack/concepts.yaml
new file mode 100644
index 0000000..5dbba30
--- /dev/null
+++ b/examples/promoted_pack/concepts.yaml
@@ -0,0 +1,21 @@
+concepts:
+ - id: descriptive-statistics
+ title: Descriptive Statistics
+ description: Measures of center and spread.
+ prerequisites: []
+ mastery_signals:
+ - Explain mean, median, and variance.
+ status: trusted
+ notes:
+ - Reviewed in initial curation pass.
+ mastery_profile: {}
+ - id: probability-basics
+ title: Probability Basics
+ description: Basic event probability and conditional probability.
+ prerequisites:
+ - descriptive-statistics
+ mastery_signals:
+ - Compute a simple conditional probability.
+ status: provisional
+ notes: []
+ mastery_profile: {}
diff --git a/examples/promoted_pack/pack.yaml b/examples/promoted_pack/pack.yaml
new file mode 100644
index 0000000..ae291c3
--- /dev/null
+++ b/examples/promoted_pack/pack.yaml
@@ -0,0 +1,6 @@
+name: introductory-bayesian-inference
+display_name: Introductory Bayesian Inference
+version: 0.1.0-reviewed
+curation:
+ reviewer: Wesley R. Elsberry
+ ledger_entries: 2
diff --git a/pyproject.toml b/pyproject.toml
index d94ce90..57dad6c 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -5,7 +5,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "didactopus"
version = "0.1.0"
-description = "Didactopus: draft-pack review workflow scaffold"
+description = "Didactopus: interactive review UI scaffold"
readme = "README.md"
requires-python = ">=3.10"
license = {text = "MIT"}
diff --git a/src/didactopus/main.py b/src/didactopus/main.py
index cd0d5a9..b530728 100644
--- a/src/didactopus/main.py
+++ b/src/didactopus/main.py
@@ -8,11 +8,10 @@ from .review_loader import load_draft_pack
from .review_schema import ReviewSession, ReviewAction
from .review_actions import apply_action
from .review_export import export_review_state_json, export_promoted_pack, export_review_ui_data
-from .ui_scaffold import write_review_ui
def build_parser() -> argparse.ArgumentParser:
- parser = argparse.ArgumentParser(description="Didactopus review workflow scaffold")
+ parser = argparse.ArgumentParser(description="Didactopus interactive review workflow scaffold")
parser.add_argument("--draft-pack", required=True, help="Path to draft pack directory")
parser.add_argument("--output-dir", default="review-output")
parser.add_argument("--config", default="configs/config.example.yaml")
@@ -25,7 +24,6 @@ def main() -> None:
draft = load_draft_pack(args.draft_pack)
session = ReviewSession(reviewer=config.review.default_reviewer, draft_pack=draft)
- # Demo curation actions
if session.draft_pack.concepts:
first = session.draft_pack.concepts[0].concept_id
apply_action(session, session.reviewer, ReviewAction(
@@ -41,37 +39,17 @@ def main() -> None:
rationale="Record reviewer note.",
))
- if len(session.draft_pack.concepts) > 1:
- second = session.draft_pack.concepts[1].concept_id
- apply_action(session, session.reviewer, ReviewAction(
- action_type="set_status",
- target=second,
- payload={"status": "provisional"},
- rationale="Keep provisional pending further review.",
- ))
-
- if session.draft_pack.conflicts:
- apply_action(session, session.reviewer, ReviewAction(
- action_type="resolve_conflict",
- target="",
- payload={"conflict": session.draft_pack.conflicts[0]},
- rationale="Resolved first conflict in demo workflow.",
- ))
-
outdir = Path(args.output_dir)
outdir.mkdir(parents=True, exist_ok=True)
-
export_review_state_json(session, outdir / "review_session.json")
export_review_ui_data(session, outdir)
- write_review_ui(outdir)
if config.review.write_promoted_pack:
export_promoted_pack(session, outdir / "promoted_pack")
- print("== Didactopus Review Workflow ==")
+ print("== Didactopus Interactive Review Workflow ==")
print(f"Draft pack: {args.draft_pack}")
print(f"Reviewer: {session.reviewer}")
print(f"Concepts: {len(session.draft_pack.concepts)}")
print(f"Ledger entries: {len(session.ledger)}")
- print(f"Remaining conflicts: {len(session.draft_pack.conflicts)}")
print(f"Output dir: {outdir}")
diff --git a/src/didactopus/review_actions.py b/src/didactopus/review_actions.py
index 692f16d..1a1dd25 100644
--- a/src/didactopus/review_actions.py
+++ b/src/didactopus/review_actions.py
@@ -29,22 +29,5 @@ def apply_action(session: ReviewSession, reviewer: str, action: ReviewAction) ->
note = action.payload.get("note", "")
if note:
target.notes.append(note)
- elif action.action_type == "merge_concepts":
- source = _find_concept(session, action.payload.get("source", ""))
- dest = _find_concept(session, action.payload.get("destination", ""))
- if source is not None and dest is not None and source is not dest:
- for prereq in source.prerequisites:
- if prereq not in dest.prerequisites:
- dest.prerequisites.append(prereq)
- for sig in source.mastery_signals:
- if sig not in dest.mastery_signals:
- dest.mastery_signals.append(sig)
- for note in source.notes:
- if note not in dest.notes:
- dest.notes.append(note)
- source.status = "rejected"
- source.notes.append(f"Merged into {dest.concept_id}")
- elif action.action_type == "split_concept" and target is not None:
- target.notes.append("Split requested; manual follow-up required.")
session.ledger.append(ReviewLedgerEntry(reviewer=reviewer, action=action))
diff --git a/src/didactopus/review_loader.py b/src/didactopus/review_loader.py
index 8ca90da..8498a12 100644
--- a/src/didactopus/review_loader.py
+++ b/src/didactopus/review_loader.py
@@ -22,30 +22,21 @@ def load_draft_pack(pack_dir: str | Path) -> DraftPackData:
)
)
- conflicts_path = pack_dir / "conflict_report.md"
- review_path = pack_dir / "review_report.md"
- attribution_path = pack_dir / "license_attribution.json"
- pack_path = pack_dir / "pack.yaml"
+ def bullet_lines(path: Path) -> list[str]:
+ if not path.exists():
+ return []
+ return [line[2:] for line in path.read_text(encoding="utf-8").splitlines() if line.startswith("- ")]
- conflicts = []
- if conflicts_path.exists():
- conflicts = [
- line[2:] for line in conflicts_path.read_text(encoding="utf-8").splitlines()
- if line.startswith("- ")
- ]
-
- review_flags = []
- if review_path.exists():
- review_flags = [
- line[2:] for line in review_path.read_text(encoding="utf-8").splitlines()
- if line.startswith("- ")
- ]
+ conflicts = bullet_lines(pack_dir / "conflict_report.md")
+ review_flags = bullet_lines(pack_dir / "review_report.md")
attribution = {}
+ attribution_path = pack_dir / "license_attribution.json"
if attribution_path.exists():
attribution = json.loads(attribution_path.read_text(encoding="utf-8"))
pack = {}
+ pack_path = pack_dir / "pack.yaml"
if pack_path.exists():
pack = yaml.safe_load(pack_path.read_text(encoding="utf-8")) or {}
diff --git a/tests/test_review_actions.py b/tests/test_review_actions.py
index 6548668..d615f0f 100644
--- a/tests/test_review_actions.py
+++ b/tests/test_review_actions.py
@@ -12,17 +12,3 @@ def test_apply_status_action() -> None:
apply_action(session, "R", ReviewAction(action_type="set_status", target="c1", payload={"status": "trusted"}))
assert session.draft_pack.concepts[0].status == "trusted"
assert len(session.ledger) == 1
-
-
-def test_merge_action() -> None:
- session = ReviewSession(
- reviewer="R",
- draft_pack=DraftPackData(
- concepts=[
- ConceptReviewEntry(concept_id="a", title="A"),
- ConceptReviewEntry(concept_id="b", title="B"),
- ]
- ),
- )
- apply_action(session, "R", ReviewAction(action_type="merge_concepts", target="", payload={"source": "a", "destination": "b"}))
- assert session.draft_pack.concepts[0].status == "rejected"
diff --git a/tests/test_webui_files.py b/tests/test_webui_files.py
new file mode 100644
index 0000000..ef6e13e
--- /dev/null
+++ b/tests/test_webui_files.py
@@ -0,0 +1,6 @@
+from pathlib import Path
+
+
+def test_webui_scaffold_exists() -> None:
+ assert Path("webui/src/App.jsx").exists()
+ assert Path("webui/sample/review_data.json").exists()
diff --git a/webui/index.html b/webui/index.html
new file mode 100644
index 0000000..4ecb1aa
--- /dev/null
+++ b/webui/index.html
@@ -0,0 +1,12 @@
+
+
+
+ Online course material can be excellent and still be hard to activate.
+ Didactopus aims to reduce the setup burden from “useful but messy course content”
+ to “usable reviewed learning domain.”
+