Initial Codex commit.

This commit is contained in:
welsberr 2026-03-18 15:06:17 -04:00
parent 0a437d7736
commit c0c5137e74
53 changed files with 5408 additions and 229 deletions

.gitignore vendored

@@ -1,229 +1,14 @@
# ---> Python
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*.pyc
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
# ---> Emacs
# -*- mode: gitignore; -*-
*~
\#*\#
/.emacs.desktop
/.emacs.desktop.lock
*.elc
auto-save-list
tramp
.\#*
# Org-mode
.org-id-locations
*_archive
# flymake-mode
*_flymake.*
# eshell files
/eshell/history
/eshell/lastdir
# elpa packages
/elpa/
# reftex files
*.rel
# AUCTeX auto folder
/auto/
# cask packages
.cask/
dist/
build/
# Flycheck
flycheck_*.el
# server auth directory
/server/
# projectiles files
.projectile
*.egg-info/
.DS_Store
node_modules/
coverage/
artifacts/
*.log
tmp/
*.tsbuildinfo
.benchmarks/
# directory configuration
.dir-locals.el
# network security
/network-security.data
# ---> Rust
# Generated by Cargo
# will have compiled files and executables
debug/
target/
# Remove Cargo.lock from gitignore if creating an executable, leave it for libraries
# More information here https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html
Cargo.lock
# These are backup files generated by rustfmt
**/*.rs.bk
# MSVC Windows builds of rustc generate these, which store debugging information
*.pdb

README.md

@@ -1,5 +1,81 @@
# Synaptopus
Many minds, one workbench.
Synaptopus is a multi-architecture artificial neural systems lab for composing, comparing, and teaching interacting network models.
The repository is intended as a broader home for reusable artificial neural system components, pedagogical tooling, and hybrid systems that combine unlike architectures into a single executable process. The thesis-derived composition system developed in the neighboring repository is one important origin point, but it is not the boundary of the idea.
## What Synaptopus Is For
- Building systems in which different neural architectures play different roles
- Comparing architecture families under a common execution model
- Teaching artificial neural systems through inspectable, stepwise behavior
- Supporting both domain-specific applications and cross-domain experiments
- Providing a basis for future graphical and browser-based experimentation tools
## Design Direction
Synaptopus is meant to support systems built from cooperating components such as:
- generators
- critics
- categorizers
- controllers
- analyzers
The emphasis is on explicit interaction among heterogeneous models rather than a single monolithic network.
## Initial Provenance
The immediate historical provenance of this repository is a 1988-1989 master's thesis project by Wesley Royce Elsberry, implemented in Turbo Pascal, which combined Hopfield-style generation, backpropagation-based evaluation, and ART-style categorization in a hybrid musical composition system.
That work has already been reconstructed in Python elsewhere as a thesis-focused project. Synaptopus is the next abstraction layer: a more general repository for multi-architecture artificial neural systems, their interfaces, their orchestration, and their pedagogical presentation.
See [docs/HISTORY.md](docs/HISTORY.md) for the longer provenance note.
See [docs/ARCHITECTURE.md](docs/ARCHITECTURE.md) for the current execution and serialization model.
See [docs/FORMATS.md](docs/FORMATS.md) for the current JSON artifact contracts and examples.
See [docs/ROADMAP.md](docs/ROADMAP.md) for the broader project plan.
See [typescript/README.md](typescript/README.md) for the first TypeScript-side contract layer.
See [viewer/index.html](viewer/index.html) for the minimal browser-based trace viewer.
For the smoothest viewer workflow:
```bash
PYTHONPATH=src python -m synaptopus ./artifacts --demo parity_pressure
python -m http.server 8000
```
Then load `http://127.0.0.1:8000/artifacts/manifest.json` in the viewer's manifest URL field.
If you want a checkpointable run artifact as well:
```bash
PYTHONPATH=src python -m synaptopus ./artifacts \
  --demo parity_pressure \
  --accepted-count 4 \
  --snapshot-after-accepted 2
```
That writes `snapshot.json` alongside the graph, trace, report, and manifest artifacts. The snapshot captures the demo state plus mutable network internals so the run can be resumed later rather than replayed from scratch.
## Planned Scope
- reusable architecture interfaces
- generic network implementations
- mixed-family example systems
- domain adapters and example tasks
- execution tracing and information-theoretic analysis
- visual tooling for inspecting interacting systems
- a future JavaScript and web-based driver
## Repository Layout
```text
src/synaptopus/   package code
docs/             project notes, provenance, and architecture documents
```
## Status
This repository is now past the pure-scaffold stage. It contains the first generic runtime, reporting, serialization, graph, backpropagation, ART1, and Hopfield layers, plus internal mixed-family demos built on the generic orchestration model. The exporter can emit artifacts for more than one internal demo and can now save checkpointable snapshot artifacts for later resume. The thesis-derived Python implementation remains the historical reference for the first complete hybrid system.

docs/ARCHITECTURE.md Normal file

@@ -0,0 +1,164 @@
# Architecture
## Purpose
Synaptopus is designed around a simple idea: unlike artificial neural system families should be able to participate in one executable process without being flattened into one model class or hidden behind a monolithic runtime.
The current Python implementation therefore separates:
- architecture families
- execution semantics
- reporting and analysis
- graph descriptions
- trace serialization
That separation is deliberate. It is meant to support both local Python experimentation and a future JavaScript or browser-based driver that uses the same conceptual model.
## Execution Model
The core runtime model is sequential and stateful.
At the lowest level, a system produces a `StepTrace`:
- `previous_state`
- `next_state`
- `candidate`
- `accepted`
- `elapsed_seconds`
- optional `metadata`
These traces are accumulated into an `ExecutionRecord`, which contains:
- all attempted steps
- the accepted subset
- final state
- total runtime
Execution records can also be merged when a run is resumed from a checkpoint. That makes it possible to preserve both the pre-checkpoint history and the post-resume continuation as one logical run.
This model is intentionally compatible with systems that:
- generate candidates
- evaluate or categorize them
- reject and retry
- maintain internal state across attempts
That makes it a better fit for hybrid cognitive loops than a purely acyclic batch pipeline.
## Component Roles
The current orchestration model defines explicit component roles:
- `Generator`
- `Critic`
- `Categorizer`
- `AcceptancePolicy`
- `StateTransition`
These are combined by `CooperativeSystem`, which performs:
1. candidate generation
2. critique
3. categorization
4. policy decision
5. state transition
The result is still emitted as a standard `StepTrace`, so all downstream reporting and serialization remains generic.
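The five-stage loop can be sketched as a single free function. The role method names (`generate`, `critique`, `categorize`, `decide`, `advance`) follow the component protocols in this repository; the free-function form itself is illustrative, since the real orchestrator is the `CooperativeSystem` class.

```python
def cooperative_step(state, generator, critic, categorizer, policy, transition):
    """One pass of the five-stage cooperative loop described above."""
    candidate = generator.generate(state)                            # 1. candidate generation
    critique = critic.critique(state, candidate)                     # 2. critique
    category = categorizer.categorize(state, candidate)              # 3. categorization
    decision = policy.decide(state, candidate, critique, category)   # 4. policy decision
    next_state = transition.advance(state, candidate, critique, category, decision)  # 5. state transition
    return candidate, decision, next_state
```

Because each stage is an explicit call on a role object, any component can be swapped for a different architecture family without touching the loop.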
## Architecture Families
At present, Synaptopus contains three reusable architecture families:
- multilayer feedforward backpropagation
- ART1 category learning
- Hopfield-style recurrent dynamics
The long-term intent is to add more families while preserving a stable orchestration and trace model.
## Graph Schema
The graph layer is deliberately thin. It is not a second execution engine.
It provides:
- `GraphNodeSpec`
- `GraphEdgeSpec`
- `GraphSchema`
- `FunctionalNode`
Each node spec records:
- `node_id`
- `node_type`
- `input_names`
- `output_names`
Each edge spec records:
- `source_node_id`
- `source_output`
- `target_node_id`
- `target_input`
This is enough to describe a workbench graph for a future UI. The runtime semantics still live in the Python component objects and execution traces, not in the schema alone.
See [FORMATS.md](FORMATS.md) for the current JSON schema shape and examples.
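The node and edge records above can be sketched as frozen dataclasses. The field names are taken from this section; the `validate_edge` helper is hypothetical, added only to show why the schema records port names at all.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class GraphNodeSpec:
    node_id: str
    node_type: str
    input_names: tuple[str, ...]
    output_names: tuple[str, ...]


@dataclass(frozen=True)
class GraphEdgeSpec:
    source_node_id: str
    source_output: str
    target_node_id: str
    target_input: str


def validate_edge(edge: GraphEdgeSpec, nodes_by_id: dict[str, GraphNodeSpec]) -> bool:
    # An edge is well-formed when both endpoints exist and name real ports.
    src = nodes_by_id[edge.source_node_id]
    dst = nodes_by_id[edge.target_node_id]
    return edge.source_output in src.output_names and edge.target_input in dst.input_names
```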
## Trace Serialization
Execution traces are exported through the serialization layer as JSON-safe objects.
The important rule is that the serialization boundary is explicit:
- Python dataclasses are converted to plain objects
- tuples become JSON arrays
- nested metadata is normalized recursively
The exported structures are:
- `SerializedStepTrace`
- `SerializedExecutionRecord`
- `DemoSnapshot`
This format is intended to be consumed by future browser tooling, teaching interfaces, and experiment dashboards.
See [FORMATS.md](FORMATS.md) for current trace and report examples.
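The conversion rules at the serialization boundary can be sketched as a recursive normalizer. The package exports a `to_jsonable` function; this body is an illustrative reconstruction of the stated rules (dataclasses to plain objects, tuples to arrays, recursive metadata normalization), not the confirmed implementation.

```python
from dataclasses import asdict, is_dataclass


def to_jsonable(value):
    """Normalize runtime values into JSON-safe structures (illustrative sketch)."""
    if is_dataclass(value) and not isinstance(value, type):
        # Dataclass instances become plain objects, then normalize the fields.
        return {key: to_jsonable(item) for key, item in asdict(value).items()}
    if isinstance(value, dict):
        return {str(key): to_jsonable(item) for key, item in value.items()}
    if isinstance(value, (list, tuple)):
        # Tuples become JSON arrays.
        return [to_jsonable(item) for item in value]
    return value
```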
## Snapshot And Resume
Synaptopus now has a first checkpoint/resume path for its internal demos.
A `DemoSnapshot` captures:
- the demo identity
- the serialized execution record up to the checkpoint
- the mutable system internals needed to continue the run
- run parameters relevant to the checkpoint
This is intentionally more than a trace dump. A trace alone is good for inspection, but a restartable run also needs the live model state that produced the trace. For the current demos, that includes the ART1 category state and the trained backprop network parameters.
The snapshot layer is presently demo-specific rather than fully generic. That is deliberate. It keeps the checkpoint semantics honest while the broader contract for arbitrary user-defined systems is still being designed.
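The snapshot payload described above can be sketched as a dataclass whose fields follow the `demo_snapshot` contract in FORMATS.md. The dataclass body here is illustrative; the key property it demonstrates is that a snapshot must survive a JSON round trip to serve as a checkpoint.

```python
import json
from dataclasses import asdict, dataclass


@dataclass(frozen=True)
class DemoSnapshot:
    demo_name: str
    system: dict      # mutable model internals, e.g. ART1 categories and backprop weights
    record: dict      # serialized execution record up to the checkpoint
    parameters: dict  # run parameters relevant to the checkpoint


# Round-tripping through JSON is the minimal test of checkpoint viability.
snap = DemoSnapshot("parity_pressure", system={}, record={}, parameters={"accepted_count": 2})
restored = DemoSnapshot(**json.loads(json.dumps(asdict(snap))))
```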
## Reporting
Reporting is built over execution records rather than individual network families.
The current `RunReport` includes:
- accepted count
- attempt count
- average attempts per acceptance
- total runtime
- optional information-theoretic sequence analysis
This keeps reporting portable across domains and architectures.
## Why This Structure
This structure is meant to preserve three properties:
- heterogeneous architectures remain explicit
- execution stays inspectable
- web-facing tools can be built from serialized traces and graph schemas without redefining the system
That is the central architectural commitment of Synaptopus.

docs/FORMATS.md Normal file

@@ -0,0 +1,334 @@
# Formats
## Purpose
Synaptopus uses JSON-facing artifact types as the bridge to future browser and TypeScript tooling:
- artifact manifest
- graph schema
- execution trace
- run report
- demo snapshot
These are generated today by the internal demo exporter. The examples below reflect the current output shapes rather than speculative formats.
All exported artifacts are wrapped in a versioned envelope with this top-level shape:
- `artifact_type`
- `schema_version`
- `payload`
- `metadata`
Example:
```json
{
  "artifact_type": "graph_schema",
  "schema_version": "1.0",
  "payload": {
    "nodes": [],
    "edges": []
  },
  "metadata": {}
}
```
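The envelope can be built with a small helper. The `make_artifact_envelope` and `ARTIFACT_SCHEMA_VERSION` names match the package's exports, but the signature and body shown here are assumptions sketched from the envelope shape above.

```python
ARTIFACT_SCHEMA_VERSION = "1.0"


def make_artifact_envelope(artifact_type, payload, metadata=None):
    # Every exported artifact is wrapped in the same versioned envelope.
    return {
        "artifact_type": artifact_type,
        "schema_version": ARTIFACT_SCHEMA_VERSION,
        "payload": payload,
        "metadata": metadata or {},
    }
```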
## Artifact Manifest
The manifest is the directory-level index for a set of exported artifacts.
Top-level fields:
- `schema_version`
- `artifacts`
- `metadata`
Each artifact entry has:
- `artifact_type`
- `file_name`
Example:
```json
{
  "schema_version": "1.0",
  "artifacts": [
    { "artifact_type": "graph_schema", "file_name": "graph.json" },
    { "artifact_type": "execution_trace", "file_name": "trace.json" },
    { "artifact_type": "run_report", "file_name": "report.json" },
    { "artifact_type": "demo_snapshot", "file_name": "snapshot.json" }
  ],
  "metadata": {
    "example": "parity_pressure",
    "snapshot_after_accepted": 2
  }
}
```
## Graph Schema
The graph schema describes node roles and wiring. It does not encode full runtime behavior by itself.
Top-level fields:
- `nodes`
- `edges`
Each node has:
- `node_id`
- `node_type`
- `input_names`
- `output_names`
Each edge has:
- `source_node_id`
- `source_output`
- `target_node_id`
- `target_input`
Example:
```json
{
  "artifact_type": "graph_schema",
  "schema_version": "1.0",
  "payload": {
    "nodes": [
      {
        "node_id": "generator",
        "node_type": "generator",
        "input_names": ["state"],
        "output_names": ["candidate"]
      },
      {
        "node_id": "policy",
        "node_type": "policy",
        "input_names": ["state", "candidate", "critique", "category"],
        "output_names": ["decision", "accepted"]
      }
    ],
    "edges": [
      {
        "source_node_id": "generator",
        "source_output": "candidate",
        "target_node_id": "critic",
        "target_input": "candidate"
      }
    ]
  },
  "metadata": {}
}
```
## Execution Trace
The execution trace captures the real runtime behavior of a system under the generic accept/reject loop.
Top-level fields:
- `accepted`
- `attempts`
- `final_state`
- `total_seconds`
Each step trace contains:
- `previous_state`
- `next_state`
- `candidate`
- `accepted`
- `elapsed_seconds`
- `metadata`
The metadata payload is intentionally architecture- and example-specific, but it must still be JSON-safe. For the XOR novelty demo it contains:
- `critique`
- `category`
- `decision`
Example excerpt:
```json
{
  "artifact_type": "execution_trace",
  "schema_version": "1.0",
  "payload": {
    "accepted": [
      {
        "candidate": [0, 1],
        "accepted": true,
        "metadata": {
          "critique": {
            "outputs": [0.9850980332426884],
            "loss": 0.0
          },
          "category": {
            "winner": 0,
            "matched": true,
            "new_category": false,
            "delta_vigilance": false
          },
          "decision": {
            "accepted": true,
            "label": "accept"
          }
        }
      }
    ]
  },
  "metadata": {}
}
```
## Generic And Example-Specific Trace Fields
The execution trace has two layers:
- generic runtime fields, which should remain stable across examples
- `metadata`, which may vary by example or architecture family
Generic fields:
- `previous_state`
- `next_state`
- `candidate`
- `accepted`
- `elapsed_seconds`
Example-specific metadata should stay JSON-safe and should be interpreted by consumers only when they know the example or system family.
## Run Report
The run report is a compact summary artifact for comparison, dashboarding, and experiment logging.
Top-level fields inside `payload`:
- `parameters`
- `accepted_count`
- `attempt_count`
- `total_seconds`
- `sequence_analysis`
- `average_attempts_per_accept`
Example:
```json
{
  "artifact_type": "run_report",
  "schema_version": "1.0",
  "payload": {
    "parameters": {
      "example": "xor_novelty",
      "accepted_count": 2,
      "max_attempts_per_accept": 4
    },
    "accepted_count": 2,
    "attempt_count": 3,
    "total_seconds": 0.00019715959206223488,
    "sequence_analysis": {
      "item_count": 2,
      "alphabet_size": 4,
      "unigram_entropy_bits": 1.0,
      "conditional_entropy_bits": 0.0,
      "normalized_entropy": 0.5,
      "predictability": 1.0,
      "redundancy": 0.5
    },
    "average_attempts_per_accept": 1.5
  },
  "metadata": {}
}
```
The `sequence_analysis` object is optional. When present, it currently contains:
- `item_count`
- `alphabet_size`
- `unigram_entropy_bits`
- `conditional_entropy_bits`
- `normalized_entropy`
- `predictability`
- `redundancy`
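The derived fields can be reproduced with the standard library alone. The short calculation below reconstructs the example report's numbers for two accepted items over a 4-symbol alphabet; the function name is illustrative, not the package's API.

```python
import math
from collections import Counter


def unigram_entropy_bits(seq):
    # Shannon entropy of the item distribution, in bits.
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())


# Two accepted items over a 4-symbol alphabet, as in the report above.
seq = [0, 1]
h = unigram_entropy_bits(seq)   # 1.0
max_h = math.log2(4)            # 2.0
normalized = h / max_h          # 0.5
redundancy = 1.0 - normalized   # 0.5
# With a single observed transition, first-order conditional entropy is 0.0,
# so predictability = 1 - 0.0 / max_h = 1.0, matching the report.
```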
## Demo Snapshot
The demo snapshot artifact is the first checkpoint/resume format in Synaptopus. It is currently scoped to the internal demos rather than arbitrary user-defined systems.
Top-level fields inside `payload`:
- `demo_name`
- `system`
- `record`
- `parameters`
The `system` object stores the mutable model internals needed to resume execution, such as ART1 categories and backpropagation weights. The `record` object stores the accumulated execution history up to the checkpoint.
Example:
```json
{
  "artifact_type": "demo_snapshot",
  "schema_version": "1.0",
  "payload": {
    "demo_name": "parity_pressure",
    "system": {
      "critic_network": {},
      "categorizer_network": {},
      "acceptance_threshold": 0.8
    },
    "record": {
      "accepted": [],
      "attempts": [],
      "final_state": {
        "accepted": [[0, 0, 1], [1, 0, 0]],
        "attempts": 5
      },
      "total_seconds": 0.0005
    },
    "parameters": {
      "accepted_count": 2,
      "max_attempts_per_accept": 12
    }
  },
  "metadata": {
    "demo_name": "parity_pressure"
  }
}
```
## Versioning
The current artifact schema version is:
- `1.0`
Any future breaking change to the envelope, manifest, or payload structures should increment the schema version rather than silently changing field meaning.
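Consumers can enforce the versioning rule with a small guard before touching a payload. The `check_envelope` helper below is hypothetical, a sketch of the consumer-side check implied by this section rather than part of the package.

```python
def check_envelope(envelope: dict, expected_type: str) -> dict:
    """Reject artifacts whose envelope does not match the consumer's expectations."""
    if envelope.get("artifact_type") != expected_type:
        raise ValueError(f"unexpected artifact_type: {envelope.get('artifact_type')}")
    if envelope.get("schema_version") != "1.0":
        # A breaking change must bump the version, so unknown versions are rejected.
        raise ValueError(f"unsupported schema_version: {envelope.get('schema_version')}")
    return envelope["payload"]
```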
## Format Constraints
These constraints should remain stable:
- artifacts must be JSON-safe without Python-specific types
- exported files should be versioned via envelopes
- artifact sets should include a manifest
- tuples must serialize as arrays
- dataclass-like records must serialize as plain objects
- graph schemas must remain declarative
- traces must reflect actual runtime attempts and acceptances
- reports should stay compact and comparison-friendly
## Intended TypeScript Mapping
The natural future TypeScript split is:
- manifest interfaces
- artifact envelope interfaces
- graph schema interfaces
- trace interfaces
- report interfaces
Those should be derived directly from the current artifact contracts rather than reinterpreted independently in the frontend.

docs/HISTORY.md Normal file

@@ -0,0 +1,49 @@
# History
## Provenance
Synaptopus grows out of a much earlier line of work: a 1988-1989 master's thesis project at The University of Texas at Arlington by Wesley Royce Elsberry on hybrid artificial neural network modelling.
That original system combined multiple architecture families in a single loop:
- a Hopfield-Tank style generator
- a backpropagation-based critic
- an ART-style novelty and category mechanism
- a rule-based instructor and acceptance policy around them
The important idea was not just that neural networks could be used for a task, but that unlike neural systems could be made to cooperate, constrain one another, and contribute different functional roles within a larger process.
## Why A Separate Repository
The thesis reconstruction and Python port made the historical system accessible again, but it also clarified that the deeper contribution was architectural rather than domain-bound. The composition project is one concrete application of a broader pattern:
- heterogeneous neural components
- explicit orchestration
- inspectable intermediate states
- sequential acceptance and rejection loops
- evaluation beyond raw fitting or classification
Synaptopus exists to make that broader pattern the primary subject.
## Relationship To TriuneCadence
TriuneCadence is the thesis-focused reconstruction: historically grounded, composition-centered, and intentionally close to the original hybrid system.
Synaptopus is the broader framework direction: a place where reusable architecture interfaces, generic implementations, educational tools, and new multi-architecture experiments can live without being tied to one historical task.
In short:
- TriuneCadence is one important exemplar
- Synaptopus is the larger lab
## Intended Future
Over time, Synaptopus may include:
- generic architecture families beyond the original three
- additional domains beyond music
- execution graphs and visual workbenches
- browser-based and pedagogical interfaces
- experiment tracing, timing, and information-theoretic analysis
The aim is to support both serious experimentation and explanation: a system that can be used to build artificial neural systems and to teach how they work together.

docs/ROADMAP.md Normal file

@@ -0,0 +1,78 @@
# Roadmap
## Overall Direction
Synaptopus is intended to become a multi-architecture artificial neural systems lab that supports:
- reusable architecture families
- hybrid execution across unlike systems
- inspectable traces for pedagogy and research
- graph-oriented tooling
- browser-based experimentation
The project should remain useful even if no single architecture family dominates it.
## Current State
The repository already contains:
- generic runtime and trace primitives
- component-role protocols and cooperative orchestration
- information-theoretic sequence analysis
- generic reporting helpers
- graph schema and trace serialization
- multilayer backpropagation
- ART1
- Hopfield-style dynamics and generic Hopfield matrix preparation
- a small XOR novelty demo combining backpropagation and ART1
- a richer parity-pressure demo combining backpropagation and ART1 under category pressure
- a demo exporter that can emit artifacts for multiple internal demos
- first-pass checkpoint/resume snapshots for the internal demos
This is the first point at which Synaptopus is more than a scaffold.
## Near Term
- Extend checkpoint/resume beyond internal demos toward a generic snapshot contract
- Add explicit RNG-state capture where demo behavior is stochastic at runtime
- Expose snapshot artifacts more directly in the browser-side tooling
- Document recommended conventions for state, candidate, metadata, and mutable model serialization
## Mid Term
- Introduce domain adapters as examples rather than as the center of the framework
- Add experiment runners that generate comparable reports across parameter sweeps
- Add more robust trace viewers and summarized execution statistics
- Build a TypeScript mirror of the graph schema and trace model
- Prototype a browser-based workbench that can visualize execution traces and graph structure
## Longer Term
- Support richer loop and controller semantics in the graph layer
- Add pedagogical views for stepwise inspection of network behavior
- Expand architecture coverage beyond the historically reconstructed families
- Allow the same execution concepts to span music, classification, toy planning, and other problem domains
- Support saved sessions and replayable teaching demonstrations
## Design Constraints
Several constraints should remain stable as the repository grows:
- generic code should be preferred over thesis-specific code
- architecture families should remain explicit rather than hidden behind one opaque abstraction
- graph tooling should reflect execution semantics rather than invent a separate model
- serialization should stay JSON-friendly for browser consumption
- pedagogy should be treated as a first-class use case, not an afterthought
## Relationship To TriuneCadence
TriuneCadence remains the historically grounded exemplar and compatibility reference for the thesis-derived hybrid composition system.
Synaptopus should borrow generic, reusable pieces from that work, but should not become tied to one domain, one historical artifact set, or one architecture trio.
## Concrete Next Milestones
1. Generalize snapshot/resume beyond the built-in demos.
2. Extend the TypeScript-side contracts to cover snapshot artifacts explicitly.
3. Teach the browser tooling to inspect checkpoint contents and resume lineage.
4. Add a more complex mixed-family example with stronger controller semantics than the current parity-pressure demo.

pyproject.toml Normal file

@@ -0,0 +1,22 @@
[build-system]
requires = ["setuptools>=68"]
build-backend = "setuptools.build_meta"

[project]
name = "synaptopus"
version = "0.1.0"
description = "A multi-architecture artificial neural systems lab for composing, comparing, and teaching interacting network models."
readme = "README.md"
requires-python = ">=3.10"
authors = [
    { name = "Wesley Royce Elsberry" }
]

[project.scripts]
synaptopus-demo-export = "synaptopus.demo_export:main"

[tool.setuptools]
package-dir = {"" = "src"}

[tool.setuptools.packages.find]
where = ["src"]

src/synaptopus/__init__.py Normal file

@@ -0,0 +1,161 @@
"""Synaptopus: a multi-architecture artificial neural systems lab."""
from .analysis import SequenceAnalysis, analyze_sequence, first_order_conditional_entropy, shannon_entropy
from .artifacts import (
ARTIFACT_SCHEMA_VERSION,
ArtifactEnvelope,
ArtifactManifest,
ArtifactManifestEntry,
make_artifact_envelope,
save_artifact_json,
save_manifest_json,
)
from .art1 import ART1Category, ART1Network, ART1Params, ART1Result
from .architectures import (
AcceptancePolicy,
Categorizer,
Critic,
Generator,
PolicyDecision,
StateTransition,
)
from .backprop import BackpropLayerState, BackpropNetwork, BackpropResult
from .demo_registry import available_demo_names, get_demo_definition
from .examples import (
ParityPressureState,
XorDemoState,
build_parity_pressure_demo,
build_xor_novelty_demo,
)
from .graph import (
FunctionalNode,
GraphEdgeSpec,
GraphNode,
GraphNodeResult,
GraphNodeSpec,
GraphSchema,
GraphValue,
categorizer_node,
critic_node,
generator_node,
policy_node,
)
from .hopfield_build import (
HopfieldGridShape,
accumulate_sequence_transitions,
apply_grid_inhibition,
clear_diagonal,
grid_index,
zero_weight_matrix,
)
from .hopfield import HopfieldNetwork, HopfieldNetworkState, HopfieldParams, HopfieldRunResult
from .orchestration import CooperativeSystem, HybridStepMetadata
from .reporting import save_run_report_json, summarize_execution, summarize_sequence_run
from .runtime import (
ExecutionRecord,
StepTrace,
merge_execution_records,
run_until_acceptance,
run_until_acceptance_count,
)
from .serialization import (
SerializedExecutionRecord,
SerializedStepTrace,
deserialize_execution_record,
deserialize_step_trace,
save_execution_record_json,
serialize_execution_record,
serialize_step_trace,
to_jsonable,
)
from .snapshots import (
DemoSnapshot,
create_demo_snapshot,
load_demo_snapshot_json,
restore_demo_snapshot,
resume_demo_snapshot,
save_demo_snapshot_json,
)
from .types import RunReport
__all__ = [
"AcceptancePolicy",
"ARTIFACT_SCHEMA_VERSION",
"ART1Category",
"ART1Network",
"ART1Params",
"ART1Result",
"ArtifactEnvelope",
"ArtifactManifest",
"ArtifactManifestEntry",
"BackpropLayerState",
"BackpropNetwork",
"BackpropResult",
"Categorizer",
"CooperativeSystem",
"Critic",
"DemoSnapshot",
"ExecutionRecord",
"FunctionalNode",
"GraphEdgeSpec",
"Generator",
"GraphNode",
"GraphNodeResult",
"GraphNodeSpec",
"GraphSchema",
"GraphValue",
"HybridStepMetadata",
"HopfieldGridShape",
"HopfieldNetwork",
"HopfieldNetworkState",
"HopfieldParams",
"HopfieldRunResult",
"PolicyDecision",
"RunReport",
"SequenceAnalysis",
"StateTransition",
"StepTrace",
"ParityPressureState",
"XorDemoState",
"__version__",
"analyze_sequence",
"apply_grid_inhibition",
"build_parity_pressure_demo",
"build_xor_novelty_demo",
"available_demo_names",
"categorizer_node",
"clear_diagonal",
"create_demo_snapshot",
"critic_node",
"deserialize_execution_record",
"deserialize_step_trace",
"first_order_conditional_entropy",
"generator_node",
"get_demo_definition",
"grid_index",
"load_demo_snapshot_json",
"make_artifact_envelope",
"merge_execution_records",
"policy_node",
"restore_demo_snapshot",
"run_until_acceptance",
"run_until_acceptance_count",
"save_artifact_json",
"save_demo_snapshot_json",
"save_run_report_json",
"save_manifest_json",
"save_execution_record_json",
"serialize_execution_record",
"serialize_step_trace",
"shannon_entropy",
"summarize_execution",
"summarize_sequence_run",
"SerializedExecutionRecord",
"SerializedStepTrace",
"to_jsonable",
"resume_demo_snapshot",
"zero_weight_matrix",
"accumulate_sequence_transitions",
]
__version__ = "0.1.0"

src/synaptopus/__main__.py Normal file

@@ -0,0 +1,5 @@
from .demo_export import main

if __name__ == "__main__":
    raise SystemExit(main())

src/synaptopus/analysis.py Normal file

@@ -0,0 +1,66 @@
from __future__ import annotations

from collections import Counter, defaultdict
from dataclasses import dataclass
import math


@dataclass(frozen=True)
class SequenceAnalysis:
    item_count: int
    alphabet_size: int
    unigram_entropy_bits: float
    conditional_entropy_bits: float
    normalized_entropy: float
    predictability: float
    redundancy: float


def shannon_entropy(sequence: tuple[int, ...] | list[int]) -> float:
    if not sequence:
        return 0.0
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((count / total) * math.log2(count / total) for count in counts.values())


def first_order_conditional_entropy(sequence: tuple[int, ...] | list[int]) -> float:
    if len(sequence) < 2:
        return 0.0
    transitions: dict[int, Counter[int]] = defaultdict(Counter)
    source_counts = Counter(sequence[:-1])
    for left, right in zip(sequence[:-1], sequence[1:]):
        transitions[left][right] += 1
    total_transitions = len(sequence) - 1
    entropy = 0.0
    for source, next_counts in transitions.items():
        source_prob = source_counts[source] / total_transitions
        total = sum(next_counts.values())
        source_entropy = -sum(
            (count / total) * math.log2(count / total) for count in next_counts.values()
        )
        entropy += source_prob * source_entropy
    return entropy


def analyze_sequence(
    sequence: tuple[int, ...] | list[int],
    *,
    alphabet_size: int,
) -> SequenceAnalysis:
    values = tuple(int(value) for value in sequence)
    unigram_entropy = shannon_entropy(values)
    conditional_entropy = first_order_conditional_entropy(values)
    max_entropy = math.log2(alphabet_size) if alphabet_size > 1 else 0.0
    normalized_entropy = unigram_entropy / max_entropy if max_entropy else 0.0
    predictability = 1.0 - (conditional_entropy / max_entropy if max_entropy else 0.0)
    redundancy = 1.0 - normalized_entropy
    return SequenceAnalysis(
        item_count=len(values),
        alphabet_size=alphabet_size,
        unigram_entropy_bits=unigram_entropy,
        conditional_entropy_bits=conditional_entropy,
        normalized_entropy=normalized_entropy,
        predictability=predictability,
        redundancy=redundancy,
    )

src/synaptopus/architectures.py Normal file

@ -0,0 +1,54 @@
from __future__ import annotations
from dataclasses import dataclass
from typing import Generic, Protocol, TypeVar
StateT = TypeVar("StateT")
CandidateT = TypeVar("CandidateT")
CritiqueT = TypeVar("CritiqueT")
CategoryT = TypeVar("CategoryT")
@dataclass(frozen=True)
class PolicyDecision:
accepted: bool
label: str = ""
class Generator(Protocol[StateT, CandidateT]):
def generate(self, state: StateT) -> CandidateT:
...
class Critic(Protocol[StateT, CandidateT, CritiqueT]):
def critique(self, state: StateT, candidate: CandidateT) -> CritiqueT:
...
class Categorizer(Protocol[StateT, CandidateT, CategoryT]):
def categorize(self, state: StateT, candidate: CandidateT) -> CategoryT:
...
class AcceptancePolicy(Protocol[StateT, CandidateT, CritiqueT, CategoryT]):
def decide(
self,
state: StateT,
candidate: CandidateT,
critique: CritiqueT,
category: CategoryT,
) -> PolicyDecision:
...
class StateTransition(Protocol[StateT, CandidateT, CritiqueT, CategoryT]):
def advance(
self,
state: StateT,
candidate: CandidateT,
critique: CritiqueT,
category: CategoryT,
decision: PolicyDecision,
) -> StateT:
...
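A minimal sketch of how the protocols above are meant to compose into one generate/critique/categorize/decide/advance step. The concrete classes here are toy stand-ins (`int` state, `float` critique) invented for illustration; `PolicyDecision` is redefined locally so the sketch is self-contained:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyDecision:
    accepted: bool
    label: str = ""

class CountingGenerator:
    def generate(self, state: int) -> int:
        return state + 1  # candidate derived from the current state

class EvenCritic:
    def critique(self, state: int, candidate: int) -> float:
        return 1.0 if candidate % 2 == 0 else 0.0

class ParityCategorizer:
    def categorize(self, state: int, candidate: int) -> str:
        return "even" if candidate % 2 == 0 else "odd"

class ThresholdPolicy:
    def decide(self, state, candidate, critique, category) -> PolicyDecision:
        accepted = critique >= 0.5
        return PolicyDecision(accepted=accepted, label=category)

class IncrementTransition:
    def advance(self, state, candidate, critique, category, decision) -> int:
        return candidate if decision.accepted else state

# One full step of the loop the protocols describe:
state = 1
candidate = CountingGenerator().generate(state)
critique = EvenCritic().critique(state, candidate)
category = ParityCategorizer().categorize(state, candidate)
decision = ThresholdPolicy().decide(state, candidate, critique, category)
state = IncrementTransition().advance(state, candidate, critique, category, decision)
```

Because these are `typing.Protocol`s, the toy classes satisfy them structurally without inheriting from anything.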

src/synaptopus/art1.py
from __future__ import annotations
from dataclasses import dataclass
import json
@dataclass(frozen=True)
class ART1Params:
max_categories: int
input_length: int
vigilance: float = 0.9
initial_bottom_up: float = 0.1
initial_top_down: float = 0.9
vigilance_decay: float = 0.99
@dataclass(frozen=True)
class ART1Category:
bottom_up: tuple[float, ...]
top_down: tuple[float, ...]
committed: bool
@dataclass(frozen=True)
class ART1Result:
winner: int
matched: bool
new_category: bool
delta_vigilance: bool
committed_categories: int
vigilance: float
expected_vector: tuple[int, ...]
class ART1Network:
def __init__(self, params: ART1Params) -> None:
self.params = params
self.vigilance = params.vigilance
self._categories = [
{
"bottom_up": [params.initial_bottom_up] * params.input_length,
"top_down": [params.initial_top_down] * params.input_length,
"committed": False,
}
for _ in range(params.max_categories)
]
@property
def committed_categories(self) -> int:
return sum(1 for category in self._categories if category["committed"])
@property
def categories(self) -> tuple[ART1Category, ...]:
return tuple(
ART1Category(
bottom_up=tuple(category["bottom_up"]),
top_down=tuple(category["top_down"]),
committed=bool(category["committed"]),
)
for category in self._categories
)
def categorize(self, input_vector: tuple[int, ...] | list[int]) -> ART1Result:
vector = tuple(int(value) for value in input_vector)
if len(vector) != self.params.input_length:
raise ValueError(
f"expected input length {self.params.input_length}, got {len(vector)}"
)
eligible = {
index for index, category in enumerate(self._categories) if category["committed"]
}
delta_vigilance = False
while True:
if not eligible:
if self.committed_categories < self.params.max_categories:
winner = self.committed_categories
self._commit_category(winner, vector)
return ART1Result(
winner=winner,
matched=True,
new_category=True,
delta_vigilance=delta_vigilance,
committed_categories=self.committed_categories,
vigilance=self.vigilance,
expected_vector=tuple(vector),
)
self.vigilance *= self.params.vigilance_decay
delta_vigilance = True
eligible = {
index
for index, category in enumerate(self._categories)
if category["committed"]
}
winner = self._choose_winner(vector, eligible)
self._resonate(winner, vector)
expected_vector = self._expected_vector(winner)
return ART1Result(
winner=winner,
matched=True,
new_category=False,
delta_vigilance=True,
committed_categories=self.committed_categories,
vigilance=self.vigilance,
expected_vector=expected_vector,
)
winner = self._choose_winner(vector, eligible)
expected_vector = self._expected_vector(winner)
if self._match(vector, expected_vector):
self._resonate(winner, vector)
return ART1Result(
winner=winner,
matched=True,
new_category=False,
delta_vigilance=delta_vigilance,
committed_categories=self.committed_categories,
vigilance=self.vigilance,
expected_vector=expected_vector,
)
eligible.remove(winner)
def _choose_winner(self, vector: tuple[int, ...], eligible: set[int]) -> int:
best_index = min(eligible)
best_score = float("-inf")
for index in sorted(eligible):
category = self._categories[index]
score = sum(
vector[i] * category["bottom_up"][i]
for i in range(self.params.input_length)
)
if score > best_score:
best_score = score
best_index = index
return best_index
def _expected_vector(self, category_index: int) -> tuple[int, ...]:
top_down = self._categories[category_index]["top_down"]
threshold = sum(top_down) / self.params.input_length
return tuple(1 if value >= threshold else 0 for value in top_down)
def _match(self, vector: tuple[int, ...], expected_vector: tuple[int, ...]) -> bool:
ones_in_input = sum(vector)
raw_match = sum(
1 for left, right in zip(vector, expected_vector) if left == 1 and right == 1
)
        if ones_in_input == 0:
            # raw_match is necessarily 0 when the input has no active bits,
            # so an all-zero input never passes the vigilance test
            return False
return (raw_match / ones_in_input) >= self.vigilance
def _commit_category(self, category_index: int, vector: tuple[int, ...]) -> None:
category = self._categories[category_index]
category["committed"] = True
category["top_down"] = [float(value) for value in vector]
ones = max(1, sum(vector))
category["bottom_up"] = [float(value) / ones for value in vector]
def _resonate(self, category_index: int, vector: tuple[int, ...]) -> None:
category = self._categories[category_index]
intersected = [
1 if category["top_down"][i] >= 0.5 and vector[i] == 1 else 0
for i in range(self.params.input_length)
]
category["top_down"] = [float(value) for value in intersected]
ones = max(1, sum(intersected))
category["bottom_up"] = [float(value) / ones for value in intersected]
def to_dict(self) -> dict[str, object]:
return {
"params": {
"max_categories": self.params.max_categories,
"input_length": self.params.input_length,
"vigilance": self.params.vigilance,
"initial_bottom_up": self.params.initial_bottom_up,
"initial_top_down": self.params.initial_top_down,
"vigilance_decay": self.params.vigilance_decay,
},
"vigilance": self.vigilance,
"categories": self._categories,
}
@classmethod
def from_dict(cls, data: dict[str, object]) -> "ART1Network":
params_data = data["params"] # type: ignore[index]
network = cls(
ART1Params(
max_categories=int(params_data["max_categories"]), # type: ignore[index]
input_length=int(params_data["input_length"]), # type: ignore[index]
vigilance=float(params_data["vigilance"]), # type: ignore[index]
initial_bottom_up=float(params_data["initial_bottom_up"]), # type: ignore[index]
initial_top_down=float(params_data["initial_top_down"]), # type: ignore[index]
vigilance_decay=float(params_data["vigilance_decay"]), # type: ignore[index]
)
)
network.vigilance = float(data["vigilance"])
network._categories = [
{
"bottom_up": [float(value) for value in category["bottom_up"]],
"top_down": [float(value) for value in category["top_down"]],
"committed": bool(category["committed"]),
}
for category in data["categories"] # type: ignore[index]
]
return network
def save_json(self, path: str) -> None:
with open(path, "w", encoding="utf-8") as handle:
json.dump(self.to_dict(), handle, indent=2)
@classmethod
def load_json(cls, path: str) -> "ART1Network":
with open(path, "r", encoding="utf-8") as handle:
data = json.load(handle)
return cls.from_dict(data)
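For binary vectors, the vigilance test in `_match` above reduces to |input AND expected| / |input| >= vigilance. A standalone sketch of that criterion (names chosen here for illustration, not taken from the module):

```python
def vigilance_match(input_vec, expected_vec, vigilance):
    """ART1-style vigilance test: the fraction of the input's active bits
    confirmed by the expected (top-down) vector must reach `vigilance`."""
    ones = sum(input_vec)
    overlap = sum(1 for a, b in zip(input_vec, expected_vec) if a == 1 and b == 1)
    if ones == 0:
        return False  # as in _match: an all-zero input never matches
    return (overlap / ones) >= vigilance

# 1 of 2 active input bits is confirmed -> ratio 0.5, fails at vigilance 0.9
print(vigilance_match((1, 1, 0, 0), (1, 0, 0, 0), 0.9))
# 2 of 2 confirmed -> ratio 1.0, passes
print(vigilance_match((1, 1, 0, 0), (1, 1, 1, 0), 0.9))
```

When the test fails, `categorize` removes the winner from the eligible set and searches again, eventually committing a fresh category or (when full) decaying the vigilance.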

src/synaptopus/artifacts.py
from __future__ import annotations
from dataclasses import asdict, dataclass
from pathlib import Path
import json
from .serialization import to_jsonable
ARTIFACT_SCHEMA_VERSION = "1.0"
@dataclass(frozen=True)
class ArtifactEnvelope:
artifact_type: str
schema_version: str
payload: object
metadata: dict[str, object]
@dataclass(frozen=True)
class ArtifactManifestEntry:
artifact_type: str
file_name: str
@dataclass(frozen=True)
class ArtifactManifest:
schema_version: str
artifacts: tuple[ArtifactManifestEntry, ...]
metadata: dict[str, object]
def make_artifact_envelope(
artifact_type: str,
payload: object,
*,
metadata: dict[str, object] | None = None,
) -> ArtifactEnvelope:
return ArtifactEnvelope(
artifact_type=artifact_type,
schema_version=ARTIFACT_SCHEMA_VERSION,
payload=to_jsonable(payload),
metadata=dict(metadata or {}),
)
def save_artifact_json(
artifact_type: str,
payload: object,
path: str | Path,
*,
metadata: dict[str, object] | None = None,
) -> None:
destination = Path(path)
envelope = make_artifact_envelope(
artifact_type,
payload,
metadata=metadata,
)
destination.write_text(json.dumps(asdict(envelope), indent=2), encoding="utf-8")
def save_manifest_json(manifest: ArtifactManifest, path: str | Path) -> None:
destination = Path(path)
destination.write_text(json.dumps(asdict(manifest), indent=2), encoding="utf-8")
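The envelope above is serialized with `dataclasses.asdict`, so each field becomes a top-level JSON key. A self-contained sketch of the resulting shape (`ArtifactEnvelope` is redefined locally, and the payload/metadata values are invented for illustration):

```python
import json
from dataclasses import asdict, dataclass

@dataclass(frozen=True)
class ArtifactEnvelope:
    artifact_type: str
    schema_version: str
    payload: object
    metadata: dict

envelope = ArtifactEnvelope(
    artifact_type="run_report",
    schema_version="1.0",
    payload={"accepted": 2},
    metadata={"example": "xor_novelty"},
)

# Round-trip through JSON exactly as save_artifact_json does.
text = json.dumps(asdict(envelope), indent=2)
restored = json.loads(text)
print(sorted(restored))  # the four dataclass fields as keys
```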

src/synaptopus/backprop.py
@ -0,0 +1,259 @@
from __future__ import annotations
from dataclasses import dataclass
import json
import math
import random
from typing import Iterable
def sigmoid(x: float) -> float:
    # clamp -x to [-80, 80] before exponentiating so math.exp cannot overflow
    clamped = max(min(-x, 80.0), -80.0)
    return 1.0 / (1.0 + math.exp(clamped))
@dataclass(frozen=True)
class BackpropLayerState:
activations: tuple[float, ...]
deltas: tuple[float, ...]
biases: tuple[float, ...]
@dataclass(frozen=True)
class BackpropResult:
outputs: tuple[float, ...]
loss: float
layer_states: tuple[BackpropLayerState, ...]
class BackpropNetwork:
def __init__(
self,
*,
layer_sizes: tuple[int, ...],
learning_rate: float,
momentum: float,
weights: list[list[list[float]]],
biases: list[list[float]],
) -> None:
if len(layer_sizes) < 2:
raise ValueError("layer_sizes must include at least input and output layers")
if any(size <= 0 for size in layer_sizes):
raise ValueError("all layer sizes must be positive")
if len(weights) != len(layer_sizes) - 1:
raise ValueError("weights must connect each adjacent layer")
if len(biases) != len(layer_sizes) - 1:
raise ValueError("biases must match the number of non-input layers")
self.layer_sizes = layer_sizes
self.learning_rate = learning_rate
self.momentum = momentum
self.weights = weights
self.biases = biases
self.last_weight_updates = [
[[0.0 for _ in neuron] for neuron in layer]
for layer in weights
]
self.last_bias_updates = [
[0.0 for _ in layer]
for layer in biases
]
@property
def input_size(self) -> int:
return self.layer_sizes[0]
@property
def output_size(self) -> int:
return self.layer_sizes[-1]
@property
def hidden_layers(self) -> tuple[int, ...]:
return self.layer_sizes[1:-1]
@classmethod
def random(
cls,
*,
input_size: int,
hidden_layers: tuple[int, ...],
output_size: int,
learning_rate: float = 0.5,
momentum: float = 0.1,
rng: random.Random | None = None,
) -> "BackpropNetwork":
generator = rng or random.Random()
layer_sizes = (input_size, *hidden_layers, output_size)
weights: list[list[list[float]]] = []
biases: list[list[float]] = []
for left_size, right_size in zip(layer_sizes[:-1], layer_sizes[1:]):
weights.append(
[
[generator.uniform(-1.0, 1.0) for _ in range(left_size)]
for _ in range(right_size)
]
)
biases.append([generator.uniform(-0.25, 0.25) for _ in range(right_size)])
return cls(
layer_sizes=layer_sizes,
learning_rate=learning_rate,
momentum=momentum,
weights=weights,
biases=biases,
)
def predict(self, inputs: Iterable[float]) -> BackpropResult:
activations = self._forward(inputs)
layer_states = tuple(
BackpropLayerState(
activations=tuple(layer_activation),
deltas=tuple(0.0 for _ in layer_activation),
biases=tuple(self.biases[layer_index - 1]),
)
for layer_index, layer_activation in enumerate(activations[1:], start=1)
)
return BackpropResult(
outputs=tuple(activations[-1]),
loss=0.0,
layer_states=layer_states,
)
def train_step(self, inputs: Iterable[float], targets: Iterable[float]) -> BackpropResult:
input_values = tuple(float(value) for value in inputs)
target_values = tuple(float(value) for value in targets)
if len(input_values) != self.input_size:
raise ValueError(f"expected {self.input_size} inputs, got {len(input_values)}")
if len(target_values) != self.output_size:
raise ValueError(f"expected {self.output_size} targets, got {len(target_values)}")
activations = self._forward(input_values)
deltas: list[list[float]] = [
[0.0 for _ in range(size)]
for size in self.layer_sizes[1:]
]
output_activations = activations[-1]
output_deltas: list[float] = []
losses: list[float] = []
for activation, target in zip(output_activations, target_values):
error = target - activation
losses.append(0.5 * error * error)
output_deltas.append(error * activation * (1.0 - activation))
deltas[-1] = output_deltas
for layer_index in range(len(deltas) - 2, -1, -1):
current_activations = activations[layer_index + 1]
next_weights = self.weights[layer_index + 1]
next_deltas = deltas[layer_index + 1]
current_deltas: list[float] = []
for neuron_index, activation in enumerate(current_activations):
downstream = 0.0
for next_neuron_index, next_delta in enumerate(next_deltas):
downstream += next_delta * next_weights[next_neuron_index][neuron_index]
current_deltas.append(activation * (1.0 - activation) * downstream)
deltas[layer_index] = current_deltas
for layer_index, (layer_weights, layer_biases) in enumerate(zip(self.weights, self.biases)):
source_activations = activations[layer_index]
layer_deltas = deltas[layer_index]
for neuron_index in range(len(layer_weights)):
bias_update = (
self.learning_rate * layer_deltas[neuron_index]
+ self.momentum * self.last_bias_updates[layer_index][neuron_index]
)
self.last_bias_updates[layer_index][neuron_index] = bias_update
layer_biases[neuron_index] += bias_update
for source_index in range(len(source_activations)):
update = (
self.learning_rate
* layer_deltas[neuron_index]
* source_activations[source_index]
)
update += (
self.momentum
* self.last_weight_updates[layer_index][neuron_index][source_index]
)
self.last_weight_updates[layer_index][neuron_index][source_index] = update
layer_weights[neuron_index][source_index] += update
layer_states = tuple(
BackpropLayerState(
activations=tuple(activations[layer_index + 1]),
deltas=tuple(deltas[layer_index]),
biases=tuple(self.biases[layer_index]),
)
for layer_index in range(len(deltas))
)
return BackpropResult(
outputs=tuple(activations[-1]),
loss=sum(losses),
layer_states=layer_states,
)
def _forward(self, inputs: Iterable[float]) -> list[list[float]]:
input_values = tuple(float(value) for value in inputs)
if len(input_values) != self.input_size:
raise ValueError(f"expected {self.input_size} inputs, got {len(input_values)}")
activations: list[list[float]] = [list(input_values)]
current = list(input_values)
for layer_weights, layer_biases in zip(self.weights, self.biases):
next_values: list[float] = []
for neuron_weights, bias in zip(layer_weights, layer_biases):
total = sum(weight * value for weight, value in zip(neuron_weights, current)) + bias
next_values.append(sigmoid(total))
activations.append(next_values)
current = next_values
return activations
def to_dict(self) -> dict[str, object]:
return {
"layer_sizes": list(self.layer_sizes),
"learning_rate": self.learning_rate,
"momentum": self.momentum,
"weights": self.weights,
"biases": self.biases,
"last_weight_updates": self.last_weight_updates,
"last_bias_updates": self.last_bias_updates,
}
@classmethod
def from_dict(cls, data: dict[str, object]) -> "BackpropNetwork":
network = cls(
layer_sizes=tuple(int(value) for value in data["layer_sizes"]), # type: ignore[index]
learning_rate=float(data["learning_rate"]),
momentum=float(data["momentum"]),
weights=[
[
[float(weight) for weight in neuron]
for neuron in layer
]
for layer in data["weights"] # type: ignore[index]
],
biases=[
[float(bias) for bias in layer]
for layer in data["biases"] # type: ignore[index]
],
)
network.last_weight_updates = [
[
[float(weight) for weight in neuron]
for neuron in layer
]
for layer in data.get("last_weight_updates", network.last_weight_updates) # type: ignore[arg-type]
]
network.last_bias_updates = [
[float(bias) for bias in layer]
for layer in data.get("last_bias_updates", network.last_bias_updates) # type: ignore[arg-type]
]
return network
def save_json(self, path: str) -> None:
with open(path, "w", encoding="utf-8") as handle:
json.dump(self.to_dict(), handle, indent=2)
@classmethod
def load_json(cls, path: str) -> "BackpropNetwork":
with open(path, "r", encoding="utf-8") as handle:
return cls.from_dict(json.load(handle))
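A standalone sketch of the output-layer update rule used in `train_step` above: `error * activation * (1 - activation)` is the delta for a sigmoid unit under squared error. One neuron, one weight, no momentum; the specific values are chosen only for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

weight, bias = 0.5, 0.0
x, target = 1.0, 1.0
learning_rate = 0.5

activation = sigmoid(weight * x + bias)
error = target - activation
delta = error * activation * (1.0 - activation)   # as in train_step
weight += learning_rate * delta * x               # weight update
bias += learning_rate * delta                     # bias update

# After one step the squared-error loss on the same example shrinks.
new_error = target - sigmoid(weight * x + bias)
print(0.5 * new_error ** 2 < 0.5 * error ** 2)
```

The full network applies the same rule per layer, backpropagating each neuron's delta through the downstream weights and blending in the previous update scaled by `momentum`.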

(new file, path not shown)
from __future__ import annotations
from pathlib import Path
import argparse
import json
from .artifacts import (
ARTIFACT_SCHEMA_VERSION,
ArtifactManifest,
ArtifactManifestEntry,
save_manifest_json,
)
from .demo_registry import available_demo_names, get_demo_definition
from .graph import GraphEdgeSpec, GraphSchema, categorizer_node, critic_node, generator_node, policy_node
from .reporting import save_run_report_json, summarize_sequence_run
from .runtime import run_until_acceptance_count
from .serialization import save_execution_record_json
from .snapshots import create_demo_snapshot, resume_demo_snapshot, save_demo_snapshot_json
def _graph_schema_for_system(system: object) -> GraphSchema:
graph_nodes = (
generator_node("generator", system.generator), # type: ignore[attr-defined]
critic_node("critic", system.critic), # type: ignore[attr-defined]
categorizer_node("categorizer", system.categorizer), # type: ignore[attr-defined]
policy_node("policy", system.policy), # type: ignore[attr-defined]
)
return GraphSchema(
nodes=tuple(node.spec() for node in graph_nodes),
edges=(
GraphEdgeSpec("generator", "candidate", "critic", "candidate"),
GraphEdgeSpec("generator", "candidate", "categorizer", "candidate"),
GraphEdgeSpec("generator", "candidate", "policy", "candidate"),
GraphEdgeSpec("critic", "critique", "policy", "critique"),
GraphEdgeSpec("categorizer", "category", "policy", "category"),
),
)
def export_demo_artifacts(
output_dir: str | Path,
*,
demo_name: str,
accepted_count: int = 2,
max_attempts_per_accept: int = 4,
snapshot_after_accepted: int | None = None,
) -> dict[str, Path]:
definition = get_demo_definition(demo_name)
destination = Path(output_dir)
destination.mkdir(parents=True, exist_ok=True)
system = definition.build_system()
graph_path = destination / "graph.json"
trace_path = destination / "trace.json"
report_path = destination / "report.json"
manifest_path = destination / "manifest.json"
snapshot_path = destination / "snapshot.json"
manifest_entries = [
ArtifactManifestEntry("graph_schema", graph_path.name),
ArtifactManifestEntry("execution_trace", trace_path.name),
ArtifactManifestEntry("run_report", report_path.name),
]
if snapshot_after_accepted is None:
record = run_until_acceptance_count(
system,
definition.initial_state,
accepted_count=accepted_count,
max_attempts_per_accept=max_attempts_per_accept,
)
else:
if snapshot_after_accepted < 0:
raise ValueError("snapshot_after_accepted must be non-negative")
if snapshot_after_accepted > accepted_count:
raise ValueError("snapshot_after_accepted cannot exceed accepted_count")
partial_record = run_until_acceptance_count(
system,
definition.initial_state,
accepted_count=snapshot_after_accepted,
max_attempts_per_accept=max_attempts_per_accept,
)
snapshot = create_demo_snapshot(
demo_name,
system=system,
record=partial_record,
parameters={
"accepted_count": snapshot_after_accepted,
"max_attempts_per_accept": max_attempts_per_accept,
},
)
save_demo_snapshot_json(snapshot, snapshot_path)
manifest_entries.append(ArtifactManifestEntry("demo_snapshot", snapshot_path.name))
if snapshot_after_accepted == accepted_count:
record = partial_record
else:
system, record = resume_demo_snapshot(
snapshot,
additional_accepted_count=accepted_count - snapshot_after_accepted,
max_attempts_per_accept=max_attempts_per_accept,
)
_graph_schema_for_system(system).save_json(graph_path)
save_execution_record_json(record, trace_path)
report = summarize_sequence_run(
record,
sequence_getter=definition.sequence_getter,
alphabet_size=definition.alphabet_size,
parameters={
"example": definition.name,
"accepted_count": accepted_count,
"max_attempts_per_accept": max_attempts_per_accept,
},
)
save_run_report_json(report, report_path)
save_manifest_json(
ArtifactManifest(
schema_version=ARTIFACT_SCHEMA_VERSION,
artifacts=tuple(manifest_entries),
metadata={
"example": definition.name,
"accepted_count": accepted_count,
"max_attempts_per_accept": max_attempts_per_accept,
"snapshot_after_accepted": snapshot_after_accepted,
},
),
manifest_path,
)
artifacts = {
"graph": graph_path,
"trace": trace_path,
"report": report_path,
"manifest": manifest_path,
}
if snapshot_after_accepted is not None:
artifacts["snapshot"] = snapshot_path
return artifacts
def export_xor_demo_artifacts(
output_dir: str | Path,
*,
accepted_count: int = 2,
max_attempts_per_accept: int = 4,
) -> dict[str, Path]:
return export_demo_artifacts(
output_dir,
demo_name="xor_novelty",
accepted_count=accepted_count,
max_attempts_per_accept=max_attempts_per_accept,
)
def main(argv: list[str] | None = None) -> int:
parser = argparse.ArgumentParser(
prog="synaptopus-demo-export",
description="Export graph, trace, and report artifacts for a Synaptopus internal demo.",
)
parser.add_argument("output_dir")
parser.add_argument(
"--demo",
default="xor_novelty",
choices=available_demo_names(),
)
parser.add_argument("--accepted-count", type=int, default=2)
parser.add_argument("--max-attempts-per-accept", type=int, default=4)
parser.add_argument("--snapshot-after-accepted", type=int)
args = parser.parse_args(argv)
artifacts = export_demo_artifacts(
args.output_dir,
demo_name=args.demo,
accepted_count=args.accepted_count,
max_attempts_per_accept=args.max_attempts_per_accept,
snapshot_after_accepted=args.snapshot_after_accepted,
)
print(json.dumps({name: str(path) for name, path in artifacts.items()}, indent=2))
return 0
if __name__ == "__main__":
raise SystemExit(main())
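A standalone reconstruction of the argument interface defined in `main` above. The demo names are hardcoded here from the registry in this commit; the real parser derives its `choices` from `available_demo_names()`:

```python
import argparse

parser = argparse.ArgumentParser(prog="synaptopus-demo-export")
parser.add_argument("output_dir")
parser.add_argument(
    "--demo",
    default="xor_novelty",
    choices=("parity_pressure", "xor_novelty"),
)
parser.add_argument("--accepted-count", type=int, default=2)
parser.add_argument("--max-attempts-per-accept", type=int, default=4)
parser.add_argument("--snapshot-after-accepted", type=int)

args = parser.parse_args(["out", "--demo", "parity_pressure", "--accepted-count", "3"])
# --snapshot-after-accepted is optional and defaults to None,
# which selects the single-run branch in export_demo_artifacts.
print(args.demo, args.accepted_count, args.snapshot_after_accepted)
```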

src/synaptopus/demo_registry.py
from __future__ import annotations
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar
from .architectures import PolicyDecision
from .art1 import ART1Network, ART1Result
from .backprop import BackpropLayerState, BackpropNetwork, BackpropResult
from .examples import (
BackpropParityCritic,
BackpropXorCritic,
BinaryPairCategorizer,
BinaryTripleCategorizer,
CyclicBinaryGenerator,
CyclicParityGenerator,
ParityPressurePolicy,
ParityPressureState,
ParityPressureTransition,
XorDemoState,
XorDemoTransition,
XorNoveltyPolicy,
build_parity_pressure_demo,
build_xor_novelty_demo,
)
from .orchestration import CooperativeSystem, HybridStepMetadata
from .serialization import to_jsonable
StateT = TypeVar("StateT")
CandidateT = TypeVar("CandidateT")
MetadataT = TypeVar("MetadataT")
BinaryPair = tuple[int, int]
BinaryTriple = tuple[int, int, int]
@dataclass(frozen=True)
class DemoDefinition(Generic[StateT, CandidateT, MetadataT]):
name: str
build_system: Callable[[], object]
initial_state: StateT
alphabet_size: int
sequence_getter: Callable[[object], list[int]]
state_encoder: Callable[[StateT], object]
state_decoder: Callable[[object], StateT]
candidate_decoder: Callable[[object], CandidateT]
metadata_decoder: Callable[[object | None], MetadataT | None]
system_encoder: Callable[[object], dict[str, object]]
system_decoder: Callable[[dict[str, object]], object]
def _binary_pair_sequence(record: object) -> list[int]:
return [left * 2 + right for left, right in record.final_state.accepted] # type: ignore[attr-defined]
def _binary_triple_sequence(record: object) -> list[int]:
return [
left * 4 + middle * 2 + right
for left, middle, right in record.final_state.accepted # type: ignore[attr-defined]
]
def _decode_xor_state(data: object) -> XorDemoState:
payload = data # type: ignore[assignment]
return XorDemoState(
accepted=tuple(tuple(int(value) for value in pair) for pair in payload["accepted"]), # type: ignore[index]
attempts=int(payload["attempts"]), # type: ignore[index]
)
def _decode_parity_state(data: object) -> ParityPressureState:
payload = data # type: ignore[assignment]
return ParityPressureState(
accepted=tuple(
tuple(int(value) for value in triple)
for triple in payload["accepted"] # type: ignore[index]
),
attempts=int(payload["attempts"]), # type: ignore[index]
)
def _decode_binary_pair(data: object) -> BinaryPair:
left, right = data # type: ignore[misc]
return (int(left), int(right))
def _decode_binary_triple(data: object) -> BinaryTriple:
left, middle, right = data # type: ignore[misc]
return (int(left), int(middle), int(right))
def _decode_backprop_result(data: object) -> BackpropResult:
payload = data # type: ignore[assignment]
return BackpropResult(
outputs=tuple(float(value) for value in payload["outputs"]), # type: ignore[index]
loss=float(payload["loss"]), # type: ignore[index]
layer_states=tuple(
BackpropLayerState(
activations=tuple(float(value) for value in layer_state["activations"]),
deltas=tuple(float(value) for value in layer_state["deltas"]),
biases=tuple(float(value) for value in layer_state["biases"]),
)
for layer_state in payload["layer_states"] # type: ignore[index]
),
)
def _decode_art1_result(data: object) -> ART1Result:
payload = data # type: ignore[assignment]
return ART1Result(
winner=int(payload["winner"]), # type: ignore[index]
matched=bool(payload["matched"]), # type: ignore[index]
new_category=bool(payload["new_category"]), # type: ignore[index]
delta_vigilance=bool(payload["delta_vigilance"]), # type: ignore[index]
committed_categories=int(payload["committed_categories"]), # type: ignore[index]
vigilance=float(payload["vigilance"]), # type: ignore[index]
expected_vector=tuple(int(value) for value in payload["expected_vector"]), # type: ignore[index]
)
def _decode_policy_decision(data: object) -> PolicyDecision:
payload = data # type: ignore[assignment]
return PolicyDecision(
accepted=bool(payload["accepted"]), # type: ignore[index]
label=str(payload["label"]), # type: ignore[index]
)
def _decode_hybrid_metadata(data: object | None) -> HybridStepMetadata[BackpropResult, ART1Result] | None:
if data is None:
return None
payload = data # type: ignore[assignment]
return HybridStepMetadata(
critique=_decode_backprop_result(payload["critique"]), # type: ignore[index]
category=_decode_art1_result(payload["category"]), # type: ignore[index]
decision=_decode_policy_decision(payload["decision"]), # type: ignore[index]
)
def _encode_xor_system(system: object) -> dict[str, object]:
current = system # type: ignore[assignment]
return {
"critic_network": current.critic.network.to_dict(),
"categorizer_network": current.categorizer.network.to_dict(),
"acceptance_threshold": current.policy.acceptance_threshold,
}
def _decode_xor_system(data: dict[str, object]) -> CooperativeSystem[
XorDemoState,
BinaryPair,
BackpropResult,
ART1Result,
]:
return CooperativeSystem(
generator=CyclicBinaryGenerator(),
critic=BackpropXorCritic(BackpropNetwork.from_dict(data["critic_network"])), # type: ignore[arg-type]
categorizer=BinaryPairCategorizer(ART1Network.from_dict(data["categorizer_network"])), # type: ignore[arg-type]
policy=XorNoveltyPolicy(acceptance_threshold=float(data["acceptance_threshold"])),
transition=XorDemoTransition(),
)
def _encode_parity_system(system: object) -> dict[str, object]:
current = system # type: ignore[assignment]
return {
"critic_network": current.critic.network.to_dict(),
"categorizer_network": current.categorizer.network.to_dict(),
"acceptance_threshold": current.policy.acceptance_threshold,
}
def _decode_parity_system(data: dict[str, object]) -> CooperativeSystem[
ParityPressureState,
BinaryTriple,
BackpropResult,
ART1Result,
]:
return CooperativeSystem(
generator=CyclicParityGenerator(),
critic=BackpropParityCritic(BackpropNetwork.from_dict(data["critic_network"])), # type: ignore[arg-type]
categorizer=BinaryTripleCategorizer(ART1Network.from_dict(data["categorizer_network"])), # type: ignore[arg-type]
policy=ParityPressurePolicy(acceptance_threshold=float(data["acceptance_threshold"])),
transition=ParityPressureTransition(),
)
DEMO_DEFINITIONS: dict[str, DemoDefinition[object, object, object]] = {
"xor_novelty": DemoDefinition(
name="xor_novelty",
build_system=build_xor_novelty_demo,
initial_state=XorDemoState(),
alphabet_size=4,
sequence_getter=_binary_pair_sequence,
state_encoder=to_jsonable,
state_decoder=_decode_xor_state,
candidate_decoder=_decode_binary_pair,
metadata_decoder=_decode_hybrid_metadata,
system_encoder=_encode_xor_system,
system_decoder=_decode_xor_system,
),
"parity_pressure": DemoDefinition(
name="parity_pressure",
build_system=build_parity_pressure_demo,
initial_state=ParityPressureState(),
alphabet_size=8,
sequence_getter=_binary_triple_sequence,
state_encoder=to_jsonable,
state_decoder=_decode_parity_state,
candidate_decoder=_decode_binary_triple,
metadata_decoder=_decode_hybrid_metadata,
system_encoder=_encode_parity_system,
system_decoder=_decode_parity_system,
),
}
def available_demo_names() -> tuple[str, ...]:
return tuple(sorted(DEMO_DEFINITIONS))
def get_demo_definition(name: str) -> DemoDefinition[object, object, object]:
try:
return DEMO_DEFINITIONS[name]
except KeyError as exc:
raise ValueError(f"unknown demo_name {name!r}") from exc
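`get_demo_definition` above follows the common pattern of converting a registry `KeyError` into a `ValueError` carrying the offending name. A minimal standalone version (placeholder `object()` values stand in for `DemoDefinition` instances):

```python
REGISTRY = {"xor_novelty": object(), "parity_pressure": object()}

def get_definition(name: str):
    try:
        return REGISTRY[name]
    except KeyError as exc:
        # chain the KeyError so the original lookup failure stays visible
        raise ValueError(f"unknown demo_name {name!r}") from exc

print(sorted(REGISTRY))  # sorted names, as in available_demo_names()
try:
    get_definition("missing")
except ValueError as err:
    print(err)  # unknown demo_name 'missing'
```

Chaining with `from exc` preserves the underlying `KeyError` in the traceback while presenting callers a domain-level error type.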

src/synaptopus/examples.py
from __future__ import annotations
from dataclasses import dataclass
import random
from .architectures import PolicyDecision
from .art1 import ART1Network, ART1Params, ART1Result
from .backprop import BackpropNetwork, BackpropResult
from .orchestration import CooperativeSystem
BinaryPair = tuple[int, int]
BinaryTriple = tuple[int, int, int]
@dataclass(frozen=True)
class XorDemoState:
accepted: tuple[BinaryPair, ...] = ()
attempts: int = 0
class CyclicBinaryGenerator:
_patterns: tuple[BinaryPair, ...] = (
(0, 0),
(0, 1),
(1, 0),
(1, 1),
)
def generate(self, state: XorDemoState) -> BinaryPair:
return self._patterns[state.attempts % len(self._patterns)]
class BackpropXorCritic:
def __init__(self, network: BackpropNetwork) -> None:
self.network = network
def critique(self, state: XorDemoState, candidate: BinaryPair) -> BackpropResult:
return self.network.predict(tuple(float(value) for value in candidate))
class BinaryPairCategorizer:
def __init__(self, network: ART1Network) -> None:
self.network = network
def categorize(self, state: XorDemoState, candidate: BinaryPair) -> ART1Result:
return self.network.categorize(candidate)
class XorNoveltyPolicy:
def __init__(self, *, acceptance_threshold: float = 0.8) -> None:
self.acceptance_threshold = acceptance_threshold
def decide(
self,
state: XorDemoState,
candidate: BinaryPair,
critique: BackpropResult,
category: ART1Result,
) -> PolicyDecision:
score = critique.outputs[0]
accepted = (
score >= self.acceptance_threshold
and category.matched
and not category.delta_vigilance
)
return PolicyDecision(
accepted=accepted,
label="accept" if accepted else "reject",
)
class XorDemoTransition:
def advance(
self,
state: XorDemoState,
candidate: BinaryPair,
critique: BackpropResult,
category: ART1Result,
decision: PolicyDecision,
) -> XorDemoState:
if decision.accepted:
return XorDemoState(
accepted=state.accepted + (candidate,),
attempts=state.attempts + 1,
)
return XorDemoState(
accepted=state.accepted,
attempts=state.attempts + 1,
)
def build_xor_novelty_demo(
*,
rng_seed: int = 11,
acceptance_threshold: float = 0.8,
art_vigilance: float = 0.9,
) -> CooperativeSystem[XorDemoState, BinaryPair, BackpropResult, ART1Result]:
network = BackpropNetwork.random(
input_size=2,
hidden_layers=(4, 4),
output_size=1,
learning_rate=0.8,
momentum=0.2,
rng=random.Random(rng_seed),
)
samples = (
((0.0, 0.0), (0.0,)),
((0.0, 1.0), (1.0,)),
((1.0, 0.0), (1.0,)),
((1.0, 1.0), (0.0,)),
)
for _ in range(6000):
for inputs, targets in samples:
network.train_step(inputs, targets)
art = ART1Network(
params=ART1Params(
max_categories=4,
input_length=2,
vigilance=art_vigilance,
)
)
return CooperativeSystem(
generator=CyclicBinaryGenerator(),
critic=BackpropXorCritic(network),
categorizer=BinaryPairCategorizer(art),
policy=XorNoveltyPolicy(acceptance_threshold=acceptance_threshold),
transition=XorDemoTransition(),
)
@dataclass(frozen=True)
class ParityPressureState:
accepted: tuple[BinaryTriple, ...] = ()
attempts: int = 0
class CyclicParityGenerator:
_patterns: tuple[BinaryTriple, ...] = (
(0, 0, 0),
(0, 0, 1),
(0, 1, 0),
(0, 1, 1),
(1, 0, 0),
(1, 0, 1),
(1, 1, 0),
(1, 1, 1),
)
def generate(self, state: ParityPressureState) -> BinaryTriple:
return self._patterns[state.attempts % len(self._patterns)]
class BackpropParityCritic:
def __init__(self, network: BackpropNetwork) -> None:
self.network = network
def critique(self, state: ParityPressureState, candidate: BinaryTriple) -> BackpropResult:
return self.network.predict(tuple(float(value) for value in candidate))
class BinaryTripleCategorizer:
def __init__(self, network: ART1Network) -> None:
self.network = network
def categorize(self, state: ParityPressureState, candidate: BinaryTriple) -> ART1Result:
encoded = [0] * 8
encoded[(candidate[0] * 4) + (candidate[1] * 2) + candidate[2]] = 1
return self.network.categorize(encoded)
class ParityPressurePolicy:
def __init__(self, *, acceptance_threshold: float = 0.8) -> None:
self.acceptance_threshold = acceptance_threshold
def decide(
self,
state: ParityPressureState,
candidate: BinaryTriple,
critique: BackpropResult,
category: ART1Result,
) -> PolicyDecision:
score = critique.outputs[0]
accepted = (
score >= self.acceptance_threshold
and category.matched
and not category.delta_vigilance
)
return PolicyDecision(
accepted=accepted,
label="accept" if accepted else "reject",
)
class ParityPressureTransition:
def advance(
self,
state: ParityPressureState,
candidate: BinaryTriple,
critique: BackpropResult,
category: ART1Result,
decision: PolicyDecision,
) -> ParityPressureState:
if decision.accepted:
return ParityPressureState(
accepted=state.accepted + (candidate,),
attempts=state.attempts + 1,
)
return ParityPressureState(
accepted=state.accepted,
attempts=state.attempts + 1,
)
def build_parity_pressure_demo(
*,
rng_seed: int = 21,
acceptance_threshold: float = 0.8,
art_vigilance: float = 0.95,
art_max_categories: int = 2,
) -> CooperativeSystem[ParityPressureState, BinaryTriple, BackpropResult, ART1Result]:
network = BackpropNetwork.random(
input_size=3,
hidden_layers=(6, 4),
output_size=1,
learning_rate=0.8,
momentum=0.2,
rng=random.Random(rng_seed),
)
samples = tuple(
(
tuple(float(bit) for bit in bits),
(float(sum(bits) % 2),),
)
for bits in CyclicParityGenerator._patterns
)
for _ in range(8000):
for inputs, targets in samples:
network.train_step(inputs, targets)
art = ART1Network(
params=ART1Params(
max_categories=art_max_categories,
input_length=8,
vigilance=art_vigilance,
)
)
return CooperativeSystem(
generator=CyclicParityGenerator(),
critic=BackpropParityCritic(network),
categorizer=BinaryTripleCategorizer(art),
policy=ParityPressurePolicy(acceptance_threshold=acceptance_threshold),
transition=ParityPressureTransition(),
)
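The two demo builders above lean on a couple of small encodings: the parity target is `sum(bits) % 2`, and `BinaryTripleCategorizer` one-hot encodes a 3-bit pattern into a length-8 vector before handing it to ART1. A self-contained sketch (the helper names here are illustrative, not part of the library):

```python
def parity_target(bits: tuple[int, int, int]) -> float:
    # parity label used to train the backprop critic
    return float(sum(bits) % 2)

def one_hot_triple(bits: tuple[int, int, int]) -> list[int]:
    # same indexing as BinaryTripleCategorizer: treat the bits as a base-2 number
    encoded = [0] * 8
    encoded[(bits[0] * 4) + (bits[1] * 2) + bits[2]] = 1
    return encoded

assert parity_target((1, 0, 1)) == 0.0
assert one_hot_triple((1, 0, 1)) == [0, 0, 0, 0, 0, 1, 0, 0]
```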

src/synaptopus/graph.py (new file)
@@ -0,0 +1,192 @@
from __future__ import annotations
from dataclasses import asdict, dataclass
from pathlib import Path
from typing import Callable, Generic, Mapping, Protocol, TypeVar
from .architectures import AcceptancePolicy, Categorizer, Critic, Generator
StateT = TypeVar("StateT")
CandidateT = TypeVar("CandidateT")
CritiqueT = TypeVar("CritiqueT")
CategoryT = TypeVar("CategoryT")
@dataclass(frozen=True)
class GraphValue:
value: object
kind: str
@dataclass(frozen=True)
class GraphNodeResult:
outputs: dict[str, GraphValue]
@dataclass(frozen=True)
class GraphNodeSpec:
node_id: str
node_type: str
input_names: tuple[str, ...]
output_names: tuple[str, ...]
@dataclass(frozen=True)
class GraphEdgeSpec:
source_node_id: str
source_output: str
target_node_id: str
target_input: str
@dataclass(frozen=True)
class GraphSchema:
nodes: tuple[GraphNodeSpec, ...]
edges: tuple[GraphEdgeSpec, ...]
def to_dict(self) -> dict[str, object]:
return asdict(self)
def save_json(self, path: str | Path) -> None:
from .artifacts import save_artifact_json
save_artifact_json("graph_schema", self, path)
class GraphNode(Protocol):
node_id: str
node_type: str
input_names: tuple[str, ...]
output_names: tuple[str, ...]
def run(self, inputs: Mapping[str, object]) -> GraphNodeResult:
...
def spec(self) -> GraphNodeSpec:
...
class FunctionalNode:
def __init__(
self,
*,
node_id: str,
node_type: str,
input_names: tuple[str, ...],
output_names: tuple[str, ...],
fn: Callable[[Mapping[str, object]], GraphNodeResult],
) -> None:
self.node_id = node_id
self.node_type = node_type
self.input_names = input_names
self.output_names = output_names
self._fn = fn
def run(self, inputs: Mapping[str, object]) -> GraphNodeResult:
return self._fn(inputs)
def spec(self) -> GraphNodeSpec:
return GraphNodeSpec(
node_id=self.node_id,
node_type=self.node_type,
input_names=self.input_names,
output_names=self.output_names,
)
def generator_node(
node_id: str,
generator: Generator[StateT, CandidateT],
) -> FunctionalNode:
def run(inputs: Mapping[str, object]) -> GraphNodeResult:
state = inputs["state"]
candidate = generator.generate(state) # type: ignore[arg-type]
return GraphNodeResult(
outputs={
"candidate": GraphValue(candidate, "candidate"),
}
)
return FunctionalNode(
node_id=node_id,
node_type="generator",
input_names=("state",),
output_names=("candidate",),
fn=run,
)
def critic_node(
node_id: str,
critic: Critic[StateT, CandidateT, CritiqueT],
) -> FunctionalNode:
def run(inputs: Mapping[str, object]) -> GraphNodeResult:
critique = critic.critique(
inputs["state"], # type: ignore[arg-type]
inputs["candidate"], # type: ignore[arg-type]
)
return GraphNodeResult(
outputs={
"critique": GraphValue(critique, "critique"),
}
)
return FunctionalNode(
node_id=node_id,
node_type="critic",
input_names=("state", "candidate"),
output_names=("critique",),
fn=run,
)
def categorizer_node(
node_id: str,
categorizer: Categorizer[StateT, CandidateT, CategoryT],
) -> FunctionalNode:
def run(inputs: Mapping[str, object]) -> GraphNodeResult:
category = categorizer.categorize(
inputs["state"], # type: ignore[arg-type]
inputs["candidate"], # type: ignore[arg-type]
)
return GraphNodeResult(
outputs={
"category": GraphValue(category, "category"),
}
)
return FunctionalNode(
node_id=node_id,
node_type="categorizer",
input_names=("state", "candidate"),
output_names=("category",),
fn=run,
)
def policy_node(
node_id: str,
policy: AcceptancePolicy[StateT, CandidateT, CritiqueT, CategoryT],
) -> FunctionalNode:
def run(inputs: Mapping[str, object]) -> GraphNodeResult:
decision = policy.decide(
inputs["state"], # type: ignore[arg-type]
inputs["candidate"], # type: ignore[arg-type]
inputs["critique"], # type: ignore[arg-type]
inputs["category"], # type: ignore[arg-type]
)
return GraphNodeResult(
outputs={
"decision": GraphValue(decision, "decision"),
"accepted": GraphValue(decision.accepted, "boolean"),
}
)
return FunctionalNode(
node_id=node_id,
node_type="policy",
input_names=("state", "candidate", "critique", "category"),
output_names=("decision", "accepted"),
fn=run,
)
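A `GraphEdgeSpec` routes one named output of a source node into one named input of a target node. A hedged sketch of that wiring, with plain dicts standing in for `GraphValue`/`GraphNodeResult`:

```python
def route(edge: dict, source_outputs: dict) -> dict:
    # copy the edge's source output value into the target's input slot
    return {edge["target_input"]: source_outputs[edge["source_output"]]}

edge = {"source_output": "candidate", "target_input": "candidate"}
outputs = {"candidate": (0, 1)}
assert route(edge, outputs) == {"candidate": (0, 1)}
```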

src/synaptopus/hopfield.py (new file)
@@ -0,0 +1,245 @@
from __future__ import annotations
from dataclasses import dataclass
import json
import math
@dataclass(frozen=True)
class HopfieldParams:
epsilon: float = 0.005
resistance_scale: float = 3.5
capacitance_scale: float = 10.0
weight_scale: float = 1.0
input_scale: float = 1.0
iteration_scale: float = 1.0
global_resistance: float = 1.0
global_capacitance: float = 1.0
@dataclass(frozen=True)
class HopfieldNetworkState:
activations: tuple[tuple[float, ...], ...]
outputs: tuple[tuple[float, ...], ...]
external_inputs: tuple[tuple[float, ...], ...]
@dataclass(frozen=True)
class HopfieldRunResult:
state: HopfieldNetworkState
iterations: int
def tanh_clamped(value: float, exp_max: float = 80.0) -> float:
    # clamp the argument so math.exp cannot overflow for large |value|
    value = max(min(value, exp_max), -exp_max)
    return (math.exp(value) - math.exp(-value)) / (math.exp(value) + math.exp(-value))
class HopfieldNetwork:
def __init__(
self,
*,
weight_matrix: tuple[tuple[float, ...], ...],
params: HopfieldParams | None = None,
) -> None:
self.weight_matrix = weight_matrix
self.params = params or HopfieldParams()
def run(
self,
external_inputs: tuple[tuple[float, ...], ...],
*,
initial_activations: tuple[tuple[float, ...], ...] | None = None,
) -> HopfieldRunResult:
row_count = len(external_inputs)
if row_count == 0:
raise ValueError("external_inputs cannot be empty")
column_count = len(external_inputs[0])
if any(len(row) != column_count for row in external_inputs):
raise ValueError("external_inputs rows must be the same length")
_validate_weight_matrix(self.weight_matrix, active_size=row_count * column_count)
base_activations = initial_activations or tuple(
tuple(0.5 for _ in range(column_count)) for _ in range(row_count)
)
if len(base_activations) != row_count or any(
len(row) != column_count for row in base_activations
):
raise ValueError("initial_activations shape must match external_inputs")
activations = [
[list(row) for row in base_activations],
[list(row) for row in base_activations],
]
outputs = [
[[0.0 for _ in range(column_count)] for _ in range(row_count)],
[[0.0 for _ in range(column_count)] for _ in range(row_count)],
]
inputs = [
[list(row) for row in external_inputs],
[list(row) for row in external_inputs],
]
        outputs[1][0][0] = 20.0  # seed one comparison slot outside [0, 1] so the first _done check fails
time_step = 0
_update_outputs(activations, outputs, time_step, self.params, row_count, column_count)
iterations = 0
while not _done(outputs, self.params.epsilon, row_count, column_count):
time_step = time_step % 2
next_time = (time_step + 1) % 2
_update_outputs(
activations, outputs, time_step, self.params, row_count, column_count
)
for row_index in range(row_count):
for column_index in range(column_count):
delta = _delta_neuron_activation(
row_index=row_index,
column_index=column_index,
row_count=row_count,
column_count=column_count,
time_step=time_step,
activations=activations,
outputs=outputs,
inputs=inputs,
weight_matrix=self.weight_matrix,
params=self.params,
)
activations[next_time][row_index][column_index] = (
activations[time_step][row_index][column_index]
+ self.params.iteration_scale * delta
)
time_step += 1
iterations += 1
final_slot = time_step % 2
state = HopfieldNetworkState(
activations=tuple(tuple(row) for row in activations[final_slot]),
outputs=tuple(tuple(row) for row in outputs[final_slot]),
external_inputs=tuple(tuple(row) for row in external_inputs),
)
return HopfieldRunResult(state=state, iterations=iterations)
def to_dict(self) -> dict[str, object]:
return {
"weight_matrix": [list(row) for row in self.weight_matrix],
"params": {
"epsilon": self.params.epsilon,
"resistance_scale": self.params.resistance_scale,
"capacitance_scale": self.params.capacitance_scale,
"weight_scale": self.params.weight_scale,
"input_scale": self.params.input_scale,
"iteration_scale": self.params.iteration_scale,
"global_resistance": self.params.global_resistance,
"global_capacitance": self.params.global_capacitance,
},
}
@classmethod
def from_dict(cls, data: dict[str, object]) -> "HopfieldNetwork":
params_data = data["params"] # type: ignore[index]
return cls(
weight_matrix=tuple(
tuple(float(value) for value in row)
for row in data["weight_matrix"] # type: ignore[index]
),
params=HopfieldParams(
epsilon=float(params_data["epsilon"]), # type: ignore[index]
resistance_scale=float(params_data["resistance_scale"]), # type: ignore[index]
capacitance_scale=float(params_data["capacitance_scale"]), # type: ignore[index]
weight_scale=float(params_data["weight_scale"]), # type: ignore[index]
input_scale=float(params_data["input_scale"]), # type: ignore[index]
iteration_scale=float(params_data["iteration_scale"]), # type: ignore[index]
global_resistance=float(params_data["global_resistance"]), # type: ignore[index]
global_capacitance=float(params_data["global_capacitance"]), # type: ignore[index]
),
)
def save_json(self, path: str) -> None:
with open(path, "w", encoding="utf-8") as handle:
json.dump(self.to_dict(), handle, indent=2)
@classmethod
def load_json(cls, path: str) -> "HopfieldNetwork":
with open(path, "r", encoding="utf-8") as handle:
return cls.from_dict(json.load(handle))
def _validate_weight_matrix(
weight_matrix: tuple[tuple[float, ...], ...],
*,
active_size: int,
) -> None:
if len(weight_matrix) < active_size:
raise ValueError(f"weight matrix needs at least {active_size} rows")
if any(len(row) < active_size for row in weight_matrix[:active_size]):
raise ValueError(f"weight matrix needs at least {active_size} columns")
def _update_outputs(
activations: list[list[list[float]]],
outputs: list[list[list[float]]],
time_step: int,
params: HopfieldParams,
row_count: int,
column_count: int,
) -> None:
for row_index in range(row_count):
for column_index in range(column_count):
outputs[time_step][row_index][column_index] = 0.5 * (
1.0
+ tanh_clamped(
activations[time_step][row_index][column_index]
/ params.global_capacitance
)
)
def _done(
outputs: list[list[list[float]]],
epsilon: float,
row_count: int,
column_count: int,
) -> bool:
for row_index in range(row_count):
for column_index in range(column_count):
if abs(outputs[0][row_index][column_index] - outputs[1][row_index][column_index]) > epsilon:
return False
return True
def _weight_coord(row_index: int, column_index: int, row_count: int) -> int:
return row_count * column_index + row_index
def _delta_neuron_activation(
*,
row_index: int,
column_index: int,
row_count: int,
column_count: int,
time_step: int,
activations: list[list[list[float]]],
outputs: list[list[list[float]]],
inputs: list[list[list[float]]],
weight_matrix: tuple[tuple[float, ...], ...],
params: HopfieldParams,
) -> float:
weight_sum = 0.0
current_index = _weight_coord(row_index, column_index, row_count)
for other_row in range(row_count):
for other_column in range(column_count):
other_index = _weight_coord(other_row, other_column, row_count)
weight_sum += (
weight_matrix[current_index][other_index]
* params.weight_scale
* outputs[time_step][other_row][other_column]
)
activation = activations[time_step][row_index][column_index]
neuron_input = inputs[time_step][row_index][column_index]
numerator = (
-(activation / (params.global_resistance * params.resistance_scale))
+ (neuron_input * params.input_scale)
+ weight_sum
)
return numerator / (params.global_capacitance * params.capacitance_scale)
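The per-neuron update in `_delta_neuron_activation` is the continuous Hopfield dynamic: da = (-a/(R·r) + I·i + Σⱼ wᵢⱼ·Vⱼ) / (C·c), with output V = 0.5·(1 + tanh(a/C)). A one-neuron sketch (parameter values here are illustrative, matching the `HopfieldParams` defaults):

```python
import math

def delta_activation(a, ext_input, weight_sum, *, R=1.0, r=3.5, i=1.0, C=1.0, c=10.0):
    # leak term, external drive, and weighted recurrent input, scaled by capacitance
    return (-(a / (R * r)) + ext_input * i + weight_sum) / (C * c)

def output(a, C=1.0):
    # sigmoid-shaped output in [0, 1]
    return 0.5 * (1.0 + math.tanh(a / C))

d = delta_activation(0.5, 1.0, 0.0)
assert abs(d - ((-0.5 / 3.5) + 1.0) / 10.0) < 1e-12
assert abs(output(0.0) - 0.5) < 1e-12
```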

@@ -0,0 +1,97 @@
from __future__ import annotations
from dataclasses import dataclass
@dataclass(frozen=True)
class HopfieldGridShape:
row_count: int
column_count: int
@property
def size(self) -> int:
return self.row_count * self.column_count
def grid_index(row_index: int, column_index: int, shape: HopfieldGridShape) -> int:
if not (0 <= row_index < shape.row_count):
raise ValueError(f"row_index {row_index} out of range")
if not (0 <= column_index < shape.column_count):
raise ValueError(f"column_index {column_index} out of range")
return shape.row_count * column_index + row_index
def zero_weight_matrix(shape: HopfieldGridShape) -> tuple[tuple[float, ...], ...]:
return tuple(tuple(0.0 for _ in range(shape.size)) for _ in range(shape.size))
def accumulate_sequence_transitions(
shape: HopfieldGridShape,
sequences: tuple[tuple[int, ...], ...] | list[tuple[int, ...]] | list[list[int]],
*,
transition_offsets: tuple[int, ...] = (1,),
weight_increment: float = 1.0,
one_based_rows: bool = True,
) -> tuple[tuple[float, ...], ...]:
weights = [list(row) for row in zero_weight_matrix(shape)]
for raw_sequence in sequences:
sequence = tuple(int(value) for value in raw_sequence)
for offset in transition_offsets:
if offset <= 0:
raise ValueError("transition offsets must be positive")
for start in range(len(sequence) - offset):
left_row = sequence[start] - 1 if one_based_rows else sequence[start]
right_row = sequence[start + offset] - 1 if one_based_rows else sequence[start + offset]
left_column = start
right_column = start + offset
if not (0 <= left_column < shape.column_count and 0 <= right_column < shape.column_count):
continue
left_index = grid_index(left_row, left_column, shape)
right_index = grid_index(right_row, right_column, shape)
weights[left_index][right_index] += weight_increment
weights[right_index][left_index] = weights[left_index][right_index]
return tuple(tuple(row) for row in weights)
def apply_grid_inhibition(
weight_matrix: tuple[tuple[float, ...], ...],
shape: HopfieldGridShape,
*,
row_inhibition: float = 0.0,
column_inhibition: float = 0.0,
) -> tuple[tuple[float, ...], ...]:
weights = [list(row) for row in weight_matrix]
if len(weights) != shape.size or any(len(row) != shape.size for row in weights):
raise ValueError("weight_matrix shape does not match grid shape")
for row_index in range(shape.row_count):
for column_index in range(shape.column_count):
current_index = grid_index(row_index, column_index, shape)
for other_row in range(shape.row_count):
if other_row == row_index:
continue
other_index = grid_index(other_row, column_index, shape)
if other_index <= current_index:
continue
weights[current_index][other_index] += column_inhibition
weights[other_index][current_index] = weights[current_index][other_index]
for other_column in range(shape.column_count):
if other_column == column_index:
continue
other_index = grid_index(row_index, other_column, shape)
if other_index <= current_index:
continue
weights[current_index][other_index] += row_inhibition
weights[other_index][current_index] = weights[current_index][other_index]
return tuple(tuple(row) for row in weights)
def clear_diagonal(
weight_matrix: tuple[tuple[float, ...], ...],
) -> tuple[tuple[float, ...], ...]:
weights = [list(row) for row in weight_matrix]
if any(len(row) != len(weights) for row in weights):
raise ValueError("weight_matrix must be square")
for index in range(len(weights)):
weights[index][index] = 0.0
return tuple(tuple(row) for row in weights)
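The grid utilities above flatten a (row, column) cell into a single neuron index in column-major order, mirroring `grid_index` and `_weight_coord`. A minimal sketch:

```python
def flat_index(row: int, column: int, row_count: int) -> int:
    # column-major: each column occupies a contiguous run of row_count indices
    return row_count * column + row

# For a 3-row grid, column 0 holds indices 0..2 and column 1 holds 3..5.
assert [flat_index(r, c, 3) for c in range(2) for r in range(3)] == [0, 1, 2, 3, 4, 5]
assert flat_index(2, 1, 3) == 5
```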

@@ -0,0 +1,68 @@
from __future__ import annotations
from dataclasses import dataclass
import time
from typing import Generic, TypeVar
from .architectures import (
AcceptancePolicy,
Categorizer,
Critic,
Generator,
PolicyDecision,
StateTransition,
)
from .runtime import StepTrace
StateT = TypeVar("StateT")
CandidateT = TypeVar("CandidateT")
CritiqueT = TypeVar("CritiqueT")
CategoryT = TypeVar("CategoryT")
@dataclass(frozen=True)
class HybridStepMetadata(Generic[CritiqueT, CategoryT]):
critique: CritiqueT
category: CategoryT
decision: PolicyDecision
class CooperativeSystem(Generic[StateT, CandidateT, CritiqueT, CategoryT]):
def __init__(
self,
*,
generator: Generator[StateT, CandidateT],
critic: Critic[StateT, CandidateT, CritiqueT],
categorizer: Categorizer[StateT, CandidateT, CategoryT],
policy: AcceptancePolicy[StateT, CandidateT, CritiqueT, CategoryT],
transition: StateTransition[StateT, CandidateT, CritiqueT, CategoryT],
) -> None:
self.generator = generator
self.critic = critic
self.categorizer = categorizer
self.policy = policy
self.transition = transition
def step(
self,
state: StateT,
) -> StepTrace[StateT, CandidateT, HybridStepMetadata[CritiqueT, CategoryT]]:
start_time = time.perf_counter()
candidate = self.generator.generate(state)
critique = self.critic.critique(state, candidate)
category = self.categorizer.categorize(state, candidate)
decision = self.policy.decide(state, candidate, critique, category)
next_state = self.transition.advance(state, candidate, critique, category, decision)
return StepTrace(
previous_state=state,
next_state=next_state,
candidate=candidate,
accepted=decision.accepted,
elapsed_seconds=time.perf_counter() - start_time,
metadata=HybridStepMetadata(
critique=critique,
category=category,
decision=decision,
),
)
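`CooperativeSystem.step` runs its components in a fixed order: generate, critique, categorize, decide, advance. A toy sketch of that pipeline with trivial callables standing in for the component objects:

```python
def step(state, generate, critique, categorize, decide, advance):
    candidate = generate(state)
    crit = critique(state, candidate)
    cat = categorize(state, candidate)
    decision = decide(state, candidate, crit, cat)
    return advance(state, candidate, crit, cat, decision)

next_state = step(
    0,
    lambda s: s + 1,                      # candidate = state + 1
    lambda s, c: c * 2,                   # toy critique score
    lambda s, c: "cat",                   # toy category label
    lambda s, c, cr, ca: cr > 0,          # accept if the critique is positive
    lambda s, c, cr, ca, d: s + (1 if d else 0),
)
assert next_state == 1
```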

@@ -0,0 +1,61 @@
from __future__ import annotations
from dataclasses import asdict
from pathlib import Path
from typing import Callable, Iterable, TypeVar
from .analysis import SequenceAnalysis, analyze_sequence
from .runtime import ExecutionRecord
from .types import RunReport
StateT = TypeVar("StateT")
CandidateT = TypeVar("CandidateT")
MetadataT = TypeVar("MetadataT")
def summarize_execution(
record: ExecutionRecord[StateT, CandidateT, MetadataT],
*,
parameters: dict[str, object] | None = None,
sequence: Iterable[int] | None = None,
alphabet_size: int | None = None,
) -> RunReport:
sequence_analysis: dict[str, float | int] = {}
if sequence is not None:
if alphabet_size is None:
raise ValueError("alphabet_size is required when sequence is provided")
analysis = analyze_sequence(tuple(sequence), alphabet_size=alphabet_size)
sequence_analysis = asdict(analysis)
return RunReport(
parameters=dict(parameters or {}),
accepted_count=record.accepted_count,
attempt_count=record.attempt_count,
total_seconds=record.total_seconds,
sequence_analysis=sequence_analysis,
average_attempts_per_accept=(
record.attempt_count / record.accepted_count if record.accepted_count else 0.0
),
)
def summarize_sequence_run(
record: ExecutionRecord[StateT, CandidateT, MetadataT],
*,
sequence_getter: Callable[[ExecutionRecord[StateT, CandidateT, MetadataT]], Iterable[int]],
alphabet_size: int,
parameters: dict[str, object] | None = None,
) -> RunReport:
return summarize_execution(
record,
parameters=parameters,
sequence=tuple(sequence_getter(record)),
alphabet_size=alphabet_size,
)
def save_run_report_json(report: RunReport, path: str | Path) -> None:
from .artifacts import save_artifact_json
save_artifact_json("run_report", report, path)
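The report's `average_attempts_per_accept` is a guarded ratio: total attempts over accepted steps, falling back to 0.0 when nothing was accepted. In isolation:

```python
def avg_attempts_per_accept(attempt_count: int, accepted_count: int) -> float:
    # guard against division by zero for runs that accepted nothing
    return attempt_count / accepted_count if accepted_count else 0.0

assert avg_attempts_per_accept(12, 4) == 3.0
assert avg_attempts_per_accept(5, 0) == 0.0
```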

src/synaptopus/runtime.py (new file)
@@ -0,0 +1,106 @@
from __future__ import annotations
from dataclasses import dataclass, field
import time
from typing import Generic, Protocol, TypeVar
StateT = TypeVar("StateT")
CandidateT = TypeVar("CandidateT")
MetadataT = TypeVar("MetadataT")
@dataclass(frozen=True)
class StepTrace(Generic[StateT, CandidateT, MetadataT]):
previous_state: StateT
next_state: StateT
candidate: CandidateT
accepted: bool
elapsed_seconds: float
metadata: MetadataT | None = None
@dataclass(frozen=True)
class ExecutionRecord(Generic[StateT, CandidateT, MetadataT]):
accepted: tuple[StepTrace[StateT, CandidateT, MetadataT], ...]
attempts: tuple[StepTrace[StateT, CandidateT, MetadataT], ...]
final_state: StateT
total_seconds: float
@property
def accepted_count(self) -> int:
return len(self.accepted)
@property
def attempt_count(self) -> int:
return len(self.attempts)
class AcceptRejectSystem(Protocol[StateT, CandidateT, MetadataT]):
def step(self, state: StateT) -> StepTrace[StateT, CandidateT, MetadataT]:
...
def run_until_acceptance(
system: AcceptRejectSystem[StateT, CandidateT, MetadataT],
initial_state: StateT,
*,
max_attempts: int,
) -> ExecutionRecord[StateT, CandidateT, MetadataT]:
start_time = time.perf_counter()
attempts: list[StepTrace[StateT, CandidateT, MetadataT]] = []
current_state = initial_state
for _ in range(max_attempts):
step = system.step(current_state)
attempts.append(step)
if step.accepted:
return ExecutionRecord(
accepted=(step,),
attempts=tuple(attempts),
final_state=step.next_state,
total_seconds=time.perf_counter() - start_time,
)
current_state = step.next_state
raise RuntimeError("failed to produce an accepted step within max_attempts")
def run_until_acceptance_count(
system: AcceptRejectSystem[StateT, CandidateT, MetadataT],
initial_state: StateT,
*,
accepted_count: int,
max_attempts_per_accept: int,
) -> ExecutionRecord[StateT, CandidateT, MetadataT]:
start_time = time.perf_counter()
attempts: list[StepTrace[StateT, CandidateT, MetadataT]] = []
accepted_steps: list[StepTrace[StateT, CandidateT, MetadataT]] = []
current_state = initial_state
for _ in range(accepted_count):
accepted_run = run_until_acceptance(
system,
current_state,
max_attempts=max_attempts_per_accept,
)
attempts.extend(accepted_run.attempts)
accepted_steps.extend(accepted_run.accepted)
current_state = accepted_run.final_state
return ExecutionRecord(
accepted=tuple(accepted_steps),
attempts=tuple(attempts),
final_state=current_state,
total_seconds=time.perf_counter() - start_time,
)
def merge_execution_records(
left: ExecutionRecord[StateT, CandidateT, MetadataT],
right: ExecutionRecord[StateT, CandidateT, MetadataT],
) -> ExecutionRecord[StateT, CandidateT, MetadataT]:
    if not right.attempts or left.final_state != right.attempts[0].previous_state:
        raise ValueError("right record does not continue from left final_state")
return ExecutionRecord(
accepted=left.accepted + right.accepted,
attempts=left.attempts + right.attempts,
final_state=right.final_state,
total_seconds=left.total_seconds + right.total_seconds,
)
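The contract of `run_until_acceptance` above: keep stepping the system, threading the next state through, until a step reports `accepted=True` or the attempt budget runs out. A toy illustration (`TinySystem` is a hypothetical stand-in for `AcceptRejectSystem`, with a dict in place of `StepTrace`):

```python
class TinySystem:
    def step(self, state):
        # accept once the counter reaches 3
        return {"next_state": state + 1, "accepted": state + 1 >= 3}

def run(system, state, max_attempts):
    for attempt in range(1, max_attempts + 1):
        trace = system.step(state)
        state = trace["next_state"]
        if trace["accepted"]:
            return state, attempt
    raise RuntimeError("failed to produce an accepted step within max_attempts")

assert run(TinySystem(), 0, max_attempts=10) == (3, 3)
```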

@@ -0,0 +1,180 @@
from __future__ import annotations
from dataclasses import dataclass, fields, is_dataclass
from pathlib import Path
from typing import Callable, TypeVar
from .runtime import ExecutionRecord, StepTrace
StateT = TypeVar("StateT")
CandidateT = TypeVar("CandidateT")
MetadataT = TypeVar("MetadataT")
def to_jsonable(value: object) -> object:
if is_dataclass(value):
return {
field.name: to_jsonable(getattr(value, field.name))
for field in fields(value)
}
if isinstance(value, tuple):
return [to_jsonable(item) for item in value]
if isinstance(value, list):
return [to_jsonable(item) for item in value]
if isinstance(value, dict):
return {str(key): to_jsonable(item) for key, item in value.items()}
return value
@dataclass(frozen=True)
class SerializedStepTrace:
previous_state: object
next_state: object
candidate: object
accepted: bool
elapsed_seconds: float
metadata: object | None
@dataclass(frozen=True)
class SerializedExecutionRecord:
accepted: tuple[SerializedStepTrace, ...]
attempts: tuple[SerializedStepTrace, ...]
final_state: object
total_seconds: float
def serialize_step_trace(
trace: StepTrace[StateT, CandidateT, MetadataT],
*,
state_encoder: Callable[[StateT], object] = to_jsonable,
candidate_encoder: Callable[[CandidateT], object] = to_jsonable,
metadata_encoder: Callable[[MetadataT | None], object | None] = to_jsonable,
) -> SerializedStepTrace:
return SerializedStepTrace(
previous_state=state_encoder(trace.previous_state),
next_state=state_encoder(trace.next_state),
candidate=candidate_encoder(trace.candidate),
accepted=trace.accepted,
elapsed_seconds=trace.elapsed_seconds,
metadata=metadata_encoder(trace.metadata),
)
def serialize_execution_record(
record: ExecutionRecord[StateT, CandidateT, MetadataT],
*,
state_encoder: Callable[[StateT], object] = to_jsonable,
candidate_encoder: Callable[[CandidateT], object] = to_jsonable,
metadata_encoder: Callable[[MetadataT | None], object | None] = to_jsonable,
) -> SerializedExecutionRecord:
return SerializedExecutionRecord(
accepted=tuple(
serialize_step_trace(
trace,
state_encoder=state_encoder,
candidate_encoder=candidate_encoder,
metadata_encoder=metadata_encoder,
)
for trace in record.accepted
),
attempts=tuple(
serialize_step_trace(
trace,
state_encoder=state_encoder,
candidate_encoder=candidate_encoder,
metadata_encoder=metadata_encoder,
)
for trace in record.attempts
),
final_state=state_encoder(record.final_state),
total_seconds=record.total_seconds,
)
def deserialize_step_trace(
data: SerializedStepTrace | dict[str, object],
*,
state_decoder: Callable[[object], StateT],
candidate_decoder: Callable[[object], CandidateT],
metadata_decoder: Callable[[object | None], MetadataT | None],
) -> StepTrace[StateT, CandidateT, MetadataT]:
if isinstance(data, SerializedStepTrace):
payload = data
else:
payload = SerializedStepTrace(
previous_state=data["previous_state"],
next_state=data["next_state"],
candidate=data["candidate"],
accepted=bool(data["accepted"]),
elapsed_seconds=float(data["elapsed_seconds"]),
metadata=data.get("metadata"),
)
return StepTrace(
previous_state=state_decoder(payload.previous_state),
next_state=state_decoder(payload.next_state),
candidate=candidate_decoder(payload.candidate),
accepted=payload.accepted,
elapsed_seconds=payload.elapsed_seconds,
metadata=metadata_decoder(payload.metadata),
)
def deserialize_execution_record(
data: SerializedExecutionRecord | dict[str, object],
*,
state_decoder: Callable[[object], StateT],
candidate_decoder: Callable[[object], CandidateT],
metadata_decoder: Callable[[object | None], MetadataT | None],
) -> ExecutionRecord[StateT, CandidateT, MetadataT]:
if isinstance(data, SerializedExecutionRecord):
payload = data
else:
payload = SerializedExecutionRecord(
accepted=tuple(data["accepted"]), # type: ignore[arg-type]
attempts=tuple(data["attempts"]), # type: ignore[arg-type]
final_state=data["final_state"],
total_seconds=float(data["total_seconds"]),
)
return ExecutionRecord(
accepted=tuple(
deserialize_step_trace(
step,
state_decoder=state_decoder,
candidate_decoder=candidate_decoder,
metadata_decoder=metadata_decoder,
)
for step in payload.accepted
),
attempts=tuple(
deserialize_step_trace(
step,
state_decoder=state_decoder,
candidate_decoder=candidate_decoder,
metadata_decoder=metadata_decoder,
)
for step in payload.attempts
),
final_state=state_decoder(payload.final_state),
total_seconds=payload.total_seconds,
)
def save_execution_record_json(
record: ExecutionRecord[StateT, CandidateT, MetadataT],
path: str | Path,
*,
state_encoder: Callable[[StateT], object] = to_jsonable,
candidate_encoder: Callable[[CandidateT], object] = to_jsonable,
metadata_encoder: Callable[[MetadataT | None], object | None] = to_jsonable,
) -> None:
from .artifacts import save_artifact_json
serialized = serialize_execution_record(
record,
state_encoder=state_encoder,
candidate_encoder=candidate_encoder,
metadata_encoder=metadata_encoder,
)
save_artifact_json("execution_trace", serialized, path)
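`to_jsonable` above makes arbitrary trace values JSON-safe: dataclasses become dicts, tuples become lists, dict keys become strings. A standalone mirror of that recursion:

```python
from dataclasses import dataclass, fields, is_dataclass

def jsonable(value):
    # dataclass -> dict of jsonable fields; tuple/list -> list; dict keys -> str
    if is_dataclass(value):
        return {f.name: jsonable(getattr(value, f.name)) for f in fields(value)}
    if isinstance(value, (tuple, list)):
        return [jsonable(item) for item in value]
    if isinstance(value, dict):
        return {str(k): jsonable(v) for k, v in value.items()}
    return value

@dataclass(frozen=True)
class Point:
    coords: tuple[int, int]

assert jsonable(Point((1, 2))) == {"coords": [1, 2]}
```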

src/synaptopus/snapshots.py (new file)
@@ -0,0 +1,104 @@
from __future__ import annotations
from dataclasses import dataclass
from pathlib import Path
import json
from .artifacts import save_artifact_json
from .demo_registry import get_demo_definition
from .runtime import ExecutionRecord, merge_execution_records, run_until_acceptance_count
from .serialization import (
SerializedExecutionRecord,
deserialize_execution_record,
serialize_execution_record,
)
@dataclass(frozen=True)
class DemoSnapshot:
demo_name: str
system: dict[str, object]
record: SerializedExecutionRecord
parameters: dict[str, object]
def create_demo_snapshot(
demo_name: str,
*,
system: object,
record: ExecutionRecord[object, object, object],
parameters: dict[str, object] | None = None,
) -> DemoSnapshot:
definition = get_demo_definition(demo_name)
serialized_record = serialize_execution_record(
record,
state_encoder=definition.state_encoder,
)
return DemoSnapshot(
demo_name=demo_name,
system=definition.system_encoder(system),
record=serialized_record,
parameters=dict(parameters or {}),
)
def save_demo_snapshot_json(snapshot: DemoSnapshot, path: str | Path) -> None:
save_artifact_json(
"demo_snapshot",
snapshot,
path,
metadata={"demo_name": snapshot.demo_name},
)
def load_demo_snapshot_json(path: str | Path) -> DemoSnapshot:
payload = json.loads(Path(path).read_text(encoding="utf-8"))
if payload["artifact_type"] != "demo_snapshot":
raise ValueError("artifact is not a demo_snapshot")
envelope_payload = payload["payload"]
record_payload = envelope_payload["record"]
return DemoSnapshot(
demo_name=str(envelope_payload["demo_name"]),
system=dict(envelope_payload["system"]),
record=SerializedExecutionRecord(
accepted=tuple(record_payload["accepted"]),
attempts=tuple(record_payload["attempts"]),
final_state=record_payload["final_state"],
total_seconds=float(record_payload["total_seconds"]),
),
parameters=dict(envelope_payload.get("parameters", {})),
)
def restore_demo_snapshot(
snapshot: DemoSnapshot,
) -> tuple[object, ExecutionRecord[object, object, object]]:
definition = get_demo_definition(snapshot.demo_name)
system = definition.system_decoder(snapshot.system)
record = deserialize_execution_record(
snapshot.record,
state_decoder=definition.state_decoder,
candidate_decoder=definition.candidate_decoder,
metadata_decoder=definition.metadata_decoder,
)
return system, record
def resume_demo_snapshot(
snapshot: DemoSnapshot,
*,
additional_accepted_count: int,
max_attempts_per_accept: int,
) -> tuple[object, ExecutionRecord[object, object, object]]:
if additional_accepted_count < 0:
raise ValueError("additional_accepted_count must be non-negative")
system, record = restore_demo_snapshot(snapshot)
if additional_accepted_count == 0:
return system, record
continuation = run_until_acceptance_count(
system,
record.final_state,
accepted_count=additional_accepted_count,
max_attempts_per_accept=max_attempts_per_accept,
)
return system, merge_execution_records(record, continuation)

src/synaptopus/types.py Normal file

@ -0,0 +1,13 @@
from __future__ import annotations
from dataclasses import dataclass, field
@dataclass(frozen=True)
class RunReport:
parameters: dict[str, object] = field(default_factory=dict)
accepted_count: int = 0
attempt_count: int = 0
total_seconds: float = 0.0
sequence_analysis: dict[str, float | int] = field(default_factory=dict)
average_attempts_per_accept: float = 0.0

tests/test_analysis.py Normal file

@ -0,0 +1,23 @@
from __future__ import annotations
import math
from synaptopus.analysis import analyze_sequence, first_order_conditional_entropy, shannon_entropy
def test_shannon_entropy_is_zero_for_constant_sequence() -> None:
assert shannon_entropy((1, 1, 1, 1)) == 0.0
def test_first_order_conditional_entropy_is_zero_for_deterministic_transitions() -> None:
assert first_order_conditional_entropy((0, 1, 0, 1, 0, 1)) == 0.0
def test_analyze_sequence_reports_expected_bounds() -> None:
analysis = analyze_sequence((0, 1, 0, 1), alphabet_size=2)
assert analysis.item_count == 4
assert math.isclose(analysis.unigram_entropy_bits, 1.0)
assert math.isclose(analysis.conditional_entropy_bits, 0.0)
assert math.isclose(analysis.normalized_entropy, 1.0)
assert math.isclose(analysis.predictability, 1.0)
assert math.isclose(analysis.redundancy, 0.0)

tests/test_art1.py Normal file

@ -0,0 +1,47 @@
from __future__ import annotations
from synaptopus.art1 import ART1Network, ART1Params
def test_art1_commits_first_category() -> None:
network = ART1Network(ART1Params(max_categories=3, input_length=4, vigilance=0.9))
result = network.categorize((1, 0, 1, 0))
assert result.winner == 0
assert result.new_category is True
assert result.committed_categories == 1
assert result.expected_vector == (1, 0, 1, 0)
def test_art1_reuses_matching_category() -> None:
network = ART1Network(ART1Params(max_categories=3, input_length=4, vigilance=0.9))
network.categorize((1, 0, 1, 0))
result = network.categorize((1, 0, 1, 0))
assert result.winner == 0
assert result.new_category is False
assert result.committed_categories == 1
def test_art1_commits_new_category_for_nonmatching_pattern() -> None:
network = ART1Network(ART1Params(max_categories=3, input_length=4, vigilance=0.9))
network.categorize((1, 0, 1, 0))
result = network.categorize((0, 1, 0, 1))
assert result.winner == 1
assert result.new_category is True
assert result.committed_categories == 2
def test_art1_round_trips_through_dict() -> None:
network = ART1Network(ART1Params(max_categories=2, input_length=4, vigilance=0.8))
network.categorize((1, 1, 0, 0))
restored = ART1Network.from_dict(network.to_dict())
assert restored.vigilance == network.vigilance
assert restored.committed_categories == network.committed_categories
assert restored.categories == network.categories

tests/test_backprop.py Normal file

@ -0,0 +1,69 @@
from __future__ import annotations
import random
from synaptopus.backprop import BackpropNetwork
def test_backprop_supports_multiple_hidden_layers() -> None:
network = BackpropNetwork.random(
input_size=3,
hidden_layers=(4, 3),
output_size=2,
learning_rate=0.5,
momentum=0.1,
rng=random.Random(7),
)
result = network.predict((0.0, 1.0, 0.5))
assert network.hidden_layers == (4, 3)
assert network.output_size == 2
assert len(result.outputs) == 2
assert len(result.layer_states) == 3
def test_backprop_can_learn_xor_with_two_hidden_layers() -> None:
network = BackpropNetwork.random(
input_size=2,
hidden_layers=(4, 4),
output_size=1,
learning_rate=0.8,
momentum=0.2,
rng=random.Random(11),
)
samples = (
((0.0, 0.0), (0.0,)),
((0.0, 1.0), (1.0,)),
((1.0, 0.0), (1.0,)),
((1.0, 1.0), (0.0,)),
)
for _ in range(6000):
for inputs, targets in samples:
network.train_step(inputs, targets)
predictions = {
inputs: network.predict(inputs).outputs[0]
for inputs, _ in samples
}
assert predictions[(0.0, 0.0)] < 0.2
assert predictions[(0.0, 1.0)] > 0.8
assert predictions[(1.0, 0.0)] > 0.8
assert predictions[(1.0, 1.0)] < 0.2
def test_backprop_round_trips_through_dict() -> None:
network = BackpropNetwork.random(
input_size=2,
hidden_layers=(3, 2),
output_size=1,
rng=random.Random(3),
)
restored = BackpropNetwork.from_dict(network.to_dict())
assert restored.layer_sizes == network.layer_sizes
assert restored.weights == network.weights
assert restored.biases == network.biases

tests/test_demo_export.py Normal file

@ -0,0 +1,58 @@
from __future__ import annotations
import json
from synaptopus.demo_export import export_demo_artifacts, export_xor_demo_artifacts
def test_demo_export_writes_all_artifacts(tmp_path) -> None:
artifacts = export_xor_demo_artifacts(tmp_path)
graph = json.loads(artifacts["graph"].read_text(encoding="utf-8"))
trace = json.loads(artifacts["trace"].read_text(encoding="utf-8"))
report = json.loads(artifacts["report"].read_text(encoding="utf-8"))
manifest = json.loads(artifacts["manifest"].read_text(encoding="utf-8"))
assert graph["artifact_type"] == "graph_schema"
assert graph["payload"]["nodes"][0]["node_type"] == "generator"
assert trace["artifact_type"] == "execution_trace"
assert trace["payload"]["accepted"][0]["candidate"] == [0, 1]
assert report["artifact_type"] == "run_report"
assert report["payload"]["parameters"]["example"] == "xor_novelty"
assert report["payload"]["accepted_count"] == 2
assert manifest["artifacts"][0]["artifact_type"] == "graph_schema"
def test_demo_export_can_target_parity_pressure_demo(tmp_path) -> None:
artifacts = export_demo_artifacts(
tmp_path,
demo_name="parity_pressure",
accepted_count=3,
max_attempts_per_accept=12,
)
trace = json.loads(artifacts["trace"].read_text(encoding="utf-8"))
report = json.loads(artifacts["report"].read_text(encoding="utf-8"))
assert report["payload"]["parameters"]["example"] == "parity_pressure"
assert report["payload"]["accepted_count"] == 3
assert trace["payload"]["attempts"][2]["metadata"]["category"]["delta_vigilance"] is True
def test_demo_export_can_write_and_resume_from_snapshot(tmp_path) -> None:
artifacts = export_demo_artifacts(
tmp_path,
demo_name="parity_pressure",
accepted_count=4,
max_attempts_per_accept=12,
snapshot_after_accepted=2,
)
manifest = json.loads(artifacts["manifest"].read_text(encoding="utf-8"))
snapshot = json.loads(artifacts["snapshot"].read_text(encoding="utf-8"))
report = json.loads(artifacts["report"].read_text(encoding="utf-8"))
assert manifest["artifacts"][-1]["artifact_type"] == "demo_snapshot"
assert snapshot["artifact_type"] == "demo_snapshot"
assert snapshot["payload"]["record"]["final_state"]["attempts"] >= 2
assert report["payload"]["accepted_count"] == 4

tests/test_examples.py Normal file

@ -0,0 +1,66 @@
from __future__ import annotations
from synaptopus.examples import (
ParityPressureState,
XorDemoState,
build_parity_pressure_demo,
build_xor_novelty_demo,
)
from synaptopus.reporting import summarize_sequence_run
from synaptopus.runtime import run_until_acceptance_count
def test_xor_novelty_demo_accepts_xor_positive_patterns() -> None:
system = build_xor_novelty_demo()
record = run_until_acceptance_count(
system,
XorDemoState(),
accepted_count=2,
max_attempts_per_accept=4,
)
assert tuple(step.candidate for step in record.accepted) == ((0, 1), (1, 0))
assert record.final_state.accepted == ((0, 1), (1, 0))
def test_xor_novelty_demo_produces_reportable_sequence() -> None:
system = build_xor_novelty_demo()
record = run_until_acceptance_count(
system,
XorDemoState(),
accepted_count=2,
max_attempts_per_accept=4,
)
report = summarize_sequence_run(
record,
sequence_getter=lambda current: [left * 2 + right for left, right in current.final_state.accepted],
alphabet_size=4,
parameters={"example": "xor_novelty"},
)
assert report.parameters["example"] == "xor_novelty"
assert report.accepted_count == 2
assert report.sequence_analysis["item_count"] == 2
def test_parity_pressure_demo_exhibits_retries_and_repeated_acceptance() -> None:
system = build_parity_pressure_demo()
record = run_until_acceptance_count(
system,
ParityPressureState(),
accepted_count=4,
max_attempts_per_accept=10,
)
accepted_candidates = tuple(step.candidate for step in record.accepted)
assert len(accepted_candidates) == 4
assert record.attempt_count > record.accepted_count
assert all(sum(candidate) % 2 == 1 for candidate in accepted_candidates)
assert any(
(step.metadata is not None and step.metadata.category.delta_vigilance)
for step in record.attempts
if not step.accepted
)

tests/test_exports.py Normal file

@ -0,0 +1,9 @@
from __future__ import annotations
import synaptopus
def test_public_exports_are_available() -> None:
assert synaptopus.__version__ == "0.1.0"
assert callable(synaptopus.analyze_sequence)
assert callable(synaptopus.run_until_acceptance)

tests/test_graph.py Normal file

@ -0,0 +1,38 @@
from __future__ import annotations
from synaptopus.graph import categorizer_node, critic_node, generator_node, policy_node
from testsupport import (
AcceptEvenHighPolicy,
CounterState,
EvenCritic,
IncrementingGenerator,
ThresholdCategorizer,
)
def test_graph_nodes_wrap_component_roles() -> None:
state = CounterState(1)
candidate = generator_node("gen", IncrementingGenerator()).run({"state": state})
assert candidate.outputs["candidate"].value == 2
critique = critic_node("crit", EvenCritic()).run(
{"state": state, "candidate": candidate.outputs["candidate"].value}
)
assert critique.outputs["critique"].value is True
category = categorizer_node("cat", ThresholdCategorizer()).run(
{"state": state, "candidate": candidate.outputs["candidate"].value}
)
assert category.outputs["category"].value == "high"
decision = policy_node("pol", AcceptEvenHighPolicy()).run(
{
"state": state,
"candidate": candidate.outputs["candidate"].value,
"critique": critique.outputs["critique"].value,
"category": category.outputs["category"].value,
}
)
assert decision.outputs["accepted"].value is True
assert decision.outputs["decision"].value.label == "accept"

tests/test_hopfield.py Normal file

@ -0,0 +1,52 @@
from __future__ import annotations
from synaptopus.hopfield import HopfieldNetwork, HopfieldParams
def test_hopfield_zero_matrix_runs_on_arbitrary_grid_shape() -> None:
inputs = (
(0.8, 0.2),
(0.1, 0.9),
(0.4, 0.3),
)
size = len(inputs) * len(inputs[0])
weights = tuple(tuple(0.0 for _ in range(size)) for _ in range(size))
result = HopfieldNetwork(weight_matrix=weights).run(inputs)
assert result.iterations > 0
assert len(result.state.outputs) == 3
assert len(result.state.outputs[0]) == 2
def test_hopfield_respects_initial_activation_shape() -> None:
inputs = (
(0.5, 0.5),
(0.5, 0.5),
)
weights = tuple(tuple(0.0 for _ in range(4)) for _ in range(4))
network = HopfieldNetwork(weight_matrix=weights, params=HopfieldParams())
result = network.run(
inputs,
initial_activations=(
(0.1, 0.2),
(0.3, 0.4),
),
)
assert len(result.state.activations) == 2
assert len(result.state.activations[0]) == 2
def test_hopfield_round_trips_through_dict() -> None:
weights = tuple(tuple(float(i == j) for j in range(4)) for i in range(4))
network = HopfieldNetwork(
weight_matrix=weights,
params=HopfieldParams(epsilon=0.01, weight_scale=0.5),
)
restored = HopfieldNetwork.from_dict(network.to_dict())
assert restored.weight_matrix == network.weight_matrix
assert restored.params == network.params


@ -0,0 +1,57 @@
from __future__ import annotations
from synaptopus.hopfield_build import (
HopfieldGridShape,
accumulate_sequence_transitions,
apply_grid_inhibition,
clear_diagonal,
grid_index,
)
def test_accumulate_sequence_transitions_builds_symmetric_weights() -> None:
shape = HopfieldGridShape(row_count=3, column_count=3)
weights = accumulate_sequence_transitions(
shape,
sequences=[(1, 2, 3)],
transition_offsets=(1,),
weight_increment=-0.5,
)
left = grid_index(0, 0, shape)
right = grid_index(1, 1, shape)
assert weights[left][right] == -0.5
assert weights[right][left] == -0.5
def test_apply_grid_inhibition_matches_row_and_column_structure() -> None:
shape = HopfieldGridShape(row_count=3, column_count=2)
weights = tuple(tuple(0.0 for _ in range(shape.size)) for _ in range(shape.size))
inhibited = apply_grid_inhibition(
weights,
shape,
row_inhibition=-0.2,
column_inhibition=-0.1,
)
current = grid_index(1, 0, shape)
same_column_other_row = grid_index(0, 0, shape)
same_row_other_column = grid_index(1, 1, shape)
assert inhibited[current][same_column_other_row] == -0.1
assert inhibited[current][same_row_other_column] == -0.2
def test_clear_diagonal_zeros_self_connections() -> None:
weights = (
(1.0, 2.0),
(3.0, 4.0),
)
cleared = clear_diagonal(weights)
assert cleared[0][0] == 0.0
assert cleared[1][1] == 0.0
assert cleared[0][1] == 2.0

tests/test_orchestration.py Normal file

@ -0,0 +1,104 @@
from __future__ import annotations
from dataclasses import dataclass
from synaptopus.architectures import PolicyDecision
from synaptopus.orchestration import CooperativeSystem
from synaptopus.runtime import run_until_acceptance, run_until_acceptance_count
@dataclass(frozen=True)
class SequenceState:
accepted: tuple[int, ...] = ()
attempts: int = 0
class IncrementingGenerator:
def generate(self, state: SequenceState) -> int:
return state.attempts + 1
class EvenCritic:
def critique(self, state: SequenceState, candidate: int) -> bool:
return candidate % 2 == 0
class ModuloCategorizer:
def categorize(self, state: SequenceState, candidate: int) -> str:
return "novel" if candidate % 3 else "repeat"
class AcceptEvenNovelPolicy:
def decide(
self,
state: SequenceState,
candidate: int,
critique: bool,
category: str,
) -> PolicyDecision:
accepted = critique and category == "novel"
label = "accept" if accepted else "reject"
return PolicyDecision(accepted=accepted, label=label)
class SequenceTransition:
def advance(
self,
state: SequenceState,
candidate: int,
critique: bool,
category: str,
decision: PolicyDecision,
) -> SequenceState:
if decision.accepted:
return SequenceState(
accepted=state.accepted + (candidate,),
attempts=state.attempts + 1,
)
return SequenceState(
accepted=state.accepted,
attempts=state.attempts + 1,
)
def build_system() -> CooperativeSystem[SequenceState, int, bool, str]:
return CooperativeSystem(
generator=IncrementingGenerator(),
critic=EvenCritic(),
categorizer=ModuloCategorizer(),
policy=AcceptEvenNovelPolicy(),
transition=SequenceTransition(),
)
def test_cooperative_system_exposes_component_metadata() -> None:
step = build_system().step(SequenceState())
assert step.candidate == 1
assert step.accepted is False
assert step.metadata is not None
assert step.metadata.critique is False
assert step.metadata.category == "novel"
assert step.metadata.decision.label == "reject"
def test_cooperative_system_runs_until_acceptance() -> None:
record = run_until_acceptance(build_system(), SequenceState(), max_attempts=5)
assert record.accepted_count == 1
assert record.attempt_count == 2
assert record.final_state.accepted == (2,)
assert record.accepted[0].metadata is not None
assert record.accepted[0].metadata.decision.label == "accept"
def test_cooperative_system_runs_multiple_acceptances() -> None:
record = run_until_acceptance_count(
build_system(),
SequenceState(),
accepted_count=3,
max_attempts_per_accept=6,
)
assert record.final_state.accepted == (2, 4, 8)
assert tuple(step.candidate for step in record.accepted) == (2, 4, 8)

tests/test_reporting.py Normal file

@ -0,0 +1,64 @@
from __future__ import annotations
from dataclasses import dataclass
from synaptopus.reporting import summarize_execution, summarize_sequence_run
from synaptopus.runtime import StepTrace, run_until_acceptance_count
@dataclass(frozen=True)
class SequenceState:
accepted: tuple[int, ...] = ()
attempts: int = 0
class EvenAcceptanceSystem:
def step(self, state: SequenceState) -> StepTrace[SequenceState, int, None]:
candidate = state.attempts + 1
accepted = candidate % 2 == 0
next_state = SequenceState(
accepted=state.accepted + ((candidate,) if accepted else ()),
attempts=state.attempts + 1,
)
return StepTrace(
previous_state=state,
next_state=next_state,
candidate=candidate,
accepted=accepted,
elapsed_seconds=0.0,
metadata=None,
)
def test_summarize_execution_reports_attempt_rates() -> None:
record = run_until_acceptance_count(
EvenAcceptanceSystem(),
SequenceState(),
accepted_count=3,
max_attempts_per_accept=4,
)
report = summarize_execution(record, parameters={"mode": "demo"})
assert report.parameters["mode"] == "demo"
assert report.accepted_count == 3
assert report.attempt_count == 6
assert report.average_attempts_per_accept == 2.0
def test_summarize_sequence_run_includes_entropy_metrics() -> None:
record = run_until_acceptance_count(
EvenAcceptanceSystem(),
SequenceState(),
accepted_count=3,
max_attempts_per_accept=4,
)
report = summarize_sequence_run(
record,
sequence_getter=lambda current: current.final_state.accepted,
alphabet_size=8,
)
assert report.sequence_analysis["item_count"] == 3
assert "unigram_entropy_bits" in report.sequence_analysis

tests/test_runtime.py Normal file

@ -0,0 +1,75 @@
from __future__ import annotations
from dataclasses import dataclass
from synaptopus.runtime import (
StepTrace,
merge_execution_records,
run_until_acceptance,
run_until_acceptance_count,
)
@dataclass(frozen=True)
class CounterState:
value: int = 0
class EvenAcceptanceSystem:
def step(self, state: CounterState) -> StepTrace[CounterState, int, dict[str, int]]:
next_state = CounterState(state.value + 1)
return StepTrace(
previous_state=state,
next_state=next_state,
candidate=next_state.value,
accepted=next_state.value % 2 == 0,
elapsed_seconds=0.0,
metadata={"value": next_state.value},
)
def test_run_until_acceptance_collects_rejected_attempts() -> None:
record = run_until_acceptance(EvenAcceptanceSystem(), CounterState(), max_attempts=5)
assert record.accepted_count == 1
assert record.attempt_count == 2
assert record.final_state == CounterState(2)
assert record.accepted[0].candidate == 2
assert record.attempts[0].accepted is False
assert record.attempts[1].accepted is True
def test_run_until_acceptance_count_collects_multiple_acceptances() -> None:
record = run_until_acceptance_count(
EvenAcceptanceSystem(),
CounterState(),
accepted_count=3,
max_attempts_per_accept=5,
)
assert record.accepted_count == 3
assert record.attempt_count == 6
assert tuple(step.candidate for step in record.accepted) == (2, 4, 6)
assert record.final_state == CounterState(6)
def test_merge_execution_records_appends_continuation() -> None:
first = run_until_acceptance_count(
EvenAcceptanceSystem(),
CounterState(),
accepted_count=1,
max_attempts_per_accept=5,
)
second = run_until_acceptance_count(
EvenAcceptanceSystem(),
first.final_state,
accepted_count=2,
max_attempts_per_accept=5,
)
merged = merge_execution_records(first, second)
assert merged.accepted_count == 3
assert merged.attempt_count == 6
assert tuple(step.candidate for step in merged.accepted) == (2, 4, 6)
assert merged.final_state == CounterState(6)

tests/test_schema.py Normal file

@ -0,0 +1,40 @@
from __future__ import annotations
import json
from synaptopus.graph import GraphEdgeSpec, GraphSchema, categorizer_node, critic_node, generator_node, policy_node
from testsupport import (
AcceptEvenHighPolicy,
CounterState,
EvenCritic,
IncrementingGenerator,
ThresholdCategorizer,
)
def test_graph_node_specs_and_schema_are_json_safe(tmp_path) -> None:
nodes = (
generator_node("gen", IncrementingGenerator()),
critic_node("crit", EvenCritic()),
categorizer_node("cat", ThresholdCategorizer()),
policy_node("pol", AcceptEvenHighPolicy()),
)
schema = GraphSchema(
nodes=tuple(node.spec() for node in nodes),
edges=(
GraphEdgeSpec("gen", "candidate", "crit", "candidate"),
GraphEdgeSpec("gen", "candidate", "cat", "candidate"),
GraphEdgeSpec("gen", "candidate", "pol", "candidate"),
GraphEdgeSpec("crit", "critique", "pol", "critique"),
GraphEdgeSpec("cat", "category", "pol", "category"),
),
)
destination = tmp_path / "graph.json"
schema.save_json(destination)
loaded = json.loads(destination.read_text(encoding="utf-8"))
assert loaded["artifact_type"] == "graph_schema"
assert loaded["payload"]["nodes"][0]["node_id"] == "gen"
assert loaded["payload"]["nodes"][0]["node_type"] == "generator"
assert loaded["payload"]["edges"][0]["source_output"] == "candidate"


@ -0,0 +1,92 @@
from __future__ import annotations
import json
from dataclasses import dataclass
from synaptopus.runtime import StepTrace, run_until_acceptance_count
from synaptopus.serialization import (
deserialize_execution_record,
save_execution_record_json,
serialize_execution_record,
)
@dataclass(frozen=True)
class SequenceState:
accepted: tuple[int, ...] = ()
attempts: int = 0
class EvenAcceptanceSystem:
def step(self, state: SequenceState) -> StepTrace[SequenceState, int, dict[str, int]]:
candidate = state.attempts + 1
accepted = candidate % 2 == 0
next_state = SequenceState(
accepted=state.accepted + ((candidate,) if accepted else ()),
attempts=state.attempts + 1,
)
return StepTrace(
previous_state=state,
next_state=next_state,
candidate=candidate,
accepted=accepted,
elapsed_seconds=0.0,
metadata={"attempt": state.attempts + 1},
)
def test_execution_record_serialization_converts_dataclasses() -> None:
record = run_until_acceptance_count(
EvenAcceptanceSystem(),
SequenceState(),
accepted_count=2,
max_attempts_per_accept=4,
)
serialized = serialize_execution_record(record)
assert serialized.final_state["accepted"] == [2, 4]
assert serialized.attempts[0].previous_state["attempts"] == 0
assert serialized.attempts[1].candidate == 2
assert serialized.accepted[0].metadata["attempt"] == 2
def test_execution_record_serialization_saves_json(tmp_path) -> None:
record = run_until_acceptance_count(
EvenAcceptanceSystem(),
SequenceState(),
accepted_count=1,
max_attempts_per_accept=4,
)
destination = tmp_path / "trace.json"
save_execution_record_json(record, destination)
loaded = json.loads(destination.read_text(encoding="utf-8"))
assert loaded["artifact_type"] == "execution_trace"
assert loaded["payload"]["accepted"][0]["candidate"] == 2
assert loaded["payload"]["final_state"]["accepted"] == [2]
def test_execution_record_deserialization_restores_typed_record() -> None:
record = run_until_acceptance_count(
EvenAcceptanceSystem(),
SequenceState(),
accepted_count=2,
max_attempts_per_accept=4,
)
serialized = serialize_execution_record(record)
restored = deserialize_execution_record(
serialized,
state_decoder=lambda data: SequenceState(
accepted=tuple(int(value) for value in data["accepted"]),
attempts=int(data["attempts"]),
),
candidate_decoder=lambda data: int(data),
metadata_decoder=lambda data: None if data is None else {"attempt": int(data["attempt"])},
)
assert restored.final_state == record.final_state
assert tuple(step.candidate for step in restored.accepted) == (2, 4)
assert restored.attempts[0].metadata == {"attempt": 1}

tests/test_snapshots.py Normal file

@ -0,0 +1,92 @@
from __future__ import annotations
import json
from synaptopus.examples import ParityPressureState, XorDemoState, build_parity_pressure_demo, build_xor_novelty_demo
from synaptopus.runtime import run_until_acceptance_count
from synaptopus.snapshots import (
create_demo_snapshot,
load_demo_snapshot_json,
restore_demo_snapshot,
resume_demo_snapshot,
save_demo_snapshot_json,
)
def test_xor_snapshot_round_trip_restores_state_and_system(tmp_path) -> None:
system = build_xor_novelty_demo()
record = run_until_acceptance_count(
system,
XorDemoState(),
accepted_count=1,
max_attempts_per_accept=4,
)
snapshot = create_demo_snapshot(
"xor_novelty",
system=system,
record=record,
parameters={"accepted_count": 1},
)
destination = tmp_path / "snapshot.json"
save_demo_snapshot_json(snapshot, destination)
loaded = load_demo_snapshot_json(destination)
restored_system, restored_record = restore_demo_snapshot(loaded)
assert destination.exists()
assert restored_system is not None
assert restored_record.final_state == record.final_state
assert tuple(step.candidate for step in restored_record.accepted) == ((0, 1),)
def test_parity_snapshot_resume_matches_continuous_run() -> None:
checkpoint_system = build_parity_pressure_demo()
partial = run_until_acceptance_count(
checkpoint_system,
ParityPressureState(),
accepted_count=2,
max_attempts_per_accept=12,
)
snapshot = create_demo_snapshot(
"parity_pressure",
system=checkpoint_system,
record=partial,
)
resumed_system, resumed = resume_demo_snapshot(
snapshot,
additional_accepted_count=2,
max_attempts_per_accept=12,
)
continuous = run_until_acceptance_count(
build_parity_pressure_demo(),
ParityPressureState(),
accepted_count=4,
max_attempts_per_accept=12,
)
assert resumed_system is not None
assert tuple(step.candidate for step in resumed.accepted) == tuple(
step.candidate for step in continuous.accepted
)
assert resumed.attempt_count == continuous.attempt_count
assert resumed.final_state == continuous.final_state
def test_snapshot_json_uses_demo_snapshot_artifact_type(tmp_path) -> None:
system = build_xor_novelty_demo()
record = run_until_acceptance_count(
system,
XorDemoState(),
accepted_count=1,
max_attempts_per_accept=4,
)
snapshot = create_demo_snapshot("xor_novelty", system=system, record=record)
destination = tmp_path / "snapshot.json"
save_demo_snapshot_json(snapshot, destination)
payload = json.loads(destination.read_text(encoding="utf-8"))
assert payload["artifact_type"] == "demo_snapshot"
assert payload["payload"]["demo_name"] == "xor_novelty"

tests/testsupport.py Normal file

@ -0,0 +1,39 @@
from __future__ import annotations
from dataclasses import dataclass
from synaptopus.architectures import PolicyDecision
@dataclass(frozen=True)
class CounterState:
value: int = 0
class IncrementingGenerator:
def generate(self, state: CounterState) -> int:
return state.value + 1
class EvenCritic:
def critique(self, state: CounterState, candidate: int) -> bool:
return candidate % 2 == 0
class ThresholdCategorizer:
def categorize(self, state: CounterState, candidate: int) -> str:
return "high" if candidate >= 2 else "low"
class AcceptEvenHighPolicy:
def decide(
self,
state: CounterState,
candidate: int,
critique: bool,
category: str,
) -> PolicyDecision:
return PolicyDecision(
accepted=critique and category == "high",
label="accept" if critique and category == "high" else "reject",
)

typescript/README.md Normal file

@ -0,0 +1,20 @@
# TypeScript Contracts
This directory contains TypeScript interfaces for the current Synaptopus JSON artifacts:
- artifact manifest
- versioned artifact envelopes
- graph schema
- execution trace
- run report
The interfaces mirror the formats documented in [../docs/FORMATS.md](../docs/FORMATS.md).
The current scope is intentionally narrow:
- stable artifact contracts
- typed loaders and type guards
- XOR novelty demo metadata interfaces
- version-aware parsing boundaries
This is the first frontend-facing layer for a future browser workbench or trace viewer.
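As a sketch of the intended consumption pattern, the snippet below inlines a simplified stand-in for the envelope guard (the real package exports richer guards and loaders from `src/index.ts`); the sample JSON string is invented for illustration:

```typescript
// Sketch: validate an artifact envelope before trusting its payload.
// This is a simplified inline version of the guard style used in
// src/guards.ts, not the package's actual export.
interface Envelope {
  artifact_type: string;
  schema_version: string;
  payload: unknown;
  metadata: Record<string, unknown>;
}

function isEnvelope(value: unknown): value is Envelope {
  if (typeof value !== "object" || value === null) return false;
  const candidate = value as Record<string, unknown>;
  return (
    typeof candidate.artifact_type === "string" &&
    typeof candidate.schema_version === "string" &&
    "payload" in candidate &&
    typeof candidate.metadata === "object" &&
    candidate.metadata !== null
  );
}

// Invented sample payload, for illustration only.
const raw =
  '{"artifact_type":"run_report","schema_version":"1","payload":{},"metadata":{}}';
const parsed: unknown = JSON.parse(raw);
if (isEnvelope(parsed)) {
  console.log(parsed.artifact_type); // prints "run_report"
}
```

The point of the guard-then-narrow pattern is that `unknown` JSON never reaches typed code without a runtime check, which is the version-aware parsing boundary the package aims to provide.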

typescript/package.json Normal file

@ -0,0 +1,10 @@
{
"name": "@synaptopus/contracts",
"version": "0.1.0",
"private": true,
"type": "module",
"description": "TypeScript contracts for Synaptopus graph, trace, and report artifacts.",
"exports": {
".": "./src/index.ts"
}
}


@ -0,0 +1,79 @@
export interface GraphNodeSpec {
node_id: string;
node_type: string;
input_names: string[];
output_names: string[];
}
export interface GraphEdgeSpec {
source_node_id: string;
source_output: string;
target_node_id: string;
target_input: string;
}
export interface GraphSchema {
nodes: GraphNodeSpec[];
edges: GraphEdgeSpec[];
}
export interface ArtifactEnvelope<Payload = unknown> {
artifact_type: string;
schema_version: string;
payload: Payload;
metadata: Record<string, unknown>;
}
export interface ArtifactManifestEntry {
artifact_type: string;
file_name: string;
}
export interface ArtifactManifest {
schema_version: string;
artifacts: ArtifactManifestEntry[];
metadata: Record<string, unknown>;
}
export interface SequenceAnalysis {
item_count: number;
alphabet_size: number;
unigram_entropy_bits: number;
conditional_entropy_bits: number;
normalized_entropy: number;
predictability: number;
redundancy: number;
}
export interface RunReport {
parameters: Record<string, unknown>;
accepted_count: number;
attempt_count: number;
total_seconds: number;
sequence_analysis: Partial<SequenceAnalysis>;
average_attempts_per_accept: number;
}
export interface SerializedStepTrace<
State = unknown,
Candidate = unknown,
Metadata = unknown
> {
previous_state: State;
next_state: State;
candidate: Candidate;
accepted: boolean;
elapsed_seconds: number;
metadata: Metadata | null;
}
export interface SerializedExecutionRecord<
State = unknown,
Candidate = unknown,
Metadata = unknown
> {
accepted: SerializedStepTrace<State, Candidate, Metadata>[];
attempts: SerializedStepTrace<State, Candidate, Metadata>[];
final_state: State;
total_seconds: number;
}

typescript/src/guards.ts Normal file

@ -0,0 +1,96 @@
import type {
ArtifactEnvelope,
ArtifactManifest,
GraphEdgeSpec,
GraphNodeSpec,
GraphSchema,
RunReport,
SerializedExecutionRecord,
} from "./contracts.js";
function isObject(value: unknown): value is Record<string, unknown> {
return typeof value === "object" && value !== null;
}
function isStringArray(value: unknown): value is string[] {
return Array.isArray(value) && value.every((item) => typeof item === "string");
}
export function isGraphNodeSpec(value: unknown): value is GraphNodeSpec {
return (
isObject(value) &&
typeof value.node_id === "string" &&
typeof value.node_type === "string" &&
isStringArray(value.input_names) &&
isStringArray(value.output_names)
);
}
export function isGraphEdgeSpec(value: unknown): value is GraphEdgeSpec {
return (
isObject(value) &&
typeof value.source_node_id === "string" &&
typeof value.source_output === "string" &&
typeof value.target_node_id === "string" &&
typeof value.target_input === "string"
);
}
export function isGraphSchema(value: unknown): value is GraphSchema {
return (
isObject(value) &&
Array.isArray(value.nodes) &&
value.nodes.every(isGraphNodeSpec) &&
Array.isArray(value.edges) &&
value.edges.every(isGraphEdgeSpec)
);
}
export function isArtifactEnvelope(value: unknown): value is ArtifactEnvelope {
return (
isObject(value) &&
typeof value.artifact_type === "string" &&
typeof value.schema_version === "string" &&
"payload" in value &&
isObject(value.metadata)
);
}
export function isArtifactManifest(value: unknown): value is ArtifactManifest {
return (
isObject(value) &&
typeof value.schema_version === "string" &&
Array.isArray(value.artifacts) &&
value.artifacts.every(
(item) =>
isObject(item) &&
typeof item.artifact_type === "string" &&
typeof item.file_name === "string"
) &&
isObject(value.metadata)
);
}
export function isSerializedExecutionRecord(
value: unknown
): value is SerializedExecutionRecord {
return (
isObject(value) &&
Array.isArray(value.accepted) &&
Array.isArray(value.attempts) &&
"final_state" in value &&
typeof value.total_seconds === "number"
);
}
export function isRunReport(value: unknown): value is RunReport {
return (
isObject(value) &&
isObject(value.parameters) &&
typeof value.accepted_count === "number" &&
typeof value.attempt_count === "number" &&
typeof value.total_seconds === "number" &&
isObject(value.sequence_analysis) &&
typeof value.average_attempts_per_accept === "number"
);
}

typescript/src/index.ts Normal file

@ -0,0 +1,39 @@
export type {
ArtifactEnvelope,
ArtifactManifest,
ArtifactManifestEntry,
GraphEdgeSpec,
GraphNodeSpec,
GraphSchema,
RunReport,
SequenceAnalysis,
SerializedExecutionRecord,
SerializedStepTrace,
} from "./contracts.js";
export type {
BinaryPair,
XorCategory,
XorCritique,
XorCritiqueLayerState,
XorDecision,
XorDemoState,
XorExecutionRecord,
XorStepMetadata,
XorStepTrace,
} from "./xor-demo.js";
export {
isArtifactEnvelope,
isArtifactManifest,
isGraphEdgeSpec,
isGraphNodeSpec,
isGraphSchema,
isRunReport,
isSerializedExecutionRecord,
} from "./guards.js";
export {
parseArtifactEnvelope,
parseArtifactManifest,
parseExecutionTrace,
parseGraphSchema,
parseRunReport,
} from "./loaders.js";

typescript/src/loaders.ts Normal file
@@ -0,0 +1,65 @@
import type {
ArtifactEnvelope,
ArtifactManifest,
GraphSchema,
RunReport,
SerializedExecutionRecord,
} from "./contracts.js";
import {
isArtifactEnvelope,
isArtifactManifest,
isGraphSchema,
isRunReport,
isSerializedExecutionRecord,
} from "./guards.js";
export function parseArtifactEnvelope<Payload = unknown>(
json: string,
expectedType?: string
): ArtifactEnvelope<Payload> {
const value: unknown = JSON.parse(json);
if (!isArtifactEnvelope(value)) {
throw new Error("Invalid Synaptopus artifact envelope");
}
if (expectedType && value.artifact_type !== expectedType) {
throw new Error(
`Unexpected artifact type: expected ${expectedType}, got ${value.artifact_type}`
);
}
return value as ArtifactEnvelope<Payload>;
}
export function parseArtifactManifest(json: string): ArtifactManifest {
const value: unknown = JSON.parse(json);
if (!isArtifactManifest(value)) {
throw new Error("Invalid Synaptopus artifact manifest");
}
return value;
}
export function parseGraphSchema(json: string): GraphSchema {
const value = parseArtifactEnvelope<GraphSchema>(json, "graph_schema").payload;
if (!isGraphSchema(value)) {
throw new Error("Invalid Synaptopus graph schema");
}
return value;
}
export function parseExecutionTrace(json: string): SerializedExecutionRecord {
const value = parseArtifactEnvelope<SerializedExecutionRecord>(
json,
"execution_trace"
).payload;
if (!isSerializedExecutionRecord(value)) {
throw new Error("Invalid Synaptopus execution trace");
}
return value;
}
export function parseRunReport(json: string): RunReport {
const value = parseArtifactEnvelope<RunReport>(json, "run_report").payload;
if (!isRunReport(value)) {
throw new Error("Invalid Synaptopus run report");
}
return value;
}
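
The loaders above all funnel through parseArtifactEnvelope before narrowing the payload. A standalone sketch of that round trip (it inlines a minimal envelope check instead of importing ./guards.js so the snippet runs on its own; the payload shape here is illustrative, not taken from contracts.ts):

```typescript
// Minimal stand-in for ArtifactEnvelope (assumed shape, matching the fields the guards check).
interface Envelope<P> {
  artifact_type: string;
  schema_version: string;
  payload: P;
  metadata: Record<string, unknown>;
}

function parseEnvelope<P>(json: string, expectedType?: string): Envelope<P> {
  const value = JSON.parse(json) as Envelope<P>;
  if (
    typeof value !== "object" || value === null ||
    typeof value.artifact_type !== "string" ||
    typeof value.schema_version !== "string" ||
    !("payload" in value) ||
    typeof value.metadata !== "object" || value.metadata === null
  ) {
    throw new Error("Invalid artifact envelope");
  }
  if (expectedType && value.artifact_type !== expectedType) {
    throw new Error(
      `Unexpected artifact type: expected ${expectedType}, got ${value.artifact_type}`
    );
  }
  return value;
}

const json = JSON.stringify({
  artifact_type: "run_report",
  schema_version: "1.0",
  payload: { accepted_count: 4 },
  metadata: {},
});
const report = parseEnvelope<{ accepted_count: number }>(json, "run_report");
console.log(report.payload.accepted_count); // 4
```

The real parseArtifactEnvelope delegates the structural check to isArtifactEnvelope; the inline version above mirrors it only closely enough to demonstrate the expected-type guard throwing on a mismatch.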

typescript/src/xor-demo.ts Normal file
@@ -0,0 +1,56 @@
import type {
SerializedExecutionRecord,
SerializedStepTrace,
} from "./contracts.js";
export type BinaryPair = [number, number];
export interface XorDemoState {
accepted: BinaryPair[];
attempts: number;
}
export interface XorCritiqueLayerState {
activations: number[];
deltas: number[];
biases: number[];
}
export interface XorCritique {
outputs: number[];
loss: number;
layer_states: XorCritiqueLayerState[];
}
export interface XorCategory {
winner: number;
matched: boolean;
new_category: boolean;
delta_vigilance: boolean;
committed_categories: number;
vigilance: number;
expected_vector: number[];
}
export interface XorDecision {
accepted: boolean;
label: string;
}
export interface XorStepMetadata {
critique: XorCritique;
category: XorCategory;
decision: XorDecision;
}
export type XorStepTrace = SerializedStepTrace<
XorDemoState,
BinaryPair,
XorStepMetadata
>;
export type XorExecutionRecord = SerializedExecutionRecord<
XorDemoState,
BinaryPair,
XorStepMetadata
>;
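
To make the generic instantiations concrete, here is a hedged sketch of a single XOR attempt. SerializedStepTrace is stood in by a minimal local interface (an assumed shape, matching only the fields the viewer reads: previous_state, next_state, candidate, accepted, elapsed_seconds, metadata), and the values are illustrative:

```typescript
// Stand-in for SerializedStepTrace<State, Candidate, Metadata> from contracts.ts (assumed shape).
interface StepTrace<S, C, M> {
  previous_state: S;
  next_state: S;
  candidate: C;
  accepted: boolean;
  elapsed_seconds: number;
  metadata: M;
}

type BinaryPair = [number, number];
interface XorDemoState {
  accepted: BinaryPair[];
  attempts: number;
}

// One accepted attempt: the demo state gains a pair and the attempt counter advances.
const attempt: StepTrace<
  XorDemoState,
  BinaryPair,
  { decision: { accepted: boolean; label: string } }
> = {
  previous_state: { accepted: [], attempts: 0 },
  next_state: { accepted: [[0, 1]], attempts: 1 },
  candidate: [0, 1],
  accepted: true,
  elapsed_seconds: 0.0042,
  metadata: { decision: { accepted: true, label: "xor=1" } },
};
console.log(attempt.next_state.accepted.length); // 1
```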

typescript/tsconfig.json Normal file
@@ -0,0 +1,12 @@
{
"compilerOptions": {
"target": "ES2020",
"module": "ES2020",
"moduleResolution": "Bundler",
"strict": true,
"declaration": true,
"noEmit": true,
"skipLibCheck": true
},
"include": ["src/**/*.ts"]
}

viewer/app.js Normal file
@@ -0,0 +1,351 @@
const state = {
manifest: null,
graph: null,
trace: null,
report: null,
filter: "all",
};
const ids = {
manifestUrl: document.getElementById("manifest-url"),
manifestUrlLoad: document.getElementById("manifest-url-load"),
manifestFile: document.getElementById("manifest-file"),
graphFile: document.getElementById("graph-file"),
traceFile: document.getElementById("trace-file"),
reportFile: document.getElementById("report-file"),
summaryStatus: document.getElementById("summary-status"),
graphStatus: document.getElementById("graph-status"),
traceStatus: document.getElementById("trace-status"),
stats: document.getElementById("stats"),
analysisMetrics: document.getElementById("analysis-metrics"),
nodeChips: document.getElementById("node-chips"),
edgeList: document.getElementById("edge-list"),
traceList: document.getElementById("trace-list"),
traceTemplate: document.getElementById("trace-card-template"),
filterButtons: Array.from(document.querySelectorAll(".filter")),
};
ids.manifestUrlLoad.addEventListener("click", () => loadManifestFromUrl());
ids.manifestUrl.addEventListener("keydown", (event) => {
if (event.key === "Enter") {
event.preventDefault();
loadManifestFromUrl();
}
});
ids.manifestFile.addEventListener("change", (event) => loadJsonFile(event, "manifest"));
ids.graphFile.addEventListener("change", (event) => loadJsonFile(event, "graph"));
ids.traceFile.addEventListener("change", (event) => loadJsonFile(event, "trace"));
ids.reportFile.addEventListener("change", (event) => loadJsonFile(event, "report"));
for (const button of ids.filterButtons) {
button.addEventListener("click", () => {
state.filter = button.dataset.filter ?? "all";
syncFilterButtons();
renderTrace();
});
}
syncFilterButtons();
renderSummary();
renderGraph();
renderTrace();
async function loadJsonFile(event, kind) {
const input = event.currentTarget;
const file = input.files?.[0];
if (!file) {
return;
}
try {
const text = await file.text();
const parsed = JSON.parse(text);
if (kind === "manifest") {
validateManifest(parsed);
state.manifest = parsed;
ids.summaryStatus.textContent =
`Manifest loaded: schema ${parsed.schema_version}. ` +
"For full auto-load, use the manifest URL field while serving the artifacts over HTTP.";
return;
}
const envelope = validateEnvelope(parsed, expectedArtifactType(kind));
state[kind] = envelope.payload;
if (kind === "graph") {
renderGraph();
} else if (kind === "trace") {
renderTrace();
} else if (kind === "report") {
renderSummary();
}
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
window.alert(`Failed to load ${kind} JSON: ${message}`);
} finally {
input.value = "";
}
}
async function loadManifestFromUrl() {
const manifestUrl = ids.manifestUrl.value.trim();
if (!manifestUrl) {
window.alert("Enter a manifest URL first.");
return;
}
try {
const manifestResponse = await fetch(manifestUrl);
if (!manifestResponse.ok) {
throw new Error(`Manifest request failed with status ${manifestResponse.status}`);
}
const manifest = await manifestResponse.json();
validateManifest(manifest);
state.manifest = manifest;
const baseUrl = new URL(manifestUrl, window.location.href);
const parent = new URL("./", baseUrl);
const artifactMap = Object.fromEntries(
manifest.artifacts.map((artifact) => [artifact.artifact_type, artifact.file_name])
);
const graph = await fetchArtifactJson(parent, artifactMap.graph_schema, "graph_schema");
const trace = await fetchArtifactJson(
parent,
artifactMap.execution_trace,
"execution_trace"
);
const report = await fetchArtifactJson(parent, artifactMap.run_report, "run_report");
state.graph = graph.payload;
state.trace = trace.payload;
state.report = report.payload;
ids.summaryStatus.textContent = `Loaded manifest and artifacts from ${baseUrl.origin}.`;
renderGraph();
renderTrace();
renderSummary();
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
window.alert(`Failed to load manifest URL: ${message}`);
}
}
async function fetchArtifactJson(baseUrl, fileName, expectedType) {
if (!fileName) {
throw new Error(`Manifest does not include ${expectedType}`);
}
const artifactUrl = new URL(fileName, baseUrl);
const response = await fetch(artifactUrl);
if (!response.ok) {
throw new Error(
`${expectedType} request failed with status ${response.status}`
);
}
const json = await response.json();
return validateEnvelope(json, expectedType);
}
function expectedArtifactType(kind) {
if (kind === "graph") return "graph_schema";
if (kind === "trace") return "execution_trace";
if (kind === "report") return "run_report";
return kind;
}
function validateEnvelope(value, expectedType) {
  if (
    !value ||
    typeof value !== "object" ||
    typeof value.artifact_type !== "string" ||
    typeof value.schema_version !== "string" ||
    !("payload" in value) ||
    typeof value.metadata !== "object" ||
    value.metadata === null
  ) {
    throw new Error("Invalid artifact envelope");
  }
if (value.artifact_type !== expectedType) {
throw new Error(
`Unexpected artifact type: expected ${expectedType}, got ${value.artifact_type}`
);
}
return value;
}
function validateManifest(value) {
if (
!value ||
typeof value !== "object" ||
typeof value.schema_version !== "string" ||
!Array.isArray(value.artifacts)
) {
throw new Error("Invalid artifact manifest");
}
}
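
validateManifest only checks the outer shape; the artifact entries are consumed later via an artifact_type-to-file_name map. A minimal manifest that passes the check and carries the three artifact types the URL loader expects (file names are illustrative, echoing the ones the page's intro mentions):

```typescript
// A smallest-possible manifest object (illustrative values).
const manifest = {
  schema_version: "1.0",
  artifacts: [
    { artifact_type: "graph_schema", file_name: "graph.json" },
    { artifact_type: "execution_trace", file_name: "trace.json" },
    { artifact_type: "run_report", file_name: "report.json" },
  ],
  metadata: {},
};

// The same lookup loadManifestFromUrl builds before fetching each artifact.
const byType = Object.fromEntries(
  manifest.artifacts.map((a) => [a.artifact_type, a.file_name])
);
console.log(byType.execution_trace); // "trace.json"
```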
function syncFilterButtons() {
for (const button of ids.filterButtons) {
const active = button.dataset.filter === state.filter;
button.classList.toggle("is-active", active);
}
}
function renderSummary() {
ids.stats.replaceChildren();
ids.analysisMetrics.replaceChildren();
if (!state.report) {
ids.summaryStatus.textContent = "Load a report to populate metrics.";
return;
}
ids.summaryStatus.textContent = "Report loaded.";
appendStat("Accepted", state.report.accepted_count, ids.stats);
appendStat("Attempts", state.report.attempt_count, ids.stats);
appendStat(
"Avg / Accept",
formatNumber(state.report.average_attempts_per_accept),
ids.stats
);
appendStat("Seconds", formatNumber(state.report.total_seconds), ids.stats);
const analysis = state.report.sequence_analysis ?? {};
if (Object.keys(analysis).length === 0) {
const empty = document.createElement("div");
empty.className = "empty-note";
empty.textContent = "No sequence analysis in this report.";
ids.analysisMetrics.append(empty);
return;
}
for (const [key, value] of Object.entries(analysis)) {
appendMetric(key, value, ids.analysisMetrics);
}
}
function renderGraph() {
ids.nodeChips.replaceChildren();
ids.edgeList.replaceChildren();
if (!state.graph) {
ids.graphStatus.textContent = "Load a graph schema to inspect node wiring.";
return;
}
ids.graphStatus.textContent = `${state.graph.nodes.length} nodes, ${state.graph.edges.length} edges.`;
for (const node of state.graph.nodes) {
const chip = document.createElement("article");
chip.className = "chip";
chip.innerHTML = `
<p class="chip-type">${escapeHtml(node.node_type)}</p>
<h3>${escapeHtml(node.node_id)}</h3>
<p>${escapeHtml(node.input_names.join(", ") || "no inputs")} -> ${escapeHtml(
node.output_names.join(", ") || "no outputs"
)}</p>
`;
ids.nodeChips.append(chip);
}
for (const edge of state.graph.edges) {
const item = document.createElement("li");
item.innerHTML = `<code>${escapeHtml(edge.source_node_id)}.${escapeHtml(
edge.source_output
)}</code> -> <code>${escapeHtml(edge.target_node_id)}.${escapeHtml(
edge.target_input
)}</code>`;
ids.edgeList.append(item);
}
}
function renderTrace() {
ids.traceList.replaceChildren();
if (!state.trace) {
ids.traceStatus.textContent =
"Load a trace to inspect candidate-by-candidate behavior.";
return;
}
const attempts = state.trace.attempts ?? [];
const filtered = attempts.filter((attempt) => {
if (state.filter === "accepted") {
return attempt.accepted;
}
if (state.filter === "rejected") {
return !attempt.accepted;
}
return true;
});
ids.traceStatus.textContent = `${filtered.length} shown of ${attempts.length} attempts.`;
filtered.forEach((trace, index) => {
const fragment = ids.traceTemplate.content.cloneNode(true);
const card = fragment.querySelector(".trace-card");
const traceLabel = fragment.querySelector(".trace-label");
const traceTitle = fragment.querySelector(".trace-title");
const badge = fragment.querySelector(".badge");
const metrics = fragment.querySelector(".metric-list");
const metadata = fragment.querySelector(".json-block");
const transition = fragment.querySelector(".transition-block");
card.dataset.accepted = String(trace.accepted);
traceLabel.textContent = `Attempt ${index + 1}`;
traceTitle.textContent = `Candidate ${JSON.stringify(trace.candidate)}`;
badge.textContent = trace.accepted ? "Accepted" : "Rejected";
badge.classList.toggle("accepted", trace.accepted);
badge.classList.toggle("rejected", !trace.accepted);
appendMetric("elapsed_seconds", trace.elapsed_seconds, metrics);
const score =
trace.metadata?.critique?.outputs?.[0] ?? trace.metadata?.decision?.accepted;
if (score !== undefined) {
appendMetric("signal", score, metrics);
}
metadata.textContent = JSON.stringify(trace.metadata, null, 2);
transition.textContent = JSON.stringify(
{
previous_state: trace.previous_state,
next_state: trace.next_state,
},
null,
2
);
ids.traceList.append(fragment);
});
}
function appendStat(label, value, container) {
const article = document.createElement("article");
article.className = "stat-card";
article.innerHTML = `<p>${escapeHtml(label)}</p><h3>${escapeHtml(String(value))}</h3>`;
container.append(article);
}
function appendMetric(label, value, container) {
const dt = document.createElement("dt");
dt.textContent = label;
const dd = document.createElement("dd");
dd.textContent =
typeof value === "number" ? formatNumber(value) : JSON.stringify(value);
container.append(dt, dd);
}
function formatNumber(value) {
if (typeof value !== "number") {
return String(value);
}
if (Number.isInteger(value)) {
return String(value);
}
return value.toFixed(6).replace(/0+$/, "").replace(/\.$/, "");
}
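
formatNumber's trailing-zero trimming is easy to check in isolation (the function is copied verbatim, with a type annotation added so it compiles standalone):

```typescript
function formatNumber(value: unknown): string {
  if (typeof value !== "number") {
    return String(value);
  }
  if (Number.isInteger(value)) {
    return String(value);
  }
  // Fix to six decimals, then strip trailing zeros and a dangling decimal point.
  return value.toFixed(6).replace(/0+$/, "").replace(/\.$/, "");
}

console.log(formatNumber(3));     // "3"
console.log(formatNumber(0.5));   // "0.5"
console.log(formatNumber(1 / 3)); // "0.333333"
```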
function escapeHtml(value) {
return value
.replaceAll("&", "&amp;")
.replaceAll("<", "&lt;")
.replaceAll(">", "&gt;");
}

viewer/index.html Normal file
@@ -0,0 +1,121 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>Synaptopus Trace Viewer</title>
<link rel="stylesheet" href="./styles.css" />
</head>
<body>
<main class="shell">
<header class="hero">
<p class="kicker">Synaptopus</p>
<h1>Trace Viewer</h1>
<p class="lede">
Load exported <code>graph.json</code>, <code>trace.json</code>, and
<code>report.json</code> artifacts to inspect a run without any build
tooling.
</p>
</header>
<section class="panel controls">
<div class="control-block">
<label for="manifest-url">Manifest URL</label>
<div class="inline-control">
<input
id="manifest-url"
type="url"
placeholder="http://127.0.0.1:8000/artifacts/manifest.json"
/>
<button id="manifest-url-load" type="button">Load</button>
</div>
</div>
<div class="control-block">
<label for="manifest-file">Manifest</label>
<input id="manifest-file" type="file" accept=".json,application/json" />
</div>
<div class="control-block">
<label for="graph-file">Graph Schema</label>
<input id="graph-file" type="file" accept=".json,application/json" />
</div>
<div class="control-block">
<label for="trace-file">Execution Trace</label>
<input id="trace-file" type="file" accept=".json,application/json" />
</div>
<div class="control-block">
<label for="report-file">Run Report</label>
<input id="report-file" type="file" accept=".json,application/json" />
</div>
<div class="control-block command">
<span>Suggested Flow</span>
<code>python -m http.server 8000</code>
</div>
</section>
<section class="grid">
<section class="panel" id="summary-panel">
<div class="panel-header">
<h2>Run Summary</h2>
<p id="summary-status">Load a report to populate metrics.</p>
</div>
<div class="stats" id="stats"></div>
<div class="subpanel">
<h3>Sequence Analysis</h3>
<dl class="metric-list" id="analysis-metrics"></dl>
</div>
</section>
<section class="panel" id="graph-panel">
<div class="panel-header">
<h2>Graph</h2>
<p id="graph-status">Load a graph schema to inspect node wiring.</p>
</div>
<div class="subpanel">
<h3>Nodes</h3>
<div class="chips" id="node-chips"></div>
</div>
<div class="subpanel">
<h3>Edges</h3>
<ol class="edge-list" id="edge-list"></ol>
</div>
</section>
</section>
<section class="panel" id="trace-panel">
<div class="panel-header">
<h2>Attempts</h2>
<div class="trace-controls">
<button class="filter is-active" data-filter="all" type="button">All</button>
<button class="filter" data-filter="accepted" type="button">Accepted</button>
<button class="filter" data-filter="rejected" type="button">Rejected</button>
</div>
</div>
<p id="trace-status">Load a trace to inspect candidate-by-candidate behavior.</p>
<div class="trace-list" id="trace-list"></div>
</section>
</main>
<template id="trace-card-template">
<article class="trace-card">
<header class="trace-head">
<div>
<p class="trace-label"></p>
<h3 class="trace-title"></h3>
</div>
<span class="badge"></span>
</header>
<dl class="metric-list compact"></dl>
<details class="detail-block">
<summary>Metadata</summary>
<pre class="json-block"></pre>
</details>
<details class="detail-block">
<summary>State Transition</summary>
<pre class="json-block transition-block"></pre>
</details>
</article>
</template>
<script type="module" src="./app.js"></script>
</body>
</html>

viewer/styles.css Normal file
@@ -0,0 +1,337 @@
:root {
--bg: #f3ecdf;
--bg-panel: rgba(255, 250, 242, 0.82);
--ink: #1c2220;
--muted: #5f625c;
--line: rgba(28, 34, 32, 0.14);
--accent: #9e4b21;
--accent-soft: #e8c6a2;
--olive: #6f7a46;
--rose: #b0574f;
--shadow: 0 18px 50px rgba(40, 28, 17, 0.12);
}
* {
box-sizing: border-box;
}
body {
margin: 0;
min-height: 100vh;
color: var(--ink);
background:
radial-gradient(circle at top left, rgba(158, 75, 33, 0.16), transparent 28%),
radial-gradient(circle at top right, rgba(111, 122, 70, 0.18), transparent 24%),
linear-gradient(180deg, #f8f1e7 0%, var(--bg) 100%);
font-family: "Iowan Old Style", "Palatino Linotype", "Book Antiqua", Georgia, serif;
}
code,
pre,
input,
button {
font-family: "Courier New", "SFMono-Regular", Consolas, monospace;
}
.shell {
width: min(1180px, calc(100vw - 2rem));
margin: 0 auto;
padding: 2rem 0 3rem;
}
.hero {
padding: 1.25rem 0 1rem;
}
.kicker {
margin: 0;
color: var(--accent);
letter-spacing: 0.14em;
text-transform: uppercase;
font-size: 0.8rem;
}
.hero h1 {
margin: 0.2rem 0 0.5rem;
font-size: clamp(2.4rem, 5vw, 4.7rem);
line-height: 0.95;
}
.lede {
margin: 0;
max-width: 55rem;
color: var(--muted);
font-size: 1.05rem;
}
.panel {
border: 1px solid var(--line);
border-radius: 24px;
background: var(--bg-panel);
box-shadow: var(--shadow);
backdrop-filter: blur(12px);
}
.controls {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(210px, 1fr));
gap: 1rem;
padding: 1rem;
}
.control-block {
display: flex;
flex-direction: column;
gap: 0.45rem;
padding: 0.9rem 1rem;
border-radius: 18px;
background: rgba(255, 255, 255, 0.58);
}
.control-block label,
.control-block span {
font-size: 0.9rem;
color: var(--muted);
}
.control-block input {
width: 100%;
}
.inline-control {
display: flex;
gap: 0.6rem;
}
.inline-control input {
flex: 1 1 auto;
}
.inline-control button {
border: 1px solid var(--line);
border-radius: 12px;
background: var(--ink);
color: #fff8ef;
padding: 0.55rem 0.85rem;
cursor: pointer;
}
.command code {
font-size: 0.86rem;
white-space: pre-wrap;
}
.grid {
display: grid;
grid-template-columns: 1.1fr 1fr;
gap: 1rem;
margin-top: 1rem;
}
.panel-header {
display: flex;
justify-content: space-between;
align-items: start;
gap: 1rem;
padding: 1.2rem 1.2rem 0.6rem;
}
.panel-header h2,
.subpanel h3 {
margin: 0;
}
.panel-header p {
margin: 0.2rem 0 0;
color: var(--muted);
font-size: 0.95rem;
}
.stats {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(120px, 1fr));
gap: 0.8rem;
padding: 0 1.2rem 1rem;
}
.stat-card,
.chip {
padding: 0.9rem 1rem;
border: 1px solid var(--line);
border-radius: 16px;
background: rgba(255, 255, 255, 0.68);
}
.stat-card p,
.chip p {
margin: 0;
color: var(--muted);
font-size: 0.88rem;
}
.stat-card h3,
.chip h3 {
margin: 0.25rem 0 0;
}
.chip-type {
color: var(--accent) !important;
text-transform: uppercase;
letter-spacing: 0.08em;
font-size: 0.75rem !important;
}
.subpanel {
padding: 0 1.2rem 1.2rem;
}
.metric-list {
display: grid;
grid-template-columns: max-content 1fr;
gap: 0.4rem 1rem;
margin: 0.8rem 0 0;
}
.metric-list dt {
color: var(--muted);
}
.metric-list dd {
margin: 0;
text-align: right;
font-family: "Courier New", "SFMono-Regular", Consolas, monospace;
}
.chips {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(180px, 1fr));
gap: 0.8rem;
margin-top: 0.8rem;
}
.edge-list {
margin: 0.8rem 0 0;
padding-left: 1.2rem;
}
#trace-panel {
margin-top: 1rem;
}
.trace-controls {
display: flex;
gap: 0.5rem;
flex-wrap: wrap;
}
.filter {
border: 1px solid var(--line);
background: rgba(255, 255, 255, 0.72);
color: var(--ink);
border-radius: 999px;
padding: 0.45rem 0.8rem;
cursor: pointer;
}
.filter.is-active {
border-color: transparent;
background: var(--ink);
color: #fff8ef;
}
#trace-status {
padding: 0 1.2rem;
color: var(--muted);
}
.trace-list {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(260px, 1fr));
gap: 1rem;
padding: 1rem 1.2rem 1.2rem;
}
.trace-card {
padding: 1rem;
border: 1px solid var(--line);
border-radius: 18px;
background: rgba(255, 255, 255, 0.74);
}
.trace-head {
display: flex;
justify-content: space-between;
gap: 1rem;
align-items: start;
}
.trace-label {
margin: 0;
color: var(--muted);
font-size: 0.85rem;
}
.trace-title {
margin: 0.2rem 0 0;
font-size: 1.05rem;
}
.badge {
border-radius: 999px;
padding: 0.3rem 0.65rem;
font-size: 0.82rem;
background: var(--accent-soft);
}
.badge.accepted {
background: rgba(111, 122, 70, 0.18);
color: #3f5120;
}
.badge.rejected {
background: rgba(176, 87, 79, 0.16);
color: #7d2f2a;
}
.compact {
margin-top: 0.8rem;
}
.detail-block {
margin-top: 0.8rem;
}
.detail-block summary {
cursor: pointer;
color: var(--accent);
}
.json-block {
margin: 0.6rem 0 0;
padding: 0.85rem;
border-radius: 14px;
background: #f7f1e8;
border: 1px solid rgba(28, 34, 32, 0.08);
overflow-x: auto;
font-size: 0.82rem;
line-height: 1.45;
}
.empty-note {
color: var(--muted);
margin-top: 0.8rem;
}
@media (max-width: 900px) {
.grid {
grid-template-columns: 1fr;
}
.panel-header {
flex-direction: column;
}
.inline-control {
flex-direction: column;
}
}