# Bench

Runs a synthetic finite-state “puzzle belt” over a batch of boxes.
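
For a rough sense of what is being timed, here is a hypothetical, self-contained sketch of a batch of boxes stepped through a toy finite-state machine; the state names, transition rule, and batch sizes are invented for illustration and are not taken from the bench scripts.

```python
# Hypothetical sketch of a pure-speed loop over a synthetic FSM.
# All names and numbers here are illustrative, not from run_bench.py.
import random
import time

N_BOXES = 10_000
N_STEPS = 100

def step(state: str, action: int) -> str:
    """Toy transition function: a fixed, cheap FSM update per box."""
    if state == "LOCKED" and action == 1:
        return "OPEN"
    if state == "OPEN" and action == 2:
        return "DONE"
    return state

boxes = ["LOCKED"] * N_BOXES
rng = random.Random(0)

t0 = time.perf_counter()
for _ in range(N_STEPS):
    boxes = [step(s, rng.randrange(3)) for s in boxes]
elapsed = time.perf_counter() - t0
print(f"{N_BOXES * N_STEPS / elapsed:,.0f} FSM steps/sec")
```
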
- `run_bench.py`: pure speed micro-benchmark (synthetic FSM)
- `run_curiosity_demo.py`: demonstrates **non-advancing PEEK** with **k-ary sequences**
  and logs a CSV of results per segment, using two puzzle families:
  - **Informative**: `EAT` is valuable *after* `PEEK`, costly otherwise
  - **Uninformative**: `PEEK` yields cost but no benefit

  Expect higher peek rates in the informative segments only (a toy illustration follows this list).
- `plot_curiosity.py`: reads the CSV and renders summary figures into an output directory
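
To make the two families concrete, the toy model below shows why a peek-then-eat policy beats eating blindly in the informative family and loses in the uninformative one, which is the intuition behind expecting higher peek rates only in informative segments. The peek cost, `EAT` payoffs, and k value are assumptions made for illustration and are not read from `run_curiosity_demo.py`.

```python
# Hypothetical toy model of the two puzzle families. The peek cost, EAT
# payoffs, and K are invented for illustration, not taken from the demo.
import random

PEEK_COST = 0.1   # PEEK always costs this and never advances the sequence
K = 4             # k-ary sequence: each box hides one of K symbols

def play_box(informative: bool, peek_first: bool, rng: random.Random) -> float:
    """Payoff for one box under a fixed 'peek then eat' or 'eat blindly' policy."""
    hidden = rng.randrange(K)
    payoff = 0.0
    guess = rng.randrange(K)           # eat blindly by default
    if peek_first:
        payoff -= PEEK_COST            # non-advancing PEEK: pay the cost, stay in place
        if informative:
            guess = hidden             # informative family: PEEK reveals the right EAT
    payoff += 1.0 if guess == hidden else -0.25   # EAT advances the sequence
    return payoff

rng = random.Random(0)
for informative in (True, False):
    family = "informative" if informative else "uninformative"
    for peek_first in (True, False):
        avg = sum(play_box(informative, peek_first, rng) for _ in range(20_000)) / 20_000
        print(f"{family:<13} peek_first={peek_first}: avg payoff {avg:+.3f}")
```
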
## Typical usage
```bash
python -m pip install -r requirements.txt
. scripts/bench_env.sh
python bench/run_bench.py
python bench/run_curiosity_demo.py --out results/curiosity_demo.csv
python bench/plot_curiosity.py --in results/curiosity_demo.csv --outdir results/figs
```
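
If you want a quick look at the results before or after running the plotting script, a minimal pandas/matplotlib sketch like the one below works; the column names `segment` and `peeked` are assumptions for illustration, so adjust them to whatever header `run_curiosity_demo.py` actually writes.

```python
# Hypothetical summary plot; "segment" and "peeked" are assumed column names,
# not the actual schema of results/curiosity_demo.csv.
from pathlib import Path

import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("results/curiosity_demo.csv")
outdir = Path("results/figs")
outdir.mkdir(parents=True, exist_ok=True)

# Mean peek rate per segment, assuming one row per decision.
peek_rate = df.groupby("segment")["peeked"].mean()

fig, ax = plt.subplots()
peek_rate.plot(kind="bar", ax=ax)
ax.set_ylabel("peek rate")
ax.set_title("Peek rate by segment (illustrative)")
fig.tight_layout()
fig.savefig(outdir / "peek_rate_by_segment.png", dpi=150)
```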