Build out Track 2 Rust smoke framework

welsberr 2026-04-13 03:26:19 -04:00
parent b761cac274
commit b5a4bda997
16 changed files with 2792 additions and 29 deletions

View File

@@ -15,7 +15,7 @@ FIG1_M10 := $(REPO_ROOT)/config/track1_figure1_paper_M_1_0.json
 FIG1_M100 := $(REPO_ROOT)/config/track1_figure1_paper_M_10_0.json
 .PHONY: help init doctor list-jobs run-one run-loop run-loop-one collate-figure1 track1-sim-smoke \
-	rust-check rust-test \
+	rust-check rust-test rust-smoke rust-smoke-release compare-track1-rust-smoke \
 	submit-figure1-m005 submit-figure1-m025 submit-figure1-m05 submit-figure1-m10 submit-figure1-m100 \
 	submit-all-figure1 status results-tree
@@ -27,6 +27,9 @@ help:
 	@echo " track1-sim-smoke Run one local Track 1 simulation through renunney runner"
 	@echo " rust-check Run cargo check for the Track 2 Rust workspace"
 	@echo " rust-test Run cargo test for the Track 2 Rust workspace"
+	@echo " rust-smoke Run the Track 2 Rust smoke example in debug mode"
+	@echo " rust-smoke-release Run the Track 2 Rust smoke example in release mode"
+	@echo " compare-track1-rust-smoke Compare Python Track 1 and Rust smoke outputs over a seed range"
 	@echo " run-one Claim and run one queued job"
 	@echo " run-loop Run worker loop until queue empty"
 	@echo " run-loop-one Run exactly one queued job through the worker loop"
@@ -68,6 +71,15 @@ rust-check:
 rust-test:
 	cargo test --manifest-path $(REPO_ROOT)/Cargo.toml
+rust-smoke:
+	cargo run --manifest-path $(REPO_ROOT)/Cargo.toml --example smoke_compare
+rust-smoke-release:
+	cargo run --release --manifest-path $(REPO_ROOT)/Cargo.toml --example smoke_compare
+compare-track1-rust-smoke:
+	MPLCONFIGDIR=$(MPLCONFIGDIR) PYTHONPATH=$(REPO_ROOT)/src $(PYTHON) $(REPO_ROOT)/scripts/compare_track1_rust_smoke.py --seed-start 0 --seed-count 20
 run-one:
 	mkdir -p $(MPLCONFIGDIR)
 	MPLCONFIGDIR=$(MPLCONFIGDIR) $(PYTHON) $(ORCH) run-one --db $(DB) --result-root $(RESULT_ROOT) --scratch-root $(SCRATCH_ROOT)

View File

@@ -73,6 +73,7 @@ runtime now lives in `renunney`.
 - [docs/MIGRATION.md](/mnt/CIFS/pengolodh/Docs/Projects/renunney/docs/MIGRATION.md)
 - [docs/WORKFLOW.md](/mnt/CIFS/pengolodh/Docs/Projects/renunney/docs/WORKFLOW.md)
 - [docs/NUNNEY_ANALYSIS.md](/mnt/CIFS/pengolodh/Docs/Projects/renunney/docs/NUNNEY_ANALYSIS.md)
+- [docs/RESULTS_IN_HAND.md](/mnt/CIFS/pengolodh/Docs/Projects/collaborations/to_ptbc/evc/cost_of_substitution/renunney/docs/RESULTS_IN_HAND.md)
 - [docs/TRACK2_RUST.md](/mnt/CIFS/pengolodh/Docs/Projects/renunney/docs/TRACK2_RUST.md)
 ## Layout
@@ -113,6 +114,13 @@ Verify the Track 2 Rust workspace:
 ```bash
 make rust-check
 make rust-test
+make rust-smoke
+```
+Compare Python Track 1 and the current Rust smoke kernel over multiple seeds:
+```bash
+make compare-track1-rust-smoke
 ```
 Submit a paper-scale Figure 1 treatment:
@@ -137,10 +145,10 @@ make collate-figure1
 The Track 1 runtime and orchestration stack are now local to `renunney`.
 Track 2 has also started: the repo now includes a Rust workspace and an
-initial `track2-core` crate for threshold-centered abstractions. The current
-next major steps are:
+usable `track2-core` crate for threshold-centered work plus a minimal
+Nunney-style smoke kernel. The current next major steps are:
 - hardening multi-host orchestration for Track 1 replication,
 - organizing publication-quality replication outputs,
-- and expanding the Rust Track 2 core from threshold abstractions into a
-  simulation-state and estimation kernel.
+- and turning the current Rust smoke framework into a stable Track 2 execution
+  path with better statistical parity checks and tighter mechanics.

docs/RESULTS_IN_HAND.md (new file, 233 lines)
View File

@@ -0,0 +1,233 @@
# Results In Hand
Updated: 2026-04-12
## Purpose
This note records the concrete Track 1 outputs currently available from recent
local runs. It is intentionally narrower than a full replication report. Its
job is to answer three simpler questions: what results do we actually have
today, what do they already imply, and what remains too provisional to treat as
a paper-facing conclusion?
The key point is that most current outputs still live in `/tmp`, not yet in the
repo-managed `runs/results/` tree.
## Main Artifacts
### Small report package
Primary artifact:
- `/tmp/track1-report-small/report.md`
- `/tmp/track1-report-small/tracking_summary.json`
- `/tmp/track1-report-small/aggregate_series.json`
- `/tmp/track1-report-small/*.png`
Parameters:
- `K = 5000`
- `N0 = 20`
- `n = 1`
- `u = 5e-6`
- derived `M = 0.05`
- `R = 10`
- `T = 20`
- `epochs = 8`
- `runs = 2`
- `seed_start = 1`
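For orientation, `M` here is the derived mutation supply under the `M = 2*K*u` convention that the new Rust crate also uses (its `Config::mutation_supply()`). A quick arithmetic check of the values in this note:

```rust
// Derived mutation supply under the M = 2*K*u convention used in this repo.
fn mutation_supply(k: f64, u: f64) -> f64 {
    2.0 * k * u
}

fn main() {
    // Small-report parameters: K = 5000, u = 5e-6 gives the listed M = 0.05.
    assert!((mutation_supply(5000.0, 5e-6) - 0.05).abs() < 1e-12);
    // Small extinction grid: K = 500, u = 0.001 gives the listed M = 1.
    assert!((mutation_supply(500.0, 0.001) - 1.0).abs() < 1e-12);
    println!("derived M values check out");
}
```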
Observed behavior:
- both runs show substantial lag behind the moving target
- one run never leaves zero allele value
- the other run begins adapting at `t = 12` and remains nonzero through
`t = 46`, but still ends with a large negative tracking gap
- final gaps are about `-1.25` and `-1.30`
- mean absolute tracking gap is about `0.53` to `0.59`
Interpretation:
- this regime appears near or beyond persistence limits
- adaptation can occur transiently without being sufficient to maintain
tracking
- low mutation supply at this setting produces severe and persistent lag
The aggregate population trajectory rises rapidly toward carrying capacity and
then declines strongly once the moving optimum begins to outrun the population.
By roughly `t = 26`, only one of the two runs is still contributing to the
reported mean series.
### Small extinction dataset
Primary artifact:
- `/tmp/track1-extinction-dataset-small/`
- `/tmp/track1-extinction-dataset-small/run_rows.jsonl`
- `/tmp/track1-extinction-fit-small-payload.json`
Grid:
- `K = 500`
- `N0 in {20, 500}`
- `u in {0.001, 0.005}`
- derived `M in {1, 5}`
- `T = 10`
- `epochs = 2`
- `n = 1`
- `runs_per_treatment = 2`
Observed behavior:
- all 8 runs survive
- no treatment in this toy grid produces extinction
- higher `M` generally reduces lag and removes long zero-mutation streaks
- larger `N0` also improves final tracking
Interpretation:
- this dataset is useful as a smoke test for reporting and dataset generation
- it is not suitable for extinction modeling because there is no outcome
variation
The fitting payload states this explicitly:
- `fit_status = "insufficient_outcome_variation"`
- `extinction_count = 0`
- `non_extinction_count = 8` (all runs survived)
### Designed-grid extinction dataset
Primary artifact:
- `/tmp/track1-extinction-dataset-designed-grid/`
- `/tmp/track1-extinction-dataset-designed-grid/run_rows.jsonl`
- `/tmp/track1-extinction-fit-designed-grid-payload.json`
Grid:
- `K = 500`
- `N0 in {20, 500}`
- `u in {0.0, 0.0001, 0.0005, 0.001, 0.005}`
- `T in {5, 10, 20}`
- derived `M` varies with `u`
- `epochs = 8`
- `n = 1`
- `runs_per_treatment = 4`
Scale:
- `30` treatments
- `120` runs
- `5559` generation rows
Observed behavior:
- `95` extinctions
- `25` non-extinctions
- the current logistic-style fit converges
Included fitted features:
- `log_M`
- `inv_T`
- `n`
- `log_K`
- `log_N0_over_K`
- `mean_abs_tracking_gap`
- `fraction_generations_below_replacement`
- `longest_zero_mutation_streak`
- `cumulative_mutation_shortfall_per_generation`
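As a sketch of what one of these predictors measures (the real extraction lives in the Python reporting stack, so treat the names and details here as illustrative), `longest_zero_mutation_streak` is just the longest run of consecutive generations with no new mutations:

```rust
// Longest run of consecutive generations with zero new mutations.
// Illustrative sketch; the actual feature extraction is in Python.
fn longest_zero_streak(new_mutations_per_gen: &[u32]) -> usize {
    let mut best = 0;
    let mut current = 0;
    for &count in new_mutations_per_gen {
        if count == 0 {
            current += 1;
            best = best.max(current);
        } else {
            current = 0;
        }
    }
    best
}

fn main() {
    // Streaks of length 2, 3, and 1 -> longest is 3.
    let series = [0, 0, 3, 0, 0, 0, 1, 0];
    assert_eq!(longest_zero_streak(&series), 3);
    println!("longest zero-mutation streak = {}", longest_zero_streak(&series));
}
```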
Interpretation:
- this is the first dataset in hand that is large enough to support actual
extinction-model fitting
- the included predictors are biologically plausible and align with the current
diagnostic story: mutation supply, pace of environmental change, tracking
lag, and time spent below replacement all matter
Caution:
- the reported fit quality is extremely strong for a `120`-run dataset
- at present this should be treated as an in-sample descriptive fit, not a
validated predictive model
- no cross-validation or held-out assessment is yet recorded in-repo
## Figure 1 Cache State
Primary artifacts:
- `/tmp/track1-figure1-paper-m005-cache.json`
- `/tmp/track1-figure1-paper-m10-cache.json`
- `/tmp/track1-search-m10-n1-runs10-cache.json`
- `/tmp/track1-search-m10-n1-runs10-t1-20-cache.json`
### Paper-scale caches with `N0 = K = 5000`
For the paper-style cached sweeps:
- low mutation supply (`M = 0.05`) shows `20/20` extinctions at all displayed
`n` values for `T = 1.0`, `1.02`, `1.05`, and `1.10`
- even at `M = 10`, runs at the displayed `T` values remain overwhelmingly
  extinct, with only a slight improvement for `n = 1` around `T = 10`
Interpretation:
- under the current implementation, paper-scale initialization with `N0 = K`
makes these regimes extremely extinction-prone
- increasing mutation supply helps, but does not obviously eliminate the
problem in the currently cached low-`T` range
### Exploratory cache with `N0 = 20`
The smaller exploratory threshold caches for `M = 10`, `n = 1`, and `runs = 10`
show:
- `0/10` extinctions for `T = 5`, `5.1`, `5.25`, and `5.5`
- `0/10` extinctions for `T = 1`, `1.02`, `1.05`, and `1.1`
Interpretation:
- the current results are highly sensitive to initialization, especially
`N0 / K`
- this is not a minor implementation detail; it directly changes whether the
same nominal treatment appears safely persistent or uniformly extinct
## What The Results Already Say
The current outputs already support the following claims:
1. The Track 1 reporting and dataset stack is operational enough to produce
coherent run reports, row-level datasets, and extinction-model payloads.
2. Low mutation supply can leave the population far behind the moving optimum,
even when transient adaptation occurs.
3. Higher mutation supply and larger initial population improve tracking in the
tested small-grid runs.
4. Extinction behavior is strongly sensitive to initialization conventions,
especially whether runs begin at low `N0` or at `N0 = K`.
## What Is Still Provisional
The current outputs are not yet enough to support a clean replication claim
about Nunney's published thresholds.
The main unresolved issues are:
1. the scientific status of the `N0 / K` choice in relation to the paper
2. whether the current threshold caches reflect the intended historical setup
3. whether the extinction fit generalizes beyond the designed-grid data used to
fit it
4. how these local `/tmp` outputs should be normalized into repo-managed result
locations and paper-ready summaries
## Immediate Next Steps
The highest-value follow-up work is:
1. copy or regenerate the strongest `/tmp` artifacts under `runs/results/`
2. summarize the initialization sensitivity explicitly in the replication notes
3. expand paper-scale Figure 1 sweeps in a way that keeps `N0` assumptions
explicit
4. return to Track 2 only after the current Track 1 result state is documented
clearly enough to serve as the baseline comparison

docs/TRACK2_PROGRESS.md (new file, 102 lines)
View File

@@ -0,0 +1,102 @@
# Track 2 Implementation Progress
Updated: 2026-04-12
## Completed Components
### 1. Simulation State Model (`simulation.rs`)
- **Config**: Simulation configuration with parameters (K, N0, R, T, n, u, epochs, etc.)
- **State**: Snapshot of population/genotype at a generation
- **TerminationReason**: Termination conditions (MaxGenerations, Extinction, MissingSex, Success)
- **Event**: Events during simulation (PopulationChange, GenotypeChange, Extinction, GenerationComplete)
- **RunResult**: Result of a single run with extinction probability
- **Tests**: Config validation, termination reason display, survival/extinction probability
### 2. Extinction Probability Trait (`extinction_trait.rs`)
- **ExtinctionEstimator trait**: Interface for running simulations and estimating extinction probabilities
- `run()` - Run a single simulation
- `estimate_extinction_count()` - Count extinctions over multiple runs
- `estimate_extinction_probability()` - Compute extinction probability
- **FixedRunEstimator**: Baseline estimator with fixed number of runs (default 20)
- **AdaptiveEstimator**: Dynamic estimator that adjusts runs based on confidence
- **Tests**: Estimator creation and defaults
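The estimator interface can be illustrated with a stripped-down, self-contained sketch: implementors provide `run`, and the repeated-run probability estimate falls out of a default method. Here `run` returns a bare `bool` and takes a seed directly, instead of the crate's `Config`/`RunResult` types:

```rust
// Simplified stand-in for the ExtinctionEstimator pattern: implementors
// supply `run`; probability estimation is a default method that reseeds
// each of the independent runs.
trait Estimator {
    /// Run one simulation; `true` means the run went extinct.
    fn run(&self, seed: u64) -> bool;

    /// Fraction of extinct runs over `num_runs` independently seeded runs.
    fn estimate_extinction_probability(&self, num_runs: usize) -> f64 {
        let extinct = (0..num_runs).filter(|&i| self.run(i as u64)).count();
        extinct as f64 / num_runs as f64
    }
}

/// Trivial implementor for demonstration: every run goes extinct.
struct AlwaysExtinct;

impl Estimator for AlwaysExtinct {
    fn run(&self, _seed: u64) -> bool {
        true
    }
}

fn main() {
    let p = AlwaysExtinct.estimate_extinction_probability(20);
    assert_eq!(p, 1.0);
    println!("estimated extinction probability = {p}");
}
```

The real trait in `extinction_trait.rs` follows the same shape, with `Config` in and `RunResult` out.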
### 3. Threshold Search Strategies (`search.rs`)
- **ThresholdSearchStrategy trait**: Interface for finding threshold T where P(extinction) = target
- **BinarySearchStrategy**: Simple binary search over T values
- **BracketRefineStrategy**: Binary search with refinement rounds
- **SweepStrategy**: Compute all T values for data table/plot generation
- **Tests**: Strategy creation and defaults
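The idea behind the binary-search strategy can be sketched in isolation: bisect on `T` against a target extinction probability, assuming the probability curve is monotone decreasing in `T` (as in the crate's placeholder dynamics). The closure below is a toy curve, not the real kernel:

```rust
// Bisection for the T at which a monotone-decreasing extinction-probability
// curve crosses `target`. Sketch of the BinarySearchStrategy idea only.
fn binary_search_threshold<F>(mut lo: f64, mut hi: f64, target: f64, tol: f64, p_extinct: F) -> f64
where
    F: Fn(f64) -> f64,
{
    while hi - lo > tol {
        let mid = 0.5 * (lo + hi);
        if p_extinct(mid) > target {
            lo = mid; // still too extinction-prone: threshold lies above mid
        } else {
            hi = mid; // safe enough: threshold lies at or below mid
        }
    }
    0.5 * (lo + hi)
}

fn main() {
    // Toy logistic curve with a known crossing at T = 40 for target 0.5.
    let curve = |t: f64| 1.0 / (1.0 + ((t - 40.0) / 5.0).exp());
    let t = binary_search_threshold(10.0, 80.0, 0.5, 1e-6, curve);
    assert!((t - 40.0).abs() < 1e-3);
    println!("estimated threshold T = {t:.3}");
}
```

With a stochastic estimator standing in for the closure, the same loop becomes the search strategy; noise handling is what `BracketRefineStrategy` adds on top.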
### 4. Orchestration I/O (`io.rs`)
- **Track2Job**: Job manifest for submitting Track 2 simulation jobs
- **Track2Result**: Result manifest with job status and summary
- **Track2Summary**: Compact run result summary
- **Track2Error**: Error details for failures
- **Tests**: Job creation, success/failure results
### 5. Simulation Kernel (`simulation_kernel.rs`)
- **SimpleSimulationKernel**: Deterministic placeholder for trait wiring
- **NunneySimulationKernel**: Minimal Nunney-style stochastic kernel for smoke comparisons
- **Tests**: Kernel creation, fecundity, fitness monotonicity, mutation bounds, inheritance, extinction, and bookkeeping invariants
### 6. Module Organization (`lib.rs`)
- All modules properly integrated
- Common types re-exported for easy access
### 7. Dependencies (`Cargo.toml`)
- serde with derive features
- rand for random number generation
## Design Principles
1. **Separation of concerns**: Each module has a clear, single responsibility
2. **Trait-based abstraction**: ExtinctionEstimator and ThresholdSearchStrategy allow swapping implementations
3. **Serialization-friendly**: All core types are Serialize/Deserialize
4. **Testable**: Each component has unit tests
5. **Clear documentation**: Module-level docs explain purpose and usage
## Testing Strategy
Each component can be tested independently:
1. **Config**: Just validation and derived calculations
2. **ExtinctionEstimator**: Tests of estimator creation and basic probability calculations
3. **ThresholdSearchStrategy**: Tests of strategy creation and basic search behavior
4. **I/O**: Tests of JSON serialization/deserialization
5. **Simulation Kernel**: Tests of kernel creation and deterministic behavior
## Next Steps
1. Use the current Rust framework operationally through:
- `make rust-smoke`
- `make rust-smoke-release`
- `make compare-track1-rust-smoke`
2. Tighten the remaining systematic differences from Python in births,
survivors, and final population size.
3. Expand low-level kernel tests for hand-constructed tiny populations and
exact metric expectations.
4. Define statistical parity targets over seed ranges and track them explicitly.
5. Promote the Rust kernel into a real Track 2 execution backend once the
mechanics are stable enough.
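A statistical parity target of the kind step 4 describes could look like the following sketch. The data and the `means_within` helper are illustrative, not the repo's actual harness API: it compares the mean of a per-seed summary metric from the two implementations against a tolerance expressed in standard errors of the difference.

```rust
// Illustrative parity check: are two sets of per-seed metric values
// statistically compatible, i.e. is the difference of means within `z`
// standard errors? Not the repo's actual comparison harness.
fn mean(xs: &[f64]) -> f64 {
    xs.iter().sum::<f64>() / xs.len() as f64
}

fn variance(xs: &[f64]) -> f64 {
    let m = mean(xs);
    xs.iter().map(|x| (x - m).powi(2)).sum::<f64>() / (xs.len() as f64 - 1.0)
}

fn means_within(a: &[f64], b: &[f64], z: f64) -> bool {
    let se = (variance(a) / a.len() as f64 + variance(b) / b.len() as f64).sqrt();
    (mean(a) - mean(b)).abs() <= z * se
}

fn main() {
    // Hypothetical final-N values over the same seed range.
    let python_final_n = [480.0, 510.0, 495.0, 505.0];
    let rust_final_n = [478.0, 512.0, 490.0, 508.0];
    assert!(means_within(&python_final_n, &rust_final_n, 3.0));
    println!("parity check passed");
}
```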
## Files to Copy
1. `/tmp/track2-simulation.rs` → `rust/track2-core/src/simulation.rs`
2. `/tmp/track2-extinction-trait.rs` → `rust/track2-core/src/extinction_trait.rs`
3. `/tmp/track2-search.rs` → `rust/track2-core/src/search.rs`
4. `/tmp/track2-io.rs` → `rust/track2-core/src/io.rs`
5. `/tmp/track2-simulation-kernel.rs` → `rust/track2-core/src/simulation_kernel.rs`
6. `/tmp/track2-lib.rs` → `rust/track2-core/src/lib.rs`
7. `/tmp/Cargo.toml.new` → `rust/track2-core/Cargo.toml`
## Integration
After copying the files, run:
```bash
cargo check
cargo test
```
This will verify that all modules compile and all tests pass.

View File

@@ -1,6 +1,6 @@
 # Track 2 Rust Plan
-Updated: 2026-04-11
+Updated: 2026-04-12
 ## Purpose
@@ -34,33 +34,60 @@ of what is being estimated.
 ## Current Crate
-The initial crate is:
+The active crate is:
 - `rust/track2-core`
-Current contents:
-- `ExtinctionCount`
-- `ThresholdPoint`
-- `ThresholdBracket`
-- `ThresholdEstimate`
-- `bracket_threshold(...)`
-- `midpoint_threshold(...)`
-These are placeholders for a modern threshold-estimation path where:
-- extinction probability is explicit,
-- bracketing is explicit,
-- and the estimator is separate from the simulation kernel.
+It now includes:
+- threshold helper types and search scaffolding
+- serialization-friendly Track 2 job/result structs
+- a simple placeholder kernel for basic trait wiring
+- a minimal Nunney-style stochastic kernel for same-parameter smoke checks
+- Rust examples for direct kernel use and Python-vs-Rust smoke comparison
+- a growing unit-test suite around fecundity, fitness, mutation bounds,
+  inheritance, extinction, and bookkeeping invariants
+This is no longer just a threshold-abstraction placeholder. It is now a usable
+Track 2 kernel development surface with:
+- `make rust-check`
+- `make rust-test`
+- `make rust-smoke`
+- `make rust-smoke-release`
+- `make compare-track1-rust-smoke`
+## Current Status
+What is done now:
+1. Core Track 2 data types compile and test cleanly.
+2. The Rust kernel can run a one-run smoke simulation directly.
+3. A minimal Nunney-style kernel exists for side-by-side comparison work.
+4. A multi-seed comparison harness exists in `scripts/compare_track1_rust_smoke.py`.
+5. The Rust library test suite now covers core mechanical invariants instead of
+   relying only on Python comparison.
+What is not done yet:
+1. The Rust kernel is not a full Track 1 parity implementation.
+2. The threshold-search layer is still a scaffold rather than a production
+   estimator.
+3. Rust is not yet wired into orchestration as a real worker backend.
 ## Next Rust Steps
-1. Add a simulation-state model for Track 2.
-2. Add a trait or function contract for producing extinction probabilities from
-   repeated stochastic runs.
-3. Add a threshold search strategy that consumes those estimates.
-4. Add serialization-friendly input/output structs for orchestration.
-5. Only then start porting heavy simulation loops from Python.
+1. Reduce systematic divergence from Python in births, survivors, and final `N`
+   by tightening update-order and RNG-sensitive mechanics.
+2. Add more low-level kernel tests for hand-constructed populations:
+   exact `tracking_metrics`, exact multi-locus fitness values, and tiny
+   deterministic generation updates.
+3. Define statistical parity criteria over many seeds rather than using
+   exact-seed agreement as the primary validation standard.
+4. Promote the current smoke framework into a real Track 2 execution path with
+   stable CLI contracts and result serialization.
+5. Only after that, extend threshold search from scaffold to actual Track 2
+   estimation workflow.
 ## Operational Targets
@@ -68,5 +95,7 @@ The repo Makefile should treat Rust as a first-class build/test surface:
 - `make rust-test`
 - `make rust-check`
+- `make rust-smoke`
+- `make compare-track1-rust-smoke`
 That keeps Track 2 visible in daily workflow from the start.

View File

@@ -44,6 +44,12 @@ Inspect current status:
 make status
 ```
+Review the currently available empirical outputs:
+```text
+docs/RESULTS_IN_HAND.md
+```
 ## Current Assumption
 The Makefile now drives the local orchestration code in `renunney`, while the

View File

@ -1,7 +1,7 @@
[package] [package]
name = "track2-core" name = "track2-core"
version = "0.1.0" version = "0.1.0"
edition = "2024" edition = "2021"
license = "MIT" license = "MIT"
authors = ["welsberr <welsberr@gmail.com>"] authors = ["welsberr <welsberr@gmail.com>"]
description = "Track 2 threshold-centered cost-of-substitution simulation core" description = "Track 2 threshold-centered cost-of-substitution simulation core"
@ -10,3 +10,7 @@ description = "Track 2 threshold-centered cost-of-substitution simulation core"
name = "track2_core" name = "track2_core"
path = "src/lib.rs" path = "src/lib.rs"
[dependencies]
serde = { version = "1.0", features = ["derive"] }
rand = "0.8"
rand_distr = "0.4"

View File

@@ -0,0 +1,155 @@
//! Example usage of Track 2 components.
//!
//! This example demonstrates how to use the Track 2 components together
//! to run simulations, estimate extinction probabilities, and find thresholds.
use track2_core::{
Config, NunneySimulationKernel,
BinarySearchStrategy, SweepStrategy,
Track2Job, Track2Result,
extinction_trait::ExtinctionEstimator,
};
fn main() -> Result<(), Box<dyn std::error::Error>> {
// 1. Create a simulation configuration
let config = Config {
K: 1000.0,
N0: 500.0,
R: 10.0,
T: 50.0,
n: 3,
u: 0.001,
epochs: 5,
target_extinction_probability: 0.5,
seed: None,
};
println!("Config: {}", config);
println!("Mutation supply M = 2*K*u = {}", config.mutation_supply());
// 2. Create a simulation kernel
let kernel = NunneySimulationKernel::new(None);
// 3. Run a single simulation
println!("\nRunning a single simulation...");
let result = kernel.run(&config);
println!(" Extinct: {}, Prob: {}", result.extinct, result.extinction_probability());
println!(
" Final N: {}, target: {:.3}, mismatch: {:.3}",
result.final_state.N,
result.final_state.optimum,
result.final_state.mean_mismatch
);
println!(
" Mean allele: {:.3}, tracking gap: {:.3}, births: {}, survivors: {}",
result.final_state.allele_means.first().copied().unwrap_or(0.0),
result.final_state.mean_tracking_gap,
result.final_state.birth_count,
result.final_state.surviving_offspring_count
);
println!(
" First/last nonzero allele t: {:?} / {:?}",
result.first_nonzero_allele_t,
result.last_nonzero_allele_t
);
// 4. Estimate extinction probability with multiple runs
println!("\nEstimating extinction probability with 5 runs...");
let num_runs = 5;
let prob = kernel.estimate_extinction_probability(&config, num_runs);
println!(" Extinction probability: {}", prob);
// 5. Search for threshold using binary search
println!("\nSearching for threshold with binary search...");
let t_values = vec![10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0];
let _strategy = BinarySearchStrategy::new();
// Note: This requires a real ExtinctionEstimator implementation
// For now, we'll just show the structure
// let estimate = strategy.search_threshold(&estimator, &config, &t_values, 0.5);
// if let Some(estimate) = estimate {
// println!(" Estimated threshold: {}", estimate.estimated_t);
// println!(" Lower: {}, Upper: {}", estimate.lower_t, estimate.upper_t);
// }
// 6. Generate sweep data for plotting
println!("\nGenerating sweep data...");
let sweep_strategy = SweepStrategy::with_default();
println!(" Testing T values: {:?}", t_values);
println!(" Strategy: {} runs per T", sweep_strategy.runs_per_T);
// 7. Create a Track 2 job manifest
println!("\nCreating Track 2 job...");
let job = Track2Job::new(
"example-job-1".to_string(),
42,
config,
0.5,
t_values.clone(),
);
println!(" Job ID: {}", job.job_id);
println!(" Track: {}", job.track);
println!(" Job kind: {}", job.job_kind);
// 8. Create a result summary
let summary = track2_core::Track2Summary {
extinction_probability: prob,
estimated_t: 0.0, // Will be set after threshold search
num_runs,
extinct_count: (prob * num_runs as f64) as u32,
tested_t_values: t_values.clone(),
search_strategy: "sweep".to_string(),
};
let result_manifest = Track2Result::success(job.job_id, summary, vec![]);
println!(" Status: {}", result_manifest.status);
println!(" Exit code: {}", result_manifest.exit_code);
println!("\n✓ Example completed successfully!");
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn example_simulation() {
let config = Config {
K: 100.0,
N0: 50.0,
R: 10.0,
T: 20.0,
n: 1,
u: 0.001,
epochs: 1,
target_extinction_probability: 0.5,
seed: Some(123),
};
let kernel = NunneySimulationKernel::new(None);
let result = kernel.run(&config);
assert_eq!(result.config.K, 100.0);
assert_eq!(result.config.n, 1);
assert!(result.generations() > 0);
}
#[test]
fn config_derived_values() {
let config = Config {
K: 1000.0,
N0: 500.0,
R: 10.0,
T: 50.0,
n: 3,
u: 0.001,
epochs: 5,
target_extinction_probability: 0.5,
seed: None,
};
assert_eq!(config.mutation_supply(), 2.0 * 1000.0 * 0.001);
assert_eq!(config.generations_per_epoch(), 250);
}
}

View File

@@ -0,0 +1,71 @@
use std::env;
use track2_core::{Config, NunneySimulationKernel, extinction_trait::ExtinctionEstimator};
fn main() {
let mut k = 1000.0;
let mut n0 = 500.0;
let mut r = 10.0;
let mut t = 50.0;
let mut n = 3usize;
let mut u = 0.001;
let mut epochs = 5usize;
let mut seed = 0u64;
let mut args = env::args().skip(1);
while let Some(arg) = args.next() {
match arg.as_str() {
"--K" => k = args.next().expect("missing value for --K").parse().expect("invalid --K"),
"--N0" => n0 = args.next().expect("missing value for --N0").parse().expect("invalid --N0"),
"--R" => r = args.next().expect("missing value for --R").parse().expect("invalid --R"),
"--T" => t = args.next().expect("missing value for --T").parse().expect("invalid --T"),
"--n" => n = args.next().expect("missing value for --n").parse().expect("invalid --n"),
"--u" => u = args.next().expect("missing value for --u").parse().expect("invalid --u"),
"--epochs" => {
epochs = args
.next()
.expect("missing value for --epochs")
.parse()
.expect("invalid --epochs")
}
"--seed" => {
seed = args
.next()
.expect("missing value for --seed")
.parse()
.expect("invalid --seed")
}
other => panic!("unexpected argument: {other}"),
}
}
let config = Config {
K: k,
N0: n0,
R: r,
T: t,
n,
u,
epochs,
target_extinction_probability: 0.5,
seed: Some(seed),
};
let kernel = NunneySimulationKernel::new(Some(seed));
let result = kernel.run(&config);
println!("extinct={}", result.extinct);
println!("generation={}", result.final_state.generation);
println!("N={:.0}", result.final_state.N);
println!("female_count={:.0}", result.final_state.N_f);
println!("male_count={:.0}", result.final_state.N_m);
println!("target_value={:.6}", result.final_state.optimum);
println!("mean_allele_value={:.6}", result.final_state.allele_means.first().copied().unwrap_or(0.0));
println!("mean_fitness={:.6}", result.final_state.mean_fitness);
println!("fecundity={:.6}", result.final_state.mean_fecundity);
println!("mean_tracking_gap={:.6}", result.final_state.mean_tracking_gap);
println!("birth_count={}", result.final_state.birth_count);
println!("surviving_offspring_count={}", result.final_state.surviving_offspring_count);
println!("first_nonzero_allele_t={:?}", result.first_nonzero_allele_t);
println!("last_nonzero_allele_t={:?}", result.last_nonzero_allele_t);
}

View File

@@ -0,0 +1,197 @@
//! Trait for producing extinction probabilities from repeated stochastic runs.
//!
//! This module separates repeated-run estimation from the underlying simulation
//! kernel so Track 2 can swap between simple placeholders and richer kernels.
use rand::{Rng, SeedableRng, rngs::StdRng};
use crate::simulation::{Config, RunResult, State, TerminationReason};
fn placeholder_run(config: &Config) -> RunResult {
let seed = config.seed.unwrap_or(0);
let mut rng = StdRng::seed_from_u64(seed);
// Placeholder dynamics: smaller T implies a harder tracking problem.
let extinct = if config.T < 30.0 {
rng.gen::<f64>() < 0.8
} else if config.T < 60.0 {
rng.gen::<f64>() < 0.4
} else {
rng.gen::<f64>() < 0.1
};
let generations = config.generations_per_epoch();
let extinction_time = extinct.then_some((generations / 2).max(1));
let final_generation = generations.saturating_sub(1);
let final_state = State {
generation: final_generation,
epoch: config.epochs.saturating_sub(1),
optimum: config.n as f64,
N: if extinct { 0.0 } else { config.K.min(config.N0 * 1.1) },
N_f: if extinct { 0.0 } else { 0.5 * config.K.min(config.N0 * 1.1) },
N_m: if extinct { 0.0 } else { 0.5 * config.K.min(config.N0 * 1.1) },
total_tracked: (config.N0 as usize).max(1),
extinctions: usize::from(extinct),
extinction_time,
mean_fitness: if extinct { 0.0 } else { 0.8 },
mean_fecundity: if extinct { 0.0 } else { 5.0 },
mean_mismatch: if extinct { config.n as f64 } else { 0.3 },
mean_tracking_gap: if extinct { -(config.n as f64) } else { -0.3 },
birth_count: 0,
surviving_offspring_count: 0,
allele_means: vec![0.0; config.n],
active_loci: 0,
termination_reason: if extinct {
TerminationReason::Extinction
} else {
TerminationReason::Success
},
};
RunResult {
config: config.clone(),
seed,
final_state,
events: vec![],
wall_time: 0.01,
extinct,
extinction_time,
first_nonzero_allele_t: None,
last_nonzero_allele_t: None,
}
}
/// Trait for estimating extinction probabilities from repeated simulation runs.
pub trait ExtinctionEstimator {
/// Run a single simulation with the given configuration.
fn run(&self, config: &Config) -> RunResult;
/// Count extinctions over `num_runs` independent runs.
fn estimate_extinction_count(&self, config: &Config, num_runs: usize) -> u32 {
let mut extinct_count = 0;
for i in 0..num_runs {
let mut config = config.clone();
config.seed = Some(i as u64);
if self.run(&config).extinct {
extinct_count += 1;
}
}
extinct_count
}
/// Estimate extinction probability over `num_runs` independent runs.
fn estimate_extinction_probability(&self, config: &Config, num_runs: usize) -> f64 {
let extinct_count = self.estimate_extinction_count(config, num_runs);
extinct_count as f64 / num_runs as f64
}
}
/// Baseline estimator with a fixed run count.
#[derive(Debug, Clone)]
pub struct FixedRunEstimator {
pub num_runs: usize,
}
impl FixedRunEstimator {
pub fn new(num_runs: usize) -> Self {
Self { num_runs }
}
}
impl Default for FixedRunEstimator {
fn default() -> Self {
Self::new(20)
}
}
impl ExtinctionEstimator for FixedRunEstimator {
fn run(&self, config: &Config) -> RunResult {
placeholder_run(config)
}
}
/// Adaptive estimator placeholder.
#[derive(Debug, Clone)]
pub struct AdaptiveEstimator {
pub default_runs: usize,
pub min_runs: usize,
pub max_runs: usize,
pub ci_threshold: f64,
}
impl Default for AdaptiveEstimator {
fn default() -> Self {
Self {
default_runs: 20,
min_runs: 5,
max_runs: 100,
ci_threshold: 0.1,
}
}
}
impl AdaptiveEstimator {
pub fn new(
default_runs: usize,
min_runs: usize,
max_runs: usize,
ci_threshold: f64,
) -> Self {
Self {
default_runs,
min_runs,
max_runs,
ci_threshold,
}
}
pub fn estimate_extinction_probability_adaptive(&self, config: &Config) -> (f64, usize) {
let prob = self.estimate_extinction_probability(config, self.default_runs);
(prob, self.default_runs)
}
}
impl ExtinctionEstimator for AdaptiveEstimator {
fn run(&self, config: &Config) -> RunResult {
placeholder_run(config)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn fixed_run_estimator_creation() {
let estimator = FixedRunEstimator::new(10);
assert_eq!(estimator.num_runs, 10);
}
#[test]
fn fixed_run_estimator_default() {
let estimator = FixedRunEstimator::default();
assert_eq!(estimator.num_runs, 20);
}
#[test]
fn adaptive_estimator_creation() {
let estimator = AdaptiveEstimator::new(30, 10, 100, 0.15);
assert_eq!(estimator.default_runs, 30);
assert_eq!(estimator.min_runs, 10);
assert_eq!(estimator.max_runs, 100);
assert_eq!(estimator.ci_threshold, 0.15);
}
#[test]
fn adaptive_estimator_default() {
let estimator = AdaptiveEstimator::default();
assert_eq!(estimator.default_runs, 20);
assert_eq!(estimator.min_runs, 5);
assert_eq!(estimator.max_runs, 100);
assert_eq!(estimator.ci_threshold, 0.1);
}
}
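The trait's default methods reduce extinction-probability estimation to counting extinct runs. A minimal, self-contained sketch of that pattern, using a toy `run` that flags extinction deterministically below a cutoff (the real trait operates on the crate's `Config` and `RunResult`; everything here is a stand-in for illustration):

```rust
// Toy stand-in for the crate's RunResult; assumption for illustration only.
struct RunResult {
    extinct: bool,
}

trait ExtinctionEstimator {
    fn run(&self, t: f64) -> RunResult;

    // Default method: count extinctions over `num_runs` independent runs.
    fn estimate_extinction_count(&self, t: f64, num_runs: usize) -> u32 {
        (0..num_runs).filter(|_| self.run(t).extinct).count() as u32
    }

    // Default method: turn the count into a probability estimate.
    fn estimate_extinction_probability(&self, t: f64, num_runs: usize) -> f64 {
        self.estimate_extinction_count(t, num_runs) as f64 / num_runs as f64
    }
}

// Deterministic toy kernel: extinct whenever T is below the cutoff.
struct ToyEstimator;

impl ExtinctionEstimator for ToyEstimator {
    fn run(&self, t: f64) -> RunResult {
        RunResult { extinct: t < 30.0 }
    }
}

fn main() {
    let est = ToyEstimator;
    assert_eq!(est.estimate_extinction_probability(10.0, 20), 1.0);
    assert_eq!(est.estimate_extinction_probability(60.0, 20), 0.0);
}
```

Implementors only supply `run`; the counting and ratio logic come for free from the trait, which is why `FixedRunEstimator` and the kernels below stay so small.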

rust/track2-core/src/io.rs Normal file

@ -0,0 +1,267 @@
//! Orchestration-compatible I/O for Track 2.
//!
//! This module provides serialization-friendly structs for use with the
//! orchestration layer, making it easy to serialize simulation configs and
//! results for distributed execution.
use serde::{Deserialize, Serialize};
use crate::simulation::{Config, RunResult};
/// Job manifest for Track 2 simulation.
///
/// This is the JSON contract for submitting a Track 2 simulation job.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Track2Job {
/// Unique job identifier
pub job_id: String,
/// Project name
pub project: String,
/// Track identifier (track2)
pub track: String,
/// Job kind (e.g., track2_threshold_search)
pub job_kind: String,
/// Random seed
pub seed: u64,
/// Simulation configuration
pub config: Config,
/// Target extinction probability for threshold search
pub target_extinction_probability: f64,
/// T values to search over
pub t_values: Vec<f64>,
/// Estimator type (e.g., "fixed", "adaptive")
pub estimator_type: String,
/// Search strategy (e.g., "binary", "sweep")
pub search_strategy: String,
}
impl Track2Job {
/// Create a new Track 2 job.
pub fn new(
job_id: String,
seed: u64,
config: Config,
target_extinction_probability: f64,
t_values: Vec<f64>,
) -> Self {
Self {
job_id,
project: "cost_of_substitution".to_string(),
track: "track2".to_string(),
job_kind: "track2_threshold_search".to_string(),
seed,
config,
target_extinction_probability,
t_values,
estimator_type: "fixed".to_string(),
search_strategy: "sweep".to_string(),
}
}
}
/// Result manifest for Track 2 simulation.
///
/// This is the JSON contract for returning a Track 2 simulation result.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Track2Result {
/// Job ID this result corresponds to
pub job_id: String,
/// Status (succeeded, failed, etc.)
pub status: String,
/// Worker backend that ran this job
pub worker_backend: String,
/// Worker hostname
pub worker_host: Option<String>,
/// Start time
pub started_at: String,
/// Finish time
pub finished_at: String,
/// Wall time in seconds
pub wall_time: f64,
/// Exit code
pub exit_code: i32,
/// Config hash (SHA256 of canonical config)
pub config_hash: String,
/// Code identity (git commit or version string)
pub code_identity: String,
/// Artifacts (paths to result files)
pub artifacts: Vec<String>,
/// Summary (compact job-specific summary)
pub summary: Track2Summary,
/// Error (null on success)
pub error: Option<Track2Error>,
}
/// Compact summary of Track 2 run result.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Track2Summary {
/// Extinction probability estimate
pub extinction_probability: f64,
/// Estimated threshold T
pub estimated_t: f64,
/// Number of runs performed
pub num_runs: usize,
/// Extinction count
pub extinct_count: u32,
/// T values that were tested
pub tested_t_values: Vec<f64>,
/// Search strategy used
pub search_strategy: String,
}
/// Error details for Track 2 run failures.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Track2Error {
/// Error message
pub message: String,
/// Error type
pub error_type: String,
/// Stack trace (if available)
pub stack_trace: Option<String>,
}
impl Track2Result {
/// Create a success result.
pub fn success(job_id: String, summary: Track2Summary, artifacts: Vec<String>) -> Self {
Self {
job_id,
status: "succeeded".to_string(),
worker_backend: "rust-track2".to_string(),
worker_host: None,
            // Placeholder timing metadata until real timing is wired in.
            started_at: "2026-04-11T00:00:00Z".to_string(),
            finished_at: "2026-04-11T00:00:01Z".to_string(),
            wall_time: 1.0,
exit_code: 0,
config_hash: "".to_string(),
code_identity: "dev".to_string(),
artifacts,
summary,
error: None,
}
}
/// Create a failure result.
pub fn failure(
job_id: String,
error: Track2Error,
artifacts: Vec<String>,
) -> Self {
Self {
job_id,
status: "failed".to_string(),
worker_backend: "rust-track2".to_string(),
worker_host: None,
started_at: "2026-04-11T00:00:00Z".to_string(),
finished_at: "2026-04-11T00:00:01Z".to_string(),
wall_time: 1.0,
exit_code: 1,
config_hash: "".to_string(),
code_identity: "dev".to_string(),
artifacts,
summary: Track2Summary {
extinction_probability: 0.0,
estimated_t: 0.0,
num_runs: 0,
extinct_count: 0,
tested_t_values: vec![],
search_strategy: "".to_string(),
},
error: Some(error),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::simulation::Config;
#[test]
fn track2_job_creation() {
let config = Config {
K: 1000.0,
N0: 500.0,
R: 10.0,
T: 50.0,
n: 3,
u: 0.001,
epochs: 5,
target_extinction_probability: 0.5,
seed: None,
};
let job = Track2Job::new(
"test-job-1".to_string(),
42,
config,
0.5,
vec![10.0, 20.0, 30.0, 40.0],
);
assert_eq!(job.job_id, "test-job-1");
assert_eq!(job.track, "track2");
assert_eq!(job.job_kind, "track2_threshold_search");
assert_eq!(job.seed, 42);
assert_eq!(job.t_values.len(), 4);
}
#[test]
fn track2_result_success() {
let summary = Track2Summary {
extinction_probability: 0.5,
estimated_t: 25.0,
num_runs: 20,
extinct_count: 10,
tested_t_values: vec![10.0, 20.0, 30.0, 40.0],
search_strategy: "sweep".to_string(),
};
let result = Track2Result::success("test-job".to_string(), summary, vec![]);
assert_eq!(result.status, "succeeded");
assert_eq!(result.exit_code, 0);
assert!(result.error.is_none());
}
#[test]
fn track2_result_failure() {
let error = Track2Error {
message: "Simulation crashed".to_string(),
error_type: "runtime_error".to_string(),
stack_trace: None,
};
let result = Track2Result::failure("test-job".to_string(), error, vec![]);
assert_eq!(result.status, "failed");
assert_eq!(result.exit_code, 1);
assert!(result.error.is_some());
}
}
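The success/failure constructors above pin down the status and exit-code halves of the result contract. A trimmed, self-contained sketch of that mapping (timestamps, hashes, and summary fields omitted; the names here are illustrative stand-ins, not the crate's types):

```rust
// Trimmed stand-ins for Track2Result/Track2Error; illustrative only.
struct JobError {
    message: String,
}

struct JobResult {
    job_id: String,
    status: String,
    exit_code: i32,
    error: Option<JobError>,
}

impl JobResult {
    // Success: status "succeeded", exit code 0, no error payload.
    fn success(job_id: String) -> Self {
        Self { job_id, status: "succeeded".to_string(), exit_code: 0, error: None }
    }

    // Failure: status "failed", exit code 1, error payload attached.
    fn failure(job_id: String, error: JobError) -> Self {
        Self { job_id, status: "failed".to_string(), exit_code: 1, error: Some(error) }
    }
}

fn main() {
    let ok = JobResult::success("job-1".to_string());
    let bad = JobResult::failure(
        "job-2".to_string(),
        JobError { message: "simulation crashed".to_string() },
    );
    assert_eq!((ok.status.as_str(), ok.exit_code), ("succeeded", 0));
    assert_eq!((bad.status.as_str(), bad.exit_code), ("failed", 1));
    assert!(ok.error.is_none() && bad.error.is_some());
    let _ = (ok.job_id, bad.job_id);
}
```

Keeping `error: None` on success and a populated `Track2Error` on failure lets downstream consumers branch on a single field instead of parsing status strings.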

rust/track2-core/src/lib.rs

@ -1,12 +1,23 @@
//! Track 2 core for the cost-of-substitution project.
//!
//! Track 2 is intentionally not a line-by-line translation of Nunney's published
//! threshold heuristic. Its goal is to provide a performant, explicitly specified
//! simulation and threshold-estimation substrate that can later support richer
//! kernels and cleaner inference.
pub mod threshold;
pub mod simulation;
pub mod extinction_trait;
pub mod search;
pub mod io;
pub mod simulation_kernel;
// Re-export commonly used types
pub use threshold::{
    ExtinctionCount,
    ThresholdBracket,
@ -15,3 +26,14 @@ pub use threshold::{
    bracket_threshold,
    midpoint_threshold,
};
pub use simulation::{Config, State, Event, RunResult, TerminationReason};
pub use extinction_trait::{ExtinctionEstimator, FixedRunEstimator, AdaptiveEstimator};
pub use search::{ThresholdSearchStrategy, BinarySearchStrategy, BracketRefineStrategy, SweepStrategy};
pub use io::{Track2Job, Track2Result, Track2Summary, Track2Error};
// Simulation kernels
pub use simulation_kernel::{SimpleSimulationKernel, NunneySimulationKernel};

rust/track2-core/src/search.rs Normal file

@ -0,0 +1,276 @@
//! Threshold search strategy for Track 2.
//!
//! This module provides strategies for finding the threshold where extinction
//! probability equals a target value (typically 0.5).
//!
//! A threshold is defined as the T value where P(extinction | T) = target_probability.
//! This module separates threshold search from the simulation kernel, allowing
//! different search strategies to be used.
use crate::threshold::{
ExtinctionCount,
ThresholdBracket,
ThresholdEstimate,
ThresholdPoint,
bracket_threshold,
midpoint_threshold,
};
use crate::simulation::Config;
use crate::extinction_trait::ExtinctionEstimator;
/// Strategy for searching for threshold values.
///
/// This trait defines different algorithms for finding the T where extinction
/// probability equals a target value.
pub trait ThresholdSearchStrategy {
/// Find threshold for a given configuration across a range of T values.
///
/// # Arguments
/// * `estimator` - Extinction estimator to use for simulations
/// * `config` - Base configuration (will have T varied across the search)
/// * `t_values` - Candidate T values to test
/// * `target_extinction_probability` - Target extinction probability (typically 0.5)
///
/// # Returns
    /// The threshold estimate (with its bracket), or `None` if the target
    /// probability is never bracketed by the tested T values
fn search_threshold(
&self,
estimator: &impl ExtinctionEstimator,
config: &Config,
t_values: &[f64],
target_extinction_probability: f64,
) -> Option<ThresholdEstimate>;
}
/// Binary search for threshold.
///
/// Note: the current implementation evaluates every candidate T value and
/// scans adjacent pairs for the bracket where extinction probability crosses
/// the target; true bisection over the sorted T values is future work.
#[derive(Debug, Clone, Copy)]
pub struct BinarySearchStrategy;
impl BinarySearchStrategy {
pub fn new() -> Self {
Self
}
}
impl Default for BinarySearchStrategy {
fn default() -> Self {
Self::new()
}
}
impl ThresholdSearchStrategy for BinarySearchStrategy {
fn search_threshold(
&self,
estimator: &impl ExtinctionEstimator,
config: &Config,
t_values: &[f64],
target_extinction_probability: f64,
) -> Option<ThresholdEstimate> {
// Sort T values
let mut sorted_t: Vec<(f64, ExtinctionCount)> = t_values
.iter()
.map(|&t| {
// Clone config and set T
let mut temp_config = config.clone();
temp_config.T = t;
                // Placeholder: fixed budget of 20 runs per candidate T
                let runs_per_t = 20;
                let extinct_count = estimator.estimate_extinction_count(&temp_config, runs_per_t);
                (t, ExtinctionCount::new(extinct_count, runs_per_t as u32))
})
.collect();
sorted_t.sort_by(|a, b| a.0.total_cmp(&b.0));
// Find bracket where extinction probability crosses target
let mut bracket = None;
for i in 0..sorted_t.len().saturating_sub(1) {
let (t1, count1) = &sorted_t[i];
let (t2, count2) = &sorted_t[i + 1];
let prob1 = count1.extinction_probability();
let prob2 = count2.extinction_probability();
// Check if target is between these two probabilities
if bracket_threshold(
ThresholdPoint::new(*t1, prob1),
ThresholdPoint::new(*t2, prob2),
target_extinction_probability,
).is_some() {
bracket = Some(ThresholdBracket {
lower: ThresholdPoint::new(*t1, prob1),
upper: ThresholdPoint::new(*t2, prob2),
target_extinction_probability,
});
break;
}
}
        bracket.map(midpoint_threshold)
}
}
/// Bracket-and-refine strategy for threshold search.
///
/// This strategy first identifies a bracket using binary search, then performs
/// additional refinement by testing nearby T values.
#[derive(Debug, Clone, Copy)]
pub struct BracketRefineStrategy {
/// Number of refinement iterations
pub refinement_rounds: usize,
}
impl BracketRefineStrategy {
pub fn new(refinement_rounds: usize) -> Self {
Self { refinement_rounds }
}
pub fn with_default_refinement() -> Self {
Self::new(2)
}
}
impl Default for BracketRefineStrategy {
fn default() -> Self {
Self::with_default_refinement()
}
}
impl ThresholdSearchStrategy for BracketRefineStrategy {
fn search_threshold(
&self,
estimator: &impl ExtinctionEstimator,
config: &Config,
t_values: &[f64],
target_extinction_probability: f64,
) -> Option<ThresholdEstimate> {
// Placeholder implementation: return the initial bracketed midpoint
// until a richer iterative refinement kernel is added.
BinarySearchStrategy::new().search_threshold(
estimator,
config,
t_values,
target_extinction_probability,
)
}
}
/// Simple sweep strategy for threshold search.
///
/// This strategy computes extinction probability for all T values in a sweep
/// and identifies the crossing point. It's useful for generating data tables
/// and plots.
#[derive(Debug, Clone, Copy)]
pub struct SweepStrategy {
/// Number of runs per T value
pub runs_per_T: usize,
}
impl SweepStrategy {
pub fn new(runs_per_T: usize) -> Self {
Self { runs_per_T }
}
pub fn with_default() -> Self {
Self::new(20)
}
}
impl Default for SweepStrategy {
fn default() -> Self {
Self::with_default()
}
}
impl ThresholdSearchStrategy for SweepStrategy {
fn search_threshold(
&self,
estimator: &impl ExtinctionEstimator,
config: &Config,
t_values: &[f64],
target_extinction_probability: f64,
) -> Option<ThresholdEstimate> {
let mut results: Vec<(f64, f64)> = Vec::new();
for &t in t_values {
let mut temp_config = config.clone();
temp_config.T = t;
let extinct_count = estimator.estimate_extinction_count(&temp_config, self.runs_per_T);
let prob = ExtinctionCount::new(extinct_count, self.runs_per_T as u32).extinction_probability();
results.push((t, prob));
}
// Find crossing point
for i in 0..results.len().saturating_sub(1) {
let (t1, prob1) = results[i];
let (t2, prob2) = results[i + 1];
if bracket_threshold(
ThresholdPoint::new(t1, prob1),
ThresholdPoint::new(t2, prob2),
target_extinction_probability,
).is_some() {
return Some(ThresholdEstimate {
target_extinction_probability,
estimated_t: 0.5 * (t1 + t2),
lower_t: t1,
upper_t: t2,
});
}
}
None
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn binary_search_strategy_creation() {
let strategy = BinarySearchStrategy::new();
assert_eq!(format!("{:?}", strategy), "BinarySearchStrategy");
}
#[test]
fn binary_search_strategy_default() {
let strategy = BinarySearchStrategy::default();
assert_eq!(format!("{:?}", strategy), "BinarySearchStrategy");
}
#[test]
fn bracket_refine_strategy_creation() {
let strategy = BracketRefineStrategy::new(3);
assert_eq!(strategy.refinement_rounds, 3);
}
#[test]
fn bracket_refine_strategy_default() {
let strategy = BracketRefineStrategy::default();
assert_eq!(strategy.refinement_rounds, 2);
}
#[test]
fn sweep_strategy_creation() {
let strategy = SweepStrategy::new(50);
assert_eq!(strategy.runs_per_T, 50);
}
#[test]
fn sweep_strategy_default() {
let strategy = SweepStrategy::default();
assert_eq!(strategy.runs_per_T, 20);
}
}
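All three strategies share the same crossing test: scan adjacent (T, probability) pairs and report the bracket where the target probability is crossed. A self-contained sketch of that scan, feeding in precomputed probabilities instead of an estimator (sign-change bracketing is an assumption here; the crate's `bracket_threshold` may use slightly different inclusion rules):

```rust
// Given sorted (T, extinction probability) pairs, return the midpoint of the
// first adjacent pair that brackets the target probability, if any.
fn bracket_midpoint(points: &[(f64, f64)], target: f64) -> Option<f64> {
    for window in points.windows(2) {
        let (t1, p1) = window[0];
        let (t2, p2) = window[1];
        // Target is bracketed when it lies between the two probabilities
        // (in either direction; extinction typically falls as T grows).
        if (p1 - target) * (p2 - target) <= 0.0 {
            return Some(0.5 * (t1 + t2));
        }
    }
    None
}

fn main() {
    // Extinction probability drops with T; target 0.5 is crossed
    // between T = 20 and T = 30, giving midpoint 25.
    let sweep = [(10.0, 0.9), (20.0, 0.7), (30.0, 0.3), (40.0, 0.1)];
    assert_eq!(bracket_midpoint(&sweep, 0.5), Some(25.0));
    // A target outside the observed range is never bracketed.
    assert_eq!(bracket_midpoint(&sweep, 0.95), None);
}
```

This is also why `SweepStrategy` returns `estimated_t: 0.5 * (t1 + t2)`: the midpoint is the cheapest defensible point estimate once a bracket is found.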

rust/track2-core/src/simulation.rs Normal file

@ -0,0 +1,351 @@
//! Simulation state model for Track 2.
//!
//! This module defines the core biological state of the cost-of-substitution
//! simulation, separated from Nunney's specific threshold heuristic.
//!
//! The model is organized into three layers:
//! - Configuration: parameters that define the problem
//! - State: snapshot of population/genotype state at a generation
//! - Event: outcome or change that occurs during a simulation step
use serde::{Deserialize, Serialize};
use std::fmt;
/// Simulation configuration defining the biological problem.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Config {
/// Population carrying capacity
pub K: f64,
/// Initial population size
pub N0: f64,
/// Density-independent net reproductive rate (R = 2 * exp(r))
pub R: f64,
/// Number of generations for the moving optimum to advance one locus
pub T: f64,
/// Number of loci
pub n: usize,
/// Mutation rate parameter (per locus per generation)
pub u: f64,
/// Number of epochs (environmental periods)
pub epochs: usize,
/// Extinction probability target for threshold estimation
pub target_extinction_probability: f64,
/// Random seed for reproducibility
pub seed: Option<u64>,
}
impl Config {
    /// Calculate the derived mutation supply M = 2 * K * u.
    ///
    /// This is the quantity used for treatment comparisons in Nunney's framework.
    pub fn mutation_supply(&self) -> f64 {
        2.0 * self.K * self.u
    }
    /// Calculate the total number of generations across all epochs (T * epochs).
    ///
    /// Note: despite the `per_epoch` name, this is the whole-run generation budget.
    pub fn generations_per_epoch(&self) -> usize {
        (self.T * self.epochs as f64) as usize
    }
}
impl fmt::Display for Config {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"Config(K={}, N0={}, R={}, T={}, n={}, u={}, epochs={}, target_ext={})",
self.K, self.N0, self.R, self.T, self.n, self.u, self.epochs, self.target_extinction_probability
)
}
}
/// Simulation state at a specific generation.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct State {
/// Current generation number
pub generation: usize,
/// Current epoch number
pub epoch: usize,
/// Current environmental optimum value (moving target)
pub optimum: f64,
/// Population size
pub N: f64,
/// Female population size (if sex-differentiated)
pub N_f: f64,
/// Male population size (if sex-differentiated)
pub N_m: f64,
/// Total number of individuals tracked for extinction
pub total_tracked: usize,
/// Number of extinctions observed so far
pub extinctions: usize,
/// Extinction event timestamp (generation number, if any)
pub extinction_time: Option<usize>,
/// Mean fitness across population
pub mean_fitness: f64,
/// Mean female fecundity
pub mean_fecundity: f64,
/// Mean genotype mismatch (distance from optimum)
pub mean_mismatch: f64,
/// Signed mean tracking gap (mean genotype value minus target)
pub mean_tracking_gap: f64,
/// Number of births realized in the most recent generation
pub birth_count: usize,
/// Number of surviving offspring in the most recent generation
pub surviving_offspring_count: usize,
/// Allele means at each locus
pub allele_means: Vec<f64>,
/// Number of loci with nonzero allele variance
pub active_loci: usize,
/// Simulation termination reason
pub termination_reason: TerminationReason,
}
/// Reasons for simulation termination.
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]
pub enum TerminationReason {
/// Reached maximum number of generations
MaxGenerations,
/// Population went extinct
Extinction,
/// One sex went extinct (infeasible under lottery polygyny)
MissingSex,
/// Simulation completed successfully without extinction
Success,
}
impl fmt::Display for TerminationReason {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
TerminationReason::MaxGenerations => write!(f, "max_generations"),
TerminationReason::Extinction => write!(f, "extinction"),
TerminationReason::MissingSex => write!(f, "missing_sex"),
TerminationReason::Success => write!(f, "success"),
}
}
}
/// Event that occurs during a simulation step.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum Event {
/// Population size change
PopulationChange {
generation: usize,
delta_N: i64,
new_N: f64,
},
/// Genotype change
GenotypeChange {
generation: usize,
locus: usize,
mean_change: f64,
},
/// Extinction event
Extinction {
generation: usize,
reason: TerminationReason,
},
/// Generation complete
GenerationComplete {
generation: usize,
epoch: usize,
},
}
/// Run-level simulation result.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RunResult {
/// Configuration used for this run
pub config: Config,
/// Seed used for random number generation
pub seed: u64,
/// Final state
pub final_state: State,
/// All events that occurred during the run
pub events: Vec<Event>,
/// Total wall time in seconds
pub wall_time: f64,
/// Whether extinction occurred
pub extinct: bool,
/// Extinction time in generations (if applicable)
pub extinction_time: Option<usize>,
/// First generation t with nonzero mean allele value
pub first_nonzero_allele_t: Option<i32>,
/// Last generation t with nonzero mean allele value
pub last_nonzero_allele_t: Option<i32>,
}
impl RunResult {
/// Get the number of generations simulated
pub fn generations(&self) -> usize {
self.final_state.generation
}
/// Get the number of epochs simulated
pub fn epochs(&self) -> usize {
self.final_state.epoch
}
/// Get survival probability
pub fn survival_probability(&self) -> f64 {
if self.extinct { 0.0 } else { 1.0 }
}
/// Get extinction probability (1.0 if extinct, 0.0 otherwise)
pub fn extinction_probability(&self) -> f64 {
if self.extinct { 1.0 } else { 0.0 }
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn config_mutation_supply_calculation() {
let cfg = Config {
K: 1000.0,
N0: 500.0,
R: 10.0,
T: 50.0,
n: 3,
u: 0.001,
epochs: 5,
target_extinction_probability: 0.5,
seed: None,
};
assert_eq!(cfg.mutation_supply(), 2.0 * 1000.0 * 0.001);
assert_eq!(cfg.generations_per_epoch(), 250);
}
#[test]
fn termination_reason_display() {
assert_eq!(format!("{}", TerminationReason::Extinction), "extinction");
assert_eq!(format!("{}", TerminationReason::Success), "success");
}
#[test]
fn run_result_survival_probability() {
let result = RunResult {
config: Config {
K: 1000.0,
N0: 500.0,
R: 10.0,
T: 50.0,
n: 3,
u: 0.001,
epochs: 5,
target_extinction_probability: 0.5,
seed: Some(42),
},
seed: 42,
final_state: State {
generation: 100,
epoch: 2,
optimum: 2.0,
N: 500.0,
N_f: 250.0,
N_m: 250.0,
total_tracked: 500,
extinctions: 0,
extinction_time: None,
mean_fitness: 0.8,
mean_fecundity: 5.0,
mean_mismatch: 0.3,
mean_tracking_gap: -0.3,
birth_count: 10,
surviving_offspring_count: 8,
allele_means: vec![1.0, 1.0, 1.0],
active_loci: 3,
termination_reason: TerminationReason::Success,
},
events: vec![],
wall_time: 0.01,
extinct: false,
extinction_time: None,
first_nonzero_allele_t: Some(0),
last_nonzero_allele_t: Some(100),
};
assert_eq!(result.survival_probability(), 1.0);
assert_eq!(result.extinction_probability(), 0.0);
}
#[test]
fn run_result_extinction_probability() {
let mut result = RunResult {
config: Config {
K: 1000.0,
N0: 500.0,
R: 10.0,
T: 50.0,
n: 3,
u: 0.001,
epochs: 5,
target_extinction_probability: 0.5,
seed: Some(42),
},
seed: 42,
final_state: State {
generation: 100,
epoch: 2,
optimum: 2.0,
N: 0.0,
N_f: 0.0,
N_m: 0.0,
total_tracked: 500,
extinctions: 1,
extinction_time: Some(50),
mean_fitness: 0.1,
mean_fecundity: 0.5,
mean_mismatch: 2.0,
mean_tracking_gap: -2.0,
birth_count: 0,
surviving_offspring_count: 0,
allele_means: vec![1.0, 1.0, 1.0],
active_loci: 3,
termination_reason: TerminationReason::Extinction,
},
events: vec![],
wall_time: 0.01,
extinct: true,
extinction_time: Some(50),
first_nonzero_allele_t: Some(0),
last_nonzero_allele_t: Some(49),
};
assert_eq!(result.survival_probability(), 0.0);
assert_eq!(result.extinction_probability(), 1.0);
}
}
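The `Config` unit test above exercises the two derived quantities. Restated as free functions for a quick check (mirroring `mutation_supply` and `generations_per_epoch`, which returns the whole-run total of T generations per epoch times the epoch count):

```rust
// Mutation supply M = 2 * K * u, the treatment-comparison quantity.
fn mutation_supply(k: f64, u: f64) -> f64 {
    2.0 * k * u
}

// Whole-run generation budget: T generations per epoch times `epochs` epochs.
fn total_generations(t: f64, epochs: usize) -> usize {
    (t * epochs as f64) as usize
}

fn main() {
    // Same parameters as the unit test: K = 1000, u = 0.001, T = 50, epochs = 5.
    assert_eq!(mutation_supply(1000.0, 0.001), 2.0 * 1000.0 * 0.001);
    assert_eq!(total_generations(50.0, 5), 250);
}
```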

rust/track2-core/src/simulation_kernel.rs Normal file

@ -0,0 +1,788 @@
//! Simulation kernels for Track 2.
//!
//! `SimpleSimulationKernel` remains a lightweight placeholder. The
//! `NunneySimulationKernel` now implements a small but substantive subset of
//! the Python Track 1 dynamics so same-parameter smoke comparisons are
//! meaningful.
use rand::{Rng, SeedableRng, rngs::StdRng};
use rand_distr::{Distribution, Poisson};
use crate::extinction_trait::ExtinctionEstimator;
use crate::simulation::{Config, Event, RunResult, State, TerminationReason};
#[derive(Debug, Clone, Copy)]
pub struct SimpleSimulationKernel {
pub deterministic: bool,
}
impl SimpleSimulationKernel {
pub fn new(deterministic: bool) -> Self {
Self { deterministic }
}
}
impl Default for SimpleSimulationKernel {
fn default() -> Self {
Self { deterministic: false }
}
}
impl ExtinctionEstimator for SimpleSimulationKernel {
fn run(&self, config: &Config) -> RunResult {
let seed = config.seed.unwrap_or(0);
let mut rng = StdRng::seed_from_u64(seed);
let extinct = if self.deterministic {
config.T < 30.0
} else if config.T < 30.0 {
rng.gen::<f64>() < 0.8
} else if config.T < 60.0 {
rng.gen::<f64>() < 0.4
} else {
rng.gen::<f64>() < 0.1
};
let generations = config.generations_per_epoch().max(1);
let extinction_time = extinct.then_some((generations / 2).max(1));
let mut state = State {
generation: 0,
epoch: 0,
optimum: 0.0,
N: config.N0,
N_f: config.N0 / 2.0,
N_m: config.N0 / 2.0,
total_tracked: (config.N0 as usize).max(1),
extinctions: 0,
extinction_time,
mean_fitness: 0.8,
mean_fecundity: 5.0,
mean_mismatch: 0.3,
mean_tracking_gap: -0.3,
birth_count: 0,
surviving_offspring_count: 0,
allele_means: vec![0.0; config.n],
active_loci: 0,
termination_reason: if extinct {
TerminationReason::Extinction
} else {
TerminationReason::Success
},
};
let mut events = Vec::new();
let epoch_width = config.T.max(1.0) as usize;
for generation in 0..generations {
state.generation = generation;
state.epoch = generation / epoch_width;
state.optimum = generation as f64 / epoch_width as f64;
if extinct && generation >= extinction_time.unwrap_or(generations) {
state.N = 0.0;
state.N_f = 0.0;
state.N_m = 0.0;
state.mean_fitness = 0.0;
state.mean_fecundity = 0.0;
break;
}
let density_factor = (state.N / config.K).min(1.0);
state.N = (state.N * (1.0 + 0.01 * (1.0 - density_factor))).min(config.K);
state.N_f = 0.5 * state.N;
state.N_m = 0.5 * state.N;
if generation > 0 {
events.push(Event::GenerationComplete {
generation,
epoch: state.epoch,
});
}
}
let extinction_time = state.extinction_time;
RunResult {
config: config.clone(),
seed,
final_state: state,
events,
wall_time: 0.01,
extinct,
extinction_time,
first_nonzero_allele_t: None,
last_nonzero_allele_t: None,
}
}
}
#[derive(Debug, Clone, Copy, Default)]
pub struct NunneySimulationKernel {
pub seed: Option<u64>,
}
#[derive(Debug, Clone)]
struct Population {
genomes: Vec<i16>,
sexes: Vec<u8>,
n_loci: usize,
}
impl Population {
fn new(size: usize, n_loci: usize) -> Self {
Self {
genomes: vec![0; size * 2 * n_loci],
sexes: vec![0; size],
n_loci,
}
}
fn size(&self) -> usize {
self.sexes.len()
}
fn genome_offset(&self, individual: usize, chrom: usize, locus: usize) -> usize {
individual * 2 * self.n_loci + chrom * self.n_loci + locus
}
fn allele(&self, individual: usize, chrom: usize, locus: usize) -> i16 {
self.genomes[self.genome_offset(individual, chrom, locus)]
}
fn set_allele(&mut self, individual: usize, chrom: usize, locus: usize, value: i16) {
let idx = self.genome_offset(individual, chrom, locus);
self.genomes[idx] = value;
}
fn female_count(&self) -> usize {
self.sexes.iter().filter(|&&sex| sex == 0).count()
}
fn male_count(&self) -> usize {
self.size().saturating_sub(self.female_count())
}
fn extinct(&self) -> bool {
self.size() == 0 || self.female_count() == 0 || self.male_count() == 0
}
}
#[derive(Debug, Clone, Copy)]
struct StepSummary {
t: i32,
n: usize,
female_count: usize,
male_count: usize,
fecundity: f64,
mean_fitness: f64,
target_value: f64,
mean_allele_value: f64,
mean_genotype_value: f64,
mean_tracking_gap: f64,
extinct: bool,
}
impl NunneySimulationKernel {
pub fn new(seed: Option<u64>) -> Self {
Self { seed }
}
fn intrinsic_growth_rate(config: &Config) -> f64 {
(config.R / 2.0).ln()
}
fn t_int(config: &Config) -> i32 {
config.T.round().max(1.0) as i32
}
fn total_generations(config: &Config) -> usize {
let t = Self::t_int(config);
(config.epochs as i32 * t + t / 2).max(1) as usize
}
fn resolved_a_max(config: &Config) -> i16 {
config.epochs.max(1) as i16
}
fn initialize_population(config: &Config, rng: &mut StdRng) -> Population {
let mut population = Population::new(config.N0.max(0.0) as usize, config.n);
for sex in &mut population.sexes {
*sex = if rng.gen::<f64>() < 0.5 { 0 } else { 1 };
}
population
}
fn female_fecundity(r: f64, n_population: usize, k: f64) -> f64 {
if n_population == 0 || k <= 0.0 {
return 0.0;
}
let ratio = n_population as f64 / k;
2.0 * (r * (1.0 - ratio.powf(1.0 / r))).exp()
}
fn genotype_fitness(population: &Population, config: &Config, t: i32) -> Vec<f64> {
let r = Self::intrinsic_growth_rate(config);
let target = t as f64 / config.T;
let n = config.n as f64;
let mut out = Vec::with_capacity(population.size());
for individual in 0..population.size() {
let mut squared_distance_sum = 0.0;
for locus in 0..config.n {
let locus_mean =
0.5 * (population.allele(individual, 0, locus) as f64
+ population.allele(individual, 1, locus) as f64);
let delta = locus_mean - target;
squared_distance_sum += delta * delta;
}
out.push((-(r / n) * squared_distance_sum).exp());
}
out
}
fn tracking_metrics(population: &Population, config: &Config, t: i32) -> (f64, f64, f64, f64) {
let target_value = t as f64 / config.T;
if population.size() == 0 {
return (target_value, 0.0, 0.0, -target_value);
}
let allele_sum: f64 = population.genomes.iter().map(|&v| v as f64).sum();
let mean_allele_value = allele_sum / population.genomes.len() as f64;
let mut genotype_sum = 0.0;
for individual in 0..population.size() {
for locus in 0..config.n {
genotype_sum += 0.5
* (population.allele(individual, 0, locus) as f64
+ population.allele(individual, 1, locus) as f64);
}
}
let mean_genotype_value = genotype_sum / (population.size() * config.n) as f64;
let mean_tracking_gap = mean_genotype_value - target_value;
(
target_value,
mean_allele_value,
mean_genotype_value,
mean_tracking_gap,
)
}
fn summarize(population: &Population, config: &Config, t: i32) -> StepSummary {
if population.size() == 0 {
return StepSummary {
t,
n: 0,
female_count: 0,
male_count: 0,
fecundity: 0.0,
mean_fitness: 0.0,
target_value: t as f64 / config.T,
mean_allele_value: 0.0,
mean_genotype_value: 0.0,
mean_tracking_gap: -(t as f64 / config.T),
extinct: true,
};
}
let fitness = Self::genotype_fitness(population, config, t);
let fecundity = Self::female_fecundity(Self::intrinsic_growth_rate(config), population.size(), config.K);
let female_count = population.female_count();
let male_count = population.male_count();
let (target_value, mean_allele_value, mean_genotype_value, mean_tracking_gap) =
Self::tracking_metrics(population, config, t);
StepSummary {
t,
n: population.size(),
female_count,
male_count,
fecundity,
mean_fitness: fitness.iter().sum::<f64>() / fitness.len() as f64,
target_value,
mean_allele_value,
mean_genotype_value,
mean_tracking_gap,
extinct: population.extinct(),
}
}
fn sample_poisson(lambda: f64, rng: &mut StdRng) -> usize {
if lambda <= 0.0 {
return 0;
}
let dist = Poisson::new(lambda).expect("lambda must be positive");
dist.sample(rng) as usize
}
fn choose_mutant_allele(current: i16, a_max: i16, rng: &mut StdRng) -> i16 {
if a_max <= 0 {
return current;
}
let draw = rng.gen_range(0..a_max);
if draw < current { draw } else { draw + 1 }
}
fn produce_offspring(
population: &Population,
mother_index: usize,
father_index: usize,
config: &Config,
a_max: i16,
rng: &mut StdRng,
) -> (Vec<i16>, usize) {
let mut offspring = vec![0i16; 2 * config.n];
let mut mutation_count = 0usize;
for locus in 0..config.n {
let maternal_chrom = rng.gen_range(0..2);
let paternal_chrom = rng.gen_range(0..2);
offspring[locus] = population.allele(mother_index, maternal_chrom, locus);
offspring[config.n + locus] = population.allele(father_index, paternal_chrom, locus);
}
for allele in &mut offspring {
if rng.gen::<f64>() <= config.u {
*allele = Self::choose_mutant_allele(*allele, a_max, rng);
mutation_count += 1;
}
}
(offspring, mutation_count)
}
fn offspring_fitness(offspring: &[i16], config: &Config, t: i32) -> f64 {
let r = Self::intrinsic_growth_rate(config);
let target = t as f64 / config.T;
let n = config.n as f64;
let mut squared_distance_sum = 0.0;
for locus in 0..config.n {
let locus_mean = 0.5 * (offspring[locus] as f64 + offspring[config.n + locus] as f64);
let delta = locus_mean - target;
squared_distance_sum += delta * delta;
}
(-(r / n) * squared_distance_sum).exp()
}
fn simulate_one_generation(
population: &Population,
config: &Config,
t: i32,
rng: &mut StdRng,
) -> (Population, StepSummary, usize, usize) {
let summary = Self::summarize(population, config, t);
if summary.extinct {
return (population.clone(), summary, 0, 0);
}
let female_indices: Vec<usize> = population
.sexes
.iter()
.enumerate()
.filter_map(|(idx, &sex)| (sex == 0).then_some(idx))
.collect();
let male_indices: Vec<usize> = population
.sexes
.iter()
.enumerate()
.filter_map(|(idx, &sex)| (sex == 1).then_some(idx))
.collect();
let a_max = Self::resolved_a_max(config);
let mut next_genomes = Vec::new();
let mut next_sexes = Vec::new();
let mut realized_mutation_count = 0usize;
let mut birth_count = 0usize;
let offspring_t = t + 1;
for &mother_index in &female_indices {
let births = Self::sample_poisson(summary.fecundity, rng);
birth_count += births;
for _ in 0..births {
let father_index = male_indices[rng.gen_range(0..male_indices.len())];
let (offspring, mutation_count) = Self::produce_offspring(
population,
mother_index,
father_index,
config,
a_max,
rng,
);
realized_mutation_count += mutation_count;
let survival = Self::offspring_fitness(&offspring, config, offspring_t);
if rng.gen::<f64>() <= survival {
next_genomes.extend_from_slice(&offspring);
next_sexes.push(if rng.gen::<f64>() < 0.5 { 0 } else { 1 });
}
}
}
(
Population {
genomes: next_genomes,
sexes: next_sexes,
n_loci: config.n,
},
summary,
birth_count,
realized_mutation_count,
)
}
}
impl ExtinctionEstimator for NunneySimulationKernel {
fn run(&self, config: &Config) -> RunResult {
let seed = self.seed.or(config.seed).unwrap_or(0);
let mut rng = StdRng::seed_from_u64(seed);
let mut population = Self::initialize_population(config, &mut rng);
let t_half = Self::t_int(config) / 2;
let mut t = -t_half;
let mut last_summary = Self::summarize(&population, config, t);
let mut last_birth_count = 0usize;
let mut last_surviving_offspring_count = population.size();
let mut extinction_time = None;
let mut events = Vec::new();
let mut first_nonzero_allele_t = (last_summary.mean_allele_value != 0.0).then_some(last_summary.t);
let mut last_nonzero_allele_t = first_nonzero_allele_t;
for _ in 0..Self::total_generations(config) {
let (next_population, summary, birth_count, _mutation_count) =
Self::simulate_one_generation(&population, config, t, &mut rng);
last_summary = summary;
if last_summary.mean_allele_value != 0.0 {
if first_nonzero_allele_t.is_none() {
first_nonzero_allele_t = Some(last_summary.t);
}
last_nonzero_allele_t = Some(last_summary.t);
}
last_birth_count = birth_count;
last_surviving_offspring_count = next_population.size();
population = next_population;
events.push(Event::GenerationComplete {
generation: t.max(0) as usize,
epoch: ((t.max(0) as f64) / config.T.max(1.0)).floor() as usize,
});
            if population.extinct() {
                // If the pre-step summary already reported extinction, the terminal
                // generation is t; otherwise the population died out among the
                // offspring generation at t + 1.
                let terminal_t = if last_summary.extinct { t } else { t + 1 };
last_summary = Self::summarize(&population, config, terminal_t);
if last_summary.mean_allele_value != 0.0 {
if first_nonzero_allele_t.is_none() {
first_nonzero_allele_t = Some(last_summary.t);
}
last_nonzero_allele_t = Some(last_summary.t);
}
extinction_time = Some(terminal_t.max(0) as usize);
break;
}
t += 1;
}
        // A locus counts as "active" if any surviving individual carries an allele
        // there that differs from the first individual's maternal allele.
        let active_loci = (0..config.n)
.filter(|&locus| {
let first = population
.genomes
.get(locus)
.copied()
.unwrap_or(0);
population
.genomes
.chunks_exact(2 * config.n)
.any(|genome| genome[locus] != first || genome[config.n + locus] != first)
})
.count();
let final_state = State {
generation: last_summary.t.max(0) as usize,
epoch: ((last_summary.t.max(0) as f64) / config.T.max(1.0)).floor() as usize,
optimum: last_summary.target_value,
N: last_summary.n as f64,
N_f: last_summary.female_count as f64,
N_m: last_summary.male_count as f64,
total_tracked: population.size(),
extinctions: usize::from(last_summary.extinct),
extinction_time,
mean_fitness: last_summary.mean_fitness,
mean_fecundity: last_summary.fecundity,
mean_mismatch: last_summary.mean_tracking_gap.abs(),
mean_tracking_gap: last_summary.mean_tracking_gap,
birth_count: last_birth_count,
surviving_offspring_count: last_surviving_offspring_count,
allele_means: vec![last_summary.mean_allele_value; config.n],
active_loci,
termination_reason: if last_summary.extinct {
TerminationReason::Extinction
} else {
TerminationReason::Success
},
};
RunResult {
config: config.clone(),
seed,
final_state,
events,
            wall_time: 0.01, // placeholder value; this kernel does not measure timing
extinct: last_summary.extinct,
extinction_time,
first_nonzero_allele_t,
last_nonzero_allele_t,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use rand::SeedableRng;
fn smoke_config() -> Config {
Config {
K: 1000.0,
N0: 500.0,
R: 10.0,
T: 50.0,
n: 3,
u: 0.001,
epochs: 5,
target_extinction_probability: 0.5,
seed: Some(0),
}
}
fn zero_population(size: usize, n_loci: usize) -> Population {
let mut population = Population::new(size, n_loci);
for (idx, sex) in population.sexes.iter_mut().enumerate() {
*sex = if idx % 2 == 0 { 0 } else { 1 };
}
population
}
#[test]
fn simple_simulation_kernel_creation() {
let kernel = SimpleSimulationKernel::new(false);
assert!(!kernel.deterministic);
}
#[test]
fn simple_simulation_kernel_deterministic() {
let kernel = SimpleSimulationKernel::new(true);
assert!(kernel.deterministic);
}
#[test]
fn simple_simulation_kernel_default() {
let kernel = SimpleSimulationKernel::default();
assert!(!kernel.deterministic);
}
#[test]
fn nunney_simulation_kernel_creation() {
let kernel = NunneySimulationKernel::new(Some(42));
assert_eq!(kernel.seed, Some(42));
}
#[test]
fn nunney_simulation_kernel_default() {
let kernel = NunneySimulationKernel::default();
assert_eq!(kernel.seed, None);
}
#[test]
fn female_fecundity_matches_closed_form() {
let config = smoke_config();
let r = NunneySimulationKernel::intrinsic_growth_rate(&config);
let observed = NunneySimulationKernel::female_fecundity(r, 500, 1000.0);
let expected = 2.0 * (r * (1.0 - (0.5_f64).powf(1.0 / r))).exp();
assert!((observed - expected).abs() < 1e-12);
}
#[test]
fn genotype_fitness_is_one_at_target_and_decreases_with_distance() {
let config = Config { n: 1, ..smoke_config() };
let mut population = Population::new(2, config.n);
population.sexes = vec![0, 1];
population.genomes = vec![0, 0, 10, 10];
let near = NunneySimulationKernel::genotype_fitness(&population, &config, 0);
assert!((near[0] - 1.0).abs() < 1e-12);
assert!(near[1] < near[0]);
}
#[test]
fn choose_mutant_allele_never_returns_current_and_respects_bound() {
let mut rng = StdRng::seed_from_u64(7);
for _ in 0..200 {
let allele = NunneySimulationKernel::choose_mutant_allele(2, 5, &mut rng);
assert_ne!(allele, 2);
assert!((0..=5).contains(&allele));
}
}
#[test]
fn population_extinction_requires_zero_population_or_missing_sex() {
let empty = Population::new(0, 1);
assert!(empty.extinct());
let mut female_only = Population::new(3, 1);
female_only.sexes = vec![0, 0, 0];
assert!(female_only.extinct());
let mixed = zero_population(4, 1);
assert!(!mixed.extinct());
}
#[test]
fn total_generations_and_initial_t_match_track1_convention() {
let config = smoke_config();
assert_eq!(NunneySimulationKernel::total_generations(&config), 275);
assert_eq!(NunneySimulationKernel::t_int(&config), 50);
}
#[test]
fn simulate_one_generation_bookkeeping_is_internally_consistent() {
let config = Config { n: 1, ..smoke_config() };
let mut rng = StdRng::seed_from_u64(0);
let population = NunneySimulationKernel::initialize_population(&config, &mut rng);
let (next_population, summary, birth_count, mutation_count) =
NunneySimulationKernel::simulate_one_generation(&population, &config, 0, &mut rng);
assert_eq!(summary.n, population.size());
assert_eq!(summary.female_count + summary.male_count, population.size());
assert!(birth_count >= next_population.size());
assert!(mutation_count <= 2 * birth_count * config.n);
}
#[test]
fn offspring_inherits_only_parental_alleles_when_mutation_is_zero() {
let config = Config {
n: 1,
u: 0.0,
..smoke_config()
};
let mut population = Population::new(2, config.n);
population.sexes = vec![0, 1];
population.genomes = vec![1, 2, 7, 8];
let mut rng = StdRng::seed_from_u64(3);
let (offspring, mutation_count) =
NunneySimulationKernel::produce_offspring(&population, 0, 1, &config, 5, &mut rng);
assert_eq!(mutation_count, 0);
assert!(matches!(offspring[0], 1 | 2));
assert!(matches!(offspring[1], 7 | 8));
}
#[test]
fn repeated_offspring_generation_has_zero_mutations_when_u_is_zero() {
let config = Config {
n: 2,
u: 0.0,
..smoke_config()
};
let mut population = Population::new(2, config.n);
population.sexes = vec![0, 1];
population.genomes = vec![1, 2, 3, 4, 7, 8, 9, 10];
let mut rng = StdRng::seed_from_u64(11);
for _ in 0..100 {
let (offspring, mutation_count) =
NunneySimulationKernel::produce_offspring(&population, 0, 1, &config, 5, &mut rng);
assert_eq!(mutation_count, 0);
for locus in 0..config.n {
assert!(matches!(offspring[locus], 1 | 2 | 3 | 4));
assert!(matches!(offspring[config.n + locus], 7 | 8 | 9 | 10));
}
}
}
#[test]
fn summarize_empty_population_reports_extinction_and_negative_gap() {
let config = Config { n: 1, ..smoke_config() };
let population = Population::new(0, config.n);
let summary = NunneySimulationKernel::summarize(&population, &config, 10);
assert!(summary.extinct);
assert_eq!(summary.n, 0);
assert_eq!(summary.mean_allele_value, 0.0);
assert_eq!(summary.mean_tracking_gap, -(10.0 / config.T));
}
#[test]
fn immediate_missing_sex_extinction_produces_terminal_result() {
let config = Config {
N0: 1.0,
n: 1,
u: 0.0,
epochs: 1,
T: 10.0,
seed: Some(0),
..smoke_config()
};
let kernel = NunneySimulationKernel::new(Some(0));
let result = kernel.run(&config);
assert!(result.extinct);
assert_eq!(result.final_state.termination_reason, TerminationReason::Extinction);
assert_eq!(result.final_state.N, 1.0);
assert!(result.final_state.N_f == 0.0 || result.final_state.N_m == 0.0);
assert!(result.extinction_time.is_some());
}
#[test]
fn offspring_survival_is_monotone_with_distance_from_target() {
let config = Config {
n: 1,
T: 10.0,
..smoke_config()
};
let exact = vec![5, 5];
let near = vec![4, 4];
let far = vec![0, 0];
let exact_fit = NunneySimulationKernel::offspring_fitness(&exact, &config, 50);
let near_fit = NunneySimulationKernel::offspring_fitness(&near, &config, 50);
let far_fit = NunneySimulationKernel::offspring_fitness(&far, &config, 50);
assert!((exact_fit - 1.0).abs() < 1e-12);
assert!(exact_fit > near_fit);
assert!(near_fit > far_fit);
}
#[test]
fn run_result_generation_and_epoch_are_terminal_summary_values() {
let kernel = NunneySimulationKernel::new(Some(0));
let config = smoke_config();
let result = kernel.run(&config);
assert_eq!(result.generations(), result.final_state.generation);
assert_eq!(result.epochs(), result.final_state.epoch);
assert!(result.final_state.generation <= NunneySimulationKernel::total_generations(&config));
}
#[test]
fn non_extinction_run_finishes_at_expected_terminal_generation() {
let kernel = NunneySimulationKernel::new(Some(0));
let config = smoke_config();
let result = kernel.run(&config);
if !result.extinct {
let expected_terminal_t = (NunneySimulationKernel::total_generations(&config) as i32 - 1)
- (NunneySimulationKernel::t_int(&config) / 2);
assert_eq!(
result.final_state.generation as i32,
expected_terminal_t
);
}
}
#[test]
fn run_result_bookkeeping_fields_are_consistent() {
let kernel = NunneySimulationKernel::new(Some(0));
let config = smoke_config();
let result = kernel.run(&config);
        assert!(result.final_state.birth_count >= result.final_state.surviving_offspring_count);
assert_eq!(result.final_state.mean_mismatch, result.final_state.mean_tracking_gap.abs());
if let (Some(first), Some(last)) = (result.first_nonzero_allele_t, result.last_nonzero_allele_t) {
assert!(first <= last);
}
}
#[test]
fn nunney_simulation_kernel_matches_smoke_shape() {
let kernel = NunneySimulationKernel::new(Some(0));
let config = smoke_config();
let result = kernel.run(&config);
assert!(!result.extinct);
assert!(result.final_state.N > 0.0);
assert!(result.final_state.mean_mismatch < 1.0);
assert!(result.last_nonzero_allele_t.is_some());
}
}

#!/usr/bin/env python3
"""
Compare the Python Track 1 run summary against the Rust smoke binary over a
small seed range.

Typically invoked as ``make compare-track1-rust-smoke`` from the repository root.
"""
from __future__ import annotations
import argparse
import json
import subprocess
from pathlib import Path
from statistics import mean, pstdev
from renunney.track1_analysis import summarize_tracking
from renunney.track1_reference import Track1Parameters, simulate_run
REPO_ROOT = Path(__file__).resolve().parents[1]
RUST_CRATE = REPO_ROOT / "rust" / "track2-core"
RUST_TARGET = Path("/tmp/renunney-track2-compare")
RUST_BIN = RUST_TARGET / "release" / "examples" / "smoke_compare"
def build_rust_smoke_binary() -> None:
subprocess.run(
[
"cargo",
"build",
"--release",
"--example",
"smoke_compare",
"--target-dir",
str(RUST_TARGET),
],
cwd=RUST_CRATE,
check=True,
)
def parse_rust_output(text: str) -> dict[str, object]:
    """Parse ``key=value`` lines emitted by the Rust smoke binary into Python values."""
    parsed: dict[str, object] = {}
for line in text.splitlines():
if "=" not in line:
continue
key, raw = line.split("=", 1)
raw = raw.strip()
if raw in {"true", "false"}:
parsed[key] = raw == "true"
elif raw.startswith("Some(") and raw.endswith(")"):
parsed[key] = int(raw[5:-1])
elif raw == "None":
parsed[key] = None
else:
try:
parsed[key] = int(raw)
except ValueError:
parsed[key] = float(raw)
return parsed
def run_rust(seed: int, params: Track1Parameters) -> dict[str, object]:
proc = subprocess.run(
[
str(RUST_BIN),
"--K",
str(params.K),
"--N0",
str(params.N0),
"--R",
str(params.R),
"--T",
str(params.T),
"--n",
str(params.n),
"--u",
str(params.u),
"--epochs",
str(params.epochs),
"--seed",
str(seed),
],
cwd=RUST_CRATE,
check=True,
capture_output=True,
text=True,
)
return parse_rust_output(proc.stdout)
def run_python(seed: int, params: Track1Parameters) -> dict[str, object]:
summaries = simulate_run(params, seed=seed)
final = summaries[-1]
tracking = summarize_tracking(summaries)
return {
"extinct": bool(final.extinct),
"generation": int(final.t),
"N": int(final.N),
"female_count": int(final.female_count),
"male_count": int(final.male_count),
"target_value": float(final.target_value),
"mean_allele_value": float(final.mean_allele_value),
"mean_fitness": float(final.mean_fitness),
"fecundity": float(final.fecundity),
"mean_tracking_gap": float(final.mean_tracking_gap),
"birth_count": int(final.birth_count),
"surviving_offspring_count": int(final.surviving_offspring_count),
"first_nonzero_allele_t": tracking.first_nonzero_allele_t,
"last_nonzero_allele_t": tracking.last_nonzero_allele_t,
}
def main() -> int:
parser = argparse.ArgumentParser(description="Compare Python Track 1 and Rust smoke outputs over multiple seeds.")
parser.add_argument("--seed-start", type=int, default=0)
parser.add_argument("--seed-count", type=int, default=10)
parser.add_argument("--K", type=int, default=1000)
parser.add_argument("--N0", type=int, default=500)
parser.add_argument("--n", type=int, default=3)
parser.add_argument("--u", type=float, default=0.001)
parser.add_argument("--R", type=float, default=10.0)
parser.add_argument("--T", type=int, default=50)
parser.add_argument("--epochs", type=int, default=5)
args = parser.parse_args()
build_rust_smoke_binary()
params = Track1Parameters(
K=args.K,
N0=args.N0,
n=args.n,
u=args.u,
R=args.R,
T=args.T,
epochs=args.epochs,
)
rows: list[dict[str, object]] = []
for seed in range(args.seed_start, args.seed_start + args.seed_count):
py = run_python(seed, params)
rs = run_rust(seed, params)
rows.append(
{
"seed": seed,
"python": py,
"rust": rs,
"delta": {
"N": float(rs["N"]) - float(py["N"]),
"mean_allele_value": float(rs["mean_allele_value"]) - float(py["mean_allele_value"]),
"mean_tracking_gap": float(rs["mean_tracking_gap"]) - float(py["mean_tracking_gap"]),
"birth_count": float(rs["birth_count"]) - float(py["birth_count"]),
"surviving_offspring_count": float(rs["surviving_offspring_count"]) - float(py["surviving_offspring_count"]),
},
"extinct_match": bool(rs["extinct"]) == bool(py["extinct"]),
}
)
    metrics = [
        "N",
        "birth_count",
        "surviving_offspring_count",
        "mean_allele_value",
        "mean_tracking_gap",
    ]

    def per_seed(source: str, metric: str) -> list[float]:
        return [float(row[source][metric]) for row in rows]

    def aggregate_stats(source: str) -> dict[str, dict[str, float]]:
        return {
            metric: {
                "mean": mean(per_seed(source, metric)),
                "sd": pstdev(per_seed(source, metric)),
            }
            for metric in metrics
        }

    summary = {
        "parameters": {
            "K": params.K,
            "N0": params.N0,
            "n": params.n,
            "u": params.u,
            "R": params.R,
            "T": params.T,
            "epochs": params.epochs,
        },
        "seed_start": args.seed_start,
        "seed_count": args.seed_count,
        "extinct_match_rate": mean(1.0 if row["extinct_match"] else 0.0 for row in rows),
        "mean_abs_delta": {
            metric: mean(abs(row["delta"][metric]) for row in rows) for metric in metrics
        },
        "aggregate": {
            "python": aggregate_stats("python"),
            "rust": aggregate_stats("rust"),
            "delta_of_means": {
                metric: mean(per_seed("rust", metric)) - mean(per_seed("python", metric))
                for metric in metrics
            },
        },
        "rows": rows,
    }
print(json.dumps(summary, indent=2, sort_keys=True))
return 0
if __name__ == "__main__":
raise SystemExit(main())