OpenAI Codex port of code and text of 1989 thesis.

This commit is contained in:
welsberr 2026-03-18 12:10:23 -04:00
parent 22211b8171
commit 07e6993c5d
80 changed files with 56778 additions and 1 deletion

MIGRATION_PLAN.md Normal file

@@ -0,0 +1,260 @@
# Python 3 Migration Plan
## Scope
The original system is a cooperative composition pipeline built from three neural subsystems:
- `Bach`: a Hopfield-Tank note generator over a 5-position by 8-note grid.
- `Salieri`: a back-propagation critic trained against a rule-based classical-sequence supervisor.
- `Beethoven`: an ART1 novelty/category network over the note sequence plus one classicality bit.
The immediate goal should be a Python 3 package that reproduces the Pascal algorithms and file-driven behavior closely enough to validate compatibility, while replacing the Pascal linked-list memory model with direct numeric data structures.
## What Exists Today
### Core orchestration
- `THES/ANNCOMP.PP` is the integrated driver.
- The composition loop is effectively:
1. Generate a candidate note with the Hopfield-Tank network.
2. Evaluate/train the back-propagation network using the current note window and the rule-based instructor.
3. Pass the same window plus the classical/not-classical flag into ART1.
### Shared state
- `THES/GLOBALS.PP` defines:
- fixed note vocabulary of 8 notes,
- sequence window length of 5,
- ART1 dimensions `Max_F1_nodes = 41`, `Max_F2_nodes = 25`,
- `Common_Area_`, which is the cross-network exchange object.
### Hopfield-Tank subsystem
- `THES/ANNCOMP.PP` implements `Bach` and nested `HTN`.
- The network operates on a flattened 40-cell representation: `8 notes x 5 positions`.
- It loads a `64 x 64` weight matrix from `HTN.DAT`, but the active note grid uses the first 40 cells.
- The update rule uses:
- per-neuron activation `a`,
- output `0.5 * (1 + tanh(a / c))`,
- resistance/capacitance/input/weight/iteration scaling factors from globals.
- `THES/HTNDATA.PP` shows how the Hopfield weights were built from `SEQUENCE.DAT`, plus row/column inhibition and sequence reinforcement.
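The update rule above can be sketched as a discrete Euler step. This is a minimal sketch, not the final port: `dt`, `tau`, and `gain` stand in for the iteration, resistance/capacitance, and `c` scaling factors whose actual values live in `GLOBALS.PP`.

```python
import numpy as np

def hopfield_tank_step(a, W, I, dt=0.01, tau=1.0, gain=1.0):
    """One Euler step of the Hopfield-Tank dynamics described above.

    a: per-neuron activations, W: weight matrix, I: external inputs.
    Returns updated activations and the neuron outputs
    0.5 * (1 + tanh(a / gain)), matching the thesis output formula.
    """
    v = 0.5 * (1.0 + np.tanh(a / gain))  # neuron outputs in (0, 1)
    da = -a / tau + W @ v + I            # continuous-time state derivative
    return a + dt * da, v
```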
### Back-propagation subsystem
- `THES/BP_UNIT.PP` is a general BP implementation with:
- input, hidden, and output nodes,
- weight matrix and momentum,
- feed-forward,
- back-propagation,
- file-based parameter and weight loading.
- `THES/S61.DAT` configures Salieri as:
- 40 input nodes,
- 20 hidden nodes,
- 1 output node,
- learning rate `0.5`,
- momentum `0.5`.
- `THES/ANNCOMP.PP` converts the current 5-note window into a 40-bit one-hot vector and trains the network online against `Classical_instructor`.
### Rule-based supervisor
- `THES/CLASINST.PP` loads `SEQUENCE.DAT`.
- It converts the 5-note sequence to a digit string and returns `1` if the target suffix matches any stored example sequence, else `0`.
- This acts as the teaching signal for the BP network.
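A minimal Python sketch of that look-up, where `table` stands for the example sequences parsed from `SEQUENCE.DAT`:

```python
def classical_instructor(seq, table):
    """Return 1 if any stored example is a suffix of the note window, else 0.

    seq: window of note integers; table: example sequences as digit
    strings, as loaded from SEQUENCE.DAT. Mirrors the Pascal logic of
    comparing the target's suffix against each stored string.
    """
    target = "".join(str(n) for n in seq)
    return int(any(target.endswith(example) for example in table))
```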
### ART1 subsystem
- `THES/ANNCOMP.PP` implements `ART1`.
- F1 input is the 40-bit one-hot sequence plus one bit for `Is_classical`, for a total vector length of 41.
- F2 supports up to 25 committed categories.
- The implementation includes a nonstandard compatibility detail: when all categories are saturated and none remain eligible, vigilance is reduced by 1 percent and matching is retried.
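That saturation behavior is worth isolating in one routine. In this sketch `match_fn` is a placeholder for the ART1 eligibility/resonance test, and the `floor` guard is an addition (not in the Pascal) so the retry loop cannot run forever:

```python
def search_with_vigilance_fallback(categories, match_fn, vigilance, floor=1e-6):
    """Retry category matching with vigilance relaxed by 1 percent.

    When no category passes match_fn at the current vigilance, reduce
    vigilance by 1 percent and retry, as in the thesis-modified ART1.
    Returns (winner, final_vigilance), or (None, vigilance) once
    vigilance falls below the floor.
    """
    while vigilance >= floor:
        for j in categories:
            if match_fn(j, vigilance):
                return j, vigilance
        vigilance *= 0.99  # the 1-percent reduction
    return None, vigilance
```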
### Legacy data model problem
- `THES/STRUCT.PP` provides generic linked-list vectors and matrices (`DVE`, `HVE`) used to work around Turbo Pascal memory constraints.
- `THES/BP_UNIT.PP` stores nodes, IO vectors, and weights through those linked structures rather than direct arrays.
- That representation should not be preserved in Python except where needed for compatibility tests.
## Recommended Python Representation
Use explicit typed structures and dense arrays:
- `numpy.ndarray` for:
- Hopfield state vectors and weight matrices,
- BP activations, deltas, biases, and weights,
- ART1 F1/F2 activations and top-down/bottom-up LTM weights.
- `dataclasses.dataclass` for stable API/state containers.
- `Enum` for note identifiers only if it does not complicate file compatibility.
Recommended canonical encodings:
- `NoteSequence`: shape `(5,)`, integer values `0..8`.
- `SequenceOneHot`: shape `(40,)`, binary.
- `ArtInputVector`: shape `(41,)`, binary.
- `HopfieldWeights`: shape `(40, 40)` as the normalized active subset of the legacy file.
- `BPWeightsIH`, `BPWeightsHO` or one legacy-compatible dense square matrix, depending on whether fidelity or clarity is prioritized in a given layer of the codebase.
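The first three encodings can be sketched directly. The position-major cell ordering `p*8 + (n-1)` used here is an assumption that must be confirmed against the Pascal encoder:

```python
import numpy as np

def sequence_one_hot(notes):
    """Flatten a 5-note window (values 1..8, 0 = unset) into 40 bits."""
    vec = np.zeros(40, dtype=np.uint8)
    for p, n in enumerate(notes):
        if n:  # 0 leaves all eight cells for this position clear
            vec[p * 8 + (n - 1)] = 1
    return vec

def art_input_vector(notes, is_classical):
    """41-bit ART1 input: the one-hot window plus the classicality bit."""
    return np.append(sequence_one_hot(notes), np.uint8(is_classical))
```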
## Package Layout
```text
composer_ans/
__init__.py
types.py
encoding.py
io/
__init__.py
legacy_files.py
hopfield.py
backprop.py
art1.py
classical_rules.py
pipeline.py
compatibility.py
tests/
data/
test_encoding.py
test_classical_rules.py
test_hopfield.py
test_backprop.py
test_art1.py
test_pipeline.py
```
## API Design
Keep the public API small and deterministic.
```python
from composer_ans.pipeline import CompositionContext, CompositionPipeline
ctx = CompositionContext(notes=[0, 0, 0, 0, 0])
pipeline = CompositionPipeline.from_legacy_data("THES")
result = pipeline.step(ctx)
```
Suggested subsystem APIs:
```python
candidate = hopfield.generate_next_note(notes, params)
is_classical, bp_state = salieri.evaluate_and_train(notes, target=None)
art_result = beethoven.categorize(notes, is_classical)
```
Where:
- `target=None` means "derive target from the classical instructor", matching the Pascal integrated flow.
- Each call returns structured state useful for debugging and test baselines, not just the final scalar.
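Under those assumptions, one pipeline step composes roughly as follows; the names are illustrative, not the final package surface:

```python
def pipeline_step(ctx, hopfield, salieri, beethoven):
    """One iteration of the Pascal composition loop, per the APIs above.

    hopfield/salieri/beethoven are assumed to expose the suggested
    subsystem calls; ctx.notes is the rolling 5-note window.
    """
    candidate = hopfield.generate_next_note(ctx.notes)
    ctx.notes = ctx.notes[1:] + [candidate]  # advance the window
    is_classical, bp_state = salieri.evaluate_and_train(ctx.notes, target=None)
    art_result = beethoven.categorize(ctx.notes, is_classical)
    return candidate, is_classical, art_result
```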
## Migration Strategy
### Phase 1: Preserve semantics, not implementation style
- Recreate file readers for:
- `SEQUENCE.DAT`,
- `S61.DAT`,
- `S61.WT`,
- `HTN.DAT`.
- Recreate sequence encodings exactly:
- 5-note rolling window,
- 40-bit one-hot flattening,
- ART1 extra classicality bit.
- Recreate the rule-based instructor exactly before porting the trainable models.
Deliverable:
- A Python package that can parse legacy files and reproduce the same encoded inputs the Pascal code would produce.
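For example, the `SEQUENCE.DAT` reader reduces to a few lines, assuming (as `CLASINST.PP` suggests) one digit-string example per line with a `Max_Seq` cap:

```python
import pathlib

def load_sequence_table(path, max_seq=100):
    """Read SEQUENCE.DAT as up to max_seq digit-string examples,
    one per line, mirroring the CLASINST.PP initialization loop."""
    lines = pathlib.Path(path).read_text().splitlines()
    return [ln.strip() for ln in lines if ln.strip()][:max_seq]
```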
### Phase 2: Port Hopfield-Tank
- Implement the continuous-time iterative update as written.
- Preserve:
- noise injection behavior,
- stop condition using epsilon on alternating time buffers,
- "pick max cell in each column" post-processing.
- Isolate random number generation behind an injectable RNG so deterministic tests are possible.
Deliverable:
- `generate_next_note()` producing the same result as Pascal for fixed seeds and known sequences.
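The "pick max cell in each column" post-processing reduces to an argmax per position, assuming a position-major flattening of the 8-note-by-5-position grid (an ordering to verify against the Pascal):

```python
import numpy as np

def decode_note_grid(v):
    """Pick the winning note for each of the 5 positions.

    v: the 40-cell Hopfield output vector; returns five notes in 1..8.
    Each row of the reshaped array holds one position's 8 note cells.
    """
    grid = np.asarray(v, dtype=float).reshape(5, 8)
    return (grid.argmax(axis=1) + 1).tolist()
```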
### Phase 3: Port Salieri back-propagation
- First implement a legacy-compatible execution mode mirroring the square-node storage and update order.
- Then wrap it with a clearer façade that exposes standard layer matrices.
- Preserve:
- sigmoid behavior,
- theta updates,
- momentum handling,
- online training after every presentation,
- periodic weight dumping capability.
Deliverable:
- `evaluate_and_train()` matching legacy outputs and weight updates for a controlled presentation sequence.
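The momentum handling can be isolated in one routine. This sketch uses the `S61.DAT` rates as defaults; the delta computation producing `grad` follows the standard back-propagation derivation in `BP_UNIT.PP`:

```python
import numpy as np

def momentum_update(weights, grad, velocity, lr=0.5, momentum=0.5):
    """Apply one online weight update with momentum.

    velocity carries the previous update; lr and momentum default to
    the 0.5 values configured for Salieri in S61.DAT.
    """
    velocity = momentum * velocity - lr * grad
    return weights + velocity, velocity
```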
### Phase 4: Port Beethoven ART1
- Port the F1/F2 STM and LTM equations directly.
- Preserve:
- 41-bit input vector,
- eligibility and commitment logic,
- resonance loop,
- modified vigilance-reduction behavior on saturation.
- Keep ART1 state persistent across calls, because the Pascal version learns over the composition session.
Deliverable:
- `categorize()` returning winner, new-category flag, vigilance-change flag, and current category count.
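That return shape suggests a small dataclass; the field names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class ArtResult:
    """Structured result of one categorize() call, per the deliverable above."""
    winner: int               # index of the resonating F2 category
    new_category: bool        # True when a fresh category was committed
    vigilance_reduced: bool   # True when the saturation fallback fired
    category_count: int       # committed categories after this input
```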
### Phase 5: Rebuild the integrated pipeline
- Recreate `Common_Area_` as a Python dataclass.
- Implement a single-step pipeline equivalent to one iteration of the Pascal composition loop.
- Add an optional batch runner that emits a complete composition and an event log.
Deliverable:
- End-to-end run over a fixed number of notes using legacy data assets.
## Compatibility Plan
Compatibility should be measured in layers:
- Encoding compatibility:
- identical one-hot vectors and ART input vectors for the same note windows.
- File compatibility:
- legacy `.DAT` and `.WT` files load without manual editing.
- Behavioral compatibility:
- same classical instructor decisions,
- same Hopfield winner for fixed seed/input,
- same BP output progression for replayed presentations,
- same ART1 category decisions for replayed inputs.
- Pipeline compatibility:
- same sequence of generated notes for a fixed random seed, or if exact replication is blocked by legacy RNG differences, same per-step subsystem outputs within defined tolerances.
## Known Risks
- Pascal `Single`, file layout, and RNG behavior may not map exactly to Python defaults.
- `HTN.DAT` is written as a Pascal binary `FILE OF ARRAY[1..64,1..64] OF REAL`; a dedicated reader may be needed to confirm element size and ordering.
- The BP code relies on update order within linked structures. A mathematically equivalent refactor may still diverge numerically unless a legacy mode preserves operation order.
- ART1 has thesis-specific modifications; replacing them with textbook ART1 would break compatibility.
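For the `HTN.DAT` risk, Turbo Pascal's default 6-byte `REAL` can be decoded directly. This sketch assumes the standard layout (exponent byte biased by 129, 39-bit little-endian mantissa with an implicit leading 1, sign in the top bit of the last byte) and should be validated against known values in the file:

```python
def tp_real_to_float(b):
    """Decode one 6-byte Turbo Pascal REAL to a Python float.

    Assumed layout: b[0] is the exponent biased by 129 (0 means 0.0),
    b[1:6] hold the mantissa little-endian with the sign in the top
    bit of b[5]; an implicit 1 sits above the 39 stored mantissa bits.
    """
    exponent = b[0]
    if exponent == 0:
        return 0.0
    sign = -1.0 if b[5] & 0x80 else 1.0
    mantissa = int.from_bytes(b[1:6], "little") & ((1 << 39) - 1)
    return sign * (1.0 + mantissa / (1 << 39)) * 2.0 ** (exponent - 129)
```

Reading the full `64 x 64` matrix is then a matter of slicing the file into 6-byte records; whether those records are row-major is exactly the ordering question named above.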
## Recommended Delivery Order
1. Build legacy readers and encoders.
2. Port `Classical_instructor`.
3. Port Hopfield-Tank and verify with controlled seeds.
4. Port BP in legacy-compatible mode and replay known presentations.
5. Port ART1 with persistent state.
6. Assemble the integrated pipeline.
7. Add a second, cleaner API layer only after compatibility tests pass.
## Immediate Next Step
Implement the non-neural compatibility layer first:
- legacy file parsers,
- note/sequence encoders,
- rule-based classical instructor,
- golden tests based on the files already in `THES`.
That gives a stable foundation for porting the three neural subsystems without losing track of what the original program actually did.

README.md

@@ -1,3 +1,100 @@
# TriuneCadence
TriuneCadence is a Python implementation of a modular neural music-composition system inspired by Wesley R. Elsberry's 1989 master's thesis, 'Integration and Hybridization in Neural Network Modelling', on constrained melodic composition.
It combines three different network families in one pipeline:
- a Hopfield-Tank note generator
- a back-propagation critic (`Salieri`)
- an ART1 novelty/category module (`Beethoven`)
The repository includes:
- a modern Python codebase with generic network modules and thesis-specific adapters
- legacy thesis source, text, and data files in [`THES/`](./THES)
- timing, entropy, and predictability analysis for generated note sequences
- JSON serialization for learned model state and run reports
## Why This Repo Exists
The original system was implemented in Turbo Pascal on late-1980s hardware under severe memory constraints. That led to pointer-heavy data structures and implementation complexity that obscured what was, architecturally, a strong multi-network design.
This repository keeps the core ideas accessible:
- generic reusable implementations of the underlying network families
- a thesis-faithful composition pipeline built on top of those generic modules
- a practical environment for experimentation, analysis, and historical comparison
## Quick Start
Run a short composition from the thesis data:
```bash
python -m composer_ans --thes-root THES --notes 16
```
Or, if installed as a package:
```bash
triune-cadence --thes-root THES --notes 16
```
Save model state and a run report:
```bash
triune-cadence \
--thes-root THES \
--notes 32 \
--save-salieri salieri.json \
--save-beethoven beethoven.json \
--save-report run.json
```
## Sweepable Parameters
The CLI currently exposes a few parameters that are useful for experiments:
- `--object-threshold`
- `--max-attempts-per-note`
- `--art-vigilance`
- `--art-vigilance-decay`
Saved run reports include those parameters along with:
- note sequence
- per-note generation time
- total runtime
- unigram entropy
- first-order conditional entropy
- normalized entropy
- predictability
- redundancy
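The entropy metrics are standard Shannon quantities; for example, over the 8-note alphabet:

```python
import math
from collections import Counter

def unigram_entropy(notes):
    """Shannon entropy in bits of the note distribution."""
    counts = Counter(notes)
    total = len(notes)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def normalized_entropy(notes, alphabet_size=8):
    """Entropy scaled to [0, 1] by its log2(alphabet_size) maximum;
    redundancy is 1 minus this value."""
    return unigram_entropy(notes) / math.log2(alphabet_size)
```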
## Project Layout
Core Python modules live in [`composer_ans/`](./composer_ans):
- generic Hopfield-Tank core: [`composer_ans/hopfield.py`](./composer_ans/hopfield.py)
- generic back-propagation core: [`composer_ans/backprop.py`](./composer_ans/backprop.py)
- generic ART1 core: [`composer_ans/art1.py`](./composer_ans/art1.py)
- thesis-specific wrappers: [`composer_ans/salieri.py`](./composer_ans/salieri.py), [`composer_ans/beethoven.py`](./composer_ans/beethoven.py)
- integrated composition pipeline: [`composer_ans/pipeline.py`](./composer_ans/pipeline.py)
- analysis and reporting: [`composer_ans/analysis.py`](./composer_ans/analysis.py), [`composer_ans/reporting.py`](./composer_ans/reporting.py)
Legacy materials are in [`THES/`](./THES).
## Historical Context
The thesis reports that the integrated system generated 152 notes in about three hours on a 16 MHz 80386-class machine, and in about fifteen hours on an 8088-based machine with an 8087 coprocessor. This Python version can report per-note generation time directly so present-day runs can be compared against those historical figures.
## Development
Run the test suite with:
```bash
pytest -q
```
## Related Repo Notes
The original migration planning artifact is preserved in [`MIGRATION_PLAN.md`](./MIGRATION_PLAN.md).

THES/ANN.PP Normal file

@@ -0,0 +1,193 @@
UNIT ANN;
{
This unit provides several functions of general use in Artificial Neural
Network (ANN) modelling.
}
{
Copyright 1989 by Wesley R. Elsberry. All rights reserved.
Commercial use of this software is prohibited without written consent of
the author.
For information, bug reports, and updates contact
Wesley R. Elsberry
528 Chambers Creek Drive South
Everman, Texas 76140
Telephone: (817) 551-7018
}
INTERFACE
{PUBLIC DECLARATIONS}
FUNCTION Close_enough (target, plus_minus, x : DOUBLE) : BOOLEAN;
FUNCTION Gaussian_noise (mean, variance : DOUBLE) :DOUBLE;
FUNCTION Tanh (rr : DOUBLE) :DOUBLE ;
FUNCTION Linear (m, B, X : DOUBLE):DOUBLE;
FUNCTION Linear_ramp (low, high, x : DOUBLE) : DOUBLE;
FUNCTION Threshold (low, high, thresh, x : DOUBLE):DOUBLE;
FUNCTION Sigmoid (range,slope_mod,shift,X : DOUBLE):DOUBLE;
FUNCTION Signum (xx : DOUBLE):INTEGER;
IMPLEMENTATION
{PRIVATE DECLARATIONS}
CONST
mach_inf = 1E37;
exp_max = 80.0;
TYPE
REAL = DOUBLE;
{IMPLEMENTATIONS OF PROCEDURES AND FUNCTIONS}
FUNCTION Close_enough (target, plus_minus, x : REAL) : BOOLEAN;
{
Given a target and an absolute value of allowed deviation (plus_minus),
Close_enough returns TRUE if the tested value (x) is within the
defined interval.
}
BEGIN {}
IF (x >= (target - plus_minus)) AND (x <= (target + plus_minus))
THEN {}
BEGIN
Close_enough := TRUE;
END
ELSE {}
BEGIN
Close_enough := FALSE;
END;
END; {}
FUNCTION Gaussian_noise(mean, variance : REAL) :REAL;
{Produces random numbers which conform to a Gaussian distribution}
VAR
u1, u2, x : REAL;
BEGIN {Gaussian_noise}
u1 := Random;
u2 := Random;
x := Sqrt(-2*Ln(u1))*Cos(2*Pi*u2);
x := variance*x + mean;
Gaussian_noise := x;
END; {Gaussian_noise}
{
Activation functions
}
FUNCTION tanh(rr : REAL) :REAL ;
{returns the hyperbolic tangent of rr}
BEGIN {tanh}
IF (rr > Exp_Max) THEN {}
BEGIN
rr := Exp_Max;
END;
IF (rr < -Exp_Max) THEN {}
BEGIN
rr := -Exp_max;
END;
tanh := (Exp(rr) - Exp(-rr)) / (Exp(rr) + Exp(-rr));
END;
FUNCTION Linear (m, B, X : REAL):REAL;
{
Linear returns the parameter value times slope, plus intercept.
}
BEGIN {Linear}
Linear := X*m + B;
END; {Linear}
FUNCTION Linear_ramp (LOW, HIGH, X : REAL) : REAL;
{
Returns X when X is between LOW and HIGH, the appropriate bound
otherwise.
}
BEGIN {Linear_ramp}
IF (X < HIGH) AND (X > LOW) THEN
{}
BEGIN
Linear_ramp := X;
END
ELSE {}
BEGIN
IF (x >= HIGH) THEN {}
BEGIN
Linear_ramp := HIGH;
END
ELSE {}
BEGIN
Linear_ramp := LOW;
END;
END;
END; {Linear_ramp}
FUNCTION Threshold(LOW,HIGH,THRESH,X : REAL):REAL;
{
Returns LOW when X is below THRESH and HIGH when X is greater
than or equal to THRESH.
}
BEGIN {Threshold}
IF (X < THRESH) THEN {}
BEGIN
Threshold := LOW;
END
ELSE {}
BEGIN
Threshold := HIGH;
END;
END; {Threshold}
FUNCTION Sigmoid(range,slope_mod,shift,X : REAL):REAL;
{
Function of the form :
[ range / (1 + exp(-slope_mod * X)) ] - shift
range - determines the range of values, 0..range
slope_mod - modifies the slope of the curve
shift - changes the range from 0..range to (0-shift)..(range-shift)
}
CONST
Machine_Infinity = 1E37;
VAR
Temp : REAL;
BEGIN {Sigmoid}
Temp := 0.0 - (Slope_mod * X);
IF Temp > Exp_Max THEN Temp := Exp_Max;
IF Temp < -Exp_Max THEN Temp := -Exp_Max;
Temp := Exp(Temp);
Sigmoid := (range/(1+(Temp))) - shift;
END; {Sigmoid}
FUNCTION signum(xx : REAL):INTEGER;
BEGIN
IF xx >= 0.0 THEN signum := 1
ELSE signum := -1;
END;
BEGIN {INITIALIZATION}
END.


THES/ANNCOMP.PP Normal file

File diff suppressed because it is too large.

THES/ANSI_Z.PP Normal file

@@ -0,0 +1,377 @@
UNIT ANSI_Z;
{
This unit provides certain ANSI and VT-52 screen control functions.
}
{
Copyright 1989 by Wesley R. Elsberry. All rights reserved.
Commercial use of this software is prohibited without written consent of
the author.
For information, bug reports, and updates contact
Wesley R. Elsberry
528 Chambers Creek Drive South
Everman, Texas 76140
Telephone: (817) 551-7018
}
INTERFACE
USES DOS, MISC1;
TYPE
ANSI_MODE_ = (NULL_,ANSI_,VT52_,IBM_PC_);
CONST
ANSI_MODE : ANSI_MODE_ = ANSI_;
PROCEDURE ANSI_CLRSCR;
{Clear the screen using ANSI control}
PROCEDURE ANSI_CUU(VAR ii : INTEGER);
{Cursor up}
PROCEDURE ANSI_CUD(VAR ii : INTEGER);
{Cursor down}
PROCEDURE ANSI_CUF(VAR ii : INTEGER);
{Cursor forward or right}
PROCEDURE ANSI_CUB(VAR ii : INTEGER);
{Cursor backward or left}
PROCEDURE ANSI_EEOL;
{Erase to End Of Line (VT52)}
PROCEDURE ANSI_CUH;
{Cursor home}
PROCEDURE ANSI_CUP(line, col : INTEGER);
{Cursor position}
IMPLEMENTATION
TYPE
Position_ = RECORD
l : BYTE;
c : BYTE;
END;
CONST
C_pos : Position_ = (l : 0;
c : 0);
VAR
inch : CHAR;
{ ANSI Control sequences
ESC [ Pn ; Pn R -> Cursor Position Report (CPR)
ESC [ Pn D -> Cursor Backward (CUB)
ESC [ Pn B -> Cursor Down (CUD)
ESC [ Pn C -> Cursor Forward (CUF)
ESC [ Pn ; Pn H -> Cursor Position (CUP)
ESC [ Pn A -> Cursor Up (CUU)
ESC [ Pn c -> Device Attributes (DA)
ESC # 8 -> Screen Alignment Display (DECALN)
ESC Z -> Identify Terminal (DECID)
ESC = -> Keypad Application Mode (DECKPAM)
ESC > -> Keypad Numeric Mode (DECKPNM)
ESC 8 -> Restore Cursor (DECRC)
ESC [ <sol> ; <par> ; <nbits> ; <xspeed> ; <rspeed> ; <clkmul> ; <flags> x
-> Report Terminal Parameters (DECREPTPARM)
ESC [ <sol> x -> Request Terminal Parameters (DECREQTPARM)
ESC 7 -> Save Cursor (DECSC)
ESC [ Pn ; Pn r -> Set Top and Bottom Margins (DECSTBM)
ESC [ Ps n -> Device Status Report (DSR)
ESC [ Ps J -> Erase in Display (ED)
ESC [ Ps K -> Erase in Line (EL)
ESC H -> Horizontal Tabulation Set (HTS)
ESC [ Pn ; Pn f -> Horizontal and Vertical Position (HVP)
ESC D -> Index (IND)
ESC E -> Next Line (NEL)
ESC M -> Reverse Index (RI)
ESC c -> Reset to Initial State (RIS)
ESC [ Ps ; Ps ; ... ; Ps l -> Reset Mode (RM)
ESC ( A | B | 0 | 1 | 2 -> Select Character Set (SCS)
ESC [ Ps ; ... ; Ps m -> Select Graphic Rendition (SGR)
ESC Ps ; ... Ps h -> Select Mode (SM)
ESC [ Ps g -> Tabulation Clear (TBC)
VT52 Mode control sequences
ESC A -> Cursor Up
ESC B -> Cursor Down
ESC C -> Cursor Right
ESC D -> Cursor Left
ESC F -> Enter Graphics Mode
ESC G -> Exit Graphics Mode
ESC H -> Cursor to Home
ESC I -> Reverse Line Feed
ESC J -> Erase To End Of Screen
ESC K -> Erase to End Of Line
ESC Y line column -> Direct Cursor Address
ESC Z -> Identify
ESC = -> Enter Alternate Keypad Mode
ESC > -> Exit Alternate Keypad Mode
ESC < -> Enter ANSI Mode
}
{
CASE ANSI_MODE OF
ANSI_ : BEGIN
END;
VT52_ : BEGIN
END;
IBM_PC_ : BEGIN
END;
END;
}
{$V-}
FUNCTION INT_TO_STR (ii : INTEGER):STRING;
VAR
tstr : STRING;
BEGIN
Str(ii,tstr);
int_to_str := tstr;
END;
{$V-}
PROCEDURE ANSI_CLRSCR ;
{Clear the screen using ANSI control}
BEGIN {}
CASE ANSI_MODE OF
ANSI_ : BEGIN
Write(Output,ascii_esc,ascii_obracket,ascii_two,'J');
END;
VT52_ : BEGIN
Write(Output,ascii_esc,'H',ascii_esc,'J',ascii_esc,'H');
END;
{ IBM_PC_ : BEGIN
CRT.CLRSCR;
END; }
END;
c_pos.l := 0;
c_pos.c := 0;
END; {}
PROCEDURE ANSI_CUU(VAR ii : INTEGER);
{Cursor up}
BEGIN {}
IF ii <= 1 THEN ii := 1;
CASE ANSI_MODE OF
ANSI_ : BEGIN
Write(Output,ascii_esc,ascii_obracket);
IF (ii > 1) THEN Write(Output,int_to_str(ii));
Write(Output,'A');
END;
VT52_ : BEGIN
Write(Output,ascii_esc,'A');
END;
{ IBM_PC_ : BEGIN
c_pos.l := CRT.WHEREY;
c_pos.c := CRT.WHEREX;
IF c_pos.l-ii >= 0 THEN
c_pos.l := c_pos.l - ii
ELSE
c_pos.l := 0;
CRT.GOTOXY(c_pos.c,c_pos.l);
END; }
END;
END; {}
PROCEDURE ANSI_CUD(VAR ii : INTEGER);
{Cursor down}
BEGIN {}
CASE ANSI_MODE OF
ANSI_ : BEGIN
Write(Output,ascii_esc,ascii_obracket);
IF (ii > 1) THEN Write(Output,int_to_str(ii));
Write(Output,'B');
END;
VT52_ : BEGIN
Write(Output,ascii_esc,'B');
END;
{ IBM_PC_ : BEGIN
c_pos.l := CRT.WHEREY;
c_pos.c := CRT.WHEREX;
IF c_pos.l+ii <= 24 THEN
c_pos.l := c_pos.l + ii
ELSE
c_pos.l := 24;
CRT.GOTOXY(c_pos.c,c_pos.l);
END; }
END;
END; {}
PROCEDURE ANSI_CUF(VAR ii : INTEGER);
{Cursor forward or right}
BEGIN {}
CASE ANSI_MODE OF
ANSI_ : BEGIN
Write(Output,ascii_esc,ascii_obracket);
IF (ii > 1) THEN Write(Output,int_to_str(ii));
Write(Output,'C');
END;
VT52_ : BEGIN
Write(Output,ascii_esc,'C');
END;
{ IBM_PC_ : BEGIN
c_pos.l := CRT.WHEREY;
c_pos.c := CRT.WHEREX;
IF c_pos.c+ii <= 79 THEN
c_pos.c := c_pos.c + ii
ELSE
c_pos.c := 79;
CRT.GOTOXY(c_pos.c,c_pos.l);
END;}
END;
END; {}
PROCEDURE ANSI_CUB(VAR ii : INTEGER);
{Cursor backward or left}
BEGIN {}
CASE ANSI_MODE OF
ANSI_ : BEGIN
Write(Output,ascii_esc,ascii_obracket);
IF (ii > 1) THEN Write(Output,int_to_str(ii));
Write(Output,'D');
END;
VT52_ : BEGIN
Write(Output,ascii_esc,'D');
END;
{ IBM_PC_ : BEGIN
c_pos.l := CRT.WHEREY;
c_pos.c := CRT.WHEREX;
IF ((c_pos.c-ii) >= 0) THEN
c_pos.c := c_pos.c - ii
ELSE
c_pos.c := 0;
CRT.GOTOXY(c_pos.c,c_pos.l);
END; }
END;
END; {}
PROCEDURE ANSI_EEOL;
{Erase to End Of Line (VT52)}
BEGIN {}
CASE ANSI_MODE OF
ANSI_ : BEGIN
Write(Output,ascii_esc,ascii_obracket,ascii_zero,'K');
END;
VT52_ : BEGIN
Write(Output,ascii_esc,'K');
END;
{ IBM_PC_ : BEGIN
CRT.CLREOL;
END; }
END;
END; {}
PROCEDURE ANSI_EEOS;
{Erase to End Of Screen (VT52)}
BEGIN {}
CASE ANSI_MODE OF
ANSI_ : BEGIN
END;
VT52_ : BEGIN
Write(Output,ascii_esc,'J');
END;
{ IBM_PC_ : BEGIN
END; }
END;
END; {}
PROCEDURE ANSI_CUH;
{Cursor home}
BEGIN {}
CASE ANSI_MODE OF
ANSI_ : BEGIN
Write(Output,ascii_esc,ascii_obracket,'0;0','H');
END;
VT52_ : BEGIN
Write(Output,ascii_esc,'H');
END;
{ IBM_PC_ : BEGIN
CRT.GOTOXY(BYTE(1),BYTE(1));
END; }
END;
c_pos.l := 0;
c_pos.c := 0;
END; {}
PROCEDURE ANSI_CUP(line, col : INTEGER);
{Cursor position}
BEGIN {}
line := line MOD 256;
col := col MOD 256;
CASE ANSI_MODE OF
ANSI_ : BEGIN
Write(Output,ascii_esc,ascii_obracket,int_to_str(line),';',
int_to_str(col),'f');
END;
VT52_ : BEGIN
Write(Output,ascii_esc,'Y',Chr(Ord(BYTE(line+32))),Chr(Ord(
BYTE(col+32))));
END;
{ IBM_PC_ : BEGIN
CRT.GOTOXY(BYTE(col+1),BYTE(line+1));
END; }
END;
c_pos.l := line;
c_pos.c := col;
END; {}
{
IF (ANSI_MODE = ANSI) THEN BEGIN
END
ELSE BEGIN {VT52}
END;
}
BEGIN {Initialization}
ASSIGN (INPUT,'');
RESET(INPUT);
ASSIGN(OUTPUT,'');
REWRITE(OUTPUT);
WRITELN(OUTPUT);
REPEAT
WRITE('Is this machine''s video 1) ANSI or 2) VT-52 compatible ?');
READLN(inch);
inch := UPCASE(inch);
UNTIL (inch IN ['1','2','3']);
CASE inch OF
'1' : ANSI_MODE := ANSI_;
'2' : ANSI_MODE := VT52_;
{ '3' : ANSI_MODE := IBM_PC_; }
END;
ANSI_CLRSCR;
END.


THES/B61T.1 Normal file

@@ -0,0 +1,153 @@
8
1
4
1
1
2
5
1
5
8
2
5
1
4
2
3
4
3
2
3
8
4
3
2
2
3
2
6
5
2
7
8
8
2
3
4
1
4
8
8
2
2
1
4
7
8
4
1
5
1
3
1
8
8
5
1
4
6
7
1
4
8
7
5
1
4
8
2
5
5
8
1
8
4
1
4
5
4
1
4
2
8
3
1
4
8
1
4
1
7
5
2
4
2
8
2
7
6
4
3
6
3
2
7
8
5
1
6
4
2
2
2
4
7
5
8
8
6
7
8
4
6
6
3
5
4
2
1
6
2
4
5
2
2
4
3
6
8
4
6
8
7
5
1
1
2
8
5
1
1
4
5


THES/B61U.1 Normal file

@@ -0,0 +1,153 @@
5
2
4
5
3
8
4
8
5
6
1
1
8
5
4
4
3
6
2
6
4
1
5
4
5
3
5
1
8
8
7
6
5
3
8
2
7
4
2
3
4
7
5
2
8
6
5
1
3
3
7
8
1
2
4
6
8
7
5
5
2
8
6
6
4
8
7
2
2
1
8
1
4
2
2
7
6
5
7
1
6
3
1
5
7
4
4
3
6
7
6
3
3
6
7
8
8
5
4
1
3
7
2
4
2
4
7
2
5
5
8
1
3
4
3
2
3
5
4
6
2
1
3
8
7
3
8
8
5
3
3
3
5
4
8
1
3
7
3
2
7
1
3
7
5
5
2
2
4
8
2
7


THES/BEETHOVN.MUS Normal file

@@ -0,0 +1,209 @@
4
5
4
1
2
2
7
4
7
1
5
8
1
2
3
1
7
3
4
5
5
3
6
2
2
7
4
3
1
8
2
8
7
5
1
2
7
1
8
6
3
3
5
8
8
4
5
1
2
8
3
3
4
7
8
2
7
1
3
8
6
3
4
6
1
7
2
2
1
6
6
4
8
3
3
1
3
1
4
3
6
4
7
4
3
8
6
6
4
1
6
6
3
6
2
2
8
3
7
5
4
2
4
2
5
7
1
5
7
8
8
6
1
6
5
3
2
3
8
4
1
8
4
5
2
2
7
2
6
3
5
5
8
7
3
1
5
1
7
7
6
3
5
2
1
1
8
4
8
1
6
1
2
3
6
5
2
3
7
1
1
6
3
8
6
7
3
4
2
4
6
3
8
2
6
4
4
4
2
3
5
2
8
3
3
4
6
1
4
6
2
6
1
8
7
1
7
5
6
1
2
8
8
5
4
1
4
6
4

THES/BP_UNIT.PP Normal file

File diff suppressed because it is too large.

THES/CLASCOMP.PP Normal file

@@ -0,0 +1,94 @@
PROGRAM classical_composition (Input,Output);
{
This program composes note sequences in a manner designed to conform as
far as possible to a set of example sequences.
}
{
Copyright 1989 by Wesley R. Elsberry. All rights reserved.
Commercial use of this software is prohibited without written consent of
the author.
For information, bug reports, and updates contact
Wesley R. Elsberry
528 Chambers Creek Drive South
Everman, Texas 76140
Telephone: (817) 551-7018
}
USES
DOS, misc1, ANSI_Z, globals, clasinst;
{General}
VAR
ii, jj, kk : INTEGER;
notes : notes_;
sinch : CHAR;
scon, outfname : STRING;
outf : TEXT;
cnt,success : INTEGER;
ana : ARRAY[1..8] OF INTEGER;
ana_cnt : INTEGER;
PROCEDURE fill_ana (notes : notes_);
VAR
ii, tr : INTEGER;
BEGIN
ana_cnt := 0;
FOR ii := 1 TO 8 DO BEGIN
notes[5] := ii;
ana[ii] := classical_instructor(notes);
END;
FOR ii := 1 TO 8 DO BEGIN
IF ana[ii] = 1 THEN BEGIN
INC(ana_cnt);
ana[ana_cnt] := ii;
END;
END;
END;
BEGIN
Randomize;
FOR ii := 1 TO 5 DO BEGIN
notes[ii] := 0;
END;
{Get filename to test}
Write('Name of file to process: ');
Readln(outfname);
{Open for output}
Assign(outf,outfname);
Rewrite(outf);
FOR ii := 1 TO 10000 DO BEGIN
fill_ana(notes);
IF ana_cnt <> 0 THEN BEGIN
notes[5] := ana[(Random(ana_cnt) + 1)];
END
ELSE BEGIN
notes[5] := Random(8) + 1;
END;
FOR jj := 1 TO 4 DO BEGIN
notes[jj] := notes[jj+1];
END;
Writeln(outf,notes[5]);
END;
Close(outf);
END.


THES/CLASINST.PP Normal file

@@ -0,0 +1,120 @@
UNIT ClasInst; {Classical_Instructor}
{
This unit provides a critique of note sequences based on a data file of
example sequences, 'SEQUENCE.DAT'.
}
{
Copyright 1989 by Wesley R. Elsberry. All rights reserved.
Commercial use of this software is prohibited without written consent of
the author.
For information, bug reports, and updates contact
Wesley R. Elsberry
528 Chambers Creek Drive South
Everman, Texas 76140
Telephone: (817) 551-7018
}
INTERFACE
USES DOS, misc1, ANSI_Z, globals;
{General}
FUNCTION Classical_instructor(Seq : Notes_):INTEGER;
{This function does a look-up of a sequence of notes and returns a 1 if it
is a listed sequence, a 0 if it is not recognized as being a classical
sequence. This function is used in the training of Salieri, the PDP network.}
IMPLEMENTATION
CONST
Max_Seq = 100;
TYPE
N_str_ = STRING[v_len_in];
N_Str_Ary_Ptr_ = ^N_Str_Ary_;
N_Str_Ary_ = ARRAY[1..Max_Seq] OF N_Str_;
CONST
Seq_Table : N_Str_ary_ptr_ = NIL;
Number_of_seqs : INTEGER = 0;
VAR
Target, Instring : N_Str_;
Inf : TEXT;
Inchar : CHAR;
ii, jj : INTEGER;
Found : BOOLEAN;
nchar : CHAR;
FUNCTION Classical_instructor(Seq : Notes_):INTEGER;
{This function does a look-up of a sequence of notes and returns a 1 if it
is a listed sequence, a 0 if it is not recognized as being a classical
sequence. This function is used in the training of Salieri, the PDP
network.}
{Determine if a sequence of notes can be considered to be classical in
form. Does this by executing a look-up matching against the last
n notes in the passed-in sequence. Returns 0 if not found, 1 if found.
Requires the following TYPE definition:
Notes_ = ARRAY[1..v_len_in] OF INTEGER;
}
BEGIN {Classical_instructor}
{Convert notes to string representation}
Target := ''; {Clear string}
FOR ii := 1 TO v_len_out DO BEGIN
nchar := Chr(seq[ii]+48);
Target := Target + nchar;
END;
ANSI_CUP(0,39);
Write('Classical_Instructor: Target: ',target);
ANSI_CUP(23,0);
{Run through possible sequences, mark if found}
ii := 1;
Found := FALSE;
REPEAT {}
jj := Length(Seq_Table^[ii]);
Found := (Copy(Target,v_len_out-jj+1,jj) = Seq_Table^[ii]);
ii := ii + 1;
UNTIL (ii>Number_of_seqs) OR (Found);
{}
IF (Found) THEN {Return 1}
BEGIN
Classical_instructor := 1;
END
ELSE {Return 0}
BEGIN
Classical_instructor := 0;
END;
END; {Classical_instructor}
BEGIN
ii := 1;
NEW(Seq_Table); {Allocate space for table}
ASSIGN(Inf,'SEQUENCE.DAT'); {Set up to read data}
RESET(Inf);
WHILE (NOT EOF(Inf)) AND (ii <= Max_Seq) DO {Get data, put in table}
BEGIN
READLN(Inf,Seq_Table^[ii]);
ii := ii + 1;
END;
CLOSE(Inf);
Number_of_seqs := ii - 1;
END.


THES/COMPCOOP.TXT Normal file

@@ -0,0 +1,379 @@
.okidata9
.ipr//
.nopage break
.page length 66
.lpi6
.above header 3
.below header 3
.above footer 3
.below footer 3
.page break
.page number 1
.pindent 15
.pitch12
.head//
.foot/ #/
.ipr/T/
Competing Network Models and Problem-Solving
(Poster Presentation at the First Annual Meeting of the
International Neural Network Society, September 6-10, 1988.)
Diane J. Blackwood,
Department of Biomedical Engineering,
University of Texas at Arlington
Wesley R. Elsberry,
Department of Computer Science,
University of Texas at Arlington
and
Sam Leven,
Neural Systems and Science,
45 San Jacinto Way
San Francisco, CA 94127
ABSTRACT
Three of the most-often discussed neural networks models are
analyzed and differentiated. The Hopfield, PDP, and ART models
ask different questions, it is asserted -- and offer different
answers for analyzing and construing complex environments. The
three may not be competitors but, rather, complements. In fact,
they may replicate different neural processes (Leven, 1987b). We
seek to demonstrate the value of each model -- in a single case
study.
The model offered by Hopfield (e.g., 1982) represents a
fast-converging computable technique for analyzing highly limited
classes of inputs. The PDP model (Rumelhart, et al., 1986)
offers the prospect of adoption of varied schemas, at the cost of
a larger, more complex system. The ART model (e.g., Carpenter,
et al., 1987a) allows the greatest adaptability, including the
capacity to vary vigilance levels and emulate many neural
functions -- with the costs of much greater complexity and strain
on system resources.
We present a single system, including analysis of different
aspects of a problem by Hopfield, PDP, and ART networks, as an
example of the potential for including many capabilities within
the same environment.
.start page
While self-criticism in the neural network community is not
unusual (e.g., Rumelhart, et al., 1986, Ch. 1; Grossberg, 1987a),
we may find rapprochement among "competing paradigms" more
effective than the occasional nastiness we encounter. Some
problems, especially in complex controls on robotics, may be best
addressed by a cooperative approach.
In fact, the three paradigms most often considered mutually
exclusive (Hopfield, PDP, and ART) may actually represent
different neural processes (Leven, 1987a). In any case, they
clearly contemplate separate issues -- and may be best in
approaching distinct problems.
Hopfield's model (Hopfield, 1982; Hopfield and Tank, 1986)
represents a fast-converging computable technique for analyzing
stereotyped or highly limited classes of inputs. Achieved minima
have the virtue of remaining highly stable (representing
permanent learning). This virtue has the accompanying cost, of
course, of minimizing adaptability -- recognizing new aspects of
data is not seriously contemplated for a stable implementation.
The model has a notable tolerance for data sets containing great
amounts of simple noise; however, it tends to shrink from
"multi-flavored" problems, which require category or schema formation in
an extensive environment.
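The stability-for-adaptability trade described above can be made concrete with a minimal sketch (an illustrative toy, not the thesis code): a binary Hopfield net stores a single pattern in Hebbian outer-product weights and recovers it from a corrupted probe by repeated sign-threshold updates.

```python
def hopfield_recall(pattern, probe, steps=20):
    """Recover a stored +/-1 pattern from a noisy probe via a
    Hebbian outer-product weight matrix (zero diagonal) and
    synchronous sign-threshold updates."""
    n = len(pattern)
    w = [[0.0 if i == j else float(pattern[i] * pattern[j])
          for j in range(n)] for i in range(n)]
    state = list(probe)
    for _ in range(steps):
        acts = [sum(w[i][j] * state[j] for j in range(n)) for i in range(n)]
        state = [1 if a >= 0 else -1 for a in acts]
    return state

stored = [1, -1, 1, -1, 1, -1]
noisy = [-1, -1, 1, -1, 1, -1]   # first bit flipped
print(hopfield_recall(stored, noisy))  # prints the stored pattern: [1, -1, 1, -1, 1, -1]
```

With one stored pattern and a one-bit flip, convergence takes a single synchronous step; capacity limits and spurious minima only appear as more patterns are loaded.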
The model of the Parallel Distributed Processing (PDP) group
(Rumelhart, et al., 1986) contemplates "schema formation",
seeking to apply standard cognitive psychological insights to
pattern recognition and category formation processes. They have
sought to take minimal anatomies and build, following the work of
Schank and Abelson (1977), basic semantic structures.
The PDP school has achieved notable successes in
representing language (Sejnowski, 1986) and other areas with
stable knowledge domains. Where "dynamic schemata" (Schank,
1982) are generic to a problem -- where existing memory
structures must be modified -- the strength of the simulated
annealing algorithm becomes a weakness. Changing existing
knowledge structures (by modification or replacement in the same
state space) is well-nigh impossible (Yoon, et al., 1988).
This weakness of the PDP, its stubbornness in resisting data
that should produce restructured schemata, is also a strength.
In certain environments, stable representations of higher-order
structures (rules) coupled with the capacity to learn or be
trained "up-front" may offer system designers desired control.
Some systems should not be ENDLESSLY adaptive.
.start page
Stephen Grossberg and his school (1987b & c, Carpenter et
al. 1987a & b) have suggested that the Adaptive Resonance (ART)
model best represents higher-order neural functions. Equipped
with representations for motivational processes and interactions
between routines ("avalanches") and higher order structures (e.g.,
motivational dipoles and associated READ architectures), a full-
blown ART system can model highly adaptive motor tasks and
emulate higher-order behaviors (Levine, 1986; Leven, 1987a & b;
and Ricart, 1988).
ART has the capacity to RECONSTRUE categories, based on
continuing mismatches between data and existing higher order
constructs and motivating environmental feedback. It also allows
"masking fields" to eliminate from consideration whole segments
of data which the system anticipates to be inappropriate or
unnecessarily unsettling.
Under some circumstances, when using dipole structures to
eliminate whole sets of competing representations (or rules), for
example, ART can be faster -- and more effective -- than the
alternatives we have presented. However, training an ART
environment to perform highly routinized behaviors in which
context has limited relevance has been considered more
inefficient than using, say, the Hopfield model. Ordinarily, the
powerful structures an ART modeler employs slow the learning
process with error-checking routines which value fault-
intolerance over speed. Yet, sometimes, in highly stable
environments, designers may be uncomfortable with an ART system's
capacity to "re-learn" essential skills they must employ.
Additionally, the rapid trainability and stability of a PDP
environment may prove superior to ART, for many of the same
reasons. Some higher-order rules (schemata) may be system-
critical. In these cases, PROGRAMMERS SHOULD DESIGN SYSTEMS --
NOT THE SYSTEMS DESIGNING THEMSELVES. Hence, some systems may
require less-intrusive network engines (like PDP) -- especially
when these engines also provide greater speed.
Thus, the three models for neural network design may be
COMPLEMENTARY in function: Hopfield offering speed and stability,
PDP providing up-front learning and stable rule structures, and
ART employing context- and environment-sensitive capabilities
(see Figure 1). We demonstrate, below, that modelers ought to
consider these qualities in developing extensive systems -- and
utilize the many effective tools at our disposal.
.start page
EXAMPLE PROBLEM
BEETHOVEN is a "music composition" system (see Figure 2).
It provides a three-part neural network model. The system
emulates fundamental compositional rules to generate and perform
a musical sequence.
BACH is a Hopfield net that provides a sequence of notes,
emulating musical melodic performance. A single voice selects
notes from within a single octave. Biases are provided -- as a
composer has the innate tendency to choose certain intervals and
to reject notes that tend to violate common rules of harmony
(e.g., Aldwell and Schachter, 1978).
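The BACH grid can be pictured with a small helper, sketched here under assumptions drawn from the accompanying Pascal listings: an 8-note by 5-position grid flattened to 40 cells (Pascal's `8*(posit-1)+note`, zero-based below), with the Hopfield-Tank output function `0.5*(1+tanh(a/c))`. The `decode_grid` helper and the toy activations are illustrative, not thesis code.

```python
import math

NOTES, POSITIONS = 8, 5   # one-octave vocabulary, five-note window

def htn_output(a, c=1.0):
    """Hopfield-Tank sigmoid output used by BACH: 0.5*(1+tanh(a/c))."""
    return 0.5 * (1.0 + math.tanh(a / c))

def decode_grid(activations):
    """Winner-take-all per position over the flattened 40-cell grid.

    Cell indexing follows the thesis convention 8*(pos-1)+note,
    rendered zero-based here as cell = 8*pos + note."""
    seq = []
    for pos in range(POSITIONS):
        col = activations[8 * pos: 8 * pos + NOTES]
        seq.append(col.index(max(col)) + 1)   # notes numbered 1..8
    return seq

# toy activation pattern: one strongly active note per position
acts = [htn_output(1.0 if n == p % 8 else -1.0)
        for p in range(POSITIONS) for n in range(NOTES)]
print(decode_grid(acts))  # prints an ascending toy melody: [1, 2, 3, 4, 5]
```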
This network of notes is output, in sequence, to a PDP back-
propagation network named SALIERI, which has learned a set of
standard, somewhat higher-order harmonic rules. The network
judges the effectiveness of the sequence, note by note, based on
the intervals involved and the absolute note values (e.g., #7
should precede #8 -- and, almost always, at the end of a phrase).
These schemata, then, reject inappropriate sequences AND INHIBIT
SOME INAPPROPRIATE NEXT NOTES. This "look-ahead" capability is
unusual in a PDP environment, yet is fitting for the inhibitory
role the network is playing and for the stability of the rule
structure being employed.
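A sketch of the kind of rule the critic is trained against, using the one rule the text names (degree 7 should precede degree 8): the function name and the all-or-nothing return value are hypothetical simplifications of the rule-based supervisor.

```python
def leading_tone_ok(seq):
    """Illustrative harmonic rule: scale degree 7 (the leading tone)
    should resolve upward to 8.  Flags any 7 in the phrase that is
    not immediately followed by 8."""
    for i, n in enumerate(seq):
        if n == 7 and (i + 1 == len(seq) or seq[i + 1] != 8):
            return False
    return True

print(leading_tone_ok([1, 3, 5, 7, 8]))  # prints True: 7 resolves to 8
print(leading_tone_ok([1, 3, 7, 5, 8]))  # prints False: 7 left hanging
```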
The output from PDP flows, directly, to an ART network,
BEETHOVEN. Employing a model of motivation (based on
construction of category valuation and a healthy boredom at
repetition), BEETHOVEN rejects "unaesthetic" sequences. As the
number of phrases performed increases, the ART model develops
intense biases, which it imposes on BACH and SALIERI.
One additional component of the environment is LOBES, the
Context Manager. LOBES, loosely emulative of human frontal lobes
(see Levine, 1986), maintains information about the processes
being performed, mediates inter-model interaction, and provides
for the final external output (sounding the speaker).
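The cooperative control flow among the modules can be sketched as a toy loop, with assumed stand-ins for each network (a random proposer for BACH, a single hard rule for SALIERI, simple list bookkeeping for LOBES); names and the retry policy are illustrative, not the thesis code.

```python
import random

def compose(length=5, seed=0):
    """Toy version of the cooperative loop: a generator proposes
    candidate notes (standing in for BACH), a critic vetoes
    rule-breaking candidates (SALIERI), and accepted notes are
    committed to the growing phrase (LOBES bookkeeping)."""
    rng = random.Random(seed)
    phrase = []
    while len(phrase) < length:
        candidate = rng.randint(1, 8)            # BACH: propose a note
        if phrase and phrase[-1] == 7 and candidate != 8:
            continue                             # SALIERI: 7 must resolve to 8
        phrase.append(candidate)                 # LOBES: commit the note
    return phrase

print(compose())
```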
The model, then, utilizes the best capabilities of three
distinctly different paradigms. Hopfield performs efficient
routine processes, as would a "reptilian brain" (MacLean, 1970).
PDP serves as an insistent schoolmarm, observing and enforcing
higher-level rules, like a "neo-mammalian brain." ART provides a
sense of fitness, an aesthetic fitting for models of the limbic
system (or "mammalian brain").
Integration of many memory and processing functions in a
three-part model may be similar to human brain function (Leven,
1987b). Regardless of its biological verisimilitude, however,
such an approach seems to offer unique combinations of speed,
stability, and flexibility.
.start page
.nopage break
.page length 88
.lpi8
.page break
.ipr//
.pindent 12
.ipr//
.foot/  #/
REFERENCES
Aldwell, E. & C. Schachter. 1978. Harmony and voice leading. Harcourt, Brace & Jovanovich, New York.
Carpenter, G.A. & S. Grossberg. 1987a. A massively parallel architecture for a self-organizing neural
pattern recognition machine. Computer Vision, Graphics, and Image Processing 37:54-115.
Carpenter, G.A. & S. Grossberg. 1987b. ART 2: self-organization of stable category recognition codes for
analog input patterns. Applied Optics 26(23):4919-4930.
Grossberg, S. 1987a. Competitive Learning: From interactive activation to adaptive resonance.
Cognitive Science 11:23-63.
Grossberg, S., ed. 1987b & c. The Adaptive Brain. Vol. I and II. Elsevier/North-Holland, Amsterdam.
Hartley, R. and H. Szu. 1987. A comparison of the computational power of neural network models. IEEE Proc.
ICNN III:15-22.
Hopfield, J.J. 1982. Neural networks and physical systems with emergent collective computational abilities.
Proc. Natl. Acad. Sci. USA 79:2554-2558.
Hopfield, J.J. and D.W. Tank. 1985. "Neural" computation of decisions in optimization problems. Biol.
Cybern. 52:141-152.
Hopfield, J.J. and D.W. Tank. 1986. Computing with neural circuits: A model. Science 233:625-633.
Leven, S. 1987a. Choice and neural process. Unpublished Ph.D. Dissertation, University of Texas at
Arlington.
Leven, S. 1987b. S.A.M.: A triune extension to the ART model. Symposium on Neural Networks, North
Texas State University. (Poster presentation)
Leven, S. 1988. Memory, helplessness, and the dynamics of hope. Presented at the Metroplex Institute for
Neural Dynamics' Workshop on Motivation, Emotion, and Goal Direction in Neural Networks.
Levine, D.S. 1986. A neural network theory of frontal lobe function. In: The Proceedings of the Eighth
Annual Conference of the Cognitive Science Society. Erlbaum.
MacLean, P. 1970. The triune brain, emotion, and scientific bias. In: F. Schmitt, ed. The
Neurosciences: Second Study Program. Rockefeller University Press.
Ricart, R. 1988. Backward conditioning: A neural network model which exhibits both excitatory and inhibitory
conditioning. Presented at the Metroplex Institute for Neural Dynamics' Workshop on Motivation, Emotion,
and Goal Direction in Neural Networks.
Rumelhart, D. & J. McClelland. 1986. Parallel Distributed Processing. MIT Press.
Schank, R. 1982. Dynamic memory. Cambridge University Press.
Schank, R.C. & R.P. Abelson. 1977. Scripts, Plans, Goals, and Understanding. Erlbaum, Hillsdale, NJ.
Sejnowski, T.J. 1986. Open questions about computation in cerebral cortex. In: J.L. McClelland & D.E.
Rumelhart, eds. Parallel Distributed Processing Volume 2. MIT Press.
Simpson, R. 1988. A review of artificial neural systems II: Paradigms, applications, and implementations.
Prepublication copy of paper submitted to CRC Critical Reviews in Artificial Intelligence.
Tank, D.W. & J.J. Hopfield. 1986. Simple "neural" optimization networks: An A/D converter, signal decision
circuit, and a linear programming circuit. IEEE Transactions on Circuits and Systems CAS-33(5):533-541.
Yoon, Y., L.L. Peterson, & P.R. Bergstrasser. 1988. A dermatology expert system using connectionist
network. Unpublished poster presentation, IEEE ICNN.
.start page
.nopage break
.page length 88
.ipr//
.lpi8
.page break
.ipr//
.pindent 12
.foot/ #/
          Convergence  Convergence  Stability   Feedback    Category     Mixed Data    Category        Computational
          Speed        Likelihood   Of          Capability  Formation    (Complex      Reconstruction  Simplicity
                                    Network                              Environment)
--------  -----------  -----------  ----------  ----------  -----------  ------------  --------------  -------------
Hopfield  +            +            +           -           -            -             -               +
PDP       0            0            +/0         +           +            0             -               0
ART       -            -            0/-         +           ++           +             +               -
Where '+' indicates a relative advantage, '0' indicates no special advantage or disadvantage,
and '-' indicates a relative disadvantage.
Figure 1. Comparative analysis of features of the Hopfield, PDP, and ART artificial neural network models
.ipr//
.pitch12
.ipr/T/
.pindent 10
+-----------------+
| | (Match, Other Info)
| Beethoven |---------------------+
| | |
| (ART 1) |<----------------+ |
| | (Context) | |
+-----------------+ | |
^ | |
| | |
|(Approval) | |
| | |
| | V
+-----------------+ +-----------------+
| | | |
| Salieri | (Approval) | Lobes |
| +------------>| (Context |
| (PDP) |<------------| Management) |
| | (Silence!) | |
+-----------------+ +-----------------+
^ ^ | |
| (Candidate note) | | |
+-------------------------+ | |
| | |
| | |
+-----------------+ | |
| | | |
| Bach | (Generate Note!) | |
| |<-------------------+ |
| (Hopfield) | |
| | |
+-----------------+ (New Note) |
|
V
+-------------------+
| |
| |
| Speaker |
| |
| |
+-------------------+
Figure 2. Structure of sample system utilizing Hopfield, PDP, and ART models.
.start page
.nopage break

180
THES/GLOBALS.PP Normal file
View File

@ -0,0 +1,180 @@
UNIT Globals;
{
This unit provides a variety of constants and types used in the integrated
ANN note generator and related programs.
}
{
Copyright 1989 by Wesley R. Elsberry. All rights reserved.
Commercial use of this software is prohibited without written consent of
the author.
For information, bug reports, and updates contact
Wesley R. Elsberry
528 Chambers Creek Drive South
Everman, Texas 76140
Telephone: (817) 551-7018
}
INTERFACE
USES
Misc1, DOS, ANSI_Z;
{PUBLIC DECLARATIONS}
CONST
Pi = 3.141592653589793;
Exp_Max = 80.0;
Colon = ':';
graphic_string = '0123456789';
{For Bach}
v_len_in = 8;
v_len_out = 5;
epsilon : REAL = 0.005;
HTN_co_res : REAL = 3.5;
HTN_co_cap : REAL = 10.0;
HTN_co_wt : REAL = 1.0;
HTN_co_inp : REAL = 1.0;
HTN_co_iter : REAL = 1.0;
ART_co_vigilance : REAL = 1.0;
global_resistance : REAL = 1.0;
global_capacitance : REAL = 1.0;
(* System goes low, but then some nodes start getting higher activity...
global_resistance = 5;
global_capacitance = 1; *)
{For Play_note}
N_C_mid = 264;
N_D = 297;
N_E = 330;
N_F = 352;
N_G = 396;
N_A = 440;
N_B = 495;
N_C_hi = 528;
{For Beethoven (ART 1)}
Max_F2_nodes = 25;
Max_F1_nodes = 41;
Vector_length = Max_F1_nodes;
(* Max_F1_nodes = 16;
Vector_length = 16; c. 6/18/89 *)
TYPE
REAL = SINGLE;
{For Beethoven (ART 1)}
LTM_weights_ = ARRAY[1..Max_F1_nodes] OF REAL;
F1_node_ = RECORD
Curr_A : REAL; {Value of node now}
Last_A : REAL; {Value from last time step}
END;
F1_layer_ptr_ = ^F1_layer_;
F1_layer_ = ARRAY[1..Max_F1_nodes] OF F1_node_;
F2_node_ = RECORD
Curr_B : REAL; {Value of node now}
Last_B : REAL; {Value from last time step}
Wup : LTM_weights_; {BU LTM weights}
Wdn : LTM_weights_; {TD LTM weights}
WIN : INTEGER; {0 if not winner, 1 if winner}
Eligible : BOOLEAN; {TRUE if not rejected}
Committed : BOOLEAN; {TRUE if represents a category}
END;
F2_layer_ptr_ = ^F2_layer_;
F2_layer_ = ARRAY[1..Max_F2_nodes] OF F2_node_;
{General}
file_string_ = STRING[127];
Time_rec_ = RECORD
h,m,s,f : INTEGER;
END;
Note_ = (Note_C_Lo,Note_D,Note_E,Note_F,Note_G,Note_A,Note_B,
Note_C_Hi);
Vector_ = ARRAY[1..Max_F1_nodes] OF BYTE;
Notes_ = ARRAY[1..V_LEN_OUT] OF INTEGER;
Common_Area_ = RECORD
Notes : Notes_;
Delta_Vigilance : BOOLEAN;
New_category : BOOLEAN;
Is_Classical : BOOLEAN;
Candidate_Note : INTEGER;
END;
note_record_ = RECORD
n : ARRAY[1..500] OF BYTE;
c : INTEGER;
END;
PROCEDURE Dump_Common(cmn : Common_Area_);
IMPLEMENTATION
{PRIVATE DECLARATIONS}
{IMPLEMENTATIONS OF PROCEDURES AND FUNCTIONS}
PROCEDURE Dump_Common(cmn : Common_Area_);
{}
VAR
ii : INTEGER;
BEGIN {}
{ WRITELN('Dump of Common Area -----------');}
ANSI_CUP(1,0);
Write('Notes in sequence:');
ANSI_CUP(1,27);
FOR ii := 1 TO V_Len_Out DO BEGIN
{}
Write(cmn.Notes[ii],' ');
END; {}
{ WRITELN;}
ANSI_CUP(3,0);
Write('Change in vigilance:');
ANSI_CUP(3,31);
Write(cmn.Delta_Vigilance:5);
ANSI_CUP(4,0);
Write('New Category formed: ');
ANSI_CUP(4,31);
Write(cmn.New_category:5);
ANSI_CUP(5,0);
Write('Candidate sequence classical?:');
ANSI_CUP(5,31);
Write(cmn.Is_Classical:5);
ANSI_CUP(6,0);
Write('Candidate note:');
ANSI_CUP(6,35);
Write(cmn.Candidate_Note);
ANSI_CUP(7,0);
Write('------------------------------------');
ANSI_CUP(23,0);
END; {}
{----------------------------------------------------------}
BEGIN {INITIALIZATION}
END.
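For the Python 3 port, the Pascal `Common_Area_` record maps naturally onto a dataclass. This is one possible rendering, not the port itself: the snake_case field names and zeroed defaults are assumptions, and the five-slot note window becomes a plain list in place of the Pascal array.

```python
from dataclasses import dataclass, field
from typing import List

V_LEN_OUT = 5   # sequence window length, from Globals

@dataclass
class CommonArea:
    """Python rendering of the Pascal Common_Area_ record: the
    cross-network exchange object shared by Bach, Salieri, and
    Beethoven."""
    notes: List[int] = field(default_factory=lambda: [0] * V_LEN_OUT)
    delta_vigilance: bool = False
    new_category: bool = False
    is_classical: bool = False
    candidate_note: int = 0

cmn = CommonArea()
cmn.notes[0] = 3
print(cmn.notes, cmn.is_classical)
```

Using `default_factory` rather than a shared default list keeps each instance's note window independent, mirroring the by-value Pascal record.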


7
THES/GO.BAT Normal file
View File

@ -0,0 +1,7 @@
c:
cd \
md wre_thes
cd wre_thes
PAUSE Make sure diskette is in drive A:
a:lharc e a:*.lhz


BIN
THES/HTN.DAT Normal file

Binary file not shown.

212
THES/HTNDATA.PP Normal file
View File

@ -0,0 +1,212 @@
PROGRAM HTN_data_build(Input,Output);
{
Copyright 1989 by Wesley R. Elsberry. All rights reserved.
Commercial use of this software is prohibited without written consent of
the author.
For information, bug reports, and updates contact
Wesley R. Elsberry
528 Chambers Creek Drive South
Everman, Texas 76140
Telephone: (817) 551-7018
}
USES
CRT, misc1, DOS;
CONST
row_inhibition : REAL = -0.08;
col_inhibition : REAL = -0.08;
seq_add : REAL = 0.0;
TYPE
REAL = SINGLE;
file_string_ = STRING[127];
data_array_ = ARRAY[1..64,1..64] OF REAL;
VAR
inf, outf : TEXT;
outdatf : FILE OF data_array_;
data : data_array_;
ii, jj, kk, ll, mm, nn : INTEGER;
inch : CHAR;
line, value : file_string_;
di1, di2 : INTEGER;
note1, note2, posit1, posit2 : INTEGER;
error : INTEGER;
min, max, range : REAL;
tii, tjj : INTEGER;
tr, ts : REAL;
sums : ARRAY[1..5,1..8] OF REAL;
sumssum : REAL;
PROCEDURE init_sums ;
VAR
ii, jj : INTEGER;
BEGIN
FOR ii := 1 TO 5 DO
FOR jj := 1 TO 8 DO sums[ii,jj] := 0;
END;
FUNCTION maximum(r1,r2:REAL):REAL;
BEGIN
IF r1 >= r2 THEN maximum := r1
ELSE maximum := r2;
END;
FUNCTION signum(x : REAL):REAL;
BEGIN
IF (x >= 0.0) THEN BEGIN
signum := 1;
END
ELSE BEGIN
signum := -1;
END;
END;
PROCEDURE show_node_sums;
VAR
ii, jj : INTEGER;
BEGIN
init_sums;
sumssum := 0;
FOR ii := 1 TO 5 DO
FOR jj := 1 TO 8 DO BEGIN
FOR kk := 1 TO 5 DO
FOR ll := 1 TO 8 DO BEGIN
sums[ii,jj] := sums[ii,jj] + data[(8*(ii-1)+jj),
(8*(kk-1)+ll)];
END;
END;
FOR jj := 1 TO 8 DO BEGIN
FOR ii := 1 TO 5 DO BEGIN
WRITE(sums[ii,jj]:6:3,' ');
sumssum := sumssum + sums[ii,jj];
END;
WRITELN;
END;
WRITELN (sumssum);
WRITELN;
END;
PROCEDURE set_row_and_column_inhibition;
VAR
ii, jj, kk, ll : INTEGER;
BEGIN
FOR note1 := 1 TO 8 DO
FOR posit1 := 1 TO 5 DO BEGIN
di1 := (8*(posit1-1)+note1);
FOR ii := 1 TO 8 DO{increase column inhibition}
BEGIN
IF (note1 <> ii) THEN BEGIN
di2 := (8*(posit1-1)+ii);
data[di1,di2] := data[di1,di2] + col_inhibition;
data[di2,di1] := data[di1,di2];
END;
END;
FOR jj := 1 TO 5 DO BEGIN
IF (posit1 <> jj) THEN BEGIN
di2 := (8*(jj-1)+note1);
data[di1,di2] := data[di1,di2] + row_inhibition;
data[di2,di1] := data[di1,di2];
END
ELSE BEGIN
END;
END;
END;
END;
PROCEDURE clear_diagonal;
VAR
ii, jj, kk, ll : INTEGER;
BEGIN
FOR ii := 1 TO 40 DO data[ii,ii] := 0.0;
END;
BEGIN
col_inhibition := ((8.0+(8.0-5.0))/5.0) * row_inhibition;
seq_add := -(row_inhibition/7.0);
init_sums;
NoSound;
FOR ii := 1 TO 40 DO
FOR jj := 1 TO 40 DO data[ii,jj] := 0.0;
Assign(inf,'sequence.dat');
Reset(inf);
Assign(outdatf,'htn.dat');
ReWRITE(outdatf);
WHILE NOT Eof(inf) DO BEGIN {get a line}
Readln(inf,line);
WRITELN(line); {increment connection values in the
data array}
FOR ii := 1 TO (Length(line)-1) DO BEGIN
Val(Copy(line,ii,1),note1,error);
Val(Copy(line,ii+1,1),note2,error);
WRITELN(note1,',',note2);
FOR posit1 := 1 TO 4 DO BEGIN
di1 := (8*(posit1-1))+note1;
di2 := (8*(posit1))+note2;
data[di1,di2] := data[di1,di2] + seq_add;
data[di2,di1] := data[di1,di2];
{symmetric weights!}
END;
END;
IF Length(line) >= 3 THEN
FOR ii := 1 TO (Length(line)-2) DO BEGIN
Val(Copy(line,ii,1),note1,error);
Val(Copy(line,ii+2,1),note2,error);
WRITELN(note1,',',note2);
FOR posit1 := 1 TO 3 DO BEGIN
di1 := (8*(posit1-1))+note1;
di2 := (8*(posit1+1))+note2;
data[di1,di2] := data[di1,di2] + seq_add;
data[di2,di1] := data[di1,di2];
END;
END;
END;
show_node_sums;
set_row_and_column_inhibition;
show_node_sums;
clear_diagonal;
show_node_sums;
WRITE(outdatf,data);
Close(inf);
Close(outdatf);
END.
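For the Python 3 port, the weight construction in `HTN_data_build` can be sketched directly with nested lists. This is a minimal, assumption-laden rendering, not a verified port: only the first-order (adjacent-position) reinforcement loop is shown, the offset-2 loop of the Pascal is omitted, and the symmetric-update idiom (`data[di2,di1] := data[di1,di2]`) is reproduced, which doubles each unordered inhibition pair exactly as the Pascal loops do.

```python
ROW_INH = -0.08
COL_INH = ((8.0 + (8.0 - 5.0)) / 5.0) * ROW_INH
SEQ_ADD = -(ROW_INH / 7.0)

def cell(pos, note):
    """Flattened index: zero-based form of Pascal's 8*(posit-1)+note."""
    return 8 * pos + note

def build_weights(sequences):
    """Sketch of HTN_data_build: reinforce adjacent-position note
    transitions seen in training sequences (notes numbered 1..8),
    then add same-position (column) and same-note (row) inhibition,
    keeping the 40x40 matrix symmetric with a zero diagonal."""
    n = 40
    w = [[0.0] * n for _ in range(n)]
    for seq in sequences:                      # sequence reinforcement
        for a, b in zip(seq, seq[1:]):
            for pos in range(4):
                i, j = cell(pos, a - 1), cell(pos + 1, b - 1)
                w[i][j] += SEQ_ADD
                w[j][i] = w[i][j]              # symmetric weights!
    for pos in range(5):                       # column (same-position) inhibition
        for n1 in range(8):
            for n2 in range(8):
                if n1 != n2:
                    i, j = cell(pos, n1), cell(pos, n2)
                    w[i][j] += COL_INH
                    w[j][i] = w[i][j]
    for note in range(8):                      # row (same-note) inhibition
        for p1 in range(5):
            for p2 in range(5):
                if p1 != p2:
                    i, j = cell(p1, note), cell(p2, note)
                    w[i][j] += ROW_INH
                    w[j][i] = w[i][j]
    for i in range(n):                         # clear_diagonal
        w[i][i] = 0.0
    return w
```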


15444
THES/INT_ANN.TXT Normal file

File diff suppressed because it is too large Load Diff

BIN
THES/M8910.PRN Normal file

Binary file not shown.

BIN
THES/M8910.TXT Normal file

Binary file not shown.

70
THES/M8912.TXT Normal file
View File

@ -0,0 +1,70 @@
.okidata9
.page length 66
.lpi6
.pitch12
.pindent 10
.above header 2
.below header 2
.above footer 2
.below footer 2
.page number 1
.head//
.foot//
.page break
The Metroplex Institute for Neural Dynamics
100 Allentown Parkway, Suite 211, Allen, Texas 75002 (214) 422-4570
December 1989 Newsletter
DECEMBER MEETING ANNOUNCEMENT
DATE: Saturday, December 9, 1989
TIME: 12:30 PM
PLACE: McDermott Library, Room 2.406 (or adjacent room), the University of
Texas at Dallas
TOPIC: Isomorphic back-propagation networks
SPEAKER: Mike Manry, Electrical Engineering, UTA
Dr. Manry will give an overview of research into methods of mapping signal
processing algorithms onto back-propagation networks. The signal
processing algorithms include Gaussian classifiers.
A new design technique for back-propagation networks is presented. In this
technique, unit activation functions are approximated by power series.
Basic building blocks such as monomial networks and multiplier networks are
implemented with any desired degree of accuracy. Representation theorems
for designing arbitrary continuous functions are also discussed.
These studies will give us a clearer understanding of the strengths,
limitations, and uses of back-propagation networks. One specific
application being used in this research is recognition of hand-printed
numerals.
DIRECTIONS: UTD is on Floyd Road, north of Campbell Road in Richardson.
Take Central Expressway to the Renner Road exit (north of Campbell Road).
Take Renner Road west past the traffic light at Custer. You'll have to
curve south as Renner ends at Floyd road. The main entrance to the UTD
campus is at the (only) stop sign on Floyd between Renner and Campbell. Go
west into the campus to the parking lot behind the guard station (it is
usually open on Saturday afternoons). McDermott Library is the building on
the south border of the lot. We meet in one of the classrooms on the left
after you go in the main entrance which, according to UTD's unique
numbering scheme, is on level two.
------------------------------------------------------------------------
November Meeting Notes
The presentation on transputers and the Occam language given by David Bye
of INMOS was very informative. David Bye has complimentary literature
available on transputers, systems, and the Occam language. If you are
interested, contact him at the SGS Thomson (INMOS) facility at 1310
Electronics Drive in Carrollton.
------------------------------------------------------------------------
For further information, contact Wesley Elsberry at (817) 551-7018.
.start page


515
THES/MISC1.PP Normal file
View File

@ -0,0 +1,515 @@
UNIT misc1;
{
This unit provides a number of functions of general utility.
}
{
Copyright 1989 by Wesley R. Elsberry. All rights reserved.
Commercial use of this software is prohibited without written consent of
the author.
For information, bug reports, and updates contact
Wesley R. Elsberry
528 Chambers Creek Drive South
Everman, Texas 76140
Telephone: (817) 551-7018
}
INTERFACE
USES DOS;
CONST
ASCII_NUL = #0;
ASCII_SOH = #1;
ASCII_STX = #2;
ASCII_ETX = #3;
ASCII_EOT = #4;
ASCII_ENQ = #5;
ASCII_ACK = #6;
ASCII_BEL = #7;
ASCII_BS = #8;
ASCII_HT = #9;
ASCII_LF = #10;
ASCII_VT = #11;
ASCII_FF = #12;
ASCII_CR = #13;
ASCII_SO = #14;
ASCII_SI = #15;
ASCII_DLE = #16;
ASCII_DC1 = #17;
ASCII_XON = #17;
ASCII_DC2 = #18;
ASCII_DC3 = #19;
ASCII_XOFF = #19;
ASCII_DC4 = #20;
ASCII_NAK = #21;
ASCII_SYN = #22;
ASCII_ETB = #23;
ASCII_CAN = #24;
ASCII_EM = #25;
ASCII_SUB = #26;
ASCII_EOF = #26;
ASCII_ESC = #27;
ASCII_FS = #28;
ASCII_GS = #29;
ASCII_RS = #30;
ASCII_US = #31;
ASCII_SP = #32;
ASCII_EXCL = #33;
ASCII_DQUOTE = #34;
ASCII_POUND = #35;
ASCII_DOLLAR = #36;
ASCII_PERCENT = #37;
ASCII_AMPERSAND = #38;
ASCII_SQUOTE = #39;
ASCII_OPAREN = #40;
ASCII_CPAREN = #41;
ASCII_ASTERISK = #42;
ASCII_PLUS = #43;
ASCII_COMMA = #44;
ASCII_DASH = #45;
ASCII_PERIOD = #46;
ASCII_SLASH = #47;
ASCII_ZERO = #48;
ASCII_ONE = #49;
ASCII_TWO = #50;
ASCII_THREE = #51;
ASCII_FOUR = #52;
ASCII_FIVE = #53;
ASCII_SIX = #54;
ASCII_SEVEN = #55;
ASCII_EIGHT = #56;
ASCII_NINE = #57;
ASCII_COLON = #58;
ASCII_SEMICOLON = #59;
ASCII_LESSTHAN = #60;
ASCII_EQUAL = #61;
ASCII_GREATERTHAN = #62;
ASCII_QMARK = #63;
ASCII_AT = #64;
ASCII_OBRACKET = #91;
ASCII_BACKSLASH = #92;
ASCII_CBRACKET = #93;
ASCII_CARAT = #94;
ASCII_UNDERLINE = #95;
ASCII_BACKQUOTE = #96;
ASCII_OBRACE = #123;
ASCII_VLINE = #124;
ASCII_CBRACE = #125;
ASCII_TILDE = #126;
ASCII_DEL = #127;
TYPE
Time_rec_ = RECORD
h,m,s,f : INTEGER;
END;
PROCEDURE Time(VAR TR : Time_rec_);
{Gets system time from MS-DOS}
PROCEDURE Elapsed_time(VAR TR1, TR2 : Time_rec_);
{Computes the difference between TR1 and TR2, returns result in TR1.
TR1's previous value is destroyed.}
FUNCTION Convert_time_to_real(VAR CTR : Time_rec_):REAL;
{}
PROCEDURE Convert_real_to_time(VAR RT : REAL; VAR CTR : Time_rec_);
{}
PROCEDURE Trim(VAR alex : STRING; tchar : CHAR);
{ This procedure trims a string variable of type STRING beginning
  with the first occurrence of the character TCHAR}
PROCEDURE StrUp(VAR strng : STRING);
{ This procedure maps the characters of a string of type STRING to uppercase}
FUNCTION IsUpper(x : CHAR):BOOLEAN;
{Returns true if x is an uppercase letter}
FUNCTION IsLower(x : CHAR):BOOLEAN;
{Returns true if x is a lowercase letter}
PROCEDURE Error(msg : STRING);
{ writes error message out to screen}
FUNCTION Gaussian(x,mu,sigma : REAL):REAL;
{returns the gaussian density function of x, where mu is the}
FUNCTION Normal_Prob(x,mu,sigma : REAL):REAL;
{uses a polynomial approximation to estimate
the area under the normal curve}
FUNCTION Power(num,expon : REAL):REAL;
{returns num^expon}
FUNCTION Slope(sumx,sumy,sumxy,sumx2,n :REAL):REAL;
{returns linear regression determined slope of line}
FUNCTION Intercept(sumx,sumy,n,m : REAL):REAL;
{returns linear regression determined intercept of line}
FUNCTION CorrCo(m,sigmax,sigmay : REAL):REAL;
{returns correlation coefficient of x and y}
FUNCTION SD(sum,sum_sqrd,n : REAL):REAL;
{returns standard deviation given the sum of values, the sum of
the squares of values, and the number of values}
FUNCTION Map_Real(mapval, domain_min, domain_max,
range_min, range_max : REAL): REAL;
{ this functions maps the value passed to it into a new range }
FUNCTION Map_Int(mapval, domain_min, domain_max,
range_min, range_max : INTEGER): INTEGER;
{ this functions maps the value passed to it into a new range }
{ must have MAP_REAL as above in program }
FUNCTION Map_Int_From_Real(mapval, domain_min, domain_max : REAL;
range_min, range_max : INTEGER): INTEGER;
{ this functions maps the value passed to it into a new range of type integer}
FUNCTION dir_console_IO (VAR ch :CHAR) : BOOLEAN;
{Returns TRUE if a character has been captured at the keyboard, FALSE
otherwise. If a character has been captured, CH contains it.}
FUNCTION check_kbd_status : BOOLEAN;
{Returns TRUE if a key has been pressed, FALSE otherwise}
FUNCTION max_single(s1,s2 : SINGLE):SINGLE;
{Returns the greater of two SINGLE type values}
FUNCTION min_single(s1,s2 : SINGLE):SINGLE;
{Returns the lesser of two SINGLE type values}
IMPLEMENTATION
PROCEDURE Time(VAR TR : Time_rec_);
{Gets system time from MS-DOS}
CONST
lllama = 0;
VAR
regs : registers;
BEGIN {Time}
WITH regs DO BEGIN
ax:=$2c00;
MSDos(regs);
TR.h := Hi(cx);
TR.m := Lo(cx);
TR.s := Hi(dx);
TR.f := Lo(dx);
END;
END; {Time}
FUNCTION Convert_time_to_real(VAR CTR : Time_rec_):REAL;
{}
VAR
Tempr : REAL;
BEGIN {Convert_time_to_real}
WITH CTR DO Tempr := f + (s*100.0) + (m*6000.0) + (h*360000.0);
Convert_time_to_real := Tempr;
END; {Convert_time_to_real}
PROCEDURE Convert_real_to_time(VAR RT : REAL;
VAR CTR : Time_rec_);
{}
VAR
TempI : INTEGER;
Tempr1, Tempr2 : REAL;
BEGIN {Convert_real_to_time}
WITH CTR DO BEGIN
Tempr2 := RT;
Tempr1 := INT(Tempr2 / 360000.0);
h := Trunc(Tempr1);
Tempr2 := Tempr2 - (Tempr1 * 360000.0);
Tempr1 := INT(Tempr2 /6000.0);
m := Trunc(Tempr1);
Tempr2 := Tempr2 - (Tempr1 * 6000.0);
Tempr1 := INT(Tempr2 / 100);
s := Trunc(Tempr1);
Tempr2 := Tempr2 - (Tempr1 * 100);
Tempr1 := INT(Tempr2);
f := Trunc(Tempr1);
END;
END; {Convert_real_to_time}
PROCEDURE Elapsed_time(VAR TR1, TR2 : Time_rec_);
{Computes the difference between TR1 and TR2, returns result in TR1.
TR1's previous value is destroyed.}
VAR
Dif : TIme_rec_;
T1 , T2 : REAL;
BEGIN {Elapsed_time}
Write('Time difference ',TR2.h:2,ascii_Colon,TR2.m:2,ascii_Colon,
TR2.s:2,ascii_Colon,TR2.f:2, ' - ',TR1.h:2,ascii_Colon,TR1.m:
2,ascii_Colon,TR1.s:2,ascii_Colon,TR1.f:2);
T1 := Convert_time_to_real(TR1);
T2 := Convert_time_to_real(TR2);
IF (T2 < T1) THEN {}
BEGIN
T2 := T2 + 8640000.0;
END
ELSE {}
BEGIN
END;
T1 := T2 - T1;
Convert_real_to_time(T1,TR1);
Writeln(' = ',TR1.h:2,ascii_Colon,TR1.m:2,ascii_Colon,TR1.s:2,
ascii_Colon,TR1.f:2);
END; {Elapsed_time}
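A Python 3 rendering of the same arithmetic is straightforward; this sketch keeps the hundredths-of-a-second unit and the midnight wrap (8 640 000 hundredths per day) but replaces the Pascal record with a plain tuple. Names are assumptions for the port, not existing code.

```python
def elapsed_hundredths(t1, t2):
    """Elapsed time t2 - t1 in hundredths of a second, wrapping
    across midnight exactly as Elapsed_time does.  Each time is a
    (hour, minute, second, hundredths) tuple."""
    def to_h(h, m, s, f):
        return f + 100 * s + 6000 * m + 360000 * h
    a, b = to_h(*t1), to_h(*t2)
    if b < a:               # clock rolled past midnight
        b += 8_640_000      # one day in hundredths of a second
    return b - a

print(elapsed_hundredths((23, 59, 0, 0), (0, 1, 0, 0)))  # prints 12000
```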
{$V-}
PROCEDURE TRIM(VAR alex : STRING;
tchar : CHAR);
{ This procedure trims a string variable of type STRING beginning
with the first occurrence of the character TCHAR}
VAR
ii,jj :INTEGER;
BEGIN
ii := Pos(tchar,alex);
IF ii <> 0 THEN alex := Copy(alex,1,ii-1);
END;
{$V+}
{$V-}
PROCEDURE STRUP(VAR strng : STRING);
{ This procedure maps the characters of a string of type STRING to
uppercase}
VAR
ii : INTEGER;
BEGIN
FOR ii := 1 TO Length(strng) DO strng[ii] := UpCase(strng[ii]);
END;
{$V+}
FUNCTION ISUPPER(x : CHAR):BOOLEAN;
{Returns true if x is an uppercase letter}
BEGIN
IF (x IN ['A'..'Z']) THEN isupper := TRUE
ELSE isupper := FALSE;
END;
FUNCTION ISLOWER(x : CHAR):BOOLEAN;
{Returns true if x is a lowercase letter}
BEGIN
IF (x IN ['a'..'z']) THEN islower := TRUE
ELSE islower := FALSE;
END;
{$V-}
PROCEDURE ERROR(msg : STRING);
{ writes error message out to screen}
CONST
bell = ^G;
BEGIN
Write(bell,msg);
END;
{$V+}
FUNCTION GAUSSIAN(x,mu,sigma : REAL):REAL;
{returns the gaussian density function of x, where mu is the
mean and sigma is the standard deviation}
BEGIN
gaussian := (1/(sigma*Sqrt(2*Pi)))*Exp(-Sqr(x-mu)/(2*Sqr(sigma)));
END;
FUNCTION NORMAL_PROB(x,mu,sigma : REAL):REAL;
{uses a polynomial approximation to estimate
the area under the normal curve}
CONST
b1 = 0.319381530;
b2 = -0.356563782;
b3 = 1.781477937;
b4 = -1.821255978;
b5 = 1.330274429;
p = 0.2316419;
epsi = 7.5E-09;
VAR
t, t2, t3, t4, t5, q, z : REAL;
BEGIN
z := gaussian(x,mu,sigma) * ((x-mu)/sigma);
t := 1/(1+p*x);
t2 := t*t;
t3 := t2*t;
t4 := t3*t;
t5 := t4*t;
q := z * (b1*t + b2*t2 + b3*t3 + b4*t4 + b5*t5) + epsi;
normal_prob := 1-q;
END;
FUNCTION POWER(num,expon : REAL):REAL;
{returns num^expon}
CONST
Machine_infinity = 1E37;
VAR
temp : REAL;
BEGIN
temp := expon*Ln(num);
IF temp >= Ln(machine_infinity) THEN power := machine_infinity
ELSE power := Exp(temp);
END;
FUNCTION SLOPE(sumx,sumy,sumxy,sumx2,n :REAL):REAL;
{returns linear regression determined slope of line}
BEGIN
slope := (sumxy-(sumx*sumy/n))/ (sumx2-(Sqr(sumx)/n));
END;
FUNCTION INTERCEPT(sumx,sumy,n,m : REAL):REAL;
{returns linear regression determined intercept of line}
BEGIN
intercept := ((sumy-(m*sumx))/n);
END;
FUNCTION CORRCO(m,sigmax,sigmay : REAL):REAL;
{returns correlation coefficient of x and y}
BEGIN
corrco := m*sigmax/sigmay;
END;
FUNCTION SD(sum,sum_sqrd,n : REAL):REAL;
{returns standard deviation given the sum of values, the sum of
the squares of values, and the number of values}
BEGIN
sd := Sqrt((sum_sqrd-(Sqr(sum)/n))/(n-1));
END;
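The running-sum regression helpers port one-for-one; a sketch under the same conventions (sums are accumulated by the caller, exactly as in the Pascal):

```python
import math

def slope(sumx, sumy, sumxy, sumx2, n):
    """Least-squares slope of y on x from running sums (Pascal SLOPE)."""
    return (sumxy - sumx * sumy / n) / (sumx2 - sumx ** 2 / n)

def intercept(sumx, sumy, n, m):
    """Least-squares intercept given the slope m (Pascal INTERCEPT)."""
    return (sumy - m * sumx) / n

def corrco(m, sigmax, sigmay):
    """Correlation coefficient from the slope and the two standard
    deviations (Pascal CORRCO)."""
    return m * sigmax / sigmay

def sd(total, total_sqrd, n):
    """Sample standard deviation from the sum and the sum of squares
    (Pascal SD)."""
    return math.sqrt((total_sqrd - total ** 2 / n) / (n - 1))
```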
FUNCTION MAP_REAL(mapval, domain_min, domain_max, range_min, range_max :
REAL): REAL;
{ this function maps the value passed to it into a new range }
BEGIN
map_real := (((mapval - domain_min)/(domain_max - domain_min)) * (
range_max - range_min)) + range_min;
END;
FUNCTION MAP_INT(mapval, domain_min, domain_max, range_min, range_max :
INTEGER): INTEGER;
{ this function maps the value passed to it into a new range }
{ must have MAP_REAL as above in program }
VAR
mv, dn, dx, rn, rx : REAL;
BEGIN
mv := mapval;
dn := domain_min;
dx := domain_max;
rn := range_min;
rx := range_max;
map_int := Round(map_real(mv,dn,dx,rn,rx));
END;
FUNCTION MAP_INT_FROM_REAL(mapval, domain_min, domain_max : REAL;
range_min, range_max : INTEGER): INTEGER;
{ this function maps the value passed to it into a new range of type
integer}
BEGIN
map_int_from_real := Round(map_real(mapval,domain_min,domain_max,
range_min,range_max));
END;
FUNCTION dir_console_IO (VAR ch :CHAR) : BOOLEAN;
VAR
regs : registers; {From the DOS unit}
BEGIN
regs.AH := $06;
regs.DL := $FF;
MSDos(regs);
IF ((regs.flags AND FZERO) = 0) THEN BEGIN
ch := Chr(regs.AL);
dir_console_IO := TRUE;
END
ELSE BEGIN
dir_console_IO := FALSE;
END;
END;
FUNCTION check_kbd_status : BOOLEAN;
VAR
regs : registers; {From the DOS unit}
BEGIN
regs.AH := $0B;
MSDos(regs);
IF (Ord(regs.AL) = $FF) THEN BEGIN
check_kbd_status := TRUE;
END
ELSE BEGIN
check_kbd_status := FALSE;
END;
END;
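These two keyboard routines wrap DOS INT 21h calls (function 06h, direct console I/O, and function 0Bh, input status), so a port must substitute platform facilities. A hedged cross-platform sketch: `msvcrt` on Windows, a zero-timeout `select` on POSIX. Both branches are the porter's substitution, not part of the original design:

```python
import sys

def _posix_poll():
    """Zero-timeout poll of stdin; False when stdin cannot be polled."""
    import select
    try:
        ready, _, _ = select.select([sys.stdin], [], [], 0)
        return bool(ready)
    except (OSError, ValueError):
        return False

def check_kbd_status():
    """True when console input is waiting (INT 21h/0Bh analogue)."""
    try:
        import msvcrt
        return bool(msvcrt.kbhit())
    except ImportError:
        return _posix_poll()

def dir_console_io():
    """Non-blocking one-character read (INT 21h/06h, DL=FFh analogue):
    the waiting character, or None when nothing is available."""
    try:
        import msvcrt
        return msvcrt.getch().decode("latin-1") if msvcrt.kbhit() else None
    except ImportError:
        if not _posix_poll():
            return None
        ch = sys.stdin.read(1)
        return ch or None
```

Returning `None` instead of the Pascal BOOLEAN-plus-VAR-parameter pair is an idiomatic Python substitution.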
FUNCTION max_single(s1,s2 : SINGLE):SINGLE;
{Returns the greater of two SINGLE type values}
BEGIN
IF s1 >= s2 THEN max_single := s1
ELSE max_single := s2;
END;
FUNCTION min_single(s1,s2 : SINGLE):SINGLE;
{Returns the lesser of two SINGLE type values}
BEGIN
IF s1 < s2 THEN min_single := s1
ELSE min_single := s2;
END;
BEGIN {initialize}
END. {INITIALIZE}


THES/PLAYCOMP.PP Normal file
@ -0,0 +1,111 @@
PROGRAM play_composition(Input,Output);
{
This program plays notes output by the integrated ANN note generator, the
random composition program, and the classical composition program.
}
{
Copyright 1989 by Wesley R. Elsberry. All rights reserved.
Commercial use of this software is prohibited without written consent of
the author.
For information, bug reports, and updates contact
Wesley R. Elsberry
528 Chambers Creek Drive South
Everman, Texas 76140
Telephone: (817) 551-7018
}
USES
Dos, CRT, misc1;
CONST
Pi = 3.141592653589793;
Exp_Max = 80.0;
Colon = ':';
graphic_string = '0123456789';
note_time = 1800;
rest_time = 550;
{For Play_note}
N_C_mid = 264;
N_D = 297;
N_E = 330;
N_F = 352;
N_G = 396;
N_A = 440;
N_B = 495;
N_C_hi = 528;
TYPE
REAL = SINGLE;
{General}
Note_ = (Note_C_Lo,Note_D,Note_E,Note_F,Note_G,Note_A,Note_B,
Note_C_Hi);
VAR
inf : TEXT; {Input file handle}
instr : STRING;
cnote : INTEGER;
inch : CHAR;
{----------------------------------------------------------}
PROCEDURE play_a_note(cn : INTEGER);
VAR
ii : INTEGER;
BEGIN
CASE cn OF
1 : Sound(n_c_mid);
2 : Sound(n_d);
3 : Sound(n_e);
4 : Sound(n_f);
5 : Sound(n_g);
6 : Sound(n_a);
7 : Sound(n_b);
8 : Sound(n_c_hi);
ELSE
NoSound;
END;
Delay(note_time);
NoSound;
Delay(rest_time);
END;
BEGIN {Main}
{get filename}
REPEAT
Write ('File to play? ');
Readln (instr);
instr := FSearch(instr,GetEnv('PATH'));
UNTIL (Length(instr) <> 0);
Assign (inf,instr);
Reset(inf);
WHILE NOT Eof(inf) DO BEGIN
Readln(inf,cnote);
play_a_note(cnote);
IF dir_console_IO(inch) THEN
IF UpCase(inch) = 'Q' THEN BEGIN
NoSound;
EXIT;
END;
END;
NoSound;
END. {Main}
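PLAYCOMP's playback splits into a pure index-to-frequency mapping (the constants below are taken verbatim from the Pascal) and a platform-specific sound call. Turbo Pascal's `Sound`/`NoSound` drove the PC speaker directly, so the `winsound` fallback here is the porter's assumption:

```python
import time

# Frequencies in Hz, copied from the Pascal constants N_C_mid .. N_C_hi.
NOTE_FREQ = {1: 264, 2: 297, 3: 330, 4: 352, 5: 396, 6: 440, 7: 495, 8: 528}
NOTE_TIME_MS = 1800
REST_TIME_MS = 550

def note_frequency(cn):
    """Frequency for note index 1..8; None means a rest
    (the CASE ELSE / NoSound branch)."""
    return NOTE_FREQ.get(cn)

def play_a_note(cn):
    """Play one note then rest, mirroring Sound/Delay/NoSound/Delay."""
    freq = note_frequency(cn)
    if freq is not None:
        try:
            import winsound  # Windows-only; skipped elsewhere
            winsound.Beep(freq, NOTE_TIME_MS)
        except ImportError:
            time.sleep(NOTE_TIME_MS / 1000.0)  # keep timing without audio
    time.sleep(REST_TIME_MS / 1000.0)
```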


THES/RANDCOMP.PP Normal file
@ -0,0 +1,38 @@
PROGRAM random_composition (Input,Output);
{
This program outputs random notes.
}
{
Copyright 1989 by Wesley R. Elsberry. All rights reserved.
Commercial use of this software is prohibited without written consent of
the author.
For information, bug reports, and updates contact
Wesley R. Elsberry
528 Chambers Creek Drive South
Everman, Texas 76140
Telephone: (817) 551-7018
}
VAR
ii, jj : INTEGER;
Outf : TEXT;
BEGIN
Assign(outf,'rmus.mus');
Rewrite(outf);
Randomize;
FOR ii := 1 TO 152 DO BEGIN
Writeln(outf,(Random(8)+1):1);
END;
Close(outf);
END.
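RANDCOMP is nearly a one-liner in Python. A sketch that keeps the same file format (one note index per line, values 1..8, 152 notes by default); the optional seed parameter is an addition for reproducibility, not in the original:

```python
import random

def write_random_composition(path="rmus.mus", n_notes=152, seed=None):
    """Port of RANDCOMP: write n_notes random note indices, one per line."""
    rng = random.Random(seed)  # seed=None matches Pascal's Randomize
    with open(path, "w") as outf:
        for _ in range(n_notes):
            outf.write(f"{rng.randint(1, 8)}\n")
```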


THES/RB.BIG Normal file
File diff suppressed because it is too large.

THES/README.TXT Normal file
@ -0,0 +1,35 @@
Integration and hybridization in neural network modelling
Copyright 1989 by Wesley R. Elsberry
A master's thesis presented to the University of Texas at Arlington.
Files on this disk:
INT_ANN LZH
{The text of the thesis, minus the figures.}
ANNEXE LZH
{Simulation and support programs. ANNCOMP.EXE is the main simulation
program.}
DATAF LZH
{Contains data files necessary for running the simulations}
GO BAT
{A batch file that creates a directory, WRE_THES, on the C: drive, and
which unzips the compressed files into the new directory. Make sure
that the diskette is in drive A: before invoking.}
ANNPRETT LZH
{Pretty-printed Pascal source files.}
LHARC COM
{A file decompression utility.}
and, of course, this file.


THES/RMUS.MUS Normal file
File diff suppressed because it is too large.

THES/S61.DAT Normal file
@ -0,0 +1,14 @@
Salieri net set up
!L 0.5
!A 0.5
!I 40
!H 20
!O 1
!T 1 set training_iterations
!E 0.1
!D s61.dat
!R s61.out
!W s61.wt
!Z
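S61.DAT drives the Salieri setup with single-letter `!` directives (the inline comment confirms `!T` as training iterations; the other letters' meanings, e.g. `!I`/`!H`/`!O` as layer sizes, are inferred from context). A parser need not interpret them to port the file format; a minimal sketch under those assumed semantics:

```python
def parse_directives(text):
    """Parse '!X value' directive lines; non-'!' lines are titles/comments.
    Returns {letter: first value token}; stops at the !Z terminator."""
    opts = {}
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("!"):
            continue  # e.g. the 'Salieri net set up' title line
        letter, _, rest = line[1:].partition(" ")
        if letter.upper() == "Z":
            break
        tokens = rest.split()
        # Trailing words after the value (e.g. 'set training_iterations')
        # are treated as comments, matching the S61.DAT usage.
        opts[letter.upper()] = tokens[0] if tokens else ""
    return opts
```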


THES/S61.OUT Normal file

THES/S61.WT Normal file
@ -0,0 +1,64 @@
!V 61
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W -0.7619 -0.2440 -0.1363 -0.1567 0.1649 -0.6699 -0.4912 -0.3075 -0.2865 0.8204 -0.3366 -0.3830 0.1775 -0.3050 -0.4589 -0.7307 0.5236 1.0064 -0.0586 0.2335 -0.5273 0.7445 0.0047 -0.2695 0.6989 -0.4886 -0.4487 -0.2863 -0.5671 0.5934 0.6863 -0.8008 1.1089 -0.2723 0.3502 -0.7020 0.2360 0.5577 0.0949 0.1120 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.4271 -0.2637 -0.3443 -0.6513 -0.2711 -0.2320 0.5724 0.0793 -0.8650 -0.5592 -0.6041 -0.1700 0.6063 0.2487 0.2779 0.9439 0.4172 -0.8293 -0.9616 0.4019 0.6721 0.2279 -0.0191 0.3942 0.1210 0.5819 0.3871 -0.8956 -0.5692 -0.4746 0.7667 0.0447 0.3069 0.0175 0.5428 0.6562 -0.6471 -0.8328 -0.5945 -0.1348 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W -0.2501 -0.1498 0.0885 -0.5993 0.8040 0.3958 -0.5907 0.7542 0.9009 0.4252 0.3181 -0.7228 0.6832 -0.8961 -0.4582 0.6826 0.5334 0.2884 0.4276 -0.5050 -0.8641 -0.7934 -0.9001 0.8385 0.5766 -0.0339 -0.4170 -0.1786 -1.0047 0.9630 0.0238 0.6001 -0.3116 -0.4854 0.6570 -0.9361 -0.7937 -0.2901 -0.9896 0.2287 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W -0.6261 0.3837 -0.2655 -0.3956 0.9548 -0.1316 -0.5253 -0.9668 -0.1028 -0.9994 -0.7686 0.0356 0.5310 -0.7876 -0.2767 0.7533 -0.6487 -0.4826 -0.6541 -0.0878 0.3610 -0.0012 0.5387 0.2986 0.0380 -0.0088 -0.7853 0.8435 0.0355 -0.0624 0.4193 0.5576 0.3407 -0.1143 0.7528 0.1061 0.7788 -0.4991 -1.0082 -0.5767 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.3924 -0.2315 -0.8433 -0.2085 -0.5893 -0.1208 -0.8374 0.9299 -0.1998 0.5650 -0.5473 -0.9985 -0.9879 -0.4499 -0.8220 0.6650 -0.5227 -1.0613 0.0678 0.6481 -0.2177 -0.6247 -0.2414 -0.7925 -0.0001 0.8063 0.4990 0.2884 -1.0256 -0.3472 -0.6133 0.5906 -0.0301 -0.1869 -0.4838 -0.3142 -0.9172 -0.0061 -0.5495 0.8685 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W -0.9535 -0.9352 -0.6520 -0.8873 0.6817 0.8109 0.4857 -0.5989 0.2740 0.2735 0.0627 0.4266 -0.0968 -0.9935 0.0969 0.0595 -0.8384 -0.1636 -0.6407 0.6949 0.6436 0.9458 -0.3651 0.5753 -0.1892 -0.6576 0.1575 -0.5663 -0.0988 -0.1222 0.5250 -0.5408 -0.6548 -0.5102 0.9596 -0.4082 -0.0818 -0.4318 -0.7150 -0.5757 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.4120 0.5625 -0.8810 0.7308 -0.8768 -0.2254 -0.3858 0.3594 0.6833 -0.1741 0.8853 0.6383 -0.2512 -0.6341 0.3679 -0.0086 0.4678 0.5472 -0.3544 0.0796 0.3187 0.8012 0.4148 -0.8477 -0.6299 -0.8463 -0.8339 0.7348 -0.9866 0.2152 0.8069 -0.0533 -0.7932 0.8631 -0.3095 -0.0156 0.4422 0.2448 0.1867 0.8851 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W -0.6310 0.5604 0.5902 -0.3295 0.8722 -0.3304 0.5414 -0.2042 -1.0311 -0.5006 -0.6887 0.3324 0.1877 0.7726 -0.6233 -0.5918 0.4051 0.3512 -0.9627 0.0591 0.5596 -1.0250 0.0732 0.7424 0.1863 -0.2007 0.6890 0.8843 -0.1599 -0.6749 -0.1450 0.6026 -0.3409 -0.5392 0.4410 0.7942 -0.1104 -0.9241 0.2544 -0.5754 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.3154 0.7638 -0.5161 -0.6787 -0.6147 -0.4910 0.9554 -0.6002 -0.3040 -0.9965 -0.4748 -0.8161 -0.0513 -1.0038 -0.7551 -0.4040 0.8968 -0.5826 0.3153 0.9365 -0.4450 -0.4500 -0.3676 -0.5534 0.0291 -0.3440 0.6621 0.5321 0.5965 0.6141 -0.0184 -0.3625 -0.6506 -0.9503 -0.3996 -0.3161 0.4383 -0.4238 -0.7178 0.1013 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.5623 0.2054 0.0967 0.9602 0.3215 -0.2972 -0.2421 -0.2586 -0.8317 -0.8753 0.4790 0.4881 -0.6946 0.6584 -0.6727 0.2096 0.4446 0.6243 0.2529 -0.1499 -0.8026 0.9073 0.8276 0.9188 -0.1902 0.0255 0.3577 0.5839 -0.8137 -0.7689 -0.4078 0.5474 0.4189 -0.8387 -0.1737 0.3239 0.5573 0.1530 0.8796 -0.3542 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W -0.2866 -1.0059 0.9690 0.7599 0.4782 -0.6084 -0.8412 0.5370 0.5231 -1.0303 -0.9897 -0.5671 0.8804 -0.4718 0.8907 -0.5204 0.8514 -0.4012 0.7465 -0.4825 -1.0175 0.8864 -0.1017 0.7829 0.3255 -0.4844 -0.1501 0.3590 0.2925 0.5096 0.9179 0.1356 -0.5691 -0.3420 -0.7553 -0.2266 0.0195 0.2677 -0.6291 0.3170 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W -0.9237 -0.3537 0.1886 -0.5079 0.8179 -0.9842 -0.4943 -0.7417 -0.8436 -0.1618 0.1755 -0.0201 -0.2625 -0.4896 -0.4766 -0.9160 -0.3595 -0.8030 0.0743 0.1043 -0.4981 0.6396 0.8456 0.2367 -0.6813 -0.5641 -0.5740 -0.3468 0.4840 0.4694 0.1666 0.7121 -0.1332 -0.5978 0.1084 -0.7212 -1.1000 0.6436 0.0591 -0.2757 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W -0.6575 -0.2055 0.0306 -0.3778 0.3174 0.2309 0.4669 0.8171 -0.6834 -0.3146 0.3023 -0.5447 -0.5955 0.1957 0.8378 0.3657 0.0583 -0.8839 0.3607 0.1423 0.8788 -0.9295 -0.7587 0.9530 -0.3602 0.8292 -0.8012 -0.9375 -0.0091 0.0446 0.5999 0.3758 0.0621 0.1505 -0.4968 0.0511 0.8480 0.7593 0.2344 0.0724 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W -0.2137 -0.1663 0.1527 0.3831 -0.4444 -0.9529 0.2738 0.3726 -0.6549 0.7200 -0.9318 0.0797 0.8368 -0.4717 0.7169 0.6159 -0.4459 0.0119 0.0730 -0.1345 0.5315 0.3527 -0.7682 0.6127 0.4587 -0.1653 0.7051 0.5052 0.1339 -0.4084 -0.9209 -0.4960 0.2223 -0.4034 -0.3324 0.5219 0.6125 -0.3441 -0.3277 0.3082 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.1701 0.7238 -0.5399 0.4200 -0.5724 0.1927 0.6207 -0.7291 -0.5549 0.0121 0.2647 0.4297 -0.1209 0.9535 -0.7414 -0.4780 -0.5360 -0.4089 -0.5886 -0.2116 -0.1244 0.4154 0.1569 -0.1825 -0.9552 0.0146 -0.6584 -0.0751 -0.9857 -0.0007 -0.6749 -0.2210 0.8795 -0.4335 0.9668 -0.3744 -0.3040 -0.6058 0.6232 0.7091 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.7773 0.1475 -0.6726 0.2005 -0.2117 0.4589 0.8681 -0.2211 0.4067 -0.6055 0.7003 0.8207 -0.4224 -0.9159 0.5727 0.1668 0.0661 0.7136 -0.4574 0.5190 0.0826 0.2380 0.3364 0.8962 -0.3149 0.4892 -0.9789 0.0774 -0.8708 -0.8830 0.3458 0.0208 -0.0763 -0.3155 -0.4446 -0.2720 -0.9251 -0.5315 0.6031 0.9395 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W -0.7416 0.4924 -0.4038 -0.6895 -0.8944 0.1152 -0.9205 -0.8366 0.6731 -0.0312 0.8193 0.8270 -0.8365 -0.2766 -0.0429 0.3934 0.3325 0.8114 0.8929 0.4366 0.9132 0.2455 -0.5544 0.7308 0.7297 -0.3294 0.1758 -0.7753 -0.2380 0.8242 -0.4577 -1.0306 -0.0554 -0.9811 0.3900 -0.7742 -0.1291 -0.2936 0.0148 -0.6617 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W -0.0703 -0.3768 0.4144 0.2983 0.3524 -0.6137 0.2440 -0.8838 0.8980 -0.2774 -0.5914 -0.1437 -0.1138 -0.5522 -0.7996 -0.2161 0.7566 -0.1850 -0.8698 0.3928 0.1164 -0.3284 0.1758 -0.4813 -0.3258 -1.0765 -0.0610 0.1492 0.7963 0.7967 -0.6693 -0.9627 -0.0141 -0.2931 0.8108 -0.5054 -0.1443 -0.7649 0.8161 0.4256 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W -0.3265 0.5221 0.5236 0.5709 -0.8379 -0.6252 -0.1016 0.9371 0.9186 0.7908 0.6904 0.2015 -0.3986 -0.6092 -0.6130 0.0180 0.2700 -0.8266 0.1088 -1.0348 -0.2726 0.0343 -0.4277 0.8879 -0.8953 -0.2478 0.2174 0.2613 -0.6738 -0.1412 0.2374 0.8069 -0.6075 -0.9603 0.2013 -0.5475 -0.5451 0.4215 0.7904 0.6294 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.6204 -0.2479 0.3610 -0.6442 0.2198 -0.1270 0.0529 -0.0496 0.7118 -0.3627 0.0039 -0.9271 0.5207 0.4132 -0.4368 0.7495 -0.3887 -0.8411 0.7995 -0.5429 -0.2067 0.5127 -0.9422 0.1268 0.2113 0.8595 0.6901 -1.0819 0.8435 0.0205 -0.6033 -0.6975 -0.0426 0.0301 -0.4844 -0.9754 0.4493 0.8814 0.0705 -0.3817 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
!W 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.8465 -0.9480 -1.0240 0.8453 0.3406 -0.6085 -1.1087 0.7574 0.6576 -0.0049 0.1833 0.0483 -0.3441 1.3800 -0.4230 -0.8121 0.3001 -0.3075 -0.6494 -0.8333 0.0000
!T 0.0230 -0.0624 -0.0387 -0.0249 -0.0592 0.0946 -0.0461 -0.0368 -0.0627 0.0138 -0.0085 -0.0554 -0.0227 0.1130 0.0041 -0.0466 -0.0653 0.0552 0.0170 -0.0734 -0.0908 -0.0234 -0.0130 -0.1462 -0.0006 -0.0003 -0.0064 -0.1354 -0.0098 -0.0589 -0.0142 -0.0955 0.0469 0.0579 -0.0568 0.0028 0.0167 0.0024 -0.0718 -0.1274 -0.1158 -0.2530 -0.3168 -0.2604 -0.6344 -0.1292 0.2879 -0.4527 0.0816 -0.2186 -0.4175 -0.3668 0.0683 -0.2972 -0.0510 0.2390 -0.0202 -0.3516 -0.0919 0.1575 -0.2976
!Z
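The tagged rows above end a backprop weight file (SAL.PP points `Set_BP_net_weights_from_file` at `s61.dat`). A minimal reader sketch, assuming the observed layout — each `!W` line is one weight row, `!T` carries the node thresholds, `!Z` terminates the block; the function name and return shape are my own, not from the thesis:

```python
def parse_bp_weight_file(text):
    """Read '!W' weight rows and the '!T' theta row; stop at '!Z'."""
    weights, thetas = [], []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("!Z"):
            break
        if line.startswith("!W"):
            weights.append([float(tok) for tok in line[2:].split()])
        elif line.startswith("!T"):
            thetas = [float(tok) for tok in line[2:].split()]
    return weights, thetas

# tiny synthetic sample in the same format
sample = "!W 0.5 -0.25 0.0\n!T 0.1 0.2 0.3\n!Z\n"
w, t = parse_bp_weight_file(sample)
```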

252
THES/SAL.PP Normal file
View File

@ -0,0 +1,252 @@
PROGRAM Salieri_network_training_program (Input,Output);
{
Copyright 1989 by Wesley R. Elsberry. All rights reserved.
Commercial use of this software is prohibited without written consent of
the author.
For information, bug reports, and updates contact
Wesley R. Elsberry
528 Chambers Creek Drive South
Everman, Texas 76140
Telephone: (817) 551-7018
}
USES
DOS, struct, BP_unit, misc1, ANSI_Z, globals, clasinst;
{General}
TYPE
REAL = SINGLE;
bpnp_ = BP_node_ptr_;
wnp_ = weight_node_ptr_;
vnp_ = vector_node_ptr_;
seq_pop_rec_ = RECORD
n : notes_;
t : INTEGER;
e : REAL;
END;
seq_pop_ = ARRAY[0..99] OF seq_pop_rec_;
seq_pop_command_ = (init,replace);
VAR
snet, s31net, s46net : BP_net_;
ii, jj, kk : INTEGER;
Done : BOOLEAN;
cmn : common_area_;
notes : notes_;
tp1, tp2, tp3 : DVE_ptr_;
error_m, tne, sum : ARRAY[1..3] OF REAL;
ss : STRING;
binsum : ARRAY[1..3] OF INTEGER;
fpos, fneg : INTEGER;
tr : REAL;
sinch : CHAR;
scon : STRING;
sp : seq_pop_;
PROCEDURE maintain_seq_pop (VAR sp1 : seq_pop_;
spot : INTEGER;
cmd : seq_pop_command_);
VAR
ii, jj : INTEGER;
BEGIN
CASE cmd OF
init : BEGIN
FOR ii := 0 TO 99 DO BEGIN
FOR jj := 1 TO 5 DO BEGIN
sp1[ii].n[jj] := 0;
END;
sp1[ii].t := 0;
sp1[ii].e := 0.0;
END; {FOR ii}
END; {init}
replace : BEGIN
REPEAT
FOR jj := 1 TO 3 DO BEGIN
IF (jj = 1) THEN BEGIN
sp1[spot].n[jj] := Random(9);
END
ELSE BEGIN
IF (sp1[spot].n[jj-1] = 0) THEN BEGIN
sp1[spot].n[jj] := Random(9);
END
ELSE BEGIN
sp1[spot].n[jj] := Random(8) + 1;
END;
END;
END;
FOR jj := 4 TO v_len_out DO BEGIN
sp1[spot].n[jj] := Random(8) + 1;
END; {FOR jj}
sp1[spot].t := Classical_instructor(sp1[spot].n);
UNTIL (Odd(spot)) OR (sp1[spot].t = 1);
END; {replace}
ELSE
BEGIN
END;
END; {Case CMD}
END;
PROCEDURE Set_input_vector_from_notes (vp : DVE_ptr_;
n : notes_);
VAR
ii : INTEGER;
vpt : DVE_ptr_;
vn : ARRAY[1..40] OF INTEGER;
BEGIN
FillChar (vn,SizeOf(vn),#0);
{Blank the current vector}
FOR ii := 1 TO 5 DO BEGIN{Notes subscript}
IF n[ii] > 0 THEN vn [((ii-1)*8)+n[ii]] := 1;
END; {For notes subscript}
vpt := vp;
FOR ii := 1 TO 40 DO BEGIN
vnp_(vpt^.dptr)^.v := vn[ii];
vpt := vpt^.right;
END; {FOR ii}
END;
BEGIN
Done := FALSE;
s46net.data_fname := 's61.dat';
ANSI_CUP(13,0);
Writeln(MemAvail:8);
Writeln(s46net.data_fname);
Setup_BP_net (s46net,s46net.data_fname);
Writeln;
Writeln(s46net.wt_fname);
Set_BP_net_weights_from_file(s46net,s46net.wt_fname);
ANSI_CLRSCR;
Writeln(MemAvail:8);
maintain_seq_pop(sp,0,init);
FOR ii := 1 TO 100 DO BEGIN
maintain_seq_pop(sp,ii-1,replace);
END;
REPEAT
IF dir_console_IO (sinch) THEN BEGIN
IF (UpCase(sinch) = 'Q') THEN BEGIN
Close (s46net.out_f);
EXIT;
END;
END;
FOR ii := 1 TO 3 DO BEGIN
error_m[ii] := 0;
sum[ii] := 0;
binsum[ii] := 0;
fpos := 0;
fneg := 0;
END;
FOR ii := 1 TO 100 DO BEGIN
IF dir_console_IO (sinch) THEN BEGIN
IF (UpCase(sinch) = 'Q') THEN BEGIN
Close (s46net.out_f);
EXIT;
END;
END;
Set_input_vector_from_notes (s46net.vi,sp[ii-1].n);
vnp_(s46net.vts^.dptr)^.v := sp[ii-1].t;
BP_train_and_change (s46net);
tne[3] := ABS(BP_net_error(s46net));
sp[ii-1].e := tne[3];
notes := sp[ii-1].n;
FOR kk := 3 TO 3 DO BEGIN
error_m[kk] := max_single(ABS(error_m[kk]),tne[kk]);
END;
IF ((tne[3] > 0.50) AND (vnp_(s46net.vts^.dptr)^.v = 1.0)) OR
((tne[3] >= 0.50) AND (vnp_(s46net.vts^.dptr)^.v = 0.0))
THEN BEGIN
INC(binsum[3]);
IF ((tne[3] > 0.50) AND (vnp_(s46net.vts^.dptr)^.v = 1.0))
THEN INC(fneg);
IF ((tne[3] >= 0.50) AND (vnp_(s46net.vts^.dptr)^.v = 0.0)
) THEN INC(fpos);
Write (s46net.out_f,'I ');
FOR kk := 1 TO 5 DO BEGIN
Write (s46net.out_f,(notes[kk]/1.0):1:1,' ');
END;
Writeln (s46net.out_f);
Writeln (s46net.out_f,'T ',vnp_(s46net.vts^.dptr)^.v:1:1);
END;
ANSI_CUP(20,0);
Write(ii:4,' Max Current Ave. Binary ');
FOR kk := 1 TO 5 DO Write(notes[kk]:1);
Write(' ',vnp_(s46net.vts^.dptr)^.v:2:1);
ANSI_CUP(24,17);
FOR kk := 1 TO 5 DO Write(notes[kk]:1);
FOR kk := 3 TO 3 DO BEGIN
ANSI_CUP(20+kk,0);
sum[kk] := sum[kk] + tne[kk];
Write(kk:4,' ',error_m[kk]:5:3,' ',tne[kk]:5:3,
' ',(sum[kk]/ii):5:3,' ',binsum[kk]:3);
END;
Write(' FPOS: ',fpos:3,' FNEG: ',fneg:3);
IF (sp[ii-1].e < Random) THEN BEGIN
maintain_seq_pop(sp,ii-1,replace);
END;
END; {FOR ii}
FOR kk := 3 TO 3 DO BEGIN
ANSI_CUP(14+kk,0);
Write(kk:4,' ',error_m[kk]:5:3,' ',(sum[kk]/100):5:3,' ',
binsum[kk]:3);
END;
Write(' FPOS: ',fpos:3,' FNEG: ',fneg:3);
Done := (error_m[3] <= s46net.errtol);
Dump_BP_net_weights(s46net,s46net.wt_fname);
UNTIL (Done);
Close (s46net.out_f);
END.
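The replacement policy in `maintain_seq_pop` can be sketched in Python. This is a simplified reading (the Pascal also allows 0-valued rest notes in the leading positions via `Random(9)`, omitted here), and `toy` is an arbitrary stand-in for `Classical_instructor`:

```python
import random

def replace_slot(pop, spot, instructor, rng):
    """Regenerate a random 5-note sequence for population slot `spot`.
    Even slots are redrawn until the instructor labels them classical,
    mirroring SAL.PP's REPEAT ... UNTIL Odd(spot) OR (t = 1)."""
    while True:
        notes = [rng.randint(1, 8) for _ in range(5)]
        target = instructor(notes)
        pop[spot] = {"notes": notes, "target": target, "error": 0.0}
        if spot % 2 == 1 or target == 1:
            return

# toy stand-in for Classical_instructor: call a sequence "classical"
# when its first note is odd (assumption, for illustration only)
toy = lambda n: 1 if n[0] % 2 == 1 else 0
rng = random.Random(1)
pop = [None] * 4
for spot in range(4):
    replace_slot(pop, spot, toy, rng)
```

In the main loop, SAL.PP regenerates slot `ii-1` whenever its stored error falls below a uniform random draw, so low-error (easy) sequences are recycled out while hard ones persist.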


15
THES/SEQUENCE.DAT Normal file
View File

@ -0,0 +1,15 @@
154
145
15
14
51
41
146
245
135
358
78
751
251
258


288
THES/STRUCT.PP Normal file
View File

@ -0,0 +1,288 @@
UNIT struct;
{
This unit provides a general set of linked-list functions.
}
{
Copyright 1989 by Wesley R. Elsberry. All rights reserved.
Commercial use of this software is prohibited without written consent of
the author.
For information, bug reports, and updates contact
Wesley R. Elsberry
528 Chambers Creek Drive South
Everman, Texas 76140
Telephone: (817) 551-7018
}
INTERFACE
{PUBLIC DECLARATIONS}
TYPE
dve_ptr_ = ^dve_;
dve_ = RECORD {terminology: DVE is diploid vector
element}
{Linkage can be both forward and
backward}
up, down, left, right : dve_ptr_;
{Forward and backward links}
dptr : POINTER; {Untyped pointer}
END;
hve_ptr_ = ^hve_;
hve_ = RECORD {terminology: HVE is haploid vector
element}
{Linkage is forward only}
down, right : hve_ptr_; {Forward links}
dptr : POINTER; {Untyped pointer}
END;
FUNCTION Create_DVE_vector (num_elem : WORD; size : WORD): POINTER;
{This function takes a size for a pointer based structure which is based
on the DVE_ type declaration and returns a pointer to a doubly linked
list which has NUM_ELEM number of elements. The calling program should
assign this pointer's value to a pointer of the type which SIZE
describes. The head element's LEFT pointer is NIL, RIGHT points to the
next element or is NIL, UP and DOWN are NIL. Elements beside the head
element have non-NIL LEFT pointers. Calling routines should check for
NIL returned pointers, as this indicates that not enough memory was
available to allocate the vector.}
FUNCTION copy_DVE_vector (aptr : POINTER; num_elem : WORD; size : WORD): POINTER;
{ This function takes a pointer to a DVE vector and creates a copy, then
passes back a pointer to the copy vector.}
FUNCTION Create_HVE_vector (num_elem : WORD; size : WORD; rt : BOOLEAN): POINTER;
{This function creates a linked list of HVE_ type, with space allocated
based on size of a HVE_ based structure definition. If RT is TRUE, then
the function allocates a vector using the RIGHT linkages, and DOWN
pointers are set to NIL. Otherwise, the obverse is true. A NIL pointer
returned as a result indicates that not enough memory was available for
allocation.}
FUNCTION Create_matrix (n_across, n_down : WORD; size : WORD): POINTER;
{This function creates a doubly linked matrix of DVE vector elements,
using the provided size.}
FUNCTION Find_element_DVE (num : WORD; pntr : POINTER): POINTER;
FUNCTION Find_element_HVE (num : WORD; pntr : POINTER): POINTER;
FUNCTION Find_element_matrix (n1, n2 : WORD; pntr : POINTER): POINTER;
IMPLEMENTATION
{PRIVATE DECLARATIONS}
{IMPLEMENTATIONS OF PROCEDURES AND FUNCTIONS}
FUNCTION Create_DVE_vector (num_elem : WORD;
size : WORD): POINTER;
{This function takes a size for a pointer based structure which is based
on the DVE_ type declaration and returns a pointer to a doubly linked
list which has NUM_ELEM number of elements. The calling program should
assign this pointer's value to a pointer of the type which SIZE
describes. The head element's LEFT pointer is NIL, RIGHT points to the
next element or is NIL, UP and DOWN are NIL. Elements beside the head
element have non-NIL LEFT pointers. Calling routines should check for
NIL returned pointers, as this indicates that not enough memory was
available to allocate the vector.}
VAR
ii : INTEGER;
TempStart, Temp : dve_ptr_;
BEGIN
GetMem (TempStart,SizeOf(TempStart^));
Temp := TempStart;
Temp^.left := NIL;
Temp^.up := NIL;
Temp^.down := NIL;
GetMem(Temp^.dptr,size);
FOR II := 2 TO num_elem DO BEGIN
GetMem (Temp^.right,SizeOf(TempStart^));
Temp^.right^.left := Temp;
Temp := Temp^.right;
Temp^.up := NIL;
Temp^.down := NIL;
GetMem(Temp^.dptr,size);
END; {for}
Temp^.right := NIL;
Create_DVE_vector := TempStart;
END;
FUNCTION copy_DVE_vector (aptr : POINTER;
num_elem : WORD;
size : WORD): POINTER;
{ This function takes a pointer to a DVE vector and creates a copy, then
passes back a pointer to the copy vector.}
VAR
tptr, bptr, t2ptr : dve_ptr_;
ii, jj : WORD;
BEGIN
bptr := create_DVE_vector (num_elem,size);
t2ptr := bptr;
Tptr := aptr;
ii := 1;
WHILE (Tptr <> NIL) AND (ii <= num_elem) DO BEGIN
Move (tptr^.dptr^,bptr^.dptr^,size); {Move(Source,Dest,Count)}
Tptr := Tptr^.right;
bptr := bptr^.right;
INC(ii);
END; {while}
copy_DVE_vector := t2ptr;
END;
FUNCTION Create_HVE_vector (num_elem : WORD;
size : WORD;
rt : BOOLEAN): POINTER;
{This function creates a linked list of HVE_ type, with space allocated
based on the size of a HVE_ based structure definition and on the size
of the user structure pointed to within each vector element. If RT is
TRUE, then the function allocates a vector using the RIGHT linkages, and
DOWN pointers are set to NIL. Otherwise, the obverse is true. A NIL
pointer returned as a result indicates that not enough memory was
available for allocation.}
VAR
ii : INTEGER;
TempStart, Temp : HVE_ptr_;
BEGIN
IF rt THEN BEGIN
GetMem (TempStart,SizeOf(TempStart^));
Temp := TempStart;
Temp^.down := NIL;
GetMem(Temp^.dptr,size);
FOR II := 2 TO num_elem DO BEGIN
GetMem (Temp^.right,SizeOf(TempStart^));
Temp := Temp^.right;
Temp^.down := NIL;
GetMem(Temp^.dptr,size);
END; {for}
Temp^.right := NIL;
END {if}
ELSE BEGIN
GetMem (TempStart,SizeOf(TempStart^));
Temp := TempStart;
Temp^.right := NIL;
GetMem(Temp^.dptr,size);
FOR II := 2 TO num_elem DO BEGIN
GetMem (Temp^.down,SizeOf(TempStart^));
Temp := Temp^.down;
Temp^.right := NIL;
GetMem(Temp^.dptr,size);
END; {for}
Temp^.down := NIL;
END; {else}
Create_HVE_vector := TempStart;
END;
FUNCTION Create_matrix (n_across, n_down : WORD;
size : WORD): POINTER;
{This function creates a doubly linked matrix of DVE vector elements,
using the provided size for allocation of user space.}
VAR
ii, jj : WORD;
StartPtr, T1Ptr, T2Ptr : DVE_ptr_;
PROCEDURE Link_DVE_vectors (v1, v2 : DVE_ptr_);
BEGIN
WHILE (v1 <> NIL) AND (v2 <> NIL) DO BEGIN
v1^.down := v2;
v2^.up := v1; {Next elements}
v1 := v1^.right;
v2 := v2^.right;
END; {while}
END; {Link_DVE_vectors}
BEGIN
StartPtr := Create_DVE_vector (n_across, size);
T1Ptr := StartPtr;
FOR jj := 2 TO n_down DO BEGIN
T2Ptr := Create_DVE_vector (n_across, size);
Link_DVE_vectors (T1Ptr, T2Ptr);
T1Ptr := T2Ptr;
END; {for}
Create_matrix := StartPtr;
END; {Create_matrix}
FUNCTION Find_element_DVE (num : WORD;
pntr : POINTER): POINTER;
VAR
ii : WORD;
T1 : DVE_ptr_;
BEGIN
T1 := pntr;
ii := 1;
WHILE (T1 <> NIL) AND (ii < num) DO BEGIN
T1 := T1^.right;
INC(ii);
END; {while}
Find_element_DVE := T1;
END; {}
FUNCTION Find_element_HVE (num : WORD;
pntr : POINTER): POINTER;
VAR
ii : WORD;
T1 : HVE_ptr_;
BEGIN
T1 := pntr;
ii := 1;
WHILE (T1 <> NIL) AND (ii < num) DO BEGIN
T1 := T1^.right;
INC(ii);
END; {while}
Find_element_HVE := T1;
END; {}
FUNCTION Find_element_matrix (n1, n2 : WORD;
pntr : POINTER): POINTER;
VAR
ii, jj : WORD;
T1 : DVE_ptr_;
BEGIN
T1 := pntr;
ii := 1;
jj := 1;
WHILE (T1 <> NIL) AND (jj < n2) DO BEGIN
T1 := T1^.down;
jj := jj + 1;
END; {while}
WHILE (T1 <> NIL) AND (ii < n1) DO BEGIN
T1 := T1^.right;
ii := ii + 1;
END; {while}
Find_element_matrix := T1;
END; {}
BEGIN {Initialization}
END.
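Per the migration plan, the linked DVE structures reduce to plain nested lists in the port. A sketch of the assumed index mapping from the unit's 1-based accessors to 0-based Python subscripts:

```python
def make_matrix(n_across, n_down, fill=0.0):
    """Nested-list stand-in for Create_matrix(n_across, n_down, size)."""
    return [[fill] * n_across for _ in range(n_down)]

# Assumed mapping from the Pascal accessors (my reading, not thesis code):
#   Find_element_DVE(num, v)       -> v[num - 1]
#   Find_element_matrix(n1, n2, m) -> m[n2 - 1][n1 - 1]  (n1 across, n2 down)
m = make_matrix(3, 2)
m[2 - 1][3 - 1] = 7.0   # element (n1=3, n2=2) in 1-based terms
```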


70
THES/TESTCOMP.PP Normal file
View File

@ -0,0 +1,70 @@
PROGRAM test_music (Input,Output);
{
This program compares the conformance of an input note sequence to the set
of example sequences in 'SEQUENCE.DAT'.
}
{
Copyright 1989 by Wesley R. Elsberry. All rights reserved.
Commercial use of this software is prohibited without written consent of
the author.
For information, bug reports, and updates contact
Wesley R. Elsberry
528 Chambers Creek Drive South
Everman, Texas 76140
Telephone: (817) 551-7018
}
USES
DOS, misc1, ANSI_Z, globals, clasinst;
{General}
VAR
ii, jj, kk : INTEGER;
notes : notes_;
sinch : CHAR;
scon,infname : STRING;
inf : TEXT;
cnt,success : INTEGER;
BEGIN
Success := 0;
Cnt := 0;
FOR ii := 1 TO 5 DO BEGIN
notes[ii] := 0;
END;
{Get filename to test}
Write('Name of file to process: ');
Readln(infname);
{Open for input}
Assign(inf,infname);
Reset(inf);
REPEAT
Readln(inf,notes[5]);
IF (classical_instructor(notes) = 1) THEN BEGIN
INC(success);
END;
INC(cnt);
FOR ii := 1 TO 4 DO BEGIN
notes[ii] := notes[ii+1];
END;
UNTIL (Eof(inf));
Close(inf);
Writeln('Count = ',cnt:5,' Successes = ',success:5);
END.
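The shift-and-test loop above translates directly. A hedged Python sketch, with `toy` as a stand-in for `classical_instructor` (the real rule set lives in `clasinst`); like the Pascal, the window starts zero-filled:

```python
def count_conformant(stream, instructor, window=5):
    """Slide a `window`-note buffer over `stream`, counting windows the
    instructor labels classical; mirrors TESTCOMP.PP's shift-left loop."""
    buf = [0] * window          # TESTCOMP.PP also starts with zeroed notes
    count = successes = 0
    for note in stream:
        buf = buf[1:] + [note]  # drop oldest, append newest
        if instructor(buf) == 1:
            successes += 1
        count += 1
    return count, successes

# toy stand-in: accept a window when its last interval ascends
toy = lambda w: 1 if w[-1] > w[-2] else 0
cnt, ok = count_conformant([1, 3, 2, 5, 4], toy)
```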


8946
THES/THES.6 Normal file

File diff suppressed because it is too large Load Diff

BIN
THES/THES1.EXP Normal file

Binary file not shown.

186
THES/THPROPOS.TXT Normal file
View File

@ -0,0 +1,186 @@
Revision date:
Jun 07, 1988
Thesis proposal
Wesley R. Elsberry
Master's candidate, CSE
Committee:
Karan Briggs, CSE (Graduate Chairman)
Daniel Levine, Mathematics
Lynn Peterson, CSE
Preliminary outline
I. Introduction
II. Literature review
III. Topic proposal
a. Topic description
b. Topic verification (implementation)
i. Application proposal
ii. Description
iii. Resources needed for accomplishment
************************************
I. Introduction
The field of artificial neural network research currently
suffers from several misapprehensions on the part of researchers.
First, communication continues to be sketchy and prone to
misunderstanding, as no clearcut definitions have been attached
to even the most commonly accepted terms and phrases that
comprise ANN jargon. Researchers will ignore the
interdisciplinary nature of ANN research to promote or denigrate
ANN results in a specialized context. Often this is done in such
a way that it is not clear that the comments or analysis are only
valid in the specialized context. Second, the motivations for
research vary wildly, and thus criticisms of models or data often
are initiated on the basis of entirely different goal assumptions.
Finally, much criticism and infighting occurs not because of any
real research related causes, but because of politicking and the
quest for personal power or recognition. While Kuhn [The Structure of
Scientific Revolutions] may revel in the unfolding byplay, it is
a source of annoyance and an obstacle to good work for others
engaged in this research. While these misapprehensions may not
be conscious in nature, that does not lessen the negative impact
of the misapprehensions.
One misapprehension which remains particularly pervasive is the
idea that there exists one 'correct' model for artificial neural
networks. The biological reality reflects a complex set of
systems which accomplish diverse functions. No one has suggested
that all biological neural systems operate in the same manner.
Other, more easily apprehensible, biological systems reflect that
variation arises both in structures and mechanisms that perform
functional tasks. Spiders, insects, fish, birds, and mammals
have all developed methods of flight, yet none are quite the
same. Other examples can demonstrate that the same mechanism may
be coopted for more than one purpose. Certainly the expectation
should be that biological neural systems follow this pattern, yet
the prevailing attitude in current ANN research denies this.
Different models reflect variation in an approach to a single
function, or simply approaches to different functions.
Comparisons which should account for this feature often do not.
Since various models will have features which make them
preferable for particular classes of problems, a problem which can be
divided into subset problems may be best solved through
integration and coordination of differing ANN models. This
approach is expected to prove more tractable and productive than
attempting to force a solution model to fit a specified problem
complex (or changing the problem specification to fit the model).
II. Literature review
Problem solving as McCulloch and Pitts envisioned it
[from Levine 83]
As Rosenblatt redefined it
[from Levine 83 and Rosenblatt ??]
What Hopfield says about Grossberg [this will be short]
[from H-T 86]
What Rumelhart and McClelland say about Hopfield
[from PDP]
What Rumelhart and McClelland say about Grossberg
[from PDP]
What Grossberg says about everybody else [stated as briefly as
possible]
[from Applied Optics article, 87 Cognitive Science
article]
Evidences for multi-model integration:
PDP Ch. 26, p 541: "A problem with the PDP models presented in
this book is that they are too specialized, so concerned with
solving the problem of the moment that they do not ask how the
whole might fit together. The various chapters present us with
different versions of a single, homogeneous structure,
perfectly well-suited for doing its task, but not sufficient,
in my opinion, at doing the whole task. One structure can't do
the job: There have to be several parts to the system that do
different things, sometimes communicating with each other,
sometimes not."
Of course, McClelland here means to have several variants of
the PDP model performing the functions, and is not per se
referring to a multi-model approach. But the admission that a
single instantiation of a model does not a solution make is
very important.
III. Topic proposal
a. Topic description
Use the models of Hopfield, PDP, and Grossberg's ART in an
integrated manner to solve a problem set that is a complex
suite of problem classes. The purpose here is not to
develop a general tool for such problems, but to demonstrate
the desirability and applicability of using an integrative
approach to ANN problem solving.
b. Topic verification (implementation)
i. Application proposal
Possible project 1: Cryptographic example. Small problem
that involves transposition, pattern recognition, and
feature detection and extraction. Models used as pre- and
co- processors for problem-solving.
ii. Description
The data set generated for presentation to the solution
system may have complex interdependencies which the ANN
would have to extract.
iii. Resources needed for accomplishment
Computer:
Available currently:
Heathkit H-100, MS-DOS, 768K
Heathkit H-158, MS-DOS (PC comp), 640K
DEC PDP 11/23, RT-11, 256K
Languages:
Available currently:
Under MS-DOS:
Turbo Pascal
XLISP
PD-Prolog
Turbo C
ECO-C88
ICON
MS-FORTRAN
MASM
Under RT-11:
MACRO-11 (assembler)
DIBOL


136
THES/TRANS.PP Normal file
View File

@ -0,0 +1,136 @@
PROGRAM trannote(Input,Output);
{
This program reads a file written by note_generator and converts it
to a form usable by Music Transcription System (MTS).
}
{
Copyright 1989 by Wesley R. Elsberry & Diane J. Blackwood.
All rights reserved.
Commercial use of this software is prohibited without written consent of
the authors.
For information, bug reports, and updates contact
Wesley R. Elsberry
528 Chambers Creek Drive South
Everman, Texas 76140
Telephone: (817) 551-7018
}
USES
Dos, CRT, ANN, struct;
CONST
{For file_note}
fnote: ARRAY[1..8] OF INTEGER = (12,14,16,17,19,21,23,24);
VAR
cnote : INTEGER;
hunt : STRING;
inf : TEXT; {Input file handle}
instr : STRING;
line_ct : INTEGER;
line : STRING; {String to hold line from temp file
before writing to final file}
outf : TEXT; {Output file handle}
panel : INTEGER;
{----------------------------------------------------------}
PROCEDURE filenote(cn: INTEGER;
VAR ln_ct: INTEGER);
VAR
i : INTEGER;
BEGIN
IF ( (ln_ct = 11) OR (SeekEof(inf)) ) THEN Write (outf,
'0 1 4 40 0 ')
ELSE Write (outf,'1 1 4 40 0 ');
Write (outf,fnote[cn]);
Writeln (outf,' -1 -1 ');
IF (ln_ct = 1) THEN inc(panel);
IF (ln_ct = 11) THEN BEGIN
ln_ct := 0;
Writeln (outf);
Writeln (outf);
END;
END;
BEGIN {Main}
{initialize variables}
line_ct := 1;
panel := 0;
{get filename of input file}
REPEAT
Write ('File to translate? ');
Readln (instr);
instr := FSearch(instr,GetEnv('PATH'));
UNTIL (Length(instr) <> 0);
Assign (inf,instr);
Reset(inf);
{get filename of output file}
REPEAT
Write ('File to store translated music? ');
Readln (instr);
hunt := FSearch(instr,GetEnv('PATH'));
UNTIL (Length(hunt) = 0);
Assign (outf,'temp.sng');
Rewrite(outf);
{write header to mts file}
Writeln(outf,'30');
Writeln(outf,'10');
Writeln(outf,'4 4 0 3 3 7 6 4 2 2 2 1 1 1 1 2 ');
Writeln(outf,'0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ');
{process each note for use by mts}
WHILE NOT Eof(inf) DO BEGIN
Readln(inf,cnote);
filenote(cnote,line_ct);
line_ct := line_ct+1;
END;
{close the files in use}
Close(outf);
Close(inf);
{Copy temp file to final file with the value for panel corrected}
Assign (inf,'temp.sng');
Reset (inf);
Assign (outf,instr);
Rewrite (outf);
{read first two header lines and use panel to correct the number of
panels}
Readln(inf,line);
Writeln(outf,line);
Readln(inf,line);
Writeln(outf,panel);
{read the rest of the temp file and write to the final file}
WHILE NOT Eof(inf) DO BEGIN
Readln(inf,line);
Writeln(outf,line);
END;
Writeln (outf);
Writeln (outf);
{Close the files in use}
Close(outf);
Close(inf);
END. {Main}
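The `fnote` table is the only musical mapping TRANS.PP applies; a direct Python transcription of that lookup (the helper name is my own):

```python
# Pitch codes for notes 1..8, copied from TRANS.PP's fnote table
FNOTE = (12, 14, 16, 17, 19, 21, 23, 24)

def mts_pitch(note: int) -> int:
    """Map a 1-based note index to its MTS pitch code."""
    if not 1 <= note <= 8:
        raise ValueError(f"note out of range: {note}")
    return FNOTE[note - 1]
```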


60
composer_ans/__init__.py Normal file
View File

@ -0,0 +1,60 @@
"""Compatibility-first Python port of the thesis composition system."""
from .analysis import CompositionAnalysis, analyze_composition
from .art1 import ART1Category, ART1Network, ART1Params, ART1Result
from .backprop import BackpropNetwork, BackpropResult
from .beethoven import BeethovenCategorizer, BeethovenResult
from .classical_rules import ClassicalInstructor
from .encoding import encode_art_input, encode_note_sequence, encode_sequence_one_hot
from .hopfield import (
HopfieldNetworkState,
HopfieldParams,
HopfieldResult,
HopfieldRunResult,
generate_next_note,
run_hopfield_network,
)
from .salieri import SalieriCritic, SalieriResult
from .pipeline import CompositionPipeline, PipelineStep
from .reporting import build_run_report, save_run_report_json
from .types import (
CompositionContext,
CompositionRecord,
CompositionRunReport,
LegacyPaths,
NoteSequence,
)
__all__ = [
"CompositionAnalysis",
"ART1Category",
"ART1Network",
"ART1Params",
"ART1Result",
"BackpropNetwork",
"BackpropResult",
"BeethovenCategorizer",
"BeethovenResult",
"ClassicalInstructor",
"CompositionContext",
"CompositionPipeline",
"CompositionRecord",
"CompositionRunReport",
"HopfieldNetworkState",
"HopfieldParams",
"HopfieldResult",
"HopfieldRunResult",
"LegacyPaths",
"NoteSequence",
"PipelineStep",
"SalieriCritic",
"SalieriResult",
"analyze_composition",
"build_run_report",
"encode_art_input",
"encode_note_sequence",
"encode_sequence_one_hot",
"generate_next_note",
"run_hopfield_network",
"save_run_report_json",
]

4
composer_ans/__main__.py Normal file
View File

@ -0,0 +1,4 @@
from .cli import main
raise SystemExit(main())

66
composer_ans/analysis.py Normal file
View File

@ -0,0 +1,66 @@
from __future__ import annotations
from collections import Counter, defaultdict
from dataclasses import dataclass
import math
@dataclass(frozen=True)
class CompositionAnalysis:
note_count: int
alphabet_size: int
unigram_entropy_bits: float
conditional_entropy_bits: float
normalized_entropy: float
predictability: float
redundancy: float
def shannon_entropy(sequence: tuple[int, ...] | list[int]) -> float:
if not sequence:
return 0.0
counts = Counter(sequence)
total = len(sequence)
return -sum((count / total) * math.log2(count / total) for count in counts.values())
def first_order_conditional_entropy(sequence: tuple[int, ...] | list[int]) -> float:
if len(sequence) < 2:
return 0.0
transitions: dict[int, Counter[int]] = defaultdict(Counter)
source_counts = Counter(sequence[:-1])
for left, right in zip(sequence[:-1], sequence[1:]):
transitions[left][right] += 1
total_transitions = len(sequence) - 1
entropy = 0.0
for source, next_counts in transitions.items():
source_prob = source_counts[source] / total_transitions
total = sum(next_counts.values())
source_entropy = -sum(
(count / total) * math.log2(count / total) for count in next_counts.values()
)
entropy += source_prob * source_entropy
return entropy
def analyze_composition(
sequence: tuple[int, ...] | list[int],
*,
alphabet_size: int = 8,
) -> CompositionAnalysis:
notes = tuple(int(note) for note in sequence)
unigram_entropy = shannon_entropy(notes)
conditional_entropy = first_order_conditional_entropy(notes)
max_entropy = math.log2(alphabet_size) if alphabet_size > 1 else 0.0
normalized_entropy = unigram_entropy / max_entropy if max_entropy else 0.0
predictability = 1.0 - (conditional_entropy / max_entropy if max_entropy else 0.0)
redundancy = 1.0 - normalized_entropy
return CompositionAnalysis(
note_count=len(notes),
alphabet_size=alphabet_size,
unigram_entropy_bits=unigram_entropy,
conditional_entropy_bits=conditional_entropy,
normalized_entropy=normalized_entropy,
predictability=predictability,
redundancy=redundancy,
)
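A worked check of the two entropy measures defined above, using standalone copies of the formulas so the numbers can be verified by hand:

```python
import math
from collections import Counter, defaultdict

def shannon_entropy(seq):
    counts, total = Counter(seq), len(seq)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def cond_entropy(seq):
    trans, src, n = defaultdict(Counter), Counter(seq[:-1]), len(seq) - 1
    for a, b in zip(seq, seq[1:]):
        trans[a][b] += 1
    h = 0.0
    for a, nxt in trans.items():
        tot = sum(nxt.values())
        h += (src[a] / n) * -sum(
            (c / tot) * math.log2(c / tot) for c in nxt.values()
        )
    return h

# Strict alternation: two symbols are equally frequent (1 bit of unigram
# entropy), but every transition is determined (0 bits conditional).
seq = [1, 2, 1, 2, 1, 2]
```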

207
composer_ans/art1.py Normal file
View File

@ -0,0 +1,207 @@
from __future__ import annotations
from dataclasses import dataclass
import json
@dataclass(frozen=True)
class ART1Params:
max_categories: int
input_length: int
vigilance: float = 0.9
initial_bottom_up: float = 0.1
initial_top_down: float = 0.9
vigilance_decay: float = 0.99
@dataclass(frozen=True)
class ART1Category:
bottom_up: tuple[float, ...]
top_down: tuple[float, ...]
committed: bool
@dataclass(frozen=True)
class ART1Result:
winner: int
matched: bool
new_category: bool
delta_vigilance: bool
committed_categories: int
vigilance: float
expected_vector: tuple[int, ...]
class ART1Network:
def __init__(self, params: ART1Params) -> None:
self.params = params
self.vigilance = params.vigilance
self._categories = [
{
"bottom_up": [params.initial_bottom_up] * params.input_length,
"top_down": [params.initial_top_down] * params.input_length,
"committed": False,
}
for _ in range(params.max_categories)
]
@property
def committed_categories(self) -> int:
return sum(1 for category in self._categories if category["committed"])
@property
def categories(self) -> tuple[ART1Category, ...]:
return tuple(
ART1Category(
bottom_up=tuple(category["bottom_up"]),
top_down=tuple(category["top_down"]),
committed=bool(category["committed"]),
)
for category in self._categories
)
def categorize(self, input_vector: tuple[int, ...] | list[int]) -> ART1Result:
vector = tuple(int(value) for value in input_vector)
if len(vector) != self.params.input_length:
raise ValueError(
f"expected input length {self.params.input_length}, got {len(vector)}"
)
eligible = {index for index, category in enumerate(self._categories) if category["committed"]}
delta_vigilance = False
while True:
if not eligible:
if self.committed_categories < self.params.max_categories:
winner = self.committed_categories
self._commit_category(winner, vector)
return ART1Result(
winner=winner,
matched=True,
new_category=True,
delta_vigilance=delta_vigilance,
committed_categories=self.committed_categories,
vigilance=self.vigilance,
expected_vector=tuple(vector),
)
self.vigilance *= self.params.vigilance_decay
delta_vigilance = True
eligible = {
index for index, category in enumerate(self._categories) if category["committed"]
}
winner = self._choose_winner(vector, eligible)
self._resonate(winner, vector)
expected_vector = self._expected_vector(winner)
return ART1Result(
winner=winner,
matched=True,
new_category=False,
delta_vigilance=True,
committed_categories=self.committed_categories,
vigilance=self.vigilance,
expected_vector=expected_vector,
)
winner = self._choose_winner(vector, eligible)
expected_vector = self._expected_vector(winner)
if self._match(vector, expected_vector):
self._resonate(winner, vector)
return ART1Result(
winner=winner,
matched=True,
new_category=False,
delta_vigilance=delta_vigilance,
committed_categories=self.committed_categories,
vigilance=self.vigilance,
expected_vector=expected_vector,
)
eligible.remove(winner)
def _choose_winner(self, vector: tuple[int, ...], eligible: set[int]) -> int:
best_index = min(eligible)
best_score = float("-inf")
for index in sorted(eligible):
category = self._categories[index]
score = sum(
vector[i] * category["bottom_up"][i] for i in range(self.params.input_length)
)
if score > best_score:
best_score = score
best_index = index
return best_index
def _expected_vector(self, category_index: int) -> tuple[int, ...]:
top_down = self._categories[category_index]["top_down"]
threshold = sum(top_down) / self.params.input_length
return tuple(1 if value >= threshold else 0 for value in top_down)
def _match(self, vector: tuple[int, ...], expected_vector: tuple[int, ...]) -> bool:
ones_in_input = sum(vector)
raw_match = sum(1 for left, right in zip(vector, expected_vector) if left == 1 and right == 1)
if ones_in_input == 0:
return raw_match > 0
return (raw_match / ones_in_input) >= self.vigilance
def _commit_category(self, category_index: int, vector: tuple[int, ...]) -> None:
category = self._categories[category_index]
category["committed"] = True
category["top_down"] = [float(value) for value in vector]
ones = max(1, sum(vector))
category["bottom_up"] = [float(value) / ones for value in vector]
def _resonate(self, category_index: int, vector: tuple[int, ...]) -> None:
category = self._categories[category_index]
intersected = [1 if category["top_down"][i] >= 0.5 and vector[i] == 1 else 0 for i in range(self.params.input_length)]
category["top_down"] = [float(value) for value in intersected]
ones = max(1, sum(intersected))
category["bottom_up"] = [float(value) / ones for value in intersected]
def to_dict(self) -> dict[str, object]:
return {
"params": {
"max_categories": self.params.max_categories,
"input_length": self.params.input_length,
"vigilance": self.params.vigilance,
"initial_bottom_up": self.params.initial_bottom_up,
"initial_top_down": self.params.initial_top_down,
"vigilance_decay": self.params.vigilance_decay,
},
"vigilance": self.vigilance,
"categories": self._categories,
}
@classmethod
def from_dict(cls, data: dict[str, object]) -> "ART1Network":
params_data = data["params"] # type: ignore[index]
network = cls(
ART1Params(
max_categories=int(params_data["max_categories"]), # type: ignore[index]
input_length=int(params_data["input_length"]), # type: ignore[index]
vigilance=float(params_data["vigilance"]), # type: ignore[index]
initial_bottom_up=float(params_data["initial_bottom_up"]), # type: ignore[index]
initial_top_down=float(params_data["initial_top_down"]), # type: ignore[index]
vigilance_decay=float(params_data["vigilance_decay"]), # type: ignore[index]
)
)
network.vigilance = float(data["vigilance"])
network._categories = [
{
"bottom_up": [float(value) for value in category["bottom_up"]],
"top_down": [float(value) for value in category["top_down"]],
"committed": bool(category["committed"]),
}
for category in data["categories"] # type: ignore[index]
]
return network
def save_json(self, path: str) -> None:
with open(path, "w", encoding="utf-8") as handle:
json.dump(self.to_dict(), handle, indent=2)
@classmethod
def load_json(cls, path: str) -> "ART1Network":
with open(path, "r", encoding="utf-8") as handle:
data = json.load(handle)
return cls.from_dict(data)
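The `_match` criterion above is the standard ART1 vigilance test: the overlap between the input and the category's expected vector, relative to the number of 1s in the input, must meet the vigilance threshold. Isolated as a standalone function for checking by hand:

```python
def art1_match(inp, expected, vigilance):
    """ART1 vigilance test: |input AND expected| / |input| >= vigilance."""
    ones = sum(inp)
    overlap = sum(1 for a, b in zip(inp, expected) if a == 1 and b == 1)
    if ones == 0:
        return overlap > 0
    return overlap / ones >= vigilance

inp, expected = (1, 1, 0, 1, 0), (1, 1, 0, 0, 0)
# overlap covers 2 of the 3 input ones: passes rho = 0.6, fails rho = 0.9
```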

composer_ans/backprop.py Normal file
@@ -0,0 +1,282 @@
from __future__ import annotations
from dataclasses import dataclass
import json
import math
import random
from typing import Iterable
from .types import LegacyBPWeights, SalieriConfig
def sigmoid(range_value: float, slope_mod: float, shift: float, x: float) -> float:
temp = -(slope_mod * x)
temp = max(min(temp, 80.0), -80.0)
return (range_value / (1.0 + math.exp(temp))) - shift
@dataclass(frozen=True)
class BackpropNodeState:
node_type: str
net_input: float
delta: float
theta: float
range_value: float = 1.0
shift: float = 0.0
@dataclass(frozen=True)
class BackpropResult:
outputs: tuple[float, ...]
error: float
node_states: tuple[BackpropNodeState, ...]
class BackpropNetwork:
def __init__(
self,
*,
n_input: int,
n_hidden: int,
n_output: int,
learning_rate: float,
alpha: float,
weights: list[list[float]],
thetas: list[float],
) -> None:
self.n_input = n_input
self.n_hidden = n_hidden
self.n_output = n_output
self.learning_rate = learning_rate
self.alpha = alpha
self.node_count = n_input + n_hidden + n_output
self.weights = weights
self.thetas = thetas
self.last_weight_updates = [
[0.0 for _ in range(self.node_count)] for _ in range(self.node_count)
]
self.last_theta_updates = [0.0 for _ in range(self.node_count)]
self.node_types = self._build_node_types()
self.connectivity = self._build_connectivity()
@classmethod
def random(
cls,
*,
n_input: int,
n_hidden: int,
n_output: int,
learning_rate: float = 0.5,
alpha: float = 0.5,
rng: random.Random | None = None,
) -> "BackpropNetwork":
generator = rng or random.Random()
node_count = n_input + n_hidden + n_output
weights = [
[generator.uniform(-1.0, 1.0) for _ in range(node_count)]
for _ in range(node_count)
]
thetas = [0.0] * n_input + [generator.gauss(0.0, 0.25) for _ in range(n_hidden + n_output)]
return cls(
n_input=n_input,
n_hidden=n_hidden,
n_output=n_output,
learning_rate=learning_rate,
alpha=alpha,
weights=weights,
thetas=thetas,
)
@classmethod
def from_legacy(
cls,
*,
config: SalieriConfig,
legacy_weights: LegacyBPWeights,
) -> "BackpropNetwork":
return cls(
n_input=config.n_input,
n_hidden=config.n_hidden,
n_output=config.n_output,
learning_rate=config.learning_rate,
alpha=config.alpha,
weights=[list(row) for row in legacy_weights.weights],
thetas=list(legacy_weights.thetas),
)
def predict(self, inputs: Iterable[float]) -> BackpropResult:
input_values = tuple(float(value) for value in inputs)
if len(input_values) != self.n_input:
raise ValueError(f"expected {self.n_input} inputs, got {len(input_values)}")
net_inputs = [0.0 for _ in range(self.node_count)]
activations = [0.0 for _ in range(self.node_count)]
for idx in range(self.node_count):
if self.node_types[idx] == "input":
net_inputs[idx] = input_values[idx]
activations[idx] = input_values[idx]
continue
total = 0.0
for src in range(self.node_count):
if not self.connectivity[idx][src]:
continue
if self.node_types[src] == "input":
total += net_inputs[src] * self.weights[idx][src]
else:
total += sigmoid(1.0, 1.0, 0.0, net_inputs[src] + self.thetas[src]) * self.weights[idx][src]
net_inputs[idx] = total
activations[idx] = sigmoid(1.0, 1.0, 0.0, total + self.thetas[idx])
outputs = tuple(activations[self.n_input + self.n_hidden :])
node_states = tuple(
BackpropNodeState(
node_type=self.node_types[idx],
net_input=net_inputs[idx],
delta=0.0,
theta=self.thetas[idx],
)
for idx in range(self.node_count)
)
return BackpropResult(outputs=outputs, error=0.0, node_states=node_states)
def train_step(self, inputs: Iterable[float], targets: Iterable[float]) -> BackpropResult:
input_values = tuple(float(value) for value in inputs)
target_values = tuple(float(value) for value in targets)
if len(target_values) != self.n_output:
raise ValueError(f"expected {self.n_output} targets, got {len(target_values)}")
net_inputs = [0.0 for _ in range(self.node_count)]
activations = [0.0 for _ in range(self.node_count)]
for idx in range(self.node_count):
if self.node_types[idx] == "input":
net_inputs[idx] = input_values[idx]
activations[idx] = input_values[idx]
continue
total = 0.0
for src in range(self.node_count):
if not self.connectivity[idx][src]:
continue
source_activation = (
net_inputs[src]
if self.node_types[src] == "input"
else sigmoid(1.0, 1.0, 0.0, net_inputs[src] + self.thetas[src])
)
total += source_activation * self.weights[idx][src]
net_inputs[idx] = total
activations[idx] = sigmoid(1.0, 1.0, 0.0, total + self.thetas[idx])
deltas = [0.0 for _ in range(self.node_count)]
output_start = self.n_input + self.n_hidden
max_error = 0.0
for idx in range(self.node_count - 1, -1, -1):
activation = activations[idx]
if self.node_types[idx] == "output":
target = target_values[idx - output_start]
raw_error = target - activation
max_error = max(max_error, abs(raw_error))
deltas[idx] = raw_error * activation * (1.0 - activation)
elif self.node_types[idx] == "hidden":
downstream = 0.0
for dst in range(self.node_count):
if self.connectivity[dst][idx]:
downstream += deltas[dst] * self.weights[dst][idx]
deltas[idx] = activation * (1.0 - activation) * downstream
for idx in range(self.node_count):
theta_update = self.learning_rate * deltas[idx] + self.alpha * self.last_theta_updates[idx]
self.last_theta_updates[idx] = theta_update
self.thetas[idx] += theta_update
        for dst in range(self.node_count):
            for src in range(self.node_count):
                if not self.connectivity[dst][src]:
                    continue
                source_activation = (
                    net_inputs[src]
                    if self.node_types[src] == "input"
                    else activations[src]
                )
                # Delta rule with momentum: the src -> dst weight moves by the
                # destination node's delta times the source activation. (With
                # the indices the other way around, input deltas are always
                # zero and the input -> hidden weights never train.)
                update = self.learning_rate * (deltas[dst] * source_activation)
                update += self.alpha * self.last_weight_updates[dst][src]
                self.last_weight_updates[dst][src] = update
                self.weights[dst][src] += update
outputs = tuple(activations[output_start:])
node_states = tuple(
BackpropNodeState(
node_type=self.node_types[idx],
net_input=net_inputs[idx],
delta=deltas[idx],
theta=self.thetas[idx],
)
for idx in range(self.node_count)
)
return BackpropResult(outputs=outputs, error=max_error, node_states=node_states)
def _build_node_types(self) -> list[str]:
return (
["input"] * self.n_input
+ ["hidden"] * self.n_hidden
+ ["output"] * self.n_output
)
def _build_connectivity(self) -> list[list[bool]]:
connectivity = [[False for _ in range(self.node_count)] for _ in range(self.node_count)]
hidden_start = self.n_input
output_start = self.n_input + self.n_hidden
for dst in range(hidden_start, output_start):
for src in range(self.n_input):
connectivity[dst][src] = True
for dst in range(output_start, self.node_count):
for src in range(hidden_start, output_start):
connectivity[dst][src] = True
return connectivity
def to_dict(self) -> dict[str, object]:
return {
"n_input": self.n_input,
"n_hidden": self.n_hidden,
"n_output": self.n_output,
"learning_rate": self.learning_rate,
"alpha": self.alpha,
"weights": self.weights,
"thetas": self.thetas,
"last_weight_updates": self.last_weight_updates,
"last_theta_updates": self.last_theta_updates,
}
@classmethod
def from_dict(cls, data: dict[str, object]) -> "BackpropNetwork":
network = cls(
n_input=int(data["n_input"]),
n_hidden=int(data["n_hidden"]),
n_output=int(data["n_output"]),
learning_rate=float(data["learning_rate"]),
alpha=float(data["alpha"]),
weights=[[float(value) for value in row] for row in data["weights"]], # type: ignore[index]
thetas=[float(value) for value in data["thetas"]], # type: ignore[index]
)
network.last_weight_updates = [
[float(value) for value in row]
for row in data.get("last_weight_updates", network.last_weight_updates) # type: ignore[arg-type]
]
network.last_theta_updates = [
float(value)
for value in data.get("last_theta_updates", network.last_theta_updates) # type: ignore[arg-type]
]
return network
def save_json(self, path: str) -> None:
with open(path, "w", encoding="utf-8") as handle:
json.dump(self.to_dict(), handle, indent=2)
@classmethod
def load_json(cls, path: str) -> "BackpropNetwork":
with open(path, "r", encoding="utf-8") as handle:
data = json.load(handle)
return cls.from_dict(data)
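As a sanity check on the delta rule `train_step` uses (output delta `(target - out) * out * (1 - out)`, weight step `lr * delta * activation` plus momentum), a single sigmoid unit trained the same way learns a linearly separable gate. This is a standalone sketch, not part of the package:

```python
import math
import random

def train_single_neuron(samples, epochs=2000, lr=0.5):
    # One sigmoid unit trained with the same delta rule as train_step:
    # delta = (target - out) * out * (1 - out); w += lr * delta * input.
    rng = random.Random(0)
    weights = [rng.uniform(-0.5, 0.5) for _ in range(2)]
    theta = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            net = sum(w * x for w, x in zip(weights, inputs)) + theta
            out = 1.0 / (1.0 + math.exp(-net))
            delta = (target - out) * out * (1.0 - out)
            theta += lr * delta
            weights = [w + lr * delta * x for w, x in zip(weights, inputs)]
    return weights, theta

def predict(weights, theta, inputs):
    net = sum(w * x for w, x in zip(weights, inputs)) + theta
    return 1.0 / (1.0 + math.exp(-net))
```

Training on the four AND-gate patterns drives the unit's output above 0.5 only for the `(1, 1)` input.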

composer_ans/beethoven.py Normal file
@@ -0,0 +1,77 @@
from __future__ import annotations
from dataclasses import dataclass
import json
from .art1 import ART1Network, ART1Params, ART1Result
from .encoding import encode_art_input, encode_note_sequence
from .types import ART_CATEGORY_LIMIT, ART_INPUT_LENGTH, NoteSequence
@dataclass(frozen=True)
class BeethovenResult:
notes: NoteSequence
is_classical: bool
art_result: ART1Result
class BeethovenCategorizer:
def __init__(self, network: ART1Network | None = None) -> None:
self.network = network or ART1Network(
ART1Params(
max_categories=ART_CATEGORY_LIMIT,
input_length=ART_INPUT_LENGTH,
)
)
def categorize(
self,
notes: list[int] | tuple[int, ...],
*,
is_classical: bool,
) -> BeethovenResult:
sequence = encode_note_sequence(notes)
input_vector = encode_art_input(sequence, is_classical=is_classical)
art_result = self.network.categorize(input_vector)
return BeethovenResult(
notes=sequence,
is_classical=is_classical,
art_result=art_result,
)
@classmethod
def with_params(
cls,
*,
max_categories: int = ART_CATEGORY_LIMIT,
input_length: int = ART_INPUT_LENGTH,
vigilance: float = 0.9,
vigilance_decay: float = 0.99,
) -> "BeethovenCategorizer":
return cls(
network=ART1Network(
ART1Params(
max_categories=max_categories,
input_length=input_length,
vigilance=vigilance,
vigilance_decay=vigilance_decay,
)
)
)
def to_dict(self) -> dict[str, object]:
return {"network": self.network.to_dict()}
@classmethod
def from_dict(cls, data: dict[str, object]) -> "BeethovenCategorizer":
return cls(network=ART1Network.from_dict(data["network"])) # type: ignore[arg-type]
def save_json(self, path: str) -> None:
with open(path, "w", encoding="utf-8") as handle:
json.dump(self.to_dict(), handle, indent=2)
@classmethod
def load_json(cls, path: str) -> "BeethovenCategorizer":
with open(path, "r", encoding="utf-8") as handle:
data = json.load(handle)
return cls.from_dict(data)

@@ -0,0 +1,33 @@
from __future__ import annotations
from dataclasses import dataclass
from pathlib import Path
from .encoding import encode_note_sequence
from .types import NoteSequence
@dataclass(frozen=True)
class ClassicalInstructor:
sequences: tuple[str, ...]
@classmethod
def from_sequence_file(cls, path: str | Path) -> "ClassicalInstructor":
sequence_path = Path(path)
sequences = tuple(
line.strip().rstrip("\x1a")
for line in sequence_path.read_text(encoding="ascii").splitlines()
if line.strip().rstrip("\x1a")
)
return cls(sequences=sequences)
def classify(self, notes: list[int] | tuple[int, ...]) -> int:
target = "".join(str(note) for note in encode_note_sequence(notes))
for candidate in self.sequences:
candidate_len = len(candidate)
if target[-candidate_len:] == candidate:
return 1
return 0
def __call__(self, notes: list[int] | tuple[int, ...]) -> int:
return self.classify(notes)
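The rule in `classify` is a suffix test: the note window, rendered as a digit string, is classical when any supervisor sequence ends it. A minimal standalone restatement (illustrative name, not part of the package):

```python
def matches_classical(sequences, notes):
    # A window counts as classical when any supervisor sequence is a suffix
    # of the digit-string rendering of the note window.
    target = "".join(str(note) for note in notes)
    return 1 if any(target.endswith(candidate) for candidate in sequences) else 0
```

For nonempty candidates this is equivalent to the `target[-candidate_len:] == candidate` slice comparison above (a candidate longer than the target compares unequal either way).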

composer_ans/cli.py Normal file
@@ -0,0 +1,81 @@
from __future__ import annotations
import argparse
from pathlib import Path
from .beethoven import BeethovenCategorizer
from .pipeline import CompositionPipeline
from .reporting import build_run_report, save_run_report_json
from .salieri import SalieriCritic
def build_parser() -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(prog="triune-cadence")
parser.add_argument("--thes-root", default="THES")
parser.add_argument("--notes", type=int, default=16)
parser.add_argument("--object-threshold", type=int, default=3)
parser.add_argument("--max-attempts-per-note", type=int, default=500)
parser.add_argument("--art-vigilance", type=float, default=0.9)
parser.add_argument("--art-vigilance-decay", type=float, default=0.99)
parser.add_argument("--save-salieri")
parser.add_argument("--save-beethoven")
parser.add_argument("--load-salieri")
parser.add_argument("--load-beethoven")
parser.add_argument("--save-report")
return parser
def main() -> int:
args = build_parser().parse_args()
root = Path(args.thes_root)
pipeline = CompositionPipeline.from_legacy_data_with_options(
root,
object_threshold=args.object_threshold,
art_vigilance=args.art_vigilance,
art_vigilance_decay=args.art_vigilance_decay,
)
if args.load_salieri:
pipeline.salieri = SalieriCritic.load_json(args.load_salieri)
if args.load_beethoven:
pipeline.beethoven = BeethovenCategorizer.load_json(args.load_beethoven)
record = pipeline.compose(
max_notes=args.notes,
max_attempts_per_note=args.max_attempts_per_note,
)
report = build_run_report(
record,
parameters={
"thes_root": str(root),
"notes_requested": args.notes,
"object_threshold": args.object_threshold,
"max_attempts_per_note": args.max_attempts_per_note,
"art_vigilance": args.art_vigilance,
"art_vigilance_decay": args.art_vigilance_decay,
},
)
print("notes:", " ".join(str(note) for note in report.notes))
print(
"per_note_seconds:",
" ".join(f"{elapsed:.6f}" for elapsed in report.per_note_seconds),
)
print(f"total_seconds: {report.total_seconds:.6f}")
if report.per_note_seconds:
mean_seconds = sum(report.per_note_seconds) / len(report.per_note_seconds)
print(f"mean_note_seconds: {mean_seconds:.6f}")
print(f"unigram_entropy_bits: {report.unigram_entropy_bits:.4f}")
print(f"conditional_entropy_bits: {report.conditional_entropy_bits:.4f}")
print(f"normalized_entropy: {report.normalized_entropy:.4f}")
print(f"predictability: {report.predictability:.4f}")
print(f"redundancy: {report.redundancy:.4f}")
if args.save_salieri:
pipeline.salieri.save_json(args.save_salieri)
if args.save_beethoven:
pipeline.beethoven.save_json(args.save_beethoven)
if args.save_report:
save_run_report_json(report, args.save_report)
return 0

composer_ans/encoding.py Normal file
@@ -0,0 +1,34 @@
from __future__ import annotations
from .types import ART_INPUT_LENGTH, NOTE_VOCABULARY_SIZE, NoteSequence, SEQUENCE_LENGTH
def encode_note_sequence(notes: list[int] | tuple[int, ...]) -> NoteSequence:
if len(notes) != SEQUENCE_LENGTH:
raise ValueError(f"expected {SEQUENCE_LENGTH} notes, got {len(notes)}")
encoded = tuple(int(note) for note in notes)
for note in encoded:
if not 0 <= note <= NOTE_VOCABULARY_SIZE:
raise ValueError(f"note out of range: {note}")
return encoded
def encode_sequence_one_hot(notes: list[int] | tuple[int, ...]) -> tuple[int, ...]:
encoded = encode_note_sequence(notes)
vector = [0] * (SEQUENCE_LENGTH * NOTE_VOCABULARY_SIZE)
for index, note in enumerate(encoded):
if note > 0:
vector[index * NOTE_VOCABULARY_SIZE + (note - 1)] = 1
return tuple(vector)
def encode_art_input(
notes: list[int] | tuple[int, ...],
*,
is_classical: bool,
) -> tuple[int, ...]:
vector = list(encode_sequence_one_hot(notes))
vector.append(1 if is_classical else 0)
if len(vector) != ART_INPUT_LENGTH:
raise AssertionError(f"unexpected ART input length: {len(vector)}")
return tuple(vector)
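The one-hot layout can be seen in isolation: each of the 5 window positions owns an 8-bit block, note value `n` sets bit `n - 1` of its block, and `0` ("no note yet") sets nothing. The constants below restate the grid dimensions from `GLOBALS.PP` for a self-contained sketch:

```python
SEQUENCE_LENGTH = 5        # positions in the note window
NOTE_VOCABULARY_SIZE = 8   # notes; 0 encodes "no note yet"

def one_hot(notes):
    # Each position owns an 8-bit block; note n sets bit n-1 of its block.
    vector = [0] * (SEQUENCE_LENGTH * NOTE_VOCABULARY_SIZE)
    for index, note in enumerate(notes):
        if note > 0:
            vector[index * NOTE_VOCABULARY_SIZE + (note - 1)] = 1
    return tuple(vector)
```

So `(1, 0, 0, 0, 8)` sets exactly two bits: bit 0 (position 0, note 1) and bit 39 (position 4, note 8), which is why appending the classicality bit yields the 41-element ART input.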

@@ -0,0 +1,62 @@
from __future__ import annotations
from dataclasses import asdict
import csv
from pathlib import Path
from .pipeline import CompositionPipeline
from .reporting import build_run_report, save_run_report_json
from .types import CompositionRunReport
def run_parameter_sweep(
*,
thes_root: str | Path,
output_dir: str | Path,
notes: int,
parameter_sets: list[dict[str, object]],
) -> list[CompositionRunReport]:
root = Path(thes_root)
destination = Path(output_dir)
destination.mkdir(parents=True, exist_ok=True)
reports: list[CompositionRunReport] = []
for index, params in enumerate(parameter_sets, start=1):
pipeline = CompositionPipeline.from_legacy_data_with_options(
root,
object_threshold=int(params.get("object_threshold", 3)),
art_vigilance=float(params.get("art_vigilance", 0.9)),
art_vigilance_decay=float(params.get("art_vigilance_decay", 0.99)),
)
max_attempts = int(params.get("max_attempts_per_note", 500))
record = pipeline.compose(max_notes=notes, max_attempts_per_note=max_attempts)
report = build_run_report(
record,
parameters={
"notes_requested": notes,
**params,
},
)
save_run_report_json(report, str(destination / f"run_{index:03d}.json"))
reports.append(report)
_write_summary_csv(destination / "summary.csv", reports)
return reports
def _write_summary_csv(path: Path, reports: list[CompositionRunReport]) -> None:
if not reports:
return
rows = []
for report in reports:
row = asdict(report)
row["notes"] = " ".join(str(note) for note in report.notes)
row["per_note_seconds"] = " ".join(f"{value:.6f}" for value in report.per_note_seconds)
row["parameters"] = str(report.parameters)
rows.append(row)
fieldnames = list(rows[0].keys())
with path.open("w", encoding="utf-8", newline="") as handle:
writer = csv.DictWriter(handle, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(rows)

composer_ans/hopfield.py Normal file
@@ -0,0 +1,274 @@
from __future__ import annotations
from dataclasses import dataclass
import math
import random
from typing import Callable
from .encoding import encode_note_sequence
from .types import NOTE_VOCABULARY_SIZE, SEQUENCE_LENGTH
NoiseFn = Callable[[float, float], float]
@dataclass(frozen=True)
class HopfieldParams:
epsilon: float = 0.005
resistance_scale: float = 3.5
capacitance_scale: float = 10.0
weight_scale: float = 1.0
input_scale: float = 1.0
iteration_scale: float = 1.0
global_resistance: float = 1.0
global_capacitance: float = 1.0
@dataclass(frozen=True)
class HopfieldResult:
input_notes: tuple[int, ...]
output_notes: tuple[int, ...]
candidate_note: int
iterations: int
activations: tuple[tuple[float, ...], ...]
outputs: tuple[tuple[float, ...], ...]
@dataclass(frozen=True)
class HopfieldNetworkState:
activations: tuple[tuple[float, ...], ...]
outputs: tuple[tuple[float, ...], ...]
external_inputs: tuple[tuple[float, ...], ...]
@dataclass(frozen=True)
class HopfieldRunResult:
state: HopfieldNetworkState
iterations: int
def make_gaussian_noise(rng: random.Random | None = None) -> NoiseFn:
generator = rng or random.Random()
def gaussian_noise(mean: float, variance: float) -> float:
u1 = generator.random()
u2 = generator.random()
# Match the Pascal Box-Muller form and avoid ln(0).
u1 = max(u1, 1e-12)
x = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
return variance * x + mean
return gaussian_noise
def tanh_clamped(value: float, exp_max: float = 80.0) -> float:
    # Mirror the Pascal tanh built from exponentials, clamping the argument
    # so math.exp cannot overflow.
    value = max(min(value, exp_max), -exp_max)
    return (math.exp(value) - math.exp(-value)) / (math.exp(value) + math.exp(-value))
def run_hopfield_network(
external_inputs: tuple[tuple[float, ...], ...],
weight_matrix: tuple[tuple[float, ...], ...],
*,
params: HopfieldParams | None = None,
initial_activations: tuple[tuple[float, ...], ...] | None = None,
) -> HopfieldRunResult:
model = params or HopfieldParams()
row_count = len(external_inputs)
if row_count == 0:
raise ValueError("external_inputs cannot be empty")
column_count = len(external_inputs[0])
if any(len(row) != column_count for row in external_inputs):
raise ValueError("external_inputs rows must be the same length")
_validate_weight_matrix(weight_matrix, active_size=row_count * column_count)
base_activations = initial_activations or tuple(
tuple(0.5 for _ in range(column_count)) for _ in range(row_count)
)
if len(base_activations) != row_count or any(len(row) != column_count for row in base_activations):
raise ValueError("initial_activations shape must match external_inputs")
activations = [
[list(row) for row in base_activations],
[list(row) for row in base_activations],
]
outputs = [
[[0.0 for _ in range(column_count)] for _ in range(row_count)],
[[0.0 for _ in range(column_count)] for _ in range(row_count)],
]
inputs = [
[list(row) for row in external_inputs],
[list(row) for row in external_inputs],
]
    # Seed the off-slot output with an impossible value so the first _done
    # check fails and the network always iterates at least once.
    outputs[1][0][0] = 20.0
time_step = 0
_update_outputs(activations, outputs, time_step, model, row_count, column_count)
iterations = 0
while not _done(outputs, model.epsilon, row_count, column_count):
time_step = time_step % 2
next_time = (time_step + 1) % 2
_update_outputs(activations, outputs, time_step, model, row_count, column_count)
for row_index in range(row_count):
for column_index in range(column_count):
delta = _delta_neuron_activation(
row_index=row_index,
column_index=column_index,
row_count=row_count,
column_count=column_count,
time_step=time_step,
activations=activations,
outputs=outputs,
inputs=inputs,
weight_matrix=weight_matrix,
params=model,
)
activations[next_time][row_index][column_index] = (
activations[time_step][row_index][column_index]
+ model.iteration_scale * delta
)
time_step += 1
iterations += 1
final_slot = time_step % 2
state = HopfieldNetworkState(
activations=tuple(tuple(row) for row in activations[final_slot]),
outputs=tuple(tuple(row) for row in outputs[final_slot]),
external_inputs=tuple(tuple(row) for row in external_inputs),
)
return HopfieldRunResult(state=state, iterations=iterations)
def generate_next_note(
notes: list[int] | tuple[int, ...],
weight_matrix: tuple[tuple[float, ...], ...],
*,
params: HopfieldParams | None = None,
noise: NoiseFn | None = None,
) -> HopfieldResult:
model = params or HopfieldParams()
gaussian_noise = noise or make_gaussian_noise()
input_notes = encode_note_sequence(notes)
inputs = [[0.0 for _ in range(SEQUENCE_LENGTH)] for _ in range(NOTE_VOCABULARY_SIZE)]
for note_index in range(NOTE_VOCABULARY_SIZE):
for position in range(SEQUENCE_LENGTH):
note_value = input_notes[position]
if note_value == 0:
current_input = gaussian_noise(0.5, 0.25)
elif note_value == note_index + 1:
current_input = 0.67 + gaussian_noise(0.0, 0.1)
else:
current_input = 0.33 + gaussian_noise(0.0, 0.1)
inputs[note_index][position] = current_input
output_notes = list(input_notes)
run_result = run_hopfield_network(
tuple(tuple(row) for row in inputs),
weight_matrix,
params=model,
)
for position in range(SEQUENCE_LENGTH):
output_notes[position] = _max_cell_in_column(run_result.state.outputs, position)
return HopfieldResult(
input_notes=input_notes,
output_notes=tuple(output_notes),
candidate_note=output_notes[-1],
iterations=run_result.iterations,
activations=run_result.state.activations,
outputs=run_result.state.outputs,
)
def _validate_weight_matrix(
weight_matrix: tuple[tuple[float, ...], ...],
*,
active_size: int,
) -> None:
if len(weight_matrix) < active_size:
raise ValueError(f"weight matrix needs at least {active_size} rows")
if any(len(row) < active_size for row in weight_matrix[:active_size]):
raise ValueError(f"weight matrix needs at least {active_size} columns")
def _update_outputs(
activations: list[list[list[float]]],
outputs: list[list[list[float]]],
time_step: int,
params: HopfieldParams,
row_count: int,
column_count: int,
) -> None:
for row_index in range(row_count):
for column_index in range(column_count):
outputs[time_step][row_index][column_index] = 0.5 * (
1.0
+ tanh_clamped(
activations[time_step][row_index][column_index] / params.global_capacitance
)
)
def _done(
outputs: list[list[list[float]]],
epsilon: float,
row_count: int,
column_count: int,
) -> bool:
for row_index in range(row_count):
for column_index in range(column_count):
if abs(outputs[0][row_index][column_index] - outputs[1][row_index][column_index]) > epsilon:
return False
return True
def _weight_coord(row_index: int, column_index: int, row_count: int) -> int:
return row_count * column_index + row_index
def _delta_neuron_activation(
*,
row_index: int,
column_index: int,
row_count: int,
column_count: int,
time_step: int,
activations: list[list[list[float]]],
outputs: list[list[list[float]]],
inputs: list[list[list[float]]],
weight_matrix: tuple[tuple[float, ...], ...],
params: HopfieldParams,
) -> float:
weight_sum = 0.0
current_index = _weight_coord(row_index, column_index, row_count)
for other_row in range(row_count):
for other_column in range(column_count):
other_index = _weight_coord(other_row, other_column, row_count)
weight_sum += (
weight_matrix[current_index][other_index]
* params.weight_scale
* outputs[time_step][other_row][other_column]
)
activation = activations[time_step][row_index][column_index]
neuron_input = inputs[time_step][row_index][column_index]
numerator = (
-(activation / (params.global_resistance * params.resistance_scale))
+ (neuron_input * params.input_scale)
+ weight_sum
)
return numerator / (params.global_capacitance * params.capacitance_scale)
def _max_cell_in_column(
output_grid: list[list[float]] | tuple[tuple[float, ...], ...],
position: int,
) -> int:
max_value = 0.0
max_note = 1
for note_index in range(NOTE_VOCABULARY_SIZE):
output = output_grid[note_index][position]
if output > max_value:
max_value = output
max_note = note_index + 1
return max_note
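`_weight_coord` is what ties the 8-row by 5-column note grid to the flat weight matrix loaded from `HTN.DAT`: cells are flattened column-major, so the 40 active cells occupy indices 0..39. A standalone restatement of that mapping:

```python
ROW_COUNT = 8  # notes per column in the 8 x 5 grid

def weight_coord(row_index, column_index, row_count=ROW_COUNT):
    # Column-major flattening: cell (row, col) -> row_count * col + row, so
    # the 40 active cells fill indices 0..39 of the legacy 64 x 64 matrix.
    return row_count * column_index + row_index
```

Column 0 maps to indices 0..7, column 1 to 8..15, and the last cell (note 8, position 5) lands at index 39.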

@@ -0,0 +1,15 @@
from .legacy_files import (
load_hopfield_weight_matrix,
load_legacy_paths,
load_salieri_config,
load_salieri_weights,
load_sequence_table,
)
__all__ = [
"load_hopfield_weight_matrix",
"load_legacy_paths",
"load_salieri_config",
"load_salieri_weights",
"load_sequence_table",
]

@@ -0,0 +1,108 @@
from __future__ import annotations
from pathlib import Path
import struct
from composer_ans.types import (
HOPFIELD_WEIGHT_DIMENSION,
LegacyBPWeights,
LegacyPaths,
SALIERI_NODE_COUNT,
SalieriConfig,
)
def load_legacy_paths(root: str | Path) -> LegacyPaths:
return LegacyPaths(root=Path(root))
def load_sequence_table(path: str | Path) -> tuple[str, ...]:
sequence_path = Path(path)
return tuple(
line.strip().rstrip("\x1a")
for line in sequence_path.read_text(encoding="ascii").splitlines()
if line.strip().rstrip("\x1a")
)
def load_salieri_config(path: str | Path) -> SalieriConfig:
values: dict[str, str] = {}
for raw_line in Path(path).read_text(encoding="ascii").splitlines():
line = raw_line.strip()
if not line.startswith("!"):
continue
code = line[1:2].upper()
if code == "Z":
break
payload = line[2:].strip()
values[code] = payload
return SalieriConfig(
learning_rate=float(values["L"]),
alpha=float(values["A"]),
n_input=int(values["I"]),
n_hidden=int(values["H"]),
n_output=int(values["O"]),
training_iterations=int(values["T"].split()[0]),
error_tolerance=float(values["E"]),
data_file=values["D"],
report_file=values["R"],
weight_file=values["W"],
)
def load_salieri_weights(path: str | Path) -> LegacyBPWeights:
vector_length = None
weights: list[tuple[float, ...]] = []
thetas: tuple[float, ...] | None = None
for raw_line in Path(path).read_text(encoding="ascii").splitlines():
line = raw_line.strip()
if not line.startswith("!"):
continue
code = line[1:2].upper()
payload = line[2:].strip()
if code == "V":
vector_length = int(payload)
elif code == "W":
row = tuple(float(item) for item in payload.split())
weights.append(row)
elif code == "T":
thetas = tuple(float(item) for item in payload.split())
elif code == "Z":
break
if vector_length is None:
raise ValueError("missing !V in weight file")
if len(weights) != vector_length:
raise ValueError(f"expected {vector_length} weight rows, got {len(weights)}")
if any(len(row) != vector_length for row in weights):
raise ValueError("weight matrix is not square")
if thetas is None:
raise ValueError("missing !T in weight file")
if len(thetas) != vector_length:
raise ValueError(f"expected {vector_length} theta values, got {len(thetas)}")
return LegacyBPWeights(
vector_length=vector_length,
weights=tuple(weights),
thetas=thetas,
)
def load_hopfield_weight_matrix(path: str | Path) -> tuple[tuple[float, ...], ...]:
data = Path(path).read_bytes()
expected_size = HOPFIELD_WEIGHT_DIMENSION * HOPFIELD_WEIGHT_DIMENSION * 4
if len(data) != expected_size:
raise ValueError(f"expected {expected_size} bytes, got {len(data)}")
values = struct.unpack(
f"<{HOPFIELD_WEIGHT_DIMENSION * HOPFIELD_WEIGHT_DIMENSION}f",
data,
)
rows = []
for offset in range(0, len(values), HOPFIELD_WEIGHT_DIMENSION):
rows.append(tuple(values[offset : offset + HOPFIELD_WEIGHT_DIMENSION]))
return tuple(rows)
def extract_active_hopfield_submatrix(
matrix: tuple[tuple[float, ...], ...],
) -> tuple[tuple[float, ...], ...]:
    # Keep only the active note grid (8 notes x 5 positions = 40 cells) of
    # the legacy 64 x 64 weight matrix.
    active_size = SALIERI_NODE_COUNT - 21
    return tuple(tuple(row[:active_size]) for row in matrix[:active_size])
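Both `load_salieri_config` and `load_salieri_weights` share the same line discipline: only lines starting with `!` carry data, the next character is a one-letter code, and `!Z` terminates the file. A self-contained sketch of that parser (the function name is illustrative):

```python
def parse_bang_lines(text):
    # Legacy Salieri files tag each payload line with '!' plus a one-letter
    # code; '!Z' terminates the file and all other lines are ignored.
    values = {}
    for raw_line in text.splitlines():
        line = raw_line.strip()
        if not line.startswith("!"):
            continue
        code = line[1:2].upper()
        if code == "Z":
            break
        values[code] = line[2:].strip()
    return values
```

Note that codes are upper-cased on read, and anything after `!Z` is never seen.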

composer_ans/pipeline.py Normal file
@@ -0,0 +1,157 @@
from __future__ import annotations
from dataclasses import dataclass
from pathlib import Path
import time
from .beethoven import BeethovenCategorizer, BeethovenResult
from .hopfield import HopfieldResult, generate_next_note
from .io.legacy_files import extract_active_hopfield_submatrix, load_hopfield_weight_matrix
from .salieri import SalieriCritic, SalieriResult
from .types import CompositionContext, CompositionRecord, LegacyPaths
@dataclass(frozen=True)
class PipelineStep:
context: CompositionContext
hopfield: HopfieldResult
salieri: SalieriResult
beethoven: BeethovenResult
accepted: bool
objects: bool
elapsed_seconds: float
class CompositionPipeline:
def __init__(
self,
*,
hopfield_weights: tuple[tuple[float, ...], ...],
salieri: SalieriCritic,
beethoven: BeethovenCategorizer,
object_threshold: int = 3,
) -> None:
self.hopfield_weights = hopfield_weights
self.salieri = salieri
self.beethoven = beethoven
self.object_threshold = object_threshold
@classmethod
def from_legacy_data(cls, root: str | Path) -> "CompositionPipeline":
return cls.from_legacy_data_with_options(root)
@classmethod
def from_legacy_data_with_options(
cls,
root: str | Path,
*,
object_threshold: int = 3,
art_vigilance: float = 0.9,
art_vigilance_decay: float = 0.99,
) -> "CompositionPipeline":
paths = LegacyPaths(root=Path(root))
hopfield_weights = extract_active_hopfield_submatrix(
load_hopfield_weight_matrix(paths.hopfield_weights)
)
salieri = SalieriCritic.from_legacy_paths(paths.root)
beethoven = BeethovenCategorizer.with_params(
vigilance=art_vigilance,
vigilance_decay=art_vigilance_decay,
)
return cls(
hopfield_weights=hopfield_weights,
salieri=salieri,
beethoven=beethoven,
object_threshold=object_threshold,
)
def neural_step(self, context: CompositionContext) -> PipelineStep:
start_time = time.perf_counter()
hopfield = generate_next_note(context.notes, self.hopfield_weights)
salieri = self.salieri.evaluate_and_train(hopfield.output_notes)
beethoven = self.beethoven.categorize(
hopfield.output_notes,
is_classical=salieri.is_classical,
)
        art = beethoven.art_result
        since_novelty = (
            0 if (art.delta_vigilance or art.new_category) else context.since_novelty + 1
        )
        frustration = (
            context.frustration + 1 if art.delta_vigilance else context.frustration
        )
objects = since_novelty >= self.object_threshold
if objects:
since_novelty = 0
accepted = not (
(objects and salieri.is_classical) or ((not objects) and (not salieri.is_classical))
)
if accepted:
next_context = CompositionContext(
notes=hopfield.output_notes,
delta_vigilance=beethoven.art_result.delta_vigilance,
new_category=beethoven.art_result.new_category,
is_classical=salieri.is_classical,
candidate_note=hopfield.candidate_note,
since_novelty=since_novelty,
frustration=0,
note_count=context.note_count + 1,
)
else:
reset_notes = list(hopfield.output_notes)
reset_notes[-1] = 0
next_context = CompositionContext(
notes=tuple(reset_notes),
delta_vigilance=beethoven.art_result.delta_vigilance,
new_category=beethoven.art_result.new_category,
is_classical=salieri.is_classical,
candidate_note=hopfield.candidate_note,
since_novelty=since_novelty,
frustration=frustration + 1,
note_count=context.note_count,
)
return PipelineStep(
context=next_context,
hopfield=hopfield,
salieri=salieri,
beethoven=beethoven,
accepted=accepted,
objects=objects,
elapsed_seconds=time.perf_counter() - start_time,
)
def step_until_accepted(
self,
context: CompositionContext,
*,
max_attempts: int = 500,
) -> PipelineStep:
current = context
for _ in range(max_attempts):
step = self.neural_step(current)
if step.accepted:
return step
current = step.context
raise RuntimeError("failed to accept a note within max_attempts")
def compose(
self,
*,
max_notes: int,
initial_context: CompositionContext | None = None,
max_attempts_per_note: int = 500,
) -> CompositionRecord:
compose_start = time.perf_counter()
context = initial_context or CompositionContext()
accepted_notes: list[int] = []
per_note_seconds: list[float] = []
for _ in range(max_notes):
note_start = time.perf_counter()
step = self.step_until_accepted(context, max_attempts=max_attempts_per_note)
accepted_notes.append(step.context.notes[-1])
per_note_seconds.append(time.perf_counter() - note_start)
context = step.context
return CompositionRecord(
notes=tuple(accepted_notes),
per_note_seconds=tuple(per_note_seconds),
total_seconds=time.perf_counter() - compose_start,
)

composer_ans/reporting.py (new file, 34 lines)
from __future__ import annotations

from dataclasses import asdict
import json

from .analysis import CompositionAnalysis, analyze_composition
from .types import CompositionRecord, CompositionRunReport


def build_run_report(
    record: CompositionRecord,
    *,
    alphabet_size: int = 8,
    parameters: dict[str, object] | None = None,
) -> CompositionRunReport:
    analysis = analyze_composition(record.notes, alphabet_size=alphabet_size)
    return CompositionRunReport(
        notes=record.notes,
        per_note_seconds=record.per_note_seconds,
        total_seconds=record.total_seconds,
        parameters=parameters or {},
        note_count=analysis.note_count,
        alphabet_size=analysis.alphabet_size,
        unigram_entropy_bits=analysis.unigram_entropy_bits,
        conditional_entropy_bits=analysis.conditional_entropy_bits,
        normalized_entropy=analysis.normalized_entropy,
        predictability=analysis.predictability,
        redundancy=analysis.redundancy,
    )


def save_run_report_json(report: CompositionRunReport, path: str) -> None:
    with open(path, "w", encoding="utf-8") as handle:
        json.dump(asdict(report), handle, indent=2)
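`analyze_composition` itself is not shown in this diff; for orientation, here is a minimal standalone sketch of the unigram-entropy and redundancy fields the report carries, assuming the standard Shannon definitions (the real analysis module may differ in detail):

```python
import math
from collections import Counter

def unigram_entropy_bits(notes) -> float:
    """Shannon entropy of the empirical note distribution, in bits per note."""
    counts = Counter(notes)
    total = len(notes)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def redundancy(notes, alphabet_size: int = 8) -> float:
    """1 - H/H_max, where H_max = log2(alphabet_size) for an 8-note vocabulary."""
    return 1.0 - unigram_entropy_bits(notes) / math.log2(alphabet_size)

notes = (1, 2, 3, 4, 1, 2, 3, 4)   # uniform over 4 of the 8 symbols
print(unigram_entropy_bits(notes))  # -> 2.0
print(redundancy(notes))            # -> about 0.3333 (1 - 2/3)
```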

composer_ans/salieri.py (new file, 78 lines)
from __future__ import annotations

from dataclasses import dataclass
import json
from pathlib import Path

from .backprop import BackpropNetwork, BackpropResult
from .classical_rules import ClassicalInstructor
from .encoding import encode_note_sequence, encode_sequence_one_hot
from .io.legacy_files import load_salieri_config, load_salieri_weights
from .types import LegacyPaths, NoteSequence


@dataclass(frozen=True)
class SalieriResult:
    notes: NoteSequence
    target: int
    raw_output: float
    is_classical: bool
    error: float
    network_result: BackpropResult


class SalieriCritic:
    def __init__(self, *, network: BackpropNetwork, instructor: ClassicalInstructor) -> None:
        self.network = network
        self.instructor = instructor

    @classmethod
    def from_legacy_paths(cls, root: str | Path) -> "SalieriCritic":
        paths = LegacyPaths(root=Path(root))
        config = load_salieri_config(paths.salieri_config)
        weights = load_salieri_weights(paths.salieri_weights)
        instructor = ClassicalInstructor.from_sequence_file(paths.sequence_data)
        network = BackpropNetwork.from_legacy(config=config, legacy_weights=weights)
        return cls(network=network, instructor=instructor)

    def evaluate_and_train(
        self,
        notes: list[int] | tuple[int, ...],
        *,
        target: int | None = None,
    ) -> SalieriResult:
        sequence = encode_note_sequence(notes)
        encoded = tuple(float(value) for value in encode_sequence_one_hot(sequence))
        training_target = self.instructor(sequence) if target is None else int(target)
        network_result = self.network.train_step(encoded, (float(training_target),))
        raw_output = network_result.outputs[0]
        return SalieriResult(
            notes=sequence,
            target=training_target,
            raw_output=raw_output,
            is_classical=raw_output > 0.5,
            error=network_result.error,
            network_result=network_result,
        )

    def to_dict(self) -> dict[str, object]:
        return {
            "network": self.network.to_dict(),
            "sequences": list(self.instructor.sequences),
        }

    @classmethod
    def from_dict(cls, data: dict[str, object]) -> "SalieriCritic":
        network = BackpropNetwork.from_dict(data["network"])  # type: ignore[arg-type]
        instructor = ClassicalInstructor(sequences=tuple(data["sequences"]))  # type: ignore[arg-type]
        return cls(network=network, instructor=instructor)

    def save_json(self, path: str) -> None:
        with open(path, "w", encoding="utf-8") as handle:
            json.dump(self.to_dict(), handle, indent=2)

    @classmethod
    def load_json(cls, path: str) -> "SalieriCritic":
        with open(path, "r", encoding="utf-8") as handle:
            data = json.load(handle)
        return cls.from_dict(data)
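`encode_sequence_one_hot` comes from `composer_ans.encoding`, which is not part of this diff. A plausible sketch, assuming notes are numbered 1..8 with 0 meaning "no note yet" (an assumption; the real encoder may differ), maps the 5-note window onto the legacy 40-cell grid:

```python
NOTE_VOCABULARY_SIZE = 8
SEQUENCE_LENGTH = 5

def encode_sequence_one_hot(sequence):
    """One-hot encode a 5-note window into a flat 40-element 0/1 vector
    (8 note cells per position, matching the legacy Hopfield grid)."""
    vec = [0.0] * (SEQUENCE_LENGTH * NOTE_VOCABULARY_SIZE)
    for position, note in enumerate(sequence):
        if note:  # assumption: note value 0 leaves the position's slot empty
            vec[position * NOTE_VOCABULARY_SIZE + (note - 1)] = 1.0
    return tuple(vec)

vec = encode_sequence_one_hot((1, 8, 0, 0, 0))
print(len(vec), sum(vec))  # -> 40 2.0
```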

composer_ans/types.py (new file, 89 lines)
from __future__ import annotations

from dataclasses import dataclass
from pathlib import Path

NOTE_VOCABULARY_SIZE = 8
SEQUENCE_LENGTH = 5
ART_INPUT_LENGTH = 41
ART_CATEGORY_LIMIT = 25
HOPFIELD_WEIGHT_DIMENSION = 64
SALIERI_NODE_COUNT = 61

NoteSequence = tuple[int, ...]


@dataclass(frozen=True)
class LegacyPaths:
    root: Path

    @property
    def sequence_data(self) -> Path:
        return self.root / "SEQUENCE.DAT"

    @property
    def salieri_config(self) -> Path:
        return self.root / "S61.DAT"

    @property
    def salieri_weights(self) -> Path:
        return self.root / "S61.WT"

    @property
    def hopfield_weights(self) -> Path:
        return self.root / "HTN.DAT"


@dataclass(frozen=True)
class SalieriConfig:
    learning_rate: float
    alpha: float
    n_input: int
    n_hidden: int
    n_output: int
    training_iterations: int
    error_tolerance: float
    data_file: str
    report_file: str
    weight_file: str


@dataclass(frozen=True)
class LegacyBPWeights:
    vector_length: int
    weights: tuple[tuple[float, ...], ...]
    thetas: tuple[float, ...]


@dataclass(frozen=True)
class CompositionContext:
    notes: NoteSequence = (0, 0, 0, 0, 0)
    delta_vigilance: bool = False
    new_category: bool = False
    is_classical: bool = False
    candidate_note: int = 0
    since_novelty: int = 0
    frustration: int = 0
    note_count: int = 0


@dataclass(frozen=True)
class CompositionRecord:
    notes: tuple[int, ...]
    per_note_seconds: tuple[float, ...] = ()
    total_seconds: float = 0.0


@dataclass(frozen=True)
class CompositionRunReport:
    notes: tuple[int, ...]
    per_note_seconds: tuple[float, ...]
    total_seconds: float
    parameters: dict[str, object]
    note_count: int
    alphabet_size: int
    unigram_entropy_bits: float
    conditional_entropy_bits: float
    normalized_entropy: float
    predictability: float
    redundancy: float
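Two quick checks on the constants and the frozen-dataclass style above: `ART_INPUT_LENGTH` is the 40-cell note grid plus the one classicality bit, and contexts are advanced functionally with `dataclasses.replace` rather than by mutation. The `Ctx` mirror below is illustrative only, not part of the package:

```python
from dataclasses import dataclass, replace

# 8 notes x 5 positions, plus one classicality bit, gives the ART F1 width.
assert 8 * 5 + 1 == 41

@dataclass(frozen=True)
class Ctx:  # minimal stand-in for CompositionContext
    notes: tuple = (0, 0, 0, 0, 0)
    note_count: int = 0

c0 = Ctx()
c1 = replace(c0, notes=c0.notes[1:] + (3,), note_count=c0.note_count + 1)
print(c1.notes, c1.note_count)  # -> (0, 0, 0, 0, 3) 1
```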

Binary file not shown.

(new file, 189 lines)
\documentclass[12pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{geometry}
\geometry{margin=1in}
\title{Competing Network Models and Problem-Solving}
\author{Converted from legacy plain-text source}
\date{}
\begin{document}
\maketitle
\begin{flushleft}
% This is a conservative automated conversion from the legacy text file.
% Manual cleanup will still be necessary for figures, references, footnotes, and formatting.
Competing Network Models and Problem-Solving
(Poster Presentation at the First Annual Meeting of the International Neural Network Society, September 6-10, 1988.)
Diane J. Blackwood, Department of Biomedical Engineering, University of Texas at Arlington
Wesley R. Elsberry, Department of Computer Science, University of Texas at Arlington
and
Sam Leven, Neural Systems and Science, 45 San Jacinto Way, San Francisco, CA 94127
\section*{ABSTRACT}
Three of the most often discussed neural network models are analyzed and differentiated. The Hopfield, PDP, and ART models ask different questions, it is asserted -- and offer different answers for analyzing and construing complex environments. The three may not be competitors but, rather, complements. In fact, they may replicate different neural processes (Leven, 1987b). We seek to demonstrate the value of each model -- in a single case study.
The model offered by Hopfield (e.g., 1982) represents a fast-converging computable technique for analyzing highly limited classes of inputs. The PDP model (Rumelhart et al., 1986) offers the prospect of adoption of varied schemas, at the cost of a larger, more complex system. The ART model (e.g., Carpenter et al., 1987a) allows the greatest adaptability, including the capacity to vary vigilance levels and emulate many neural functions -- with the costs of much greater complexity and strain on system resources.
We present a single system, including analysis of different aspects of a problem by Hopfield, PDP, and ART networks, as an example of the potential for including many capabilities within the same environment.
While self-criticism in the neural network community is not unusual (e.g., Rumelhart et al., 1986, Ch. 1; Grossberg, 1987a), we may find rapprochement among "competing paradigms" more effective than the occasional nastiness we encounter. Some problems, especially in complex controls on robotics, may be best addressed by a cooperative approach.
In fact, the three paradigms most often considered mutually exclusive (Hopfield, PDP, and ART) may actually represent different neural processes (Leven, 1987a). In any case, they clearly contemplate separate issues -- and may be best in approaching distinct problems.
Hopfield's model (Hopfield, 1982; Hopfield and Tank, 1986) represents a fast-converging computable technique for analyzing stereotyped or highly limited classes of inputs. Achieved minima have the virtue of remaining highly stable (representing permanent learning). This virtue has the accompanying cost, of course, of minimizing adaptability -- recognizing new aspects of data is not seriously contemplated for a stable implementation. The model has a notable tolerance for data sets containing great amounts of simple noise; however, it tends to shrink from "multi-flavored" problems, which require category or schema formation in an extensive environment.
The model of the Parallel Distributed Processing (PDP) group (Rumelhart et al., 1986) contemplates "schema formation", seeking to apply standard cognitive psychological insights to pattern recognition and category formation processes. They have sought to take minimal anatomies and build, following the work of Schank and Abelson (1977), basic semantic structures.
The PDP school has achieved notable successes in representing language (Sejnowski, 1986) and other areas with stable knowledge domains. Where "dynamic schemata" (Schank, 1982) are generic to a problem -- where existing memory structures must be modified -- the strength of the simulated annealing algorithm becomes a weakness. Changing existing knowledge structures (by modification or replacement in the same state space) is well-nigh impossible (Yoon et al., 1988).
This weakness of the PDP, its stubbornness in resisting data that should produce restructured schemata, is also a strength. In certain environments, stable representations of higher-order structures (rules) coupled with the capacity to learn or be trained "up-front" may offer system designers desired control. Some systems should not be ENDLESSLY adaptive.
Stephen Grossberg and his school (1987b \& c; Carpenter et al., 1987a \& b) have suggested that the Adaptive Resonance (ART) model best represents higher-order neural functions. Equipped with representations for motivational processes and interactions between routines ("avalanches") and higher order structures (e.g., motivational dipoles and associated READ architectures), a full-blown ART system can model highly adaptive motor tasks and emulate higher-order behaviors (Levine, 1986; Leven, 1987a \& b; and Ricart, 1988).
ART has the capacity to RECONSTRUE categories, based on continuing mismatches between data and existing higher order constructs and motivating environmental feedback. It also allows "masking fields" to eliminate from consideration whole segments of data which the system anticipates to be inappropriate or unnecessarily unsettling.
Under some circumstances, when using dipole structures to eliminate whole sets of competing representations (or rules), for example, ART can be faster -- and more effective -- than the alternatives we have presented. However, training an ART environment to perform highly routinized behaviors in which context has limited relevance has been considered more inefficient than using, say, the Hopfield model. Ordinarily, the powerful structures an ART modeler employs slow the learning process with error-checking routines which value fault-intolerance over speed. Yet, sometimes, in highly stable environments, designers may be uncomfortable with an ART system's capacity to "re-learn" essential skills they must employ.
Additionally, the rapid trainability and stability of a PDP environment may prove superior to ART, for many of the same reasons. Some higher-order rules (schemata) may be system-critical. In these cases, PROGRAMMERS SHOULD DESIGN SYSTEMS -- NOT THE SYSTEMS DESIGNING THEMSELVES. Hence, some systems may require less-intrusive network engines (like PDP) -- especially when these engines also provide greater speed.
Thus, the three models for neural network design may be COMPLEMENTARY in function: Hopfield offering speed and stability, PDP providing up-front learning and stable rule structures, and ART employing context- and environment-sensitive capabilities (see Figure 1). We demonstrate, below, that modelers ought to consider these qualities in developing extensive systems -- and utilize the many effective tools at our disposal.
\section*{EXAMPLE PROBLEM}
BEETHOVEN is a "music composition" system (see Figure 2). It provides a three-part neural network model. The system emulates fundamental compositional rules to generate and perform a musical sequence.
BACH is a Hopfield net that provides a sequence of notes, emulating musical melodic performance. A single voice selects notes from within a single octave. Biases are provided -- as a composer has the innate tendency to choose certain intervals and to reject notes that tend to violate common rules of harmony (e.g., Aldwell and Schachter, 1978).
This network of notes is output, in sequence, to a PDP back-propagation network named SALIERI, which has learned a set of standard, somewhat higher-order harmonic rules. The network judges the effectiveness of the sequence, note by note, based on the intervals involved and the absolute note values (e.g., \#7 should precede \#8 -- and, almost always, at the end of a phrase). These schemata, then, reject inappropriate sequences AND INHIBIT SOME INAPPROPRIATE NEXT NOTES. This "look-ahead" capability is unusual in a PDP environment, yet is fitting for the inhibitory role the network is playing and for the stability of the rule structure being employed.
The output from PDP flows, directly, to an ART network, BEETHOVEN. Employing a model of motivation (based on construction of category valuation and a healthy boredom at repetition), BEETHOVEN rejects "unaesthetic" sequences. As the number of phrases performed increases, the ART model develops intense biases, which it imposes on BACH and SALIERI.
One additional component of the environment is LOBES, the Context Manager. LOBES, loosely emulative of human frontal lobes (see Levine, 1986), maintains information about the processes being performed, mediates inter-model interaction, and provides for the final external output (sounding the speaker).
The model, then, utilizes the best capabilities of three distinctly different paradigms. Hopfield performs efficient routine processes, as would a "reptilian brain" (MacLean, 1970). PDP serves as an insistent schoolmarm, observing and enforcing higher-level rules, like a "neo-mammalian brain." ART provides a sense of fitness, an aesthetic fitting for models of the limbic system (or "mammalian brain").
Integration of many memory and processing functions in a three-part model may be similar to human brain function (Leven, 1987b). Regardless of its biological verisimilitude, however, such an approach seems to offer unique combinations of speed, stability, and flexibility.
\section*{REFERENCES}
Aldwell, E. \& C. Schachter. 1978. Harmony and voice leading. Harcourt, Brace \& Jovanovich, New York.
Carpenter, G.A. \& S. Grossberg. 1987a. A massively parallel architecture for a self-organizing neural pattern recognition machine. Computer Vision, Graphics, and Image Processing 37:54-115.
Carpenter, G.A. \& S. Grossberg. 1987b. ART 2: self-organization of stable category recognition codes for analog input patterns. Applied Optics 26(23):4919-4930.
Grossberg, S. 1987a. Competitive Learning: From interactive activation to adaptive resonance. Cognitive Science 11:23-63.
Grossberg, S., ed. 1987b \& c. The Adaptive Brain. Vol. I and II. Elsevier/North-Holland, Amsterdam.
Hartley, R. and H. Szu. 1987. A comparison of the computational power of neural network models. IEEE Proc. ICNN III:15-22.
Hopfield, J.J. 1982. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 79:2554-2558.
Hopfield, J.J. and D.W. Tank. 1985. "Neural" computation of decisions in optimization problems. Biol. Cybern. 52:141-152.
Hopfield, J.J. and D.W. Tank. 1986. Computing with neural circuits: A model. Science 233:625-633.
Leven, S. 1987a. Choice and neural process. Unpublished Ph.D. Dissertation, University of Texas at Arlington.
Leven, S. 1987b. S.A.M.: A triune extension to the ART model. Symposium on Neural Networks, North Texas State University. (Poster presentation)
Leven, S. 1988. Memory, helplessness, and the dynamics of hope. Presented at the Metroplex Institute for Neural Dynamics' Workshop on Motivation, Emotion, and Goal Direction in Neural Networks.
Levine, D.S. 1986. A neural network theory of frontal lobe function. In: The Proceedings of the Eighth Annual Conference of the Cognitive Science Society. Erlbaum.
MacLean, P. 1970. The triune brain, emotion, and scientific bias. In: F. Schmitt, ed. The Neurosciences: Second Study Program. Rockefeller University Press.
Ricart, R. 1988. Backward conditioning: A neural network model which exhibits both excitatory and inhibitory conditioning. Presented at the Metroplex Institute for Neural Dynamics' Workshop on Motivation, Emotion, and Goal Direction in Neural Networks.
Rumelhart, D. \& J. McClelland. 1986. Parallel Distributed Processing. MIT Press.
Schank, R. 1982. Dynamic memory. Cambridge University Press.
Schank, R.C. \& R.P. Abelson. 1977. Scripts, Plans, Goals, and Understanding. Erlbaum, Hillsdale, NJ.
Sejnowski, T.J. 1986. Open questions about computation in cerebral cortex. In: J.L. McClelland \& D.E. Rumelhart, eds. Parallel Distributed Processing Volume 2. MIT Press.
Simpson, R. 1988. A review of artificial neural systems II: Paradigms, applications, and implementations. Prepublication copy of paper submitted to CRC Critical Reviews in Artificial Intelligence.
Tank, D.W. \& J.J. Hopfield. 1986. Simple "neural" optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit. IEEE Transactions on Circuits and Systems CAS-33(5):533-541.
Yoon, Y., L.L. Peterson, \& P.R. Bergstrasser. 1988. A dermatology expert system using connectionist network. Unpublished poster presentation, IEEE ICNN.
\begin{center}\small
\begin{tabular}{lccc}
Feature & Hopfield & PDP & ART \\
\hline
Convergence speed & + & 0 & - \\
Convergence likelihood & + & 0 & - \\
Stability of network & + & +/0 & 0/- \\
Feedback capability & - & + & + \\
Category formation & - & + & ++ \\
Mixed data (complex environment) & - & 0 & + \\
Category reconstruction & - & - & + \\
Computational simplicity & + & 0 & - \\
\end{tabular}
\end{center}
Where '+' indicates a relative advantage, '0' indicates no special advantage or disadvantage, and '-' indicates a relative disadvantage.
Figure 1. Comparative analysis of features of the Hopfield, PDP, and ART artificial neural network models
[Figure 2 diagram (legacy ASCII art) could not be recovered by the automated conversion. Its structure: Bach (Hopfield) sends candidate notes to Salieri (PDP), which passes approval to Lobes (Context Management); Lobes issues "Generate Note!" and "Silence!" signals and drives the Speaker with new notes; Beethoven (ART 1) exchanges match and context information with the rest of the system.]
Figure 2. Structure of sample system utilizing Hopfield, PDP, and ART models.
\end{flushleft}
\end{document}

(new file, 436 lines)
@inproceedings{Amano1989,
author = {Amano, A. and Aritsuka, T. and Hataoka, N. and Ichikawa, A.},
year = {1989},
title = {On the use of neural networks and fuzzy logic in speech recognition},
booktitle = {Proceedings of the IEEE/INNS International Joint Conference on Neural Networks (IJCNN-89) Vol. I},
pages = {301-305}
}
@inproceedings{Blackwood1988,
author = {Blackwood, D. J. and Elsberry, W. R. and Leven, S.},
year = {1988},
title = {Competitive network models and problem solving},
note = {Poster Presentation at the First Annual Meeting of the International Neural Network Society.}
}
@article{Bower1981,
author = {Bower, G},
year = {1981},
title = {Mood and memory},
journal = {American Psychologist},
volume = {36},
pages = {129-148}
}
@article{Carpenter1987a,
author = {Carpenter, G. and Grossberg, S.},
year = {1987a},
title = {A massively parallel architecture for a self-organizing neural pattern recognition machine},
journal = {Computer Vision, Graphics, and Image Processing},
volume = {37},
pages = {54-115}
}
@article{Carpenter1987b,
author = {Carpenter, G. and Grossberg, S.},
year = {1987b},
title = {ART 2: self-organization of stable category recognition codes for analog input patterns},
journal = {Applied Optics},
volume = {26},
pages = {4919-4930}
}
@book{Charniak1985,
author = {Charniak, E. and McDermott, D.},
year = {1985},
title = {Introduction to Artificial Intelligence},
publisher = {Addison-Wesley},
address = {Reading, Massachusetts},
note = {701 pp.}
}
@inproceedings{Cruz1989,
author = {Cruz, V. and Cristobal, G. and Michaux, T. and Barquin, S.},
year = {1989},
title = {Invariant image recognition using a neural network model},
booktitle = {Proceedings of the IEEE/INNS International Joint Conference on Neural Networks (IJCNN-89) Vol. II},
pages = {17-22}
}
@incollection{Farhat1986,
author = {Farhat, N},
year = {1986},
title = {Neural net models and optical computing: an overview},
booktitle = {Hybrid and Optical Computing},
editor = {Harold Szu},
publisher = {SPIE},
address = {Bellingham, Washington},
volume = {634},
pages = {277-306}
}
@misc{Ferguson1987,
author = {Ferguson, J},
year = {1987},
title = {Personal Communication}
}
@inproceedings{Foo1989,
author = {Foo, Y. P. S. and Szu, H.},
year = {1989},
title = {Solving large-scale optimization problems by divide and conquer neural networks},
booktitle = {Proceedings of the IEEE/INNS International Joint Conference on Neural Networks (IJCNN-89) Vol. I},
pages = {507-512}
}
@article{Grossberg1972,
author = {Grossberg, S},
year = {1972},
title = {A neural theory of punishment and avoidance. II. Quantitative theory},
journal = {Mathematical Biosciences},
volume = {15},
pages = {253-285}
}
@article{Grossberg1973,
author = {Grossberg, S},
year = {1973},
title = {Contour enhancement, short term memory, and constancies in reverberating neural networks},
journal = {Studies in Applied Mathematics},
volume = {52},
pages = {213-257}
}
@article{Grossberg1975,
author = {Grossberg, S},
year = {1975},
title = {A neural model of attention, reinforcement, and discrimination learning},
journal = {International Review of Neurobiology},
volume = {18},
pages = {263-327}
}
@incollection{Harmon1970,
author = {Harmon, L. D},
year = {1970},
title = {Neural subsystems: an interpretive summary},
booktitle = {The Neurosciences Second Study Program},
editor = {F. O. Schmitt},
publisher = {Rockefeller University Press},
address = {New York},
pages = {486-494}
}
@book{Hawking1988,
author = {Hawking, S. W},
year = {1988},
title = {A Brief History of Time: From the Big Bang to Black Holes},
publisher = {Bantam Books},
address = {New York}
}
@book{Hebb1949,
author = {Hebb, D},
year = {1949},
title = {The Organization of Behavior},
publisher = {Wiley},
address = {New York}
}
@incollection{HechtNielsen1986,
author = {Hecht-Nielsen, R},
year = {1986},
title = {Performance limits of optical, electro-optical, and electronic neurocomputers},
booktitle = {Hybrid and Optical Computing},
editor = {H. Szu},
publisher = {SPIE},
address = {Bellingham, Washington},
volume = {634},
pages = {277-306}
}
@inproceedings{HechtNielsen1987,
author = {Hecht-Nielsen, R},
year = {1987},
title = {Counterpropagation Networks},
booktitle = {Proceedings of the IEEE International Joint Conference on Neural Networks (ICNN-87) Vol. II},
pages = {19-32}
}
@article{Hewitt1985,
author = {Hewitt, C},
year = {1985},
title = {The challenge of open systems},
journal = {Byte},
volume = {10},
number = {4},
pages = {223-242}
}
@inproceedings{Hirsch1989,
author = {Hirsch, M},
year = {1989},
title = {Convergence in Cascades of Neural Networks},
booktitle = {Proceedings of the IEEE/INNS International Joint Conference on Neural Networks (IJCNN-89) Vol. I},
pages = {207-208}
}
@article{Hopfield1982,
author = {Hopfield, J. J},
year = {1982},
title = {Neural networks and physical systems with emergent collective computational abilities},
journal = {Proceedings of the National Academy of Sciences},
volume = {79},
pages = {2554-2558}
}
@article{Hopfield1985,
author = {Hopfield, J. J. and Tank, D. W.},
year = {1985},
title = {"Neural" computation of decisions in optimization problems},
journal = {Biological Cybernetics},
volume = {52},
pages = {141-152}
}
@inproceedings{Kohonen1989,
author = {Kohonen, T},
year = {1989},
title = {A self-learning Musical Grammar, or "Associative memory of the second kind"},
booktitle = {Proceedings of the IEEE/INNS International Joint Conference on Neural Networks (IJCNN-89) Vol. I},
pages = {1-6}
}
@phdthesis{Leven1987a,
author = {Leven, S},
year = {1987a},
title = {Choice and Neural process: A dissertation},
school = {University of Texas at Arlington},
note = {Chapter 5: Neural process and form -- mathematics and meaning.}
}
@inproceedings{Leven1987b,
author = {Leven, S},
year = {1987b},
title = {S.A.M.: a triune extension to the ART model},
note = {Poster presentation at the North Texas State University Symposium on Neural Networks.}
}
@article{Levine1983,
author = {Levine, D},
year = {1983},
title = {Neural population modeling and psychology: a review},
journal = {Mathematical Biosciences},
volume = {66},
pages = {1-86}
}
@inproceedings{Levine1986,
author = {Levine, D},
year = {1986},
title = {A neural network theory of frontal lobe function},
booktitle = {Proceedings of the Eighth Annual Conference of the Cognitive Science Society, Erlbaum, Hillsdale, New Jersey},
pages = {716-727}
}
@unpublished{Levine1990,
author = {Levine, D},
year = {1990},
title = {Integration, disintegration and the frontal lobes},
note = {To appear in Motivation, Emotion, and Goal Direction in Neural Networks, D. Levine and S. Leven, eds., Erlbaum, Hillsdale, New Jersey.}
}
@article{Levine1989,
author = {Levine, D. and Prueitt, P.},
year = {1989},
title = {Modeling some effects of frontal lobe damage: novelty and perseveration},
journal = {Neural Networks},
volume = {2},
pages = {103-116}
}
@article{Lippmann1987,
author = {Lippmann, R. P},
year = {1987},
title = {An introduction to computing with neural nets},
journal = {IEEE ASSP Magazine},
pages = {4-22},
month = {apr}
}
@article{MacCulloch1943,
author = {McCulloch, W. S. and Pitts, W.},
year = {1943},
title = {A logical calculus of the ideas immanent in nervous activity},
journal = {Bull. Math. Biophys.},
volume = {5},
pages = {115-133}
}
@incollection{MacLean1970,
author = {MacLean, P. D},
year = {1970},
title = {The triune brain, emotion, and scientific bias},
booktitle = {The Neurosciences Second Study Program},
editor = {F. O. Schmitt},
publisher = {Rockefeller University Press},
address = {New York},
pages = {486-494}
}
@inproceedings{Matsuoka1989,
author = {Matsuoka, T. and Hamada, H. and Nakatsu, R.},
year = {1989},
title = {Syllable recognition using integrated neural networks}
}
@techreport{Neuroscience1988,
author = {{Metroplex Study Group on Computational Neuroscience}},
year = {1988},
title = {Computational neuroscience: an opportunity for technology leadership for the Metroplex},
institution = {North Texas Commission Regional Technology Program},
note = {Report to the North Texas Commission Regional Technology Program.}
}
@article{Newell1976,
author = {Newell, A. and Simon, H. A.},
year = {1976},
title = {Computer science as empirical inquiry: symbols and search},
journal = {Communications of the ACM},
volume = {19},
number = {3},
pages = {113-126}
}
@article{Nottebohm1989,
author = {Nottebohm, F},
year = {1989},
title = {From bird song to neurogenesis},
journal = {Scientific American},
pages = {74-79},
month = {feb}
}
@book{Pao1989,
author = {Pao, Y.-H.},
year = {1989},
title = {Adaptive Pattern Recognition and Neural Networks},
publisher = {Addison-Wesley},
address = {Reading, Massachusetts}
}
@misc{Paris1989,
author = {Paris, M},
year = {1989},
title = {Personal Communication}
}
@techreport{Parker1985,
author = {Parker, D. B},
year = {1985},
title = {Learning-logic},
institution = {Massachusetts Institute of Technology, Center for Computational Research in Economics and Management Science},
address = {Cambridge, Massachusetts},
number = {TR-47}
}
@inproceedings{Rabelo1989,
author = {Rabelo, L. C. and Alptekin, S.},
year = {1989},
title = {Using hybrid neural networks and expert systems},
booktitle = {Proceedings of the IEEE/INNS International Joint Conference on Neural Networks (IJCNN-89) Vol. II}
}
@inproceedings{Reddy1973,
author = {Reddy, D. R. and Erman, L. D. and Fennell, R. D. and Neely, R. B.},
year = {1973},
title = {The Hearsay speech understanding system: an example of the recognition process},
booktitle = {Proceedings of the International Conference on Artificial Intelligence},
pages = {185-194}
}
@incollection{Rumelhart1986,
author = {Rumelhart, D. and Hinton, G. and Williams, R.},
year = {1986},
title = {Learning internal representations by back propagation},
booktitle = {Parallel Distributed Processing},
editor = {D. Rumelhart and J. McClelland and the PDP Research Group},
publisher = {MIT Press},
address = {Cambridge, Massachusetts},
volume = {1},
pages = {365-422}
}
@article{Shannon1948,
author = {Shannon, C},
year = {1948},
title = {A mathematical theory of communication},
journal = {Bell System Technical Journal},
volume = {27},
pages = {379-423}
}
@unpublished{Simpson1988,
author = {Simpson, P},
year = {1988},
title = {A review of artificial neural systems, Parts 1 and 2},
note = {Submitted to CRC Critical Reviews in Artificial Intelligence.}
}
@inproceedings{Sontag1989,
author = {Sontag, E. D. and Sussman, H. J},
year = {1989},
title = {Back-propagation separates when perceptrons do},
booktitle = {Proceedings of the IEEE/INNS International Joint Conference on Neural Networks (IJCNN-89) Vol. I},
pages = {639-642}
}
@inproceedings{Szu1989,
author = {Szu, H},
year = {1989},
title = {Reconfigurable neural nets by energy convergence learning principle based on extended McCulloch-Pitts neurons and synapses},
booktitle = {Proceedings of the IEEE/INNS International Joint Conference on Neural Networks (IJCNN-89) Vol. I},
pages = {485-496}
}
@inproceedings{Tsutsumi1989,
author = {Tsutsumi, K},
year = {1989},
title = {A multi-layered neural network composed of backprop. and Hopfield nets and internal space representation},
booktitle = {Proceedings of the IEEE/INNS International Joint Conference on Neural Networks (IJCNN-89) Vol. I},
pages = {507-512}
}
@phdthesis{Werbos1974,
author = {Werbos, P. J},
year = {1974},
title = {Beyond regression: new tools for prediction and analysis in the behavioral sciences},
school = {Harvard University},
note = {Unpublished Ph.D. dissertation}
}
@inproceedings{Widrow1987,
author = {Widrow, B},
year = {1987},
title = {ADALINE and MADALINE - 1963},
booktitle = {Proceedings of the IEEE International Conference on Neural Networks (ICNN-87) Vol. I},
pages = {143-158}
}
@article{Widrow1961,
author = {Widrow, B. and Pierce, W. H. and Angell, J. B},
year = {1961},
title = {Birth, life, and death in microelectronic systems},
journal = {IRE Transactions on Military Electronics},
volume = {4},
pages = {191-201}
}
@article{Widrow1988,
author = {Widrow, B. and Winter, R},
year = {1988},
title = {Neural nets for adaptive filtering and adaptive pattern recognition},
journal = {IEEE Computer},
volume = {21},
number = {3},
pages = {25-39}
}

File diff suppressed because it is too large

BIN
latex/thesis_proposal.pdf Normal file

Binary file not shown.

99
latex/thesis_proposal.tex Normal file
View File

@ -0,0 +1,99 @@
\documentclass[12pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{geometry}
\geometry{margin=1in}
\title{Thesis Proposal}
\author{Converted from legacy plain-text source}
\date{}
\begin{document}
\maketitle
\begin{flushleft}
% This is a conservative automated conversion from the legacy text file.
% Manual cleanup will still be necessary for figures, references, footnotes, and formatting.
Revision date: Jun 07, 1988\\
Thesis proposal\\
Wesley R. Elsberry, Master's candidate, CSE\\[0.5em]
Committee:\\
Karan Briggs, CSE (Graduate Chairman)\\
Daniel Levine, Mathematics\\
Lynn Peterson, CSE

Preliminary outline
\section*{I. Introduction}
\section*{II. Literature review}
\section*{III. Topic proposal}
a. Topic description\\
b. Topic verification (implementation)\\
\quad i. Application proposal\\
\quad ii. Description\\
\quad iii. Resources needed for accomplishment
************************************
\section*{I. Introduction}
The field of artificial neural network research currently suffers from several misapprehensions on the part of researchers. First, communication continues to be sketchy and prone to misunderstanding, as no clear-cut definitions have been attached to even the most commonly accepted terms and phrases that comprise ANN jargon. Researchers will ignore the interdisciplinary nature of ANN research to promote or denigrate ANN results in a specialized context. Often this is done in such a way that it is not clear that the comments or analysis are only valid in the specialized context. Second, the motivations for research vary wildly, and thus criticisms of models or data often are initiated on the basis of entirely different goal assumptions. Finally, much criticism and infighting occurs not because of any real research-related causes, but because of politicking and the quest for personal power or recognition. While Kuhn [History of Scientific Revolutions] may revel in the unfolding byplay, it is a source of annoyance and an obstacle to good work for others engaged in this research. While these misapprehensions may not be conscious in nature, that does not lessen the negative impact of the misapprehensions.
One misapprehension which remains particularly pervasive is the idea that there exists one 'correct' model for artificial neural networks. The biological reality reflects a complex set of systems which accomplish diverse functions. No one has suggested that all biological neural systems operate in the same manner. Other, more easily apprehensible, biological systems reflect that variation arises both in structures and mechanisms that perform functional tasks. Spiders, insects, fish, birds, and mammals have all developed methods of flight, yet none are quite the same. Other examples can demonstrate that the same mechanism may be co-opted for more than one purpose. Certainly the expectation should be that biological neural systems follow this pattern, yet the prevailing attitude in current ANN research denies this.
Different models reflect variation in an approach to a single function, or simply approaches to different functions. Comparisons which should account for this feature often do not.
Since various models will have features which make them preferable for classes of problems, problems which can be divided into subset problems may be best solved through integration and coordination of differing ANN models. This approach is expected to prove more tractable and productive than attempting to force a solution model to fit a specified problem complex (or changing the problem specification to fit the model).
\section*{II. Literature review}
Problem solving as McCulloch and Pitts envisioned it [from Levine 83]
As Rosenblatt redefined it [from Levine 83 and Rosenblatt ??]
What Hopfield says about Grossberg [this will be short] [from H-T 86]
What Rumelhart and McClelland say about Hopfield [from PDP]
What Rumelhart and McClelland say about Grossberg [from PDP]
What Grossberg says about everybody else [stated as briefly as possible] [from Applied Optics article, 87 Cognitive Science article]
Evidences for multi-model integration:
PDP Ch. 26, p 541: "A problem with the PDP models presented in this book is that they are too specialized, so concerned with solving the problem of the moment that they do not ask how the whole might fit together. The various chapters present us with different versions of a single, homogeneous structure, perfectly well-suited for doing its task, but not sufficient, in my opinion, at doing the whole task. One structure can't do the job: There have to be several parts to the system that do different things, sometimes communicating with each other, sometimes not."
Of course, McClelland here means to have several variants of the PDP model performing the functions, and is not per se referring to a multi-model approach. But the admission that a single instantiation of a model does not a solution make is very important.
\section*{III. Topic proposal}
a. Topic description\\
Use the models of Hopfield, PDP, and Grossberg's ART in an integrated manner to solve a problem set that is a complex suite of problem classes. The purpose here is not to develop a general tool for such problems, but to demonstrate the desirability and applicability of using an integrative approach to ANN problem solving.
b. Topic verification (implementation)

i. Application proposal\\
Possible project 1: Cryptographic example. Small problem that involves transposition, pattern recognition, and feature detection and extraction. Models used as pre- and co-processors for problem-solving.

ii. Description\\
The data set generated for presentation to the solution system may have complex interdependencies which the ANN would have to extract.
iii. Resources needed for accomplishment
Computer: Available currently:\\
Heathkit H-100, MS-DOS, 768K\\
Heathkit H-158, MS-DOS (PC comp), 640K\\
DEC PDP 11/23, RT-11, 256K

Languages: Available currently:\\
Under MS-DOS: Turbo Pascal, XLISP, PD-Prolog, Turbo C, ECO-C88, ICON, MS-FORTRAN, MASM\\
Under RT-11: MACRO-11 (assembler), DIBOL
\end{flushleft}
\end{document}

17
pyproject.toml Normal file
View File

@ -0,0 +1,17 @@
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "triune-cadence"
version = "0.1.0"
description = "TriuneCadence: a modular neural music composition system inspired by a 1989 thesis"
requires-python = ">=3.10"

[project.scripts]
triune-cadence = "composer_ans.cli:main"
composer-ans = "composer_ans.cli:main"

[tool.pytest.ini_options]
testpaths = ["tests"]
pythonpath = ["."]

9
tests/conftest.py Normal file
View File

@ -0,0 +1,9 @@
from __future__ import annotations

import sys
from pathlib import Path

ROOT = Path(__file__).resolve().parents[1]
if str(ROOT) not in sys.path:
    sys.path.insert(0, str(ROOT))

16
tests/test_analysis.py Normal file
View File

@ -0,0 +1,16 @@
from composer_ans.analysis import analyze_composition


def test_entropy_distinguishes_constant_and_varied_sequences() -> None:
    constant = analyze_composition((1, 1, 1, 1))
    varied = analyze_composition((1, 2, 3, 4))
    assert constant.unigram_entropy_bits == 0.0
    assert varied.unigram_entropy_bits > constant.unigram_entropy_bits


def test_analysis_reports_predictability_for_repeating_pattern() -> None:
    repeating = analyze_composition((1, 2, 1, 2, 1, 2, 1, 2))
    assert repeating.conditional_entropy_bits < repeating.unigram_entropy_bits
    assert 0.0 <= repeating.predictability <= 1.0
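The `composer_ans.analysis` implementation itself is not shown in this commit view; the unigram measure these assertions rely on is plain Shannon entropy over the empirical note distribution. A minimal sketch (the function name and signature here are illustrative, not the module's actual API):

```python
import math
from collections import Counter


def unigram_entropy_bits(notes):
    """Shannon entropy of the empirical note distribution, in bits."""
    counts = Counter(notes)
    total = len(notes)
    # H = -sum(p * log2(p)); a constant sequence has one symbol with p = 1, so H = 0
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

Under this definition `unigram_entropy_bits((1, 1, 1, 1))` is 0.0 and `unigram_entropy_bits((1, 2, 3, 4))` is 2.0, consistent with the first test above.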

23
tests/test_art1.py Normal file
View File

@ -0,0 +1,23 @@
from composer_ans.art1 import ART1Network, ART1Params


def test_generic_art1_commits_first_category() -> None:
    network = ART1Network(ART1Params(max_categories=3, input_length=4, vigilance=0.9))
    result = network.categorize((1, 0, 1, 0))
    assert result.winner == 0
    assert result.new_category is True
    assert result.committed_categories == 1
    assert result.expected_vector == (1, 0, 1, 0)


def test_generic_art1_reuses_matching_category() -> None:
    network = ART1Network(ART1Params(max_categories=3, input_length=4, vigilance=0.9))
    network.categorize((1, 0, 1, 0))
    result = network.categorize((1, 0, 1, 0))
    assert result.winner == 0
    assert result.new_category is False
    assert result.committed_categories == 1
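The `ART1Network` internals are not part of this hunk, but the behavior these two tests pin down (the first input commits category 0; an identical second input resonates with it instead of committing a new one) follows from the standard ART1 vigilance criterion, sketched here for binary inputs (function name illustrative):

```python
def art1_resonates(input_vec, category_weights, vigilance):
    """Standard ART1 match test: |I AND w| / |I| >= vigilance.

    A freshly committed category stores the input that created it, so
    presenting the same vector again yields a match ratio of 1.0 and the
    existing category is reused rather than a new one being committed.
    """
    overlap = sum(i & w for i, w in zip(input_vec, category_weights))
    norm = sum(input_vec)
    return norm > 0 and overlap / norm >= vigilance
```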

49
tests/test_backprop.py Normal file
View File

@ -0,0 +1,49 @@
from pathlib import Path

from composer_ans.backprop import BackpropNetwork
from composer_ans.io.legacy_files import load_salieri_config, load_salieri_weights
from composer_ans.salieri import SalieriCritic

THES = Path(__file__).resolve().parents[1] / "THES"


def test_generic_backprop_predict_and_train_step() -> None:
    network = BackpropNetwork.random(
        n_input=2,
        n_hidden=2,
        n_output=1,
        learning_rate=0.5,
        alpha=0.1,
    )
    predicted = network.predict((0.0, 1.0))
    trained = network.train_step((0.0, 1.0), (1.0,))
    assert len(predicted.outputs) == 1
    assert len(trained.outputs) == 1
    assert 0.0 <= trained.outputs[0] <= 1.0
    assert trained.error >= 0.0
    assert any(state.delta != 0.0 for state in trained.node_states if state.node_type != "input")


def test_backprop_loads_legacy_salieri_network() -> None:
    config = load_salieri_config(THES / "S61.DAT")
    weights = load_salieri_weights(THES / "S61.WT")
    network = BackpropNetwork.from_legacy(config=config, legacy_weights=weights)
    result = network.predict(tuple(0.0 for _ in range(config.n_input)))
    assert network.node_count == 61
    assert len(result.outputs) == 1
    assert 0.0 <= result.outputs[0] <= 1.0


def test_salieri_wrapper_runs_on_thesis_sequence_window() -> None:
    critic = SalieriCritic.from_legacy_paths(THES)
    result = critic.evaluate_and_train((1, 4, 5, 1, 0))
    assert result.target in (0, 1)
    assert 0.0 <= result.raw_output <= 1.0
    assert isinstance(result.is_classical, bool)

12
tests/test_beethoven.py Normal file
View File

@ -0,0 +1,12 @@
from composer_ans.beethoven import BeethovenCategorizer


def test_beethoven_wrapper_encodes_notes_and_classicality() -> None:
    beethoven = BeethovenCategorizer()
    first = beethoven.categorize((1, 4, 5, 1, 0), is_classical=True)
    second = beethoven.categorize((1, 4, 5, 1, 0), is_classical=True)
    assert first.art_result.committed_categories >= 1
    assert second.art_result.winner == first.art_result.winner
    assert second.art_result.new_category is False

View File

@ -0,0 +1,21 @@
from pathlib import Path

from composer_ans.classical_rules import ClassicalInstructor

THES = Path(__file__).resolve().parents[1] / "THES"


def test_classical_instructor_matches_legacy_suffix_behavior() -> None:
    instructor = ClassicalInstructor.from_sequence_file(THES / "SEQUENCE.DAT")
    assert instructor([0, 0, 1, 5, 4]) == 1
    assert instructor([0, 1, 4, 5, 1]) == 1
    assert instructor([1, 2, 3, 4, 5]) == 0


def test_classical_instructor_loads_all_sequences() -> None:
    instructor = ClassicalInstructor.from_sequence_file(THES / "SEQUENCE.DAT")
    assert len(instructor.sequences) == 14
    assert instructor.sequences[:3] == ("154", "145", "15")

46
tests/test_cli.py Normal file
View File

@ -0,0 +1,46 @@
from pathlib import Path
import subprocess
import sys

ROOT = Path(__file__).resolve().parents[1]


def test_cli_runs_and_reports_metrics(tmp_path: Path) -> None:
    result = subprocess.run(
        [
            sys.executable,
            "-m",
            "composer_ans",
            "--thes-root",
            str(ROOT / "THES"),
            "--notes",
            "4",
            "--object-threshold",
            "2",
            "--max-attempts-per-note",
            "600",
            "--art-vigilance",
            "0.85",
            "--art-vigilance-decay",
            "0.98",
            "--save-salieri",
            str(tmp_path / "salieri.json"),
            "--save-beethoven",
            str(tmp_path / "beethoven.json"),
            "--save-report",
            str(tmp_path / "report.json"),
        ],
        cwd=ROOT,
        text=True,
        capture_output=True,
        check=True,
    )
    assert "notes:" in result.stdout
    assert "per_note_seconds:" in result.stdout
    assert "total_seconds:" in result.stdout
    assert "unigram_entropy_bits:" in result.stdout
    assert (tmp_path / "salieri.json").exists()
    assert (tmp_path / "beethoven.json").exists()
    assert (tmp_path / "report.json").exists()

21
tests/test_encoding.py Normal file
View File

@ -0,0 +1,21 @@
from composer_ans.encoding import encode_art_input, encode_note_sequence, encode_sequence_one_hot


def test_encode_note_sequence_validates_shape_and_range() -> None:
    assert encode_note_sequence([1, 2, 3, 4, 5]) == (1, 2, 3, 4, 5)


def test_encode_sequence_one_hot_matches_pascal_layout() -> None:
    vector = encode_sequence_one_hot([1, 0, 8, 2, 0])
    assert len(vector) == 40
    assert vector[:8] == (1, 0, 0, 0, 0, 0, 0, 0)
    assert vector[8:16] == (0, 0, 0, 0, 0, 0, 0, 0)
    assert vector[16:24] == (0, 0, 0, 0, 0, 0, 0, 1)
    assert vector[24:32] == (0, 1, 0, 0, 0, 0, 0, 0)
    assert vector[32:40] == (0, 0, 0, 0, 0, 0, 0, 0)


def test_encode_art_input_appends_classicality_bit() -> None:
    vector = encode_art_input([1, 2, 3, 4, 5], is_classical=True)
    assert len(vector) == 41
    assert vector[-1] == 1
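The encoder under test is defined elsewhere in the commit; a sketch consistent with the layout asserted above (position-major 5 x 8 flattening, note 0 encoded as an all-zero rest block — the real `composer_ans.encoding` implementation may differ in details):

```python
def encode_sequence_one_hot(notes):
    """Flatten a 5-note window into 40 bits: one 8-slot block per position."""
    vector = []
    for note in notes:
        block = [0] * 8
        if note:  # notes are 1..8; 0 means rest and leaves the block empty
            block[note - 1] = 1
        vector.extend(block)
    return tuple(vector)
```

For example, `encode_sequence_one_hot([1, 0, 8, 2, 0])` sets bits 0, 23, and 25, matching the slices the test checks.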

28
tests/test_experiments.py Normal file
View File

@ -0,0 +1,28 @@
import csv
from pathlib import Path

from composer_ans.experiments import run_parameter_sweep

THES = Path(__file__).resolve().parents[1] / "THES"


def test_run_parameter_sweep_writes_reports_and_summary(tmp_path: Path) -> None:
    reports = run_parameter_sweep(
        thes_root=THES,
        output_dir=tmp_path,
        notes=2,
        parameter_sets=[
            {"object_threshold": 2, "art_vigilance": 0.85},
            {"object_threshold": 3, "art_vigilance": 0.9},
        ],
    )
    assert len(reports) == 2
    assert (tmp_path / "run_001.json").exists()
    assert (tmp_path / "run_002.json").exists()
    assert (tmp_path / "summary.csv").exists()
    with (tmp_path / "summary.csv").open("r", encoding="utf-8", newline="") as handle:
        rows = list(csv.DictReader(handle))
    assert len(rows) == 2

59
tests/test_hopfield.py Normal file
View File

@ -0,0 +1,59 @@
from pathlib import Path

from composer_ans.hopfield import HopfieldParams, generate_next_note, run_hopfield_network
from composer_ans.io.legacy_files import extract_active_hopfield_submatrix, load_hopfield_weight_matrix

THES = Path(__file__).resolve().parents[1] / "THES"


def deterministic_noise(mean: float, variance: float) -> float:
    return mean


def test_hopfield_zero_matrix_converges_to_first_note_for_blank_column() -> None:
    zero_matrix = tuple(tuple(0.0 for _ in range(40)) for _ in range(40))
    result = generate_next_note(
        [1, 2, 3, 4, 0],
        zero_matrix,
        noise=deterministic_noise,
    )
    assert result.output_notes[:4] == (1, 2, 3, 4)
    assert result.candidate_note == 1
    assert result.iterations > 0


def test_generic_hopfield_core_runs_on_arbitrary_grid_shape() -> None:
    inputs = (
        (0.8, 0.2),
        (0.1, 0.9),
        (0.4, 0.3),
    )
    size = len(inputs) * len(inputs[0])
    weights = tuple(tuple(0.0 for _ in range(size)) for _ in range(size))
    result = run_hopfield_network(inputs, weights)
    assert result.iterations > 0
    assert len(result.state.outputs) == 3
    assert len(result.state.outputs[0]) == 2


def test_hopfield_legacy_matrix_runs_with_deterministic_noise() -> None:
    matrix = extract_active_hopfield_submatrix(load_hopfield_weight_matrix(THES / "HTN.DAT"))
    result = generate_next_note(
        [1, 4, 5, 1, 0],
        matrix,
        params=HopfieldParams(),
        noise=deterministic_noise,
    )
    assert len(result.output_notes) == 5
    assert all(1 <= note <= 8 for note in result.output_notes)
    assert 1 <= result.candidate_note <= 8
    assert result.iterations > 0
    assert len(result.outputs) == 8
    assert len(result.outputs[0]) == 5

View File

@ -0,0 +1,51 @@
from pathlib import Path

from composer_ans.io.legacy_files import (
    extract_active_hopfield_submatrix,
    load_hopfield_weight_matrix,
    load_salieri_config,
    load_salieri_weights,
    load_sequence_table,
)

THES = Path(__file__).resolve().parents[1] / "THES"


def test_load_sequence_table() -> None:
    sequences = load_sequence_table(THES / "SEQUENCE.DAT")
    assert len(sequences) == 14
    assert sequences[-2:] == ("251", "258")


def test_load_salieri_config() -> None:
    config = load_salieri_config(THES / "S61.DAT")
    assert config.learning_rate == 0.5
    assert config.alpha == 0.5
    assert config.n_input == 40
    assert config.n_hidden == 20
    assert config.n_output == 1
    assert config.training_iterations == 1
    assert config.error_tolerance == 0.1
    assert config.data_file == "s61.dat"
    assert config.weight_file == "s61.wt"


def test_load_salieri_weights() -> None:
    weights = load_salieri_weights(THES / "S61.WT")
    assert weights.vector_length == 61
    assert len(weights.weights) == 61
    assert len(weights.weights[0]) == 61
    assert len(weights.thetas) == 61
    assert weights.weights[0][:5] == (0.0, 0.0, 0.0, 0.0, 0.0)


def test_load_hopfield_weight_matrix() -> None:
    matrix = load_hopfield_weight_matrix(THES / "HTN.DAT")
    active = extract_active_hopfield_submatrix(matrix)
    assert len(matrix) == 64
    assert len(matrix[0]) == 64
    assert len(active) == 40
    assert len(active[0]) == 40
    assert matrix[0][0] == 0.0
    assert abs(matrix[0][1] - (-0.35199999809265137)) < 1e-7

View File

@ -0,0 +1,75 @@
from __future__ import annotations

import importlib.util
from pathlib import Path

ROOT = Path(__file__).resolve().parents[1]


def _load_module(name: str, relative_path: str):
    module_path = ROOT / relative_path
    spec = importlib.util.spec_from_file_location(name, module_path)
    module = importlib.util.module_from_spec(spec)
    assert spec.loader is not None
    spec.loader.exec_module(module)
    return module


def test_thesis_conversion_separates_appendix_titles(tmp_path) -> None:
    converter = _load_module("convert_thes_to_latex", "tools/convert_thes_to_latex.py")
    source = ROOT / "THES" / "INT_ANN.TXT"
    destination = tmp_path / "thesis.tex"
    converter.convert_thesis_file(
        source,
        destination,
        "Integration and Hybridization in Neural Network Modelling",
    )
    tex = destination.read_text(encoding="utf-8")
    assert (
        r"\chapter{Data File Listing: Classical Sequences Data File}" in tex
    )
    assert (
        r"\chapter{Data File Listing: Back-Propagation Network Data File}" in tex
    )
    assert "Classical Sequences Data File Appendix 16" not in tex


def test_bibtex_export_preserves_year_suffix_and_extracts_proceedings_fields() -> None:
    extractor = _load_module("extract_thesis_bibtex", "tools/extract_thesis_bibtex.py")
    entry = (
        "Carpenter, G., and Grossberg, S. 1987a. "
        "A massively parallel architecture for a self-organizing neural pattern recognition machine. "
        "Computer Vision, Graphics, and Image Processing 37, 54-115."
    )
    bibtex = extractor.to_bibtex(entry, 1)
    assert "@article{Carpenter1987a" in bibtex
    assert "year = {1987a}" in bibtex
    assert "journal = {Computer Vision, Graphics, and Image Processing}" in bibtex
    assert "volume = {37}" in bibtex
    assert "pages = {54-115}" in bibtex


def test_bibtex_export_applies_manual_normalization_overrides() -> None:
    extractor = _load_module("extract_thesis_bibtex", "tools/extract_thesis_bibtex.py")
    hopfield_entry = (
        "Hopfield, J. J. 1982. Neural networks and physical systems with emergent "
        "collective computational abilities. Proceedings of the National Academy "
        "of Sciences 79, 2554-2558."
    )
    hopfield_bibtex = extractor.to_bibtex(hopfield_entry, 1)
    assert "@article{Hopfield1982" in hopfield_bibtex
    assert "journal = {Proceedings of the National Academy of Sciences}" in hopfield_bibtex
    assert "volume = {79}" in hopfield_bibtex
    leven_entry = (
        "Leven, S. 1987b. S.A.M.: a triune extension to the ART model. "
        "'North Texas State University Symposium on Neural Networks' Poster Presentation."
    )
    leven_bibtex = extractor.to_bibtex(leven_entry, 2)
    assert "@inproceedings{Leven1987b" in leven_bibtex
    assert "title = {S.A.M.: a triune extension to the ART model}" in leven_bibtex

40
tests/test_pipeline.py Normal file
View File

@ -0,0 +1,40 @@
from pathlib import Path

from composer_ans.pipeline import CompositionPipeline
from composer_ans.types import CompositionContext

THES = Path(__file__).resolve().parents[1] / "THES"


def test_pipeline_single_neural_step_returns_structured_result() -> None:
    pipeline = CompositionPipeline.from_legacy_data(THES)
    step = pipeline.neural_step(CompositionContext())
    assert len(step.context.notes) == 5
    assert 0 <= step.context.candidate_note <= 8
    assert isinstance(step.accepted, bool)
    assert isinstance(step.objects, bool)
    assert step.elapsed_seconds >= 0.0


def test_pipeline_step_until_accepted_accepts_note() -> None:
    pipeline = CompositionPipeline.from_legacy_data(THES)
    step = pipeline.step_until_accepted(CompositionContext(), max_attempts=500)
    assert step.accepted is True
    assert 1 <= step.context.notes[-1] <= 8
    assert step.context.note_count == 1


def test_pipeline_compose_returns_requested_number_of_notes() -> None:
    pipeline = CompositionPipeline.from_legacy_data(THES)
    record = pipeline.compose(max_notes=3)
    assert len(record.notes) == 3
    assert len(record.per_note_seconds) == 3
    assert record.total_seconds >= 0.0
    assert all(1 <= note <= 8 for note in record.notes)

33
tests/test_reporting.py Normal file
View File

@ -0,0 +1,33 @@
import json
from pathlib import Path

from composer_ans.reporting import build_run_report, save_run_report_json
from composer_ans.types import CompositionRecord


def test_build_run_report_combines_timing_and_analysis() -> None:
    record = CompositionRecord(
        notes=(1, 2, 1, 2),
        per_note_seconds=(0.1, 0.2, 0.3, 0.4),
        total_seconds=1.0,
    )
    report = build_run_report(record, parameters={"object_threshold": 3})
    assert report.note_count == 4
    assert report.total_seconds == 1.0
    assert len(report.per_note_seconds) == 4
    assert report.parameters["object_threshold"] == 3
    assert 0.0 <= report.predictability <= 1.0


def test_save_run_report_json_writes_expected_fields(tmp_path: Path) -> None:
    record = CompositionRecord(notes=(1, 2, 3), per_note_seconds=(0.1, 0.2, 0.3), total_seconds=0.6)
    report = build_run_report(record)
    path = tmp_path / "report.json"
    save_run_report_json(report, str(path))
    data = json.loads(path.read_text(encoding="utf-8"))
    assert data["notes"] == [1, 2, 3]
    assert data["total_seconds"] == 0.6

16
tests/test_salieri.py Normal file
View File

@ -0,0 +1,16 @@
from pathlib import Path

from composer_ans.salieri import SalieriCritic

THES = Path(__file__).resolve().parents[1] / "THES"


def test_salieri_uses_classical_instructor_when_target_omitted() -> None:
    critic = SalieriCritic.from_legacy_paths(THES)
    positive = critic.evaluate_and_train((0, 0, 1, 5, 4))
    negative = critic.evaluate_and_train((1, 2, 3, 4, 5))
    assert positive.target == 1
    assert negative.target == 0

View File

@ -0,0 +1,30 @@
from pathlib import Path

from composer_ans.beethoven import BeethovenCategorizer
from composer_ans.salieri import SalieriCritic

THES = Path(__file__).resolve().parents[1] / "THES"


def test_salieri_round_trip_json(tmp_path: Path) -> None:
    critic = SalieriCritic.from_legacy_paths(THES)
    critic.evaluate_and_train((1, 4, 5, 1, 0))
    path = tmp_path / "salieri.json"
    critic.save_json(str(path))
    restored = SalieriCritic.load_json(str(path))
    result = restored.evaluate_and_train((1, 4, 5, 1, 0))
    assert 0.0 <= result.raw_output <= 1.0


def test_beethoven_round_trip_json(tmp_path: Path) -> None:
    beethoven = BeethovenCategorizer()
    beethoven.categorize((1, 4, 5, 1, 0), is_classical=True)
    path = tmp_path / "beethoven.json"
    beethoven.save_json(str(path))
    restored = BeethovenCategorizer.load_json(str(path))
    result = restored.categorize((1, 4, 5, 1, 0), is_classical=True)
    assert result.art_result.committed_categories >= 1

View File

@ -0,0 +1,356 @@
from __future__ import annotations
from pathlib import Path
import re
ROOT = Path(__file__).resolve().parents[1]
THES = ROOT / "THES"
OUT = ROOT / "latex"
FILES = {
"INT_ANN.TXT": {
"target": "integration_and_hybridization_in_neural_network_modelling",
"title": "Integration and Hybridization in Neural Network Modelling",
"mode": "thesis",
},
"COMPCOOP.TXT": {
"target": "competing_network_models_and_problem_solving",
"title": "Competing Network Models and Problem-Solving",
"mode": "generic",
},
"THPROPOS.TXT": {
"target": "thesis_proposal",
"title": "Thesis Proposal",
"mode": "generic",
},
}
FRONT_MATTER_HEADINGS = {
"ACKNOWLEDGEMENTS",
"ABSTRACT",
"TABLE OF CONTENTS",
"LIST OF ILLUSTRATIONS",
"LIST OF TABLES",
}
def escape_latex(text: str) -> str:
replacements = {
"\\": r"\textbackslash{}",
"&": r"\&",
"%": r"\%",
"$": r"\$",
"#": r"\#",
"_": r"\_",
"{": r"\{",
"}": r"\}",
"~": r"\textasciitilde{}",
"^": r"\textasciicircum{}",
}
for src, dst in replacements.items():
text = text.replace(src, dst)
return text
def clean_line(line: str) -> str:
line = re.sub(r"[\x00-\x08\x0b-\x1f]", "", line)
line = line.replace("\x1a", "").replace("\x17", "").replace("\ufeff", "")
line = line.replace("<EFBFBD>", "'").replace("®", "'").replace("™", "'")
line = re.sub(r"\s+", " ", line)
return line.rstrip()
def looks_like_heading(line: str) -> bool:
stripped = line.strip()
if not stripped:
return False
if len(stripped) > 80:
return False
if stripped.startswith("."):
return False
if stripped in FRONT_MATTER_HEADINGS:
return True
if re.fullmatch(r"[ivxlcdmIVXLCDM]+", stripped):
return False
if re.fullmatch(r"\d+", stripped):
return False
if re.fullmatch(r"[A-Z][A-Z\s\-]{2,}", stripped) and len(stripped.split()) <= 10:
return True
if re.fullmatch(r"[IVX]+\.\s+.*", stripped):
return True
if re.fullmatch(r"\d+\.\s+.*", stripped):
return True
if stripped.isupper() and len(stripped.split()) <= 12 and len(stripped) >= 4:
return True
return False
def is_page_artifact(line: str) -> bool:
stripped = line.strip()
if not stripped:
return False
if re.fullmatch(r"[ivxlcdmIVXLCDM]+", stripped):
return True
if re.fullmatch(r"\d+", stripped):
return True
if stripped == "Publication No." or re.fullmatch(r"_+", stripped):
return True
return False
def heading_command(line: str) -> str:
stripped = line.strip()
if stripped in FRONT_MATTER_HEADINGS:
return "section*"
if re.fullmatch(r"[IVX]+\.\s+.*", stripped):
return "section*"
if re.fullmatch(r"\d+\.\s+.*", stripped):
return "section*"
return "section*"
def convert_text_file(source: Path, destination: Path, title: str) -> None:
lines = [clean_line(line) for line in source.read_text(encoding="latin-1").splitlines()]
body: list[str] = []
paragraph: list[str] = []
def flush_paragraph() -> None:
if paragraph:
body.append(escape_latex(" ".join(part.strip() for part in paragraph if part.strip())))
body.append("")
paragraph.clear()
for line in lines:
stripped = line.strip()
if not stripped:
flush_paragraph()
continue
if is_page_artifact(stripped):
flush_paragraph()
continue
if stripped.startswith("."):
flush_paragraph()
body.append(f"% raw formatter directive: {escape_latex(stripped)}")
continue
if looks_like_heading(stripped):
flush_paragraph()
body.append(rf"\{heading_command(stripped)}{{{escape_latex(stripped)}}}")
continue
paragraph.append(stripped)
flush_paragraph()
tex = [
r"\documentclass[12pt]{article}",
r"\usepackage[utf8]{inputenc}",
r"\usepackage[T1]{fontenc}",
r"\usepackage{geometry}",
r"\geometry{margin=1in}",
r"\title{" + escape_latex(title) + "}",
r"\author{Converted from legacy plain-text source}",
r"\date{}",
r"\begin{document}",
r"\maketitle",
r"\begin{flushleft}",
"% This is a conservative automated conversion from the legacy text file.",
"% Manual cleanup will still be necessary for figures, references, footnotes, and formatting.",
"",
*body,
r"\end{flushleft}",
r"\end{document}",
"",
]
destination.write_text("\n".join(tex), encoding="utf-8")
def _clean_lines(source: Path) -> list[str]:
return [clean_line(line) for line in source.read_text(encoding="latin-1").splitlines()]
def _find_line_index(lines: list[str], needle: str, start: int = 0) -> int:
for idx in range(start, len(lines)):
if lines[idx].strip() == needle:
return idx
raise ValueError(f"could not find line: {needle}")
def _collect_paragraphs(lines: list[str]) -> list[str]:
paragraphs: list[str] = []
current: list[str] = []
for raw in lines:
stripped = raw.strip()
if not stripped or is_page_artifact(stripped):
if current:
paragraphs.append(escape_latex(" ".join(current)))
current.clear()
continue
current.append(stripped)
if current:
paragraphs.append(escape_latex(" ".join(current)))
return paragraphs
def _collect_uppercase_heading_lines(lines: list[str], start_idx: int) -> tuple[list[str], int]:
heading_lines: list[str] = []
idx = start_idx
while idx < len(lines):
candidate = lines[idx].strip()
idx += 1
if not candidate or is_page_artifact(candidate):
continue
if re.fullmatch(r"(CHAPTER|APPENDIX)\s+\d+", candidate):
idx -= 1
break
if candidate.isupper() and len(candidate) <= 120:
heading_lines.append(candidate)
continue
idx -= 1
break
return heading_lines, idx
def convert_thesis_file(source: Path, destination: Path, title: str) -> None:
lines = _clean_lines(source)
ack_idx = _find_line_index(lines, "ACKNOWLEDGEMENTS")
abs_idx = _find_line_index(lines, "ABSTRACT", ack_idx + 1)
toc_idx = _find_line_index(lines, "TABLE OF CONTENTS", abs_idx + 1)
chap1_idx = _find_line_index(lines, "CHAPTER 1", abs_idx + 1)
bib_idx = _find_line_index(lines, "BIBLIOGRAPHY", chap1_idx + 1)
acknowledgements = _collect_paragraphs(lines[ack_idx + 1 : abs_idx])
abstract_lines = []
for line in lines[abs_idx + 1 : toc_idx]:
stripped = line.strip()
if not stripped or is_page_artifact(stripped):
abstract_lines.append("")
continue
if stripped in {
"INTEGRATION AND HYBRIDIZATION IN NEURAL NETWORK MODELLING",
"Wesley Royce Elsberry, M.S.",
"The University of Texas at Arlington, 1989",
"Supervising Professor: Karan Briggs",
}:
continue
if ". . ." in stripped:
continue
abstract_lines.append(stripped)
abstract = _collect_paragraphs(abstract_lines)
body = _convert_thesis_body(lines[chap1_idx:bib_idx])
tex = [
r"\documentclass[12pt]{report}",
r"\usepackage[utf8]{inputenc}",
r"\usepackage[T1]{fontenc}",
r"\usepackage{geometry}",
r"\geometry{margin=1in}",
r"\title{" + escape_latex(title) + "}",
r"\author{Wesley Royce Elsberry}",
r"\date{August 1989}",
r"\begin{document}",
r"\begin{titlepage}",
r"\centering",
r"{\Large " + escape_latex(title) + r"\par}",
r"\vspace{1.5cm}",
r"{\large Wesley Royce Elsberry\par}",
r"\vspace{1cm}",
r"Presented to the Faculty of the Graduate School of\par",
r"The University of Texas at Arlington in Partial Fulfillment\par",
r"of the Requirements for the Degree of\par",
r"\vspace{0.5cm}",
r"{\large Master of Science in Computer Science\par}",
r"\vfill",
r"The University of Texas at Arlington\par",
r"August 1989\par",
r"\end{titlepage}",
r"\chapter*{Acknowledgements}",
*[p + "\n" for p in acknowledgements],
r"\chapter*{Abstract}",
*[p + "\n" for p in abstract],
r"\tableofcontents",
*body,
r"\nocite{*}",
r"\bibliographystyle{plain}",
r"\bibliography{integration_and_hybridization_in_neural_network_modelling}",
r"\end{document}",
"",
]
destination.write_text("\n".join(tex), encoding="utf-8")
def _convert_thesis_body(lines: list[str]) -> list[str]:
body: list[str] = []
paragraph: list[str] = []
idx = 0
in_appendix = False
appendix_mode = "normal"
def flush_paragraph() -> None:
if paragraph:
body.append(escape_latex(" ".join(paragraph)))
body.append("")
paragraph.clear()
while idx < len(lines):
stripped = lines[idx].strip()
idx += 1
if not stripped or is_page_artifact(stripped):
flush_paragraph()
continue
        # Lines starting with "." are legacy text-formatter directives.
        if stripped.startswith("."):
flush_paragraph()
body.append(f"% raw formatter directive: {escape_latex(stripped)}")
continue
if re.fullmatch(r"CHAPTER\s+\d+", stripped):
flush_paragraph()
heading_lines, idx = _collect_uppercase_heading_lines(lines, idx)
title = " ".join(heading_lines) if heading_lines else stripped
body.append(rf"\chapter{{{escape_latex(title.title())}}}")
continue
if re.fullmatch(r"APPENDIX\s+\d+", stripped):
flush_paragraph()
if not in_appendix:
body.append(r"\appendix")
in_appendix = True
heading_lines, idx = _collect_uppercase_heading_lines(lines, idx)
title = " ".join(heading_lines) if heading_lines else stripped
body.append(rf"\chapter{{{escape_latex(title.title())}}}")
if (
"PROGRAM SOURCE LISTING" in title
or "DATA FILE LISTING" in title
):
appendix_mode = "listing"
body.append(
"This appendix is represented in the repository by the legacy source and data files in \\texttt{THES/}. "
"The automated thesis conversion suppresses the full listing here to keep the document manageable."
)
body.append("")
else:
appendix_mode = "normal"
continue
if in_appendix and appendix_mode == "listing":
continue
        # Short all-caps lines are treated as unnumbered section headings.
        if stripped.isupper() and len(stripped.split()) <= 10 and len(stripped) > 4:
flush_paragraph()
body.append(rf"\section*{{{escape_latex(stripped.title())}}}")
continue
paragraph.append(stripped)
flush_paragraph()
return body
def main() -> int:
OUT.mkdir(parents=True, exist_ok=True)
for source_name, config in FILES.items():
source = THES / source_name
destination = OUT / f"{config['target']}.tex"
if config["mode"] == "thesis":
convert_thesis_file(source, destination, config["title"])
else:
convert_text_file(source, destination, config["title"])
return 0
if __name__ == "__main__":
raise SystemExit(main())


@@ -0,0 +1,437 @@
from __future__ import annotations
from pathlib import Path
import re
ROOT = Path(__file__).resolve().parents[1]
THES = ROOT / "THES" / "INT_ANN.TXT"
OUT = ROOT / "latex" / "integration_and_hybridization_in_neural_network_modelling.bib"
MANUAL_OVERRIDES = {
"Farhat1986": {
"kind": "incollection",
"title": "Neural net models and optical computing: an overview",
"booktitle": "Hybrid and Optical Computing",
"editor": "Harold Szu",
"publisher": "SPIE",
"address": "Bellingham, Washington",
"volume": "634",
"pages": "277-306",
},
"Harmon1970": {
"kind": "incollection",
"title": "Neural subsystems: an interpretive summary",
"booktitle": "The Neurosciences Second Study Program",
"editor": "F. O. Schmitt",
"publisher": "Rockefeller University Press",
"address": "New York",
"pages": "486-494",
},
"HechtNielsen1986": {
"kind": "incollection",
"title": "Performance limits of optical, electro-optical, and electronic neurocomputers",
"booktitle": "Hybrid and Optical Computing",
"editor": "H. Szu",
"publisher": "SPIE",
"address": "Bellingham, Washington",
"volume": "634",
"pages": "277-306",
},
"Hopfield1982": {
"kind": "article",
"journal": "Proceedings of the National Academy of Sciences",
"volume": "79",
"pages": "2554-2558",
},
"Leven1987a": {
"kind": "phdthesis",
"title": "Choice and Neural process: A dissertation",
"school": "University of Texas at Arlington",
"note": "Chapter 5: Neural process and form -- mathematics and meaning.",
},
"Leven1987b": {
"kind": "inproceedings",
"title": "S.A.M.: a triune extension to the ART model",
"note": "Poster presentation at the North Texas State University Symposium on Neural Networks.",
},
"Levine1990": {
"kind": "unpublished",
"note": "To appear in Motivation, Emotion, and Goal Direction in Neural Networks, D. Levine and S. Leven, eds., Erlbaum, Hillsdale, New Jersey.",
},
"Lippmann1987": {
"kind": "article",
"journal": "IEEE ASSP Magazine",
"month": "apr",
"pages": "4-22",
},
"MacLean1970": {
"kind": "incollection",
"title": "The triune brain, emotion, and scientific bias",
"booktitle": "The Neurosciences Second Study Program",
"editor": "F. O. Schmitt",
"publisher": "Rockefeller University Press",
"address": "New York",
"pages": "486-494",
},
"Matsuoka1989": {
"author": "Matsuoka, T. and Hamada, H. and Nakatsu, R.",
},
"Neuroscience1988": {
"kind": "techreport",
"author": "{Metroplex Study Group on Computational Neuroscience}",
"institution": "North Texas Commission Regional Technology Program",
"note": "Report to the North Texas Commission Regional Technology Program.",
},
"Newell1976": {
"kind": "article",
"journal": "Communications of the ACM",
"volume": "19",
"number": "3",
"pages": "113-126",
},
"Nottebohm1989": {
"kind": "article",
"journal": "Scientific American",
"month": "feb",
"pages": "74-79",
},
"Pao1989": {
"kind": "book",
"publisher": "Addison-Wesley",
"address": "Reading, Massachusetts",
},
"Parker1985": {
"kind": "techreport",
"institution": "Massachusetts Institute of Technology, Center for Computational Research in Economics and Management Science",
"address": "Cambridge, Massachusetts",
"number": "TR-47",
"title": "Learning-logic",
},
"Rumelhart1986": {
"kind": "incollection",
"title": "Learning internal representations by back propagation",
"booktitle": "Parallel Distributed Processing",
"editor": "D. Rumelhart and J. McClelland and the PDP Research Group",
"publisher": "MIT Press",
"address": "Cambridge, Massachusetts",
"volume": "1",
"pages": "365-422",
},
"Simpson1988": {
"kind": "unpublished",
"note": "Submitted to CRC Critical Reviews in Artificial Intelligence.",
},
"Sontag1989": {
"kind": "inproceedings",
"title": "Back-propagation separates when perceptrons do",
"booktitle": "Proceedings of the IEEE/INNS International Joint Conference on Neural Networks (IJCNN-89) Vol. I",
"pages": "639-642",
},
"Tsutsumi1989": {
"kind": "inproceedings",
"title": "A multi-layered neural network composed of backprop. and Hopfield nets and internal space representation",
"booktitle": "Proceedings of the IEEE/INNS International Joint Conference on Neural Networks (IJCNN-89) Vol. I",
"pages": "507-512",
},
"Hewitt1985": {
"kind": "article",
"journal": "Byte",
"volume": "10",
"number": "4",
"pages": "223-242",
},
"Widrow1988": {
"kind": "article",
"journal": "IEEE Computer",
"volume": "21",
"number": "3",
"pages": "25-39",
},
"Charniak1985": {
"kind": "book",
"publisher": "Addison-Wesley",
"address": "Reading, Massachusetts",
"note": "701 pp.",
},
"Hebb1949": {
"kind": "book",
"publisher": "Wiley",
"address": "New York",
},
}
def clean_line(line: str) -> str:
line = re.sub(r"[\x00-\x1f]", "", line)
line = (
        line.replace("\x92", "'")  # cp1252 right single quote read through latin-1
.replace("®", "'")
.replace("÷", "-")
.replace("`", "'")
)
return re.sub(r"\s+", " ", line).strip()
def load_entries() -> list[str]:
lines = THES.read_text(encoding="latin-1").splitlines()
start = next(i for i, line in enumerate(lines) if line.strip() == "BIBLIOGRAPHY")
chunks: list[list[str]] = []
current: list[str] = []
for raw in lines[start + 1 :]:
line = clean_line(raw)
        # Blank lines and bare page numbers (arabic or roman) separate entries.
        if not line or re.fullmatch(r"[ivxlcdmIVXLCDM]+|\d+", line):
if current:
chunks.append(current)
current = []
continue
current.append(line)
if current:
chunks.append(current)
return [" ".join(chunk) for chunk in chunks]
def bib_key(entry: str, index: int) -> str:
match = re.match(r"([A-Za-z][A-Za-z\-\.\s,&']+?)\s+(\d{4}[a-z]?)\.", entry)
if match:
surname = re.sub(r"[^A-Za-z]", "", match.group(1).split(",")[0].split()[-1])
year = match.group(2)
return f"{surname}{year}"
return f"elsberryRef{index:03d}"
def entry_type(entry: str) -> str:
lowered = entry.lower()
if "dissertation" in lowered:
return "phdthesis"
if "personal communication" in lowered:
return "misc"
if "proceedings" in lowered or "conference" in lowered or "poster presentation" in lowered:
return "inproceedings"
if re.search(r"\b\d+\s*,\s*\d+\s*-\s*\d+\.?$", entry):
return "article"
if "press" in lowered or "books" in lowered or "wiley" in lowered or "addison-wesley" in lowered:
return "book"
if "journal" in lowered or "magazine" in lowered or "cybernetics" in lowered or "biosciences" in lowered:
return "article"
return "misc"
def split_author_year(entry: str) -> tuple[str, str, str]:
match = re.match(r"(.+?)\s+(\d{4}[a-z]?)\.\s+(.*)$", entry)
if not match:
return "Unknown", "0000", entry
return match.group(1).strip(), match.group(2), match.group(3).strip()
def split_title_note(rest: str) -> tuple[str, str]:
cues = (
"Proceedings",
"Proc.",
"In ",
"American ",
"Computer ",
"Applied ",
"Mathematical ",
"Studies ",
"International ",
"Rockefeller ",
"Bantam ",
"Wiley",
"SPIE",
"Biological ",
"Byte ",
"Communications ",
"Scientific ",
"Bell ",
"IEEE ",
"IRE ",
"Neural Networks ",
"Bull.",
"Addison-Wesley",
"Massachusetts Institute",
"Report ",
"Submitted ",
"To appear ",
"Unpublished ",
"University ",
"Poster ",
"'North",
)
for cue in cues:
pattern = rf"^(?P<title>.+?)\.\s+(?P<note>{re.escape(cue)}.*)$"
match = re.match(pattern, rest)
if match:
return match.group("title").strip(), match.group("note").strip()
    # Fallback: treat the first sentence as the title, the remainder as the note.
    title = rest.split(".")[0].strip()
    note = rest[len(title):].strip().lstrip(".").strip()
    return title, note
def normalize_author(author: str) -> str:
author = re.sub(r"\s+", " ", author.strip().rstrip("."))
author = author.replace("Foo,Y.", "Foo, Y.")
author = author.replace("Pao,Y.-H.", "Pao, Y.-H.")
author = author.replace("F. O. Scmitt", "F. O. Schmitt")
return author
def _field(name: str, value: str) -> str:
return f" {name} = {{{value}}}"
def _extract_inproceedings_fields(note: str) -> list[str]:
fields: list[str] = []
proceedings_match = re.search(
r"(Proceedings of .*?(?:\(.*?\))?(?:\s+Vol\.\s*[IVX0-9]+)?)"
r"(?:\.\s*|,\s*(?:pp\.|\d)|$)",
note,
flags=re.IGNORECASE,
)
if proceedings_match:
booktitle = proceedings_match.group(1).rstrip(" .,;")
fields.append(_field("booktitle", booktitle))
pages_match = re.search(r"(\d+\s*-\s*\d+)", note)
if pages_match:
fields.append(_field("pages", pages_match.group(1).replace(" ", "")))
return fields
def _extract_article_fields(note: str) -> list[str]:
    fields: list[str] = []
    # "Journal Name 12, 34-56." -> journal, volume, pages; \s* tolerates an
    # optional space before the comma, so one pattern covers both layouts.
    journal_match = re.match(r"(.+?)\s+(\d+)\s*,\s*(\d+\s*-\s*\d+)\.?\s*$", note)
    if journal_match:
        fields.append(_field("journal", journal_match.group(1).rstrip(" .,;")))
        fields.append(_field("volume", journal_match.group(2)))
        fields.append(_field("pages", journal_match.group(3).replace(" ", "")))
    return fields
def _extract_book_fields(note: str) -> list[str]:
fields: list[str] = []
publisher_match = re.match(r"([^,.]+(?:Press|Books|Wiley|Addison-Wesley|SPIE|University Press))[,\.]\s*(.*)$", note)
if publisher_match:
fields.append(_field("publisher", publisher_match.group(1).strip()))
if publisher_match.group(2).strip():
fields.append(_field("address", publisher_match.group(2).strip(" .")))
return fields
if note:
fields.append(_field("note", note))
return fields
def _extract_phdthesis_fields(note: str) -> list[str]:
fields: list[str] = []
school_match = re.search(r"(The University of .*?|.*?University.*?)\.", note)
if school_match:
fields.append(_field("school", school_match.group(1).strip()))
remainder = note.replace(school_match.group(0), "", 1).strip(" .")
if remainder:
fields.append(_field("note", remainder))
return fields
if note:
fields.append(_field("note", note))
return fields
def _extra_fields(kind: str, note: str) -> list[str]:
normalized = note.replace("In '", "In ").replace(",'", ",")
if not normalized:
return []
if kind == "inproceedings":
fields = _extract_inproceedings_fields(normalized)
if fields:
return fields
if kind == "article":
fields = _extract_article_fields(normalized)
if fields:
return fields
if kind == "book":
return _extract_book_fields(normalized)
if kind == "phdthesis":
return _extract_phdthesis_fields(normalized)
return [_field("note", normalized)]
def apply_override(
key: str,
kind: str,
author: str,
year: str,
title: str,
note: str,
fields: list[str],
) -> tuple[str, str, str, str, list[str]]:
override = MANUAL_OVERRIDES.get(key)
if not override:
return kind, author, year, title, fields
kind = override.get("kind", kind)
author = override.get("author", author)
title = override.get("title", title)
ordered_fields = [
_field("author", author),
_field("year", year),
_field("title", title),
]
for name in (
"journal",
"booktitle",
"editor",
"publisher",
"institution",
"school",
"address",
"volume",
"number",
"pages",
"month",
"note",
):
value = override.get(name)
if value:
ordered_fields.append(_field(name, value))
return kind, author, year, title, ordered_fields
def to_bibtex(entry: str, index: int) -> str:
key = bib_key(entry, index)
kind = entry_type(entry)
author, year, rest = split_author_year(entry)
author = normalize_author(author)
title, note = split_title_note(rest)
fields = [
_field("author", author),
_field("year", year),
_field("title", title),
]
fields.extend(_extra_fields(kind, note))
kind, author, year, title, fields = apply_override(
key, kind, author, year, title, note, fields
)
    # Doubled braces escape literals: "@{kind}{{{key}, ..." renders as "@kind{key, ...".
    return "@{kind}{{{key},\n{fields}\n}}".format(
kind=kind,
key=key,
fields=",\n".join(fields),
)
def main() -> int:
entries = load_entries()
OUT.parent.mkdir(parents=True, exist_ok=True)
OUT.write_text(
"\n\n".join(to_bibtex(entry, idx) for idx, entry in enumerate(entries, start=1)) + "\n",
encoding="utf-8",
)
return 0
if __name__ == "__main__":
raise SystemExit(main())