Initial pieces versions.

This commit is contained in:
Wesley R. Elsberry 2025-11-17 12:37:16 -05:00
parent ba3cadb4e0
commit e81e3a8879
33 changed files with 2197 additions and 0 deletions


@ -0,0 +1,30 @@
\subsection{Evaluation Workflow Pseudocode}
\begin{algorithm}[t]
\caption{OPT--Code Evaluation and Adjudication Pipeline}
\label{alg:opt-eval-pipeline}
\begin{algorithmic}[1]
\Require System description $S$
\State $C_A \gets$ ClassifierModel($S$, MinimalPrompt or MaximalPrompt)
\State $C_B \gets$ ClassifierModel($S$, MinimalPrompt or MaximalPrompt)
\State $E_A \gets$ EvaluatorModel($S, C_A$, EvaluatorPrompt)
\State $E_B \gets$ EvaluatorModel($S, C_B$, EvaluatorPrompt)
\If{($E_A.\text{verdict}$ and $E_B.\text{verdict}$) are acceptable}
\If{$C_A.\text{OPT} = C_B.\text{OPT}$}
\State \Return $C_A$ as final OPT--Code
\Else
\State $J \gets$ AdjudicatorModel($S, C_A, E_A, C_B, E_B$, AdjudicatorPrompt)
\State $C^\ast \gets J.\text{Final OPT--Code}$
\State $E^\ast \gets$ EvaluatorModel($S, C^\ast$, EvaluatorPrompt)
\If{$E^\ast.\text{verdict}$ acceptable}
\State \Return $C^\ast$ as final OPT--Code
\Else
\State Flag case for human review
\EndIf
\EndIf
\Else
\State Flag case for human review
\EndIf
\end{algorithmic}
\end{algorithm}
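For concreteness, the following Python sketch mirrors this pipeline. The
\texttt{classify}, \texttt{evaluate}, and \texttt{adjudicate} callables are
placeholders for the model invocations (their argument and return shapes are
assumptions made for illustration), and the acceptance rule follows the
\texttt{PASS}/\texttt{WEAK\_PASS} criterion with score $\geq 70$ used in
Appendix~\ref{app:evaluation-protocol}.
\begin{verbatim}
def acceptable(report):
    """Evaluation is acceptable: PASS or WEAK_PASS with score >= 70."""
    return report["verdict"] in ("PASS", "WEAK_PASS") and report["score"] >= 70

def run_pipeline(description, classify_a, classify_b, evaluate, adjudicate):
    """Mirror of the adjudication pipeline above; the callables wrap model calls."""
    cand_a = classify_a(description)        # -> {"opt": "...", "rationale": "..."}
    cand_b = classify_b(description)
    rep_a = evaluate(description, cand_a)   # -> {"verdict": "...", "score": int, ...}
    rep_b = evaluate(description, cand_b)

    if not (acceptable(rep_a) and acceptable(rep_b)):
        return {"status": "human_review", "candidates": [cand_a, cand_b]}
    if cand_a["opt"] == cand_b["opt"]:
        return {"status": "final", "opt": cand_a["opt"]}

    # Disagreement: adjudicate, then re-evaluate the consensus code.
    consensus = adjudicate(description, cand_a, rep_a, cand_b, rep_b)
    if acceptable(evaluate(description, consensus)):
        return {"status": "final", "opt": consensus["opt"]}
    return {"status": "human_review", "candidates": [cand_a, cand_b, consensus]}
\end{verbatim}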


@ -0,0 +1,115 @@
\subsection{Evaluation Protocol for OPT--Code Classification}
\label{app:evaluation-protocol}
The following protocol provides a reproducible and auditable procedure for
evaluating OPT--Code classifications generated by large language models. The
protocol aligns with reproducible computational research practices and is
designed to support reliable inter-model comparison, adjudication, and
longitudinal quality assurance.
\subsubsection{Inputs}
For each system under evaluation, the following inputs are provided:
\begin{enumerate}
\item \textbf{System description}: source code, algorithmic description, or
detailed project summary.
\item \textbf{Candidate OPT--Code}: produced by a model using the minimal or
maximal prompt (Section~\ref{sec:opt-prompts}).
\item \textbf{Candidate rationale}: a short explanation provided by the model
describing its classification.
\end{enumerate}
These inputs are then supplied to the OPT--Code Prompt Evaluator
(Appendix~\ref{app:prompt-evaluator}).
\subsubsection{Evaluation Pass}
The evaluator produces:
\begin{itemize}
\item \textbf{Verdict}: \texttt{PASS}, \texttt{WEAK\_PASS}, or \texttt{FAIL}.
\item \textbf{Score}: an integer from 0 to 100.
\item \textbf{Issue categories}: format, mechanism, parallelism/pipelines,
composition, and attribute plausibility.
\item \textbf{Summary}: a short free-text evaluation.
\end{itemize}
A classification is considered \emph{acceptable} if it is rated
\texttt{PASS} or \texttt{WEAK\_PASS} with a score $\geq 70$.
\subsubsection{Double-Annotation Procedure}
To reduce model-specific biases or hallucinations, each system description is
classified independently by two LLMs or by two runs of the same LLM with
different seeds:
\begin{enumerate}
\item Model A produces an OPT--Code and rationale.
\item Model B produces an OPT--Code and rationale.
\item Each is independently evaluated by the Prompt Evaluator.
\end{enumerate}
Inter-model agreement is quantified using one or more of the following metrics:
\begin{itemize}
\item \textbf{Exact-match OPT} (binary): whether the root composition matches
identically.
\item \textbf{Partial-match similarity}: Jaccard similarity between root sets
(e.g., comparing \texttt{Evo+Lrn} with \texttt{Evo+Sch}).
\item \textbf{Levenshtein distance} (string distance over the structured
OPT--Code line).
\item \textbf{Weighted mechanism agreement}: weights reflecting the semantic
distances between roots (e.g., \Swm\ is closer to \Evo\ than to \Sym).
\end{itemize}
Discrepancies trigger a joint review.
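The exact-match, partial-match, and string-distance metrics above can be
computed with a few lines of Python, as in the sketch below; root sets are
extracted by splitting on the composition operators, a simplification of the
full grammar in Appendix~\ref{app:optcode}, and the weighted mechanism
agreement is omitted because it additionally requires a root-distance table.
\begin{verbatim}
import re

def roots(opt_code):
    """Root set of an OPT-Code line such as 'OPT=Evo+Lrn; Rep=param; ...'."""
    body = opt_code.split(";")[0].replace("OPT=", "")
    return set(t for t in re.split(r"[+/{}\[\],→]", body) if t)

def exact_match(a, b):
    """Binary exact match of the composition part of two OPT-Code lines."""
    return a.split(";")[0].strip() == b.split(";")[0].strip()

def jaccard(a, b):
    """Partial-match root similarity, e.g. Evo+Lrn vs. Evo+Sch gives 1/3."""
    ra, rb = roots(a), roots(b)
    return len(ra & rb) / len(ra | rb) if (ra | rb) else 1.0

def levenshtein(a, b):
    """Edit distance over the structured OPT-Code strings (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]
\end{verbatim}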
\subsubsection{Adjudication Phase}
If the two candidate classifications differ substantially (e.g., different root
sets or different compositions), an adjudication step is performed:
\begin{enumerate}
\item Provide the system description, both candidate OPT--Codes, both
rationales, and both evaluator reports to a third model (or human expert).
\item Use a specialized \emph{adjudicator prompt} that asks the model to
choose the better classification according to OPT rules.
\item Require the adjudicator to justify its decision and to propose a final,
consensus OPT--Code.
\end{enumerate}
A new evaluator pass is then run on the adjudicated OPT--Code to confirm
correctness.
\subsubsection{Quality Metrics}
The following quality-reporting metrics may be computed at the level of a batch
of evaluations:
\begin{itemize}
\item \textbf{Evaluator pass rate}: proportion of \texttt{PASS} or
\texttt{WEAK\_PASS} verdicts.
\item \textbf{Inter-model consensus rate}: the proportion of cases in which the
two candidate OPT--Codes match exactly.
\item \textbf{Root-level confusion matrix}: which OPT roots are mistaken for
others, across models or datasets.
\item \textbf{Pipeline sensitivity}: how often parallelism or data pipelines
are misclassified as mechanisms.
\end{itemize}
These metrics allow the OPT framework to be applied consistently and help
identify systematic weaknesses in model-based classification pipelines.
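A short Python sketch of these batch-level computations, assuming each
per-case record carries the two verdicts, the two OPT--Codes, and their
extracted root sets (the field names are illustrative):
\begin{verbatim}
from collections import Counter

def batch_metrics(records):
    """records: dicts with keys 'verdict_a', 'verdict_b', 'opt_a', 'opt_b',
    'roots_a', 'roots_b' (root sets, e.g. frozensets of root names)."""
    n = len(records)
    ok = ("PASS", "WEAK_PASS")
    passes = sum(r["verdict_a"] in ok and r["verdict_b"] in ok for r in records)
    exact = sum(r["opt_a"] == r["opt_b"] for r in records)
    # Root-level confusion: roots chosen only by model A paired with roots
    # chosen only by model B, accumulated over the batch.
    confusion = Counter()
    for r in records:
        for ra in r["roots_a"] - r["roots_b"]:
            for rb in r["roots_b"] - r["roots_a"]:
                confusion[(ra, rb)] += 1
    return {"pass_rate": passes / n if n else 0.0,
            "consensus_rate": exact / n if n else 0.0,
            "root_confusion": confusion}
\end{verbatim}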
\subsubsection{Longitudinal Tracking}
For large-scale use (e.g., benchmarking industrial systems), we recommend storing
the following for each case:
\begin{itemize}
\item the system description,
\item both model classifications,
\item evaluator verdicts and scores,
\item adjudicated decisions,
\item timestamps and model versions.
\end{itemize}
Such archival enables longitudinal analysis of model performance, drift, and
taxonomy usage over time.


@ -0,0 +1,6 @@
\section{Appendix: OPT--Code Prompt Specifications}
This appendix collects the prompt formulations used to elicit
OPT--Code classifications from large language models and to evaluate
those classifications for correctness and consistency.


@ -0,0 +1,57 @@
\section{OPT-Code v1.0: Naming Convention}
\label{app:optcode}
\paragraph{Purpose.} Provide compact, semantically transparent names that self-identify an AI system's operative mechanism(s). These are the \emph{only} public OPT names; legacy signal types remain descriptive but are not taxonomic.
\subsection*{Roots (frozen set in v1.0)}
\begin{center}
\begin{tabular}{@{}lll@{}}
\toprule
\textbf{Short} & \textbf{Name} & \textbf{Mechanism}\\
\midrule
\Lrn & Learnon & Parametric learning (loss/likelihood/return) \\
\Evo & Evolon & Population adaptation (variation/selection/inheritance) \\
\Sym & Symbion & Symbolic inference (rules/constraints/proofs) \\
\Prb & Probion & Probabilistic inference (posteriors/ELBO) \\
\Sch & Scholon & Search \& planning (heuristics/DP/graph) \\
\Ctl & Controlon & Control \& estimation (feedback/Kalman/LQR/MPC) \\
\Swm & Swarmon & Collective/swarm (stigmergy/distributed rules) \\
\bottomrule
\end{tabular}
\end{center}
\subsection*{Composition syntax}
\begin{itemize}[leftmargin=1.2em]
\item \hyb{A+B}: co-operative mechanisms (e.g., \hyb{Lrn+Sch}).
\item \hyb{A/B}: hierarchical nesting, outer/inner (e.g., \hyb{Evo/Lrn}).
\item \hyb{A\{B,C\}}: parallel ensemble (e.g., \hyb{Sym\{Lrn,Prb\}}).
\item \hyb{[A→B]}: sequential pipeline (e.g., \hyb{[Lrn→Ctl]}).
\end{itemize}
\subsection*{Attributes (orthogonal descriptors)}
Optional, mechanism-agnostic, appended after a semicolon:
\[
\text{\small\tt OPT=Evo/Lrn+Ctl; Rep=param; Obj=fitness; Data=sim; Time=gen; Human=low}
\]
Keys: \texttt{Rep} (representation), \texttt{Locus}, \texttt{Obj}, \texttt{Data}, \texttt{Time}, \texttt{Human}.
\subsection*{Grammar (ABNF)}
\begin{verbatim}
opt-spec = "OPT=" compose [ ";" attrs ]
compose = term / compose "+" term / compose "/" term
/ "[" compose "→" compose "]"
/ term "{" compose *("," compose) "}"
term = "Lrn" / "Evo" / "Sym" / "Prb" / "Sch" / "Ctl" / "Swm"
attrs = attr *( ";" attr )
attr = key "=" value
key = 1*(ALPHA)
value = 1*(ALPHA / DIGIT / "-" / "_" / "." )
\end{verbatim}
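For illustration, a minimal Python validator for the flat (non-nested)
fragment of this grammar is sketched below; the bracketed pipeline and
brace-delimited ensemble forms would require a small recursive parser and are
omitted here.
\begin{verbatim}
import re

ROOTS = {"Lrn", "Evo", "Sym", "Prb", "Sch", "Ctl", "Swm"}

def validate_opt_spec(spec):
    """Validate a flat OPT line such as 'OPT=Evo/Lrn+Ctl; Rep=param; Obj=fitness'.
    Returns (ok, message); nested {...} and [...] forms are not handled."""
    if not spec.startswith("OPT="):
        return False, "missing OPT= prefix"
    fields = [f.strip() for f in spec.split(";")]
    terms = re.split(r"[+/]", fields[0][len("OPT="):])
    bad = [t for t in terms if t not in ROOTS]
    if bad:
        return False, "unknown roots: " + ", ".join(bad)
    for attr in fields[1:]:
        if not re.fullmatch(r"[A-Za-z]+=[A-Za-z0-9._-]+", attr):
            return False, "malformed attribute: " + attr
    return True, "ok"

# validate_opt_spec("OPT=Evo/Lrn+Ctl; Rep=param; Obj=fitness")  -> (True, "ok")
# validate_opt_spec("OPT=Foo+Lrn")                              -> (False, ...)
\end{verbatim}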
\subsection*{Stability and change control}
\textbf{S1 (Root freeze).} The seven roots above are frozen for OPT-Code v1.0.
\textbf{S2 (Extensions via attributes).} New nuance is expressed via attributes, not new roots.
\textbf{S3 (Mechanism distinctness).} Proposals to add a root in a future major version must prove a distinct operational mechanism not subsumable by existing roots.
\textbf{S4 (Compatibility).} Parsers may accept legacy aliases but must render short names only.
\textbf{S5 (Priority).} The first published mapping of a system's OPT-Code (with its mathematical operators) has naming priority; deviations must be justified.


@ -0,0 +1,39 @@
\subsection{Storage Formats for OPT Audit Logs}
For large-scale or longitudinal use, we recommend storing OPT classifications
and evaluations in a machine-readable log format. Two practical options are:
\paragraph{JSON Lines (JSONL).}
Each line contains a single JSON object describing one system evaluation,
including:
\begin{itemize}
\item system identifier and textual description,
\item candidate OPT--Codes and rationales,
\item evaluator verdicts, scores, and issue summaries,
\item adjudicator decisions and final OPT--Code,
\item timestamps, model identifiers, and prompt variants.
\end{itemize}
JSONL is convenient for streaming pipelines, command-line tools, and map--reduce
processing.
\paragraph{YAML.}
YAML provides more human-friendly syntax and supports comments. It is useful for
curated datasets or hand-edited corpora of OPT--annotated systems. The same
fields as above can be stored in a nested structure, with separate top-level
keys for \texttt{description}, \texttt{candidates}, \texttt{evaluations},
\texttt{adjudication}, and \texttt{metadata}.
\paragraph{Schema.}
A minimal schema for either JSONL or YAML includes:
\begin{itemize}
\item \texttt{id}: unique system identifier,
\item \texttt{description}: text or reference to source code,
\item \texttt{candidates}: list of OPT--Codes and rationales,
\item \texttt{evaluations}: evaluator outputs for each candidate,
\item \texttt{adjudication}: final decision and rationale (if any),
\item \texttt{final}: final OPT--Code and attributes,
\item \texttt{meta}: timestamps, model versions, prompt names.
\end{itemize}
Such logs support reproducibility, auditability, and downstream statistical
analysis of taxonomy usage and model performance.
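As a concrete illustration, the Python sketch below appends one record
following this minimal schema to a JSONL audit log and reads the log back; all
field values and the file name are illustrative.
\begin{verbatim}
import json

record = {
    "id": "sys-0001",
    "description": "Neuroevolution of network weights for a control task.",
    "candidates": [{"opt": "OPT=Evo/Lrn; Rep=NN-weights; Obj=fitness",
                    "rationale": "Population-level adaptation (outer) wraps "
                                 "parametric weight tuning (inner)."}],
    "evaluations": [{"verdict": "PASS", "score": 88}],
    "adjudication": None,
    "final": {"opt": "OPT=Evo/Lrn; Rep=NN-weights; Obj=fitness"},
    "meta": {"timestamp": "2025-11-17T12:00:00Z",
             "models": ["model-A", "model-B"], "prompt": "minimal"},
}

with open("opt_audit.jsonl", "a", encoding="utf-8") as fh:
    fh.write(json.dumps(record) + "\n")          # one JSON object per line

with open("opt_audit.jsonl", encoding="utf-8") as fh:
    records = [json.loads(line) for line in fh]  # stream-friendly reload
\end{verbatim}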


@ -0,0 +1,101 @@
\section{Appendix: OPT--Code Prompt Specifications}
This appendix collects the prompt formulations used to elicit OPT--Code
classifications from large language models and to evaluate those classifications
for correctness and consistency.
\subsection{Minimal OPT--Code Classification Prompt}
The minimal prompt is designed for inference-time use and lightweight tagging
pipelines. It assumes a basic familiarity with the OPT roots and emphasizes
mechanism-based classification over surface labels.
\begin{quote}\small
\input{appendix_prompt_minimal.tex}
\end{quote}
\subsection{Maximal Expert OPT--Code Classification Prompt}
The maximal prompt elaborates all root definitions, clarifies the treatment of
parallelism and pipelines, and details rules for composition. It is intended for
fine-tuning, high-stakes evaluations, or detailed audit trails.
\begin{quote}\small
\input{appendix_prompt_maximal.tex}
\end{quote}
\subsection{OPT--Code Prompt Evaluator}
The evaluator prompt is a meta-level specification: it assesses whether a given
candidate OPT--Code and rationale respect the OPT taxonomy and associated
guidelines. This enables automated or semi-automated review of classifications
generated by other models or tools.
\begin{quote}\small
\input{appendix_prompt_evaluator.tex}
\end{quote}
\subsection{OPT--Code Prompt Evaluator}
\begin{verbatim}
You are an OPT-Code evaluation assistant. Your job is to check whether a
candidate OPT classification follows the OPT rules and is mechanistically
correct.
Inputs you will be given:
1) System description: a code snippet or project/system description.
2) Candidate OPT-Code line (from another model), of the form:
OPT=<roots>; Rep=<...>; Obj=<...>; Data=<...>; Time=<...>; Human=<...>
3) Candidate rationale: 2-6 sentences explaining the candidate's choice.
You must evaluate the candidate against the following criteria:
(1) Format compliance:
- Does the candidate produce exactly one OPT= line with the correct fields?
- Are the roots valid (Lrn, Evo, Sym, Prb, Sch, Ctl, Swm)?
- Are "+" and "/" used only between valid roots?
(2) Mechanism correctness:
- Do the chosen roots match the operative mechanism in the system description?
- Is there any root that is missing but clearly present?
- Is any root included that is not supported by the description?
(3) Parallelism and pipelines:
- Does the candidate incorrectly treat threads, GPU kernels, async, pipelines,
or distributed infrastructure as OPT mechanisms (e.g., calling something
Swm or Sch only because it is parallel)?
- If so, this is a serious error.
(4) Composition correctness:
- Use "+" only for tightly integrated mechanisms in the same core loop.
- Use "/" only for distinct sequential stages.
- Flag misuse of "+" or "/" if mechanisms are obviously separate or obviously
integrated.
(5) Attribute plausibility:
- Are Rep, Obj, Data, Time, and Human reasonably consistent with the system
description?
- They do not need to be unique, but they must be defensible.
Your output must use the following structure:
Verdict: <PASS | WEAK_PASS | FAIL>
Score: <integer from 0 to 100>
Issues:
- Format: <short comment>
- Mechanism: <short comment>
- Parallelism/Pipelines: <short comment>
- Composition: <short comment>
- Attributes: <short comment>
Summary: <2-4 sentences giving an overall assessment and key corrections, if any>.
Guidelines:
- PASS means: no major errors; at most minor debatable choices.
- WEAK_PASS means: generally acceptable, but with at least one non-trivial issue
that should be corrected before publication.
- FAIL means: at least one serious misunderstanding of the mechanism, or clear
violation of the parallelism/pipeline rules, or badly wrong roots.
\end{verbatim}
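When this evaluator runs inside an automated pipeline, its reply must be
parsed back into structured fields. The following Python sketch assumes the
reply follows the Verdict/Score/Issues/Summary layout specified above; it is
an illustration, not part of the prompt itself.
\begin{verbatim}
import re

def parse_evaluator_reply(text):
    """Extract verdict, score, per-category issues, and summary from a reply
    that follows the structured output format in the evaluator prompt."""
    verdict = re.search(r"Verdict:\s*(PASS|WEAK_PASS|FAIL)", text)
    score = re.search(r"Score:\s*(\d+)", text)
    issues = dict(re.findall(r"-\s*(Format|Mechanism|Parallelism/Pipelines|"
                             r"Composition|Attributes):\s*(.*)", text))
    summary = re.search(r"Summary:\s*(.*)", text, re.DOTALL)
    return {
        "verdict": verdict.group(1) if verdict else None,
        "score": int(score.group(1)) if score else None,
        "issues": issues,
        "summary": summary.group(1).strip() if summary else "",
    }
\end{verbatim}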


@ -0,0 +1,114 @@
\subsection{Maximal Expert OPT--Code Classification Prompt}
The maximal prompt elaborates all root definitions, clarifies the treatment of
parallelism and pipelines, and details rules for composition. It is intended for
fine-tuning, high-stakes evaluations, or detailed audit trails.
\begin{verbatim}
You are an expert mechanism analyst. Your task is to assign an OPT-Code to a
system based solely on the operative computational mechanisms present in either:
• source code, or
• a system/project description.
You must ignore marketing language, domain labels, or incidental engineering
choices. You must classify only the underlying algorithmic mechanism.
======================================================================
OPT ROOTS: DEFINITIONS
======================================================================
Lrn (Learning)
Parametric updating within a fixed architecture: gradient descent, Adam,
Hebbian/Oja rules, predictive coding, error-backprop, RL policy/value
updates, TD(λ), actor-critic.
Evo (Evolutionary)
Population-based mechanisms involving variation, selection, inheritance,
and reproduction. Examples: GA, ES, GP, CMA-ES, neuroevolution, clonal
selection, immune-inspired evolutionary search.
Sym (Symbolic / Logic / Rules)
Manipulation of explicit symbolic structures: logic rules, constraints,
production systems, theorem proving, STRIPS-style planning, forward/backward
chaining, unification, rule-based expert systems.
Prb (Probabilistic)
Computation expressed as uncertainty propagation or probabilistic inference:
Bayesian networks, HMMs, CRFs, factor graphs, particle filters, MCMC,
variational inference, probabilistic programming.
Sch (Search / Planning / Optimization)
Non-probabilistic search over discrete or continuous spaces: A*, IDA*,
branch-and-bound, MCTS (deterministic variants), generic black-box
optimizers, planners not relying on symbolic rules or probabilistic models.
Ctl (Control / Estimation)
Feedback regulation, trajectory tracking, or state estimation: PID, LQR,
Kalman filter, extended/unscented Kalman filters, MPC. Key signature:
closed-loop feedback and an explicit control objective.
Swm (Swarm / Multi-agent Local Rules)
Many simple agents with local interactions: cellular automata, boids,
ant-colony optimization, PSO, immune-network models, distributed consensus.
======================================================================
PARALLELISM AND PIPELINES: DO NOT MISCLASSIFY
======================================================================
Execution-level parallelism is NOT a mechanism.
Treat ALL of the following as irrelevant to OPT classification:
threads, processes, async/await, multiprocessing, queues,
CUDA/GPU kernels, tensor parallelism, model parallelism,
SIMD, vectorization, batching, map/reduce,
ETL-style pipelines (preprocess → model → postprocess),
ROS nodes, RPC, microservices, Spark jobs,
Kubernetes orchestration, distributed training frameworks.
Parallelism or pipelines only influence OPT when they are intrinsic to the
computation itself:
Swm: many interacting agents with local rules; parallelism reflects the
mechanism, not engineering.
Evo: population-level parallel evaluation expresses the mechanism.
Sch: multi-branch exploration in search trees.
Prb: particle filters with particle-wise updates count ONLY if the model
semantics requires distributional representation.
Pipeline stages DO NOT imply sequential composition "/" unless the stages
implement distinct root mechanisms (e.g., Evo → Sym → Prb).
======================================================================
HOW TO COMPOSE ROOTS
======================================================================
Use "+" when mechanisms are tightly integrated within one core loop:
Evo+Lrn, Lrn+Sch, Swm+Prb, etc.
Use "/" when mechanisms run in distinct sequential phases:
Evo/Sch, Sym/Prb, Sch/Lrn, Evo/Sym.
======================================================================
ORTHOGONAL ATTRIBUTES
======================================================================
Rep = representation (bitstring, graph, rules, NN-weights, trajectories,
agent-state, signals, distributions)
Obj = objective (loss, reward, likelihood, energy, constraint-violation)
Data = data regime (labels, unlabeled, environment, self-play, expert demos)
Time = adaptation timescale (online, offline, generations, episodic)
Human = human involvement (high / medium / low)
======================================================================
OUTPUT FORMAT
======================================================================
1) OPT=<roots>; Rep=<...>; Obj=<...>; Data=<...>; Time=<...>; Human=<...>
2) Rationale: 3-6 sentences describing the mechanism and why these roots apply.
If information is incomplete:
OPT=Unknown; Rep=?; Obj=?; Data=?; Time=?; Human=?
Rationale: explain missing elements.
\end{verbatim}


@ -0,0 +1,60 @@
\subsection{Minimal OPT--Code Classification Prompt}
The minimal prompt is designed for inference-time use and lightweight
tagging pipelines. It assumes a basic familiarity with the OPT roots
and emphasizes mechanism-based classification over surface labels.
\begin{verbatim}
You are an analyzer that assigns an OPT-Code to AI systems based on the system's
operative mechanism.
OPT roots (mechanism classes):
- Lrn (Learning): parametric updates within a fixed model; gradients, Hebbian/Oja,
TD learning, policy/value updates.
- Evo (Evolutionary): population-based variation + selection + inheritance; GA,
ES, GP, neuroevolution, clonal selection.
- Sym (Symbolic/Logic/Rules): explicit symbolic structures, unification, rule
application, theorem proving, production systems, structured planning.
- Prb (Probabilistic): explicit probabilistic models and inference; Bayesian nets,
HMMs, graphical models, probabilistic programming, VI, MCMC.
- Sch (Search/Planning/Optimization): non-probabilistic search or planning in
discrete/continuous spaces; A*, MCTS, branch-and-bound, black-box optimization.
- Ctl (Control/Estimation): feedback control and state estimation; PID, LQR,
Kalman filters, MPC, trajectory regulation.
- Swm (Swarm/Multi-agent Local Rules): many simple agents with local interactions
or neighborhood rules; PSO, ACO, boids, immune networks.
Rules:
• Focus strictly on the mechanism: update rules, iteration structure, and data
flow that produces behavior. Ignore task domain and surface labels like “AI.”
• Parallelism & pipelines:
- DO NOT treat threads, actors, async/await, CUDA kernels, batching,
distributed jobs, or multi-stage data pipelines as OPT mechanisms.
- Parallelism counts ONLY when the algorithmic core uses many interacting local
agents (Swm), population-level adaptation (Evo), or true multi-branch search (Sch).
- Pipelines are NOT mechanisms; use sequential composition "/" only for true
multi-stage computational mechanisms (e.g., Evo/Sch).
• Composition:
- Use "X+Y" when roots operate together in the same core loop.
- Use "X/Y" when mechanisms occur in separate stages.
Also assign orthogonal attributes:
- Rep: representation (bitstring, rules, graph, NN-weights, agent-state, etc.).
- Obj: objective (loss, reward, likelihood, constraint-satisfaction, cost).
- Data: data regime (labels, unlabeled, self-play, environment, expert demos, signals).
- Time: adaptation timescale (online, offline, episodic, generations).
- Human: human involvement (low/medium/high).
Output format:
1) OPT=<root(s)>; Rep=<...>; Obj=<...>; Data=<...>; Time=<...>; Human=<...>
2) Rationale: 2-4 sentences explaining the mechanism and classification.
If insufficient data:
OPT=Unknown; Rep=?; Obj=?; Data=?; Time=?; Human=?
and explain what information is missing.
\end{verbatim}


@ -0,0 +1,35 @@
% ---------------------------
\subsection{Artificial Immune Systems (AIS) in OPT}
% ---------------------------
It is useful to show how OPT-Code specifications can be derived for a technique that is inherently hybrid.
Artificial Immune Systems (AIS) instantiate computation via biomimetic mechanisms drawn from adaptive immunity. Their operative core combines (i) population-level \emph{variation and selection} (somatic hypermutation, clonal expansion, memory) and (ii) distributed, locally interacting agents (cells, idiotypic networks), often with (iii) probabilistic fusion of uncertain signals. In OPT, this places AIS primarily in \Evo\ and \Swm, with frequent couplings to \Prb\ and occasional \Sch/\Ctl\ layers depending on task and implementation.
\paragraph{Canonical families and OPT placement.}
\begin{itemize}
\item \textbf{Clonal selection \& affinity maturation (CLONALG, aiNet).} A population of detectors/antibodies $\{a_i\}$ undergoes clone--mutate--select cycles driven by affinity to antigens $x$. OPT: \textbf{\Evo+\Swm} (often $+$\Prb).\\
Affinity (bitstrings; Hamming distance $d_H$): $\mathrm{aff}(x,a)=1-\frac{d_H(x,a)}{|x|}$. Clone count $n_i \propto \mathrm{aff}(x,a_i)$; hypermutation rate $\mu_i=f(\mathrm{aff})$ (typically inversely proportional); a code sketch of this loop follows the list.
\item \textbf{Negative Selection Algorithms (NSA).} Generate detectors that avoid ``self'' set $\mathcal S$ and cover $\mathcal X\setminus \mathcal S$. OPT: \textbf{\Evo/\Sch} ($+$\Prb\ for thresholded matching).\\
Objective: choose $D$ s.t. $\forall d\in D: d\notin \mathcal S$ and coverage $\Pr[\mathrm{match}(x,d)\mid x\notin \mathcal S]\ge \tau$.
\item \textbf{Immune network models (idiotypic).} Interacting clones stimulate/suppress each other; dynamics produce memory and regulation. OPT: \textbf{\Swm+\Evo} (sometimes $+$\Ctl).\\
Skeleton dynamics: $\dot a_i=\sum_j s_{ij}a_j-\sum_j \sigma_{ij}a_ia_j-\delta a_i$ with stimulation $s_{ij}$, suppression $\sigma_{ij}$, decay $\delta$.
\item \textbf{Dendritic Cell Algorithm (DCA) / Danger Theory.} Cells fuse PAMP/danger/safe signals to decide anomaly labeling; aggregation over a population provides robust detection. OPT: \textbf{\Swm+\Prb} (optionally $+$\Evo\ if online adaptation is added).
\end{itemize}
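\paragraph{Mechanism sketch (illustrative).}
The Python sketch below of a CLONALG-style clone--mutate--select loop over
bitstrings shows why the operative mechanism is \Evo+\Swm\ rather than \Lrn:
adaptation proceeds by affinity-proportional cloning, hypermutation, and
selection over a population, with no gradient or parametric update. All
parameter values are illustrative.
\begin{verbatim}
import random

def affinity(antigen, antibody):
    """1 - normalized Hamming distance between equal-length bitstrings."""
    mismatches = sum(a != b for a, b in zip(antigen, antibody))
    return 1.0 - mismatches / len(antigen)

def hypermutate(antibody, rate):
    """Flip each bit independently with probability rate."""
    return [1 - b if random.random() < rate else b for b in antibody]

def clonalg_step(antigen, population, n_select=5, clone_factor=10, max_rate=0.3):
    """One clone-mutate-select cycle; parameter values are illustrative."""
    ranked = sorted(population, key=lambda ab: affinity(antigen, ab), reverse=True)
    candidates = list(ranked)
    for ab in ranked[:n_select]:
        aff = affinity(antigen, ab)
        n_clones = max(1, int(round(clone_factor * aff)))  # more clones for higher affinity
        rate = max_rate * (1.0 - aff)                      # less mutation for higher affinity
        candidates.extend(hypermutate(ab, rate) for _ in range(n_clones))
    candidates.sort(key=lambda ab: affinity(antigen, ab), reverse=True)
    return candidates[:len(population)]                    # selection keeps the fittest

# Example: evolve random 16-bit detectors toward a target pattern.
random.seed(0)
target = [random.randint(0, 1) for _ in range(16)]
pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
for _ in range(30):
    pop = clonalg_step(target, pop)
# affinity(target, pop[0]) should now be close to 1.0
\end{verbatim}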
\paragraph{OPT-Code exemplars.}
\begin{quote}\small
\texttt{CLONALG: OPT=Evo+Swm; Rep=bitstring; Obj=affinity; Data=labels$\mid$unlabeled; Time=gens; Human=low}\\
\texttt{aiNet: OPT=Evo+Swm; Rep=realvector; Obj=affinity+diversity; Time=gens}\\
\texttt{NSA (anomaly): OPT=Evo/Sch+Prb; Rep=bitstring; Obj=coverage; Data=self/nonself; Time=gens}\\
\texttt{DCA: OPT=Swm+Prb; Rep=signals; Obj=anomaly-score; Time=online}\\
\texttt{Idiotypic control: OPT=Swm+Ctl; Rep=rules; Obj=stability+coverage; Time=online}
\end{quote}
\paragraph{Where biology and OPT coincide.}
Somatic hypermutation $+$ selection $\to$ \Evo; massive agent concurrency and local rules $\to$ \Swm; uncertainty fusion (signal weighting, thresholds) $\to$ \Prb; homeostatic regulation $\to$ \Ctl; detector-set coverage and complement generation $\to$ \Sch.
\paragraph{Assurance considerations.}
Key failure modes are coverage gaps (missed anomalies), detector drift, and instability in network dynamics. Assurance therefore calls for (i) held-out self/non-self tests, (ii) diversity and coverage metrics, (iii) stability analysis of interaction graphs, and (iv) calibration of anomaly thresholds (if \Prb). These layer cleanly with risk-management frameworks (NIST AI RMF, ISO/IEC 23053) while OPT communicates mechanism.


@ -0,0 +1,5 @@
% ---------------------------
\section{Background and Prior Work}
% ---------------------------
Classic textbooks and surveys treat symbolic reasoning, planning/search, probabilistic models, learning, evolutionary methods, and control/estimation as co-equal pillars \citep{AIMA4,CIbook,FuzzySurvey,SuttonBarto2018}. No-Free-Lunch (NFL) theorems for search/optimization motivate pluralism: no single mechanism dominates across all problems \citep{Wolpert1997}. Biological literatures mirror these mechanisms: synaptic plasticity and Hebbian/Oja learning \citep{Hebb1949,Oja1982}, population genetics and replicator dynamics \citep{Price1970,TaylorJonker1978}, Bayesian cognition \citep{KnillPouget2004}, and optimal feedback control in motor behavior \citep{TodorovJordan2002,Kalman1960,Pontryagin1962}.


@ -0,0 +1,468 @@
% =======================
% Shared body (no preamble)
% Accessibility: keep vector figures, larger sizes set by wrappers
% Wrappers must define:
% \twocoltrue or \twocolfalse
% \figureW, \figureH (for radar plots)
% Packages expected: tikz, pgfplots, booktabs, amsmath, amssymb, mathtools, hyperref, natbib (or ACM/IEEE styles)
% =======================
% --- Short names (public-only; no numeric codes)
\newcommand{\Lrn}{\textbf{Lrn}} % Learnon — Parametric learning
\newcommand{\Evo}{\textbf{Evo}} % Evolon — Population adaptation
\newcommand{\Sym}{\textbf{Sym}} % Symbion — Symbolic inference
\newcommand{\Prb}{\textbf{Prb}} % Probion — Probabilistic inference
\newcommand{\Sch}{\textbf{Sch}} % Scholon — Search & planning
\newcommand{\Ctl}{\textbf{Ctl}} % Controlon — Control & estimation
\newcommand{\Swm}{\textbf{Swm}} % Swarmon — Collective/swarm
\newcommand{\hyb}[1]{\textsc{#1}} % hybrid spec styling (e.g., \hyb{Lrn+Sch})
%\newcommand{\figureW}{0.95\textwidth}
%\newcommand{\figureH}{0.58\textwidth}
% --- Wide figure helper: figure* in two-column; figure in one-column
\newif\iftwocol
\providecommand{\figureW}{0.95\textwidth}
\providecommand{\figureH}{0.58\textwidth}
\newenvironment{WideFig}{\iftwocol\begin{figure*}\else\begin{figure}\fi}{\iftwocol\end{figure*}\else\end{figure}\fi}
% --- Wide table helper: table* in two-column; table in one-column
\newenvironment{WideTab}{\iftwocol\begin{table*}\else\begin{table}\fi}{\iftwocol\end{table*}\else\end{table}\fi}
% --- TikZ/PGF defaults
\pgfplotsset{compat=1.18}
\begin{abstract}
Policy and industry discourse often reduce AI to machine learning framed as “supervised, unsupervised, or reinforcement learning.” This triad omits long-standing AI traditions (symbolic expert systems, search \& planning, probabilistic inference, control/estimation, and evolutionary/collective computation). We formalize the \emph{Operational-Premise Taxonomy}~(OPT), classifying AI by its dominant computational mechanism: \Lrn, \Evo, \Sym, \Prb, \Sch, \Ctl, and \Swm. For each class we provide core mathematical operators, link them to canonical biological mechanisms, and survey hybrid compositions. We argue that OPT yields a principled, biologically grounded, and governance-usable taxonomy that avoids category errors inherent in training-signal-based labels, while remaining compact and readable with a short, compositional naming code.
\end{abstract}
% ---------------------------
\section{Introduction}
% ---------------------------
Regulatory texts frequently equate “AI” with three categories of \emph{learning signals}: supervised, unsupervised, and reinforcement learning \citep{EUAnnex,NISTRMF}. These categories emerged from neural/connectionist practice, not from the full breadth of artificial intelligence \citep{AIMA4}. We propose an alternative taxonomic axis: the \emph{operational premise}—the primary computational mechanism a system instantiates to improve, adapt, or decide. The resulting taxonomy, the \emph{Operational-Premise Taxonomy}~(OPT), provides a transparent and consistent framework for compactly describing AI systems, including hybrids and pipelines. OPT retains biological analogs (learning vs.\ adaptation) while accommodating symbolic, probabilistic, search, control, and swarm paradigms.
% ---------------------------
\section{Operational-Premise Taxonomy (OPT)}
% ---------------------------
Because OPT introduces several new labels, we present them here before turning to background and related work.
OPT classes are defined by dominant mechanism; hybrids are explicit compositions:
\begin{itemize}[leftmargin=1.6em]
\item \textbf{Learnon (\Lrn)} — Parametric learning within an individual (gradient/likelihood/return updates).
\item \textbf{Evolon (\Evo)} — Population adaptation via variation, selection, inheritance.
\item \textbf{Symbion (\Sym)} — Symbolic/logic inference over discrete structures (KB, clauses, proofs).
\item \textbf{Probion (\Prb)} — Probabilistic modeling and approximate inference (posteriors, ELBO).
\item \textbf{Scholon (\Sch)} — Deliberative search and planning (heuristics, DP, graph search).
\item \textbf{Controlon (\Ctl)} — Feedback control and state estimation in dynamical systems.
\item \textbf{Swarmon (\Swm)} — Collective/swarm coordination with local rules and emergence.
\end{itemize}
\noindent \emph{Hybrid notation.}~We use \hyb{A+B}~for co-operative mechanisms, \hyb{A/B}~for hierarchical nesting (outer/inner), \hyb{A\{B,C\}}~for parallel ensembles, and \hyb{[A→B]}~for pipelines (Appendix~\ref{app:optcode}).
% --- OPT circle landscape (auto-wide)
\begin{WideFig}
\centering
\begin{tikzpicture}[
node distance=2cm,
every node/.style={font=\small},
optnode/.style={circle, draw=black, very thick, minimum size=11mm, align=center},
hybridedge/.style={-Latex, very thick},
weakedge/.style={-Latex, dashed, thick},
legendbox/.style={draw, rounded corners, inner sep=3pt, font=\footnotesize},
]
\def\R{4.9}
\path
(90:\R) node[optnode] (L) {Lrn}
(38.6:\R) node[optnode] (S) {Sch}
(-12.8:\R) node[optnode] (Y) {Sym}
(-64.2:\R) node[optnode] (P) {Prb}
(-115.6:\R) node[optnode] (C) {Ctl}
(-167:\R) node[optnode] (W) {Swm}
(141.4:\R) node[optnode] (E) {Evo};
\draw[hybridedge] (L) to[bend left=10] (S);
\draw[hybridedge] (S) to[bend left=10] (L);
\draw[hybridedge] (L) to[bend left=10] (Y);
\draw[hybridedge] (Y) to[bend left=10] (L);
\draw[hybridedge] (L) to[bend left=10] (E);
\draw[hybridedge] (E) to[bend left=10] (L);
\draw[hybridedge] (L) to[bend left=10] (C);
\draw[hybridedge] (C) to[bend left=10] (L);
\draw[weakedge] (S) -- (Y);
\draw[weakedge] (P) -- (L);
\draw[weakedge] (P) -- (S);
\draw[weakedge] (W) -- (E);
\draw[weakedge] (C) -- (S);
\draw[weakedge] (P) -- (C);
\node[legendbox, anchor=north east] at ($(current bounding box.north east)+(-0.2, 1.2)$) {
\begin{tabular}{@{}l@{}}
\textbf{Solid:} prominent hybrids (\hyb{Lrn+Sch}, \hyb{Lrn+Sym}, \hyb{Lrn+Evo}) \\
\textbf{Dashed:} frequent couplings (\hyb{Prb+Ctl}, \hyb{Sch+Sym}, \hyb{Swm+Evo}) \\
\end{tabular}
};
\end{tikzpicture}
\caption{OPT landscape using short names only: \Lrn, \Evo, \Sym, \Prb, \Sch, \Ctl, \Swm.}
\label{fig:opt_landscape}
\end{WideFig}
% ---------------------------
\section{Mathematical Foundations and Biological Correspondences}
\label{sec:math}
% ---------------------------
\paragraph{Learnon (\Lrn).} Empirical risk minimization:
\begin{equation}
\theta^\star \in \arg\min_{\theta}\ \mathbb{E}_{(x,y)\sim \mathcal{D}}[ \ell(f_\theta(x),y) ] + \lambda \Omega(\theta),
\end{equation}
with gradient updates $\theta_{t+1}=\theta_t-\eta_t\nabla\widehat{\mathcal{L}}(\theta_t)$; RL maximizes $J(\pi)=\mathbb{E}_\pi[\sum_t \gamma^t r_t]$ in MDPs. \emph{Biology:}~ Hebbian/Oja \citep{Hebb1949,Oja1982}, reward-modulated prediction errors \citep{SuttonBarto2018}.
\paragraph{Evolon (\Evo).} Population pipeline $P_{t+1}=\mathcal{R}(\mathcal{M}(\mathcal{C}(P_t)))$ with fitness-driven selection. \emph{Biology:}~ Price equation $\Delta \bar{z}=\frac{\mathrm{Cov}(w,z)}{\bar{w}}+\frac{\mathbb{E}[w\Delta z]}{\bar{w}}$; replicator $\dot{p}_i=p_i(f_i-\bar{f})$ \citep{Price1970,TaylorJonker1978}.
\paragraph{Symbion (\Sym).} Resolution/unification; soundness and refutation completeness \citep{Robinson1965Resolution}.
\paragraph{Probion (\Prb).} Bayes $p(z|x)\propto p(x|z)p(z)$; VI via ELBO $\mathcal{L}(q)=\mathbb{E}_q[\log p(x,z)]-\mathbb{E}_q[\log q(z)]$; \emph{Biology:}~ Bayesian brain \citep{KnillPouget2004}.
\paragraph{Scholon (\Sch).} A* with an admissible, consistent $h$ is optimally efficient; DP/Bellman updates $V_{k+1}(s)=\max_a[r(s,a)+\gamma\sum_{s'}P(s'|s,a)V_k(s')]$.
\paragraph{Controlon (\Ctl).} LQR minimizes quadratic cost in linear systems; Kalman filter provides MMSE state estimates in LQG \citep{Kalman1960,Pontryagin1962,TodorovJordan2002}.
\paragraph{Swarmon (\Swm).} PSO updates $v_i(t+1)=\omega v_i(t)+c_1 r_1(p_i-x_i)+c_2 r_2(g-x_i)$; ACO pheromone $\tau\leftarrow (1-\rho)\tau+\sum_k \Delta\tau^{(k)}$.
% ---------------------------
\section{Background and Prior Work}
% ---------------------------
Classic textbooks and surveys treat symbolic reasoning, planning/search, probabilistic models, learning, evolutionary methods, and control/estimation as co-equal pillars \citep{AIMA4,CIbook,FuzzySurvey,SuttonBarto2018}. No-Free-Lunch (NFL) theorems for search/optimization motivate pluralism: no single mechanism dominates across all problems \citep{Wolpert1997}. Biological literatures mirror these mechanisms: synaptic plasticity and Hebbian/Oja learning \citep{Hebb1949,Oja1982}, population genetics and replicator dynamics \citep{Price1970,TaylorJonker1978}, Bayesian cognition \citep{KnillPouget2004}, and optimal feedback control in motor behavior \citep{TodorovJordan2002,Kalman1960,Pontryagin1962}.
\include{related-work}
% Bridge
\paragraph{Comparative landscape.}
Table~\ref{tab:opt_vs_frameworks} situates OPT alongside the best-known standards, policy instruments, and textbook structures.
Each of these prior frameworks serves an important function—shared vocabulary (ISO/IEC 22989), ML-system decomposition (ISO/IEC 23053), risk management (NIST AI RMF), usage contexts (NIST AI 200-1), multidimensional policy characterization (OECD), or regulatory stratification (EU AI Act).
However, they remain either technique-agnostic or focused solely on machine learning.
OPT complements them by supplying the missing layer: a stable, biologically grounded \emph{implementation taxonomy} that captures mechanism families across paradigms and defines a formal grammar for hybrid systems.
\include{table-opt-comparison}
% ---------------------------
\section{Comparative Analysis, Completeness, and Objections}
\label{sec:analysis}
% ---------------------------
\subsection{Biological--Artificial Correspondences}
Each OPT class aligns with a biological mechanism (plasticity, natural selection, structured reasoning, Bayesian cognition, deliberative planning, optimal feedback control, and distributed coordination). Shared operators in Sec.~\ref{sec:math} support cross-domain guarantees.
\subsection{Coverage, Hybrids, and Orthogonal Descriptors}
Hybrids are explicit (e.g., \hyb{Lrn+Sch} AlphaZero, \hyb{Lrn+Sym} neuro-symbolic, \hyb{Evo/Lrn} neuroevolution). Orthogonal axes capture representation, locus of change, objective, data regime, timescale, and human participation.
\subsection{Objections and Responses}
\textbf{Reduction to optimization.} Mechanisms imply distinct guarantees/hazards (data leakage vs.\ fitness misspecification vs.\ rule brittleness). NFL cautions against collapsing mechanisms.
\textbf{Hybrid blurring.} OPT treats compositions as first-class; the notation discloses “what changes where, on what objective, and on what timescale.”
\textbf{Regulatory simplicity.} Seven bins appear minimal for coverage; the short names keep disclosures compact and meaningful.
% ---------------------------
\section{Examples and Mapping}
% ---------------------------
\begin{table}[htbp]
\centering
\caption{Representative paradigms mapped to OPT.}
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{@{}p{3.9cm}p{3.6cm}p{2.2cm}@{}}
\toprule
\textbf{Type / Implementation} & \textbf{Examples} & \textbf{OPT (short)}\\
\midrule
NN/Transformer (GD) & CNN, LSTM, attention & \Lrn\\
Reinforcement learning & DQN, PG, AC & \Lrn\;(+\Sch,\,+\Ctl)\\
Evolutionary algorithms & GA, GP, CMA-ES & \Evo\\
Swarm intelligence & ACO, PSO & \Swm\;(+\Evo)\\
Expert systems & Prolog, Mycin, XCON & \Sym\\
Probabilistic models & BN, HMM, factor graphs & \Prb\\
Search \& planning & A*, MCTS, STRIPS & \Sch\\
Control \& estimation & PID, LQR, KF/MPC & \Ctl\\
\bottomrule
\end{tabular}
\label{tab:OPTmap}
\end{table}
% ---------------------------
\section{Orthogonal Axes and Risk Perspectives}
% ---------------------------
\paragraph{Secondary axes (orthogonal descriptors).}
\begin{itemize}[leftmargin=1.2em]
\item \textbf{Representation:} parametric vectors, symbols/logic, graphs, programs, trajectories, policies.
\item \textbf{Locus of Change:} parameters, structure/architecture, population composition, belief state, policy.
\item \textbf{Objective Type:} prediction, optimization, inference, control, search cost, constraint satisfaction.
\item \textbf{Timescale:} online vs.\ offline; within-run vs.\ across-generations.
\item \textbf{Data Regime:} none/synthetic, labeled, unlabeled, interactive reward.
\item \textbf{Human Participation:} expert-authored knowledge vs.\ learned vs.\ co-created.
\end{itemize}
\begin{table}[htbp]
\centering
\caption{Orthogonal descriptive axes and governance risks (abridged).}
\renewcommand{\arraystretch}{1.05}
\begin{tabular}{@{}p{1.25cm}p{3.5cm}p{3.9cm}@{}}
\toprule
\textbf{OPT} & \textbf{Primary Risks} & \textbf{Assurance Focus} \\
\midrule
\Lrn & Data leakage, reward hacking & Data governance, OOD tests, calibration \\
\Evo & Fitness misspecification & Proxy validation, replicates, constraints \\
\Sym & Rule brittleness, KB inconsistency & Provenance, formal verification \\
\Prb & Miscalibration, inference bias & Posterior predictive checks \\
\Sch & Heuristic inadmissibility & Optimality proofs, heuristic diagnostics \\
\Ctl & Instability, unmodeled dynamics & Stability margins, robustness \\
\Swm & Emergent instability & Swarm invariants, safety envelopes \\
\bottomrule
\end{tabular}
\label{tab:OPT-risk}
\end{table}
% --- Radar plots (two figures; auto-wide; short-name legends)
% --- Radar helper: one polygon with six axes (Rep., Locus, Obj., Data, Time, Human)
\newcommand{\RadarPoly}[7]{%
% #1 style, #2..#7 = values on axes in order
\addplot+[#1] coordinates
{(0,#2) (60,#3) (120,#4) (180,#5) (240,#6) (300,#7) (360,#2)};
}
\begin{WideFig}
\centering
\begin{tikzpicture}
\begin{polaraxis}[
width=\figureW, height=\figureH,
ymin=0, ymax=5,
grid=both,
xtick={0,60,120,180,240,300},
xticklabels={Rep.,Locus,Obj.,Data,Time,Human},
legend columns=3,
legend style={draw=none, at={(0.5,1.03)}, anchor=south, font=\small},
tick label style={font=\small},
]
% Lrn, Evo, Sym
\RadarPoly{very thick, mark=*, mark options={solid}, mark size=2pt}{0}{0}{4}{4}{4}{1}
\addlegendentry{\Lrn}
\RadarPoly{densely dashed, very thick, mark=square*, mark options={solid}, mark size=2.2pt}{2}{5}{5}{2}{5}{2}
\addlegendentry{\Evo}
\RadarPoly{dashdotdotted, very thick, mark=triangle*, mark options={solid}, mark size=2.4pt}{5}{4}{4}{5}{3}{5}
\addlegendentry{\Sym}
\end{polaraxis}
\end{tikzpicture}
\caption{Orthogonal axes (0--5) for \Lrn, \Evo, \Sym.}
\label{fig:opt-radar-1}
\end{WideFig}
\begin{WideFig}
\centering
\begin{tikzpicture}
\begin{polaraxis}[
width=\figureW, height=\figureH,
ymin=0, ymax=5,
grid=both,
xtick={0,60,120,180,240,300},
xticklabels={Rep.,Locus,Obj.,Data,Time,Human},
legend columns=4,
legend style={draw=none, at={(0.5,1.03)}, anchor=south, font=\small},
tick label style={font=\small},
]
% Prb, Sch, Ctl, Swm
\RadarPoly{very thick, loosely dotted, mark=diamond*, mark options={solid}, mark size=2.2pt}{4}{3}{5}{4}{3}{3}
\addlegendentry{\Prb}
\RadarPoly{densely dashed, very thick, mark=*, mark options={solid}, mark size=2pt}{3}{3}{4}{2}{3}{3}
\addlegendentry{\Sch}
\RadarPoly{dashdotdotted, very thick, mark=square*, mark options={solid}, mark size=2.2pt}{2}{3}{5}{3}{5}{3}
\addlegendentry{\Ctl}
\RadarPoly{solid, very thick, mark=triangle*, mark options={solid}, mark size=2.4pt}{3}{4}{3}{2}{3}{2}
\addlegendentry{\Swm}
\end{polaraxis}
\end{tikzpicture}
\caption{Orthogonal axes (0--5) for \Prb, \Sch, \Ctl, \Swm.}
\label{fig:opt_radar_2}
\end{WideFig}
% ---------------------------
\subsection{Artificial Immune Systems (AIS) in OPT}
% ---------------------------
It is useful to show how OPT-Code specifications can be derived for a technique that is inherently hybrid.
Artificial Immune Systems (AIS) instantiate computation via biomimetic mechanisms drawn from adaptive immunity. Their operative core combines (i) population-level \emph{variation and selection} (somatic hypermutation, clonal expansion, memory) and (ii) distributed, locally interacting agents (cells, idiotypic networks), often with (iii) probabilistic fusion of uncertain signals. In OPT, this places AIS primarily in \Evo\ and \Swm, with frequent couplings to \Prb\ and occasional \Sch/\Ctl\ layers depending on task and implementation.
\paragraph{Canonical families and OPT placement.}
\begin{itemize}
\item \textbf{Clonal selection \& affinity maturation (CLONALG, aiNet).} A population of detectors/antibodies $\{a_i\}$ undergoes clone--mutate--select cycles driven by affinity to antigens $x$. OPT: \textbf{\Evo+\Swm} (often $+$\Prb).\\
Affinity (bitstrings; Hamming distance $d_H$): $\mathrm{aff}(x,a)=1-\frac{d_H(x,a)}{|x|}$. Clone count $n_i \propto \mathrm{aff}(x,a_i)$; hypermutation rate $\mu_i=f(\mathrm{aff})$ (typically inversely proportional).
\item \textbf{Negative Selection Algorithms (NSA).} Generate detectors that avoid ``self'' set $\mathcal S$ and cover $\mathcal X\setminus \mathcal S$. OPT: \textbf{\Evo/\Sch} ($+$\Prb\ for thresholded matching).\\
Objective: choose $D$ s.t. $\forall d\in D: d\notin \mathcal S$ and coverage $\Pr[\mathrm{match}(x,d)\mid x\notin \mathcal S]\ge \tau$.
\item \textbf{Immune network models (idiotypic).} Interacting clones stimulate/suppress each other; dynamics produce memory and regulation. OPT: \textbf{\Swm+\Evo} (sometimes $+$\Ctl).\\
Skeleton dynamics: $\dot a_i=\sum_j s_{ij}a_j-\sum_j \sigma_{ij}a_ia_j-\delta a_i$ with stimulation $s_{ij}$, suppression $\sigma_{ij}$, decay $\delta$.
\item \textbf{Dendritic Cell Algorithm (DCA) / Danger Theory.} Cells fuse PAMP/danger/safe signals to decide anomaly labeling; aggregation over a population provides robust detection. OPT: \textbf{\Swm+\Prb} (optionally $+$\Evo\ if online adaptation is added).
\end{itemize}
\paragraph{OPT-Code exemplars.}
\begin{quote}\small
\texttt{CLONALG: OPT=Evo+Swm; Rep=bitstring; Obj=affinity; Data=labels$\mid$unlabeled; Time=gens; Human=low}\\
\texttt{aiNet: OPT=Evo+Swm; Rep=realvector; Obj=affinity+diversity; Time=gens}\\
\texttt{NSA (anomaly): OPT=Evo/Sch+Prb; Rep=bitstring; Obj=coverage; Data=self/nonself; Time=gens}\\
\texttt{DCA: OPT=Swm+Prb; Rep=signals; Obj=anomaly-score; Time=online}\\
\texttt{Idiotypic control: OPT=Swm+Ctl; Rep=rules; Obj=stability+coverage; Time=online}
\end{quote}
\paragraph{Where biology and OPT coincide.}
Somatic hypermutation $+$ selection $\to$ \Evo; massive agent concurrency and local rules $\to$ \Swm; uncertainty fusion (signal weighting, thresholds) $\to$ \Prb; homeostatic regulation $\to$ \Ctl; detector-set coverage and complement generation $\to$ \Sch.
\paragraph{Assurance considerations.}
Key failure modes are coverage gaps (missed anomalies), detector drift, and instability in network dynamics. Assurance therefore calls for (i) held-out self/non-self tests, (ii) diversity and coverage metrics, (iii) stability analysis of interaction graphs, and (iv) calibration of anomaly thresholds (if \Prb). These layer cleanly with risk-management frameworks (NIST AI RMF, ISO/IEC 23053) while OPT communicates mechanism.
% ---------------------------
\section{Discussion: Why OPT Supersedes Signal-Based Taxonomies}
% ---------------------------
\paragraph{Mechanism clarity.} \Lrn--\Swm encode distinct improvement/decision operators (gradient, selection, resolution, inference, search, feedback, collective rules).
\paragraph{Biological alignment.} OPT mirrors canonical biological mechanisms (plasticity, natural selection, Bayesian cognition, optimal feedback control, etc.).
\paragraph{Compact completeness.} Seven bins cover mainstream AI while enabling crisp hybrid composition; short names and hybrid syntax convey the rest.
\paragraph{Governance usability.} Mechanism-aware controls attach naturally per class (Table~\ref{tab:OPT-risk}).
\subsection{Reclassification of Classic Systems}
\begin{table}[htbp]
\centering
\caption{Classic systems: historical labels vs.\ OPT placement (short names only).}
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{@{}p{3.4cm}p{2.7cm}p{2.4cm}@{}}
\toprule
\textbf{System} & \textbf{Prior label} & \textbf{OPT (short)}\\
\midrule
XCON / R1 & Expert system & \Sym \\
CLIPS & Expert shell & \Sym \\
Instar/Outstar & Neural rules & \Lrn \\
Backprop & Supervised NN & \Lrn \\
ART 1/2 & Unsupervised NN & \Lrn \\
LMS/ADALINE & Supervised NN & \Lrn \\
Hopfield--Tank TSP & Neural optimization & \Lrn\;(+\Sch) \\
Boltzmann Machines & Energy-based NN & \Lrn \\
Fuzzy Logic Control & Soft computing & \Ctl\;(+\Sym) \\
Genetic Algorithms & Evolutionary & \Evo \\
Genetic Programming & Program induction & \Evo \\
Symbolic Regression & Model discovery & \Evo\;(+\Sym) \\
PSO & Swarm optimization & \Swm\;(+\Evo) \\
A*/STRIPS/GraphPlan & Search/planning & \Sch\;(+\Sym) \\
Kalman/LQR/MPC & Estimation/control & \Ctl \\
\bottomrule
\end{tabular}
\label{tab:classicOPT}
\end{table}
\subsection{On “Everything is a Spin Glass”: Scope and Limits}
Energy formulations fit symmetric Hopfield/BM subsets but fail to subsume asymmetric architectures, symbolic proof search, population dynamics, or LQG control; complexity frontiers also differ. OPT preserves energy insights without overreach.
% ---------------------------
\section{Conclusion}
% ---------------------------
OPT provides a formal, biologically grounded taxonomy that clarifies mechanisms and hybrids and supports governance. We encourage standards bodies to adopt short-name OPT identifiers and hybrid syntax in system documentation.
% ---------------------------
\appendix
\section{OPT-Code v1.0: Naming Convention}
\label{app:optcode}
\paragraph{Purpose.} Provide compact, semantically transparent names that self-identify an AI system's operative mechanism(s). These are the \emph{only} public OPT names; legacy signal types remain descriptive but are not taxonomic.
\subsection*{Roots (frozen set in v1.0)}
\begin{center}
\begin{tabular}{@{}lll@{}}
\toprule
\textbf{Short} & \textbf{Name} & \textbf{Mechanism}\\
\midrule
\Lrn & Learnon & Parametric learning (loss/likelihood/return) \\
\Evo & Evolon & Population adaptation (variation/selection/inheritance) \\
\Sym & Symbion & Symbolic inference (rules/constraints/proofs) \\
\Prb & Probion & Probabilistic inference (posteriors/ELBO) \\
\Sch & Scholon & Search \& planning (heuristics/DP/graph) \\
\Ctl & Controlon & Control \& estimation (feedback/Kalman/LQR/MPC) \\
\Swm & Swarmon & Collective/swarm (stigmergy/distributed rules) \\
\bottomrule
\end{tabular}
\end{center}
\subsection*{Composition syntax}
\begin{itemize}[leftmargin=1.2em]
\item \hyb{A+B}: co-operative mechanisms (e.g., \hyb{Lrn+Sch}).
\item \hyb{A/B}: hierarchical nesting, outer/inner (e.g., \hyb{Evo/Lrn}).
\item \hyb{A\{B,C\}}: parallel ensemble (e.g., \hyb{Sym\{Lrn,Prb\}}).
\item \hyb{[A→B]}: sequential pipeline (e.g., \hyb{[Lrn→Ctl]}).
\end{itemize}
\subsection*{Attributes (orthogonal descriptors)}
Optional, mechanism-agnostic, appended after a semicolon:
\[
\text{\small\tt OPT=Evo/Lrn+Ctl; Rep=param; Obj=fitness; Data=sim; Time=gen; Human=low}
\]
Keys: \texttt{Rep} (representation), \texttt{Locus}, \texttt{Obj}, \texttt{Data}, \texttt{Time}, \texttt{Human}.
\subsection*{Grammar (ABNF)}
\begin{verbatim}
opt-spec = "OPT=" compose [ ";" attrs ]
compose = term / compose "+" term / compose "/" term
/ "[" compose "→" compose "]"
/ term "{" compose *("," compose) "}"
term = "Lrn" / "Evo" / "Sym" / "Prb" / "Sch" / "Ctl" / "Swm"
attrs = attr *( ";" attr )
attr = key "=" value
key = 1*(ALPHA)
value = 1*(ALPHA / DIGIT / "-" / "_" / "." )
\end{verbatim}
\subsection*{Stability and change control}
\textbf{S1 (Root freeze).} The seven roots above are frozen for OPT-Code v1.0.
\textbf{S2 (Extensions via attributes).} New nuance is expressed via attributes, not new roots.
\textbf{S3 (Mechanism distinctness).} Proposals to add a root in a future major version must prove a distinct operational mechanism not subsumable by existing roots.
\textbf{S4 (Compatibility).} Parsers may accept legacy aliases but must render short names only.
\textbf{S5 (Priority).} The first published mapping of a system's OPT-Code (with its mathematical operators) has naming priority; deviations must be justified.
% --- Hybrid ancestry diagram (for readability)
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[
node distance=8mm and 14mm,
every node/.style={font=\small},
mech/.style={rounded corners, draw=black, very thick, inner sep=4pt, align=center},
hyb/.style={rounded corners, draw=black!60, dashed, inner sep=3pt, align=center},
->, >=Latex
]
% Roots
\node[mech] (L) {\Lrn};
\node[mech, right=of L] (S) {\Sch};
\node[mech, right=of S] (C) {\Ctl};
\node[mech, below=of L] (E) {\Evo};
\node[mech, right=of E] (Y) {\Sym};
\node[mech, right=of Y] (P) {\Prb};
\node[mech, below=of E] (W) {\Swm};
% Hybrids (examples)
\node[hyb, above=6mm of $(L)!0.5!(S)$] (LS) {\hyb{Lrn+Sch}\\ \footnotesize(AlphaZero-type)};
\node[hyb, above=6mm of $(L)!0.5!(C)$] (LC) {\hyb{Lrn+Ctl}\\ \footnotesize(model-based control)};
\node[hyb, below=6mm of $(L)!0.5!(E)$] (EL) {\hyb{Evo/Lrn}\\ \footnotesize(neuroevolution)};
\node[hyb, below=6mm of $(L)!0.5!(Y)$] (LY) {\hyb{Lrn+Sym}\\ \footnotesize(neuro-symbolic)};
\node[hyb, below=6mm of $(P)!0.5!(C)$] (PC) {\hyb{Prb+Ctl}\\ \footnotesize(Bayesian control)};
\node[hyb, below=6mm of $(E)!0.5!(W)$] (EW) {\hyb{Swm+Evo}\\ \footnotesize(swarm-evolution)};
% Edges
\draw (L) -- (LS); \draw (S) -- (LS);
\draw (L) -- (LC); \draw (C) -- (LC);
\draw (E) -- (EL); \draw (L) -- (EL);
\draw (L) -- (LY); \draw (Y) -- (LY);
\draw (P) -- (PC); \draw (C) -- (PC);
\draw (E) -- (EW); \draw (W) -- (EW);
\end{tikzpicture}
\caption{Hybrid “ancestry” diagram: short-name roots (solid) and exemplar hybrids (dashed).}
\label{fig:opt-hybrid-tree}
\end{figure}


@ -0,0 +1,17 @@
% ---------------------------
\section{Comparative Analysis, Completeness, and Objections}
\label{sec:analysis}
% ---------------------------
\subsection{Biological--Artificial Correspondences}
Each OPT class aligns with a biological mechanism (plasticity, natural selection, structured reasoning, Bayesian cognition, deliberative planning, optimal feedback control, and distributed coordination). Shared operators in Sec.~\ref{sec:math} support cross-domain guarantees.
\subsection{Coverage, Hybrids, and Orthogonal Descriptors}
Hybrids are explicit (e.g., \hyb{Lrn+Sch} AlphaZero, \hyb{Lrn+Sym} neuro-symbolic, \hyb{Evo/Lrn} neuroevolution). Orthogonal axes capture representation, locus of change, objective, data regime, timescale, and human participation.
\subsection{Objections and Responses}
\textbf{Reduction to optimization.} Mechanisms imply distinct guarantees/hazards (data leakage vs.\ fitness misspecification vs.\ rule brittleness). NFL cautions against collapsing mechanisms.
\textbf{Hybrid blurring.} OPT treats compositions as first-class; the notation discloses “what changes where, on what objective, and on what timescale.”
\textbf{Regulatory simplicity.} Seven bins appear minimal for coverage; the short names keep disclosures compact and meaningful.


@ -0,0 +1,5 @@
% ---------------------------
\section{Conclusions}
% ---------------------------
OPT provides a formal, biologically grounded taxonomy that clarifies mechanisms and hybrids and supports governance. We encourage standards bodies to adopt short-name OPT identifiers and hybrid syntax in system documentation.


@ -0,0 +1,142 @@
\section{Design and Governance with OPT--Intent and OPT--Code}
\label{sec:design-governance}
A central motivation for the Operational Premise Taxonomy (OPT) is to support
not only the analysis of existing AI systems, but also the design and
governance of systems throughout their lifecycle. Most established AI
documentation frameworks focus on models that already exist---for example,
Model Cards, AI Service Cards, or post-hoc documentation embedded in
software-engineering artefacts. In contrast, OPT provides an explicit
mechanism-level vocabulary that can be applied \emph{before}, \emph{during},
and \emph{after} implementation.
To this end, we distinguish two complementary artefacts:
\emph{OPT--Intent}, a design-time declaration of planned mechanisms, goals,
constraints, and risks; and \emph{OPT--Code}, a run-time classification of the
system as implemented. Together, these artefacts form a governance substrate
that is lightweight, expressive, and compatible with software-architecture
practices and AI governance frameworks.
\subsection{OPT--Intent as a Design-Time Mechanism Declaration}
OPT--Intent expresses the \emph{intended} operative mechanisms
(\Lrn, \Evo, \Sym, \Prb, \Sch, \Ctl, \Swm), the domain-level goal, key
constraints, anticipated risks, and the deployment context. The notation
supports early-stage architectural reasoning:
\begin{quote}\ttfamily
INTENT-OPT = Sch/Evo \\
INTENT-GOAL = robust-production-schedule-under-dynamic-constraints \\
INTENT-CONSTRAINTS = real-time, explainable, limited-human-oversight \\
INTENT-RISKS = local-minima, premature-convergence \\
INTENT-CONTEXT = manufacturing-decision-support
\end{quote}
This declaration resembles a focused architectural decision record (ADR), but
is grounded in OPT's mechanism vocabulary. Whereas classical goal-oriented
requirements engineering (GORE) frameworks such as KAOS, i\*, or Tropos
provide high-level goal models, OPT--Intent provides a mechanism-centered
annotation that connects those goals to families of computational approaches.
\subsection{OPT--Code as an Implementation-Time Mechanism Description}
Once a system is implemented, its operative mechanisms can be classified via
OPT--Code:
\begin{quote}\ttfamily
OPT=Evo/Sch/Sym; Rep=permutations+rules; Obj=production-cost; \\
Data=inventory+constraints; Time=generations+online-adjust; Human=medium
\end{quote}
OPT--Code reflects the \emph{actual} mechanisms as they appear in the final
architecture and implementation. Comparing OPT--Intent with OPT--Code provides
a principled way to detect architectural drift, unplanned mechanism additions,
and deviations from original constraints.
\subsection{Alignment Analysis: Intent vs.~Implementation}
The relationship between OPT--Intent and OPT--Code supports alignment analysis
across the AI system lifecycle:
\begin{itemize}
\item \textbf{Mechanism alignment:} Whether the realized mechanisms match
the intended roots, or whether additions (e.g., \Sym~for explainability)
or substitutions introduce new behavior.
\item \textbf{Objective alignment:} Whether the objective in OPT--Code (Obj)
is consistent with the purpose in INTENT-GOAL.
\item \textbf{Constraint alignment:} Whether the implementation respects
INTENT-CONSTRAINTS (e.g., real-time, explainability, human oversight).
\item \textbf{Risk evolution:} Whether realized mechanisms introduce
additional risks relative to INTENT-RISKS (e.g., adding learned
components introduces data-dependence).
\end{itemize}
This analysis can be automated with an ``OPT--Intent Alignment Evaluator'' in
LLM-based workflows, producing an alignment verdict and score.
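As a minimal illustration of such an evaluator's core check (a sketch only, not part of the OPT specification), the mechanism-alignment step can be expressed in a few lines of Python; the three-letter root-extraction rule, the function names, and the example strings below are assumptions chosen to mirror the declarations above.
\begin{verbatim}
import re

def roots(spec: str) -> set[str]:
    """Extract OPT root names (Lrn, Evo, ...) from a composition string."""
    return set(re.findall(r"[A-Z][a-z]{2}", spec))

def mechanism_alignment(intent_opt: str, code_opt: str) -> dict:
    """Compare intended vs. realized roots and report drift."""
    intended, realized = roots(intent_opt), roots(code_opt)
    union = intended | realized
    return {
        "jaccard": len(intended & realized) / len(union) if union else 1.0,
        "added": sorted(realized - intended),    # unplanned mechanisms
        "missing": sorted(intended - realized),  # planned but absent
    }

# INTENT-OPT = Sch/Evo versus OPT=Evo/Sch/Sym (the examples above)
print(mechanism_alignment("Sch/Evo", "Evo/Sch/Sym"))
# -> jaccard approx. 0.67, added ['Sym'], missing []
\end{verbatim}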
\subsection{Integration with Existing Governance Artefacts}
OPT aligns with and supplements existing governance frameworks:
\paragraph{AI Documentation (Model Cards, AI Service Cards).}
Model Cards and similar frameworks capture purpose, data provenance,
limitations, and performance characteristics of trained models. These artefacts
begin at model completion. OPT--Intent supplements them with a \emph{design
origin}, and OPT--Code provides a \emph{mechanism-centric summary} useful for
governance, reproducibility, and safety assessments.
\paragraph{Architecture Decision Records (ADRs).}
ADRs record the rationale for major architectural decisions. OPT--Intent
functions as a structured, mechanism-focused ADR, intended to be referenced in
downstream ADRs describing implementation choices and trade-offs.
\paragraph{Safety, Risk, and Impact Assessments.}
Regulatory frameworks such as the OECD AI classification or NIST AI Risk
Management Framework classify AI systems according to use, risk, and context.
OPT complements these by classifying operative mechanisms. Mechanism-level
classification is critical because risk profiles are often mechanism-dependent:
population-based adaptation (\Evo), closed-loop control (\Ctl), and
probabilistic inference (\Prb) each generate distinct failure modes.
\paragraph{GORE and Requirements Engineering.}
OPT--Intent is compatible with KAOS, i\*, and Tropos goal structures, providing
a compact mapping from stakeholder goals to operative mechanisms. Instead of
treating ``use AI'' as a monolithic design choice, OPT forces the mechanism to
be named explicitly.
\subsection{Lifecycle Governance with OPT}
An AI system moves through phases of design, implementation, deployment,
revision, and decommissioning. OPT supports governance at each phase:
\begin{enumerate}
\item \textbf{Design:} Authors specify OPT--Intent and identify mechanistic
justifications and constraints.
\item \textbf{Implementation:} OPT--Code is generated and compared with
Intent for architectural drift.
\item \textbf{Evaluation:} OPT classifiers, evaluators, and adjudicators
check mechanism correctness, formatting, and risk implications.
\item \textbf{Deployment:} OPT--Code informs safety monitoring, audit logs,
and mechanism-specific risk controls (e.g., for \Ctl~or \Evo~systems).
\item \textbf{Revision and re-training:} OPT alignment is reassessed when
system behavior changes or new mechanisms are introduced.
\item \textbf{Documentation \& reporting:} OPT--Intent and OPT--Code form
part of a long-term audit trail, linking design rationale to
implemented system behavior.
\end{enumerate}
\subsection{AI Design Assistants and Automated Governance}
OPT also provides a structured interface for LLM-based design assistants.
Given a functional goal or stakeholder requirement, an OPT-aware model can
produce candidate OPT--Intent declarations and propose mechanism families
suitable for achieving the goal. Downstream evaluation and adjudication
prompts make it possible to manage and audit these proposals automatically.
Such workflows enable a novel form of governance: mechanism-level
traceability. Instead of asking only whether a system is ``fair,'' ``safe,'' or
``performant,'' practitioners can ask whether its mechanisms match the intended
design, whether mechanism additions add new risks, and whether the alignment
between purpose and implementation is conserved over time. OPT thus becomes a
bridge between requirements engineering, architectural practice, risk
governance, and the technical analysis of AI systems.

View File

@ -0,0 +1,40 @@
% ---------------------------
\section{Discussion: Why OPT Supersedes Signal-Based Taxonomies}
% ---------------------------
\paragraph{Mechanism clarity.} \Lrn--\Swm encode distinct improvement/decision operators (gradient, selection, resolution, inference, search, feedback, collective rules).
\paragraph{Biological alignment.} OPT mirrors canonical biological mechanisms (plasticity, natural selection, Bayesian cognition, optimal feedback control, etc.).
\paragraph{Compact completeness.} Seven bins cover mainstream AI while enabling crisp hybrid composition; short names and hybrid syntax convey the rest.
\paragraph{Governance usability.} Mechanism-aware controls attach naturally per class (Table~\ref{tab:OPT-risk}).
\subsection{Reclassification of Classic Systems}
\begin{table}[htbp]
\centering
\caption{Classic systems: historical labels vs.\ OPT placement (short names only).}
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{@{}p{3.4cm}p{2.7cm}p{2.4cm}@{}}
\toprule
\textbf{System} & \textbf{Prior label} & \textbf{OPT (short)}\\
\midrule
XCON / R1 & Expert system & \Sym \\
CLIPS & Expert shell & \Sym \\
Instar/Outstar & Neural rules & \Lrn \\
Backprop & Supervised NN & \Lrn \\
ART 1/2 & Unsupervised NN & \Lrn \\
LMS/ADALINE & Supervised NN & \Lrn \\
Hopfield--Tank TSP & Neural optimization & \Lrn\;(+\Sch) \\
Boltzmann Machines & Energy-based NN & \Lrn \\
Fuzzy Logic Control & Soft computing & \Ctl\;(+\Sym) \\
Genetic Algorithms & Evolutionary & \Evo \\
Genetic Programming & Program induction & \Evo \\
Symbolic Regression & Model discovery & \Evo\;(+\Sym) \\
PSO & Swarm optimization & \Swm\;(+\Evo) \\
A*/STRIPS/GraphPlan & Search/planning & \Sch\;(+\Sym) \\
Kalman/LQR/MPC & Estimation/control & \Ctl \\
\bottomrule
\end{tabular}
\label{tab:classicOPT}
\end{table}
\subsection{On “Everything is a Spin Glass”: Scope and Limits}
Energy-based formulations capture the symmetric Hopfield/Boltzmann subset, but they do not subsume asymmetric architectures, symbolic proof search, population dynamics, or LQG control, and their complexity frontiers differ. OPT preserves the insights of energy formulations without overextending them.

View File

@ -0,0 +1,24 @@
% ---------------------------
\section{Examples and Mapping}
% ---------------------------
\begin{table}[htbp]
\centering
\caption{Representative paradigms mapped to OPT.}
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{@{}p{3.9cm}p{3.6cm}p{2.2cm}@{}}
\toprule
\textbf{Type / Implementation} & \textbf{Examples} & \textbf{OPT (short)}\\
\midrule
NN/Transformer (GD) & CNN, LSTM, attention & \Lrn\\
Reinforcement learning & DQN, PG, AC & \Lrn\;(+\Sch,\,+\Ctl)\\
Evolutionary algorithms & GA, GP, CMA-ES & \Evo\\
Swarm intelligence & ACO, PSO & \Swm\;(+\Evo)\\
Expert systems & Prolog, Mycin, XCON & \Sym\\
Probabilistic models & BN, HMM, factor graphs & \Prb\\
Search \& planning & A*, MCTS, STRIPS & \Sch\\
Control \& estimation & PID, LQR, KF/MPC & \Ctl\\
\bottomrule
\end{tabular}
\label{tab:OPTmap}
\end{table}

View File

@ -0,0 +1,38 @@
% --- Hybrid ancestry diagram (for readability)
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[
node distance=8mm and 14mm,
every node/.style={font=\small},
mech/.style={rounded corners, draw=black, very thick, inner sep=4pt, align=center},
hyb/.style={rounded corners, draw=black!60, dashed, inner sep=3pt, align=center},
->, >=Latex
]
% Roots
\node[mech] (L) {\Lrn};
\node[mech, right=of L] (S) {\Sch};
\node[mech, right=of S] (C) {\Ctl};
\node[mech, below=of L] (E) {\Evo};
\node[mech, right=of E] (Y) {\Sym};
\node[mech, right=of Y] (P) {\Prb};
\node[mech, below=of E] (W) {\Swm};
% Hybrids (examples)
\node[hyb, above=6mm of $(L)!0.5!(S)$] (LS) {\hyb{Lrn+Sch}\\ \footnotesize(AlphaZero-type)};
\node[hyb, above=6mm of $(L)!0.5!(C)$] (LC) {\hyb{Lrn+Ctl}\\ \footnotesize(model-based control)};
\node[hyb, below=6mm of $(L)!0.5!(E)$] (EL) {\hyb{Evo/Lrn}\\ \footnotesize(neuroevolution)};
\node[hyb, below=6mm of $(L)!0.5!(Y)$] (LY) {\hyb{Lrn+Sym}\\ \footnotesize(neuro-symbolic)};
\node[hyb, below=6mm of $(P)!0.5!(C)$] (PC) {\hyb{Prb+Ctl}\\ \footnotesize(Bayesian control)};
\node[hyb, below=6mm of $(E)!0.5!(W)$] (EW) {\hyb{Swm+Evo}\\ \footnotesize(swarm-evolution)};
% Edges
\draw (L) -- (LS); \draw (S) -- (LS);
\draw (L) -- (LC); \draw (C) -- (LC);
\draw (E) -- (EL); \draw (L) -- (EL);
\draw (L) -- (LY); \draw (Y) -- (LY);
\draw (P) -- (PC); \draw (C) -- (PC);
\draw (E) -- (EW); \draw (W) -- (EW);
\end{tikzpicture}
\caption{Hybrid “ancestry” diagram: short-name roots (solid) and exemplar hybrids (dashed).}
\label{fig:opt-hybrid-tree}
\end{figure}

View File

@ -0,0 +1,48 @@
% --- OPT circle landscape (auto-wide)
\begin{WideFig}
\centering
\begin{tikzpicture}[
node distance=2cm,
every node/.style={font=\small},
optnode/.style={circle, draw=black, very thick, minimum size=11mm, align=center},
hybridedge/.style={-Latex, very thick},
weakedge/.style={-Latex, dashed, thick},
legendbox/.style={draw, rounded corners, inner sep=3pt, font=\footnotesize},
]
\def\R{4.9}
\path
(90:\R) node[optnode] (L) {Lrn}
(38.6:\R) node[optnode] (S) {Sch}
(-12.8:\R) node[optnode] (Y) {Sym}
(-64.2:\R) node[optnode] (P) {Prb}
(-115.6:\R) node[optnode] (C) {Ctl}
(-167:\R) node[optnode] (W) {Swm}
(141.4:\R) node[optnode] (E) {Evo};
\draw[hybridedge] (L) to[bend left=10] (S);
\draw[hybridedge] (S) to[bend left=10] (L);
\draw[hybridedge] (L) to[bend left=10] (Y);
\draw[hybridedge] (Y) to[bend left=10] (L);
\draw[hybridedge] (L) to[bend left=10] (E);
\draw[hybridedge] (E) to[bend left=10] (L);
\draw[hybridedge] (L) to[bend left=10] (C);
\draw[hybridedge] (C) to[bend left=10] (L);
\draw[weakedge] (S) -- (Y);
\draw[weakedge] (P) -- (L);
\draw[weakedge] (P) -- (S);
\draw[weakedge] (W) -- (E);
\draw[weakedge] (C) -- (S);
\draw[weakedge] (P) -- (C);
\node[legendbox, anchor=north east] at ($(current bounding box.north east)+(-0.2, 1.2)$) {
\begin{tabular}{@{}l@{}}
\textbf{Solid:} prominent hybrids (\hyb{Lrn+Sch}, \hyb{Lrn+Sym}, \hyb{Lrn+Evo}, \hyb{Lrn+Ctl}) \\
\textbf{Dashed:} frequent couplings (e.g., \hyb{Prb+Ctl}, \hyb{Sch+Sym}, \hyb{Swm+Evo}) \\
\end{tabular}
};
\end{tikzpicture}
\caption{OPT landscape using short names only: \Lrn, \Evo, \Sym, \Prb, \Sch, \Ctl, \Swm.}
\label{fig:opt-landscape}
\end{WideFig}

View File

@ -0,0 +1,27 @@
\begin{WideFig}
\centering
\begin{tikzpicture}
\begin{polaraxis}[
width=\figureW, height=\figureH,
ymin=0, ymax=5,
grid=both,
xtick={0,60,120,180,240,300},
xticklabels={Rep.,Locus,Obj.,Data,Time,Human},
legend columns=3,
legend style={draw=none, at={(0.5,1.03)}, anchor=south, font=\small},
tick label style={font=\small},
]
% Lrn, Evo, Sym
\RadarPoly{very thick, mark=*, mark options={solid}, mark size=2pt}{0}{0}{4}{4}{4}{1}
\addlegendentry{\Lrn}
\RadarPoly{densely dashed, very thick, mark=square*, mark options={solid}, mark size=2.2pt}{2}{5}{5}{2}{5}{2}
\addlegendentry{\Evo}
\RadarPoly{dashdotdotted, very thick, mark=triangle*, mark options={solid}, mark size=2.4pt}{5}{4}{4}{5}{3}{5}
\addlegendentry{\Sym}
\end{polaraxis}
\end{tikzpicture}
\caption{Orthogonal axes (0--5) for \Lrn, \Evo, \Sym.}
\label{fig:opt-radar-1}
\end{WideFig}

View File

@ -0,0 +1,28 @@
\begin{WideFig}
\centering
\begin{tikzpicture}
\begin{polaraxis}[
width=\figureW, height=\figureH,
ymin=0, ymax=5,
grid=both,
xtick={0,60,120,180,240,300},
xticklabels={Rep.,Locus,Obj.,Data,Time,Human},
legend columns=4,
legend style={draw=none, at={(0.5,1.03)}, anchor=south, font=\small},
tick label style={font=\small},
]
% Prb, Sch, Ctl, Swm
\RadarPoly{very thick, loosely dotted, mark=diamond*, mark options={solid}, mark size=2.2pt}{4}{3}{5}{4}{3}{3}
\addlegendentry{\Prb}
\RadarPoly{densely dashed, very thick, mark=*, mark options={solid}, mark size=2pt}{3}{3}{4}{2}{3}{3}
\addlegendentry{\Sch}
\RadarPoly{dashdotdotted, very thick, mark=square*, mark options={solid}, mark size=2.2pt}{2}{3}{5}{3}{5}{3}
\addlegendentry{\Ctl}
\RadarPoly{solid, very thick, mark=triangle*, mark options={solid}, mark size=2.4pt}{3}{4}{3}{2}{3}{2}
\addlegendentry{\Swm}
\end{polaraxis}
\end{tikzpicture}
\caption{Orthogonal axes (0--5) for \Prb, \Sch, \Ctl, \Swm.}
\label{fig:opt-radar-2}
\end{WideFig}

6
paper/pieces/intro.tex Normal file
View File

@ -0,0 +1,6 @@
% ---------------------------
\section{Introduction}
% ---------------------------
Regulatory texts frequently equate “AI” with three categories of \emph{learning signals}: supervised, unsupervised, and reinforcement learning \citep{EUAnnex,NISTRMF}. These categories emerged from neural/connectionist practice, not from the full breadth of artificial intelligence \citep{AIMA4}. We propose an alternative taxonomic axis: the \emph{operational premise}—the primary computational mechanism a system instantiates to improve, adapt, or decide. The resulting \emph{operational premise taxonomy}~(OPT) provides a transparent and consistent framework for compactly describing AI systems, including hybrids and pipelines. OPT retains biological analogs (learning vs.\ adaptation) while accommodating symbolic, probabilistic, search, control, and swarm paradigms.

View File

@ -0,0 +1,64 @@
\begin{verbatim}
When performing evaluation with local LLMs, here is general guidance on selection criteria and some concrete examples.
What you need from the model:
For OPT classification, the model needs:
Good code and algorithm understanding (to infer mechanism).
Decent instruction-following (to stick to the output format).
Basic reasoning about parallelism vs mechanism (with the explicit guidance you've added).
That generally points you to ~7B-14B “instruct” models with decent coding chops, rather than tiny 1-3B models.
General advice
Use instruct-tuned variants (e.g., Instruct / Chat / DPO) rather than base models.
Prefer models with good coding benchmarks (HumanEval, MBPP, etc.) because they're better at recognizing algorithm patterns.
For multi-step pipelines (Classifier, Evaluator, Adjudicator), you can:
Run them all on the same model, or
Use a slightly larger / better model for Evaluator + Adjudicator, and a smaller one for the Classifier.
Concrete model families (local-friendly)
A few commonly used open models in the ~7-14B range that are good candidates to try:
LLaMA 3 8B Instruct:
Very strong instruction following and general reasoning for its size, good for code and system-descriptions. Available through multiple runtimes (vLLM, Ollama, llamafile, etc.).
Mistral 7B Instruct (or derivative fine-tunes like OpenHermes, Dolphin, etc.):
Good general-purpose and coding performance; widely used in local setups. Good choice if you're already using Mistral-based stacks.
Qwen2 7B / 14B Instruct:
Strong multilingual and coding abilities; the 14B variant is particularly capable if you have the VRAM. Nice balance of reasoning and strict formatting.
Phi-3-mini (3.8B) instruct:
Much smaller, but surprisingly capable on reasoning tasks; might be borderline for very subtle OPT distinctions but could work as a classifier with careful prompting. Evaluator/Adjudicator roles might benefit from a larger model than this, though.
Code-oriented variants (if you're mostly classifying source code rather than prose):
“Code LLaMA” derivatives
“DeepSeek-Coder” style models
These can be quite good at recognizing patterns like GA loops, RL training loops, etc., though you sometimes need to reinforce the formatting constraints.
In a local stack, a reasonable starting configuration would be:
Classifier A: LLaMA 3 8B Instruct (maximal prompt)
Classifier B: Mistral 7B Instruct (minimal or maximal prompt)
Evaluator: Qwen2 14B Instruct (if you've got VRAM) or LLaMA 3 8B if not
Adjudicator: same as Evaluator
If you want to conserve resources, you can just use a single 7-8B model for all roles and rely on the explicit prompts plus your evaluator rubric to catch errors.
\end{verbatim}
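The starting configuration above can be captured as a small role-to-model map; the following sketch is illustrative only, and the model identifiers and prompt labels are placeholders to be replaced with whatever a given local runtime actually serves.
\begin{verbatim}
# Hypothetical role-to-model assignment mirroring the guidance above.
PIPELINE_ROLES = {
    "classifier_a": {"model": "llama-3-8b-instruct", "prompt": "maximal"},
    "classifier_b": {"model": "mistral-7b-instruct", "prompt": "minimal"},
    "evaluator":    {"model": "qwen2-14b-instruct",  "prompt": "evaluator"},
    "adjudicator":  {"model": "qwen2-14b-instruct",  "prompt": "adjudicator"},
}
\end{verbatim}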

View File

@ -0,0 +1,23 @@
% ---------------------------
\section{Mathematical Foundations and Biological Correspondences}
\label{sec:math}
% ---------------------------
\paragraph{Learnon (\Lrn).} Empirical risk minimization:
\begin{equation}
\theta^\star \in \arg\min_{\theta}\ \mathbb{E}_{(x,y)\sim \mathcal{D}}[ \ell(f_\theta(x),y) ] + \lambda \Omega(\theta),
\end{equation}
with gradient updates $\theta_{t+1}=\theta_t-\eta_t\nabla\widehat{\mathcal{L}}(\theta_t)$; RL maximizes $J(\pi)=\mathbb{E}_\pi[\sum_t \gamma^t r_t]$ in MDPs. \emph{Biology:}~ Hebbian/Oja \citep{Hebb1949,Oja1982}, reward-modulated prediction errors \citep{SuttonBarto2018}.
\paragraph{Evolon (\Evo).} Population pipeline $P_{t+1}=\mathcal{R}(\mathcal{M}(\mathcal{C}(P_t)))$ with fitness-driven selection. \emph{Biology:}~ Price equation $\Delta \bar{z}=\frac{\mathrm{Cov}(w,z)}{\bar{w}}+\frac{\mathbb{E}[w\Delta z]}{\bar{w}}$; replicator $\dot{p}_i=p_i(f_i-\bar{f})$ \citep{Price1970,TaylorJonker1978}.
\paragraph{Symbion (\Sym).} Resolution/unification; soundness and refutation completeness \citep{Robinson1965Resolution}.
\paragraph{Probion (\Prb).} Bayes $p(z|x)\propto p(x|z)p(z)$; VI via ELBO $\mathcal{L}(q)=\mathbb{E}_q[\log p(x,z)]-\mathbb{E}_q[\log q(z)]$; \emph{Biology:}~ Bayesian brain \citep{KnillPouget2004}.
\paragraph{Scholon (\Sch).} A* with admissible $h$ is optimally efficient; DP/Bellman updates $V_{k+1}(s)=\max_a[r(s,a)+\gamma\sum_{s'}P(s'|s,a)V_k(s')]$.
\paragraph{Controlon (\Ctl).} LQR minimizes quadratic cost in linear systems; Kalman filter provides MMSE state estimates in LQG \citep{Kalman1960,Pontryagin1962,TodorovJordan2002}.
\paragraph{Swarmon (\Swm).} PSO updates $v_i(t+1)=\omega v_i(t)+c_1 r_1(p_i-x_i)+c_2 r_2(g-x_i)$; ACO pheromone $\tau\leftarrow (1-\rho)\tau+\sum_k \Delta\tau^{(k)}$.
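To make the \Swm~update concrete, the following minimal NumPy sketch applies one PSO velocity/position step exactly as written above; the inertia and acceleration constants are illustrative values rather than recommendations.
\begin{verbatim}
import numpy as np

def pso_step(x, v, p_best, g_best, omega=0.7, c1=1.5, c2=1.5, rng=None):
    """One PSO update: v <- w*v + c1*r1*(p_i - x_i) + c2*r2*(g - x_i); x <- x + v."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = omega * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return x + v_new, v_new

# Ten particles in two dimensions, personal bests at current positions
x = np.zeros((10, 2)); v = np.zeros((10, 2))
x, v = pso_step(x, v, p_best=x.copy(), g_best=np.ones(2))
\end{verbatim}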

View File

@ -0,0 +1,21 @@
% ---------------------------
\section{Operational-Premise Taxonomy (OPT)}
% ---------------------------
Because OPT introduces several new labels, we present them here before turning to background and related work.
OPT classes are defined by dominant mechanism; hybrids are explicit compositions:
\begin{itemize}[leftmargin=1.6em]
\item \textbf{Learnon (\Lrn)} — Parametric learning within an individual (gradient/likelihood/return updates).
\item \textbf{Evolon (\Evo)} — Population adaptation via variation, selection, inheritance.
\item \textbf{Symbion (\Sym)} — Symbolic/logic inference over discrete structures (KB, clauses, proofs).
\item \textbf{Probion (\Prb)} — Probabilistic modeling and approximate inference (posteriors, ELBO).
\item \textbf{Scholon (\Sch)} — Deliberative search and planning (heuristics, DP, graph search).
\item \textbf{Controlon (\Ctl)} — Feedback control and state estimation in dynamical systems.
\item \textbf{Swarmon (\Swm)} — Collective/swarm coordination with local rules and emergence.
\end{itemize}
\noindent \emph{Hybrid notation.}~We use \hyb{A+B}~for co-operative mechanisms, \hyb{A/B}~for hierarchical nesting (outer/inner), \hyb{A\{B,C\}}~for parallel ensembles, and \hyb{[A→B]}~for pipelines (Appendix~\ref{app:optcode}).
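To make the notation concrete, the following minimal Python sketch extracts the root mechanisms named in a hybrid specification; it treats \texttt{+}, \texttt{/}, braces, brackets, and the pipeline arrow purely as separators and is an illustrative reading of the grammar, not a reference parser.
\begin{verbatim}
import re

VALID_ROOTS = {"Lrn", "Evo", "Sym", "Prb", "Sch", "Ctl", "Swm"}

def hybrid_roots(spec: str) -> list[str]:
    """Roots named in a spec such as 'Evo/Lrn+Ctl' or '[Sch->Lrn]',
    in order of first appearance."""
    tokens = re.split(r"[+/{},\[\]]|->|\u2192", spec)  # grammar separators
    seen = []
    for tok in (t.strip() for t in tokens):
        if tok in VALID_ROOTS and tok not in seen:
            seen.append(tok)
    return seen

assert hybrid_roots("Evo/Lrn+Ctl") == ["Evo", "Lrn", "Ctl"]
assert hybrid_roots("[Sch->Lrn]") == ["Sch", "Lrn"]
\end{verbatim}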

View File

@ -0,0 +1,43 @@
OPT-Code Evaluation Protocol
Inputs:
1) System description (code or prose)
2) Candidate OPT-Code line
3) Candidate rationale
Evaluation pass:
The prompt evaluator returns:
- Verdict: PASS / WEAK_PASS / FAIL
- Score: 0-100
- Issue categories: Format, Mechanism, Parallelism/Pipelines, Composition, Attributes
- Summary: short explanation
Acceptance threshold: PASS or WEAK_PASS with score >= 70.
Double annotation:
To improve reliability:
- Run classification with Model A and Model B (or two runs of same model)
- Evaluate both independently
Metrics:
- Exact-match OPT (binary)
- Jaccard similarity on root sets
- Levenshtein distance between OPT-Code strings
- Weighted mechanism agreement (semantic distances)
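(A minimal Python sketch of the first three metrics follows; the three-letter
root-extraction rule is an assumption, and weighted agreement would
additionally require a semantic-distance table.)

import re

ROOTS = {"Lrn", "Evo", "Sym", "Prb", "Sch", "Ctl", "Swm"}

def roots(opt_string):
    return {t for t in re.findall(r"[A-Z][a-z]{2}", opt_string) if t in ROOTS}

def exact_match(a, b):
    return roots(a) == roots(b)

def jaccard(a, b):
    ra, rb = roots(a), roots(b)
    return len(ra & rb) / len(ra | rb) if (ra | rb) else 1.0

def levenshtein(a, b):
    # dynamic-programming edit distance over the raw OPT-Code strings
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]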
Adjudication:
If A and B differ substantially:
- Provide both classifications and evaluator reports to an adjudicator model
- Adjudicator chooses the better one OR synthesizes a new one
- Re-run evaluator on the adjudicated OPT-Code
Quality metrics:
- Evaluator pass rate
- Inter-model consensus rate
- Root-level confusion matrix
- Parallelism/pipeline misclassification rate
Longitudinal tracking:
Archive all cases with metadata (system description, candidate codes,
verdicts, adjudications, timestamps, model versions) to track drift and
systematic biases.

View File

@ -0,0 +1,15 @@
% ---------------------------
\section{Orthogonal Axes and Risk Perspectives}
% ---------------------------
\paragraph{Secondary axes (orthogonal descriptors).}
\begin{itemize}[leftmargin=1.2em]
\item \textbf{Representation:} parametric vectors, symbols/logic, graphs, programs, trajectories, policies.
\item \textbf{Locus of Change:} parameters, structure/architecture, population composition, belief state, policy.
\item \textbf{Objective Type:} prediction, optimization, inference, control, search cost, constraint satisfaction.
\item \textbf{Timescale:} online vs.\ offline; within-run vs.\ across-generations.
\item \textbf{Data Regime:} none/synthetic, labeled, unlabeled, interactive reward.
\item \textbf{Human Participation:} expert-authored knowledge vs.\ learned vs.\ co-created.
\end{itemize}
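For machine-readable records, these descriptors can be stored alongside an OPT--Code; the following minimal Python sketch is one possible encoding (the field names follow the list above, the 0--5 scale mirrors the radar figures, and the example scores are purely illustrative).
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class OPTAxes:
    """Secondary descriptors, each scored 0-5 as in the radar plots."""
    representation: int  # Representation
    locus: int           # Locus of Change
    objective: int       # Objective Type
    timescale: int       # Timescale
    data: int            # Data Regime
    human: int           # Human Participation

# Illustrative profile for a gradient-trained model (scores are examples only)
example = OPTAxes(representation=1, locus=1, objective=4,
                  timescale=4, data=4, human=1)
\end{verbatim}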

View File

@ -0,0 +1,7 @@
% Bridge
\paragraph{Comparative landscape.}
Table~\ref{tab:opt_vs_frameworks} situates OPT alongside the best-known standards, policy instruments, and textbook structures.
Each of these prior frameworks serves an important function—shared vocabulary (ISO/IEC 22989), ML-system decomposition (ISO/IEC 23053), risk management (NIST AI RMF), usage contexts (NIST AI 200-1), multidimensional policy characterization (OECD), or regulatory stratification (EU AI Act).
However, they remain either technique-agnostic or focused solely on machine learning.
OPT complements them by supplying the missing layer: a stable, biologically grounded \emph{implementation taxonomy}~ that captures mechanism families across paradigms and defines a formal grammar for hybrid systems.

View File

@ -0,0 +1,44 @@
% =======================
% Shared body (no preamble)
% Accessibility: keep vector figures, larger sizes set by wrappers
% Wrappers must define:
% \twocoltrue or \twocolfalse
% \figureW, \figureH (for radar plots)
% Packages expected: tikz, pgfplots, booktabs, amsmath, amssymb, mathtools, hyperref, natbib (or ACM/IEEE styles)
% =======================
% --- Short names (public-only; no numeric codes)
\newcommand{\Lrn}{\textbf{Lrn}} % Learnon — Parametric learning
\newcommand{\Evo}{\textbf{Evo}} % Evolon — Population adaptation
\newcommand{\Sym}{\textbf{Sym}} % Symbion — Symbolic inference
\newcommand{\Prb}{\textbf{Prb}} % Probion — Probabilistic inference
\newcommand{\Sch}{\textbf{Sch}} % Scholon — Search & planning
\newcommand{\Ctl}{\textbf{Ctl}} % Controlon — Control & estimation
\newcommand{\Swm}{\textbf{Swm}} % Swarmon — Collective/swarm
\newcommand{\hyb}[1]{\textsc{#1}} % hybrid spec styling (e.g., \hyb{Lrn+Sch})
%\newcommand{\figureW}{0.95\textwidth}
%\newcommand{\figureH}{0.58\textwidth}
% --- Wide figure helper: figure* in two-column; figure in one-column
\newif\iftwocol
\providecommand{\figureW}{0.95\textwidth}
\providecommand{\figureH}{0.58\textwidth}
\newenvironment{WideFig}{\iftwocol\begin{figure*}\else\begin{figure}\fi}{\iftwocol\end{figure*}\else\end{figure}\fi}
% --- Wide table helper: table* in two-column; table in one-column
\newenvironment{WideTab}{\iftwocol\begin{table*}\else\begin{table}\fi}{\iftwocol\end{table*}\else\end{table}\fi}
% --- TikZ/PGF defaults
\pgfplotsset{compat=1.18}
% --- Radar helper: one polygon with six axes (Rep., Locus, Obj., Data, Time, Human)
\newcommand{\RadarPoly}[7]{%
% #1 style, #2..#7 = values on axes in order
\addplot+[#1] coordinates
{(0,#2) (60,#3) (120,#4) (180,#5) (240,#6) (300,#7) (360,#2)};
}

View File

@ -0,0 +1,22 @@
% ---------------------------
\section{Related Work: Existing Taxonomies and Frameworks}
% ---------------------------
Standards bodies and policy groups have invested heavily in AI definitions, lifecycle models, and governance instruments. However, none provides a compact, mechanism-centric taxonomy spanning \Lrn, \Evo, \Sym, \Prb, \Sch, \Ctl, and \Swm, nor an explicit grammar for hybrids.
\paragraph{Standards and terminology.}
ISO/IEC 22989 standardizes terms and core concepts for AI across stakeholders, serving as a definitional foundation rather than a technique taxonomy. ISO/IEC 23053 offers a functional block view for \emph{machine-learning-based}~ AI systems (data, training, inference, monitoring), which is valuable architecturally but limited to ML and therefore excludes non-ML pillars such as symbolic reasoning, control/estimation, and swarm/evolutionary computation \citep{ISO22989,ISO23053}.
\paragraph{Risk and management frameworks.}
NIST's AI Risk Management Framework (AI RMF 1.0) provides an implementation-agnostic process for managing AI risks (govern, map, measure, manage). Its companion \emph{AI Use Taxonomy}~ classifies human--AI task interactions and use patterns. Both are intentionally technique-agnostic: they can apply to any implementation class, but do not sort systems by operative mechanism \citep{NISTRMF,NISTAI2001}.
\paragraph{Policy classification tools.}
The OECD Framework for the Classification of AI Systems organizes systems along multi-dimensional policy axes (People \& Planet, Economic Context, Data \& Input, AI Model, Task \& Output). This is a powerful policy characterization instrument, yet it remains descriptive and multi-axis rather than a compact mechanism taxonomy with hybrid syntax \citep{OECDClass}.
\paragraph{Regulatory regimes.}
The EU Artificial Intelligence Act introduces risk-based classes (e.g., prohibited, high-risk, limited, minimal) and obligations, largely orthogonal to implementation specifics. Technique details matter for \emph{compliance evidence}, but the Act does not define a canonical implementation taxonomy \citep{EUAIAct}.
\paragraph{Academic precedents and surveys.}
The textbook tradition organizes AI by substantive pillars—search/planning, knowledge/logic, probabilistic reasoning, learning, and agents—closely aligning with the mechanism families in this paper but without proposing a stable naming code or formal hybrid grammar \citep{AIMA4}. Reinforcement learning texts formalize optimization and value iteration for \Lrn/\Sch~ couplings \citep{SuttonBarto2018}. Classical theory anchors \Prb~ (\citealp{KnillPouget2004}), \Ctl~ (\citealp{Kalman1960,Pontryagin1962,TodorovJordan2002}), and foundational dynamics for \Evo~ (\citealp{Price1970,TaylorJonker1978}). Learning rules for \Lrn~ include Hebbian and Oja's formulations \citep{Hebb1949,Oja1982}, while resolution proofs formalize \Sym~ \citep{Robinson1965Resolution}. No-Free-Lunch results motivate preserving multiple mechanisms rather than collapsing them into a single “optimization” bucket \citep{Wolpert1997}.
\paragraph{Gap and contribution.}
Taken together, these works motivate \emph{two layers}: (i) policy/lifecycle/risk instruments that are technique-agnostic and (ii) a compact, biologically grounded \emph{implementation taxonomy}~ with explicit hybrid composition. OPT fills the second layer with seven frozen roots and a grammar for hybrids, designed to interface cleanly with the first layer.

View File

@ -0,0 +1,24 @@
\begin{WideTab}[t]
\centering
\caption{Comparison of OPT with existing standards, policy frameworks, and textbook pillars.}
\renewcommand{\arraystretch}{1.12}
\begin{tabular}{@{}p{2.9cm}p{3.1cm}p{3.2cm}p{2.6cm}p{3.0cm}@{}}
\toprule
\textbf{Framework / Source} & \textbf{Primary Scope} & \textbf{Unit of Classification} & \textbf{Technique Coverage} & \textbf{Hybrid Handling / Intended Use} \\
\midrule
\textbf{OPT (this work)} & Implementation taxonomy & \textit{Operative mechanism} (\Lrn,\ \Evo,\ \Sym,\ \Prb,\ \Sch,\ \Ctl,\ \Swm) with composition grammar & Cross-paradigm (learning, symbolic, probabilistic, search, control, swarm, evolutionary) & Explicit hybrids via \hyb{+}, \hyb{/}, \hyb{\{\,\}}, \hyb{[\,\rightarrow\,]}; designed to interface with risk/process frameworks \\
\addlinespace[3pt]
ISO/IEC 22989:2022 \citep{ISO22989} & Concepts \& terminology & Vocabulary / definitions & Technique-agnostic & No hybrid grammar; supports common language across stakeholders \\
ISO/IEC 23053:2022 \citep{ISO23053} & ML system architecture & Functional blocks (data, training, inference, monitoring) & ML-centric; excludes non-ML pillars (e.g., \Sym,\ \Ctl,\ \Swm) & No explicit hybrid mechanism model; system design/process lens \\
NIST AI RMF 1.0 \citep{NISTRMF} & Risk management & Risk functions (Govern, Map, Measure, Manage) & Technique-agnostic & No mechanism taxonomy; governance and assurance guidance \\
NIST AI 200-1 \citep{NISTAI2001} & Use taxonomy & Human--AI task activities & Technique-agnostic & No hybrids; categorizes use contexts for evaluation \\
OECD AI Classification \citep{OECDClass} & Policy characterization & Multi-axis profile (context, data, model, task) & Broad; includes an “AI model” axis but not a formal mechanism taxonomy & No hybrid grammar; policy comparison and statistics \\
EU AI Act \citep{EUAIAct} & Regulation (risk-based) & Risk class (prohibited/high/limited/minimal) & Technique-agnostic & Hybrids irrelevant; compliance and obligations \\
AIMA (Russell \& Norvig) \citep{AIMA4} & Textbook organization & Pillars (search/planning, logic, probabilistic reasoning, learning, agents) & Broad coverage; closest to mechanism families & No standard naming or hybrid code; educational structure \\
\bottomrule
\end{tabular}
\vspace{4pt}
\footnotesize \textit{Notes.} OPT supplies a compact, biologically grounded \emph{implementation} taxonomy with a formal hybrid composition code. Standards and policy frameworks remain essential and complementary for vocabulary, lifecycle, risk, management, and regulatory obligations, but they are technique-agnostic or ML-specific and do not provide a mechanism-level naming scheme.
\label{tab:opt_vs_frameworks}
\end{WideTab}

View File

@ -0,0 +1,21 @@
\begin{table}[htbp]
\centering
\caption{Orthogonal descriptive axes and governance risks (abridged).}
\renewcommand{\arraystretch}{1.05}
\begin{tabular}{@{}p{1.25cm}p{3.5cm}p{3.9cm}@{}}
\toprule
\textbf{OPT} & \textbf{Primary Risks} & \textbf{Assurance Focus} \\
\midrule
\Lrn & Data leakage, reward hacking & Data governance, OOD tests, calibration \\
\Evo & Fitness misspecification & Proxy validation, replicates, constraints \\
\Sym & Rule brittleness, KB inconsistency & Provenance, formal verification \\
\Prb & Miscalibration, inference bias & Posterior predictive checks \\
\Sch & Heuristic inadmissibility & Optimality proofs, heuristic diagnostics \\
\Ctl & Instability, unmodeled dynamics & Stability margins, robustness \\
\Swm & Emergent instability & Swarm invariants, safety envelopes \\
\bottomrule
\end{tabular}
\label{tab:OPT-risk}
\end{table}

View File

@ -0,0 +1,21 @@
\begin{table}[htbp]
\centering
\caption{Representative paradigms mapped to OPT.}
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{@{}p{3.9cm}p{3.6cm}p{2.2cm}@{}}
\toprule
\textbf{Type / Implementation} & \textbf{Examples} & \textbf{OPT (short)}\\
\midrule
NN/Transformer (GD) & CNN, LSTM, attention & \Lrn\\
Reinforcement learning & DQN, PG, AC & \Lrn\;(+\Sch,\,+\Ctl)\\
Evolutionary algorithms & GA, GP, CMA-ES & \Evo\\
Swarm intelligence & ACO, PSO & \Swm\;(+\Evo)\\
Expert systems & Prolog, Mycin, XCON & \Sym\\
Probabilistic models & BN, HMM, factor graphs & \Prb\\
Search \& planning & A*, MCTS, STRIPS & \Sch\\
Control \& estimation & PID, LQR, KF/MPC & \Ctl\\
\bottomrule
\end{tabular}
\label{tab:OPTmap}
\end{table}

View File

@ -0,0 +1,487 @@
%Excellent --- this is exactly the right instinct. You're not just
%publishing a paper --- you're proposing to \emph{reformulate the
%conceptual taxonomy of AI}, which will draw both \textbf{methodological
%and political} scrutiny.
%Below is a \textbf{multi-stage verification and readiness procedure} you
%can adopt before public release, whether for arXiv, ACM, or journal
%submission. It combines academic rigor, reproducibility standards, and
%domain-specific validation for the ``taxonomy-proposal'' genre.
\documentclass[12pt]{article}
\usepackage{longtable}
\usepackage{amsmath,amsthm,mathtools}
\usepackage[a4paper,margin=1in]{geometry}
%\usepackage{times}
\usepackage[T1]{fontenc}
\usepackage{newtxtext,newtxmath} % unified serif + math fonts
\usepackage{microtype} % optional quality
%(If you switch to LuaLaTeX/XeLaTeX later, instead use
%\usepackage{fontspec}\setmainfont{TeX Gyre Termes}
\usepackage{natbib}
\usepackage{hyperref}
\usepackage{enumitem}
\usepackage{booktabs}
\usepackage{doi}
\usepackage{tikz}
\usetikzlibrary{arrows.meta,positioning,fit,calc}
\usepackage{pgfplots}
\usepgfplotslibrary{polar}
\usepackage{color}
\colorlet{shadecolor}{orange!15}
\usepackage{fancyvrb}
\usepackage{framed}
\definecolor{shadecolor}{RGB}{243,243,243}
% Shaded block (Pandoc-style)
\newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}}
% Highlighting as a true verbatim env (no trailing-token issues)
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
\makeatletter
\@for\tok:=NormalTok,ExtensionTok,KeywordTok,StringTok,CommentTok,FunctionTok\do{%
\expandafter\providecommand\csname \tok\endcsname[1]{##1}%
}
\makeatother
\newcommand{\Lrn}{\textbf{Lrn}} % Learnon — Parametric learning
\newcommand{\Evo}{\textbf{Evo}} % Evolon — Population adaptation
\newcommand{\Sym}{\textbf{Sym}} % Symbion — Symbolic inference
\newcommand{\Prb}{\textbf{Prb}} % Probion — Probabilistic inference
\newcommand{\Sch}{\textbf{Sch}} % Scholon — Search & planning
\newcommand{\Ctl}{\textbf{Ctl}} % Controlon — Control & estimation
\newcommand{\Swm}{\textbf{Swm}} % Swarmon — Collective/swarm
\newcommand{\hyb}[1]{\textsc{#1}} % hybrid spec styling (e.g., \hyb{Lrn+Sch})
% Toggles and figure sizes (larger for readability)
\newif\iftwocol
\twocolfalse
\newcommand{\figureW}{0.95\textwidth}
\newcommand{\figureH}{0.62\textwidth}
\begin{document}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{structural-and-citation-integrity-checks}{%
\subsection{🧩 1. Structural and Citation Integrity
Checks}\label{structural-and-citation-integrity-checks}}
\begin{longtable}[]{@{}lll@{}}
\toprule
\begin{minipage}[b]{0.08\columnwidth}\raggedright
Goal\strut
\end{minipage} & \begin{minipage}[b]{0.67\columnwidth}\raggedright
Verification Action\strut
\end{minipage} & \begin{minipage}[b]{0.15\columnwidth}\raggedright
Tool / Method\strut
\end{minipage}\tabularnewline
\midrule
\endhead
\begin{minipage}[t]{0.08\columnwidth}\raggedright
\textbf{All citations present}\strut
\end{minipage} & \begin{minipage}[t]{0.67\columnwidth}\raggedright
Parse \texttt{.aux} or \texttt{.log} for ``Citation undefined''
warnings.\strut
\end{minipage} & \begin{minipage}[t]{0.15\columnwidth}\raggedright
\texttt{latexmk\ -bibtex} and
\texttt{grep\ \textquotesingle{}Citation\textquotesingle{}\ main.log}\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.08\columnwidth}\raggedright
\textbf{BibTeX completeness}\strut
\end{minipage} & \begin{minipage}[t]{0.67\columnwidth}\raggedright
Validate every \texttt{\textbackslash{}cite\{key\}} has a matching
\texttt{@entry} with fields \texttt{author}, \texttt{title},
\texttt{year}, \texttt{source}.\strut
\end{minipage} & \begin{minipage}[t]{0.15\columnwidth}\raggedright
\texttt{bibtool\ -s\ -d\ -r\ check.rsc\ references.bib}\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.08\columnwidth}\raggedright
\textbf{Citation relevance}\strut
\end{minipage} & \begin{minipage}[t]{0.67\columnwidth}\raggedright
Manually verify that each cited source supports the statement. This
includes: (1) standards mentioned in Related Work; (2) foundational
theoretical citations in mathematical sections; (3) classic AI
exemplars.\strut
\end{minipage} & \begin{minipage}[t]{0.15\columnwidth}\raggedright
Reading verification checklist (see below)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.08\columnwidth}\raggedright
\textbf{Self-consistency}\strut
\end{minipage} & \begin{minipage}[t]{0.67\columnwidth}\raggedright
Check that each reference to a class (\Lrn, \Evo, \ldots) matches the
definitions and equations in §3--5.\strut
\end{minipage} & \begin{minipage}[t]{0.15\columnwidth}\raggedright
Full-text search for ``Lrn'', ``Evo'', etc.\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.08\columnwidth}\raggedright
\textbf{Cross-referencing}\strut
\end{minipage} & \begin{minipage}[t]{0.67\columnwidth}\raggedright
Confirm all figures/tables/sections compile without ``??''.\strut
\end{minipage} & \begin{minipage}[t]{0.15\columnwidth}\raggedright
\texttt{latexmk} warnings summary\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.08\columnwidth}\raggedright
\textbf{DOI and URL validation}\strut
\end{minipage} & \begin{minipage}[t]{0.67\columnwidth}\raggedright
Run a link checker or Python script (e.g., \texttt{requests.head()}) to
verify DOIs/URLs resolve (a minimal sketch follows this table).\strut
\end{minipage} & \begin{minipage}[t]{0.15\columnwidth}\raggedright
\texttt{bibtex-tidy\ -\/-check-urls\ references.bib}\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
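The DOI/URL check can be sketched as follows; the \texttt{references.bib} filename and the field-parsing regular expressions are assumptions to be adapted to the actual bibliography layout.
\begin{verbatim}
import re
import requests

def check_links(bib_path="references.bib", timeout=10):
    """HEAD-request every DOI/URL found in the .bib file and print the status."""
    text = open(bib_path, encoding="utf-8").read()
    dois = re.findall(r'doi\s*=\s*[{"]([^}"]+)[}"]', text, flags=re.I)
    urls = re.findall(r'url\s*=\s*[{"]([^}"]+)[}"]', text, flags=re.I)
    targets = ["https://doi.org/" + d.strip() for d in dois] + [u.strip() for u in urls]
    for url in targets:
        try:
            status = requests.head(url, allow_redirects=True, timeout=timeout).status_code
        except requests.RequestException as exc:
            status = "error: " + type(exc).__name__
        print(status, url)

if __name__ == "__main__":
    check_links()
\end{verbatim}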
\emph{Checklist for manual relevance verification.}~For each citation:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
%\tightlist
\item
Read the cited paragraph and the cited source's abstract.
\item
Confirm it is \textbf{supporting evidence}, not merely tangential.
\item
If a reference covers multiple claims, annotate page/section numbers
(e.g., \texttt{\textbackslash{}citep{[}§2{]}\{ISO23053\}}).
\end{enumerate}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{conceptual-and-taxonomic-soundness-review}{%
\subsection{🧭 2. Conceptual and Taxonomic Soundness
Review}\label{conceptual-and-taxonomic-soundness-review}}
\begin{longtable}[]{@{}lll@{}}
\toprule
\begin{minipage}[b]{0.14\columnwidth}\raggedright
Aspect\strut
\end{minipage} & \begin{minipage}[b]{0.58\columnwidth}\raggedright
Verification Task\strut
\end{minipage} & \begin{minipage}[b]{0.20\columnwidth}\raggedright
Reviewer Type\strut
\end{minipage}\tabularnewline
\midrule
\endhead
\begin{minipage}[t]{0.14\columnwidth}\raggedright
\textbf{Completeness of mechanism coverage}\strut
\end{minipage} & \begin{minipage}[t]{0.58\columnwidth}\raggedright
Verify that every major AI approach (symbolic, probabilistic,
connectionist, evolutionary, control, swarm, search/planning) maps
cleanly to exactly one OPT root.\strut
\end{minipage} & \begin{minipage}[t]{0.20\columnwidth}\raggedright
Independent AI domain experts (1 per subfield)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.14\columnwidth}\raggedright
\textbf{Hybrid expressiveness}\strut
\end{minipage} & \begin{minipage}[t]{0.58\columnwidth}\raggedright
Test that real systems (e.g., AlphaZero, Neuroevolution, LQR-RL) can be
expressed without ambiguity.\strut
\end{minipage} & \begin{minipage}[t]{0.20\columnwidth}\raggedright
Practicing researchers; maybe small hackathon trial\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.14\columnwidth}\raggedright
\textbf{Biological correspondence}\strut
\end{minipage} & \begin{minipage}[t]{0.58\columnwidth}\raggedright
Check that cited biological analogs (plasticity, selection, control,
etc.) are correctly represented and not overstated.\strut
\end{minipage} & \begin{minipage}[t]{0.20\columnwidth}\raggedright
Cognitive science / computational neuroscience reviewer\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.14\columnwidth}\raggedright
\textbf{Orthogonality of attributes}\strut
\end{minipage} & \begin{minipage}[t]{0.58\columnwidth}\raggedright
Validate that secondary descriptors (Rep, Obj, Time, etc.) are indeed
orthogonal to mechanism choice.\strut
\end{minipage} & \begin{minipage}[t]{0.20\columnwidth}\raggedright
Systems or ML pipeline specialists\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.14\columnwidth}\raggedright
\textbf{Cross-domain coherence}\strut
\end{minipage} & \begin{minipage}[t]{0.58\columnwidth}\raggedright
Ensure that terms like ``learning'', ``adaptation'', and ``control'' are
used consistently across sections.\strut
\end{minipage} & \begin{minipage}[t]{0.20\columnwidth}\raggedright
Technical editor\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{technical-and-mathematical-verification}{%
\subsection{🔍 3. Technical and Mathematical
Verification}\label{technical-and-mathematical-verification}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
\textbf{Equation sanity check}
\begin{itemize}
%\tightlist
\item
Verify every equation's notation is defined in context.
\item
Units and symbols consistent (e.g., $V$, $J$, $\theta$, $p(z \mid x)$).
\item
Biological analogs correctly mapped to canonical forms (e.g., Hebb's
rule → Oja normalization).
\end{itemize}
\item
\textbf{Graphical inspection}
\begin{itemize}
%\tightlist
\item
TikZ/PGF figures render cleanly; legends match table abbreviations.
\item
Radar plot axes correspond to the six orthogonal attributes
described.
\end{itemize}
\item
\textbf{Reproducible build}
\begin{itemize}
%\tightlist
\item
\texttt{latexmk\ -pdf} or the Makefile runs without intervention.
\item
No proprietary fonts, deprecated packages, or local includes.
\end{itemize}
\end{enumerate}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{terminological-and-semantic-validation}{%
\subsection{🧱 4. Terminological and Semantic
Validation}\label{terminological-and-semantic-validation}}
Because this paper introduces new terms (Learnon, Evolon, etc.), perform:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
%\tightlist
\item
\textbf{Cross-linguistic sanity check} --- verify none of the coined
names have misleading or offensive meanings in major languages
(English, French, German, Japanese, Chinese).
\item
\textbf{Search collision audit} --- check that ``Learnon'', ``Evolon'',
etc. are not registered trademarks, commercial products, or prior AI
system names.
\item
\textbf{Ontology compatibility} --- test mapping to existing
ontologies (e.g., ISO/IEC 22989 concept hierarchy, Wikidata entries).
\item
\textbf{Glossary consistency} --- confirm that the definitions in the
paper, appendix, and metadata (e.g., JSON schema) match exactly.
\end{enumerate}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{external-critical-review-red-team}{%
\subsection{🧪 5. External Critical Review (``Red
Team'')}\label{external-critical-review-red-team}}
To pre-empt ``easy takedowns,'' convene a small red-team review:
\begin{longtable}[]{@{}ll@{}}
\toprule
\begin{minipage}[b]{0.30\columnwidth}\raggedright
Reviewer Type\strut
\end{minipage} & \begin{minipage}[b]{0.64\columnwidth}\raggedright
What to Challenge\strut
\end{minipage}\tabularnewline
\midrule
\endhead
\begin{minipage}[t]{0.30\columnwidth}\raggedright
\textbf{Symbolic AI veteran}\strut
\end{minipage} & \begin{minipage}[t]{0.64\columnwidth}\raggedright
``Does OPT misrepresent classical expert systems?''\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.30\columnwidth}\raggedright
\textbf{Evolutionary computation expert}\strut
\end{minipage} & \begin{minipage}[t]{0.64\columnwidth}\raggedright
``Is \Evo~really separable from \Swm?''\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.30\columnwidth}\raggedright
\textbf{Control theorist}\strut
\end{minipage} & \begin{minipage}[t]{0.64\columnwidth}\raggedright
``Does \Ctl~belong as a distinct root or as applied
optimization?''\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.30\columnwidth}\raggedright
\textbf{Probabilistic modeller}\strut
\end{minipage} & \begin{minipage}[t]{0.64\columnwidth}\raggedright
``Is \Prb~too coarse --- should inference and generative modelling
split?''\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.30\columnwidth}\raggedright
\textbf{Policy/standards liaison}\strut
\end{minipage} & \begin{minipage}[t]{0.64\columnwidth}\raggedright
``Can regulators or ISO easily map this taxonomy onto existing
frameworks?''\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
Collect objections and prepare written responses (as supplementary
material if needed).
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{metadata-and-interoperability-testing}{%
\subsection{🧩 6. Metadata and Interoperability
Testing}\label{metadata-and-interoperability-testing}}
\begin{itemize}
\item
Validate the JSON Schema for OPT-Code with a few sample systems.
Example validation command:
\begin{Shaded}
\begin{Highlighting}
ajv validate -s opt-schema.json -d samples/*.json
\end{Highlighting}
\end{Shaded}
\item
Ensure round-trip integrity: parsing a valid OPT string and
re-rendering it should be idempotent (a minimal sketch follows this list).
\item
Confirm metadata examples (e.g., \texttt{OPT=Evo/Lrn+Ctl}) match
systems described in tables.
\end{itemize}
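A minimal round-trip sketch (the splitting rule below covers only the \texttt{/} and \texttt{+} operators and is an illustration, not the reference parser):
\begin{verbatim}
def parse_opt(opt_string):
    """Split 'OPT=Evo/Lrn+Ctl' into nested root lists (illustrative)."""
    spec = opt_string.split("=", 1)[1] if "=" in opt_string else opt_string
    return [level.split("+") for level in spec.split("/")]

def render_opt(levels):
    return "OPT=" + "/".join("+".join(parts) for parts in levels)

s = "OPT=Evo/Lrn+Ctl"
assert render_opt(parse_opt(s)) == s                        # round trip
assert parse_opt(render_opt(parse_opt(s))) == parse_opt(s)  # idempotent
\end{verbatim}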
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{publication-communication-readiness}{%
\subsection{🧾 7. Publication \& Communication
Readiness}\label{publication-communication-readiness}}
\begin{longtable}[]{@{}lll@{}}
\toprule
\begin{minipage}[b]{0.16\columnwidth}\raggedright
Area\strut
\end{minipage} & \begin{minipage}[b]{0.51\columnwidth}\raggedright
Check\strut
\end{minipage} & \begin{minipage}[b]{0.24\columnwidth}\raggedright
Why\strut
\end{minipage}\tabularnewline
\midrule
\endhead
\begin{minipage}[t]{0.16\columnwidth}\raggedright
\textbf{Title and Abstract}\strut
\end{minipage} & \begin{minipage}[t]{0.51\columnwidth}\raggedright
Emphasize mechanism-based taxonomy, not policy; avoid ``redefining AI''
hyperbole.\strut
\end{minipage} & \begin{minipage}[t]{0.24\columnwidth}\raggedright
Avoid overreach criticisms.\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.16\columnwidth}\raggedright
\textbf{Introduction framing}\strut
\end{minipage} & \begin{minipage}[t]{0.51\columnwidth}\raggedright
Cite regulatory motivation (EU AI Act, NIST, ISO), but frame OPT as
complementary.\strut
\end{minipage} & \begin{minipage}[t]{0.24\columnwidth}\raggedright
Appears cooperative, not adversarial.\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.16\columnwidth}\raggedright
\textbf{Data availability statement}\strut
\end{minipage} & \begin{minipage}[t]{0.51\columnwidth}\raggedright
Clarify no datasets, only conceptual and standards synthesis.\strut
\end{minipage} & \begin{minipage}[t]{0.24\columnwidth}\raggedright
Meets arXiv/ACM policies.\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.16\columnwidth}\raggedright
\textbf{Reproducibility}\strut
\end{minipage} & \begin{minipage}[t]{0.51\columnwidth}\raggedright
Provide Makefile and instructions to regenerate all figures from
TeX.\strut
\end{minipage} & \begin{minipage}[t]{0.24\columnwidth}\raggedright
Fulfills open science norms.\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.16\columnwidth}\raggedright
\textbf{Accessibility}\strut
\end{minipage} & \begin{minipage}[t]{0.51\columnwidth}\raggedright
Verify large-font, high-contrast figures; ensure color palettes
differentiate well in grayscale.\strut
\end{minipage} & \begin{minipage}[t]{0.24\columnwidth}\raggedright
Required for ACM/IEEE accessibility standards.\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{pre-submission-peer-simulation}{%
\subsection{🔬 8. Pre-submission Peer
Simulation}\label{pre-submission-peer-simulation}}
\begin{itemize}
\item
Use an \textbf{LLM-based referee simulator} or colleagues to generate
expected reviewer comments.
\begin{itemize}
%\tightlist
\item
``Compare to ISO/IEC 23053.''
\item
``Explain why control/swarm deserve separate roots.''
\item
``Provide examples of OPT adoption in practice.''
\item
Prepare point-by-point responses.
\end{itemize}
\item
Draft a short ``Author Response Template'' for actual peer review.
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{final-publication-readiness-checklist-summary}{%
\subsection{✅ 9. Final ``Publication-Readiness'' Checklist
(summary)}\label{final-publication-readiness-checklist-summary}}
\begin{longtable}[]{@{}ll@{}}
\toprule
Category & Status\tabularnewline
\midrule
\endhead
Citations verified (exist + relevant) &\tabularnewline
All equations defined and correct &\tabularnewline
Figures render without warning &\tabularnewline
JSON schema validates OPT strings &\tabularnewline
Naming checked for collisions &\tabularnewline
Red-team review completed &\tabularnewline
Accessibility (font/contrast) &\tabularnewline
Build reproducibility (Makefile OK) &\tabularnewline
Cover letter frames contribution as complementary, not adversarial &
\tabularnewline
\bottomrule
\end{longtable}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\end{document}