Initial manuscript files commit

This commit is contained in:
Wesley R. Elsberry 2025-11-13 20:49:09 -05:00
parent e6b68afb0c
commit 47ce7aa08b
7 changed files with 1980 additions and 0 deletions

doc/body_shared.tex Normal file

@@ -0,0 +1,468 @@
% =======================
% Shared body (no preamble)
% Accessibility: keep vector figures, larger sizes set by wrappers
% Wrappers must define:
% \twocoltrue or \twocolfalse
% \figureW, \figureH (for radar plots)
% Packages expected: tikz, pgfplots, booktabs, amsmath, amssymb, mathtools, hyperref, natbib (or ACM/IEEE styles)
% =======================
% --- Short names (public-only; no numeric codes)
\newcommand{\Lrn}{\textbf{Lrn}} % Learnon — Parametric learning
\newcommand{\Evo}{\textbf{Evo}} % Evolon — Population adaptation
\newcommand{\Sym}{\textbf{Sym}} % Symbion — Symbolic inference
\newcommand{\Prb}{\textbf{Prb}} % Probion — Probabilistic inference
\newcommand{\Sch}{\textbf{Sch}} % Scholon — Search & planning
\newcommand{\Ctl}{\textbf{Ctl}} % Controlon — Control & estimation
\newcommand{\Swm}{\textbf{Swm}} % Swarmon — Collective/swarm
\newcommand{\hyb}[1]{\textsc{#1}} % hybrid spec styling (e.g., \hyb{Lrn+Sch})
%\newcommand{\figureW}{0.95\textwidth}
%\newcommand{\figureH}{0.58\textwidth}
% --- Wide figure helper: figure* in two-column; figure in one-column
% Define \iftwocol only if a wrapper has not already done so (wrappers set \twocoltrue/\twocolfalse)
\ifcsname iftwocol\endcsname\else
  \expandafter\newif\csname iftwocol\endcsname
\fi
\providecommand{\figureW}{0.95\textwidth}
\providecommand{\figureH}{0.58\textwidth}
\newenvironment{WideFig}{\iftwocol\begin{figure*}\else\begin{figure}\fi}{\iftwocol\end{figure*}\else\end{figure}\fi}
% --- Wide table helper: table* in two-column; table in one-column
\newenvironment{WideTab}{\iftwocol\begin{table*}\else\begin{table}\fi}{\iftwocol\end{table*}\else\end{table}\fi}
% --- TikZ/PGF defaults
\pgfplotsset{compat=1.18}
\begin{abstract}
Policy and industry discourse often reduce AI to machine learning framed as “supervised, unsupervised, or reinforcement learning.” This triad omits long-standing AI traditions (symbolic expert systems, search \& planning, probabilistic inference, control/estimation, and evolutionary/collective computation). We formalize the \emph{Operational-Premise Taxonomy}~(OPT), classifying AI by its dominant computational mechanism: \Lrn, \Evo, \Sym, \Prb, \Sch, \Ctl, and \Swm. For each class we provide core mathematical operators, link them to canonical biological mechanisms, and survey hybrid compositions. We argue that OPT yields a principled, biologically grounded, and governance-usable taxonomy that avoids category errors inherent in training-signal-based labels, while remaining compact and readable with a short, compositional naming code.
\end{abstract}
% ---------------------------
\section{Introduction}
% ---------------------------
Regulatory texts frequently equate “AI” with three categories of \emph{learning signals}: supervised, unsupervised, and reinforcement learning \citep{EUAnnex,NISTRMF}. These categories emerged from neural/connectionist practice, not from the full breadth of artificial intelligence \citep{AIMA4}. We propose an alternative taxonomic axis: the \emph{operational premise}—the primary computational mechanism a system instantiates to improve, adapt, or decide. The resulting taxonomy, the \emph{Operational-Premise Taxonomy}~(OPT), provides a transparent and consistent framework for compactly describing AI systems, including hybrids and pipelines. OPT retains biological analogs (learning vs.\ adaptation) while accommodating symbolic, probabilistic, search, control, and swarm paradigms.
% ---------------------------
\section{Operational-Premise Taxonomy (OPT)}
% ---------------------------
Because OPT introduces several new labels, we present them here before turning to background and related work.
OPT classes are defined by dominant mechanism; hybrids are explicit compositions:
\begin{itemize}[leftmargin=1.6em]
\item \textbf{Learnon (\Lrn)} — Parametric learning within an individual (gradient/likelihood/return updates).
\item \textbf{Evolon (\Evo)} — Population adaptation via variation, selection, inheritance.
\item \textbf{Symbion (\Sym)} — Symbolic/logic inference over discrete structures (KB, clauses, proofs).
\item \textbf{Probion (\Prb)} — Probabilistic modeling and approximate inference (posteriors, ELBO).
\item \textbf{Scholon (\Sch)} — Deliberative search and planning (heuristics, DP, graph search).
\item \textbf{Controlon (\Ctl)} — Feedback control and state estimation in dynamical systems.
\item \textbf{Swarmon (\Swm)} — Collective/swarm coordination with local rules and emergence.
\end{itemize}
\noindent \emph{Hybrid notation.}~We use \hyb{A+B}~for co-operative mechanisms, \hyb{A/B}~for hierarchical nesting (outer/inner), \hyb{A\{B,C\}}~for parallel ensembles, and \hyb{[A$\to$B]}~for pipelines (Appendix~\ref{app:optcode}).
% --- OPT circle landscape (auto-wide)
\begin{WideFig}
\centering
\begin{tikzpicture}[
node distance=2cm,
every node/.style={font=\small},
optnode/.style={circle, draw=black, very thick, minimum size=11mm, align=center},
hybridedge/.style={-Latex, very thick},
weakedge/.style={-Latex, dashed, thick},
legendbox/.style={draw, rounded corners, inner sep=3pt, font=\footnotesize},
]
\def\R{4.9}
\path
(90:\R) node[optnode] (L) {Lrn}
(38.6:\R) node[optnode] (S) {Sch}
(-12.8:\R) node[optnode] (Y) {Sym}
(-64.2:\R) node[optnode] (P) {Prb}
(-115.6:\R) node[optnode] (C) {Ctl}
(-167:\R) node[optnode] (W) {Swm}
(141.4:\R) node[optnode] (E) {Evo};
\draw[hybridedge] (L) to[bend left=10] (S);
\draw[hybridedge] (S) to[bend left=10] (L);
\draw[hybridedge] (L) to[bend left=10] (Y);
\draw[hybridedge] (Y) to[bend left=10] (L);
\draw[hybridedge] (L) to[bend left=10] (E);
\draw[hybridedge] (E) to[bend left=10] (L);
\draw[hybridedge] (L) to[bend left=10] (C);
\draw[hybridedge] (C) to[bend left=10] (L);
\draw[weakedge] (S) -- (Y);
\draw[weakedge] (P) -- (L);
\draw[weakedge] (P) -- (S);
\draw[weakedge] (W) -- (E);
\draw[weakedge] (C) -- (S);
\draw[weakedge] (P) -- (C);
\node[legendbox, anchor=north east] at ($(current bounding box.north east)+(-0.2, 1.2)$) {
\begin{tabular}{@{}l@{}}
\textbf{Solid:} prominent hybrids (\hyb{Lrn+Sch}, \hyb{Lrn+Sym}, \hyb{Lrn+Evo}) \\
\textbf{Dashed:} frequent couplings (\hyb{Prb+Ctl}, \hyb{Sch+Sym}, \hyb{Swm+Evo}) \\
\end{tabular}
};
\end{tikzpicture}
\caption{OPT landscape using short names only: \Lrn, \Evo, \Sym, \Prb, \Sch, \Ctl, \Swm.}
\label{fig:opt_landscape}
\end{WideFig}
% ---------------------------
\section{Mathematical Foundations and Biological Correspondences}
\label{sec:math}
% ---------------------------
\paragraph{Learnon (\Lrn).} Empirical risk minimization:
\begin{equation}
\theta^\star \in \arg\min_{\theta}\ \mathbb{E}_{(x,y)\sim \mathcal{D}}[ \ell(f_\theta(x),y) ] + \lambda \Omega(\theta),
\end{equation}
with gradient updates $\theta_{t+1}=\theta_t-\eta_t\nabla\widehat{\mathcal{L}}(\theta_t)$; RL maximizes $J(\pi)=\mathbb{E}_\pi[\sum_t \gamma^t r_t]$ in MDPs. \emph{Biology:}~ Hebbian/Oja \citep{Hebb1949,Oja1982}, reward-modulated prediction errors \citep{SuttonBarto2018}.
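The risk-minimization operator can be made concrete with a minimal, self-contained sketch: ordinary least-squares risk minimized by plain gradient steps $\theta_{t+1}=\theta_t-\eta\nabla\widehat{\mathcal{L}}(\theta_t)$. The data, step size, and iteration count below are illustrative choices, not a definitive implementation.

```python
import numpy as np

# Learnon (Lrn) sketch: empirical risk minimization for linear regression.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta + 0.01 * rng.normal(size=200)

theta = np.zeros(3)
eta = 0.1                                    # illustrative step size
for _ in range(500):
    grad = X.T @ (X @ theta - y) / len(y)    # gradient of the mean-squared loss
    theta -= eta * grad                      # theta_{t+1} = theta_t - eta * grad
```

After a few hundred steps, `theta` recovers the risk minimizer up to the injected noise.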
\paragraph{Evolon (\Evo).} Population pipeline $P_{t+1}=\mathcal{R}(\mathcal{M}(\mathcal{C}(P_t)))$ with fitness-driven selection. \emph{Biology:}~ Price equation $\Delta \bar{z}=\frac{\mathrm{Cov}(w,z)}{\bar{w}}+\frac{\mathbb{E}[w\Delta z]}{\bar{w}}$; replicator $\dot{p}_i=p_i(f_i-\bar{f})$ \citep{Price1970,TaylorJonker1978}.
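The replicator equation above has an equally compact discrete-time form, $p_i \leftarrow p_i f_i/\bar f$; a minimal sketch (the fitness values are illustrative):

```python
import numpy as np

# Evolon (Evo) sketch: discrete-time replicator dynamics.
# Types with above-average fitness grow in frequency.
fitness = np.array([1.0, 1.5, 2.0])
p = np.array([1/3, 1/3, 1/3])          # initial population frequencies
for _ in range(200):
    f_bar = p @ fitness                # mean fitness
    p = p * fitness / f_bar            # selection update; p stays normalized
```

Selection drives essentially all frequency mass onto the fittest type.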
\paragraph{Symbion (\Sym).} Resolution/unification; soundness and refutation completeness \citep{Robinson1965Resolution}.
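Unification, the core operator behind resolution, admits a toy illustration. This is a simplified sketch: the occurs check is omitted, and the term/variable encodings (tuples, `'?'`-prefixed strings) are ad hoc choices for the example.

```python
# Symbion (Sym) sketch: syntactic unification of first-order terms.
# Terms are tuples ('functor', arg1, ...); variables are strings starting with '?'.
def walk(t, subst):
    """Follow variable bindings until a non-bound term is reached."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a most-general substitution unifying a and b, or None.
    (Occurs check omitted for brevity.)"""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith('?'):
        return {**subst, a: b}
    if isinstance(b, str) and b.startswith('?'):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and len(a) == len(b) and a[0] == b[0]:
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None
```

For example, `unify(('parent', '?x', 'bob'), ('parent', 'alice', '?y'))` binds `?x` to `alice` and `?y` to `bob`, while mismatched functors fail.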
\paragraph{Probion (\Prb).} Bayes $p(z|x)\propto p(x|z)p(z)$; VI via ELBO $\mathcal{L}(q)=\mathbb{E}_q[\log p(x,z)]-\mathbb{E}_q[\log q(z)]$; \emph{Biology:}~ Bayesian brain \citep{KnillPouget2004}.
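Exact Bayesian updating on a discrete hypothesis space gives a minimal Probion sketch of $p(z|x)\propto p(x|z)p(z)$; the coin biases and data below are illustrative.

```python
# Probion (Prb) sketch: sequential Bayesian updating over a discrete
# hypothesis space (unknown coin bias in {0.2, 0.5, 0.8}).
biases = [0.2, 0.5, 0.8]
posterior = [1/3, 1/3, 1/3]        # uniform prior
data = [1, 1, 0, 1, 1, 1]          # observed flips (1 = heads)

for x in data:
    likelihood = [b if x == 1 else 1 - b for b in biases]
    unnorm = [l * p for l, p in zip(likelihood, posterior)]
    z = sum(unnorm)                # evidence p(x)
    posterior = [u / z for u in unnorm]
```

Five heads in six flips concentrate the posterior on the 0.8-bias hypothesis.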
\paragraph{Scholon (\Sch).} A* with admissible $h$ is optimally efficient; DP/Bellman updates $V_{k+1}(s)=\max_a[r(s,a)+\gamma\sum_{s'}P(s'|s,a)V_k(s')]$.
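The Bellman update can be sketched via value iteration on a toy deterministic MDP; the states, rewards, and discount below are illustrative.

```python
# Scholon (Sch) sketch: value iteration,
# V_{k+1}(s) = max_a [ r(s,a) + gamma * V_k(s') ] with deterministic s' = P[s][a].
P = {0: {0: 0, 1: 1},              # P[s][a] -> next state
     1: {0: 0, 1: 1}}
R = {0: {0: 0.0, 1: 1.0},          # R[s][a] -> immediate reward
     1: {0: 0.0, 1: 2.0}}
gamma = 0.9

V = {0: 0.0, 1: 0.0}
for _ in range(200):
    V = {s: max(R[s][a] + gamma * V[P[s][a]] for a in (0, 1)) for s in (0, 1)}
```

The fixed point is $V(1)=2/(1-\gamma)=20$ and $V(0)=1+\gamma V(1)=19$.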
\paragraph{Controlon (\Ctl).} LQR minimizes quadratic cost in linear systems; Kalman filter provides MMSE state estimates in LQG \citep{Kalman1960,Pontryagin1962,TodorovJordan2002}.
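The estimation side can be sketched with a scalar Kalman filter tracking a constant state; the noise level and prior variance are illustrative.

```python
import numpy as np

# Controlon (Ctl) sketch: scalar Kalman filter with a static state
# (process noise q = 0), fusing noisy measurements into an MMSE estimate.
rng = np.random.default_rng(1)
true_x = 5.0
r = 0.5                          # measurement noise variance
x_hat, p = 0.0, 10.0             # initial estimate and its variance
for _ in range(200):
    z = true_x + rng.normal(scale=r ** 0.5)   # noisy measurement
    k = p / (p + r)                           # Kalman gain
    x_hat = x_hat + k * (z - x_hat)           # measurement update
    p = (1 - k) * p                           # posterior variance shrinks
```

The posterior variance falls roughly as $r/n$, so the estimate converges to the true state.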
\paragraph{Swarmon (\Swm).} PSO updates $v_i(t+1)=\omega v_i(t)+c_1 r_1(p_i-x_i)+c_2 r_2(g-x_i)$; ACO pheromone $\tau\leftarrow (1-\rho)\tau+\sum_k \Delta\tau^{(k)}$.
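The PSO velocity update above can be sketched on the 2-D sphere function; the hyperparameters are conventional but illustrative.

```python
import numpy as np

# Swarmon (Swm) sketch: particle swarm optimization of f(x) = ||x||^2 with
# v <- w*v + c1*r1*(p_best - x) + c2*r2*(g_best - x).
rng = np.random.default_rng(2)
n, dim = 20, 2
f = lambda x: np.sum(x ** 2, axis=-1)

x = rng.uniform(-5, 5, size=(n, dim))
v = np.zeros((n, dim))
p_best = x.copy()                               # per-particle bests
g_best = x[np.argmin(f(x))].copy()              # swarm best
w, c1, c2 = 0.7, 1.5, 1.5
for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x = x + v
    better = f(x) < f(p_best)
    p_best[better] = x[better]
    g_best = p_best[np.argmin(f(p_best))].copy()
```

Only local memory (`p_best`) and one shared signal (`g_best`) coordinate the swarm toward the optimum.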
% ---------------------------
\section{Background and Prior Work}
% ---------------------------
Classic textbooks and surveys treat symbolic reasoning, planning/search, probabilistic models, learning, evolutionary methods, and control/estimation as co-equal pillars \citep{AIMA4,CIbook,FuzzySurvey,SuttonBarto2018}. No-Free-Lunch (NFL) theorems for search/optimization motivate pluralism: no single mechanism dominates across all problems \citep{Wolpert1997}. Biological literatures mirror these mechanisms: synaptic plasticity and Hebbian/Oja learning \citep{Hebb1949,Oja1982}, population genetics and replicator dynamics \citep{Price1970,TaylorJonker1978}, Bayesian cognition \citep{KnillPouget2004}, and optimal feedback control in motor behavior \citep{TodorovJordan2002,Kalman1960,Pontryagin1962}.
\input{related-work}
% Bridge
\paragraph{Comparative landscape.}
Table~\ref{tab:opt_vs_frameworks} situates OPT alongside the best-known standards, policy instruments, and textbook structures.
Each of these prior frameworks serves an important function—shared vocabulary (ISO/IEC 22989), ML-system decomposition (ISO/IEC 23053), risk management (NIST AI RMF), usage contexts (NIST AI 200-1), multidimensional policy characterization (OECD), or regulatory stratification (EU AI Act).
However, they remain either technique-agnostic or focused solely on machine learning.
OPT complements them by supplying the missing layer: a stable, biologically grounded \emph{implementation taxonomy} that captures mechanism families across paradigms and defines a formal grammar for hybrid systems.
\input{table-opt-comparison}
% ---------------------------
\section{Comparative Analysis, Completeness, and Objections}
\label{sec:analysis}
% ---------------------------
\subsection{Biological--Artificial Correspondences}
Each OPT class aligns with a biological mechanism (plasticity, natural selection, structured reasoning, Bayesian cognition, deliberative planning, optimal feedback control, and distributed coordination). Shared operators in Sec.~\ref{sec:math} support cross-domain guarantees.
\subsection{Coverage, Hybrids, and Orthogonal Descriptors}
Hybrids are explicit (e.g., \hyb{Lrn+Sch} AlphaZero, \hyb{Lrn+Sym} neuro-symbolic, \hyb{Evo/Lrn} neuroevolution). Orthogonal axes capture representation, locus of change, objective, data regime, timescale, and human participation.
\subsection{Objections and Responses}
\textbf{Reduction to optimization.} Mechanisms imply distinct guarantees/hazards (data leakage vs.\ fitness misspecification vs.\ rule brittleness). NFL cautions against collapsing mechanisms.
\textbf{Hybrid blurring.} OPT treats compositions as first-class; the notation discloses “what changes where, on what objective, and on what timescale.”
\textbf{Regulatory simplicity.} Seven bins appear minimal for coverage; the short names keep disclosures compact and meaningful.
% ---------------------------
\section{Examples and Mapping}
% ---------------------------
\begin{table}[htbp]
\centering
\caption{Representative paradigms mapped to OPT.}
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{@{}p{3.9cm}p{3.6cm}p{2.2cm}@{}}
\toprule
\textbf{Type / Implementation} & \textbf{Examples} & \textbf{OPT (short)}\\
\midrule
NN/Transformer (GD) & CNN, LSTM, attention & \Lrn\\
Reinforcement learning & DQN, PG, AC & \Lrn\;(+\Sch,\,+\Ctl)\\
Evolutionary algorithms & GA, GP, CMA-ES & \Evo\\
Swarm intelligence & ACO, PSO & \Swm\;(+\Evo)\\
Expert systems & Prolog, Mycin, XCON & \Sym\\
Probabilistic models & BN, HMM, factor graphs & \Prb\\
Search \& planning & A*, MCTS, STRIPS & \Sch\\
Control \& estimation & PID, LQR, KF/MPC & \Ctl\\
\bottomrule
\end{tabular}
\label{tab:OPTmap}
\end{table}
% ---------------------------
\section{Orthogonal Axes and Risk Perspectives}
% ---------------------------
\paragraph{Secondary axes (orthogonal descriptors).}
\begin{itemize}[leftmargin=1.2em]
\item \textbf{Representation:} parametric vectors, symbols/logic, graphs, programs, trajectories, policies.
\item \textbf{Locus of Change:} parameters, structure/architecture, population composition, belief state, policy.
\item \textbf{Objective Type:} prediction, optimization, inference, control, search cost, constraint satisfaction.
\item \textbf{Timescale:} online vs.\ offline; within-run vs.\ across-generations.
\item \textbf{Data Regime:} none/synthetic, labeled, unlabeled, interactive reward.
\item \textbf{Human Participation:} expert-authored knowledge vs.\ learned vs.\ co-created.
\end{itemize}
\begin{table}[htbp]
\centering
\caption{Orthogonal descriptive axes and governance risks (abridged).}
\renewcommand{\arraystretch}{1.05}
\begin{tabular}{@{}p{1.25cm}p{3.5cm}p{3.9cm}@{}}
\toprule
\textbf{OPT} & \textbf{Primary Risks} & \textbf{Assurance Focus} \\
\midrule
\Lrn & Data leakage, reward hacking & Data governance, OOD tests, calibration \\
\Evo & Fitness misspecification & Proxy validation, replicates, constraints \\
\Sym & Rule brittleness, KB inconsistency & Provenance, formal verification \\
\Prb & Miscalibration, inference bias & Posterior predictive checks \\
\Sch & Heuristic inadmissibility & Optimality proofs, heuristic diagnostics \\
\Ctl & Instability, unmodeled dynamics & Stability margins, robustness \\
\Swm & Emergent instability & Swarm invariants, safety envelopes \\
\bottomrule
\end{tabular}
\label{tab:OPT_risk}
\end{table}
% --- Radar plots (two figures; auto-wide; short-name legends)
% --- Radar helper: one polygon with six axes (Rep., Locus, Obj., Data, Time, Human)
\newcommand{\RadarPoly}[7]{%
% #1 style, #2..#7 = values on axes in order
\addplot+[#1] coordinates
{(0,#2) (60,#3) (120,#4) (180,#5) (240,#6) (300,#7) (360,#2)};
}
\begin{WideFig}
\centering
\begin{tikzpicture}
\begin{polaraxis}[
width=\figureW, height=\figureH,
ymin=0, ymax=5,
grid=both,
xtick={0,60,120,180,240,300},
xticklabels={Rep.,Locus,Obj.,Data,Time,Human},
legend columns=3,
legend style={draw=none, at={(0.5,1.03)}, anchor=south, font=\small},
tick label style={font=\small},
]
% Lrn, Evo, Sym
\RadarPoly{very thick, mark=*, mark options={solid}, mark size=2pt}{0}{0}{4}{4}{4}{1}
\addlegendentry{\Lrn}
\RadarPoly{densely dashed, very thick, mark=square*, mark options={solid}, mark size=2.2pt}{2}{5}{5}{2}{5}{2}
\addlegendentry{\Evo}
\RadarPoly{dashdotdotted, very thick, mark=triangle*, mark options={solid}, mark size=2.4pt}{5}{4}{4}{5}{3}{5}
\addlegendentry{\Sym}
\end{polaraxis}
\end{tikzpicture}
\caption{Orthogonal axes (0--5) for \Lrn, \Evo, \Sym.}
\label{fig:opt_radar_1}
\end{WideFig}
\begin{WideFig}
\centering
\begin{tikzpicture}
\begin{polaraxis}[
width=\figureW, height=\figureH,
ymin=0, ymax=5,
grid=both,
xtick={0,60,120,180,240,300},
xticklabels={Rep.,Locus,Obj.,Data,Time,Human},
legend columns=4,
legend style={draw=none, at={(0.5,1.03)}, anchor=south, font=\small},
tick label style={font=\small},
]
% Prb, Sch, Ctl, Swm
\RadarPoly{very thick, loosely dotted, mark=diamond*, mark options={solid}, mark size=2.2pt}{4}{3}{5}{4}{3}{3}
\addlegendentry{\Prb}
\RadarPoly{densely dashed, very thick, mark=*, mark options={solid}, mark size=2pt}{3}{3}{4}{2}{3}{3}
\addlegendentry{\Sch}
\RadarPoly{dashdotdotted, very thick, mark=square*, mark options={solid}, mark size=2.2pt}{2}{3}{5}{3}{5}{3}
\addlegendentry{\Ctl}
\RadarPoly{solid, very thick, mark=triangle*, mark options={solid}, mark size=2.4pt}{3}{4}{3}{2}{3}{2}
\addlegendentry{\Swm}
\end{polaraxis}
\end{tikzpicture}
\caption{Orthogonal axes (0--5) for \Prb, \Sch, \Ctl, \Swm.}
\label{fig:opt_radar_2}
\end{WideFig}
% ---------------------------
\subsection{Artificial Immune Systems (AIS) in OPT}
% ---------------------------
It is useful to show how OPT-Code specifications can be derived for a technique that is inherently hybrid.
Artificial Immune Systems (AIS) instantiate computation via biomimetic mechanisms drawn from adaptive immunity. Their operative core combines (i) population-level \emph{variation and selection} (somatic hypermutation, clonal expansion, memory) and (ii) distributed, locally interacting agents (cells, idiotypic networks), often with (iii) probabilistic fusion of uncertain signals. In OPT, this places AIS primarily in \Evo\ and \Swm, with frequent couplings to \Prb\ and occasional \Sch/\Ctl\ layers depending on task and implementation.
\paragraph{Canonical families and OPT placement.}
\begin{itemize}
\item \textbf{Clonal selection \& affinity maturation (CLONALG, aiNet).} A population of detectors/antibodies $\{a_i\}$ undergoes clone--mutate--select cycles driven by affinity to antigens $x$. OPT: \textbf{\Evo+\Swm} (often $+$\Prb).\\
Affinity (bitstrings; Hamming distance $d_H$): $\mathrm{aff}(x,a)=1-\frac{d_H(x,a)}{|x|}$. Clone count $n_i \propto \mathrm{aff}(x,a_i)$; hypermutation rate $\mu_i=f(\mathrm{aff})$ (typically inversely proportional).
\item \textbf{Negative Selection Algorithms (NSA).} Generate detectors that avoid ``self'' set $\mathcal S$ and cover $\mathcal X\setminus \mathcal S$. OPT: \textbf{\Evo/\Sch} ($+$\Prb\ for thresholded matching).\\
Objective: choose $D$ s.t. $\forall d\in D: d\notin \mathcal S$ and coverage $\Pr[\mathrm{match}(x,d)\mid x\notin \mathcal S]\ge \tau$.
\item \textbf{Immune network models (idiotypic).} Interacting clones stimulate/suppress each other; dynamics produce memory and regulation. OPT: \textbf{\Swm+\Evo} (sometimes $+$\Ctl).\\
Skeleton dynamics: $\dot a_i=\sum_j s_{ij}a_j-\sum_j \sigma_{ij}a_ia_j-\delta a_i$ with stimulation $s_{ij}$, suppression $\sigma_{ij}$, decay $\delta$.
\item \textbf{Dendritic Cell Algorithm (DCA) / Danger Theory.} Cells fuse PAMP/danger/safe signals to decide anomaly labeling; aggregation over a population provides robust detection. OPT: \textbf{\Swm+\Prb} (optionally $+$\Evo\ if online adaptation is added).
\end{itemize}
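A compact sketch of CLONALG-style clonal selection on bitstrings uses the Hamming affinity defined above; the population size, clone counts, and mutation schedule are illustrative simplifications of the published algorithms.

```python
import numpy as np

# AIS sketch (OPT = Evo+Swm): clonal selection with affinity maturation.
# aff(x, a) = 1 - d_H(x, a)/|x|; clone count grows and mutation strength
# shrinks with affinity (CLONALG-style inverse proportionality).
rng = np.random.default_rng(3)
L = 32
antigen = rng.integers(0, 2, size=L)           # target pattern
pop = rng.integers(0, 2, size=(10, L))         # antibody repertoire

def affinity(a):
    return 1.0 - np.count_nonzero(a != antigen) / L

for _ in range(100):
    best = max(pop, key=affinity)
    n_clones = 1 + int(9 * affinity(best))              # more clones when fit
    n_flips = max(1, int(round((1 - affinity(best)) * 4)))  # fewer flips when fit
    clones = np.tile(best, (n_clones, 1))
    for c in clones:                                    # hypermutation
        idx = rng.choice(L, size=n_flips, replace=False)
        c[idx] ^= 1
    pool = np.vstack([pop, clones])                     # elitist re-selection
    pop = np.array(sorted(pool, key=affinity, reverse=True)[:10])
```

Elitist re-selection makes best-of-population affinity monotonically non-decreasing, so the repertoire matures toward the antigen.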
\paragraph{OPT-Code exemplars.}
\begin{quote}\small
\texttt{CLONALG: OPT=Evo+Swm; Rep=bitstring; Obj=affinity; Data=labels$\mid$unlabeled; Time=gens; Human=low}\\
\texttt{aiNet: OPT=Evo+Swm; Rep=realvector; Obj=affinity+diversity; Time=gens}\\
\texttt{NSA (anomaly): OPT=Evo/Sch+Prb; Rep=bitstring; Obj=coverage; Data=self/nonself; Time=gens}\\
\texttt{DCA: OPT=Swm+Prb; Rep=signals; Obj=anomaly-score; Time=online}\\
\texttt{Idiotypic control: OPT=Swm+Ctl; Rep=rules; Obj=stability+coverage; Time=online}
\end{quote}
\paragraph{Where biology and OPT coincide.}
Somatic hypermutation $+$ selection $\to$ \Evo; massive agent concurrency and local rules $\to$ \Swm; uncertainty fusion (signal weighting, thresholds) $\to$ \Prb; homeostatic regulation $\to$ \Ctl; detector-set coverage and complement generation $\to$ \Sch.
\paragraph{Assurance considerations.}
Key failure modes are coverage gaps (missed anomalies), detector drift, and instability in network dynamics. Assurance suggests (i) held-out self/non-self tests, (ii) diversity and coverage metrics, (iii) stability analysis of interaction graphs, and (iv) calibration of anomaly thresholds (if \Prb). These layer cleanly with risk/management frameworks (NIST RMF, ISO 23053) while OPT communicates mechanism.
% ---------------------------
\section{Discussion: Why OPT Supersedes Signal-Based Taxonomies}
% ---------------------------
\paragraph{Mechanism clarity.} \Lrn--\Swm encode distinct improvement/decision operators (gradient, selection, resolution, inference, search, feedback, collective rules).
\paragraph{Biological alignment.} OPT mirrors canonical biological mechanisms (plasticity, natural selection, Bayesian cognition, optimal feedback control, etc.).
\paragraph{Compact completeness.} Seven bins cover mainstream AI while enabling crisp hybrid composition; short names and hybrid syntax convey the rest.
\paragraph{Governance usability.} Mechanism-aware controls attach naturally per class (Table~\ref{tab:OPT_risk}).
\subsection{Reclassification of Classic Systems}
\begin{table}[htbp]
\centering
\caption{Classic systems: historical labels vs.\ OPT placement (short names only).}
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{@{}p{3.4cm}p{2.7cm}p{2.4cm}@{}}
\toprule
\textbf{System} & \textbf{Prior label} & \textbf{OPT (short)}\\
\midrule
XCON / R1 & Expert system & \Sym \\
CLIPS & Expert shell & \Sym \\
Instar/Outstar & Neural rules & \Lrn \\
Backprop & Supervised NN & \Lrn \\
ART 1/2 & Unsupervised NN & \Lrn \\
LMS/ADALINE & Supervised NN & \Lrn \\
Hopfield--Tank TSP & Neural optimization & \Lrn\;(+\Sch) \\
Boltzmann Machines & Energy-based NN & \Lrn \\
Fuzzy Logic Control & Soft computing & \Ctl\;(+\Sym) \\
Genetic Algorithms & Evolutionary & \Evo \\
Genetic Programming & Program induction & \Evo \\
Symbolic Regression & Model discovery & \Evo\;(+\Sym) \\
PSO & Swarm optimization & \Swm\;(+\Evo) \\
A*/STRIPS/GraphPlan & Search/planning & \Sch\;(+\Sym) \\
Kalman/LQR/MPC & Estimation/control & \Ctl \\
\bottomrule
\end{tabular}
\label{tab:classicOPT}
\end{table}
\subsection{On “Everything is a Spin Glass”: Scope and Limits}
Energy formulations fit symmetric Hopfield/BM subsets but fail to subsume asymmetric architectures, symbolic proof search, population dynamics, or LQG control; complexity frontiers also differ. OPT preserves energy insights without overreach.
% ---------------------------
\section{Conclusion}
% ---------------------------
OPT provides a formal, biologically grounded taxonomy that clarifies mechanisms and hybrids and supports governance. We encourage standards bodies to adopt short-name OPT identifiers and hybrid syntax in system documentation.
% ---------------------------
\appendix
\section{OPT-Code v1.0: Naming Convention}
\label{app:optcode}
\paragraph{Purpose.} Provide compact, semantically transparent names that self-identify an AI system's operative mechanism(s). These are the \emph{only} public OPT names; legacy signal types remain descriptive but are not taxonomic.
\subsection*{Roots (frozen set in v1.0)}
\begin{center}
\begin{tabular}{@{}lll@{}}
\toprule
\textbf{Short} & \textbf{Name} & \textbf{Mechanism}\\
\midrule
\Lrn & Learnon & Parametric learning (loss/likelihood/return) \\
\Evo & Evolon & Population adaptation (variation/selection/inheritance) \\
\Sym & Symbion & Symbolic inference (rules/constraints/proofs) \\
\Prb & Probion & Probabilistic inference (posteriors/ELBO) \\
\Sch & Scholon & Search \& planning (heuristics/DP/graph) \\
\Ctl & Controlon & Control \& estimation (feedback/Kalman/LQR/MPC) \\
\Swm & Swarmon & Collective/swarm (stigmergy/distributed rules) \\
\bottomrule
\end{tabular}
\end{center}
\subsection*{Composition syntax}
\begin{itemize}[leftmargin=1.2em]
\item \hyb{A+B}: co-operative mechanisms (e.g., \hyb{Lrn+Sch}).
\item \hyb{A/B}: hierarchical nesting, outer/inner (e.g., \hyb{Evo/Lrn}).
\item \hyb{A\{B,C\}}: parallel ensemble (e.g., \hyb{Sym\{Lrn,Prb\}}).
\item \hyb{[A$\to$B]}: sequential pipeline (e.g., \hyb{[Lrn$\to$Ctl]}).
\end{itemize}
\subsection*{Attributes (orthogonal descriptors)}
Optional, mechanism-agnostic, appended after a semicolon:
\[
\text{\small\tt OPT=Evo/Lrn+Ctl; Rep=param; Obj=fitness; Data=sim; Time=gen; Human=low}
\]
Keys: \texttt{Rep} (representation), \texttt{Locus}, \texttt{Obj}, \texttt{Data}, \texttt{Time}, \texttt{Human}.
\subsection*{Grammar (ABNF)}
\begin{verbatim}
opt-spec = "OPT=" compose [ ";" attrs ]
compose = term / compose "+" term / compose "/" term
/ "[" compose "→" compose "]"
/ term "{" compose *("," compose) "}"
term = "Lrn" / "Evo" / "Sym" / "Prb" / "Sch" / "Ctl" / "Swm"
attrs = attr *( ";" attr )
attr = key "=" value
key = 1*(ALPHA)
value = 1*(ALNUM / "-" / "_" / "." )
\end{verbatim}
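A flat subset of this grammar (compositions with \texttt{+} and \texttt{/} plus attributes, omitting bracketed pipelines and brace ensembles for brevity) can be checked with a short illustrative validator; the function name is hypothetical.

```python
import re

# Validator for a flat subset of OPT-Code v1.0:
#   OPT=<root>(("+"|"/")<root>)* (";" key "=" value)*
ROOTS = r"(?:Lrn|Evo|Sym|Prb|Sch|Ctl|Swm)"
COMPOSE = rf"{ROOTS}(?:[+/]{ROOTS})*"
ATTR = r"[A-Za-z]+=[A-Za-z0-9._-]+"
SPEC = re.compile(rf"^OPT={COMPOSE}(?:;{ATTR})*$")

def is_valid_opt_spec(s):
    """Check a flat OPT-Code spec; whitespace is ignored as in the examples."""
    return bool(SPEC.match(s.replace(" ", "")))
```

For instance, `is_valid_opt_spec("OPT=Evo/Lrn+Ctl; Rep=param; Obj=fitness")` accepts the attribute example above, while unknown roots are rejected.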
\subsection*{Stability and change control}
\textbf{S1 (Root freeze).} The seven roots above are frozen for OPT-Code v1.0.
\textbf{S2 (Extensions via attributes).} New nuance is expressed via attributes, not new roots.
\textbf{S3 (Mechanism distinctness).} Proposals to add a root in a future major version must prove a distinct operational mechanism not subsumable by existing roots.
\textbf{S4 (Compatibility).} Parsers may accept legacy aliases but must render short names only.
\textbf{S5 (Priority).} The first published mapping of a system's OPT-Code (with its mathematical operator) has naming priority; deviations must be justified.
% --- Hybrid ancestry diagram (for readability)
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[
node distance=8mm and 14mm,
every node/.style={font=\small},
mech/.style={rounded corners, draw=black, very thick, inner sep=4pt, align=center},
hyb/.style={rounded corners, draw=black!60, dashed, inner sep=3pt, align=center},
->, >=Latex
]
% Roots
\node[mech] (L) {\Lrn};
\node[mech, right=of L] (S) {\Sch};
\node[mech, right=of S] (C) {\Ctl};
\node[mech, below=of L] (E) {\Evo};
\node[mech, right=of E] (Y) {\Sym};
\node[mech, right=of Y] (P) {\Prb};
\node[mech, below=of E] (W) {\Swm};
% Hybrids (examples)
\node[hyb, above=6mm of $(L)!0.5!(S)$] (LS) {\hyb{Lrn+Sch}\\ \footnotesize(AlphaZero-type)};
\node[hyb, above=6mm of $(L)!0.5!(C)$] (LC) {\hyb{Lrn+Ctl}\\ \footnotesize(model-based control)};
\node[hyb, below=6mm of $(L)!0.5!(E)$] (EL) {\hyb{Evo/Lrn}\\ \footnotesize(neuroevolution)};
\node[hyb, below=6mm of $(L)!0.5!(Y)$] (LY) {\hyb{Lrn+Sym}\\ \footnotesize(neuro-symbolic)};
\node[hyb, below=6mm of $(P)!0.5!(C)$] (PC) {\hyb{Prb+Ctl}\\ \footnotesize(Bayesian control)};
\node[hyb, below=6mm of $(E)!0.5!(W)$] (EW) {\hyb{Swm+Evo}\\ \footnotesize(swarm-evolution)};
% Edges
\draw (L) -- (LS); \draw (S) -- (LS);
\draw (L) -- (LC); \draw (C) -- (LC);
\draw (E) -- (EL); \draw (L) -- (EL);
\draw (L) -- (LY); \draw (Y) -- (LY);
\draw (P) -- (PC); \draw (C) -- (PC);
\draw (E) -- (EW); \draw (W) -- (EW);
\end{tikzpicture}
\caption{Hybrid “ancestry” diagram: short-name roots (solid) and exemplar hybrids (dashed).}
\label{fig:opt_hybrid_tree}
\end{figure}

doc/main.tex Normal file

@@ -0,0 +1,39 @@
\documentclass[12pt]{article}
\usepackage{amsmath,amsthm,mathtools}
\usepackage[a4paper,margin=1in]{geometry}
%\usepackage{times}
\usepackage[T1]{fontenc}
\usepackage{newtxtext,newtxmath} % unified serif + math fonts
\usepackage{microtype} % optional quality
% If you switch to LuaLaTeX/XeLaTeX later, use instead:
%\usepackage{fontspec}\setmainfont{TeX Gyre Termes}
\usepackage{natbib}
\usepackage{hyperref}
\usepackage{enumitem}
\usepackage{booktabs}
\usepackage{doi}
\usepackage{tikz}
\usetikzlibrary{arrows.meta,positioning,fit,calc}
\usepackage{pgfplots}
\usepgfplotslibrary{polar}
% Toggles and figure sizes (larger for readability)
\newif\iftwocol
\twocolfalse
\newcommand{\figureW}{0.95\textwidth}
\newcommand{\figureH}{0.62\textwidth}
\title{Beyond “Supervised vs.\ Unsupervised”:\\
An Operational-Premise Taxonomy for Artificial Intelligence}
\author{Wesley R.~Elsberry}
\date{\today}
\begin{document}
\maketitle
\input{body_shared}
\bibliographystyle{plainnat}
\bibliography{references}
\end{document}

doc/mkbib.py Normal file

@@ -0,0 +1,175 @@
#!/usr/bin/env python3
"""
BibTeX Aggregator
Recursively collects BibTeX entries from .bib files in a directory,
merges unique entries by BibTeX key, handles conflicts by selecting
the longest entry, and reports all options with the selected one tagged.
"""
import os
import sys
import argparse
import re
from datetime import datetime
from collections import defaultdict
def backup_existing_file(filepath):
"""Rename existing file by appending a timestamp."""
if os.path.exists(filepath):
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
name, ext = os.path.splitext(filepath)
backup_name = f"{name}_{timestamp}{ext}"
os.rename(filepath, backup_name)
print(f"Backed up existing {filepath} to {backup_name}")
def extract_bibtex_key(entry):
"""Extract the BibTeX key from an entry string."""
# Match @type{key, ...}
match = re.match(r'^@[a-zA-Z]+\s*{\s*([^,}\s]+)', entry.strip(), re.IGNORECASE)
if match:
return match.group(1)
return None
def parse_bib_file(filepath):
"""
Parse a .bib file and return a dict of BibTeX keys to (entry, filepath).
"""
entries = {}
try:
with open(filepath, 'r', encoding='utf-8') as f:
content = f.read()
except UnicodeDecodeError:
# Try with latin-1 if utf-8 fails
with open(filepath, 'r', encoding='latin-1') as f:
content = f.read()
# Split entries by '@' but keep the '@'
raw_entries = re.split(r'(?=@[a-zA-Z]+\s*{)', content)
for raw_entry in raw_entries:
if not raw_entry.strip() or not raw_entry.strip().startswith('@'):
continue
key = extract_bibtex_key(raw_entry)
if key:
entries[key] = (raw_entry, filepath)
return entries
def collect_bib_entries(root_dir):
"""Recursively collect all BibTeX entries from .bib files."""
all_entries = {}
conflicts = defaultdict(list)
for dirpath, _, filenames in os.walk(root_dir):
for filename in filenames:
if filename.lower().endswith('.bib'):
filepath = os.path.join(dirpath, filename)
try:
entries = parse_bib_file(filepath)
for key, (entry, source) in entries.items():
if key in all_entries:
conflicts[key].append((entry, source))
else:
all_entries[key] = (entry, source)
except Exception as e:
print(f"Warning: Skipping {filepath} due to error: {e}", file=sys.stderr)
# Process conflicts: select the longest entry and prepare conflict data
resolved_conflicts = {}
conflict_data = {}
for key in conflicts:
# Include the first occurrence that was in all_entries
candidates = [all_entries[key]] + conflicts[key]
# Select the longest entry by character count
selected = max(candidates, key=lambda x: len(x[0]))
all_entries[key] = selected
conflict_data[key] = {
'candidates': candidates,
'selected': selected
}
return all_entries, conflict_data
def write_bib_file(entries, output_file):
"""Write sorted BibTeX entries to output file."""
sorted_keys = sorted(entries.keys())
with open(output_file, 'w', encoding='utf-8') as f:
for key in sorted_keys:
entry, _ = entries[key]
f.write(entry)
if not entry.endswith('\n\n'):
f.write('\n\n')
def write_conflicts(conflicts, output_file):
"""Write conflict report to org-mode file, tagging the selected entry."""
if not conflicts:
return
with open(output_file, 'w', encoding='utf-8') as f:
f.write("#+TITLE: BibTeX Conflicts Report\n")
f.write("#+AUTHOR: BibTeX Aggregator\n")
f.write(f"#+DATE: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n")
for key in sorted(conflicts.keys()):
f.write(f"* Conflict for BibTeX key: {key}\n")
candidates = conflicts[key]['candidates']
selected = conflicts[key]['selected']
for i, (entry, source) in enumerate(candidates, 1):
is_selected = (entry == selected[0] and source == selected[1])
tag = " << SELECTED >>" if is_selected else ""
f.write(f"** Source {i}: {os.path.relpath(source, os.getcwd())}{tag}\n")
f.write("#+BEGIN_SRC bibtex\n")
f.write(entry.strip())
f.write("\n#+END_SRC\n\n")
def main():
parser = argparse.ArgumentParser(description="Aggregate BibTeX entries from multiple files.")
parser.add_argument('--working-dir', '-w', default='.',
help='Working directory to search for .bib files (default: .)')
parser.add_argument('--output', '-o', default='refs.bib',
help='Output filename (default: refs.bib)')
args = parser.parse_args()
working_dir = os.path.abspath(args.working_dir)
output_file = os.path.abspath(args.output)
conflicts_file = os.path.join(os.path.dirname(output_file), 'bib_conflicts.org')
if not os.path.exists(working_dir):
print(f"Error: Working directory '{working_dir}' does not exist.", file=sys.stderr)
sys.exit(1)
# Backup existing output file if it exists
backup_existing_file(output_file)
# Only backup conflicts file if it exists (don't create empty backup)
if os.path.exists(conflicts_file):
backup_existing_file(conflicts_file)
# Collect entries
print(f"Searching for .bib files in '{working_dir}'...")
entries, conflicts = collect_bib_entries(working_dir)
# Write output files
write_bib_file(entries, output_file)
write_conflicts(conflicts, conflicts_file)
# Summary
print(f"Written {len(entries)} unique entries to {output_file}")
if conflicts:
print(f"Found {len(conflicts)} conflicting keys; details in {conflicts_file}")
else:
print("No conflicts found.")
if __name__ == '__main__':
main()
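# Usage sketch (illustrative invocation; the script filename below is an
# assumption, not fixed by this file):
#   python aggregate_bib.py --working-dir doc --output doc/refs.bib
# On success this writes doc/refs.bib and, when duplicate keys disagree
# across source files, a doc/bib_conflicts.org report tagging the selected entry.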

765
doc/references.bib Normal file

@ -0,0 +1,765 @@
@book{AIMA4,
author = {Russell, Stuart Jonathan and Norvig, Peter},
title = {{Artificial Intelligence: A Modern Approach}},
year = {2020},
month = apr,
isbn = {978-0-13461099-3},
publisher = {Pearson},
address = {London, England, UK},
url = {https://books.google.com/books/about/Artificial_Intelligence.html?id=koFptAEACAAJ}
}
@book{AIMA4_AI,
author = {Stuart J. Russell and Peter Norvig},
title = {Artificial Intelligence: A Modern Approach},
edition = {4th},
year = {2020},
publisher = {Pearson},
isbn = {978-0134610993},
url = {https://aima.cs.berkeley.edu/},
bibsource = {ChatGPT},
eval = {}
}
@book{CIBook,
author = {Fogel, David B. and Liu, Derong and Keller, James M.},
title = {{Fundamentals of Computational Intelligence}},
publisher = {Wiley},
year = {2016},
month = jun,
isbn = {978-1-11921434-2},
doi = {10.1002/9781119214403}
}
@book{CIbook_AI,
author = {Ognjen Kukolj},
title = {Fundamentals of Computational Intelligence: Neural Networks, Fuzzy Systems, and Evolutionary Computation},
year = {2016},
publisher = {Wiley},
doi = {10.1002/9781119093306},
isbn = {9781119093269},
bibsource = {ChatGPT},
eval = {Messed up: author; Correct: title, year, publisher; Discrepant: isbn, doi}
}
@book{CIsurvey,
author = {James M. Keller and Derong Liu and David B. Fogel},
title = {Fundamentals of Computational Intelligence: Fuzzy, Neural, and Evolutionary Computation},
year = {2016},
publisher = {IEEE Press / Wiley},
bibsource = {ChatGPT},
isbn = {9781119214350}
}
@misc{CLIPS,
author = {{NASA/JSC}},
title = {CLIPS: C Language Integrated Production System},
year = {1986},
howpublished = {\url{https://www.clipsrules.net/}},
bibsource = {ChatGPT},
}
@article{CarpenterGrossberg1987ART1,
author = {Gail A. Carpenter and Stephen Grossberg},
title = {ART 1: Self-Organizing Pattern Recognition with a Stability-Plasticity Dilemma},
journal = {Neural Networks},
year = {1987},
volume = {1},
number = {1},
pages = {71--102},
doi = {10.1016/0893-6080(88)90020-3},
bibsource = {ChatGPT}
}
@article{CarpenterGrossberg1987ART2,
author = {Gail A. Carpenter and Stephen Grossberg},
title = {ART 2: Self-Organization of Stable Category Recognition Codes for Analog Input Patterns},
journal = {Applied Optics},
year = {1987},
volume = {26},
number = {23},
pages = {4919--4930},
doi = {10.1364/AO.26.004919},
bibsource = {ChatGPT}
}
@article{EFSTaxonomy,
author = {Oscar Cord{\'o}n and Francisco Herrera and Roberto Alcal{\'a} and Luis Magdalena},
title = {Revisiting Evolutionary Fuzzy Systems: Taxonomy, Applications, and Challenges},
journal = {Knowledge-Based Systems},
volume = {80},
pages = {109--121},
year = {2015},
doi = {10.1016/j.knosys.2015.01.013},
bibsource = {ChatGPT}
}
@misc{EUAnnex,
author = {{European Parliament and Council of the European Union}},
title = {Artificial Intelligence Act -- Annex~I: Artificial Intelligence Techniques and Approaches},
year = {2023},
howpublished = {\url{https://artificialintelligenceact.eu/annex-i-artificial-intelligence-techniques-and-approaches/}},
note = {Consolidated explanatory version accessed 2025-11-11},
bibsource = {ChatGPT}
}
@misc{EUAIAct,
title = {Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (AI Act)},
howpublished = {Official Journal of the European Union, 12 July 2024},
year = {2024},
url = {https://eur-lex.europa.eu/eli/reg/2024/1689/oj},
bibsource = {ChatGPT}
}
% --- Textbooks & Surveys
@article{FuzzySurvey,
author = {Alcala-Fdez, Jesus and Alonso, Jose M.},
title = {{A Survey of Fuzzy Systems Software: Taxonomy, Current Research Trends and Prospects}},
journal = {IEEE Trans. Fuzzy Syst.},
volume = {24},
number = {1},
pages = {40--56},
year = {2016},
month = feb,
issn = {1063-6706},
publisher = {IEEE (Institute of Electrical and Electronics Engineers)},
doi = {10.1109/TFUZZ.2015.2426212}
}
@article{FuzzySurvey-AI-badcite,
author = {Jes{\'u}s Alcal{\'a}-Fdez and Jos{\'e} M. Alonso},
title = {A Survey of Fuzzy Systems Software: Taxonomy, Trends, and Prospects},
journal = {Information Sciences},
volume = {377},
pages = {233--257},
year = {2017},
doi = {10.1016/j.ins.2016.10.040},
bibsource = {ChatGPT},
eval = {Wrong journal, year, volume, pages}
}
@article{Grossberg1976InstarOutstar,
author = {Stephen Grossberg},
title = {Adaptive Pattern Classification and Universal Recoding},
journal = {Biological Cybernetics},
year = {1976},
volume = {23},
pages = {121--134},
doi = {10.1007/BF00344744},
bibsource = {ChatGPT}
}
@article{Hart1968AStar,
author = {Peter E. Hart and Nils J. Nilsson and Bertram Raphael},
title = {A Formal Basis for the Heuristic Determination of Minimum Cost Paths},
journal = {IEEE Transactions on Systems Science and Cybernetics},
year = {1968},
volume = {4},
number = {2},
pages = {100--107},
doi = {10.1109/TSSC.1968.300136},
bibsource = {ChatGPT}
}
@book{Hebb1949,
author = {Hebb, Donald Olding},
title = {{The Organization of Behavior: A Neuropsychological Theory}},
year = {1949},
isbn = {978-0-47136727-7},
publisher = {Wiley},
address = {Hoboken, NJ, USA},
url = {https://books.google.com/books/about/The_Organization_of_Behavior.html?id=dZ0eDiLTwuEC}
}
@article{Hebb1949_AI,
author = {Donald O. Hebb},
title = {The Organization of Behavior: A Neuropsychological Theory},
journal = {Wiley},
year = {1949},
bibsource = {ChatGPT}
}
@article{Brown2020Dec,
author = {Brown, Richard E.},
title = {{Donald O. Hebb and the Organization of Behavior: 17 years in the writing}},
journal = {Molecular Brain},
volume = {13},
number = {1},
year = {2020},
month = dec,
issn = {1756-6606},
publisher = {Springer Nature},
doi = {10.1186/s13041-020-00567-8},
abstract = {The Organization of Behavior has played a significant part in the development of behavioural neuroscience for the
last 70 years. This book introduced the concepts of the “Hebb synapse”, the “Hebbian cell assembly” and the “Phase
sequence”. The most frequently cited of these is the Hebb synapse, but the cell assembly may be Hebb's most
important contribution. Even after 70 years, Hebb's theory is still relevant because it is a general framework for
relating behavior to synaptic organization through the development of neural networks. The Organization of
Behavior was Hebb's 40th publication. His first published papers in 1937 were on the innate organization of the
visual system and he first used the phrase “the organization of behavior” in 1938. However, Hebb wrote a number
of unpublished papers between 1932 and 1945 in which he developed the ideas published in The Organization of
Behavior. Thus, the concept of the neural organization of behavior was central to Hebb's thinking from the
beginning of his academic career. But his thinking about the organization of behavior in 1949 was different from
what it was between 1932 and 1937. This paper examines Hebb's early ideas on the neural basis of behavior and
attempts to trace the rather arduous series of steps through which he developed these ideas into the book that
was published as The Organization of Behavior. Using the 1946 typescript and Hebb's correspondence we can see a
number of changes made in the book before it was published. Finally, a number of issues arising from the book,
and the importance of the book today are discussed.}
}
@article{HintonSejnowski1985BM,
author = {Geoffrey E. Hinton and Terrence J. Sejnowski},
title = {Learning and Relearning in {B}oltzmann Machines},
journal = {Parallel Distributed Processing},
year = {1986},
volume = {1},
pages = {282--317},
bibsource = {ChatGPT}
}
@book{Holland1975Adaptation,
author = {John H. Holland},
title = {Adaptation in Natural and Artificial Systems},
year = {1975},
publisher = {University of Michigan Press},
bibsource = {ChatGPT}
}
@article{HopfieldTank1985TSP,
author = {John J. Hopfield and David W. Tank},
title = {``Neural'' Computation of Decisions in Optimization Problems},
journal = {Biological Cybernetics},
year = {1985},
volume = {52},
pages = {141--152},
doi = {10.1007/BF00339943},
bibsource = {ChatGPT}
}
@techreport{ISO22989,
  author = {{ISO/IEC}},
  title = {Information technology — Artificial intelligence — Concepts and terminology},
  institution = {ISO/IEC},
  number = {ISO/IEC 22989:2022},
  year = {2022},
  url = {https://www.iso.org/standard/74296.html},
  bibsource = {ChatGPT}
}
@techreport{ISO23053,
  author = {{ISO/IEC}},
  title = {Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)},
  institution = {ISO/IEC},
  number = {ISO/IEC 23053:2022},
  year = {2022},
  url = {https://www.iso.org/standard/74438.html},
  bibsource = {ChatGPT},
  eval = {TBD}
}
@article{Kalman1960,
author = {Kalman, R. E.},
title = {{A New Approach to Linear Filtering and Prediction Problems}},
journal = {J. Basic Eng.},
volume = {82},
number = {1},
pages = {35--45},
year = {1960},
month = mar,
issn = {0021-9223},
publisher = {American Society of Mechanical Engineers Digital Collection},
doi = {10.1115/1.3662552}
}
@article{Kalman1960_AI,
author = {R. E. Kalman},
title = {A New Approach to Linear Filtering and Prediction Problems},
journal = {Journal of Basic Engineering},
volume = {82},
number = {1},
pages = {35--45},
year = {1960},
doi = {10.1115/1.3662552},
bibsource = {ChatGPT},
eval = {Basic info and DOI correct, lacked month and publisher fields}
}
@inproceedings{KennedyEberhart1995PSO,
author = {James Kennedy and Russell Eberhart},
title = {Particle Swarm Optimization},
booktitle = {Proceedings of IEEE International Conference on Neural Networks},
year = {1995},
pages = {1942--1948},
doi = {10.1109/ICNN.1995.488968},
bibsource = {ChatGPT},
eval = {}
}
@article{KnillPouget2004,
author = {Knill, David C. and Pouget, Alexandre},
title = {{The Bayesian brain: the role of uncertainty in neural coding and computation}},
journal = {Trends Neurosci.},
volume = {27},
number = {12},
pages = {712--719},
year = {2004},
month = dec,
issn = {0166-2236},
publisher = {Elsevier Current Trends},
doi = {10.1016/j.tins.2004.10.007}
}
@article{KnillPouget2004_AI,
author = {David C. Knill and Alexandre Pouget},
title = {The Bayesian brain: the role of uncertainty in neural coding and computation},
journal = {Trends in Neurosciences},
volume = {27},
number = {12},
pages = {712--719},
year = {2004},
doi = {10.1016/j.tins.2004.10.007},
eval = {Basic info and DOI correct, missing month and publisher},
bibsource = {ChatGPT},
}
@book{Koza1992GP,
author = {John R. Koza},
title = {Genetic Programming: On the Programming of Computers by Means of Natural Selection},
year = {1992},
publisher = {MIT Press},
isbn = {978-0262111706},
bibsource = {ChatGPT},
eval = {}
}
@article{Mamdani1975FLC,
author = {Ebrahim H. Mamdani},
title = {An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller},
journal = {International Journal of Man-Machine Studies},
year = {1975},
volume = {7},
number = {1},
pages = {1--13},
doi = {10.1016/S0020-7373(75)80002-2},
bibsource = {ChatGPT},
eval = {}
}
@inproceedings{McDermott1982XCON,
author = {John McDermott},
title = {R1 (XCON) at {D}igital {E}quipment {C}orporation},
booktitle = {Proceedings of AAAI},
year = {1982},
bibsource = {ChatGPT},
eval = {}
}
@techreport{NISTAI2001,
author = {Theofanos, Mary Frances and Choong, Yee-Yin and Jensen, Theodore},
title = {{AI Use Taxonomy: A Human-Centered Approach}},
institution = {NIST},
year = {2024},
month = mar,
url = {https://www.nist.gov/publications/ai-use-taxonomy-human-centered-approach}
}
@techreport{NISTAI2001_AI,
author = {Mary F. Theofanos and others},
title = {AI Use Taxonomy: A Human-Centered Approach},
institution = {National Institute of Standards and Technology},
number = {NIST AI 200-1},
year = {2024},
month = {March},
url = {https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.200-1.pdf},
doi = {10.6028/NIST.AI.200-1},
bibsource = {ChatGPT},
eval = {Missing other authors, can't check various fields}
}
@techreport{Tabassi2023Jan,
  author = {Tabassi, Elham},
  title = {{Artificial Intelligence Risk Management Framework (AI RMF 1.0)}},
  institution = {NIST},
year = {2023},
month = jan,
url = {https://www.nist.gov/publications/artificial-intelligence-risk-management-framework-ai-rmf-10},
bibsource = {NIST website}
}
@techreport{NISTRMF,
author = {Elham Tabassi and others},
title = {Artificial Intelligence Risk Management Framework (AI RMF 1.0)},
institution = {National Institute of Standards and Technology},
number = {NIST AI 100-1},
year = {2023},
month = {January},
url = {https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf},
doi = {10.6028/NIST.AI.100-1},
bibsource = {ChatGPT},
eval = {Pretty good, hard to verify details}
}
@inproceedings{Nilsson1980STRIPS,
author = {Nils J. Nilsson},
title = {Principles of Artificial Intelligence (STRIPS overview)},
booktitle = {Morgan Kaufmann},
year = {1980},
bibsource = {ChatGPT},
eval = {}
}
@techreport{OECDClass,
title = {OECD Framework for the Classification of AI Systems},
institution = {OECD},
year = {2022},
url = {https://oecd.ai/en/classification},
bibsource = {ChatGPT},
eval = {}
}
@article{Oja1982,
author = {Oja, Erkki},
title = {{Simplified neuron model as a principal component analyzer}},
journal = {J. Math. Biol.},
volume = {15},
number = {3},
pages = {267--273},
year = {1982},
month = nov,
issn = {1432-1416},
publisher = {Springer-Verlag},
doi = {10.1007/BF00275687}
}
@article{Oja1982_AI,
author = {Erkki Oja},
title = {A Simplified Neuron Model as a Principal Component Analyzer},
journal = {Journal of Mathematical Biology},
volume = {15},
number = {3},
pages = {267--273},
year = {1982},
doi = {10.1007/BF00275687},
eval = {Correct: all; Missing: publisher},
bibsource = {ChatGPT},
}
@article{Bittner1963,
author = {Bittner, L.},
title = {{L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze, E. F. Mishechenko, The Mathematical Theory of Optimal Processes. VIII + 360 S. New York/London 1962. John Wiley {\&} Sons. Preis 90/{\textendash}}},
journal = {Zamm-zeitschrift Fur Angewandte Mathematik Und Mechanik},
year = {1963},
url = {https://www.semanticscholar.org/paper/L.-S.-Pontryagin%2C-V.-G.-Boltyanskii%2C-R.-V.-E.-F.-of-Bittner/7984b664bdfc1828050d1cb1a06d164a5fe64dd8}
}
@book{Pontryagin2018May,
author = {Pontryagin, L. S.},
title = {{Mathematical Theory of Optimal Processes}},
year = {2018},
month = may,
isbn = {978-0-20374931-9},
publisher = {Taylor {\&} Francis},
address = {Andover, England, UK},
doi = {10.1201/9780203749319}
}
@book{Pontryagin1962,
author = {L. S. Pontryagin and V. G. Boltyanskii and R. V. Gamkrelidze and E. F. Mishchenko},
title = {The Mathematical Theory of Optimal Processes},
publisher = {Wiley Interscience},
year = {1962},
eval = {Actually closer than several current sources online; missing "Wiley" from "Wiley Interscience"},
bibsource = {ChatGPT},
}
@article{Price1970,
author = {Price, George R.},
title = {{Selection and Covariance}},
journal = {Nature},
volume = {227},
pages = {520--521},
year = {1970},
month = aug,
issn = {1476-4687},
publisher = {Nature Publishing Group},
doi = {10.1038/227520a0}
}
@article{Price1970_AI,
author = {George R. Price},
title = {Selection and Covariance},
journal = {Nature},
volume = {227},
pages = {520--521},
year = {1970},
doi = {10.1038/227520a0},
eval = {Missing: month, publisher},
bibsource = {ChatGPT},
}
@article{Robinson1965Resolution,
author = {Robinson, J. A.},
title = {{A Machine-Oriented Logic Based on the Resolution Principle}},
journal = {J. ACM},
volume = {12},
number = {1},
pages = {23--41},
year = {1965},
month = jan,
issn = {0004-5411},
publisher = {Association for Computing Machinery},
doi = {10.1145/321250.321253}
}
@article{Robinson1965Resolution_AI,
author = {J. A. Robinson},
title = {A Machine-Oriented Logic Based on the Resolution Principle},
journal = {Journal of the ACM},
volume = {12},
number = {1},
pages = {23--41},
year = {1965},
doi = {10.1145/321250.321253},
eval = {Missing: month, issn, publisher},
bibsource = {ChatGPT},
}
@article{Rumelhart1986Backprop,
author = {David E. Rumelhart and Geoffrey E. Hinton and Ronald J. Williams},
title = {Learning Representations by Back-Propagating Errors},
journal = {Nature},
year = {1986},
volume = {323},
pages = {533--536},
doi = {10.1038/323533a0},
bibsource = {ChatGPT},
eval = {}
}
@misc{SuttonBarto2025,
author = {Sutton, Richard S. and Barto, Andrew G.},
title = {{Reinforcement Learning, second edition: An Introduction (Adaptive Computation and Machine Learning series)}},
year = {2025},
month = nov,
isbn = {978-0-26203924-6},
publisher = {Bradford Books},
note = {[Online; accessed 12. Nov. 2025]},
url = {https://www.amazon.com/Reinforcement-Learning-Introduction-Adaptive-Computation/dp/0262039249},
bibsource = {Amazon.com book page}
}
@book{SuttonBarto2018_AI,
author = {Richard S. Sutton and Andrew G. Barto},
title = {Reinforcement Learning: An Introduction},
edition = {2nd},
year = {2018},
publisher = {MIT Press},
url = {https://mitpress.mit.edu/9780262039246/reinforcement-learning/},
isbn = {9780262039246},
eval = {Mismatch on publisher, year},
bibsource = {ChatGPT},
}
@article{TaylorJonker1978,
author = {Taylor, Peter D. and Jonker, Leo B.},
title = {{Evolutionary stable strategies and game dynamics}},
journal = {Math. Biosci.},
volume = {40},
number = {1},
pages = {145--156},
year = {1978},
month = jul,
issn = {0025-5564},
publisher = {Elsevier},
doi = {10.1016/0025-5564(78)90077-9}
}
@article{TaylorJonker1978_AI,
author = {Peter D. Taylor and Leo B. Jonker},
title = {Evolutionary stable strategies and game dynamics},
journal = {Mathematical Biosciences},
volume = {40},
number = {1--2},
pages = {145--156},
year = {1978},
doi = {10.1016/0025-5564(78)90077-9},
bibsource = {ChatGPT},
eval = {Missing: month, issn, publisher}
}
@article{TodorovJordan2002,
author = {Todorov, Emanuel and Jordan, Michael I.},
title = {{Optimal feedback control as a theory of motor coordination}},
journal = {Nat. Neurosci.},
volume = {5},
pages = {1226--1235},
year = {2002},
month = nov,
issn = {1546-1726},
publisher = {Nature Publishing Group},
doi = {10.1038/nn963}
}
@article{TodorovJordan2002_AI,
author = {Emanuel Todorov and Michael I. Jordan},
title = {Optimal feedback control as a theory of motor coordination},
journal = {Nature Neuroscience},
volume = {5},
number = {11},
pages = {1226--1235},
year = {2002},
doi = {10.1038/nn963},
bibsource = {ChatGPT},
eval = {Missing: number, publisher, month, issn; otherwise good}
}
@article{WidrowHoff1960LMS,
author = {Bernard Widrow and Marcian E. Hoff},
title = {Adaptive Switching Circuits},
journal = {1960 IRE WESCON Convention Record},
year = {1960},
volume = {4},
pages = {96--104},
bibsource = {ChatGPT},
eval = {}
}
@article{Wolpert1997,
author = {Wolpert, D. H. and Macready, W. G.},
title = {{No free lunch theorems for optimization}},
journal = {IEEE Trans. Evol. Comput.},
volume = {1},
number = {1},
pages = {67--82},
year = {1997},
month = apr,
publisher = {IEEE},
doi = {10.1109/4235.585893}
}
@article{Wolpert1997_AI,
author = {David H. Wolpert and William G. Macready},
title = {No Free Lunch Theorems for Optimization},
journal = {IEEE Transactions on Evolutionary Computation},
volume = {1},
number = {1},
pages = {67--82},
year = {1997},
doi = {10.1109/4235.585893},
bibsource = {ChatGPT},
eval = {Missing: month, publisher; otherwise good}
}
@article{Zadeh1965Fuzzy,
author = {Lotfi A. Zadeh},
title = {Fuzzy Sets},
journal = {Information and Control},
year = {1965},
volume = {8},
number = {3},
pages = {338--353},
doi = {10.1016/S0019-9958(65)90241-X},
bibsource = {ChatGPT},
eval = {}
}
@article{FarmerPerelson1986,
author = {J. Doyne Farmer and Norman H. Packard and Alan S. Perelson},
title = {The immune system, adaptation, and machine learning},
journal = {Physica D: Nonlinear Phenomena},
year = {1986},
volume = {22},
number = {1-3},
pages = {187--204},
doi = {10.1016/0167-2789(86)90240-X}
}
@inproceedings{Forrest1994NSA,
author = {Stephanie Forrest and Alan S. Perelson and Lawrence Allen and Rajesh Cherukuri},
title = {Self-Nonself Discrimination in a Computer},
booktitle = {Proceedings of the 1994 IEEE Symposium on Security and Privacy},
year = {1994},
pages = {202--212},
doi = {10.1109/SECPRI.1994.305366}
}
@book{DeCastroTimmis2002Book,
author = {Leandro N. de Castro and Jamie Timmis},
title = {Artificial Immune Systems: A New Computational Intelligence Approach},
publisher = {Springer},
series = {Natural Computing Series},
year = {2002},
isbn = {978-1-85233-594-6}
}
@article{DeCastroVonZuben2002Clonal,
author = {Leandro N. de Castro and Fernando J. Von Zuben},
title = {Learning and Optimization Using the Clonal Selection Principle},
journal = {IEEE Transactions on Evolutionary Computation},
year = {2002},
volume = {6},
number = {3},
pages = {239--251},
doi = {10.1109/TEVC.2002.1011539}
}
@inproceedings{DeCastroVonZuben2001aiNet,
  author = {Leandro N. de Castro and Fernando J. Von Zuben},
  title = {The aiNet: An Artificial Immune Network for Data Analysis},
  booktitle = {Proceedings of ICANN (LNCS)},
year = {2001},
volume = {2130},
pages = {395--404},
publisher = {Springer},
doi = {10.1007/3-540-44668-0_58}
}
@article{Timmis2008Survey,
author = {Jamie Timmis and Mark Neal and Jonathan Hunt},
title = {An artificial immune system for data analysis},
journal = {Biosystems},
year = {2000},
volume = {55},
number = {1-3},
pages = {143--150},
doi = {10.1016/S0303-2647(99)00093-5}
}
@inproceedings{Greensmith2005DCA,
author = {Julie Greensmith and Uwe Aickelin and Steve Cayzer},
title = {Introducing Dendritic Cells as a Novel Immune-Inspired Algorithm for Anomaly Detection},
booktitle = {ICARIS 2005: Artificial Immune Systems},
series = {LNCS},
volume = {3627},
pages = {153--167},
publisher = {Springer},
year = {2005},
doi = {10.1007/11536444_12}
}
@inproceedings{Greensmith2007DCA,
  author = {Julie Greensmith and Uwe Aickelin and Jamie Twycross},
  title = {Articulation and Clarification of the Dendritic Cell Algorithm},
  booktitle = {Proceedings of ICARIS},
year = {2006}
}
@inproceedings{Dasgupta1999AIS,
  author = {Dipankar Dasgupta},
  title = {An Overview of Artificial Immune Systems and Their Applications},
  booktitle = {Proceedings of the GECCO Workshop on Artificial Immune Systems and Their Applications},
year = {1999}
}

22
doc/related-work.tex Normal file

@ -0,0 +1,22 @@
% ---------------------------
\section{Related Work: Existing Taxonomies and Frameworks}
% ---------------------------
Standards bodies and policy groups have invested heavily in AI definitions, lifecycle models, and governance instruments. However, none provides a compact, mechanism-centric taxonomy spanning \Lrn, \Evo, \Sym, \Prb, \Sch, \Ctl, and \Swm, nor an explicit grammar for hybrids.
\paragraph{Standards and terminology.}
ISO/IEC 22989 standardizes terms and core concepts for AI across stakeholders, serving as a definitional foundation rather than a technique taxonomy. ISO/IEC 23053 offers a functional-block view of \emph{machine-learning-based} AI systems (data, training, inference, monitoring), which is valuable architecturally but limited to ML and therefore excludes non-ML pillars such as symbolic reasoning, control/estimation, and swarm/evolutionary computation \citep{ISO22989,ISO23053}.
\paragraph{Risk and management frameworks.}
NIST's AI Risk Management Framework (AI RMF 1.0) provides an implementation-agnostic process for managing AI risks (govern, map, measure, manage). Its companion \emph{AI Use Taxonomy} classifies human--AI task interactions and use patterns. Both are intentionally technique-agnostic: they can apply to any implementation class, but do not sort systems by operative mechanism \citep{NISTRMF,NISTAI2001}.
\paragraph{Policy classification tools.}
The OECD Framework for the Classification of AI Systems organizes systems along multi-dimensional policy axes (People \& Planet, Economic Context, Data \& Input, AI Model, Task \& Output). This is a powerful policy characterization instrument, yet it remains descriptive and multi-axis rather than a compact mechanism taxonomy with hybrid syntax \citep{OECDClass}.
\paragraph{Regulatory regimes.}
The EU Artificial Intelligence Act introduces risk-based classes (e.g., prohibited, high-risk, limited, minimal) and obligations, largely orthogonal to implementation specifics. Technique details matter for \emph{compliance evidence}, but the Act does not define a canonical implementation taxonomy \citep{EUAIAct}.
\paragraph{Academic precedents and surveys.}
The textbook tradition organizes AI by substantive pillars—search/planning, knowledge/logic, probabilistic reasoning, learning, and agents—closely aligning with the mechanism families in this paper but without proposing a stable naming code or formal hybrid grammar \citep{AIMA4}. Reinforcement learning texts formalize optimization and value iteration for \Lrn/\Sch~couplings \citep{SuttonBarto2018}. Classical theory anchors \Prb~(\citealp{KnillPouget2004}), \Ctl~(\citealp{Kalman1960,Pontryagin1962,TodorovJordan2002}), and foundational dynamics for \Evo~(\citealp{Price1970,TaylorJonker1978}). Learning rules for \Lrn~include Hebbian and Oja's formulations \citep{Hebb1949,Oja1982}, while resolution proofs formalize \Sym~\citep{Robinson1965Resolution}. No-Free-Lunch results motivate preserving multiple mechanisms rather than collapsing them into a single “optimization” bucket \citep{Wolpert1997}.
\paragraph{Gap and contribution.}
Taken together, these works motivate \emph{two layers}: (i) policy/lifecycle/risk instruments that are technique-agnostic and (ii) a compact, biologically grounded \emph{implementation taxonomy} with explicit hybrid composition. OPT fills the second layer with seven frozen roots and a grammar for hybrids, designed to interface cleanly with the first layer.


@ -0,0 +1,24 @@
\begin{WideTab}[t]
\centering
\caption{Comparison of OPT with existing standards, policy frameworks, and textbook pillars.}
\renewcommand{\arraystretch}{1.12}
\begin{tabular}{@{}p{2.9cm}p{3.1cm}p{3.2cm}p{2.6cm}p{3.0cm}@{}}
\toprule
\textbf{Framework / Source} & \textbf{Primary Scope} & \textbf{Unit of Classification} & \textbf{Technique Coverage} & \textbf{Hybrid Handling / Intended Use} \\
\midrule
\textbf{OPT (this work)} & Implementation taxonomy & \textit{Operative mechanism} (\Lrn,\ \Evo,\ \Sym,\ \Prb,\ \Sch,\ \Ctl,\ \Swm) with composition grammar & Cross-paradigm (learning, symbolic, probabilistic, search, control, swarm, evolutionary) & Explicit hybrids via \hyb{+}, \hyb{/}, \hyb{\{\,\}}, \hyb{[$\,\rightarrow\,$]}; designed to interface with risk/process frameworks \\
\addlinespace[3pt]
ISO/IEC 22989:2022 \citep{ISO22989} & Concepts \& terminology & Vocabulary / definitions & Technique-agnostic & No hybrid grammar; supports common language across stakeholders \\
ISO/IEC 23053:2022 \citep{ISO23053} & ML system architecture & Functional blocks (data, training, inference, monitoring) & ML-centric; excludes non-ML pillars (e.g., \Sym,\ \Ctl,\ \Swm) & No explicit hybrid mechanism model; system design/process lens \\
NIST AI RMF 1.0 \citep{NISTRMF} & Risk management & Risk functions (Govern, Map, Measure, Manage) & Technique-agnostic & No mechanism taxonomy; governance and assurance guidance \\
NIST AI 200-1 \citep{NISTAI2001} & Use taxonomy & Human--AI task activities & Technique-agnostic & No hybrids; categorizes use contexts for evaluation \\
OECD AI Classification \citep{OECDClass} & Policy characterization & Multi-axis profile (context, data, model, task) & Broad; includes an “AI model” axis but not a formal mechanism taxonomy & No hybrid grammar; policy comparison and statistics \\
EU AI Act \citep{EUAIAct} & Regulation (risk-based) & Risk class (prohibited/high/limited/minimal) & Technique-agnostic & Hybrids irrelevant; compliance and obligations \\
AIMA (Russell \& Norvig) \citep{AIMA4} & Textbook organization & Pillars (search/planning, logic, probabilistic reasoning, learning, agents) & Broad coverage; closest to mechanism families & No standard naming or hybrid code; educational structure \\
\bottomrule
\end{tabular}
\vspace{4pt}
\footnotesize \textit{Notes.} OPT supplies a compact, biologically grounded \emph{implementation} taxonomy with a formal hybrid composition code. Standards and policy frameworks remain essential and complementary for vocabulary, lifecycle, risk, management, and regulatory obligations, but they are technique-agnostic or ML-specific and do not provide a mechanism-level naming scheme.
\label{tab:opt_vs_frameworks}
\end{WideTab}

487
doc/verification-tasks.tex Normal file

@ -0,0 +1,487 @@
\documentclass[12pt]{article}
\usepackage{longtable}
\usepackage{amsmath,amsthm,mathtools}
\usepackage[a4paper,margin=1in]{geometry}
%\usepackage{times}
\usepackage[T1]{fontenc}
\usepackage{newtxtext,newtxmath} % unified serif + math fonts
\usepackage{microtype} % optional quality
%(If you switch to LuaLaTeX/XeLaTeX later, instead use
%\usepackage{fontspec}\setmainfont{TeX Gyre Termes}.)
\usepackage{natbib}
\usepackage{hyperref}
\usepackage{enumitem}
\usepackage{booktabs}
\usepackage{doi}
\usepackage{tikz}
\usetikzlibrary{arrows.meta,positioning,fit,calc}
\usepackage{pgfplots}
\usepgfplotslibrary{polar}
\usepackage{xcolor}
\usepackage{fancyvrb}
\usepackage{framed}
\definecolor{shadecolor}{RGB}{243,243,243}
% Shaded block (Pandoc-style)
\newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}}
% Highlighting as a true verbatim env (no trailing-token issues)
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
\providecommand{\NormalTok}[1]{#1}
\providecommand{\ExtensionTok}[1]{#1}
\providecommand{\KeywordTok}[1]{#1}
\providecommand{\StringTok}[1]{#1}
\providecommand{\CommentTok}[1]{#1}
\providecommand{\FunctionTok}[1]{#1}
\newcommand{\Lrn}{\textbf{Lrn}} % Learnon — Parametric learning
\newcommand{\Evo}{\textbf{Evo}} % Evolon — Population adaptation
\newcommand{\Sym}{\textbf{Sym}} % Symbion — Symbolic inference
\newcommand{\Prb}{\textbf{Prb}} % Probion — Probabilistic inference
\newcommand{\Sch}{\textbf{Sch}} % Scholon — Search & planning
\newcommand{\Ctl}{\textbf{Ctl}} % Controlon — Control & estimation
\newcommand{\Swm}{\textbf{Swm}} % Swarmon — Collective/swarm
\newcommand{\hyb}[1]{\textsc{#1}} % hybrid spec styling (e.g., \hyb{Lrn+Sch})
% Toggles and figure sizes (larger for readability)
\newif\iftwocol
\twocolfalse
\newcommand{\figureW}{0.95\textwidth}
\newcommand{\figureH}{0.62\textwidth}
\begin{document}
\hypertarget{structural-and-citation-integrity-checks}{%
\subsection{1. Structural and Citation Integrity
Checks}\label{structural-and-citation-integrity-checks}}
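Several of the checks below can be scripted. A minimal pass (illustrative filenames; assumes \texttt{latexmk} and GNU \texttt{grep} are available) that surfaces unresolved citations and cross-references is:
\begin{Verbatim}
# rebuild with bibliography, then scan the log for unresolved keys
latexmk -pdf -bibtex main.tex
grep -n "Citation" main.log
grep -n "Reference" main.log
\end{Verbatim}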
\begin{longtable}[]{@{}lll@{}}
\toprule
\begin{minipage}[b]{0.08\columnwidth}\raggedright
Goal\strut
\end{minipage} & \begin{minipage}[b]{0.67\columnwidth}\raggedright
Verification Action\strut
\end{minipage} & \begin{minipage}[b]{0.15\columnwidth}\raggedright
Tool / Method\strut
\end{minipage}\tabularnewline
\midrule
\endhead
\begin{minipage}[t]{0.08\columnwidth}\raggedright
\textbf{All citations present}\strut
\end{minipage} & \begin{minipage}[t]{0.67\columnwidth}\raggedright
Parse \texttt{.aux} or \texttt{.log} for ``Citation undefined''
warnings.\strut
\end{minipage} & \begin{minipage}[t]{0.15\columnwidth}\raggedright
\texttt{latexmk\ -bibtex} and
\texttt{grep\ \textquotesingle{}Citation\textquotesingle{}\ main.log}\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.08\columnwidth}\raggedright
\textbf{BibTeX completeness}\strut
\end{minipage} & \begin{minipage}[t]{0.67\columnwidth}\raggedright
Validate every \texttt{\textbackslash{}cite\{key\}} has a matching
\texttt{@entry} with fields \texttt{author}, \texttt{title},
\texttt{year}, and a venue (\texttt{journal} or \texttt{booktitle}).\strut
\end{minipage} & \begin{minipage}[t]{0.15\columnwidth}\raggedright
\texttt{bibtool\ -s\ -d\ -r\ check.rsc\ references.bib}\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.08\columnwidth}\raggedright
\textbf{Citation relevance}\strut
\end{minipage} & \begin{minipage}[t]{0.67\columnwidth}\raggedright
Manually verify that each cited source supports the statement. This
includes: (1) standards mentioned in Related Work; (2) foundational
theoretical citations in mathematical sections; (3) classic AI
exemplars.\strut
\end{minipage} & \begin{minipage}[t]{0.15\columnwidth}\raggedright
Reading verification checklist (see below)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.08\columnwidth}\raggedright
\textbf{Self-consistency}\strut
\end{minipage} & \begin{minipage}[t]{0.67\columnwidth}\raggedright
Check that each reference to a class (\Lrn, \Evo, \ldots) matches the
definitions and equations in §3--5.\strut
\end{minipage} & \begin{minipage}[t]{0.15\columnwidth}\raggedright
Full-text search for ``Lrn'', ``Evo'', etc.\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.08\columnwidth}\raggedright
\textbf{Cross-referencing}\strut
\end{minipage} & \begin{minipage}[t]{0.67\columnwidth}\raggedright
Confirm all figures/tables/sections compile without ``??''.\strut
\end{minipage} & \begin{minipage}[t]{0.15\columnwidth}\raggedright
\texttt{latexmk} warnings summary\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.08\columnwidth}\raggedright
\textbf{DOI and URL validation}\strut
\end{minipage} & \begin{minipage}[t]{0.67\columnwidth}\raggedright
Run a link checker or Python script (e.g., \texttt{requests.head()}) to
verify DOIs/URLs resolve.\strut
\end{minipage} & \begin{minipage}[t]{0.15\columnwidth}\raggedright
Custom Python link-check script (see Verification Action)\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
\emph{Checklist for manual relevance verification.} For each citation:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
%\tightlist
\item
Read the cited paragraph and the cited source's abstract.
\item
Confirm it is \textbf{supporting evidence}, not merely tangential.
\item
If a reference covers multiple claims, annotate page/section numbers
(e.g., \texttt{\textbackslash{}citep{[}§2{]}\{ISO23053\}}).
\end{enumerate}
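The log-parsing and BibTeX-completeness checks above can be partially automated. A minimal sketch follows; the filenames (\texttt{main.log}, \texttt{main.tex}, \texttt{references.bib}) are placeholders, and the regular expressions cover only the common \texttt{\textbackslash{}cite}/\texttt{\textbackslash{}citep}/\texttt{\textbackslash{}citet} forms, not every citation package.

```python
import re

def undefined_citations(log_text):
    """Keys that LaTeX reported as undefined in the .log file."""
    return re.findall(r"Citation `([^']+)'[^\n]*undefined", log_text)

def cited_keys(tex_text):
    """Keys used in \\cite, \\citep, \\citet (optional arguments allowed)."""
    keys = set()
    for m in re.finditer(r"\\cite[pt]?\*?(?:\[[^\]]*\])*\{([^}]+)\}", tex_text):
        keys.update(k.strip() for k in m.group(1).split(","))
    return keys

def bib_keys(bib_text):
    """Entry keys declared in a .bib file."""
    return set(re.findall(r"@\w+\s*\{\s*([^,\s]+)\s*,", bib_text))

def missing_entries(tex_text, bib_text):
    """Cited keys with no matching @entry in the bibliography."""
    return sorted(cited_keys(tex_text) - bib_keys(bib_text))
```

In practice one would read \texttt{main.tex}, \texttt{references.bib}, and \texttt{main.log} from disk and fail the build if \texttt{missing\_entries} or \texttt{undefined\_citations} is nonempty.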
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{conceptual-and-taxonomic-soundness-review}{%
\subsection{2. Conceptual and Taxonomic Soundness
Review}\label{conceptual-and-taxonomic-soundness-review}}
\begin{longtable}[]{@{}lll@{}}
\toprule
\begin{minipage}[b]{0.14\columnwidth}\raggedright
Aspect\strut
\end{minipage} & \begin{minipage}[b]{0.58\columnwidth}\raggedright
Verification Task\strut
\end{minipage} & \begin{minipage}[b]{0.20\columnwidth}\raggedright
Reviewer Type\strut
\end{minipage}\tabularnewline
\midrule
\endhead
\begin{minipage}[t]{0.14\columnwidth}\raggedright
\textbf{Completeness of mechanism coverage}\strut
\end{minipage} & \begin{minipage}[t]{0.58\columnwidth}\raggedright
Verify that every major AI approach (symbolic, probabilistic,
connectionist, evolutionary, control, swarm, search/planning) maps
cleanly to exactly one OPT root.\strut
\end{minipage} & \begin{minipage}[t]{0.20\columnwidth}\raggedright
Independent AI domain experts (1 per subfield)\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.14\columnwidth}\raggedright
\textbf{Hybrid expressiveness}\strut
\end{minipage} & \begin{minipage}[t]{0.58\columnwidth}\raggedright
Test that real systems (e.g., AlphaZero, Neuroevolution, LQR-RL) can be
expressed without ambiguity.\strut
\end{minipage} & \begin{minipage}[t]{0.20\columnwidth}\raggedright
Practicing researchers; optionally a small hackathon trial\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.14\columnwidth}\raggedright
\textbf{Biological correspondence}\strut
\end{minipage} & \begin{minipage}[t]{0.58\columnwidth}\raggedright
Check that cited biological analogs (plasticity, selection, control,
etc.) are correctly represented and not overstated.\strut
\end{minipage} & \begin{minipage}[t]{0.20\columnwidth}\raggedright
Cognitive science / computational neuroscience reviewer\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.14\columnwidth}\raggedright
\textbf{Orthogonality of attributes}\strut
\end{minipage} & \begin{minipage}[t]{0.58\columnwidth}\raggedright
Validate that secondary descriptors (Rep, Obj, Time, etc.) are indeed
orthogonal to mechanism choice.\strut
\end{minipage} & \begin{minipage}[t]{0.20\columnwidth}\raggedright
Systems or ML pipeline specialists\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.14\columnwidth}\raggedright
\textbf{Cross-domain coherence}\strut
\end{minipage} & \begin{minipage}[t]{0.58\columnwidth}\raggedright
Ensure that terms like ``learning'', ``adaptation'', and ``control'' are
used consistently across sections.\strut
\end{minipage} & \begin{minipage}[t]{0.20\columnwidth}\raggedright
Technical editor\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{technical-and-mathematical-verification}{%
\subsection{3. Technical and Mathematical
Verification}\label{technical-and-mathematical-verification}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
\textbf{Equation sanity check}
\begin{itemize}
%\tightlist
\item
Verify every equation's notation is defined in context.
\item
Units and symbols are consistent (e.g., $V$, $J$, $\theta$,
$p(z \mid x)$).
\item
Biological analogs are correctly mapped to canonical forms (e.g.,
Hebb's rule $\rightarrow$ Oja's normalized Hebbian rule).
\end{itemize}
\item
\textbf{Graphical inspection}
\begin{itemize}
%\tightlist
\item
TikZ/PGF figures render cleanly; legends match table abbreviations.
\item
Radar plot axes correspond to the six orthogonal attributes
described.
\end{itemize}
\item
\textbf{Reproducible build}
\begin{itemize}
%\tightlist
\item
\texttt{latexmk\ -pdf} or the Makefile runs without intervention.
\item
No proprietary fonts, deprecated packages, or local includes.
\end{itemize}
\end{enumerate}
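The cross-referencing and reproducible-build items can be spot-checked by scanning the build log for the standard LaTeX warning strings; a sketch (warning patterns are the stock \texttt{pdflatex}/\texttt{latexmk} messages, not an exhaustive list):

```python
import re

# Common "build is not clean" warnings emitted by LaTeX.
WARNING_PATTERNS = {
    "undefined reference": r"LaTeX Warning: Reference `[^']+' undefined",
    "undefined citation": r"LaTeX Warning: Citation `[^']+'[^\n]*undefined",
    "multiply defined label": r"LaTeX Warning: Label `[^']+' multiply defined",
    "missing character": r"Missing character:",  # e.g., glyphs absent from the font
}

def scan_log(log_text):
    """Count occurrences of each warning category in a LaTeX log."""
    return {name: len(re.findall(pat, log_text))
            for name, pat in WARNING_PATTERNS.items()}

def build_is_clean(log_text):
    """True when none of the tracked warnings appear."""
    return not any(scan_log(log_text).values())
```

Running this after \texttt{latexmk -pdf} and failing on a non-clean log catches the ``??'' cross-references before submission.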
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{terminological-and-semantic-validation}{%
\subsection{4. Terminological and Semantic
Validation}\label{terminological-and-semantic-validation}}
Because this paper introduces new terms (Learnon, Evolon, etc.), perform:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
%\tightlist
\item
\textbf{Cross-linguistic sanity check} --- verify none of the coined
names have misleading or offensive meanings in major languages
(English, French, German, Japanese, Chinese).
\item
\textbf{Search collision audit} --- check that ``Learnon'', ``Evolon'',
etc. are not registered trademarks, commercial products, or prior AI
system names.
\item
\textbf{Ontology compatibility} --- test mapping to existing
ontologies (e.g., ISO/IEC 22989 concept hierarchy, Wikidata entries).
\item
\textbf{Glossary consistency} --- confirm that the definitions in the
paper, appendix, and metadata (e.g., JSON schema) match exactly.
\end{enumerate}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{external-critical-review-red-team}{%
\subsection{5. External Critical Review (``Red
Team'')}\label{external-critical-review-red-team}}
To pre-empt ``easy takedowns,'' convene a small red-team review:
\begin{longtable}[]{@{}ll@{}}
\toprule
\begin{minipage}[b]{0.30\columnwidth}\raggedright
Reviewer Type\strut
\end{minipage} & \begin{minipage}[b]{0.64\columnwidth}\raggedright
What to Challenge\strut
\end{minipage}\tabularnewline
\midrule
\endhead
\begin{minipage}[t]{0.30\columnwidth}\raggedright
\textbf{Symbolic AI veteran}\strut
\end{minipage} & \begin{minipage}[t]{0.64\columnwidth}\raggedright
``Does OPT misrepresent classical expert systems?''\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.30\columnwidth}\raggedright
\textbf{Evolutionary computation expert}\strut
\end{minipage} & \begin{minipage}[t]{0.64\columnwidth}\raggedright
``Is \Evo~really separable from \Swm?''\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.30\columnwidth}\raggedright
\textbf{Control theorist}\strut
\end{minipage} & \begin{minipage}[t]{0.64\columnwidth}\raggedright
``Does \Ctl~belong as a distinct root or as applied
optimization?''\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.30\columnwidth}\raggedright
\textbf{Probabilistic modeller}\strut
\end{minipage} & \begin{minipage}[t]{0.64\columnwidth}\raggedright
``Is \Prb~too coarse --- should inference and generative modelling
split?''\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.30\columnwidth}\raggedright
\textbf{Policy/standards liaison}\strut
\end{minipage} & \begin{minipage}[t]{0.64\columnwidth}\raggedright
``Can regulators or ISO easily map this taxonomy onto existing
frameworks?''\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
Collect objections and prepare written responses (as supplementary
material if needed).
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{metadata-and-interoperability-testing}{%
\subsection{6. Metadata and Interoperability
Testing}\label{metadata-and-interoperability-testing}}
\begin{itemize}
\item
Validate the JSON Schema for OPT-Code with a few sample systems.
Example validation command:
\begin{Shaded}
\begin{Highlighting}[]
ajv validate -s opt-schema.json -d "samples/*.json"
\end{Highlighting}
\end{Shaded}
\item
Ensure round-trip integrity: parsing a valid OPT string and
re-rendering it should be idempotent.
\item
Confirm metadata examples (e.g., \texttt{OPT=Evo/Lrn+Ctl}) match
systems described in tables.
\end{itemize}
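The round-trip integrity check can be sketched as follows. The grammar assumed here (\texttt{/}-separated segments of \texttt{+}-joined class codes, as in \texttt{Evo/Lrn+Ctl}) is illustrative, based on the examples in the tables, not a normative specification of OPT-Code.

```python
# The seven OPT root codes defined in the paper.
ROOTS = {"Lrn", "Evo", "Sym", "Prb", "Sch", "Ctl", "Swm"}

def parse_opt(code):
    """Parse 'Evo/Lrn+Ctl' into [['Evo'], ['Lrn', 'Ctl']]; reject unknown roots."""
    segments = [seg.split("+") for seg in code.split("/")]
    for seg in segments:
        for root in seg:
            if root not in ROOTS:
                raise ValueError(f"unknown OPT root: {root!r}")
    return segments

def render_opt(segments):
    """Re-render the parsed structure back to an OPT string."""
    return "/".join("+".join(seg) for seg in segments)

def round_trip_ok(code):
    """Idempotence: parsing and re-rendering must reproduce the input exactly."""
    return render_opt(parse_opt(code)) == code
```

Applying \texttt{round\_trip\_ok} to every OPT string in the metadata samples is a cheap automated guard against drift between the schema and the prose.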
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{publication-communication-readiness}{%
\subsection{7. Publication \& Communication
Readiness}\label{publication-communication-readiness}}
\begin{longtable}[]{@{}lll@{}}
\toprule
\begin{minipage}[b]{0.16\columnwidth}\raggedright
Area\strut
\end{minipage} & \begin{minipage}[b]{0.51\columnwidth}\raggedright
Check\strut
\end{minipage} & \begin{minipage}[b]{0.24\columnwidth}\raggedright
Why\strut
\end{minipage}\tabularnewline
\midrule
\endhead
\begin{minipage}[t]{0.16\columnwidth}\raggedright
\textbf{Title and Abstract}\strut
\end{minipage} & \begin{minipage}[t]{0.51\columnwidth}\raggedright
Emphasize mechanism-based taxonomy, not policy; avoid ``redefining AI''
hyperbole.\strut
\end{minipage} & \begin{minipage}[t]{0.24\columnwidth}\raggedright
Avoid overreach criticisms.\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.16\columnwidth}\raggedright
\textbf{Introduction framing}\strut
\end{minipage} & \begin{minipage}[t]{0.51\columnwidth}\raggedright
Cite regulatory motivation (EU AI Act, NIST, ISO), but frame OPT as
complementary.\strut
\end{minipage} & \begin{minipage}[t]{0.24\columnwidth}\raggedright
Appears cooperative, not adversarial.\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.16\columnwidth}\raggedright
\textbf{Data availability statement}\strut
\end{minipage} & \begin{minipage}[t]{0.51\columnwidth}\raggedright
Clarify no datasets, only conceptual and standards synthesis.\strut
\end{minipage} & \begin{minipage}[t]{0.24\columnwidth}\raggedright
Meets arXiv/ACM policies.\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.16\columnwidth}\raggedright
\textbf{Reproducibility}\strut
\end{minipage} & \begin{minipage}[t]{0.51\columnwidth}\raggedright
Provide Makefile and instructions to regenerate all figures from
TeX.\strut
\end{minipage} & \begin{minipage}[t]{0.24\columnwidth}\raggedright
Fulfills open science norms.\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.16\columnwidth}\raggedright
\textbf{Accessibility}\strut
\end{minipage} & \begin{minipage}[t]{0.51\columnwidth}\raggedright
Verify large-font, high-contrast figures; ensure color palettes
differentiate well in grayscale.\strut
\end{minipage} & \begin{minipage}[t]{0.24\columnwidth}\raggedright
Required for ACM/IEEE accessibility standards.\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{pre-submission-peer-simulation}{%
\subsection{8. Pre-submission Peer
Simulation}\label{pre-submission-peer-simulation}}
\begin{itemize}
\item
  Use an \textbf{LLM-based referee simulator} or colleagues to generate
  expected reviewer comments, e.g.:
  \begin{itemize}
  %\tightlist
  \item
    ``Compare to ISO/IEC 23053.''
  \item
    ``Explain why control/swarm deserve separate roots.''
  \item
    ``Provide examples of OPT adoption in practice.''
  \end{itemize}
\item
  Prepare point-by-point responses to each anticipated comment.
\item
  Draft a short ``Author Response Template'' for actual peer review.
\end{itemize}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\hypertarget{final-publication-readiness-checklist-summary}{%
\subsection{9. Final ``Publication-Readiness'' Checklist
(summary)}\label{final-publication-readiness-checklist-summary}}
\begin{longtable}[]{@{}ll@{}}
\toprule
Category & Status\tabularnewline
\midrule
\endhead
Citations verified (exist + relevant) & $\square$\tabularnewline
All equations defined and correct & $\square$\tabularnewline
Figures render without warning & $\square$\tabularnewline
JSON schema validates OPT strings & $\square$\tabularnewline
Naming checked for collisions & $\square$\tabularnewline
Red-team review completed & $\square$\tabularnewline
Accessibility (font/contrast) & $\square$\tabularnewline
Build reproducibility (Makefile OK) & $\square$\tabularnewline
Cover letter frames contribution as complementary, not adversarial &
$\square$\tabularnewline
\bottomrule
\end{longtable}
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\end{document}