Added OPT for agentic AI parts

This commit is contained in:
welsberr 2026-02-17 13:11:57 -05:00
parent e81e3a8879
commit dab7e4b084
40 changed files with 903 additions and 107 deletions

70
paper/main.tex Normal file → Executable file
@@ -53,57 +53,79 @@ An Operational-Premise Taxonomy for Artificial Intelligence}
\input{pieces/background-intro}
\include{pieces/related-work}
\input{pieces/related-work}
\include{pieces/para-bridge-comparative-landscape}
\input{pieces/para-bridge-comparative-landscape}
\include{pieces/table-opt-comparison}
\input{pieces/table-opt-comparison}
\include{pieces/comparative-analysis}
\input{pieces/comparative-analysis}
\include{pieces/examples-and-mapping}
\input{pieces/tab-opt-comparison-with-others}
\include{pieces/table-optmap}
\input{pieces/examples-and-mapping}
\include{pieces/orthogonal-axes-and-risks}
%\input{pieces/table-optmap}
\include{pieces/table-opt-risk}
\input{pieces/orthogonal-axes-and-risks}
\include{pieces/fig-opt-radar-1}
\input{pieces/table-opt-risk}
\include{pieces/fig-opt-radar-2}
\input{pieces/fig-opt-radar-1}
\include{pieces/artificial-immune-systems}
\input{pieces/fig-opt-radar-2}
\include{pieces/design-governance}
\input{pieces/artificial-immune-systems}
\include{pieces/discussion}
\input{pieces/design-governance}
\input{pieces/opt-agentic}
\input{pieces/recent-context}
\input{pieces/discussion}
\input{pieces/math-foundations-bio-correspondence}
\include{pieces/conclusions}
\input{pieces/conclusions}
\appendix
\include{pieces/app-opt-code-spec}
\input{pieces/app-glossary}
\include{pieces/app-opt-code-evaluation-protocol}
\input{pieces/app-opt-code-spec}
\include{pieces/app-code-evaluation-workflow-pseudocode}
\input{pieces/app-opt-formal-grammar}
\include{pieces/app-opt-code-prompt-section}
\input{pieces/app-opt-code-evaluation-protocol}
\include{pieces/app-prompt-minimal}
\input{pieces/app-code-evaluation-workflow-pseudocode}
\include{pieces/app-prompt-maximal}
\input{pieces/app-opt-code-prompt-section}
\include{pieces/app-opt-storage-logs}
\input{pieces/app-prompt-minimal}
\section{Guidance on LLM Choice for Evaluation Workflow}
\input{pieces/app-prompt-maximal}
\begin{verbatim}[\input{pieces/local-llm-choice.txt}]
\end{verbatim}
\input{pieces/app-prompt-evaluator}
%\input{pieces/app-code-evaluator}
\input{pieces/app-opt-storage-logs}
\input{pieces/local-llm-choice}
\input{pieces/app-opt-agent-skills}
\input{pieces/tab-opt-roots-agent-failure}
\input{pieces/fig-opt-agent-loop}
\input{pieces/app-opt-agent-example}
\input{pieces/app-opt-tool-support}
\section{References}

Binary file not shown.

0
paper/pieces/abstract.tex Normal file → Executable file
91
paper/pieces/app-glossary.tex Executable file
@@ -0,0 +1,91 @@
\section{Etymological Glossary of OPT Class Names}
\label{app:opt-etymology}
The Operational Premise Taxonomy (OPT) uses short three-letter codes to denote
fundamental operative mechanisms (\Lrn, \Evo, \Sym, \Prb, \Sch, \Ctl, \Swm).
For mnemonic and conceptual coherence, each mechanism is also associated with a
semantically suggestive ``particle-style'' label in \emph{-on}, evoking both a
unit of behavior and an operative principle (by analogy with terms such as
``neuron'', ``phonon'', ``boson'', ``fermion''). This appendix summarizes the
etymological motivations for these labels.
\begin{description}
\item[\textbf{Lrn} --- \emph{Learnon}.]
The mechanism \Lrn~covers parametric learning systems: differentiable models
with trainable parameters (e.g., neural networks trained by gradient descent,
linear models with least-squares updates, temporal-difference learning).
The label \emph{Learnon} combines modern English \emph{learn} with the suffix
\emph{-on} to denote a basic unit or agent of learning activity. The verb
\emph{learn} traces back to Old English \emph{leornian}, ``to acquire
knowledge, to study'', from Proto-Germanic \emph{*liznojan}. \emph{Learnon}
thus names the operative principle ``that which learns by adjusting its
internal parameters''.
\item[\textbf{Evo} --- \emph{Evolon}.]
The mechanism \Evo~comprises population-based adaptive systems: genetic
algorithms, genetic programming, evolutionary strategies, and related methods
grounded in variation, inheritance, and selection. The label \emph{Evolon}
derives from Latin \emph{evolutio} (``unrolling, unfolding'') via
\emph{evolution}, plus \emph{-on} as a unit suffix. \emph{Evolon} names ``a
unit of evolutionary adaptation''---that is, a system whose primary
operation is the evolutionary updating of a population of candidate
solutions.
\item[\textbf{Sym} --- \emph{Symon}.]
The mechanism \Sym~denotes symbolic reasoning: rule-based expert systems,
theorem provers, logic programming, and other forms of explicit symbolic
manipulation. The label \emph{Symon} is rooted in Greek \emph{symbolon}
(``token, sign'') and \emph{symballein} (``to throw together, to compare''),
via Latin \emph{symbolum} and modern English \emph{symbol}. The \emph{-on}
suffix again marks a unit or agent, so \emph{Symon} denotes systems whose
defining operation is the manipulation of explicit symbols and rules.
\item[\textbf{Prb} --- \emph{Probion}.]
The mechanism \Prb~captures probabilistic inference: Bayesian networks,
probabilistic graphical models, Monte Carlo methods, and related stochastic
reasoning tools. The label \emph{Probion} derives from Latin
\emph{probabilis} (``provable, likely'') via \emph{probability}, plus
\emph{-on}. A \emph{Probion} system is one whose central operative premise is
updating or querying probability distributions, rather than deterministic
logic, parametric learning, or search over explicit alternatives.
\item[\textbf{Sch} --- \emph{Scholon}.]
The mechanism \Sch~covers search and related operations: heuristic search,
combinatorial optimization, constraint satisfaction, and state-space
exploration. The label \emph{Scholon} is based on Greek \emph{scholē}
(``leisure devoted to learning, study'') and its descendants in Latin
\emph{schola} and modern English \emph{school}, \emph{scholastic}. These
terms historically refer to structured inquiry and systematic examination.
The \emph{-on} suffix yields \emph{Scholon} as ``an agent or unit of ordered
inquiry'', emphasizing that \Sch~mechanisms operate by disciplined search
through a space of possibilities.
\item[\textbf{Ctl} --- \emph{Controlon}.]
The mechanism \Ctl~denotes control and feedback systems: classical PID
controllers, modern state-space controllers, and feedback architectures that
adjust actions based on error or state estimates. The label \emph{Controlon}
derives from English \emph{control}, itself from Old French
\emph{contrerolle} (``a register, a counter-roll'') and Medieval Latin
\emph{contrarotulus}. In OPT usage, \emph{Controlon} refers to systems whose
defining operation is closed-loop regulation around a target, rather than
learning a model, performing search, or conducting probabilistic inference.
\item[\textbf{Swm} --- \emph{Swarmon}.]
The mechanism \Swm~comprises swarm and collective-behavior systems:
particle-swarm optimization, ant-colony optimization, boids-like flocking,
and other methods based on many simple agents following local rules. The
label \emph{Swarmon} blends English \emph{swarm}, from Old English
\emph{swearm} (``a mass of bees or other insects in motion''), with the
\emph{-ion/-on} particle suffix. A \emph{Swarmon} system is characterized by
emergent behavior from populations of locally interacting units, rather than
global parametric learning or a single, centralized search procedure.
\end{description}
Taken together, these labels provide a mnemonic and etymologically grounded
lexicon for referring to OPT mechanisms at a slightly more narrative level
than the three-letter codes. They are intended as aids to memory and
exposition; the formal taxonomy remains defined in terms of the canonical
roots \Lrn, \Evo, \Sym, \Prb, \Sch, \Ctl, and \Swm.

@@ -0,0 +1,102 @@
\section{Conceptual Example: OPT in an Agentic Development Workflow}
\label{sec:opt-agent-example}
To illustrate the practical role of OPT in agentic AI systems, we consider a
particular scenario: an autonomous development agent tasked with constructing
and maintaining a production scheduling system under dynamic constraints.
\subsection{Step 1: Goal Intake and OPT--Intent Declaration}
The agent receives the following high-level goal:
\begin{quote}
Design a system to optimize production schedules under variable supply,
equipment downtime, and priority constraints.
\end{quote}
The agent proposes the following OPT--Intent:
\begin{quote}\ttfamily
INTENT-OPT = Sch/Sym \\
INTENT-GOAL = minimize-production-delay \\
INTENT-CONSTRAINTS = deterministic, explainable, real-time \\
INTENT-RISKS = combinatorial-explosion
\end{quote}
The agent explicitly selects search (\Sch) for combinatorial optimization and
symbolic reasoning (\Sym) for constraint enforcement, while avoiding learning
mechanisms to preserve determinism and explainability.
\subsection{Step 2: Implementation and OPT--Code Observation}
During implementation, the agent integrates:
\begin{itemize}
\item A heuristic search planner,
\item A rule-based constraint validator,
\item A neural network model for predicting machine failure.
\end{itemize}
The resulting OPT--Code is:
\begin{quote}\ttfamily
OPT = Sch/Sym/Lrn; \\
Rep = permutations + rules + predictive-model; \\
Time = iterative + online-adjust
\end{quote}
\subsection{Step 3: Drift Detection}
Comparison with OPT--Intent reveals mechanism drift:
\begin{itemize}
\item \Lrn was introduced,
\item Determinism constraint may no longer hold,
\item Risk profile has changed.
\end{itemize}
The agent flags this deviation and evaluates whether predictive learning
violates declared governance constraints.
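The drift check in this step amounts to a set comparison between declared and observed roots. A minimal sketch follows; the function names are illustrative, not part of the OPT specification:

```python
# Minimal sketch of the Step 3 drift check: compare declared OPT-Intent
# roots against observed OPT-Code roots. Names are illustrative only.
def parse_roots(expr):
    """Split a RootExpr such as 'Sch/Sym/Lrn' into its root tokens."""
    return set(expr.replace("+", "/").split("/"))

def detect_drift(intent_expr, code_expr):
    declared = parse_roots(intent_expr)
    observed = parse_roots(code_expr)
    return {
        "added": sorted(observed - declared),    # mechanisms introduced
        "removed": sorted(declared - observed),  # mechanisms dropped
        "aligned": declared == observed,
    }

drift = detect_drift("Sch/Sym", "Sch/Sym/Lrn")
# drift["added"] == ["Lrn"]: the unplanned mechanism is flagged for review.
```

Because `/` and `+` both contribute roots to the comparison, the check treats hybrid and additive composition alike; a finer-grained check could distinguish them.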
\subsection{Step 4: Mechanism-Guided Error}
Suppose the system exhibits unstable schedules under rare supply patterns.
Given the OPT--Code, the agent attributes the issue primarily to the
learning-based failure predictor (\Lrn), potentially due to distributional
shift.
\subsection{Step 5: Constraint-Preserving Repair}
The agent proposes two alternatives:
\begin{enumerate}
\item Replace the neural predictor with symbolic failure rules (\Sym),
\item Retain \Lrn but update OPT--Intent and governance constraints.
\end{enumerate}
The first option preserves the original intent. The second requires explicit
authorization.
\subsection{Step 6: Verified Repair}
If the symbolic replacement is adopted, the new OPT--Code becomes:
\begin{quote}\ttfamily
OPT = Sch/Sym
\end{quote}
Alignment with OPT--Intent is restored, and mechanism drift is resolved.
\subsection{Discussion}
This example illustrates how OPT provides:
\begin{itemize}
\item Mechanism-aware planning,
\item Explicit drift detection,
\item Targeted error diagnosis,
\item Governance-compatible repair,
\item Structured traceability.
\end{itemize}
Importantly, OPT does not constrain the agent's architecture. Instead, it
provides a stable abstraction layer that connects design commitments,
implementation choices, and remediation strategies.

@@ -0,0 +1,71 @@
\section{OPT-Aware Agent Skills}
\label{app:opt-agent-skills}
This appendix enumerates conceptual skills that may be incorporated into
agentic AI systems to support mechanism-aware planning, execution, diagnosis,
and repair using the Operational Premise Taxonomy (OPT). These skills are
descriptive rather than prescriptive; they define \emph{capabilities} that may
be implemented using a variety of agent architectures, programming languages,
or reasoning engines.
\subsection{OPT Classification Skill}
\textbf{Purpose.}
Identify the operative mechanisms present in a system description, source code,
tool invocation, or execution trace.
\textbf{Description.}
Given a representation of a system or subcomponent, the agent produces an
OPT--Code describing the dominant and supporting mechanisms. This skill enables
mechanism awareness and provides the foundation for subsequent reasoning.
\subsection{OPT--Intent Proposal Skill}
\textbf{Purpose.}
Generate an OPT--Intent declaration during design or planning.
\textbf{Description.}
Given a goal, constraints, and deployment context, the agent proposes a set of
intended operative mechanisms, anticipated risks, and constraints. This skill
supports disciplined planning and avoids unexamined default choices.
\subsection{OPT Alignment Evaluation Skill}
\textbf{Purpose.}
Assess consistency between OPT--Intent and OPT--Code.
\textbf{Description.}
The agent compares intended mechanisms against those actually employed,
identifying additions, omissions, or substitutions. Deviations are flagged as
mechanism drift and may trigger repair or escalation.
\subsection{Mechanism-Guided Error Diagnosis Skill}
\textbf{Purpose.}
Interpret errors in terms of underlying operative mechanisms.
\textbf{Description.}
Rather than diagnosing failures solely at the level of outputs, the agent
conditions its diagnosis on the OPT root(s) involved, narrowing the space of
plausible failure modes and repair strategies.
\subsection{Constraint-Preserving Repair Skill}
\textbf{Purpose.}
Propose repairs that respect declared mechanism constraints.
\textbf{Description.}
Given an error and an OPT--Intent declaration, the agent proposes corrective
actions that preserve or explicitly justify changes to the operative
mechanisms.
\subsection{OPT Memory and Traceability Skill}
\textbf{Purpose.}
Maintain a mechanism-aware record of decisions and outcomes.
\textbf{Description.}
The agent records OPT--Intent declarations, OPT--Code observations, alignment
checks, and repairs, supporting auditability and longitudinal analysis.
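To illustrate how the diagnosis and repair skills might condition on OPT roots, the following sketch maps each root to a checklist of plausible failure modes. The checklist contents are illustrative examples, not normative OPT definitions:

```python
# Illustrative mapping from OPT roots to plausible failure modes, as a
# mechanism-guided diagnosis skill might use. Contents are examples only.
FAILURE_MODES = {
    "Lrn": ["distributional shift", "overfitting", "stale training data"],
    "Evo": ["premature convergence", "fitness misspecification"],
    "Sym": ["incomplete rule coverage", "contradictory rules"],
    "Prb": ["miscalibrated priors", "sampling error"],
    "Sch": ["combinatorial explosion", "inadmissible heuristic"],
    "Ctl": ["instability", "actuator saturation", "lag-induced oscillation"],
    "Swm": ["stagnation", "pathological emergent behavior"],
}

def candidate_diagnoses(opt_roots):
    """Narrow the failure-mode search space to the roots actually in use."""
    return {r: FAILURE_MODES[r] for r in opt_roots if r in FAILURE_MODES}
```

Conditioning diagnosis on the OPT--Code in this way is what allows the agent, in the scheduling example, to attribute unstable schedules to the \Lrn\ component rather than searching all components uniformly.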

4
paper/pieces/app-opt-code-evaluation-protocol.tex Normal file → Executable file
@@ -15,13 +15,13 @@ For each system under evaluation, the following inputs are provided:
\item \textbf{System description}: source code, algorithmic description, or
detailed project summary.
\item \textbf{Candidate OPT--Code}: produced by a model using the minimal or
maximal prompt (Section~\ref{sec:opt-prompts}).
maximal prompt (Section~\ref{maximal-prompt}).
\item \textbf{Candidate rationale}: a short explanation provided by the model
describing its classification.
\end{enumerate}
These inputs are then supplied to the OPT--Code Prompt Evaluator
(Appendix~\ref{app:prompt-evaluator}).
(Appendix~\ref{subsec:opt-code-evaluator}).
\subsubsection{Evaluation Pass}

1
paper/pieces/app-opt-code-prompt-section.tex Normal file → Executable file
@@ -1,4 +1,5 @@
\section{Appendix: OPT--Code Prompt Specifications}
\label{sec:opt-prompt-specs}
This appendix collects the prompt formulations used to elicit
OPT--Code classifications from large language models and to evaluate

0
paper/pieces/app-opt-code-spec.tex Normal file → Executable file
View File

@@ -0,0 +1,73 @@
\section{Formal Grammar of OPT--Code and OPT--Intent}
\label{sec:opt-formal-grammar}
To enable automated verification, interoperability, and agentic reasoning,
OPT expressions are defined using a formal grammar. The grammar below is
expressed in Extended Backus--Naur Form (EBNF).
\subsection{Lexical Conventions}
\begin{itemize}
\item Identifiers are case-sensitive.
\item Root tokens are one of: Lrn, Evo, Sym, Prb, Sch, Ctl, Swm.
\item Whitespace is insignificant except as separator.
\item Strings are sequences of non-semicolon characters.
\end{itemize}
\subsection{OPT--Code Grammar}
\begin{verbatim}
OPTCode ::= "OPT" "=" RootExpr ";" FieldList
RootExpr ::= Root
| Root "/" RootExpr
| Root "+" RootExpr
Root ::= "Lrn" | "Evo" | "Sym" | "Prb"
| "Sch" | "Ctl" | "Swm"
FieldList ::= Field (";" Field)*
Field ::= "Rep" "=" Value
| "Obj" "=" Value
| "Data" "=" Value
| "Time" "=" Value
| "Human" "=" Value
Value ::= Token (("+" | "-" | "_")? Token)*
Token ::= letter (letter | digit | "-" | "_")*
\end{verbatim}
\subsection{OPT--Intent Grammar}
\begin{verbatim}
OPTIntent ::= "INTENT-OPT" "=" RootExpr ";"
IntentFieldList
IntentFieldList ::= IntentField (";" IntentField)*
IntentField ::= "INTENT-GOAL" "=" Value
| "INTENT-CONSTRAINTS" "=" Value
| "INTENT-RISKS" "=" Value
| "INTENT-CONTEXT" "=" Value
\end{verbatim}
\subsection{Composition Semantics}
The operators have the following interpretation:
\begin{itemize}
\item \texttt{/} denotes hybrid composition with integrated interaction.
\item \texttt{+} denotes additive coexistence of mechanisms.
\end{itemize}
Both operators associate left-to-right and have equal precedence unless an
implementation specifies otherwise.
\subsection{Extensibility}
Future OPT revisions may introduce additional fields or metadata extensions
without altering the core RootExpr grammar. Implementations should ignore
unknown fields while preserving structural validity.
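A minimal recognizer for the OPT--Code grammar can be written directly from the EBNF above. The sketch below checks root tokens and field well-formedness but deliberately tolerates unknown field names, per the extensibility rule; the regular expressions are one possible rendering, not part of the specification:

```python
import re

ROOTS = {"Lrn", "Evo", "Sym", "Prb", "Sch", "Ctl", "Swm"}

def validate_opt_code(text):
    """Check an OPT-Code string against the grammar sketch.
    Unknown fields are ignored but must still be 'Name = Value' pairs."""
    parts = [p.strip() for p in text.split(";") if p.strip()]
    m = re.fullmatch(r"OPT\s*=\s*(.+)", parts[0]) if parts else None
    if not m:
        return False
    # RootExpr: roots joined by '/' (hybrid) or '+' (additive), left to right.
    roots = re.split(r"[/+]", m.group(1))
    if not all(r.strip() in ROOTS for r in roots):
        return False
    for field in parts[1:]:
        # Per the extensibility rule, unknown field names are tolerated,
        # but each field must still parse as Name = Value.
        if not re.fullmatch(r"\w+\s*=\s*\S.*", field):
            return False
    return True
```

For example, `OPT = Sch/Sym/Lrn; Rep = permutations + rules; Time = iterative` is accepted, while an undefined root such as `Foo` is rejected.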

0
paper/pieces/app-opt-storage-logs.tex Normal file → Executable file
View File

@@ -0,0 +1,44 @@
\section{Tool Support for OPT-Aware Agentic Systems}
\label{sec:opt-agent-tooling}
While OPT is fundamentally a conceptual taxonomy, its utility is enhanced by
tooling that supports classification, verification, and alignment analysis.
In agentic AI systems, such tooling enables partial automation of
mechanism-aware reasoning and governance.
\subsection{OPT Classification and Verification Tools}
Automated classifiers may infer OPT--Code from source code, architectural
descriptions, or execution traces. Verification tools can then assess syntactic
validity, semantic consistency, and completeness of OPT--Code expressions.
These tools support both static analysis and runtime introspection.
\subsection{OPT--Intent and Alignment Evaluation}
OPT--Intent declarations provide a reference against which agent behavior can
be evaluated. Tooling that compares OPT--Intent with observed OPT--Code enables
the detection of mechanism drift and unplanned changes in operative premises.
Such comparisons are particularly valuable in long-running or self-modifying
agentic systems.
\subsection{LLM-Supported Reasoning}
Large language models can assist in OPT classification, intent proposal, and
alignment evaluation when guided by structured prompts. Importantly, OPT
constrains these models to reason explicitly about operative mechanisms, reducing
the risk of category errors and unexamined defaults.
\subsection{Integration into Agentic Workflows}
OPT-aware tools may be invoked as part of planning, evaluation, or repair
phases in agentic workflows. By exposing mechanism-level information to the
agent, these tools enable more disciplined planning, more targeted remediation,
and more transparent reporting.
\subsection{Governance and Auditability}
Finally, OPT tooling supports governance by producing durable, machine-readable
records of mechanism choices and changes over time. These records can be used
for internal review, external audit, or regulatory compliance without requiring
access to proprietary model internals.
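One lightweight realization of such records is an append-only log of mechanism events, one JSON object per line. The schema below is a hypothetical sketch, not a normative OPT format:

```python
import datetime
import json

def opt_trace_record(phase, intent_roots, observed_roots, note=""):
    """Build one machine-readable trace entry for an OPT-aware audit log.
    Field names are illustrative, not a normative OPT schema."""
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "phase": phase,  # e.g. planning, execution, evaluation, repair
        "intent_roots": sorted(intent_roots),
        "observed_roots": sorted(observed_roots),
        "drift": sorted(set(observed_roots) - set(intent_roots)),
        "note": note,
    }

record = opt_trace_record("execution", ["Sch", "Sym"], ["Sch", "Sym", "Lrn"],
                          note="predictive model added by tool selection")
line = json.dumps(record)  # appended to a durable, auditable log
```

Because each entry carries both declared and observed roots, an auditor can reconstruct mechanism drift over time without access to model internals.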

33
paper/pieces/app-prompt-evaluator.tex Normal file → Executable file
@@ -1,40 +1,13 @@
\section{Appendix: OPT--Code Prompt Specifications}
This appendix collects the prompt formulations used to elicit OPT--Code
classifications from large language models and to evaluate those classifications
for correctness and consistency.
\subsection{Evaluating OPT--Codes}
\label{subsec:opt-code-evaluator}
\subsection{Minimal OPT--Code Classification Prompt}
The minimal prompt is designed for inference-time use and lightweight tagging
pipelines. It assumes a basic familiarity with the OPT roots and emphasizes
mechanism-based classification over surface labels.
\begin{quote}\small
\input{appendix_prompt_minimal.tex}
\end{quote}
\subsection{Maximal Expert OPT--Code Classification Prompt}
The maximal prompt elaborates all root definitions, clarifies the treatment of
parallelism and pipelines, and details rules for composition. It is intended for
fine-tuning, high-stakes evaluations, or detailed audit trails.
\begin{quote}\small
\input{appendix_prompt_maximal.tex}
\end{quote}
\subsection{OPT--Code Prompt Evaluator}
The evaluator prompt is a meta-level specification: it assesses whether a given
candidate OPT--Code and rationale respect the OPT taxonomy and associated
guidelines. This enables automated or semi-automated review of classifications
generated by other models or tools.
\begin{quote}\small
\input{appendix_prompt_evaluator.tex}
\end{quote}
\subsection{OPT--Code Prompt Evaluator}
\subsection{OPT--Code Evaluator Prompt}
\begin{verbatim}
You are an OPT-Code evaluation assistant. Your job is to check whether a

1
paper/pieces/app-prompt-maximal.tex Normal file → Executable file
@@ -1,4 +1,5 @@
\subsection{Maximal Expert OPT--Code Classification Prompt}
\label{maximal-prompt}
The maximal prompt elaborates all root definitions, clarifies the treatment of
parallelism and pipelines, and details rules for composition. It is intended for

0
paper/pieces/app-prompt-minimal.tex Normal file → Executable file
0
paper/pieces/artificial-immune-systems.tex Normal file → Executable file
0
paper/pieces/background-intro.tex Normal file → Executable file
0
paper/pieces/comparative-analysis.tex Normal file → Executable file
0
paper/pieces/conclusions.tex Normal file → Executable file
0
paper/pieces/design-governance.tex Normal file → Executable file
2
paper/pieces/discussion.tex Normal file → Executable file
@@ -1,7 +1,7 @@
% ---------------------------
\section{Discussion: Why OPT Supersedes Signal-Based Taxonomies}
% ---------------------------
\paragraph{Mechanism clarity.} \Lrn--\Swm encode distinct improvement/decision operators (gradient, selection, resolution, inference, search, feedback, collective rules).
\paragraph{Mechanism clarity.} \Lrn--\Swm~encode distinct improvement/decision operators (gradient, selection, resolution, inference, search, feedback, collective rules).
\paragraph{Biological alignment.} OPT mirrors canonical biological mechanisms (plasticity, natural selection, Bayesian cognition, optimal feedback control, etc.).
\paragraph{Compact completeness.} Seven bins cover mainstream AI while enabling crisp hybrid composition; short names and hybrid syntax convey the rest.
\paragraph{Governance usability.} Mechanism-aware controls attach naturally per class (Table~\ref{tab:OPT-risk}).

0
paper/pieces/examples-and-mapping.tex Normal file → Executable file
View File

@@ -0,0 +1,24 @@
\begin{figure}[ht]
\centering
\fbox{\parbox{0.9\linewidth}{
\centering
\textbf{Conceptual Agentic Workflow with OPT Integration}\\[6pt]
Goal Intake $\rightarrow$ Planning $\rightarrow$ Tool Invocation $\rightarrow$
Execution $\rightarrow$ Evaluation $\rightarrow$ Repair $\rightarrow$ Memory\\[6pt]
\emph{OPT Checkpoints:}
\begin{itemize}
\item OPT--Intent declaration during Planning
\item OPT--Code observation during Execution
\item Intent--Code alignment during Evaluation
\item Mechanism-constrained repair during Repair
\item OPT-aware trace stored in Memory
\end{itemize}
}}
\caption{Agentic AI workflow annotated with OPT checkpoints. OPT operates as a
mechanism-aware abstraction layer that augments planning, diagnosis, repair,
and governance without prescribing a specific agent architecture.}
\label{fig:opt-agent-loop}
\end{figure}

14
paper/pieces/fig-opt-hybrid-tree.tex Normal file → Executable file
@@ -18,12 +18,14 @@
\node[mech, below=of E] (W) {\Swm};
% Hybrids (examples)
\node[hyb, above=6mm of $(L)!0.5!(S)$] (LS) {\hyb{Lrn+Sch}\\ \footnotesize(AlphaZero-type)};
\node[hyb, above=6mm of $(L)!0.5!(C)$] (LC) {\hyb{Lrn+Ctl}\\ \footnotesize(model-based control)};
\node[hyb, below=6mm of $(L)!0.5!(E)$] (EL) {\hyb{Evo/Lrn}\\ \footnotesize(neuroevolution)};
\node[hyb, below=6mm of $(L)!0.5!(Y)$] (LY) {\hyb{Lrn+Sym}\\ \footnotesize(neuro-symbolic)};
\node[hyb, below=6mm of $(P)!0.5!(C)$] (PC) {\hyb{Prb+Ctl}\\ \footnotesize(Bayesian control)};
\node[hyb, below=6mm of $(E)!0.5!(W)$] (EW) {\hyb{Swm+Evo}\\ \footnotesize(swarm-evolution)};
\node[hyb, above=16mm of $(L)!0.5!(S)$, xshift=-10mm] (LS) {\hyb{Lrn+Sch}\\ \footnotesize(AlphaZero-type)};
\node[hyb, above=16mm of $(L)!0.5!(C)$, xshift=+10mm] (LC) {\hyb{Lrn+Ctl}\\ \footnotesize(model-based control)};
\node[hyb, below=0mm of $(L)!0.5!(E)$, xshift=-25mm] (EL) {\hyb{Evo/Lrn}\\ \footnotesize(neuroevolution)};
\node[hyb, below=16mm of $(L)!0.5!(Y)$, xshift=+16mm] (LY) {\hyb{Lrn+Sym}\\ \footnotesize(neuro-symbolic)};
%\node[hyb, below=16mm of $(P)!0.5!(C)$, xshift=-30mm] (PC) {\hyb{Prb+Ctl}\\ \footnotesize(Bayesian control)};
\node[hyb, below=6mm of $(L)!0.5!(E)$, xshift=+73mm] (PC) {\hyb{Prb+Ctl}\\ \footnotesize(Bayesian control)};
%\node[hyb, below=16mm of $(E)!0.5!(W)$, xshift=+15mm] (EW) {\hyb{Swm+Evo}\\ \footnotesize(swarm-evolution)};
\node[hyb, below=16mm of $(L)!0.5!(E)$, xshift=-25mm] (EW) {\hyb{Swm+Evo}\\ \footnotesize(swarm-evolution)};
% Edges
\draw (L) -- (LS); \draw (S) -- (LS);

0
paper/pieces/fig-opt-landscape.tex Normal file → Executable file
0
paper/pieces/intro.tex Normal file → Executable file
83
paper/pieces/local-llm-choice.tex Normal file → Executable file
@@ -1,64 +1,61 @@
\begin{verbatim}
\section{Guidance on LLM Choice for Evaluation Workflow}
When performing evaluation with local LLMs, here is general guidance on selection criteria and some concrete examples.
What you need from the model:
\subsection*{What you need from the model}
For OPT classification, the model needs:
\begin{itemize}
\item Good code and algorithm understanding (to infer mechanism).
\item Decent instruction-following (to stick to the output format).
\item Basic reasoning about parallelism vs mechanism (with the explicit guidance you've added).
\end{itemize}
Good code and algorithm understanding (to infer mechanism).
That generally points you to \textasciitilde7B--14B ``instruct'' models with decent coding chops, rather than tiny 1--3B models.
Decent instruction-following (to stick to the output format).
\subsection*{General advice}
\begin{itemize}
\item Use instruct-tuned variants (e.g., Instruct / Chat / DPO) rather than base models.
\item Prefer models with good coding benchmarks (HumanEval, MBPP, etc.) because they're better at recognizing algorithm patterns.
\item For multi-step pipelines (Classifier, Evaluator, Adjudicator), you can:
\begin{itemize}
\item Run them all on the same model, or
\item Use a slightly larger / better model for Evaluator + Adjudicator, and a smaller one for the Classifier.
\end{itemize}
\end{itemize}
Basic reasoning about parallelism vs mechanism (with the explicit guidance you've added).
\subsection*{Concrete model families (local-friendly)}
That generally points you to ~7B–14B “instruct” models with decent coding chops, rather than tiny 1–3B models.
General advice
Use instruct-tuned variants (e.g., Instruct / Chat / DPO) rather than base models.
Prefer models with good coding benchmarks (HumanEval, MBPP, etc.) because they're better at recognizing algorithm patterns.
For multi-step pipelines (Classifier, Evaluator, Adjudicator), you can:
Run them all on the same model, or
Use a slightly larger / better model for Evaluator + Adjudicator, and a smaller one for the Classifier.
Concrete model families (local-friendly)
A few commonly used open models in the ~7–14B range that are good candidates to try:
LLaMA 3 8B Instruct:
A few commonly used open models in the \textasciitilde7--14B range that are good candidates to try:
\begin{itemize}
\item \textbf{LLaMA 3 8B Instruct}:\\
Very strong instruction following and general reasoning for its size, good for code and system-descriptions. Available through multiple runtimes (vLLM, Ollama, llamafile, etc.).
Mistral 7B Instruct (or derivative fine-tunes like OpenHermes, Dolphin, etc.):
\item \textbf{Mistral 7B Instruct} (or derivative fine-tunes like OpenHermes, Dolphin, etc.):\\
Good general-purpose and coding performance; widely used in local setups. Good choice if you're already using Mistral-based stacks.
Qwen2 7B / 14B Instruct:
\item \textbf{Qwen2 7B / 14B Instruct}:\\
Strong multilingual and coding abilities; the 14B variant is particularly capable if you have the VRAM. Nice balance of reasoning and strict formatting.
Phi-3-mini (3.8B) instruct:
\item \textbf{Phi-3-mini (3.8B) instruct}:\\
Much smaller, but surprisingly capable on reasoning tasks; might be borderline for very subtle OPT distinctions but could work as a classifier with careful prompting. Evaluator/Adjudicator roles might benefit from a larger model than this, though.
Code-oriented variants (if you're mostly classifying source code rather than prose):
\item \textbf{Code-oriented variants} (if you're mostly classifying source code rather than prose):
\begin{itemize}
\item ``Code LLaMA'' derivatives
\item ``DeepSeek-Coder'' style models
\end{itemize}
These can be quite good at recognizing patterns like GA loops, RL training loops, etc., though you sometimes need to reinforce the formatting constraints.
\end{itemize}
“Code LLaMA” derivatives
“DeepSeek-Coder” style models
These can be quite good at recognizing patterns like GA loops, RL training loops, etc., though you sometimes need to reinforce the formatting constraints.
\subsection*{Suggested local stack configuration}
In a local stack, a reasonable starting configuration would be:
\begin{itemize}
\item \textbf{Classifier A}: LLaMA 3 8B Instruct (maximal prompt)
\item \textbf{Classifier B}: Mistral 7B Instruct (minimal or maximal prompt)
\item \textbf{Evaluator}: Qwen2 14B Instruct (if you've got VRAM) or LLaMA 3 8B if not
\item \textbf{Adjudicator}: same as Evaluator
\end{itemize}
Classifier A: LLaMA 3 8B Instruct (maximal prompt)
Classifier B: Mistral 7B Instruct (minimal or maximal prompt)
Evaluator: Qwen2 14B Instruct (if you've got VRAM) or LLaMA 3 8B if not
Adjudicator: same as Evaluator
If you want to conserve resources, you can just use a single 7–8B model for all roles and rely on the explicit prompts plus your evaluator rubric to catch errors.
\end{verbatim}
If you want to conserve resources, you can just use a single 7--8B model for all roles and rely on the explicit prompts plus your evaluator rubric to catch errors.

0
paper/pieces/math-foundations-bio-correspondence.tex Normal file → Executable file
2
paper/pieces/operational-premise-taxonomy.tex Normal file → Executable file
@@ -2,7 +2,7 @@
\section{Operational-Premise Taxonomy (OPT)}
% ---------------------------
Because OPT introduces several new labels, we present those here before tackling background and related work topics.
Because OPT introduces several new labels, we present those here before tackling background and related work topics. (See also Appendix \ref{app:opt-etymology}.)
OPT classes are defined by dominant mechanism; hybrids are explicit compositions:

119
paper/pieces/opt-agentic.tex Executable file
@@ -0,0 +1,119 @@
\section{OPT and Agentic AI Workflows}
\label{sec:opt-agentic}
Recent advances in agentic artificial intelligence emphasize systems that
plan, act, evaluate, and repair their own behavior through iterative
interaction with tools, environments, and internal models. Such systems
typically decompose goals, invoke tools, assess outcomes, and revise plans in
closed loops. While these architectures have proven powerful, they frequently
lack an explicit representation of the \emph{operative mechanisms} through
which actions are taken and errors arise. This omission complicates reasoning
about failure modes, governance constraints, and design trade-offs.
The Operational Premise Taxonomy (OPT) provides a mechanism-level abstraction
layer that can be integrated into agentic workflows to address these gaps.
Rather than prescribing a particular agent architecture, OPT supplies a shared
vocabulary and analytical framework that agentic systems can use to reason
about how tasks are performed, how errors should be interpreted, and how
repairs should be constrained.
\subsection{Mechanism Awareness in Agentic Systems}
Agentic workflows are often described in terms of high-level functional stages
(planning, execution, critique, repair), but these stages are agnostic to the
computational mechanisms employed. In practice, however, the behavior and risk
profile of an agentic system depend critically on whether its actions rely on
parametric learning (\Lrn), symbolic reasoning (\Sym), search (\Sch),
probabilistic inference (\Prb), control (\Ctl), evolutionary adaptation (\Evo),
swarm dynamics (\Swm), or some hybrid combination thereof.

OPT introduces explicit mechanism awareness into agentic reasoning. An
OPT-aware agent can classify its own components, tools, or subplans in terms of
OPT roots, enabling it to reason not merely about \emph{what} is being done, but
about \emph{how} it is being done. This distinction becomes especially
important in hybrid agentic systems that combine learning-based components with
search, symbolic constraints, or control loops.
\subsection{OPT--Intent in Agentic Planning}
During goal intake and planning, agentic systems must decide not only which
actions to take, but which classes of computational strategies are appropriate.
OPT--Intent provides a compact way to express these design-time commitments.
An OPT--Intent declaration specifies the intended operative mechanisms, the
system's goal, relevant constraints, and anticipated risks.

In an agentic context, OPT--Intent functions as a planning constraint. It
guides the selection of tools and strategies, discourages unprincipled
mechanism substitution (e.g., defaulting to learning-based solutions when
symbolic or search-based approaches are more appropriate), and provides an
explicit reference against which subsequent behavior can be evaluated.
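As a concrete illustration, an OPT--Intent declaration can be carried as a small data record that the planner consults before selecting tools. This is a minimal sketch; the `OPTIntent` class, its field names, and the example values are hypothetical and not part of the OPT specification.

```python
from dataclasses import dataclass

# Illustrative OPT root labels (Lrn, Sym, Sch, Prb, Ctl, Evo, Swm).
OPT_ROOTS = frozenset({"Lrn", "Sym", "Sch", "Prb", "Ctl", "Evo", "Swm"})

@dataclass(frozen=True)
class OPTIntent:
    goal: str
    roots: frozenset            # intended operative mechanisms
    constraints: tuple = ()     # e.g., "no online weight updates"
    risks: tuple = ()           # anticipated risks to monitor

    def permits(self, root: str) -> bool:
        # The planner consults the declaration before selecting a tool
        # whose dominant mechanism is `root`.
        return root in self.roots

intent = OPTIntent(
    goal="route warehouse robots",
    roots=frozenset({"Sch", "Ctl"}),
    constraints=("no online weight updates",),
    risks=("combinatorial explosion", "oscillation"),
)
assert intent.permits("Sch")
assert not intent.permits("Lrn")  # discourages silent mechanism substitution
```

Because the declaration is immutable, it can serve as the fixed reference against which later behavior is compared.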
\subsection{OPT--Code and Runtime Self-Description}
As an agent executes plans and invokes tools, its effective operative
mechanisms may diverge from those originally intended. OPT--Code provides a
runtime or post-hoc description of the mechanisms actually employed. In
agentic systems, this enables self-description and introspection: the agent
can record and report which mechanisms were used to achieve a result.

Comparing OPT--Code against OPT--Intent enables the detection of \emph{mechanism
drift}, where new mechanisms are introduced implicitly or intended mechanisms
are bypassed. This capability is particularly relevant for long-running or
self-modifying agentic systems, where accumulated changes can undermine
assumptions about safety, explainability, or compliance.
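The comparison of declared and observed roots reduces to simple set arithmetic. The sketch below assumes roots are recorded as string labels; the function name and example values are illustrative only.

```python
def mechanism_drift(intended, observed):
    """Compare declared roots (OPT--Intent) with roots actually exercised
    at runtime (OPT--Code); return mechanisms introduced and bypassed."""
    intended, observed = set(intended), set(observed)
    introduced = observed - intended   # used without declaration
    bypassed = intended - observed     # declared but never exercised
    return introduced, bypassed

# A plan declared as search plus control that silently fell back to a
# learned policy:
introduced, bypassed = mechanism_drift({"Sch", "Ctl"}, {"Sch", "Lrn"})
# introduced == {"Lrn"}; bypassed == {"Ctl"}
```

A nonempty `introduced` set is the signal of mechanism drift; a nonempty `bypassed` set indicates that a declared mechanism was never exercised, which may itself warrant review.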
\subsection{Mechanism-Guided Error Interpretation}
A central challenge in agentic AI is automated error remediation. Errors in
agentic systems are often diagnosed at the surface level (e.g., “the output was
incorrect”), without regard to the underlying mechanism that produced the
error. OPT enables mechanism-guided error interpretation by associating
distinct classes of failure modes with different operative premises.

For example, failures in \Lrn-dominated systems often involve generalization
error or distributional shift, while failures in \Sch systems may involve
heuristic bias or combinatorial explosion. Control-oriented systems (\Ctl) are
prone to instability or oscillation, and evolutionary systems (\Evo) may suffer
from premature convergence or loss of diversity. By classifying the operative
mechanism, an agent can narrow the space of plausible diagnoses and select
repair strategies that are appropriate to the mechanism in use.
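One minimal realization is a lookup from dominant root to candidate diagnoses, abridged from the failure modes discussed above. The mapping and function below are a hypothetical sketch, not a normative part of OPT.

```python
# Candidate diagnoses per dominant OPT root (abridged, illustrative).
FAILURE_HYPOTHESES = {
    "Lrn": ["generalization error", "distributional shift"],
    "Sym": ["rule inconsistency", "brittleness"],
    "Sch": ["heuristic bias", "combinatorial explosion"],
    "Prb": ["miscalibration", "incorrect priors"],
    "Ctl": ["instability", "oscillation"],
    "Evo": ["premature convergence", "loss of diversity"],
    "Swm": ["incoherent emergence", "coordination failure"],
}

def diagnose(dominant_root):
    """Narrow the diagnosis space to failure modes plausible for the
    mechanism actually in use."""
    return FAILURE_HYPOTHESES.get(dominant_root, ["unclassified mechanism"])
```

An agent would rank these hypotheses against observed evidence before selecting a repair strategy, rather than searching the full space of possible faults.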
\subsection{Constraint-Preserving Repair and Governance}
OPT also supports constraint-aware repair. In governance-sensitive contexts,
repairs must not introduce new operative mechanisms without justification, as
doing so may alter the system's risk profile or regulatory status. An
OPT-aware agent can evaluate proposed repairs against OPT--Intent to determine
whether they preserve or violate intended constraints.

This capability enables a form of \emph{mechanism-level governance} within
agentic workflows. Rather than relying solely on external oversight, agents can
self-monitor compliance with declared mechanism constraints, flag deviations,
and require explicit authorization for changes that introduce new operative
premises.
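The governance gate itself is a containment check: a repair is constraint-preserving exactly when it introduces no root outside the OPT--Intent declaration. The function below is an illustrative sketch under that assumption.

```python
def repair_preserves_intent(intent_roots, repair_roots):
    """A proposed repair is constraint-preserving iff it introduces no
    operative mechanism outside the OPT--Intent declaration; any new
    mechanisms require explicit authorization."""
    new_mechanisms = set(repair_roots) - set(intent_roots)
    return len(new_mechanisms) == 0, new_mechanisms

ok, pending = repair_preserves_intent({"Sch", "Ctl"}, {"Sch", "Ctl", "Lrn"})
# ok is False; `pending` holds {"Lrn"}, which must be authorized before
# the repair is applied.
```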
\subsection{Multi-Agent Differentiation and Coordination}
In multi-agent systems, OPT provides a principled basis for role differentiation
and coordination. Agents may be specialized according to dominant operative
mechanisms (e.g., search-focused agents, symbolic-reasoning agents, or
learning-focused agents), reducing cognitive load and improving interpretability.

OPT also provides a shared vocabulary for resolving conflicts when agents
propose incompatible strategies, enabling negotiation in terms of mechanism
trade-offs rather than ad hoc preferences.
\subsection{Implications for Agentic AI Design}
Incorporating OPT into agentic workflows does not require abandoning existing
architectures. Instead, OPT functions as an intermediate abstraction layer that
connects goals, mechanisms, and outcomes. By making operative premises explicit,
OPT enhances planning discipline, improves error diagnosis, supports governance
constraints, and provides a foundation for more transparent and accountable
agentic AI systems.

As agentic AI continues to move toward greater autonomy and complexity, the
ability to reason explicitly about operative mechanisms will become
increasingly important. OPT offers a structured and extensible framework for
supporting this capability within both single-agent and multi-agent systems.

paper/pieces/orthogonal-axes-and-risks.tex Normal file → Executable file

paper/pieces/para-bridge-comparative-landscape.tex Normal file → Executable file

paper/pieces/recent-context.tex Executable file

@@ -0,0 +1,91 @@
\section{Recent Developments and Real-World Context}
\label{sec:recent-context}
Since the initial formulation of the Operational Premise Taxonomy (OPT), the
real-world context surrounding artificial intelligence has continued to evolve
in ways that further motivate a mechanism-level approach to classification,
design, and governance. Developments in regulation, governance frameworks,
incident reporting, and enterprise deployment all point toward increasing
complexity, heterogeneity, and hybridization of AI systems—precisely the
conditions under which coarse or historically contingent taxonomies become
misleading.
\subsection{Shift Toward Operational and Layered Governance}
Recent analyses of global AI governance emphasize the inadequacy of
single-axis or model-centric classification schemes, instead advocating
\emph{layered} or \emph{multi-level} frameworks that distinguish between policy,
organizational, and technical layers \citep{Lawfare2025LayeredGovernance}.
This shift reflects growing recognition that meaningful oversight must engage
with the \emph{operative characteristics} of systems, not merely their declared
purpose or application domain.

OPT is aligned with this direction by explicitly operating at the technical
mechanism layer, while remaining compatible with higher-level governance
frameworks. In contrast to policy taxonomies that classify systems by risk
category or deployment context, OPT provides a vocabulary for describing what
a system \emph{does computationally}, enabling principled connections between
technical design and governance concerns.
\subsection{Regulatory Developments and Classification Pressure}
The entry into force of the European Union Artificial Intelligence Act
\citep{EUAIAct2024} and related digital governance initiatives has intensified
the demand for precise, defensible system descriptions. While the EU AI Act
classifies systems primarily by risk category and intended use, compliance
requirements increasingly rely on technical documentation that explains system
behavior, adaptivity, and decision-making structure.

Similarly, the OECD's ongoing work on AI definitions and classification
highlights characteristics such as autonomy, adaptiveness, and learning
capacity as central to governance \citep{OECD2022AIClassification,OECD2025AgenticAI}.
These characteristics are not independent of underlying mechanisms: for
example, evolutionary adaptation (\Evo) and parametric learning (\Lrn) imply
very different forms of adaptivity and risk. OPT complements these regulatory
frameworks by making such mechanism-level distinctions explicit and
machine-readable.
\subsection{Rising Attention to AI Incidents and Risk Profiles}
Independent reporting indicates a continued increase in documented AI-related
incidents and harms across sectors, including safety-critical domains
\citep{Time2025AIHarms,OECD2023AIIncidents}. This trend has prompted renewed
interest in standardized incident reporting and causal analysis frameworks.

Mechanism-level classification is directly relevant to this effort. Different
OPT roots correspond to distinct risk profiles: for example, closed-loop
control systems (\Ctl) raise stability and safety concerns; evolutionary
systems (\Evo) raise issues of unpredictability and emergent behavior; and
probabilistic inference systems (\Prb) raise concerns related to uncertainty
propagation and calibration. OPT thus provides a principled substrate for
connecting observed incidents to underlying computational causes, rather than
treating AI systems as homogeneous entities.
\subsection{Enterprise Adoption and Documentation Demands}
Enterprise adoption of AI continues to accelerate, with increasing emphasis on
deploying hybrid systems that combine learning, search, symbolic reasoning, and
control \citep{Menlo2025EnterpriseAI}. At the same time, organizations face
mounting pressure to document, justify, and audit these systems for internal
risk management and external compliance.

Existing documentation artefacts such as Model Cards and AI Service Cards
address aspects of transparency but remain largely model-centric. OPT extends
this documentation landscape by enabling concise, mechanism-oriented summaries
that remain stable even as specific models or implementations change. In this
sense, OPT functions as an architectural descriptor rather than a model report.
\subsection{Implications for OPT}
Taken together, these developments reinforce the core motivation for OPT.
AI governance is moving toward operational realism; regulatory frameworks
increasingly require technical specificity; incident reporting demands causal
clarity; and enterprise practice is producing ever more hybrid systems. A
taxonomy that classifies AI systems by their operative mechanisms is therefore
not merely philosophically attractive, but practically necessary.

OPT does not replace policy-oriented classifications; rather, it provides a
technical backbone that can support them. By grounding classification in modes
of operation, OPT offers a stable reference frame for design, documentation,
audit, and governance amid rapid technological change.

0
paper/pieces/related-work.tex Normal file → Executable file
View File

paper/pieces/tab-opt-comparison-with-others.tex Executable file

@@ -0,0 +1,105 @@
\section{Comparison of OPT with Other AI Classification Frameworks}
\label{sec:comparison}
\begin{center}
\begin{tabular}{p{3cm} p{1.6cm} p{1.6cm} p{1.8cm} p{1.6cm} p{1.6cm}}
\hline
\textbf{Feature} &
\textbf{Supervised / Unsupervised} &
\textbf{Symbolic / Subsymbolic} &
\textbf{OECD / Policy Frameworks} &
\textbf{Model Cards / ADR} &
\textbf{OPT} \\
\hline
Mechanism-Level Classification &
No &
Partial &
No &
No &
Yes \\
\hline
Supports Hybrid Systems Explicitly &
Limited &
Limited &
No &
No &
Yes \\
\hline
Biological Correspondence &
Partial &
Limited &
No &
No &
Yes \\
\hline
Covers Evolutionary Methods &
No &
Partial &
No &
No &
Yes \\
\hline
Covers Control Systems &
No &
No &
No &
No &
Yes \\
\hline
Covers Swarm Methods &
No &
No &
No &
No &
Yes \\
\hline
Formal Grammar Defined &
No &
No &
No &
No &
Yes \\
\hline
Supports Governance Mapping &
Indirect &
No &
Yes &
Yes &
Yes \\
\hline
Detects Mechanism Drift &
No &
No &
No &
No &
Yes \\
\hline
Supports Agentic AI Reasoning &
No &
No &
Limited &
No &
Yes \\
\hline
Enables Mechanism-Guided Remediation &
No &
No &
No &
No &
Yes \\
\hline
\end{tabular}
\end{center}


@@ -0,0 +1,33 @@
\begin{table}[t]
\centering
\caption{OPT roots and characteristic agent failure modes.}
\label{tab:opt-agent-failures}
\begin{tabular}{p{2.5cm} p{4.5cm} p{6cm}}
\hline
\textbf{OPT Root} & \textbf{Primary Mechanism} & \textbf{Characteristic Failure Modes} \\
\hline
\Lrn & Parametric learning &
Overfitting, distributional shift, catastrophic forgetting, spurious
correlations \\
\Evo & Evolutionary adaptation &
Premature convergence, loss of diversity, unstable fitness dynamics \\
\Sym & Symbolic reasoning &
Rule inconsistency, brittleness, combinatorial rule explosion \\
\Prb & Probabilistic inference &
Miscalibration, uncertainty collapse, incorrect priors \\
\Sch & Search and optimization &
Heuristic bias, local minima, combinatorial explosion \\
\Ctl & Control and feedback &
Oscillation, instability, delayed response, unsafe transients \\
\Swm & Swarm dynamics &
Incoherent emergence, sensitivity to noise, coordination failure \\
\hline
\end{tabular}
\end{table}

paper/pieces/table-opt-comparison.tex Normal file → Executable file

paper/pieces/table-opt-risk.tex Normal file → Executable file

paper/references.bib Normal file → Executable file

@@ -859,3 +859,50 @@ and the importance of the book today are discussed.}
doi = {10.1109/ICSE.2003.1201248}
}
@article{Lawfare2025LayeredGovernance,
author = {Multiple Authors},
title = {Understanding Global AI Governance Through a Three-Layer Framework},
journal = {Lawfare},
year = {2025}
}
@misc{EUAIAct2024,
author = {{European Union}},
title = {Artificial Intelligence Act},
year = {2024},
  note = {Regulation (EU) 2024/1689}
}
@techreport{OECD2022AIClassification,
author = {{OECD}},
title = {OECD Framework for the Classification of AI Systems},
year = {2022},
institution = {Organisation for Economic Co-operation and Development}
}
@techreport{OECD2025AgenticAI,
author = {{OECD}},
title = {The Agentic AI Landscape and Its Conceptual Foundations},
year = {2025},
institution = {Organisation for Economic Co-operation and Development}
}
@article{Time2025AIHarms,
author = {Time Magazine Staff},
  title = {What the Numbers Show About AI's Harms},
journal = {Time},
year = {2025}
}
@techreport{OECD2023AIIncidents,
author = {{OECD}},
title = {Towards a Common Reporting Framework for AI Incidents},
year = {2023},
institution = {Organisation for Economic Co-operation and Development}
}
@misc{Menlo2025EnterpriseAI,
author = {Menlo Ventures},
title = {The State of Generative AI in the Enterprise},
year = {2025}
}