\section{Design and Governance with OPT--Intent and OPT--Code}
\label{sec:design-governance}

A central motivation for the Operational Premise Taxonomy (OPT) is to support
not only the analysis of existing AI systems, but also the design and
governance of systems throughout their lifecycle. Most established AI
documentation frameworks focus on models that already exist---for example,
Model Cards, AI Service Cards, or post-hoc documentation embedded in
software-engineering artefacts. In contrast, OPT provides an explicit
mechanism-level vocabulary that can be applied \emph{before}, \emph{during},
and \emph{after} implementation.

To this end, we distinguish two complementary artefacts:
\emph{OPT--Intent}, a design-time declaration of planned mechanisms, goals,
constraints, and risks; and \emph{OPT--Code}, an implementation-time
classification of the system as built. Together, these artefacts form a
governance substrate that is lightweight, expressive, and compatible with
software-architecture practices and AI governance frameworks.
\subsection{OPT--Intent as a Design-Time Mechanism Declaration}

OPT--Intent expresses the \emph{intended} operative mechanisms
(\Lrn, \Evo, \Sym, \Prb, \Sch, \Ctl, \Swm), the domain-level goal, key
constraints, anticipated risks, and the deployment context. The notation
supports early-stage architectural reasoning:

\begin{quote}\ttfamily
INTENT-OPT = Sch/Evo \\
INTENT-GOAL = robust-production-schedule-under-dynamic-constraints \\
INTENT-CONSTRAINTS = real-time, explainable, limited-human-oversight \\
INTENT-RISKS = local-minima, premature-convergence \\
INTENT-CONTEXT = manufacturing-decision-support
\end{quote}
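As an illustration, the declaration above can be read as a simple key--value
record. The following Python sketch turns it into a structured object; the
field names are taken from the example, while the parsing rules are our
assumption rather than a fixed OPT grammar:

\begin{verbatim}
from dataclasses import dataclass

@dataclass
class OptIntent:
    opt: list[str]          # intended mechanism roots, e.g. ["Sch", "Evo"]
    goal: str
    constraints: list[str]
    risks: list[str]
    context: str

def parse_intent(text: str) -> OptIntent:
    # Assumes one "KEY = value" pair per line, as in the example above.
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition("=")
        fields[key.strip()] = value.strip()
    return OptIntent(
        opt=fields["INTENT-OPT"].split("/"),
        goal=fields["INTENT-GOAL"],
        constraints=[c.strip() for c in fields["INTENT-CONSTRAINTS"].split(",")],
        risks=[r.strip() for r in fields["INTENT-RISKS"].split(",")],
        context=fields["INTENT-CONTEXT"],
    )
\end{verbatim}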
This declaration resembles a focused architectural decision record (ADR), but
is grounded in OPT's mechanism vocabulary. Whereas classical goal-oriented
requirements engineering (GORE) frameworks such as KAOS, i\*, or Tropos
provide high-level goal models, OPT--Intent provides a mechanism-centered
annotation that connects those goals to families of computational approaches.
\subsection{OPT--Code as an Implementation-Time Mechanism Description}

Once a system is implemented, its operative mechanisms can be classified via
OPT--Code:

\begin{quote}\ttfamily
OPT=Evo/Sch/Sym; Rep=permutations+rules; Obj=production-cost; \\
Data=inventory+constraints; Time=generations+online-adjust; Human=medium
\end{quote}
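As with OPT--Intent, this line can be handled as plain structured text. A
minimal Python sketch, assuming that \texttt{;} separates fields and
\texttt{/} separates mechanism roots (our reading of the notation, not a
normative definition):

\begin{verbatim}
def parse_opt_code(code: str) -> dict:
    # Split "KEY=value" fields on ";", then split mechanism roots on "/".
    fields = {}
    for part in code.split(";"):
        key, _, value = part.partition("=")
        fields[key.strip()] = value.strip()
    fields["OPT"] = fields["OPT"].split("/")
    return fields

code = ("OPT=Evo/Sch/Sym; Rep=permutations+rules; Obj=production-cost; "
        "Data=inventory+constraints; Time=generations+online-adjust; "
        "Human=medium")
print(parse_opt_code(code)["OPT"])   # -> ['Evo', 'Sch', 'Sym']
\end{verbatim}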
OPT--Code reflects the \emph{actual} mechanisms as they appear in the final
architecture and implementation. Comparing OPT--Intent with OPT--Code provides
a principled way to detect architectural drift, unplanned mechanism additions,
and deviations from original constraints.
\subsection{Alignment Analysis: Intent vs.~Implementation}

The relationship between OPT--Intent and OPT--Code supports alignment analysis
across the AI system lifecycle; a minimal comparison sketch follows the list:

\begin{itemize}
  \item \textbf{Mechanism alignment:} Whether the realized mechanisms match
    the intended roots, or whether additions (e.g., \Sym~for explainability)
    or substitutions introduce new behavior.
  \item \textbf{Objective alignment:} Whether the objective in OPT--Code (Obj)
    is consistent with the purpose in INTENT-GOAL.
  \item \textbf{Constraint alignment:} Whether the implementation respects
    INTENT-CONSTRAINTS (e.g., real-time, explainability, human oversight).
  \item \textbf{Risk evolution:} Whether realized mechanisms introduce
    additional risks relative to INTENT-RISKS (e.g., adding learned
    components introduces data-dependence).
\end{itemize}
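Building on the two parsers sketched above, the mechanism- and risk-level
checks can be approximated mechanically, whereas objective and constraint
alignment typically still require human or LLM judgment. The comparison rules
below are illustrative assumptions, not a normative OPT procedure:

\begin{verbatim}
def check_alignment(intent: OptIntent, code: dict) -> list:
    # Compare declared mechanism roots against the implemented ones.
    findings = []
    intended, realized = set(intent.opt), set(code["OPT"])
    if realized - intended:
        findings.append("unplanned mechanism(s): %s" % sorted(realized - intended))
    if intended - realized:
        findings.append("missing mechanism(s): %s" % sorted(intended - realized))
    # Risk evolution: an added learned component implies data-dependence.
    if "Lrn" in realized - intended:
        findings.append("new risk: data-dependence from added Lrn component")
    # Objective and constraint alignment are left to a human or LLM reviewer.
    return findings
\end{verbatim}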
This analysis can be automated with an ``OPT--Intent Alignment Evaluator'' in
LLM-based workflows, producing an alignment verdict and score.
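One possible shape for that evaluator's output, aggregating the findings from
the sketch above into a verdict and score; the scoring rule is a placeholder
assumption:

\begin{verbatim}
def alignment_verdict(findings: list) -> tuple:
    # Each finding costs a fixed penalty; clamp the score to [0, 1].
    score = max(0.0, 1.0 - 0.25 * len(findings))
    verdict = ("aligned" if not findings
               else "partial" if score >= 0.5 else "drifted")
    return verdict, score
\end{verbatim}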
\subsection{Integration with Existing Governance Artefacts}

OPT aligns with and supplements existing governance frameworks:

\paragraph{AI Documentation (Model Cards, AI Service Cards).}
Model Cards and similar frameworks capture purpose, data provenance,
limitations, and performance characteristics of trained models. These
artefacts are typically produced only once a model is complete. OPT--Intent
supplements them with a \emph{design origin}, and OPT--Code provides a
\emph{mechanism-centric summary} useful for governance, reproducibility, and
safety assessments.
\paragraph{Architecture Decision Records (ADRs).}
ADRs record the rationale for major architectural decisions. OPT--Intent
functions as a structured, mechanism-focused ADR, intended to be referenced in
downstream ADRs describing implementation choices and trade-offs.

\paragraph{Safety, Risk, and Impact Assessments.}
Regulatory frameworks such as the OECD AI classification or the NIST AI Risk
Management Framework classify AI systems according to use, risk, and context.
OPT complements these by classifying operative mechanisms. Mechanism-level
classification is critical because risk profiles are often mechanism-dependent:
population-based adaptation (\Evo), closed-loop control (\Ctl), and
probabilistic inference (\Prb) each generate distinct failure modes.

\paragraph{GORE and Requirements Engineering.}
OPT--Intent is compatible with KAOS, i\*, and Tropos goal structures, providing
a compact mapping from stakeholder goals to operative mechanisms. Instead of
treating ``use AI'' as a monolithic design choice, OPT forces the mechanism to
be named explicitly.
\subsection{Lifecycle Governance with OPT}

An AI system moves through phases of design, implementation, deployment,
revision, and decommissioning. OPT supports governance at each phase:

\begin{enumerate}
  \item \textbf{Design:} Authors specify OPT--Intent and identify mechanistic
    justifications and constraints.
  \item \textbf{Implementation:} OPT--Code is generated and compared with
    Intent for architectural drift.
  \item \textbf{Evaluation:} OPT classifiers, evaluators, and adjudicators
    check mechanism correctness, formatting, and risk implications.
  \item \textbf{Deployment:} OPT--Code informs safety monitoring, audit logs,
    and mechanism-specific risk controls (e.g., for \Ctl~or \Evo~systems).
  \item \textbf{Revision and re-training:} OPT alignment is reassessed when
    system behavior changes or new mechanisms are introduced.
  \item \textbf{Documentation \& reporting:} OPT--Intent and OPT--Code form
    part of a long-term audit trail, linking design rationale to
    implemented system behavior (a record sketch follows the list).
\end{enumerate}
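For the audit trail in the final step, one minimal record format is sketched
below; the field choices and the JSON encoding are our assumptions, not part
of OPT:

\begin{verbatim}
import datetime
import json

def audit_entry(phase: str, intent_text: str, opt_code: str,
                verdict: str, score: float) -> str:
    # One lifecycle checkpoint linking design rationale to implementation.
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "phase": phase,            # e.g. "implementation", "revision"
        "opt_intent": intent_text,
        "opt_code": opt_code,
        "verdict": verdict,
        "score": score,
    }, indent=2)
\end{verbatim}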
\subsection{AI Design Assistants and Automated Governance}

OPT also provides a structured interface for LLM-based design assistants.
Given a functional goal or stakeholder requirement, an OPT-aware model can
produce candidate OPT--Intent declarations and propose mechanism families
suitable for achieving the goal. Downstream evaluation and adjudication
prompts make it possible to manage and audit these proposals automatically.
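A sketch of such a design-assistant prompt follows; the wording and the
\texttt{llm} callable are hypothetical illustrations, not part of OPT itself:

\begin{verbatim}
PROMPT = """You are an OPT-aware design assistant.
Given the stakeholder requirement below, propose a candidate OPT-Intent
declaration (INTENT-OPT, INTENT-GOAL, INTENT-CONSTRAINTS, INTENT-RISKS,
INTENT-CONTEXT) and briefly justify each chosen mechanism root.

Requirement: {requirement}
"""

def propose_intent(requirement: str, llm) -> str:
    # "llm" is any callable mapping a prompt string to a completion string.
    return llm(PROMPT.format(requirement=requirement))
\end{verbatim}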
Such workflows enable a novel form of governance: mechanism-level
traceability. Instead of asking only whether a system is ``fair,'' ``safe,'' or
``performant,'' practitioners can ask whether its mechanisms match the intended
design, whether mechanism additions introduce new risks, and whether the
alignment between purpose and implementation is preserved over time. OPT thus
becomes a bridge between requirements engineering, architectural practice, risk
governance, and the technical analysis of AI systems.