// Reference

AI Governance Glossary

Key terms for enterprise decision teams, compliance officers, and regulated-industry professionals evaluating AI governance frameworks. Definitions are written for practitioners, not researchers.

Term 01

AI Governance

The set of policies, processes, and technical mechanisms an organization uses to ensure AI-generated outputs are reliable, auditable, and aligned with institutional risk standards. Effective AI governance answers three questions: who is accountable for the output, how the output was produced, and how a disagreement between the AI output and the actual facts would be detected and corrected. See also: AI governance for financial services.

Term 02

First-Mover Bias

The documented tendency for AI models to anchor on the first response in a chain, causing subsequent models to converge toward the initial position rather than reason independently. In practice, if you ask a second AI model to evaluate the first model's answer, the second model is statistically likely to agree — not because it independently reached the same conclusion, but because the first answer has already framed the problem. Multi-model deliberation that eliminates first-mover bias requires all models to generate positions independently before any cross-review occurs. This is the core architectural constraint of AI Consensus Phase 1.
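A minimal sketch of that constraint, assuming a hypothetical query_model(model, prompt) helper and illustrative prompt wording rather than the product's actual implementation:

```python
def deliberate(models, question, query_model):
    # Phase 1: every model answers independently, with no shared context,
    # so no model can anchor on another model's output.
    positions = {m: query_model(m, question) for m in models}

    # Phase 2: cross-review starts only after all positions are recorded.
    reviews = {}
    for reviewer in models:
        peers = {m: p for m, p in positions.items() if m != reviewer}
        prompt = (f"Question: {question}\n"
                  f"Peer positions: {peers}\n"
                  "Critique each position on factual grounds.")
        reviews[reviewer] = query_model(reviewer, prompt)
    return positions, reviews
```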

Term 03

Confidence Score (AI)

A quantitative measure of agreement across multiple AI model outputs. In a structured deliberation system, confidence is computed from the language patterns of cross-review responses — the presence of explicit agreement language increases the score, while conflict language decreases it. A confidence score gives decision-makers an actionable signal: a 91% score and a 43% score on the same question warrant different levels of additional scrutiny. Used by investment research teams to assess how settled a thesis is before LP presentation.
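An illustrative sketch of this style of scoring; the phrase lists and the simple agreement ratio are assumptions, not AI Consensus's actual lexicon or formula:

```python
AGREEMENT = ("i agree", "is correct", "consistent with", "confirms")
CONFLICT = ("i disagree", "is incorrect", "contradicts", "overstates")

def confidence_score(cross_reviews):
    """Return a 0-100 score from agreement vs. conflict language."""
    agree = conflict = 0
    for review in cross_reviews:
        text = review.lower()
        agree += sum(text.count(p) for p in AGREEMENT)
        conflict += sum(text.count(p) for p in CONFLICT)
    total = agree + conflict
    if total == 0:
        return 50.0  # no explicit signal either way
    return round(100.0 * agree / total, 1)
```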

Term 04

Conflict Detection

The automated process of identifying and attributing factual disagreements between AI model outputs. In a single-model query, factual uncertainty is invisible — the model produces a confident-sounding answer regardless of how contested the underlying facts are. Conflict detection surfaces disagreements explicitly, identifies which model dissented, and records the nature of the dispute in an audit-ready format. Critical for government contractor deliverables that must withstand procurement review.
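One plausible shape for such a record, with illustrative field names and sample values rather than the product's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConflictRecord:
    claim: str              # the contested factual assertion
    dissenting_model: str   # which model disagreed
    majority_position: str
    dissent_position: str
    detected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = ConflictRecord(
    claim="Q3 revenue grew 12% year over year",
    dissenting_model="model-b",
    majority_position="Growth was 12%, per the most recent filing.",
    dissent_position="Growth was 8%; the 12% figure excludes a divested unit.",
)
```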

Term 05

Agentic Recovery

An automated process in which an AI system detects its own knowledge limitations during synthesis — typically signaled by phrases such as "unable to verify" or "as of my training cutoff" — and initiates a live information retrieval step before completing its output. Agentic recovery reduces hallucination risk by preventing the system from generating confident-sounding answers on topics where training data is thin or outdated. AI Consensus implements this via automatic Perplexity web search, triggered once per analysis when data gaps are detected.
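A sketch of the trigger logic, with a hypothetical web_search(query) standing in for the live retrieval step; the gap phrases are the ones named above:

```python
GAP_PHRASES = ("unable to verify", "as of my training cutoff")

def recover_if_needed(draft, question, web_search, already_recovered):
    """Run at most one live retrieval per analysis when a gap is flagged."""
    gap_detected = any(p in draft.lower() for p in GAP_PHRASES)
    if gap_detected and not already_recovered:
        return web_search(question), True  # fresh evidence for re-synthesis
    return None, already_recovered
```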

Term 06

Multi-Model Deliberation

A structured analytical process in which multiple AI models independently evaluate a question, cross-review each other's positions, and synthesize findings under a neutral moderator. The architecture is designed to replicate the epistemic benefits of an expert panel: independent reasoning, structured disagreement, and a defensible final recommendation. The critical design constraint is phase separation — models must not see each other's outputs until all Phase 1 positions are recorded. Read more: What Is AI Consensus?
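A skeleton of the full three-phase flow, under the same hypothetical query_model helper as the Term 02 sketch; here the moderator is modeled as a designated model that contributes no Phase 1 position of its own:

```python
def run_deliberation(models, moderator, question, query_model):
    # Phase 1: independent positions, all recorded before any cross-exposure.
    positions = {m: query_model(m, question) for m in models}

    # Phase 2: each model critiques the recorded peer positions.
    reviews = {}
    for m in models:
        peers = {k: v for k, v in positions.items() if k != m}
        reviews[m] = query_model(m, f"Critique these peer positions: {peers}")

    # Phase 3: a neutral moderator synthesizes the final recommendation.
    return query_model(
        moderator,
        f"Question: {question}\nPositions: {positions}\nReviews: {reviews}\n"
        "Synthesize a final recommendation and note any dissent.",
    )
```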

Term 07

BLUF (Bottom Line Up Front)

A structured communication format, originally from military and intelligence writing, in which the key conclusion is stated in the first sentence before any supporting evidence or analysis. AI Consensus uses BLUF formatting for all Phase 3 synthesis outputs because it matches the reading behavior of executive decision-makers: conclusion first, evidence second, dissent noted, next steps last. BLUF-formatted outputs are directly usable in board materials and procurement review packages.
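A minimal rendering sketch of that section order; the function and its wording are illustrative, not a specification:

```python
def format_bluf(conclusion, evidence, dissent, next_steps):
    """Render conclusion first, then evidence, dissent, and next steps."""
    lines = [f"BOTTOM LINE: {conclusion}", "", "EVIDENCE:"]
    lines += [f"- {item}" for item in evidence]
    lines += ["", f"DISSENT: {dissent or 'None recorded.'}", "", "NEXT STEPS:"]
    lines += [f"- {step}" for step in next_steps]
    return "\n".join(lines)
```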

Term 08

AI Auditability

The capacity of an AI system to produce a verifiable record of how a recommendation was generated — including which models or processes were involved, where disagreements occurred, and what confidence the system assigns to its output. Auditability is a prerequisite for using AI output in regulated decision-making contexts. A single-model chatbot response is not auditable by definition; it produces an answer without a traceable analytical process. See: compliance requirements for financial services.
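A hypothetical structure for such a record; the fields mirror the elements named above, but the names themselves are illustrative:

```python
from dataclasses import dataclass

@dataclass
class AuditRecord:
    question: str
    models: list[str]     # which models participated
    conflicts: list[str]  # attributed disagreements, if any
    confidence: float     # final confidence score, 0-100
    synthesis: str        # the final recommendation text
```

A reviewer can trace a recommendation back through each field: who participated, where they disagreed, and how settled the answer was.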

Term 09

Council Mode

A structured AI deliberation format in which each participating model is assigned a distinct analytical persona — The Realist, The Humanist, The Futurist, The Architect, The Analyst, The Strategist, and The Contrarian — before the question is posed. Council Mode is designed to surface analytical angles that a homogeneous panel of identically prompted models would miss, particularly dissenting or unconventional perspectives that improve the overall quality of the synthesis.
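A sketch of persona assignment; the persona list comes from the definition above, while the prompt wording is an assumption:

```python
PERSONAS = ("The Realist", "The Humanist", "The Futurist", "The Architect",
            "The Analyst", "The Strategist", "The Contrarian")

def assign_personas(models):
    """Pair each model with a distinct persona before the question is posed."""
    if len(models) > len(PERSONAS):
        raise ValueError("more models than available personas")
    return {model: f"You are {persona}. Analyze the question strictly "
                   "from that perspective."
            for model, persona in zip(models, PERSONAS)}
```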

Term 10

Precision Mode

A constrained AI analysis format in which all participating models are required to ground their responses in a specific document, citing passage locations for each claim using the format [Ref: "exact quote"]. Any assertion not supported by the document must be labeled [External Knowledge]. Precision Mode is used when the question requires analysis of a specific contract, report, or regulatory filing rather than general knowledge — common in investment due diligence and government proposal review.
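A sketch of how such labeling could be checked mechanically; the helper is illustrative and the sentence splitting is deliberately naive:

```python
import re

REF = re.compile(r'\[Ref: ".+?"\]')
EXTERNAL = re.compile(r'\[External Knowledge\]')

def unlabeled_claims(output):
    """Return sentences with neither a citation nor an external-knowledge tag."""
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', output) if s.strip()]
    return [s for s in sentences if not (REF.search(s) or EXTERNAL.search(s))]
```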

See These Concepts in Action

Run a free analysis and observe confidence scoring, conflict detection, and agentic recovery in a live deliberation.

Start Free Analysis