AI-Mediated Decisions and the Emerging Evidentiary Control Gap


Published December 12, 2025 | Version 1.0

Journal article | Open access

Description

Includes Appendix B: The AI Decision Surface Control Gap (canonical governance diagram).

AI assistants are now routinely used by employees, customers, suppliers, and partners to evaluate vendors, interpret obligations, compare products, and assess organisational suitability. These systems increasingly shape decisions before formal procurement, legal review, or internal approval processes begin.

Enterprises generally assume these assistants behave like stable analysts: consistent, reconstructable, and broadly aligned with approved disclosures. That assumption is no longer valid.

Under fixed conditions, leading AI systems generate compressed judgments that vary across runs, contradict prior outputs, silently substitute facts, and introduce representations that cannot be traced to any approved internal source. These outputs are not logged, governed, or reproducible within existing enterprise control frameworks, yet they influence decisions as if they were authoritative.

What has emerged is not a tooling issue, a marketing problem, or a debate about model quality. It is a governance and evidentiary control gap.

AI assistants now operate as a parallel decision surface that sits outside established systems of disclosure, assurance, and accountability.

This paper includes a canonical diagram and control taxonomy intended for citation in board briefings, audit committee discussions, and broader governance, risk, and assurance contexts.

Files

AI-Mediated Decisions and the Emerging Evidentiary Control Gap.pdf (118.5 kB)