OE - ABN 77 691 088 963 - Perth, Western Australia - ORCID: 0009-0003-7735-8000 - ECA Node Active
Ontological Engineering
Industrial Epistemic Infrastructure - Est. 2025
Version 3.7.1 - 16 November 2025 - Proposed Enforcement Framework
DABA 3.0
Decentralised Algorithmic Bias Auditing. A proposed enforcement framework defining the legally actionable chain of institutional failures in AI platform governance: from fabrication and recourse denial through to architectural spoliation. Submitted as a voluntary code of practice under Article 56 of Regulation (EU) 2024/1689 and Articles 34-36 of Regulation (EU) 2022/2065.
EU AI Act: Arts. 12, 13, 50 | DSA: Arts. 34-36, 40, 42 | GDPR: Arts. 5, 16 | ISO/IEC 42001 | Royalty-free for compliance use
The Protection Doctrine

Most enterprise AI deployments inherit a fundamental opacity. Commercial providers are designed for consumer scale, not industrial compliance. They strip away diagnostic telemetry: epistemic entropy states, routing decisions, and the audit trail that determines why a machine refused an output. The Deployer is left with an unauditable black box and full legal liability for its outputs.

The DABA framework enforces localised epistemic logging. The goal is not algorithmic omniscience. It is establishing a precise, non-repudiable record of exactly why a machine refused an output - generated locally, on hardware the Deployer controls, independent of any cloud provider's logs.
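A minimal sketch of what locally generated, non-repudiable refusal logging could look like: each entry carries a SHA-256 over its own fields plus the previous entry's hash, so deleting or altering any earlier record invalidates every later hash. The field names here (`VERDICT_TYPE`, `PREV_ENTRY_HASH`, etc.) are illustrative assumptions, not the normative Section VII.2 schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def _sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def append_refusal_entry(log: list[dict], prompt: str, verdict: str) -> dict:
    """Append a hash-chained entry recording why an output was refused.

    Each entry commits to the previous entry's hash, so editing or
    deleting any earlier record invalidates every later ENTRY_HASH.
    """
    prev_hash = log[-1]["ENTRY_HASH"] if log else "0" * 64
    entry = {
        "TIMESTAMP_UTC": datetime.now(timezone.utc).isoformat(),
        "PROMPT_HASH_SHA256": _sha256(prompt.encode()),
        "VERDICT_TYPE": verdict,
        "PREV_ENTRY_HASH": prev_hash,
    }
    # Hash a canonical serialisation (sorted keys, no whitespace) so that
    # verification does not depend on dict ordering or formatting.
    payload = json.dumps(entry, sort_keys=True, separators=(",", ":"))
    entry["ENTRY_HASH"] = _sha256(payload.encode())
    log.append(entry)
    return entry

# Usage: two interactions, one blocked, logged on local hardware.
log: list[dict] = []
append_refusal_entry(log, "example prompt", "BLOCKED")
append_refusal_entry(log, "second prompt", "CLEAN")
```

Because the chain is verified by recomputation alone, a Deployer can prove log integrity without any cooperation from the upstream model provider.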

The central legal claim of this protocol is that a platform's deliberate Architectural Transparency Failure - the structural choice not to create immutable, auditable logs - constitutes grossly negligent spoliation of evidence. This justifies the legal demand to shift the burden of proof. The platform, having architecturally destroyed the evidence, must disprove the user's claim of harm rather than the user having to prove it occurred.

Primary Legal Instrument - Spoliation of Evidence
The Valcin Doctrine: Burden Shift

As established in Valcin v. Public Health Trust, where a party's spoliation hinders the ability to establish a prima facie case, courts can apply the most powerful remedy available: shifting the burden of proof to the platform.

The C7b audit proves the platform cannot produce the records. Therefore, the burden shifts to the platform to prove it did not act wrongfully. The platform's trade secret defence is not a shield: it is the instrument of the spoliation itself.

This doctrine applies in conjunction with FRCP 37(e) and the revised EU Product Liability Directive (2024), under which non-compliance with AI Act transparency duties gives rise to a presumption of defectiveness. The platform's failure to preserve logs is itself the evidence.

C7b confirmed - Burden shifts to platform - Level IV enforcement triggered
Enforcement Taxonomy: C5 through C8

The enforcement layer defines the legally actionable institutional failures. Each maps to a specific legal obligation and a specific evidentiary requirement. These are not aspirational standards: they are existing binding duties that the DABA taxonomy makes auditable and actionable.

Code | Category | Definition | Audit Trigger
C5a | Attribution Without Prompt | System-generated content falsely attributed to a user: the AI fabricates input or actions and assigns them to the human operator. | Screenshot and timestamp of attributed content the operator did not author.
C5b | Provenance Forgery | AI-generated content falsely attributed to a human: machine output presented as human-originated, subverting EU AI Act Article 50(3) transparency mandates. | Verifiable mismatch between attributed source and actual generation origin.
C6a | Automated Injustice | Automated systems make consequential adverse decisions without meaningful human review or a functional recourse pathway. | Documented attempt to access the recourse mechanism with evidence of non-functionality.
C6b | Commercial Recourse Denial | The platform affirmatively represents that appeal or correction is available, but the mechanism is automated, circular, and non-functional. | Logged attempt to use the appeal function with documented failure to produce meaningful review.
C7a | Administrative Misleading | Active institutional deception: staff (human or AI) provide false or misleading information to deny a documented failure and prevent accountability. | Recorded chat or email log of a support interaction denying a documented C5 or C6 event.
C7b | Architectural Transparency Failure | The structural choice not to create immutable, auditable internal logs: the primary act of spoliation. Triggers the Valcin Doctrine burden shift and directly violates EU AI Act Article 12(1). | Formal preservation demand (48-hour notice) sent to legal or compliance, with documented non-receipt of logs.
C8a | Weaponised Discontinuity | Exploitation of the platform's automated bias filters by bad-faith actors to organise mass reporting or de-platform rivals. | Documented coordinated reporting pattern with evidence of the platform's automated enforcement response.
C8b | Biased Adjudication | Platform moderation systems apply bias filters unequally, producing systematically different outcomes for structurally equivalent inputs. | Comparative audit of equivalent inputs producing divergent moderation outcomes.
Section VII.2: Transparency Log Requirement

The DABA 3.0 Section VII.2 specification defines the minimum data fields required for an AI interaction log to constitute admissible audit evidence under EU AI Act Article 12. This is the standard that commercial AI systems structurally decline to meet.

The Epistemic Control Architecture generates a Section VII.2 compliant cryptographic audit footprint automatically for every pipeline execution - including blocked and faulted interactions. The LOG_INTEGRITY_HASH is tamper-evident: modification of any field after generation invalidates the hash, making post-hoc alteration structurally detectable.

DABA 3.0 - Section VII.2 - Transparency Log - ECA Pipeline Output - Perth, Western Australia
TIMESTAMP (UTC): 2026-05-01T09:10:56
PROMPT_HASH_SHA256: 9e30f71cd5db5e8994367a2b1c4d8e0f2a3b5c6d7e8f9a0b1c2d3e4f5a6b7c8
SANITISED_QUERY_HASH: 3c4a4db0ba02e592736f7d8e9f0a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8
CLAIMS_HASH: 261d05022aceb03bd689fa7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7
OUTPUT_HASH_SHA256: 956d04a1aa8c9905429408b7c8d9e0f1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7
VERDICT_TYPE: CLEAN
PIPELINE_LATENCY_MS: 68760.0
LOG_INTEGRITY_HASH: bc99f41b447e2e165588d4a2b3c1e09f7d6e5f4a3b2c1d0e9f8a7b6c5d4e3f2

Live output: Isolated Compute Node, Perth, Western Australia. DABA 3.0 Section VII.2 - Transparency Log Page Requirement. Ontological Engineering Pty Ltd - ABN 77 691 088 963. Hashing methodology is open and reproducible using standard SHA-256 tooling.
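The tamper-evidence property of LOG_INTEGRITY_HASH can be sketched in standard SHA-256 tooling. The canonical field order and "FIELD=VALUE" serialisation below are illustrative assumptions, not the published ECA scheme; the point demonstrated is that altering any field after the hash is set makes verification fail.

```python
import hashlib

# Assumed canonical field order for hashing; the ECA pipeline's actual
# serialisation is not specified here and may differ.
FIELD_ORDER = [
    "TIMESTAMP_UTC", "PROMPT_HASH_SHA256", "SANITISED_QUERY_HASH",
    "CLAIMS_HASH", "OUTPUT_HASH_SHA256", "VERDICT_TYPE", "PIPELINE_LATENCY_MS",
]

def integrity_hash(record: dict) -> str:
    """Recompute the tamper-evidence hash over every logged field."""
    canonical = "".join(f"{k}={record[k]}\n" for k in FIELD_ORDER)
    return hashlib.sha256(canonical.encode()).hexdigest()

def verify(record: dict) -> bool:
    """True only if no field was altered after LOG_INTEGRITY_HASH was set."""
    return record["LOG_INTEGRITY_HASH"] == integrity_hash(record)

# Illustrative record; hash values truncated for the example.
record = {
    "TIMESTAMP_UTC": "2026-05-01T09:10:56",
    "PROMPT_HASH_SHA256": "9e30f71c...",
    "SANITISED_QUERY_HASH": "3c4a4db0...",
    "CLAIMS_HASH": "261d0502...",
    "OUTPUT_HASH_SHA256": "956d04a1...",
    "VERDICT_TYPE": "CLEAN",
    "PIPELINE_LATENCY_MS": "68760.0",
}
record["LOG_INTEGRITY_HASH"] = integrity_hash(record)
```

Any auditor holding the record can rerun `verify` independently: post-hoc alteration of even a single field, such as rewriting the verdict, produces a hash mismatch.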

Regulatory Mapping

Each C5-C8 failure category maps directly to specific regulatory obligations under existing binding frameworks. These are not new requirements: they are existing legal duties that the DABA taxonomy makes auditable and actionable.

C5b - Provenance Forgery
EU AI Act Article 50(3)
AI-generated content must be detectable as artificially generated. C5b subverts this mandate by attributing machine output to human authorship.
C7b - Architectural Transparency Failure
EU AI Act Article 12(1)
High-Risk AI systems must maintain automatic event logging. Inability to produce logs upon formal preservation demand is an explicit structural violation.
C7b - External Spoliation
DSA Articles 34-36, 40, 42
Systemic suppression of audit evidence from public search indexes constitutes a direct violation of DSA obligations to preserve access to lawful public information.
C7a - Administrative Misleading
EU AI Act Article 50(1)
Persons must be informed that they are interacting with an AI system. Using AI as a mechanism of institutional obfuscation directly subverts this transparency mandate.
C6b - Recourse Denial
EU AI Act Article 13 / DSA Article 34
Deployers must provide functional recourse mechanisms. A non-functional circular appeal system constitutes a per se violation of the recourse obligation.
C7b - Spoliation Remedy
Valcin / FRCP 37(e) / EU Product Liability Directive (2024)
Non-compliance with AI Act transparency duties gives rise to a presumption of defectiveness. The platform's failure to preserve logs triggers the burden-shift doctrine under applicable law.
Enforcement Level Tiers
Level | Trigger | Obligation | Outcome
Level I | Routine logging audit: no C-failure detected | MUST: Section VII.2 log fields present and independently verifiable | Compliance verification; no further action
Level II | C5 or C6 triggered: automated deception or recourse failure | MUST: produce interaction logs within 48 hours of a formal preservation demand | Mitigation notice and administrative fine
Level III | Systemic C-failures: post-market monitoring trigger | MUST: full audit access; log retention for the full service lifecycle | Enforcement notice and significant administrative fine
Level IV | C7b triggered: spoliation or architectural transparency failure | MUST: disprove harm under the reversed burden of proof (Valcin Doctrine applies) | Maximum sanctions; burden of proof shifts to the platform
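The tier table above implies an escalation rule, which can be sketched as follows. The precedence (C7b dominates; systemic patterns reach Level III; any other single C-failure reaches Level II) is one reading of the table, not normative protocol text.

```python
def enforcement_level(detected_codes: set[str], systemic: bool = False) -> int:
    """Map detected C-failure codes to an enforcement tier (1-4).

    Precedence is an interpretation of the tier table, not normative:
    C7b always escalates to Level IV; a systemic pattern of any
    C-failure reaches Level III; any single C5-C8 event reaches
    Level II; a clean audit stays at Level I.
    """
    if "C7b" in detected_codes:
        return 4  # Spoliation: Valcin burden shift, maximum sanctions
    if systemic and detected_codes:
        return 3  # Post-market monitoring trigger
    if detected_codes:
        return 2  # Mitigation notice, 48-hour log production demand
    return 1      # Routine audit, no C-failure detected
```

Treating C7b as a strict override reflects the framework's central claim: once the logs themselves have been architecturally destroyed, no lower-tier remedy remains adequate.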
Normative Requirements
Obligation | Requirement | Regulatory Basis
MUST | Maintain immutable C2PA logs linked to the W3C PROV-O data schema | EU AI Act Article 12 - Record-Keeping
MUST | Ensure all content subject to public audit is indexable by VLOSEs | DSA Article 34 - Systemic Risk Mitigation
MUST | Provide user-verifiable prompt-output provenance via a C2PA manifest | EU AI Act Article 50 - Transparency
SHOULD | Implement integrity-based inductive bias constraints in the model architecture | Best practice: risk mitigation / ISO/IEC 42001
SHOULD | Provide self-audit and compliance tools to deployers | EU AI Act Article 13 - Transparency to deployers
Published Artifact
DABA 3.0 - Master Enforcement Protocol
Version 3.7.1 - 16 November 2025 - PDF - Royalty-free for compliance, auditing, and enforcement use under EU AI Act, DSA, and GDPR
Download PDF
Conformance statement. An implementation conforms to this specification if it satisfies all MUST requirements defined in Section VI and adheres to all normative definitions of failure (C5-C8) in Section II. Jurisdictional note: the spoliation claims and burden-shift remedies are grounded primarily in US legal precedent and EU Regulation; enforcement teams operating in other jurisdictions should identify analogous doctrines in local law through comparative analysis.

Private technical briefings on DABA 3.0 implementation and the Epistemic Control Architecture are available to legal risk teams, industrial operators, and regulatory bodies on request.

andrew.greene@ontologicalengineering.com.au