
Case Pattern: Epistemic Degradation Under Scale

In the epistemic fog, a better detector is not the answer. A better chain of custody is.

The Deadlock

AI is lowering the cost of generating plausible content while doing nothing to lower the cost of knowing whom to trust. Synthetic media makes authenticity contestable and verification expensive. By 2026, generative AI had reached a fidelity at which synthetic media became genuinely hard to distinguish from reality. The result is what some have called the Great Trust Recession: surveys report eroding trust in online content, with a substantial share of users unsure whether what they see is genuine. Users are increasingly seeking environments that offer stronger guarantees of authenticity and human accountability.

The instinct is to build a bigger lighthouse — more detection, more watermarks, more centralized verification. But a lighthouse has a fixed beam. The environment is no longer bounded.

The Failure of Control: The Lighthouse Problem

The detection arms race. Centralised deepfake detectors function like a lighthouse in a thickening fog: every published detector becomes training signal for the next, more deceptive generation of synthetic models. The Red Queen Effect applies with particular force here, because the effort required to maintain detection superiority exceeds the effort required to generate convincing fakes. The arms race is structurally unwinnable through detection alone.
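The dynamic can be made concrete with a toy simulation. The sketch below is illustrative only: the one-dimensional "content feature", the names, and the update rule are all hypothetical stand-ins for a real classifier and generator, chosen to show how a published decision boundary itself becomes the attacker's training signal.

```python
# Toy sketch of the detection arms race. Hypothetical names throughout;
# a scalar feature stands in for whatever a real detector scores.
import numpy as np

rng = np.random.default_rng(0)

def train_detector(real, fake):
    """Fit a 1-D threshold separating real from fake samples
    (a toy stand-in for retraining a deepfake classifier)."""
    return (real.mean() + fake.mean()) / 2.0

def evade(fake, threshold, step=0.5):
    """Generator update: nudge fake samples toward the published
    decision boundary. The detector supplies the training signal."""
    return fake + step * np.sign(threshold - fake.mean())

real = rng.normal(loc=0.0, scale=1.0, size=1000)  # authentic content
fake = rng.normal(loc=3.0, scale=1.0, size=1000)  # synthetic content

for round_ in range(6):
    threshold = train_detector(real, fake)   # defender improves
    caught = (fake > threshold).mean()       # detection rate this round
    print(f"round {round_}: threshold={threshold:.2f}, caught={caught:.0%}")
    fake = evade(fake, threshold)            # attacker trains against it
```

Each round, the fake distribution drifts toward the real one and the detection rate decays toward chance. The defender pays for every retraining cycle; the attacker only has to follow the gradient the defender publishes.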

Receding institutional authority. Traditional news organisations and government sites have seen trust erode as plausible deniability became weaponised: bad actors dismiss real evidence as AI-generated, while AI-generated content is presented as authentic. When everything is contestable, nothing is credible. The control layer (editorial gatekeeping, fact-checking, content moderation) recedes not because it is wrong but because the volume and sophistication of synthetic content outpace its capacity.

Platform complicity. Under the extraction model, platforms profit from engagement regardless of whether the content is authentic. Outrage scales; truth does not. The incentive structure of current digital architecture actively works against epistemic clarity.

Detection is structurally doomed. Every improvement trains better deception. Stop building a bigger lighthouse. Start trusting the crew.

Stop detecting fakes. Start proving provenance.

The Convergence Response: From Detection to Provenance

The emerging solution is not to spot fakes but to verify lineage: the question shifts from "is this content real?" to "has this content been handled by a chain of actors accountable for its integrity?"
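A chain of custody can be made cryptographically checkable. The sketch below is a minimal illustration, loosely inspired by C2PA-style content credentials but not the real C2PA format: the record structure and function names are hypothetical, and it assumes the `cryptography` package for Ed25519 signatures. Each handler signs a record containing the content hash and a hash of the previous record, so a verifier asks not "does this look fake?" but "does an unbroken, signed chain vouch for these exact bytes?"

```python
# Minimal sketch of a hash-linked, signed provenance chain.
# Hypothetical record format; requires the 'cryptography' package.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def append_record(chain, actor_key, actor_name, action, content: bytes):
    """Each handler signs the content hash plus a hash of the previous
    record, so any later tampering breaks every signature downstream."""
    record = {
        "actor": actor_name,
        "action": action,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "prev_sha256": hashlib.sha256(
            json.dumps(chain[-1], sort_keys=True).encode()
        ).hexdigest() if chain else None,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = actor_key.sign(payload).hex()
    chain.append(record)

def verify_chain(chain, public_keys, content: bytes):
    """Replay the chain: check every hash link and every actor's
    signature. True only if the whole chain of custody is intact."""
    prev = None
    for record in chain:
        body = {k: v for k, v in record.items() if k != "signature"}
        expected_prev = hashlib.sha256(
            json.dumps(prev, sort_keys=True).encode()
        ).hexdigest() if prev else None
        if body["prev_sha256"] != expected_prev:
            return False
        try:
            public_keys[record["actor"]].verify(
                bytes.fromhex(record["signature"]),
                json.dumps(body, sort_keys=True).encode(),
            )
        except InvalidSignature:
            return False
        prev = record
    # The final record must describe the content actually received.
    return chain[-1]["content_sha256"] == hashlib.sha256(content).hexdigest()

# Usage: a camera captures, an editor crops, a reader verifies.
photo = b"raw image bytes"
cam_key = Ed25519PrivateKey.generate()
editor_key = Ed25519PrivateKey.generate()
keys = {"camera": cam_key.public_key(), "editor": editor_key.public_key()}

chain = []
append_record(chain, cam_key, "camera", "capture", photo)
edited = photo + b" (cropped)"
append_record(chain, editor_key, "editor", "crop", edited)

print(verify_chain(chain, keys, edited))             # True: custody intact
print(verify_chain(chain, keys, b"tampered bytes"))  # False: mismatch
```

Note the inversion of the burden of proof: the verifier never inspects the pixels for artefacts. It only checks that identifiable, accountable actors have signed each step of the content's handling, which is exactly the shift from detection to provenance.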