
DEMOS

Detection and Exposure of Manufactured Opinion at Scale — Why transparency is the only architecture that permanently disarms disinformation
beyond-decay.org — March 2026

I. Bucharest, 24 November 2024

Călin Georgescu had not polled above five percent in a single survey in the two weeks before Romania's presidential election. He was an ultranationalist outsider who rejected vaccinations, described NATO as an existential threat, and named Russia as Romania's natural partner. Yet on election night, 24 November, he stood at nearly 23 percent: two million Romanians had voted for him. That same night, nearly two million people searched his name on Google, because they did not know who he was.

What had happened? Not deepfakes. Not fabricated videos of his rival committing a crime. Not the classic disinformation of the twentieth century. What had happened was an operation that worked with the physical infrastructure of digital platforms: more than 25,000 TikTok accounts and 5,000 Telegram channels were mobilised. A central Telegram channel distributed pre-cut videos and phrasing — including instructions on which hashtags to use. At least $381,000 flowed to influencers who, most of them without knowing it, were promoting a coordinated campaign. TikTok's algorithm amplified Georgescu content in tests up to fourteen times more strongly than the content of his rivals.

On 6 December 2024, Romania's Constitutional Court annulled the first round — the first time in EU history that a presidential election was annulled due to digital interference.

And then the annulment itself became a weapon. "Democracy has abolished itself," said Georgescu's supporters. "The deep state stole the people's votes." The disinformation generated its own counter-movement, and this counter-movement was no less real than the original operation.

That is the actual problem.

II. The Wrong Diagnosis — Deepfakes Are Not the Main Problem

Public debate about AI-assisted disinformation revolves almost exclusively around deepfakes: deceptively real images, audio and videos depicting events that never occurred. This is understandable — deepfakes are visible, tangible, dramatic. They make good headlines.

But the data tell a different story. A systematic analysis of all documented AI uses in elections worldwide in 2024 found: half of all AI use was not deceptive. Most deceptive content could have been produced cheaply without AI. And conventionally manipulated content — so-called cheap fakes — was used seven times more often than AI-generated material. The Romania problem was not a deepfake problem. It was an algorithm problem.

The most effective element of the Georgescu operation was not the quality of the content but the amplification architecture. The content was simple: hashtags, short videos, emotional appeals to spirituality and sovereignty. What made it a catastrophe was that TikTok's recommendation system delivered it millions of times, while simultaneously a coordinated bot network provided the signals that drove the algorithm.

That is the more precise diagnosis: AI-assisted disinformation is less a question of synthetic content than a question of algorithmic amplification. Those who understand and manipulate the algorithm do not need deepfakes. They need coordination, speed, and the ability to generate enough engagement signals that the platform does the work.

III. The Asymmetry — the Hardest Problem in the Series

In NUET, RIEGEL, MESH, SHADOW, AGORA and COSMOS there is a symmetric vulnerability: those who use nuclear weapons endanger their own economy. Those who close the Suwałki Gap enclose Kaliningrad. Those who destroy satellites generate debris. The self-punishment architecture works because the attacker is also exposed.

In election-context disinformation, this symmetry is weaker. Russia can manipulate Romanian elections — Romania cannot manipulate Russian elections, because none worth the name exist. China can flood Taiwan's information space — Taiwan cannot influence China's information space in the same way. Democracies are structurally more open and therefore structurally more vulnerable.

That makes DEMOS the most difficult concept in the series. The architectural response cannot lie in counter-symmetry, in doing the same thing back. It must lie in strengthening immunity: the ability to recognise, name and thereby disarm the operation. This changes the design problem fundamentally.

There is a self-punishment structure — it is simply more subtle. It is called the liar's dividend: those who systematically introduce synthetic and manipulated content into public discourse poison the epistemic climate for everyone — including themselves. Russia's population today lives in an information space where nothing is reliably trustworthy. State media are not believed. Western media are not believed. AI-generated content is not believed. The systematic poisoning of the information well ultimately hits the poisoner themselves — but slowly and diffusely, not quickly and precisely like the other mechanisms in the series.

IV. What the DSA Achieves — and Why It Is Too Slow

With the Digital Services Act, Europe has a legal framework that points in the right direction. The DSA obliges large platforms to be transparent about recommendation algorithms, to conduct risk assessments for elections, to cooperate with researchers and to respond rapidly to demonstrated influence operations. The European Commission opened a DSA investigation against TikTok after the Georgescu incident.

The problem: this investigation is still running more than a year after the event. Preliminary findings on TikTok's advertising repository were published — in a separate investigation, not the election-specific one. The final verdict on the electoral influence operation is still pending.

A disinformation operation runs in real time. It can influence an election before an authority has held even a first hearing. The DSA sets the right norms — but its enforcement mechanism is not built for the pace of the problem. This is not a failure of law, but a failure of architecture: law reacts, operations act.

V. DEMOS — The Concept

DEMOS stands for Detection and Exposure of Manufactured Opinion at Scale. The Greek demos — the people — is the origin and goal of democracy. DEMOS is the claim: the people must be able to know when their opinion is being manufactured.

The concept is not based on censorship. It is not based on a state truth authority. It is based on a single principle: transparency about the origin of political content and the mechanisms of its dissemination. Not: this content is false. Rather: this content is AI-generated. This amplification is coordinated. This campaign is financed.

The concept has four elements.

First element: Provenance labelling as obligation. Every piece of AI-generated political content — audio, video, image or text — carries a machine-readable provenance marker. The technical standard already exists: C2PA (Coalition for Content Provenance and Authenticity) is an open standard developed by Adobe, Microsoft, Google and others that embeds cryptographically signed metadata in images and videos documenting origin and editing history. What is missing is the legal obligation to use it. In the EU this means: all AI providers operating in the EU, or serving content used in the EU, are obliged to apply C2PA-compliant markings to politically relevant generated content. Platforms are obliged to make these markings visible in political contexts. This does not prevent deepfakes — but it makes them identifiable. The transponder logic of SHADOW, applied to political content: those who travel without a marking declare themselves suspect.

Second element: Algorithmic transparency in election periods. The Romania problem was not a content problem — it was an amplification problem. DEMOS therefore demands not content control but algorithmic transparency in the 60 days before an election: platforms must demonstrate on request how their recommendation systems handle political content. Not the ranking in individual cases — but the systemic effect: is one candidate being structurally amplified more than others? Are specific hashtags being driven upward through coordinated activity? These data must be transmitted in real time to national electoral authorities and accredited researchers. Not because the authority should intervene — but because transparency alone makes the operation visible. And visible operations lose their effect.
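The systemic metric described above can be made concrete. The following sketch assumes two hypothetical inputs that no current platform API exposes in this form: impressions the recommender delivered per candidate, and each candidate's organic reach. It flags candidates amplified far above the field's median rate, the kind of structural skew the Romanian tests measured.

```python
# Hypothetical amplification audit: impressions delivered per unit of
# organic reach, compared across candidates. Field names are illustrative.
from statistics import median

def amplification_factors(delivered: dict[str, int],
                          organic_reach: dict[str, int]) -> dict[str, float]:
    """Recommender-delivered impressions per follower, per candidate."""
    return {c: delivered[c] / organic_reach[c] for c in delivered}

def structural_outliers(factors: dict[str, float],
                        threshold: float = 3.0) -> dict[str, float]:
    """Candidates amplified more than `threshold` times the median rate."""
    m = median(factors.values())
    return {c: f / m for c, f in factors.items() if f / m > threshold}

delivered = {"A": 1_200_000, "B": 900_000, "C": 14_000_000}
reach = {"A": 100_000, "B": 80_000, "C": 90_000}
print(structural_outliers(amplification_factors(delivered, reach)))
```

Note what the check does not need: no content inspection, no ranking of individual posts. Aggregate delivery statistics alone are enough to make a structural skew visible, which is exactly the transparency-without-intervention point of the second element.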

Third element: Rapid exposure as antidote. The most effective counter to disinformation is not removal but inoculation — the precautionary approach that immunises people before manipulation begins. When people know how influence operations work — what patterns, what hashtag strategies, what bot signatures — they become more resistant. DEMOS anchors in the EU electoral monitoring architecture a real-time exposure system: a European body — or better, a network of national bodies under EU coordination — that reports ongoing operations publicly, immediately, with source evidence. Not after months of investigation, but in the course of the operation. The Georgescu case showed it: when Romania's intelligence data were declassified, public opinion turned; Georgescu was barred from the rerun, and his camp lost it. The exposure, not only the court, was the weapon.

Fourth element: Financing transparency as standard. The Georgescu operation cost at least $381,000 in documented influencer payments — from a single operator. The money flowed without disclosure. EU rules on political advertising already include transparency obligations — but only for direct advertisements, not for paid influencer collaborations and coordinated organic campaigns. DEMOS extends this obligation: every organised political communication — paid or unpaid, organic or amplified — that is demonstrably coordinated is subject to disclosure requirements. This is technically difficult, but the standard must be set: those who finance a political operation and conceal it are liable for electoral interference.
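What "demonstrably coordinated" can mean in practice is itself a technical question. One of the simplest signals is a burst: many distinct accounts pushing the same hashtag inside a short window. The sketch below shows only that one signal with illustrative thresholds; real detection combines many signals (timing, text reuse, account age, shared infrastructure) and none of the names here come from an actual platform API.

```python
# Minimal coordination signal: flag a hashtag when at least `min_accounts`
# distinct accounts post it within any sliding window of `window_s` seconds.
from collections import defaultdict

def coordinated_hashtags(posts, window_s=600, min_accounts=50):
    """posts: iterable of (timestamp_s, account_id, hashtag) tuples."""
    by_tag = defaultdict(list)
    for ts, acct, tag in posts:
        by_tag[tag].append((ts, acct))
    flagged = set()
    for tag, events in by_tag.items():
        events.sort()          # chronological order
        i = 0                  # left edge of the sliding window
        for j, (ts, _) in enumerate(events):
            while events[i][0] < ts - window_s:
                i += 1
            if len({a for _, a in events[i:j + 1]}) >= min_accounts:
                flagged.add(tag)
                break
    return flagged

# 60 accounts inside one minute versus 60 accounts spread over 60 hours
burst = [(t, f"bot{t}", "#push") for t in range(60)]
sparse = [(t * 3600, f"user{t}", "#organic") for t in range(60)]
print(coordinated_hashtags(burst + sparse))
```

The point of the element survives the simplification: coordination leaves statistical fingerprints in timing alone, so a disclosure obligation is enforceable even when the payments themselves are hidden.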

VI. The Liar's Dividend — the Self-Punishing Structure

There is a self-punishment structure that receives little attention in the disinformation debate, but is real.

Those who systematically introduce synthetic and manipulated content into public discourse poison the epistemic climate — for everyone. This is called the liar's dividend: the more real manipulations become known, the easier it becomes for politicians and actors to dismiss real, compromising content as deepfakes. Real video evidence of corruption? That's a deepfake. A real intercept transcript? AI-generated. The systematic poisoning of the information space immunises the powerful against legitimate criticism.

But this mechanism also hits the attacker. Russia has for years blanketed its domestic audience in an information fog, and the effect is that a growing part of the Russian population trusts neither Western nor state sources. Information about war casualties, about economic consequences, about the real situation at the front — all of it is received with suspicion by the domestic population. A government that has poisoned its information space can no longer reliably mobilise its own population. This is not a quick, precise self-punishment like the Kessler Syndrome — but it is real, and its effect accumulates over time.

DEMOS accelerates this mechanism: the faster operations are exposed, the faster they lose their effect, and the faster the price the operator pays becomes visible. Not only the consequence for the victim, but the cost to the attacker themselves.

VII. What DEMOS Is Not

DEMOS is not a truth authority. There is no European body that decides what is true and what is false. Political satire, exaggeration, rhetoric — all remain permitted. What DEMOS demands is not the truth of the content but the transparency of its origin and dissemination.

DEMOS is not a censorship system. Deepfakes are not prohibited. Coordinated campaigns are not prohibited. Paid influencers are not prohibited. They must simply be recognisable as such.

DEMOS is not a panacea against political manipulation. People vote for real reasons, not only because an algorithm shows them something. Georgescu's voters had real frustrations. DEMOS does not combat the causes of political discontent — it combats the tools that synthetically amplify this discontent.

And DEMOS is not a purely national instrument. Disinformation operations are transnational — prepared in one country, delivered via platforms headquartered in another, received in the target country. The response must be equally transnational. DEMOS is a European minimum framework that coordinates — not replaces — national authorities.

VIII. What Must Be Done

First: C2PA labelling as a European obligation. All AI providers operating in the EU, or whose generated content is served in the EU, are required to apply C2PA-compliant provenance markings to politically relevant generated content. Platforms are required to make these markings visible in political contexts. Sanction for violation: immediate reporting obligation and a financial penalty proportional to turnover.

Second: DSA real-time mechanism for elections. The existing DSA architecture receives an electoral protocol with mandatory real-time data transmissions in the 60 days before a national election. National electoral authorities receive direct API access to amplification metrics from platforms. Response times for demonstrated operations are reduced to 24 hours — not months.

Third: A European rapid-response network for disinformation — not a new authority, but a coordination structure between national electoral authorities, intelligence coordination and accredited research institutions. Public communication of ongoing operations in real time, with source evidence, before the operation achieves its full effect. Romania showed: subsequent exposure works. Prior exposure is more effective.

Fourth: Media literacy as infrastructure. Inoculation — prior knowledge of manipulation techniques — is the only demand-side protection that works durably. European school and educational curricula that systematically convey how disinformation operations work: coordination, algorithmic amplification, bot networks, hashtag strategies. Not as a media criticism course, but as survival knowledge for the twenty-first century.

Fifth: Platform liability for electoral interference. The DSA creates obligations — but no proportionate consequences for systemic failure. A platform that demonstrably and culpably fails to detect or report a coordinated influence operation, despite having the signals available, is liable for the documented costs of the election rerun and the institutional loss of trust. This is not punishment for content — it is accountability for infrastructure.

Those who manufacture an opinion
must say they have manufactured it.
That is not a restriction on freedom of expression.
It is its precondition.

DEMOS — Detection and Exposure of Manufactured Opinion at Scale — is the seventh concept in the civilisational deterrence series after NUET, RIEGEL, MESH, SHADOW, AGORA and COSMOS. DEMOS differs from the other concepts in the series in one important respect: the self-punishment structure is weaker and more indirect. The attacker harms themselves through the liar's dividend — the poisoning of their own information space — but slowly and diffusely. The architectural response therefore relies more strongly on immunisation than on reciprocity: transparency as inoculation, exposure as disarmament.

The series is published on beyond-decay.org — constructive proposals for a world that needs them.

Hans Ley & Claude (Anthropic)
Nuremberg / San Francisco, March 2026