AI and the Error-Correcting Equilibrium
The standard story about AI and propaganda is easy to understand because it extends something everyone already fears. Generative AI lowers the cost of producing persuasive fakery. It can write posts, create synthetic personas, fabricate images, tailor messages, and flood social platforms with plausible noise. In this version of the story, the future of politics is bot swarms, deepfakes, automated influence campaigns, and the collapse of shared reality.
That story is not wrong. It is incomplete.
AI does not only make lying cheaper. It also makes remembering cheaper. It makes comparison cheaper. It makes cross-source synthesis cheaper. It makes it easier to track what an institution said last week, what it says now, which predictions failed, which claims quietly disappeared, which audiences received incompatible versions of the same story, and which sources have repeatedly improved or degraded our model of reality.
This is the countervailing force missing from much of the AI propaganda talk that prompted this piece. AI may amplify propaganda, but it may also create an error-correcting equilibrium: a game-theoretic environment in which multiple synthesis systems track claims, contradictions, omissions, and track records over time. Under those conditions, deception does not vanish. But deception becomes more expensive.
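A toy payoff model shows why this is an equilibrium claim rather than a hope. Assume, purely for illustration, that each independent synthesis system catches a given contradiction with some fixed probability, and that being caught carries a reputational penalty; every number and name below is an assumption of mine, not a measurement.

```python
def deception_payoff(benefit: float, penalty: float,
                     per_system_catch: float, synthesis_systems: int) -> float:
    """Expected payoff of deceiving under a toy model: independent
    synthesis systems each catch the contradiction with probability
    per_system_catch, and being caught costs penalty."""
    p_caught = 1.0 - (1.0 - per_system_catch) ** synthesis_systems
    return benefit - p_caught * penalty

# Illustrative values: benefit 10, penalty 30, a 20% catch rate each.
for k in (0, 1, 3, 10):
    print(k, round(deception_payoff(10.0, 30.0, 0.2, k), 2))
# 0 -> 10.0, 1 -> 4.0, 3 -> -4.64, 10 -> -16.78: lying stays
# profitable only while synthesis systems are scarce or correlated.
```

Note the asymmetry: the truthful strategy's payoff does not decay as observers multiply.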
Truth has an accounting advantage. Reality carries much of the bookkeeping. If a speaker tells the truth, the timeline, documents, material constraints, witness accounts, and future evidence tend to cohere without continuous narrative maintenance. A lie has to simulate that coherence. It has to remember who was told what. It has to keep audiences partitioned. It has to suppress old versions, explain anomalies, generate auxiliary claims, and prevent misinformed publics from discovering that other publics were told something else.
The liar pays a narrative bookkeeping tax.
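The tax can be counted, at least in a back-of-the-envelope way. If each of n audiences receives its own version of a story, every pair of versions is a potential collision the moment any synthesizer can compare across audiences. A minimal sketch, assuming one consistency check per claim per audience pair; the function and numbers are illustrative:

```python
from math import comb

def bookkeeping_tax(audiences: int, claims: int) -> int:
    """Pairwise consistency constraints a partitioned narrative must
    survive: each pair of audience-specific versions can collide on
    each claim. A truthful speaker pays roughly zero here, because
    reality keeps every version coherent without maintenance."""
    return comb(audiences, 2) * claims

for n in (2, 5, 10, 50):
    print(n, bookkeeping_tax(n, claims=3))
# 2 -> 3, 5 -> 30, 10 -> 135, 50 -> 3675: the tax grows
# quadratically in the number of partitioned audiences.
```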
This is a practical constraint on the orthogonality hypothesis. At a high level of abstraction, intelligence and goals may be orthogonal: a more intelligent system can pursue arbitrary ends. But in a world populated by increasingly perspicacious observers, certain strategies become harder to sustain. If a goal requires deception across many channels, audiences, archives, and time horizons, then increased observer intelligence raises the cost of pursuing that goal. Truth-aligned strategies receive a subsidy from reality. Deception-aligned strategies must maintain a simulation.
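Stated as a hedged cost comparison, with symbols of my own choosing rather than any established notation:

```latex
% n = channels and audiences, I = observer capability,
% tau(n) = bookkeeping tax, d(I) = maintenance forced per constraint.
% An illustrative shape, not a derivation.
C_{\mathrm{truth}}(n, I) \approx c_0,
\qquad
C_{\mathrm{deception}}(n, I) \approx c_0 + \tau(n)\, d(I),
\qquad \tau'(n) > 0,\ d'(I) > 0.
```

On this reading, the subsidy from reality is just the missing tau(n) d(I) term on the truthful side.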
The central conflict, then, is not simply AI truth versus AI lies. It is AI fragmentation versus AI synthesis.
Propaganda wants fragmentation: many personas, many tailored narratives, many mutually incompatible frames, each audience isolated from the others. Anti-propaganda wants synthesis: claim graphs, provenance trails, contradiction ledgers, prediction records, source-reliability histories, and shared memory. The propagandist wants the news cycle to reset. The synthesizer refuses to forget.
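To make "contradiction ledger" concrete rather than rhetorical, here is a minimal sketch. It assumes claims can already be normalized to a (topic, stance) pair, which in practice is the hard natural-language problem; all names and records below are illustrative and refer to no real system.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Claim:
    source: str    # who asserted it
    audience: str  # which public received it
    topic: str     # what it is about
    stance: str    # the version asserted
    seen: str      # ISO date observed: the provenance trail

@dataclass
class ContradictionLedger:
    claims: list[Claim] = field(default_factory=list)

    def record(self, new: Claim) -> list[Claim]:
        """Store the claim and return every earlier claim it collides
        with: same source and topic, different stance. The comparison
        deliberately runs across audiences, the exact check that
        audience partitioning is designed to prevent."""
        conflicts = [c for c in self.claims
                     if c.source == new.source
                     and c.topic == new.topic
                     and c.stance != new.stance]
        self.claims.append(new)
        return conflicts

ledger = ContradictionLedger()
ledger.record(Claim("ministry", "domestic", "outage",
                    "anti-drone defense", "2025-05-09"))
leaks = ledger.record(Claim("ministry", "foreign", "outage",
                            "routine maintenance", "2025-05-10"))
# leaks now holds the domestic version: the partition has failed,
# and the ledger never resets with the news cycle.
```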
This is why one of the most persuasive techniques in politics is not moral denunciation but rare, verifiable, important information. A source that repeatedly provides undercovered facts becomes valuable. It earns attention by improving the audience's map. The best propaganda often imitates good analysis for exactly this reason: it does not merely tell you what to believe; it gives you something useful to know.
That point matters for current debates over AI and information war. Much expert discussion still treats AI as an oracle or synthetic speaker: a machine that generates false posts, fake personas, or persuasive messages. Those are real capabilities. But the deeper strategic value of AI may lie in mass information synthesis: reading enormous discourse fields, mapping narratives, detecting contradictions, correlating claims with events, tracking source reliability, and helping institutions decide where reality itself is becoming legible.
This also reframes current Russian internet restrictions. Recent reporting describes mobile internet and SMS disruption around Victory Day, Moscow signal restrictions amid drone-security fears, broader Kremlin throttling, VPN suppression, and messaging-app restrictions. The public explanation is often security or anti-drone defense. But the same infrastructure also functions as information control. The line between anti-drone defense, anti-Western influence defense, censorship, and sovereign-internet rehearsal is blurry by design. A serious account of AI propaganda should be able to connect these operational facts to the information war rather than merely repeat moral slogans.
This does not mean AI will save us from propaganda. The track record of the last great information technology should make us cautious. Social media was sold, and often sincerely imagined, as a liberation machine: more speech, more access, more horizontal communication, more democratic visibility. It did deliver those things. But it also became a devastating system of censorship, misinformation, disinformation, behavioral manipulation, algorithmic suppression, and attention capture. The same infrastructure that lowered the cost of expression also lowered the cost of confusion.
AI inherits the same ambiguity. It can lower the cost of propaganda. But it can also lower the cost of anti-propaganda: not counter-messaging, but persistent reality accounting. The open question is which tendency dominates under which institutional conditions.
If AI is monopolized by states, platforms, intelligence services, and bot-farm capital, then it may deepen the social-media catastrophe: personalized manipulation at scale, synthetic consensus, automated harassment, and narrative fog. But if multiple independent synthesis systems exist, if they preserve provenance, compare claims, score track records, and make contradictions durable, then AI can create a more hostile environment for deception than mass media or social media ever did.
The future of propaganda is not only the future of generated lies. It is the future of remembered claims.
A political order that wants truth should not merely ask how to detect deepfakes or label bots. It should ask how to build public memory: systems that track what powerful actors say, what they omit, what they predicted, what they later denied, and how often reality bore them out. The propagandist's advantage has always been speed, repetition, and forgetting. The anti-propagandist's answer is synthesis, comparison, and memory.
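One standard way to turn "how often reality bore them out" into a durable number is a proper scoring rule such as the Brier score. The sketch below assumes each prediction has already been reduced to a probability and a resolved outcome, which is itself nontrivial editorial work; the example records are invented.

```python
def brier_score(record: list[tuple[float, bool]]) -> float:
    """Mean squared error of probabilistic forecasts against outcomes:
    0.0 is perfect, 0.25 matches always saying 50%, 1.0 is perfectly
    wrong. Lower scores mark sources that improve our map of reality."""
    return sum((p - float(hit)) ** 2 for p, hit in record) / len(record)

# Invented track records: a careful source vs. a confident fabulist.
careful  = [(0.8, True), (0.3, False), (0.9, True)]
fabulist = [(0.95, False), (0.9, False), (0.99, True)]
print(round(brier_score(careful), 3))   # 0.047
print(round(brier_score(fabulist), 3))  # 0.571
```

A public, append-only ledger of such scores is exactly the memory the paragraph above calls for.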
AI makes both sides stronger. That is why the question is not whether AI amplifies propaganda. It does. The question is whether AI-enabled synthesis can raise the cost of sustaining deception faster than AI-enabled generation lowers the cost of producing it.
That is the game theory of anti-propaganda.