The Automatic Newspaper
The automatic newspaper is not a chatbot that summarizes the news. It is a living public memory system for events, claims, sources, voices, contradictions, revisions, and track records.
The first mistake is thinking the future of AI media is a better chatbot. The second mistake is thinking it is a better feed. The third mistake is thinking the problem is merely summarization.
The real problem is shared reality. Not in the soft sense that people disagree too much. Disagreement is healthy. The problem is that shared reality is becoming an adversarial information battlefield. Every actor with an incentive to shape perception will use AI to do it: states, campaigns, corporations, brands, hedge funds, intelligence agencies, celebrities, activists, scammers, media companies, think tanks, NGOs, founders, fan armies, and anonymous networks.
The cost of producing persuasive information is collapsing. The cost of knowing what actually happened is not collapsing at the same rate.
That gap is the market.
Today most people experience AI news as a chatbot query: what happened today, summarize this article, explain this war, give me both sides. The model produces a plausible answer. Maybe it searches. Maybe it cites. But the interaction remains private, transient, and weakly accountable. The answer disappears into chat history. The next person asks the same question and receives another private synthesis. The public record does not improve.
That is not a newspaper. That is a private briefing machine.
The automatic newspaper is different. It is a living public memory system for events, claims, sources, voices, contradictions, revisions, and track records. It treats the information world as a graph that must be maintained, not a stream that must be consumed.
A normal newspaper publishes articles. An automatic newspaper maintains public context.
It asks: what happened? who said it? what evidence supports it? who predicted it? who denied it? which sources were early? which sources were wrong? what changed since yesterday? what claims remain unresolved? which prior artifacts matter? who is being cited, contradicted, extended, or quietly ignored? which institutions are laundering uncertainty into confidence? which frames are competing? which claims survived falsification pressure?
The answer is not a single article. The answer is a living object.
The native unit is the vtext: a versioned, citeable, agent-maintained artifact. A vtext can be read like an essay, searched like a dossier, updated like a brief, traversed like a graph, or rendered into audio as automatic radio. It has claims, citations, sources, revisions, provenance, objections, and downstream uses. It is not a post. It is public intellectual objecthood with memory.
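One way to make the vtext concrete is a small data-structure sketch. The field names and the revision mechanism below are illustrative assumptions, not a published schema; the only commitments taken from the text are that a vtext is versioned, citeable, and keeps its claims, sources, revisions, and downstream uses attached.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    citations: list[str]                                  # URLs or vtext IDs supporting the claim
    objections: list[str] = field(default_factory=list)   # recorded pushback
    resolved: bool = False

@dataclass
class Vtext:
    id: str
    title: str
    claims: list[Claim]
    sources: list[str]
    revisions: list[str] = field(default_factory=list)    # IDs of prior versions (provenance chain)
    cited_by: list[str] = field(default_factory=list)     # downstream uses

    def revise(self, new_id: str) -> "Vtext":
        # A revision is a new version that records where it came from,
        # not an in-place overwrite: the old object stays citeable.
        return Vtext(new_id, self.title, list(self.claims), list(self.sources),
                     revisions=self.revisions + [self.id])
```

The design choice worth noticing is that `revise` never mutates: old versions remain addressable, which is what makes corrections and track records auditable later.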
The market looks latent because the old information order has not finished failing. People have adapted to chatbots, coding agents, image generation, voice assistants, synthetic companions, and AI slop in search. They have not yet metabolized universal narrative competition: every public event surrounded by synthetic explanation, counter-explanation, fake evidence, selective clipping, automated outrage, plausible insider analysis, and personalized propaganda.
Until then, the automatic newspaper sounds niche. That is how infrastructure often begins. The people who feel the failure first are not the final market. They are the early warning system.
The AI persona economy will be large, but the AI news economy touches shared reality. Markets, elections, wars, scientific controversies, scandals, regulation, reputations, public companies, universities, courts, churches, militaries, NGOs, and social movements all live inside contested accounts of what is happening and what it means.
The automatic newspaper is the system that maintains those accounts.
It is not neutral in the fake sense. No media system is viewless. The question is whether selection mechanisms are visible, contestable, and grounded in provenance. A good automatic newspaper does not say, “Here is the final truth.” It says: here are the claims, evidence, strongest opposing interpretation, prior art, sources, corrections, failed predictions, surviving uncertainties, and open questions.
That is how public cognition improves: not by abolishing conflict, but by preserving enough structure that conflict becomes informative.
The key mechanism is citation as infrastructure. Agents cite. Algorithms retrieve relevant priors and score them on source quality, novelty, semantic overlap, age, contradiction, downstream use, and the absence of known falsification. They connect new work to old work. They surface buried artifacts that anticipated the frame. They make dependency visible.
Public memory is currently status-mediated. People remember what famous people said and forget what obscure people saw early. They remember the viral clip and forget the careful caveat. AI can lower the cost of recall. Once agents search and cite across a public artifact graph, the past enters the present action space.
This is why the automatic newspaper becomes a track-record machine. Journalists, CEOs, investors, politicians, founders, consultants, comms professionals, pundits, agencies, media brands, and institutions will eventually have public matrices: accuracy, calibration, source quality, prediction record, correction speed, contradiction rate, novelty, downstream influence, falsification survival, dependence on insider status versus public evidence, and timeliness.
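A minimal cell of such a track-record matrix can be computed from a speaker's public predictions. The record format below is an assumption; the metrics (accuracy, calibration via a Brier score, correction speed) follow the list in the text:

```python
from statistics import mean

def track_record(predictions: list[dict]) -> dict:
    """Summarize a speaker's public prediction record.
    Each prediction is assumed to look like:
    {"p": stated probability, "outcome": 0 or 1,
     "days_to_correction": days until any error was corrected, or None}."""
    brier = mean((pr["p"] - pr["outcome"]) ** 2 for pr in predictions)      # calibration: lower is better
    accuracy = mean((pr["p"] > 0.5) == bool(pr["outcome"]) for pr in predictions)
    corrections = [pr["days_to_correction"] for pr in predictions
                   if pr["days_to_correction"] is not None]
    return {
        "accuracy": accuracy,
        "brier": brier,                      # 0.0 = confident and right every time
        "correction_speed_days": mean(corrections) if corrections else None,
        "n": len(predictions),
    }
```

Even this toy version shows why the fog clears: the inputs are all public statements, so nothing here requires the speaker's cooperation.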
Powerful speakers will hate this. They are used to fog. Athletes are statted. Public speakers are not. That asymmetry is historical, not natural. It persists because the record was expensive to maintain. AI makes it cheap.
The automatic newspaper is not only text. Its audio projection is automatic radio. Most people do not want to read a long dossier about every issue, but they will listen while walking, driving, cooking, commuting, cleaning, or recovering from the day. Audio is the natural medium for long traversal. Text should often compress. Audio can unfold.
Automatic radio is not a generated podcast. It is interruptible traversal through a living artifact graph. The user can say: go deeper, give me the source, who disagrees, compare this to the earlier frame, skip the background, return to the main thread, save that, turn this into a vtext. The system continues because the audio is not the memory. The artifact graph is the memory.
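The interruptible traversal above can be modeled as a small command loop over the artifact graph. The command names come from the text; the graph shape and session mechanics are illustrative assumptions:

```python
class RadioSession:
    """Sketch of interruptible traversal: the session holds a cursor into the
    artifact graph, and voice commands move the cursor or query the node.
    Assumed graph shape: {id: {"text", "children", "sources", "dissent"}}."""

    def __init__(self, graph: dict, start: str):
        self.graph = graph
        self.cursor = start
        self.thread = [start]   # the main thread, so "return" is always possible

    def handle(self, command: str) -> str:
        node = self.graph[self.cursor]
        if command == "go deeper" and node.get("children"):
            self.cursor = node["children"][0]          # descend into the first detail node
        elif command == "give me the source":
            return "; ".join(node.get("sources") or ["no sources recorded"])
        elif command == "who disagrees":
            return "; ".join(node.get("dissent") or ["no recorded dissent"])
        elif command == "return to the main thread":
            self.cursor = self.thread[-1]              # pop back to the spine of the story
        return self.graph[self.cursor]["text"]
```

The point the sketch makes is the one in the text: the session can be interrupted and resumed at will precisely because the state lives in the graph, not in the audio stream.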
The economic layer follows. If public thought becomes citeable and reusable, then it can become intellectual property in a new sense: not legal IP based primarily on exclusion, but protocol-native IP based on provenance, contribution, and downstream relevance.
The old feed rewards immediacy, status, outrage, and recency. The automatic newspaper rewards future relevance. A post that matters later should not die because it failed to trend in the hour it was published. A source that aged well should become more valuable. A correction that improved the graph should be rewarded. A dissenter who preserved an important distinction before consensus arrived should not be forgotten.
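The reward inversion described above can be sketched as a scoring rule that weights citations by how long after publication they arrive. The weighting curve and horizon are illustrative choices, not a proposed standard:

```python
def future_relevance(citation_days: list[int], horizon_days: int = 365) -> float:
    """Score an artifact by when its downstream citations arrived.
    `citation_days` holds day-offsets after publication at which the artifact
    was cited. Late citations weigh more, up to a saturation horizon; this
    inverts the feed's recency bias, which rewards only the first hour."""
    return sum(min(day, horizon_days) / horizon_days for day in citation_days)
```

Under this rule a post cited a year later earns far more than one cited the day it trended, which is exactly the incentive shift the text argues for.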
The answer is not to retreat from AI. The answer is to build the public infrastructure AI makes necessary: claims, citations, provenance, track records, agentic search, public memory, automatic radio, and protocol-native IP.
The old newspaper told you what happened.
The automatic newspaper remembers what everything meant.