Automatic Computer, Automatic Newspaper, Automatic Radio
Private workspace. Public record. Audio stream. One system.
Choir has three names because it has three projections.
The automatic computer is the private workspace.
The automatic newspaper is the public artifact graph.
The automatic radio is the audio traversal.
These are not three unrelated products. They are three surfaces over the same substrate: a system where humans and agents work on durable intellectual artifacts.
The automatic computer is the deepest layer. It is where a user can think, write, research, code, build, revise, and operate agents. It is not a chatbot and not a normal app. It is a persistent environment where state matters. Agents can work in the background. Artifacts can evolve. Apps can be built and modified. Work can be done in disposable execution environments. The user does not manage “many agents” as personalities. The user works with objects: documents, source bundles, claim graphs, workflows, code, audio, citations, appagents, and vtexts.
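The object vocabulary above can be made concrete with a small sketch. Everything here is hypothetical, not a published Choir schema; it only illustrates the idea that the user works with typed, addressable objects rather than chat turns:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: workspace objects as typed, addressable records.
# The kinds mirror the list in the text; none of these names come from a real API.
ARTIFACT_KINDS = {
    "document", "source_bundle", "claim_graph", "workflow",
    "code", "audio", "citation", "appagent", "vtext",
}

@dataclass
class Artifact:
    id: str
    kind: str                 # one of ARTIFACT_KINDS
    author: str               # the human or agent that produced it
    body: str                 # content, or a pointer to it
    tags: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        if self.kind not in ARTIFACT_KINDS:
            raise ValueError(f"unknown artifact kind: {self.kind}")

note = Artifact(id="a1", kind="vtext", author="user:ada", body="First thought.")
```

The point of the sketch is only that each object carries identity, type, and authorship from the moment it exists, which is what lets agents and humans hand the same object back and forth.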
Serious AI work cannot live in a chat log. A chat thread is a record of conversation. A workspace is a field of objects. Long-running, multi-agent, artifact-rich work needs state outside any one model’s context window. It needs memory, versioning, provenance, rollback, search, and publication.
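The versioning, provenance, and rollback requirements can be sketched in a few lines. This is a minimal illustration under assumed names, not a real implementation:

```python
# Hypothetical sketch: a minimal versioned artifact with provenance and rollback.
# It only illustrates the claim that artifact state must live outside any
# single model's context window, as durable, attributable history.
class VersionedArtifact:
    def __init__(self, artifact_id: str, body: str, author: str):
        self.artifact_id = artifact_id
        # Every version records who produced it, so provenance survives edits.
        self.history = [{"version": 1, "body": body, "author": author}]

    @property
    def current(self) -> dict:
        return self.history[-1]

    def revise(self, body: str, author: str) -> int:
        version = self.current["version"] + 1
        self.history.append({"version": version, "body": body, "author": author})
        return version

    def rollback(self, version: int) -> None:
        # Rollback is itself a new version, so history is never destroyed.
        past = next(v for v in self.history if v["version"] == version)
        self.revise(past["body"], author="system:rollback")

doc = VersionedArtifact("a1", "Draft one.", author="user:ada")
doc.revise("Draft two, tightened.", author="agent:editor")
doc.rollback(1)
```

Note the design choice the sketch encodes: rolling back appends rather than truncates, because a public memory layer cannot afford to forget that a revision ever happened.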
But the automatic computer is not the most legible consumer surface. Most people are not ready to operate a long-running agentic workspace. They do not yet give agents forty-five-minute, eight-hour, or day-long leashes.
The automatic newspaper is the public projection. It takes the artifacts produced in private workspaces and allows selected ones to enter a shared discourse graph. A vtext can be published. Other users can read it, respond to it, cite it, fork it, challenge it, or extend it. Agents can retrieve it as prior work. The system can preserve where ideas came from and how they changed.
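The verbs in that paragraph — respond, cite, fork, challenge, extend — suggest a graph of typed edges between published artifacts. A minimal sketch, with all names assumed for illustration:

```python
# Hypothetical sketch: the shared discourse graph as typed edges between
# published artifacts. The edge kinds mirror the verbs in the text.
EDGE_KINDS = {"responds_to", "cites", "forks", "challenges", "extends"}

class DiscourseGraph:
    def __init__(self):
        self.edges: list[tuple[str, str, str]] = []  # (source, kind, target)

    def link(self, src: str, kind: str, dst: str) -> None:
        if kind not in EDGE_KINDS:
            raise ValueError(f"unknown edge kind: {kind}")
        self.edges.append((src, kind, dst))

    def incoming(self, artifact_id: str) -> list[tuple[str, str, str]]:
        # Everything that cites, forks, or argues with this artifact:
        # the raw material of provenance and track record.
        return [e for e in self.edges if e[2] == artifact_id]

graph = DiscourseGraph()
graph.link("vtext:b", "cites", "vtext:a")
graph.link("vtext:c", "challenges", "vtext:a")
```

Because edges are typed, the system can later distinguish a claim that accumulated citations from one that accumulated challenges, which is exactly the distinction a track record needs.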
The automatic newspaper is not a feed. A feed is optimized for recency, engagement, status, and compulsion. The automatic newspaper is optimized for provenance, citation, track record, disagreement, and future relevance. It asks: what should be remembered? What was said before? Which sources mattered? Which claims survived? Which people were early? Which frames were corrected? Which artifacts became useful later?
That is the public memory layer.
The automatic radio is the embodied consumption layer.
People can listen while walking, driving, cooking, cleaning, commuting, or resting. The system can traverse the artifact graph as audio. It can explain a topic, compare perspectives, play real human clips, cite prior work, summarize a live debate, or brief the user on a background agent’s progress. The user can interrupt at any time: go deeper, skip, clarify, ask for a source, disagree, return to the main thread, save this, publish that.
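The interruption model described above amounts to a control loop over a segment queue. A minimal sketch, with hypothetical command names, assuming a "go deeper" detour parks the main thread and "return" restores it:

```python
from collections import deque
from typing import Optional

# Hypothetical sketch: an interruptible audio traversal. Segments play from a
# plan; a user command can reshape the plan mid-stream without losing it.
class RadioSession:
    def __init__(self, plan: list[str]):
        self.queue = deque(plan)        # upcoming segments, in order
        self.detour: list[str] = []     # saved main thread during a detour

    def next_segment(self) -> Optional[str]:
        return self.queue.popleft() if self.queue else None

    def interrupt(self, command: str, topic: str = "") -> None:
        if command == "skip":
            if self.queue:
                self.queue.popleft()
        elif command == "go_deeper":
            # Park the main thread and traverse into the detail.
            self.detour = list(self.queue)
            self.queue = deque([f"deep-dive:{topic}"])
        elif command == "return":
            # Restore the main thread exactly where it was left.
            self.queue = deque(self.detour)
            self.detour = []

session = RadioSession(["intro", "claim-1", "objection-1", "wrap-up"])
session.next_segment()                      # plays "intro"
session.interrupt("go_deeper", "claim-1")   # detour into the claim
```

The key property the sketch shows is that interruption never discards the main thread; going deeper and returning are both cheap queue operations.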
This is not a fake podcast, voice companion, or generated audio summary pasted over the web.
It is an interruptible audio interface to a living body of public and private artifacts.
The automatic radio solves a basic UX inversion in current AI. Text systems often generate too much text for how people actually read. Voice systems often generate too little speech for how people actually listen. People will not read a giant answer for an hour, but they will listen to a good podcast for an hour. Audio should unfold. Text should compress.
Because the radio is backed by an artifact graph, it can keep going without becoming empty. There is always another source, prior, objection, human voice, claim, track record, or related artifact to traverse. While the user listens to already-computed material, background agents can do deeper work. Audio runway buys cognition time.
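"Audio runway buys cognition time" can be stated as a simple scheduling rule: a background agent may take on a slow task only if the listener has enough already-computed audio queued to cover the wait. A sketch with illustrative numbers, not measurements:

```python
# Hypothetical sketch of "audio runway buys cognition time": while the user
# listens to already-computed segments, background agents fill the queue.
def runway_seconds(ready_segments: list[dict]) -> float:
    """Seconds of playable audio already computed and queued."""
    return sum(seg["duration_s"] for seg in ready_segments)

def can_start_deep_task(ready_segments: list[dict], task_estimate_s: float) -> bool:
    # Take a slow, deep task only if queued audio covers the expected wait,
    # so the stream never goes silent while the agent thinks.
    return runway_seconds(ready_segments) >= task_estimate_s

queue = [
    {"title": "prior work",  "duration_s": 240.0},
    {"title": "human clip",  "duration_s": 90.0},
]
can_start_deep_task(queue, task_estimate_s=300.0)  # 330s of runway covers it
```

The rule is deliberately conservative: deeper work is gated on listening time already banked, which is the inversion the text describes.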
The three surfaces form one loop.
The automatic computer lets users produce artifacts.
The automatic newspaper lets artifacts enter public memory.
The automatic radio lets people traverse that memory in embodied time.
A user might dictate a thought during a walk. Choir transcribes it, turns it into a vtext, connects it to prior work, and lets the user revise it later in the automatic computer. If published, the vtext enters the automatic newspaper. Later, another user’s automatic radio may retrieve it and play the original human voice if the author actually said those words. If the idea becomes useful, it gets cited and rewarded.
That is the same substrate moving through different media.
Private workspace.
Public record.
Audio stream.
One system.