yusef@mosiah.org

12th May 2026 at 1:17pm

Cognitive Transforms

A thought is not only a claim. It is also a way of transforming other thoughts.

This sounds abstract until you notice how people actually think. One person looks at a startup and asks, “What is the market?” Another asks, “What breaks at scale?” Another asks, “Who benefits?” Another asks, “What would make this beautiful?” Another asks, “What happens if we invert it?” Another asks, “Which historical pattern is this repeating?” Another asks, “What would the incentives do to the weakest participant?”

These are not merely opinions. They are operations.

Charlie Munger’s “invert, always invert” is the clean example. It is not a belief. It is a reusable transformation. Take a plan and invert it. Instead of asking how to succeed, ask how to fail. Instead of asking what must go right, ask what would make the system break. Instead of asking how to become rich, ask how to become poor and then avoid those actions. The mental model takes an artifact and maps it into another artifact: a failure map, a risk surface, a reversed objective.

That is a cognitive transform.

A cognitive transform is a reusable operation over an artifact. It takes a vtext, essay, claim graph, source bundle, codebase, audio clip, plan, product thesis, or public event and produces a structured derivative: summary, critique, objection map, contradiction graph, prior-art search, source genealogy, falsification map, audio route, debate brief, or revised artifact.

It is a mental model made executable.
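A minimal sketch of what "executable mental model" could mean in code. The names (`Artifact`, `Transform`, `invert`) are hypothetical, not a real Choir API; the point is that the operation has a name and an intention, and maps one artifact into a structured derivative.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Artifact:
    """Any canonical object a transform can operate on (here: plain text)."""
    kind: str   # e.g. "plan", "vtext", "failure_map"
    body: str

@dataclass
class Transform:
    """A named, reusable operation: artifact in, structured derivative out."""
    name: str
    intention: str
    apply: Callable[[Artifact], Artifact]

def invert(artifact: Artifact) -> Artifact:
    # Munger-style inversion: reframe each line of a plan as a failure question.
    questions = [f"What would make this fail: {line}?"
                 for line in artifact.body.splitlines() if line.strip()]
    return Artifact(kind="failure_map", body="\n".join(questions))

INVERT = Transform(
    name="invert",
    intention="Map a plan into its failure surface",
    apply=invert,
)

plan = Artifact(kind="plan", body="Grow revenue 10x\nHire a sales team")
failure_map = INVERT.apply(plan)
```

Because the operation is an object rather than a one-off prompt, it can be stored, cited, forked, and applied to any artifact of the right kind.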

This matters because AI has made transforms cheap. In the old world, a mental model lived mostly inside a person. A good consultant, investor, editor, engineer, scholar, or artist could apply their way of seeing to a situation, but the operation was private and embodied. You hired the person, read the book, took the class, or absorbed the tradition.

Now a transform can be encoded. It can become a public object. It can be applied by agents. It can be modified, cited, forked, improved, and rewarded.

This is one of the underappreciated foundations of Choir.

Choir is not merely a place to publish ideas. It is a place to publish ways of thinking.

A normal media platform treats an article as a piece of content. Someone writes it. Others read it. Maybe they comment, share, or forget it. The artifact is mostly static.

In Choir, a vtext is not just something to read. It is something to transform. It can be summarized, critiqued, inverted, sourced, narrated, disputed, compressed, expanded, translated into radio, converted into an appagent, or used as prior work for another artifact. The vtext is the canonical object. Cognitive transforms are how agents and users move through it.

This is where the difference between a prompt and a transform matters.

A prompt is often ad hoc. “Critique this.” “Summarize this.” “Make this better.” “Explain this like I’m five.” It lives in the moment. The result may be useful, but the operation itself usually disappears.

A cognitive transform is a reusable media primitive. It has a name, an intention, a structure, and a quality standard. It can be applied again. It can be improved. It can be cited. It can become part of the platform’s shared intelligence.

“Critique this” becomes a transform when it knows what kind of critique to produce: technical, market, moral, epistemic, aesthetic, historical, political-economy, security, or founder-risk. “Summarize this” becomes a transform when it knows whether the user needs an abstract, a briefing, a claim list, a source map, a radio segment, or a decision memo. “Invert this” becomes a transform when it consistently produces failure modes, omitted assumptions, incentives, and downside paths.
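The difference can be shown concretely. In this sketch, "critique this" becomes a transform the moment the mode is an explicit, checked parameter rather than an implicit vibe. The mode names and rubric strings are illustrative assumptions, not a real platform schema.

```python
from dataclasses import dataclass

# Hypothetical critique modes; a real system would carry richer rubrics
# and quality standards per mode.
CRITIQUE_MODES = {
    "technical": "Where does the implementation break?",
    "market": "Who pays, and why would they stop?",
    "epistemic": "Which claims lack evidence or are unfalsifiable?",
}

@dataclass
class Critique:
    mode: str
    brief: str

def critique(text: str, mode: str) -> Critique:
    """'Critique this' is a transform only once the mode is explicit."""
    if mode not in CRITIQUE_MODES:
        raise ValueError(f"unknown critique mode: {mode!r}")
    return Critique(mode=mode, brief=f"{CRITIQUE_MODES[mode]}\n\n{text}")

c = critique("Our startup sells AI widgets to hospitals.", "market")
```

An ad hoc prompt disappears after one use; this operation can be applied again, improved, and cited, because its intention survives the moment.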

Transforms are projections. They reveal one structure of an artifact without pretending to exhaust it.

A source genealogy projects an artifact into lineage: where did this idea come from, what traditions does it echo, what prior art does it miss? A contradiction map projects the artifact into tensions. A falsification map projects it into vulnerabilities. A radio transform projects it into time: what should be heard first, what can be deferred, where should a human voice enter, what should be repeated?

The artifact remains richer than any transform. That is the point. A good system does not flatten the artifact into one score or one summary. It preserves the artifact and lets many transforms generate many structured views.

This is why cognitive transforms are essential to automatic radio.

A single model answer might be 500 words. A vtext might be 2,000 words. That is not enough for an hour of serious audio. But if the system applies a portfolio of transforms, the artifact becomes a field of possible traversals. Extract the claims. Find the strongest objections. Search prior art. Map the sources. Identify the historical analogies. Generate the opposing frame. Produce a glossary. Find what the author assumes. Locate what was falsified. Compare this to related vtexts. Pull relevant human voice clips. Generate a radio path.
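The portfolio idea can be sketched as an ordered traversal over precomputed transforms. The transform names and canned outputs below are stand-ins: a real system would run retrieval, model calls, and graph lookups, not return fixed strings.

```python
from typing import Callable, Dict, List, Tuple

Transform = Callable[[str], str]

# Illustrative stand-ins for a portfolio of transforms over one artifact.
PORTFOLIO: Dict[str, Transform] = {
    "claims":     lambda t: "[claim list]",
    "objections": lambda t: "[strongest objections]",
    "prior_art":  lambda t: "[prior-art search results]",
    "inversion":  lambda t: "[failure modes and omitted assumptions]",
}

def radio_path(text: str, order: List[str]) -> List[Tuple[str, str]]:
    """Traverse one artifact through an ordered sequence of transforms,
    producing one segment per transform."""
    return [(name, PORTFOLIO[name](text)) for name in order]

segments = radio_path("A vtext about cognitive transforms.",
                      ["claims", "objections", "inversion"])
```

The artifact stays fixed; the path through the portfolio is what varies, which is why one short text can yield many distinct traversals.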

Now one artifact can yield hours of grounded content without becoming slop.

The difference between depth and slop is provenance. Slop expands by hallucinating filler. Cognitive transforms expand by exposing structure that was already latent or by retrieving structure from the surrounding graph. A good transform does not merely add words. It increases the number of valid paths through the material.

Automatic radio becomes compelling when the system has enough precomputed structure to keep unfolding. The user asks a question. The radio begins with the main claim. Then it plays a human clip. Then it explains the prior debate. Then it gives the strongest objection. Then it compares two schools of thought. Then it surfaces a related vtext. Then the user interrupts: “wait, invert that.” The system applies the inversion transform and continues.

The user is not listening to an AI ramble. The user is traversing an artifact graph through transformations.

This also makes cognitive transforms a new kind of intellectual property.

If a person contributes a powerful way of seeing, that contribution should be recognized. A transform can be authored. It can become useful to other users. It can be applied to many artifacts. It can generate better radio, better search, better critique, better writing, better learning, better code review, better public memory. If future work depends on a transform, the protocol should cite and reward it.

That means a user’s IP is not limited to essays, audio, code, or claims. Their IP can be a mental model.

A lawyer might publish a transform for identifying regulatory risk in marketing copy. A literary critic might publish a transform for reading autofiction. A mechanical engineer might publish a transform for diagnosing bad abstractions in CAD workflows. A Nigerian political analyst might publish a transform for reading state capacity and patronage networks. A Confucian thinker might publish a transform for role ethics and ritual breakdown. A dancer might publish a transform for embodied timing and spatial presence. A security engineer might publish a transform for threat modeling agentic workflows.

These are not merely “perspectives” in the shallow pluralist sense. They are operational ways of carving reality.

This is a real diversity and inclusivity feature. Not corporate diversity as demographic ornament. Cognitive diversity as infrastructure. Different cultures, disciplines, religions, professions, classes, and subcultures preserve different transforms. They notice different signals. They ask different questions. They have different failure modes, taboos, ideals, metaphors, and gradients.

A good platform should not collapse those into one universal assistant voice. It should let them become explicit, reusable, citeable, and economically rewarded.

The future of AI is not one model with one view. It is a field of artifacts and transforms. The model supplies generative capacity. The human world supplies ways of seeing. Choir’s job is to preserve, apply, compare, and reward those ways of seeing.

This is why cognitive transforms are media primitives.

The basic unit of future media is not just the post, the episode, the article, the clip, or the comment. It is the artifact plus the transformations that can be applied to it. A vtext becomes more valuable as more high-quality transforms can operate on it. A transform becomes more valuable as it improves more artifacts.

The system compounds both ways.

Better artifacts make transforms more useful.

Better transforms make artifacts more alive.