YUSEF@MOSIAH.ORG

10th May 2026 at 2:35pm

Making TiddlyWiki Legible to LLMs

Related: sources · notes · metadata · Drafts

Executive summary

The problem is real: the canonical Mosiah.org page is currently a single-file TiddlyWiki application. It is excellent as a human/agentic artifact surface inside a browser, but poor as a direct retrieval surface for LLMs and ordinary command-line extraction. A curl of a specific hash route returns the whole wiki application: megabytes of HTML/JavaScript/tiddler store, not the focused article a reader or model intended to fetch.

The important finding is that this does not require abandoning TiddlyWiki. TiddlyWiki already has a Node.js rendering path for static pages. Its documentation explicitly says TiddlyWiki5 can generate static HTML representations that do not need JavaScript, and its --render command can render individual tiddlers through templates. (Sources: TiddlyWiki static site generation; TiddlyWiki render command.) A local smoke test against the Mosiah wiki-folder rendered individual article pages with no <script> tags.
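
For reference, a single-tiddler render of that kind looks roughly like this (the ./wiki and ./out paths and the tiddler title are illustrative; the command shape and core template follow the TiddlyWiki documentation):

tiddlywiki ./wiki --output ./out \
  --render '[[Articles/chatbots-aint-it]]' article.html \
    text/plain $:/core/templates/static.tiddler.html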

So the architectural answer is not “fork TiddlyWiki first.” The answer is: keep TiddlyWiki as the rich artifact editor/browser, but add a parallel static/semantic publication layer generated from the same tiddlers. The interactive wiki remains the living interface. The static layer becomes the web/LLM/curl interface.

Recommended direction:

  1. Keep index.html as the canonical human TiddlyWiki app.
  2. Generate static per-tiddler article pages at stable URLs such as /articles/chatbots-aint-it/ or /static/Articles%2Fchatbots-aint-it.html.
  3. Generate clean Markdown/JSON mirrors for LLMs, e.g. /llms.txt, /llms-full.txt, /articles/chatbots-aint-it.md, and /tiddlers/Articles/chatbots-aint-it.json.
  4. Add canonical/alternate links between the interactive tiddler and its static representations.
  5. Only consider a TiddlyWiki fork if the static templates, routing, or metadata hooks cannot express the needed semantics cleanly.

In other words: make Mosiah bilingual. Browser-native TiddlyWiki for humans and agents with JS. Static semantic HTML/Markdown/JSON for crawlers, LLMs, search, archival tools, and command-line readers.

The actual impedance mismatch

TiddlyWiki’s single-file design is philosophically aligned with the artifact thesis: a whole knowledge object can travel as one file, preserve its internal graph, run offline, and remain user-owned. That is valuable. The problem is not TiddlyWiki’s ontology. The problem is the URL scheme and the representation it serves.

A hash route like /#Articles%2Fchatbots-aint-it is client-side state. The server does not see the fragment. A non-JS fetcher asking for that URL receives the same root HTML as everyone else. The article may be present somewhere in the embedded store, but it is not the primary document. It is buried inside an application bundle.
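
A quick way to see this from the command line (the URL is illustrative). The fragment after # never leaves the client; curl, like any HTTP client, strips it before sending the request, so both fetches below hit the same resource:

curl -s 'https://mosiah.org/#Articles%2Fchatbots-aint-it' | wc -c
curl -s 'https://mosiah.org/' | wc -c
# identical byte counts: the server never saw the fragment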

For humans, the browser runs the app and opens the tiddler. For LLM fetchers, simple crawlers, curl, search snippets, and many archival tools, the page is effectively “a giant JavaScript application containing lots of text.” That is exactly the wrong shape for source-grounded AI. A model wants the object: title, author/date fields, body, citations, links, backlinks, tags, related artifacts, and maybe a concise machine-readable graph. It does not want to reverse-engineer a client app.

This creates a perverse situation: Mosiah is built to preserve artifacts, but the public web surface hides the artifact behind the artifact browser.

What the research found

TiddlyWiki has three relevant affordances already:

  • a Node.js wiki-folder mode, which Mosiah already uses, in which tiddlers live as individual files rather than inside one HTML bundle
  • a documented static site generation path that emits HTML pages needing no JavaScript
  • a --render command that renders individual tiddlers through templates, selected by filters

A local smoke test rendered three Mosiah article tiddlers with the core static tiddler template. The resulting files were roughly 13–17 KB each and had no <script> tags. By contrast, a direct fetch of the live single-file wiki returned roughly 16.9 MB and included the whole app; the target phrase from the article appeared deep inside the response rather than as the focused page body.

That is the curve-plateau finding: the path of least resistance is not a new extractor, not an LLM-specific scraper, and not immediately a fork. It is a first-class static export layer.

Architecture: dual-surface Mosiah

The clean design is a dual-surface site generated from one tiddler source of truth.

1. Interactive surface

The existing single-file TiddlyWiki remains:

  • rich browsing
  • backlinks/graph affordances
  • dynamic filters
  • editing/export affordances
  • local-first portability
  • artifact-native feel

This is the “living wiki” surface.

2. Static semantic surface

Every public article gets a focused static page:

/articles/<slug>/

or, if we want minimal implementation first:

/static/Articles%2F<slug>.html

Each page should be plain HTML with no application bundle. It should include:

  • <title> from the article caption/title
  • canonical URL for the static page
  • alternate link back to the interactive TiddlyWiki hash route
  • article body rendered server-side
  • source/notes/metadata related links
  • tags
  • publication/draft status, if public draft pages remain indexable
  • source/citation links as normal anchors
  • optional JSON-LD or embedded machine JSON describing the tiddler fields and related tiddlers

The key is that an LLM fetcher can retrieve one URL and get one object.
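
A minimal sketch of the head of such a page, assuming the clean-URL scheme above (the slug, field values, and JSON-LD shape are all illustrative):

<title>Chatbots Ain’t It</title>
<link rel="canonical" href="https://mosiah.org/articles/chatbots-aint-it/">
<link rel="alternate" type="text/html"
      href="https://mosiah.org/#Articles%2Fchatbots-aint-it"
      title="Interactive TiddlyWiki view">
<link rel="alternate" type="text/markdown"
      href="https://mosiah.org/articles/chatbots-aint-it.md">
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Article",
 "headline": "Chatbots Ain’t It", "keywords": ["article", "ai"]}
</script>

Note that a JSON-LD block is inert data inside a script wrapper, not executable application code, so it does not conflict with the no-JavaScript goal.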

3. Markdown and JSON mirrors

For LLM legibility, static HTML is good; Markdown and JSON are better.

Add predictable machine surfaces:

/articles/<slug>.md
/tiddlers/Articles/<slug>.json
/llms.txt
/llms-full.txt
/sitemap.xml

llms.txt should be a compact map of the site: published pieces, important drafts if desired, source indexes, and how to fetch Markdown/JSON. llms-full.txt can concatenate selected article Markdown for models with large context windows. Per-article Markdown is the simplest high-fidelity unit for agent ingestion.
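
A sketch of what llms.txt could look like, following the emerging llms.txt convention of an H1 name, a blockquote summary, and sectioned link lists (titles, summary text, and paths are illustrative):

# Mosiah.org

> Artifact-first writing on AI, interfaces, and knowledge tools.

## Articles

- [Chatbots Ain’t It](https://mosiah.org/articles/chatbots-aint-it.md): why the unit of retrieval should be the artifact, not the application

## Machine surfaces

- Per-article Markdown: /articles/<slug>.md
- Per-article JSON: /tiddlers/Articles/<slug>.json
- Full corpus: /llms-full.txt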

The JSON form should not be just raw TiddlyWiki internals. It should be a stable public schema:

{
  "title": "Articles/chatbots-aint-it",
  "caption": "Chatbots Ain’t It",
  "status": "draft",
  "tags": ["article", "draft", "ai"],
  "body_markdown": "...",
  "sources": [
    {"title": "...", "url": "...", "tiddler": "..."}
  ],
  "related": {
    "interactive": "https://mosiah.org/#Articles%2Fchatbots-aint-it",
    "html": "https://mosiah.org/articles/chatbots-aint-it/",
    "markdown": "https://mosiah.org/articles/chatbots-aint-it.md"
  }
}

This makes the site legible not only to LLMs, but to future Choir agents.

Where TiddlyWiki should and should not be forked

A fork is probably not the first move.

Reasons not to fork first:

  • TiddlyWiki already supports Node rendering and static generation.
  • Mosiah already has decomposed tiddlers in a wiki-folder.
  • A custom publication script can generate semantic static pages without touching core.
  • Forking core increases maintenance burden and may make plugin/theme compatibility worse.

Reasons a fork might later be justified:

  • We want first-class canonical URLs instead of hash routes.
  • We want TiddlyWiki itself to emit semantic static artifacts with stable article/source schemas.
  • We want a better server-side router that maps /articles/foo/ to the same tiddler as #Articles/foo.
  • We want crawler-aware metadata, JSON-LD, and static backlinks as core publication primitives.
  • We want TiddlyWiki to become not merely a single-file app but an artifact compiler.

The product-level fork would not be “TiddlyWiki but with AI.” It would be “TiddlyWiki as an artifact compiler”: same tiddler graph, multiple representations, explicit provenance, static semantic exports, and agent-readable indexes.

Implementation options

Option A — minimal static companion export

Add a render step that writes static HTML files for all public articles.

Pros:

  • Fastest.
  • Uses TiddlyWiki’s own renderer.
  • No fork.
  • Immediately curlable and model-readable.

Cons:

  • Static template may need customization for Mosiah styling and source/notes/metadata rows.
  • Filenames may initially be ugly if URL-encoded tiddler titles are used.
  • Dynamic widgets may not always render as expected outside the full app.

This is the recommended first experiment.

Option B — semantic Mosiah exporter

Write a small Mosiah-specific exporter over the wiki-folder. It parses .tid files directly, selects public article/source/notes/metadata tiddlers, and emits static HTML, Markdown, JSON, sitemap, and llms files.
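
A minimal sketch of the exporter’s core, assuming the standard .tid layout (field: value header lines, a blank line, then the body). Every path and helper name here is hypothetical, and body rendering is deliberately deferred, per the cons below:

// Sketch: select public article tiddlers from a wiki-folder, emit JSON.
import { mkdirSync, readFileSync, readdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

interface Tiddler { fields: Record<string, string>; body: string; }

// A .tid file is "name: value" header lines, a blank line, then the body.
function parseTid(path: string): Tiddler {
  const text = readFileSync(path, "utf8");
  const sep = text.indexOf("\n\n");
  const header = sep === -1 ? text : text.slice(0, sep);
  const body = sep === -1 ? "" : text.slice(sep + 2);
  const fields: Record<string, string> = {};
  for (const line of header.split("\n")) {
    const colon = line.indexOf(": ");
    if (colon > 0) fields[line.slice(0, colon)] = line.slice(colon + 2);
  }
  return { fields, body };
}

// Publishing policy: tagged "article", never tagged "private".
function isPublicArticle(t: Tiddler): boolean {
  const tags = t.fields["tags"] ?? "";
  return /\barticle\b/.test(tags) && !/\bprivate\b/.test(tags);
}

mkdirSync("./public/tiddlers", { recursive: true });
for (const name of readdirSync("./wiki/tiddlers")) {
  if (!name.endsWith(".tid")) continue;
  const t = parseTid(join("./wiki/tiddlers", name));
  if (!isPublicArticle(t)) continue;
  const slug = (t.fields["title"] ?? "").replace(/^Articles\//, "");
  writeFileSync(join("./public/tiddlers", slug + ".json"), JSON.stringify({
    title: t.fields["title"],
    caption: t.fields["caption"],
    tags: t.fields["tags"],
    body_wikitext: t.body, // raw; render to HTML/Markdown in a later pass
  }, null, 2));
}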

Pros:

  • Full control over LLM-oriented schema.
  • Can avoid TiddlyWiki app baggage entirely.
  • Can enforce publishing policy: draft vs published, private fields, citation graph, clean URLs.
  • Easier to integrate with Choir later.

Cons:

  • Must either render TiddlyWiki markup correctly or delegate body rendering back to TiddlyWiki/Pandoc.
  • More custom code.
  • Risk of drift from browser rendering.

This is likely the best medium-term direction: TiddlyWiki for interactive authoring, Mosiah exporter for semantic publication.

Option C — static-first TiddlyWiki publication template

Customize TiddlyWiki’s static templates so the official render pipeline produces Mosiah-shaped static pages.

Pros:

  • Stays close to TiddlyWiki’s rendering semantics.
  • Can preserve internal links/backlinks well.
  • Less custom parser work.

Cons:

  • TiddlyWiki templates can become arcane.
  • LLM-specific Markdown/JSON still needs custom export.
  • Clean URL routing may still require post-processing.

This is a good bridge between A and B.

Option D — server-side TiddlyWiki or edge rendering

Run a server that maps tiddler URLs to rendered pages on demand.

Pros:

  • Dynamic, no prebuild step.
  • Can serve different representations by content negotiation.
  • Could support edit/auth paths later.

Cons:

  • Mosiah currently benefits from static GitLab Pages simplicity.
  • Server operations add failure modes and cost.
  • The goal is durable publication, not another hosted app dependency.

This is not the default direction unless static generation proves insufficient.

Option E — fork TiddlyWiki

Fork core to make multi-representation artifact publication a first-class primitive.

Pros:

  • Cleanest long-term ontology if Mosiah/Choir becomes a platform.
  • Could make TiddlyWiki genuinely agent-native.
  • Could upstream some improvements if general.

Cons:

  • Highest maintenance cost.
  • Slower path to LLM legibility.
  • Easy to get trapped in framework work before validating the publication shape.

Fork only after the static export requirements are clear.

Recommended plan

Phase 1: prove static companion pages

Generate static HTML for article tiddlers only:

filter: [tag[article]]
template: static tiddler template or Mosiah-custom static article template
output: /static/<encoded-title>.html or /articles/<slug>/index.html
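
A plausible invocation of that spec, assuming the wiki-folder lives at ./wiki and reusing the documented static site generation recipe (the addprefix step shapes the /static output path):

tiddlywiki ./wiki --output ./public \
  --render '[tag[article]]' '[encodeuricomponent[]addprefix[static/]addsuffix[.html]]' \
    text/plain $:/core/templates/static.tiddler.html \
  --render '[[$:/core/templates/static.template.css]]' '[[static/static.css]]' text/plain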

Acceptance criteria:

  • curl of a static article returns under ~50 KB, not the whole wiki bundle.
  • no <script> tags required for article readability.
  • body text, source links, notes/metadata links, tags, and title are present.
  • internal links point either to static pages when available or to interactive fallback URLs.
  • privacy scan passes.
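
These criteria are easy to script; a sketch of the first two checks, assuming the /static URL shape from above:

url='https://mosiah.org/static/Articles%2Fchatbots-aint-it.html'
bytes=$(curl -s "$url" | wc -c)
test "$bytes" -lt 51200 && echo "size ok: $bytes bytes"
# allow type="application/ld+json" here if JSON-LD is embedded
curl -s "$url" | grep -q '<script src=' || echo "no executable scripts"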

Phase 2: add Markdown/JSON/llms exports

Emit:

  • per-article .md
  • per-article .json
  • llms.txt
  • llms-full.txt
  • sitemap.xml

Acceptance criteria:

  • an LLM can ingest a single article without browser rendering.
  • source graph is visible without needing TiddlyWiki runtime.
  • llms.txt explains the available representations.

Phase 3: canonical URL policy

Choose whether the canonical public URL for articles should remain the TiddlyWiki hash route or move to static clean URLs.

A likely compromise:

  • canonical for humans: static clean URL
  • interactive alternate: TiddlyWiki hash route
  • editing/live graph: TiddlyWiki
  • machine ingestion: Markdown/JSON

This would make Mosiah feel less like “a wiki app at a domain” and more like a real publication whose interactive wiki is an enhanced layer.

Phase 4: decide on fork

Only fork if static templates and Mosiah exporter cannot express the desired architecture. The fork question should be answered after the exported shape stabilizes.

The deeper frame

This is exactly the same argument as “chatbots ain’t it,” applied to TiddlyWiki itself.

A TiddlyWiki single-file app is a wonderful artifact browser. But for the open web and for LLMs, the unit of retrieval has to be the artifact, not the application. If the public representation forces every reader to download the whole machine to access one piece, the site has reproduced the chatbot problem in web form: the interface has swallowed the object.

The fix is not to abandon the artifact layer. It is to let each artifact have multiple honest bodies:

  • interactive body for humans in the wiki
  • static HTML body for the web
  • Markdown body for language models
  • JSON body for agents and provenance systems
  • graph/index body for traversal

That is the architectural north star: one tiddler graph, many representations, each optimized for a different kind of reader.

Article Notes/tiddlywiki-llm-legibility
Article Sources/tiddlywiki-llm-legibility