
LLM wiki vs. plain Markdown: when Karpathy's gist stops working

A plain Markdown file works.

Karpathy's gist proved it. Write down your stack. Your preferences. What you're building. The simplest version of an LLM wiki is one Markdown file full of persistent context. Paste it into Claude or ChatGPT at the start of a conversation. The model picks up where you left off.
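A minimal version of that file might look something like this. Every detail below is illustrative, not a template: your stack, preferences, and projects go where these placeholders are.

```markdown
# Context for AI assistants

## Stack
- Ruby on Rails, PostgreSQL
- Prefer Minitest over RSpec

## Preferences
- Terse answers; code before explanation
- Plain prose, no bullet-point essays

## Current project
- Invoicing app for freelancers; MVP targets solo users
```

Paste the whole thing at the top of a new conversation and the model has everything it needs.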

For a few hundred words, a Markdown file is completely fine.

The pattern breaks down later. Not immediately. Later.

Here is exactly where.

When plain Markdown is still enough

You do not need anything else on day one.

If your context is short, changes rarely, and only needs to be read by one AI client, a Markdown file is enough. The switch only matters when your context becomes something you maintain, reuse, search, and grow.

Start with the file. Move when it starts feeling like work.

When it stops working: the file grows past 500 words

You start with your stack. Then you add your preferences. Then notes from a project. Then a summary of something you learned. Then your writing style. Then your current goals.

Three months in, you have 1,400 words of context. You paste all of it into every conversation whether it is relevant or not. Claude spends tokens reading about your travel preferences when you are trying to debug a Rails query.

A plain file has no structure you can selectively share. It is all or nothing.

When it stops working: your AI cannot write back

This is the most important limitation. In the simplest setup, Claude or ChatGPT can read your Markdown file, but neither can reliably update, organize, or maintain it for you.

You finish a research session. Your AI synthesized three sources, made a recommendation, and explained the tradeoffs. Useful information. To save it, you copy the relevant parts, open the file, find the right section, paste them in, and reformat.

Every time.

The LLM wiki pattern was supposed to reduce re-explaining. But if you are the one doing the bookkeeping, you are still doing half the work manually.

When it stops working: you use more than one AI client

A plain Markdown file lives on your laptop. Claude Desktop reads it through a system prompt. ChatGPT does not. Cursor does not. Claude on your phone does not.

Your context is tied to one machine and one client.

If you switch tools, or use different clients for different tasks, you are back to pasting. Or maintaining multiple versions of the same file, which is worse.

When it stops working: search does not exist

At 200 words, you remember what is in the file. At 1,000 words, you scroll to find things. At 2,000 words, you have no idea what is in there anymore.

There is no AI-native search, no scoped retrieval, no tags the assistant can reliably use, and no durable links between pieces of context. The file either stays small and simple, or it becomes a document you no longer trust.

When it stops working: one set of instructions for everything

Your coding context is not your writing context. Your personal projects are not your work projects. Your travel notes have nothing to do with your research notes.

A single Markdown file gives your AI one instruction set for everything. There is no way to say: when I am working on the Rails app, use these conventions. When I am writing, use this tone.

You end up with a compromise document that is slightly wrong for every context.

The pattern is not wrong. The file is.

Everything Karpathy described is correct. Give your AI persistent context. Write it down once. Let Claude or ChatGPT use it.

The issue is not the idea. The issue is using a flat file as the implementation.

A flat file is read-only, single-client, unsearchable, and has no structure. It works until it does not.

What the next step looks like

The next step is not necessarily a big system. It is just a place where context can be stored, searched, scoped, and updated by the AI itself.

That is what a knowledge base with a built-in MCP server gives you.

Write-back. Claude and ChatGPT save notes directly into your knowledge base. You stop doing the bookkeeping manually.

Structure. Folders, tags, linked notes. Share only the context that is relevant to the current task.

Everywhere. Claude, ChatGPT, Cursor, your phone. One knowledge base. Every client that supports MCP can work from the same context.

Search. Find anything across your entire context. Your AI can search your notes too.

Per-folder instructions. Tell your AI to follow different rules in different parts of your knowledge base. Coding conventions in your dev folder. Writing tone in your content folder.
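For a sense of what "connect" means in practice: MCP-capable clients such as Claude Desktop read server entries from a JSON config file (`claude_desktop_config.json`), under a documented `mcpServers` key. The server name and command below are hypothetical placeholders, not a real package. A sketch:

```json
{
  "mcpServers": {
    "knowledge-base": {
      "command": "npx",
      "args": ["-y", "example-knowledge-base-mcp"]
    }
  }
}
```

Once the client restarts with an entry like this, the server's search, read, and write tools show up in every conversation without pasting anything.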

Hjarni is a knowledge base built around this. It ships with a built-in MCP server. Claude and ChatGPT connect in five minutes.

If you want the why behind the pattern, read Karpathy's LLM wiki is right. If you are ready to build one, read How to build an LLM wiki with Claude or ChatGPT and MCP.


When your Markdown file starts turning into a second brain, move it to Hjarni.

Set up Hjarni in five minutes.

Common questions

When should I move from a Markdown file to an LLM wiki?

Move when the file passes about 500 words, when you need it on more than one machine or in more than one AI client, or when you find yourself doing the bookkeeping by hand. Below those thresholds, a plain Markdown file is genuinely enough.

Can a plain Markdown file work as a long-term LLM wiki?

For a few hundred words of stable context, yes. The pattern breaks down when the file grows, when you want Claude or ChatGPT to write back into it, when you switch clients, or when you need search and structure. At that point you want a knowledge base with a built-in MCP server, not a file.

How is Karpathy's LLM wiki pattern different from a single Markdown file?

Karpathy's pattern is the idea: persistent context an LLM maintains for you. A single Markdown file is one implementation of that idea, and the simplest one. It is read-only from the LLM's side, lives on one machine, has no search, and gives the AI one instruction set for everything. The pattern is right. The flat file is the part that stops working.

What do I lose by switching from a Markdown file to a hosted LLM wiki?

You lose git history, the local filesystem, and any Markdown-specific plugin ecosystem. You gain write-back from your AI, structure (folders, tags, links), search, per-folder instructions, and access from every MCP-capable client. For most people the trade is worth it once the file becomes work to maintain.

Write once. You both remember.

Free to start. No credit card required.

Give your AI a memory

Works with Claude and ChatGPT today. Gemini coming soon.