# Karpathy's LLM Wiki is right. I just didn't want to run it locally.
Andrej Karpathy published a gist called "LLM Wiki" describing a pattern that's been bouncing around my head for a year. Instead of dumping documents into RAG and re-deriving knowledge on every query, you have an LLM agent incrementally maintain a persistent wiki of markdown files. Obsidian on one side, Claude Code on the other. The LLM does the bookkeeping. You do the thinking.
The pattern is right. I built Hjarni because I wanted to live inside it every day, and the local version kept getting in my way.
## What Karpathy gets right
RAG rediscovers knowledge from scratch on every question. A maintained wiki compounds. Cross-references are already there. Contradictions have already been flagged. The synthesis tax is paid once, not on every query.
And the bottleneck isn't reading or thinking. It's bookkeeping. Updating ten pages when one new source arrives. Noticing that an article from today contradicts something you wrote three weeks ago. Humans abandon wikis because maintenance grows faster than value. LLMs don't get bored.
## Where the local setup hurts
I ran the local version for months. Obsidian vault, Claude Code in a terminal, a CLAUDE.md schema, a log file, the whole thing. It works. It also has three problems that compound:
- **One machine, one wiki.** You're at your in-laws, you remember a thing, you want to add it. Tough.
- **One LLM client, one island.** Claude Code can edit the files. ChatGPT can't see them. Your phone's Claude app can't see them. You funnel everything through one tool because it's the only one wired in.
- **Sharing breaks.** You can hand someone a git repo. You can't hand someone a living wiki they can query and add to from their own LLM.
None of these are dealbreakers. But friction is what kills knowledge habits, and three kinds of friction is a lot.
## What Hjarni is
Hjarni is Karpathy's LLM Wiki pattern, hosted and exposed over MCP, so any LLM client can read and write to the same brain.
That's the whole pitch. Notes, containers, tags, links, wiki-style references. All the structure you'd build in Obsidian, in a hosted product that any MCP client can talk to.
Concretely: capture a thought on your phone in the Claude app, refine it later in Claude Code while you're coding, query it next week from Cursor when you need it. Same notes. Same tags. Same links. No syncing.
You don't open a Claude Code session to add a note. You talk to whatever LLM you're already in, and it writes to Hjarni. Pro plan includes seats, so two humans plus their LLMs can work out of the same brain.
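For the curious, wiring a desktop MCP client to a remote server usually comes down to one JSON entry. The sketch below uses the `mcp-remote` bridge pattern that stdio-only clients rely on; the server name, URL, and path are placeholders I made up, not Hjarni's documented config.

```json
{
  "mcpServers": {
    "hjarni": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.hjarni.example/sse"]
    }
  }
}
```

The point isn't this particular snippet; it's that "wired in" is a one-time config edit per client, not a per-note workflow.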
## What you give up vs the local pattern
Honest list:
- No git history. You can update notes safely, but it's not `git log`. If you want branchable, diffable knowledge, run Karpathy's pattern.
- No Obsidian graph view. Hjarni shows links between notes, but the gorgeous force-directed graph is an Obsidian thing. I miss it sometimes.
- No filesystem. Your notes are in a database, not a folder of `.md` files you can grep. For some people that's a hard no. I get it.
- No Dataview, no Marp, no Obsidian plugin ecosystem. You trade a marketplace for a focused product.
If those tradeoffs hurt, Karpathy's setup is genuinely the better choice. I'm not going to pretend otherwise.
## Who should pick which
Run Karpathy's pattern if: you live in a terminal, you love Obsidian, you want git history, and the friction of "only on this laptop" doesn't bother you.
Use Hjarni if: you want your notes everywhere. On your phone, in Claude, in ChatGPT, in Cursor. Without thinking about syncing.
## The part where we agree completely
Karpathy ends the gist with a Vannevar Bush reference that I think about a lot. The memex was always a personal, curated knowledge store with associative trails. The piece Bush couldn't solve was who does the maintenance. The answer turned out to be: not humans.
Whether you build it in markdown files or use Hjarni, the move is the same. Stop dumping documents at LLMs. Start building a brain.
That's the product I wanted for myself, so I built it.
Original gist: LLM Wiki by Andrej Karpathy