The problem
Every conversation with Claude or ChatGPT starts from scratch. You explain your project structure. Your naming conventions. Why you chose Postgres over MySQL. That the auth layer uses a custom middleware. Then you do it again tomorrow.
How developers use Hjarni
Write down what your AI needs to know. Architecture decisions. API conventions. Deployment notes. Debugging playbooks. Connect Hjarni as an MCP server and your AI reads those notes at the start of every conversation.
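Connecting usually means registering the server with your MCP client. A sketch of a Claude Desktop `claude_desktop_config.json` entry, assuming a hypothetical `hjarni-mcp` package name (the real command, package, or connection URL may differ; check the setup docs):

```json
{
  "mcpServers": {
    "hjarni": {
      "command": "npx",
      "args": ["-y", "hjarni-mcp"]
    }
  }
}
```

Once registered, the client starts the server at launch and your notes are available in every conversation.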
A typical developer setup
- Architecture folder — stack overview, service boundaries, database schema notes
- Conventions folder — naming, error handling, test patterns, PR guidelines
- Debugging folder — known issues, past incidents, environment quirks
- AI instructions on each folder — "When asked about deployments, check the runbook first"
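A folder instruction could look like the following. The filename and wiki-link targets here are invented for illustration; the point is that instructions live next to the notes they govern, as plain Markdown:

```markdown
<!-- debugging/_instructions (hypothetical filename) -->
When asked about deployments, check [[deploy-runbook]] first.
For payment errors, start from [[stripe-incident-2024-03]] before proposing fixes.
Never suggest retry changes without reading [[sidekiq-retry-policy]].
```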
A concrete workflow
You're debugging a production issue. You ask Claude about the retry logic in your payment service. Claude already knows you use Sidekiq with exponential backoff. That retries are capped at 5. That the payment service wraps Stripe. It doesn't guess. It reads your notes.
After the fix, you save what you learned. Next conversation, it's already there.
Why not just use a README or wiki?
READMEs are for humans. Your AI doesn't read your GitHub wiki when you open a new chat. Hjarni is a note system that both you and your AI can use. Folder-level instructions shape how the AI behaves in different contexts.
Your AI forgets everything between sessions. Your notes don't have to.
What you get
- Your AI reads your notes — via MCP, no copy-paste
- Folder-level AI instructions — different behavior for different projects
- Markdown notes — plain text, full-text search, wiki-links
- REST API — integrate with your own tools and scripts
- Team knowledge base — shared conventions across your engineering team
- Data export — everything as Markdown, anytime
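The REST API means your own scripts can write notes too, for example saving a postmortem straight from a CLI tool. A minimal sketch using only the standard library, assuming a hypothetical `/notes` endpoint and bearer-token auth; Hjarni's actual routes, fields, and auth scheme may differ:

```python
import json
import urllib.request

# Hypothetical base URL -- substitute your real Hjarni instance.
BASE_URL = "https://app.hjarni.example/api"

def build_create_note(token: str, folder: str, title: str, body: str) -> urllib.request.Request:
    """Build a POST request that creates a Markdown note (sketch only)."""
    payload = json.dumps({"folder": folder, "title": title, "body": body}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/notes",
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_create_note(
    "YOUR_TOKEN", "debugging", "Payment retries", "Sidekiq caps retries at 5."
)
# Send with urllib.request.urlopen(req) against a live server.
print(req.get_method())
```

The same pattern works for reading notes back, so a post-incident script can both query the debugging folder and append what it learned.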