
llm-wiki

LLM-powered knowledge base from your Claude Code, Codex CLI, Copilot, Cursor & Gemini sessions. Karpathy's LLM Wiki pattern — implemented and shipped.


llmwiki

LLM-powered knowledge base from your Claude Code, Codex CLI, Cursor, Gemini CLI, and Obsidian sessions. Built on Andrej Karpathy's LLM Wiki pattern.

👉 Live demo: pratiyush.github.io/llm-wiki

The demo site is rebuilt on every push to master from the synthetic sessions in examples/demo-sessions/. It contains no personal data, yet shows every feature of the real tool (activity heatmap, tool charts, token usage, model info cards, vs-comparisons, project topics) running against safe reference data.

License: MIT · Python 3.9+ · Version · Tests · CI · Link check · Wiki checks · Docker · Works with: Claude Code, Codex CLI, Copilot, Cursor, Gemini CLI, Obsidian


Every Claude Code, Codex CLI, Copilot, Cursor, and Gemini CLI session writes a full transcript to disk. You already have hundreds of them and never look at them again.

llmwiki turns that dormant history into a beautiful, searchable, interlinked knowledge base — locally, in two commands. Plus, it produces AI-consumable exports (llms.txt, llms-full.txt, JSON-LD graph, per-page .txt + .json siblings) so other AI agents can query your wiki directly.

./setup.sh                         # one-time install
./build.sh && ./serve.sh           # build + serve at http://127.0.0.1:8765

llm-wiki demo

Contributing in one line: read CONTRIBUTING.md, keep PRs focused (one concern each), use feat: / fix: / docs: / chore: / test: commit prefixes, never commit real session data (raw/ is gitignored), no new runtime deps. CI must be green to merge.

Screenshots

All screenshots below are from the public demo site which is built on every master push from the dummy example sessions. Your own wiki will look identical β€” just with your real work.

Home — projects overview with activity heatmap

llmwiki home page — LLM Wiki header, activity heatmap, and a grid of three demo projects (demo-blog-engine, demo-ml-pipeline, demo-todo-api)

All sessions — filterable table across every project

llmwiki sessions index — activity timeline above a table of eight demo sessions with project, model, date, message count, and tool-call columns

Session detail — full conversation + tool calls

llmwiki session detail — Rust blog engine scaffolding session showing summary, breadcrumbs, a TOML Cargo.toml block and a Rust main.rs block, both highlighted by highlight.js

Changelog — renders CHANGELOG.md as a first-class page

llmwiki changelog page — keep-a-changelog format with colored headings for Added / Fixed / Changed and auto-linked PR references

Projects index — freshness badges + per-project stats

llmwiki projects index — three demo project cards with green/yellow/red freshness badges showing how recently each project was touched

What you get

Human-readable

  • All your sessions, converted from .jsonl to clean, redacted markdown
  • A Karpathy-style wiki — sources/, entities/, concepts/, syntheses/, comparisons/, questions/ linked with [[wikilinks]]
  • A beautiful static site you can browse locally or deploy to GitHub Pages
    • Global search (Cmd+K command palette with fuzzy match over pre-built index)
    • highlight.js client-side syntax highlighting (light + dark themes)
    • Dark mode (system-aware + manual toggle with data-theme)
    • Keyboard shortcuts: / search · g h/p/s nav · j/k rows · ? help
    • Collapsible tool-result sections (auto-expand > 500 chars)
    • Copy-as-markdown + copy-code buttons
    • Breadcrumbs + reading progress bar
    • Filter bar on sessions table (project/model/date/text)
    • Reading time estimates (X min read)
    • Related pages panel at the bottom of every session
    • Activity heatmap on the home page
    • Model info cards with structured schema (provider, pricing, benchmarks)
    • Auto-generated vs-comparison pages between AI models
    • Append-only changelog timeline with pricing sparkline
    • Project topic chips (GitHub-style tags on project cards)
    • Agent labels (colored badges: Claude/Codex/Copilot/Cursor/Gemini)
    • Recently-updated card on the home page
    • Dataview-style structured queries in the command palette
    • Hover-to-preview wikilinks
    • Deep-link icons next to every heading
    • Mobile-responsive + print-friendly
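The Cmd+K palette's fuzzy match runs client-side over the pre-built search index. A minimal subsequence-style matcher conveys the idea (illustrative only, not llmwiki's actual code; the index entries here are the demo project names):

```python
def fuzzy_match(query: str, candidate: str) -> bool:
    """True if every character of query appears in candidate, in order."""
    chars = iter(candidate.lower())
    # Membership tests on an iterator consume it, so order is enforced.
    return all(ch in chars for ch in query.lower())

def search(query, index):
    # Rank shorter titles first so tighter matches surface at the top.
    hits = [title for title in index if fuzzy_match(query, title)]
    return sorted(hits, key=len)

index = ["demo-blog-engine", "demo-ml-pipeline", "demo-todo-api"]
print(search("todo", index))  # → ['demo-todo-api']
```

Real palettes add scoring (consecutive-run bonuses, word-boundary bonuses), but the subsequence test is the core trick.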

AI-consumable (v0.4)

Every HTML page has sibling machine-readable files at the same URL:

  • <page>.html — human HTML with schema.org microdata
  • <page>.txt — plain text version (no HTML tags)
  • <page>.json — structured metadata + body

Site-level AI-agent entry points:

File What
/llms.txt Short index per llmstxt.org spec
/llms-full.txt Flattened plain-text dump (~5 MB cap) — paste into any LLM's context
/graph.jsonld Schema.org JSON-LD entity/concept/source graph
/sitemap.xml Standard sitemap with lastmod
/rss.xml RSS 2.0 feed of newest sessions
/robots.txt AI-friendly robots with llms.txt reference
/ai-readme.md AI-specific navigation instructions
/manifest.json Build manifest with SHA-256 hashes + perf budget

Every page also includes an <!-- llmwiki:metadata --> HTML comment that AI agents can parse without fetching the separate .json sibling.
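An agent can pull that comment out with a couple of lines. The exact payload format is not specified in this README, so the JSON shape below is an assumption; the extraction pattern is the point:

```python
import json
import re

# Hypothetical page: the README only promises an `llmwiki:metadata` HTML
# comment; this JSON payload is an assumed shape, not the real schema.
page = """<html><head>
<!-- llmwiki:metadata
{"title": "demo-todo-api session", "project": "demo-todo-api", "model": "claude"}
-->
</head><body>...</body></html>"""

def extract_metadata(html: str):
    """Return the parsed metadata comment, or None if the page has none."""
    m = re.search(r"<!--\s*llmwiki:metadata\s*(.*?)-->", html, re.DOTALL)
    return json.loads(m.group(1)) if m else None

meta = extract_metadata(page)
print(meta["project"])  # → demo-todo-api
```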

Quality & governance (v1.0)

  • 4-factor confidence scoring — source count, source quality, recency, cross-references; with Ebbinghaus-inspired decay per content-type
  • 5-state lifecycle machine — draft → reviewed → verified → stale → archived with 90-day auto-stale
  • 15 lint rules — 8 structural (frontmatter, link integrity, orphans, freshness, duplicates, index sync…) + 3 LLM-powered (contradictions, claim verification, summary accuracy) + stale_candidates (#51) + cache_tier_consistency (#52) + tags_topics_convention (#302) + stale_reference_detection (#303)
  • Auto Dream — MEMORY.md consolidation after 24h + 5 sessions: resolve relative dates, prune outdated, 200-line cap
  • 9 navigation files — CLAUDE.md, AGENTS.md, MEMORY.md, SOUL.md, CRITICAL_FACTS.md, hints.md, hot.md + per-project hot caches
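The 90-day auto-stale rule is simple enough to sketch. This is a simplification under one assumption (that any live page untouched for 90 days reads as stale); the real transitions live inside llmwiki's lifecycle module:

```python
from datetime import date, timedelta

STATES = ["draft", "reviewed", "verified", "stale", "archived"]
STALE_AFTER = timedelta(days=90)  # the README's 90-day auto-stale threshold

def effective_state(state: str, last_touched: date, today: date) -> str:
    """Demote live pages to 'stale' after 90 quiet days (assumed rule)."""
    if state in ("stale", "archived"):
        return state                      # terminal-ish states stay put
    if today - last_touched >= STALE_AFTER:
        return "stale"
    return state

today = date(2026, 4, 21)
print(effective_state("verified", date(2026, 1, 1), today))  # → stale
print(effective_state("draft", date(2026, 4, 1), today))     # → draft
```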

Obsidian-native experience (v1.0)

  • link-obsidian CLI — symlinks the whole project into an Obsidian vault; graph view + backlinks + full-text search just work
  • Dataview dashboard — 10 ready-to-use queries (recently updated, by confidence, by lifecycle, by project, by entity type, open questions, stale pages)
  • Templater templates — 4 templates for source/entity/concept/synthesis pages, seeded with confidence + lifecycle + today's date
  • Category pages — tag-based index pages in both Dataview (Obsidian) and static markdown (HTML) modes
  • Integration guide — docs/obsidian-integration.md covers 6 recommended plugins with per-plugin configs

Automation

  • SessionStart hook — auto-syncs new sessions in the background on every Claude Code launch
  • File watcher — llmwiki watch polls agent stores with debounce and runs sync on change
  • Auto-build on sync — /wiki-sync triggers /wiki-build (configurable; default on)
  • Configurable scheduled sync — llmwiki schedule generates OS-specific task files (launchd/systemd/Task Scheduler)
  • MCP server — 12 production tools (query, search, list, read, lint, sync, export, + confidence, lifecycle, dashboard, entity search, category browse) queryable from any MCP client (Claude Desktop, Cline, Cursor, ChatGPT desktop)
  • Multi-agent skill mirror — llmwiki install-skills mirrors .claude/skills/ to .codex/skills/ and .agents/skills/
  • Pending ingest queue — SessionStart hook converts + queues; /wiki-sync processes queue
  • No servers, no database, no npm — Python stdlib + markdown. Syntax highlighting loads from a highlight.js CDN at view time.
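The debounce in the watcher exists so one burst of file writes triggers one sync, not dozens. A minimal polling debouncer, with hypothetical parameters (this is a sketch of the idea, not the shipped `llmwiki watch` implementation):

```python
import time

def watch(poll_state, on_change, debounce=2.0, interval=0.05, rounds=40):
    """Poll `poll_state()` every `interval` seconds; call `on_change` only
    once the state has stayed unchanged for `debounce` seconds."""
    last_state = poll_state()
    changed_at = None
    for _ in range(rounds):
        state = poll_state()
        now = time.monotonic()
        if state != last_state:
            last_state, changed_at = state, now  # change seen: restart quiet timer
        elif changed_at is not None and now - changed_at >= debounce:
            on_change(state)                     # quiet long enough: fire once
            changed_at = None
        time.sleep(interval)

# In a real watcher, poll_state might hash the mtimes under
# ~/.claude/projects/ and on_change might shell out to `llmwiki sync`.
```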

How it works

┌─────────────────────────────────────┐
│  ~/.claude/projects/*/*.jsonl       │  ← Claude Code sessions
│  ~/.codex/sessions/**/*.jsonl       │  ← Codex CLI sessions
│  ~/Library/.../Cursor/workspaceS…   │  ← Cursor
│  ~/Documents/Obsidian Vault/        │  ← Obsidian
│  ~/.gemini/                         │  ← Gemini CLI
└──────────────┬──────────────────────┘
               │
               ▼   python3 -m llmwiki sync
┌─────────────────────────────────────┐
│  raw/sessions/<project>/            │  ← immutable markdown (Karpathy layer 1)
│     2026-04-08-<slug>.md            │
└──────────────┬──────────────────────┘
               │
               ▼   /wiki-ingest  (your coding agent)
┌─────────────────────────────────────┐
│  wiki/sources/<slug>.md             │  ← LLM-generated wiki (Karpathy layer 2)
│  wiki/entities/<Name>.md            │
│  wiki/concepts/<Name>.md            │
│  wiki/syntheses/<Name>.md           │
│  wiki/comparisons/<Name>.md         │
│  wiki/questions/<Name>.md           │
│  wiki/index.md, overview.md, log.md │
└──────────────┬──────────────────────┘
               │
               ▼   python3 -m llmwiki build
┌─────────────────────────────────────┐
│  site/                              │  ← static HTML + AI exports
│  ├── index.html, style.css, ...     │
│  ├── sessions/<project>/<slug>.html │
│  ├── sessions/<project>/<slug>.txt  │  (AI sibling)
│  ├── sessions/<project>/<slug>.json │  (AI sibling)
│  ├── llms.txt, llms-full.txt        │
│  ├── graph.jsonld                   │
│  ├── sitemap.xml, rss.xml           │
│  ├── robots.txt, ai-readme.md       │
│  ├── manifest.json                  │
│  └── search-index.json              │
└─────────────────────────────────────┘

See docs/architecture.md for the full 3-layer Karpathy + 8-layer build breakdown.

Documentation

Full production documentation lives under docs/. The editorial hub is docs/index.md — tutorials, per-agent guides, reference, and deployment, all in one place.

Start here:

Goal Read
Install and build your first site in 10 minutes Tutorial 01 → 02
Use llmwiki with Claude Code Tutorial 03
Use llmwiki with Codex CLI Tutorial 04
Query / lint / review your wiki daily Tutorial 05
Point llmwiki at an existing Obsidian / Logseq vault Tutorial 06
See four real end-to-end workflows Tutorial 07

Contributing to docs? See the style guide.

Install

macOS / Linux

git clone https://github.com/Pratiyush/llm-wiki.git
cd llm-wiki
./setup.sh

Windows

git clone https://github.com/Pratiyush/llm-wiki.git
cd llm-wiki
setup.bat

With pip (v0.3+)

pip install -e .                # basic — everything you need
pip install -e '.[pdf]'         # + PDF ingestion
pip install -e '.[dev]'         # + pytest + ruff
pip install -e '.[all]'         # all of the above

Syntax highlighting is now powered by highlight.js, loaded from a CDN at view time — no optional deps required.

What setup does

  1. Creates raw/, wiki/, site/ data directories
  2. Installs the llmwiki Python package in-place
  3. Detects your coding agents and enables matching adapters
  4. Optionally offers to install the SessionStart hook into ~/.claude/settings.json for auto-sync
  5. Runs a first sync so you see output immediately

For maintainers

Running the project? The governance scaffold lives under docs/maintainers/ and is loaded by a dedicated skill:

File What it's for
CONTRIBUTING.md Short rules for contributors — read this first
CODE_OF_CONDUCT.md Contributor Covenant 2.1
SECURITY.md Disclosure process for redaction bugs, XSS, data leaks
docs/maintainers/ARCHITECTURE.md One-page system diagram + layer boundaries + what NOT to add
docs/maintainers/REVIEW_CHECKLIST.md Canonical code-review criteria
docs/maintainers/RELEASE_PROCESS.md Version bump → CHANGELOG → tag → build → publish
docs/maintainers/TRIAGE.md Label taxonomy + stale-issue policy
docs/maintainers/ROADMAP.md Near-term plan + release themes
docs/maintainers/DECLINED.md Graveyard of declined ideas with reasons

Four Claude Code slash commands automate the common ops:

  • /review-pr <N> — apply the REVIEW_CHECKLIST to a PR and post findings
  • /triage-issue <N> — apply label, milestone, and priority to a new issue
  • /release <version> — walk the release process step by step
  • /maintainer — meta-skill that loads every governance doc as context

Running E2E tests

The unit suite (pytest tests/ — 472 tests) runs in milliseconds and covers every module. The end-to-end suite under tests/e2e/ is separate: it builds a minimal demo site, serves it on a random port, drives a real browser via Playwright, and runs scenarios written in Gherkin via pytest-bdd.

Why both? Unit tests lock the contract at the module boundary; E2E locks the contract at the user's browser. A diff that passes unit tests but breaks the Cmd+K palette will fail E2E.

Install the extras (one-time, ~300 MB for Chromium):

pip install -e '.[e2e]'
python -m playwright install chromium

Run the suite:

pytest tests/e2e/ --browser=chromium

Run a single feature:

pytest tests/e2e/test_command_palette.py --browser=chromium -v

The E2E suite is excluded from the default pytest tests/ run (see the --ignore=tests/e2e entry in addopts in pyproject.toml) so you can iterate on the unit suite without waiting for browser installs. CI runs the E2E job as a separate workflow (.github/workflows/e2e.yml) that only fires on PRs touching build.py, the viz modules, or tests/e2e/**.

Feature files live under tests/e2e/features/ — one per UI area (homepage, session page, command palette, keyboard nav, mobile nav, theme toggle, copy-as-markdown, responsive breakpoints, edge cases, accessibility, visual regression). Step definitions are all in tests/e2e/steps/ui_steps.py. Adding a new scenario is usually a 2-line change to a .feature file plus maybe one new step.

Run locally with an HTML report:

pytest tests/e2e/ --browser=chromium \
  --html=e2e-report/index.html --self-contained-html
open e2e-report/index.html     # macOS — opens the browseable report

Where to see test reports:

What Where
Unit test results GitHub Actions → ci.yml → latest run → lint-and-test job logs
E2E HTML report GitHub Actions → e2e.yml → latest run → Artifacts → e2e-html-report (14-day retention)
Visual regression screenshots Same run → Artifacts → e2e-screenshots
Playwright traces (failed runs only) Same run → Artifacts → playwright-traces (open with playwright show-trace <zip>)
Demo site deploy status GitHub Actions → pages.yml → latest run

Locally, the HTML report is one file (e2e-report/index.html) that you can open in any browser — pass/fail per scenario, duration, stdout/stderr, screenshot on failure.

Scheduled sync

Run llmwiki schedule to generate the right scheduled task file for your OS from your config (cadence, time, paths). Or copy a static template:

OS Auto-generate Static template Install guide
macOS llmwiki schedule --platform macos launchd.plist docs/scheduled-sync.md
Linux llmwiki schedule --platform linux systemd.timer + .service docs/scheduled-sync.md
Windows llmwiki schedule --platform windows task.xml docs/scheduled-sync.md

Cadence (daily / weekly / hourly), hour/minute, and paths are all configurable in examples/sessions_config.json. See docs/scheduled-sync.md for full instructions.
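The generated template contents aren't shown here, but the systemd case gives a feel for what the generator must produce: an OnCalendar= line derived from the config's cadence fields. This mapping is an assumption for illustration; `llmwiki schedule` is the real generator:

```python
def on_calendar(cadence: str, hour: int = 3, minute: int = 0) -> str:
    """Map a cadence from sessions_config.json to a systemd OnCalendar value.
    (Illustrative only; defaults and the 'weekly means Monday' choice are
    assumptions, not llmwiki's documented behavior.)"""
    stamp = f"{hour:02d}:{minute:02d}"
    if cadence == "hourly":
        return f"OnCalendar=*-*-* *:{minute:02d}:00"
    if cadence == "daily":
        return f"OnCalendar=*-*-* {stamp}:00"
    if cadence == "weekly":
        return f"OnCalendar=Mon *-*-* {stamp}:00"
    raise ValueError(f"unknown cadence: {cadence}")

print(on_calendar("daily", hour=6, minute=30))  # → OnCalendar=*-*-* 06:30:00
```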

CLI reference

llmwiki init                    # scaffold raw/ wiki/ site/ + seed nav files
llmwiki sync                    # convert .jsonl → markdown (auto-build + auto-lint if configured)
llmwiki build                   # compile static HTML + AI exports
llmwiki serve                   # local HTTP server on 127.0.0.1:8765
llmwiki adapters                # list available adapters + configured state (v1.0)
llmwiki graph                   # build knowledge graph (v0.2)
llmwiki watch                   # file watcher with debounce (v0.2)
llmwiki export-obsidian         # write wiki to Obsidian vault (v0.2)
llmwiki export-qmd              # export wiki as a qmd collection (v0.6)
llmwiki export-marp             # export Marp slide deck from wiki (v0.7)
llmwiki eval                    # 7-check structural quality score /100 (v0.3)
llmwiki lint                    # 11-rule wiki lint (8 basic + 3 LLM-powered, v1.0)
llmwiki check-links             # verify internal links in site/ (v0.4)
llmwiki export <format>         # AI-consumable exports (v0.4)
llmwiki synthesize              # auto-ingest synthesis pipeline (v0.5)
llmwiki manifest                # build site manifest + perf budget (v0.4)
llmwiki link-obsidian           # symlink project into Obsidian vault (v1.0)
llmwiki install-skills          # mirror .claude/skills to .codex/ and .agents/ (v1.0)
llmwiki schedule                # generate OS-specific scheduled sync task (v1.0)
llmwiki version

Each subcommand has its own --help. All commands are also wrapped in one-click shell/batch scripts: sync.sh/.bat, build.sh/.bat, serve.sh/.bat, upgrade.sh/.bat.

Works with

Agent Adapter Status Added in
Claude Code llmwiki.adapters.claude_code ✅ Production v0.1
Obsidian (input) llmwiki.adapters.obsidian ✅ Production v0.1
Obsidian (output) llmwiki.obsidian_output ✅ Production v0.2
Codex CLI llmwiki.adapters.codex_cli ✅ Production v0.3
Cursor llmwiki.adapters.cursor ✅ Production v0.5
Gemini CLI llmwiki.adapters.gemini_cli ✅ Production v0.5
PDF files llmwiki.adapters.pdf ✅ Production v0.5
Copilot Chat llmwiki.adapters.copilot_chat ✅ Production v0.9
Copilot CLI llmwiki.adapters.copilot_cli ✅ Production v0.9
OpenCode / OpenClaw — ⏸ Deferred —

Adding a new agent is one small file — subclass BaseAdapter, declare SUPPORTED_SCHEMA_VERSIONS, ship a fixture + snapshot test.
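As a sketch of that recipe: the stand-in `BaseAdapter`, the `v` schema field, and `MyAgentAdapter` below are all hypothetical; only the class name and the `SUPPORTED_SCHEMA_VERSIONS` attribute come from this README.

```python
import json

class BaseAdapter:
    """Stand-in for llmwiki's real base class (which this README doesn't show)."""
    SUPPORTED_SCHEMA_VERSIONS: tuple = ()

    def parse_line(self, line: str) -> dict:
        raise NotImplementedError

class MyAgentAdapter(BaseAdapter):
    """Hypothetical adapter for an agent whose JSONL carries a 'v' field."""
    SUPPORTED_SCHEMA_VERSIONS = ("1",)

    def parse_line(self, line: str) -> dict:
        record = json.loads(line)
        if str(record.get("v")) not in self.SUPPORTED_SCHEMA_VERSIONS:
            raise ValueError(f"unsupported schema version: {record.get('v')}")
        return {"role": record["role"], "text": record["text"]}

msg = MyAgentAdapter().parse_line('{"v": 1, "role": "user", "text": "hi"}')
print(msg)  # → {'role': 'user', 'text': 'hi'}
```

The fixture + snapshot test the README asks for would then pin this adapter's output for a known transcript.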

MCP server

llmwiki ships its own MCP server (stdio transport, no SDK dependency) so any MCP client can query your wiki directly.

python3 -m llmwiki.mcp   # runs on stdin/stdout

Twelve production tools (7 core + 5 added in v1.0 #159):

Tool What
wiki_query(question, max_pages) Keyword search + page content (no LLM synthesis)
wiki_search(term, include_raw) Raw grep over wiki/ (+ optional raw/)
wiki_list_sources(project) List raw source files with metadata
wiki_read_page(path) Read one page (path-traversal guarded)
wiki_lint() Orphans + broken-wikilinks report
wiki_sync(dry_run) Trigger the converter
wiki_export(format) Return any AI-consumable export (llms.txt, jsonld, sitemap, rss, manifest)
wiki_confidence(min, max) Pages by confidence range (v1.0)
wiki_lifecycle(state) Pages by draft/reviewed/verified/stale/archived (v1.0)
wiki_dashboard() Health summary: counts by type, lifecycle, confidence (v1.0)
wiki_entity_search(name, entity_type) Search entities by name substring or type (v1.0)
wiki_category_browse(tag) Browse tags with counts, drill into specific tag (v1.0)
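The path-traversal guard on wiki_read_page is worth spelling out, since any file-serving tool needs one. A common approach on Python 3.9+ (a sketch, not necessarily llmwiki's exact code):

```python
from pathlib import Path

def safe_read(wiki_root: Path, rel_path: str) -> str:
    """Resolve the requested page and refuse anything outside wiki_root,
    so inputs like '../../etc/passwd' cannot escape the wiki directory."""
    target = (wiki_root / rel_path).resolve()
    if not target.is_relative_to(wiki_root.resolve()):  # Path.is_relative_to: 3.9+
        raise PermissionError(f"path escapes wiki/: {rel_path}")
    return target.read_text()
```

Resolving before comparing is the important part: it collapses `..` segments and symlinks before the containment check runs.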

Register in your MCP client's config — e.g. for Claude Desktop, add to ~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "llmwiki": {
      "command": "python3",
      "args": ["-m", "llmwiki.mcp"]
    }
  }
}
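Under the hood this is plain JSON-RPC over the server's stdin/stdout. A hand-rolled call might look like this, assuming newline-delimited framing per the MCP stdio transport and that your build exposes the tools listed above:

```python
import json

def call_tool(proc, req_id, name, arguments):
    """Send one JSON-RPC `tools/call` request over stdio and read one
    newline-delimited JSON response line back."""
    request = {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }
    proc.stdin.write(json.dumps(request) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

# Wiring it to the real server (invocation taken from this README):
# import subprocess
# proc = subprocess.Popen(["python3", "-m", "llmwiki.mcp"], text=True,
#                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# print(call_tool(proc, 1, "wiki_dashboard", {}))
```

In practice an MCP client also sends an initialize handshake first; this sketch shows only the tool-call framing.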

Configuration

Single JSON config at examples/sessions_config.json. Copy to config.json and edit:

{
  "filters": {
    "live_session_minutes": 60,
    "exclude_projects": []
  },
  "redaction": {
    "real_username": "YOUR_USERNAME",
    "replacement_username": "USER",
    "extra_patterns": [
      "(?i)(api[_-]?key|secret|token|bearer|password)...",
      "sk-[A-Za-z0-9]{20,}"
    ]
  },
  "truncation": {
    "tool_result_chars": 500,
    "bash_stdout_lines": 5
  },
  "adapters": {
    "obsidian": {
      "vault_paths": ["~/Documents/Obsidian Vault"]
    }
  }
}

All paths, regexes, truncation limits, and per-adapter settings are tunable. See docs/configuration.md.
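Applying that redaction block is conceptually two passes: a username swap, then every extra pattern blanked. The semantics below ("replace the whole match with [REDACTED]") and the sample values are assumptions for illustration; docs/configuration.md has the real behavior:

```python
import re

config = {
    "real_username": "pratiyush",          # hypothetical sample values
    "replacement_username": "USER",
    "extra_patterns": [r"sk-[A-Za-z0-9]{20,}"],
}

def redact(text: str, cfg: dict) -> str:
    """Swap the username, then blank every extra pattern (assumed semantics)."""
    out = text.replace(cfg["real_username"], cfg["replacement_username"])
    for pattern in cfg["extra_patterns"]:
        out = re.sub(pattern, "[REDACTED]", out)
    return out

line = "/Users/pratiyush/.env has OPENAI key sk-abcdefghijklmnopqrstuv"
print(redact(line, config))  # → /Users/USER/.env has OPENAI key [REDACTED]
```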

.llmwikiignore

Gitignore-style pattern file at the repo root. Skip entire projects, dates, or specific sessions without touching config:

# Skip a whole project
confidential-client/
# Skip anything before a date
*2025-*
# Keep exception
!confidential-client/public-*
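As in gitignore, the last matching rule wins, which is what makes the `!` exception work. A simplified matcher conveys the semantics (real gitignore matching is richer; this sketch treats a trailing `/` as a directory prefix and uses plain fnmatch globs):

```python
from fnmatch import fnmatch

patterns = [
    "confidential-client/",            # skip a whole project
    "*2025-*",                         # skip anything before a date
    "!confidential-client/public-*",   # keep exception
]

def ignored(path: str, rules) -> bool:
    """Return True if the last matching rule ignores the path."""
    verdict = False
    for raw in rules:
        keep = raw.startswith("!")             # '!' re-includes a path
        pat = raw.lstrip("!")
        hit = path.startswith(pat) if pat.endswith("/") else fnmatch(path, pat)
        if hit:
            verdict = not keep                 # last match wins
    return verdict

print(ignored("confidential-client/notes.md", patterns))           # → True
print(ignored("confidential-client/public-roadmap.md", patterns))  # → False
```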

Karpathy's LLM Wiki pattern

This project follows the three-layer structure described in Karpathy's gist:

  1. Raw sources (raw/) — immutable. Session transcripts converted from .jsonl.
  2. The wiki (wiki/) — LLM-generated. One page per entity, concept, source. Interlinked via [[wikilinks]].
  3. The schema (CLAUDE.md, AGENTS.md) — tells your agent how to ingest and query.

See docs/architecture.md for the full breakdown and how it maps to the file tree.

Design principles

  • Stdlib first — the only mandatory runtime dep is markdown. pypdf is an optional extra for PDF ingestion.
  • Works offline — no Google fonts, no external CSS. Syntax highlighting loads from a highlight.js CDN but degrades gracefully without it.
  • Redact by default — username, API keys, tokens, and emails all get redacted before entering the wiki.
  • Idempotent everything — re-running any command is safe and cheap.
  • Agent-agnostic core — the converter doesn't know which agent produced the .jsonl; adapters translate.
  • Privacy by default — localhost-only binding, no telemetry, no cloud calls.
  • Dual-format output (v0.4) — every page ships in both human (HTML) and AI-agent (TXT + JSON + JSON-LD + sitemap + llms.txt) formats.

Docs

Per-adapter docs:

Releases

Version Focus Tag
v0.1.0 Core release — Claude Code adapter, god-level HTML UI, schema, CI, plugin scaffolding v0.1.0
v0.2.0 Extensions — 3 new slash commands, 3 new adapters, Obsidian bidirectional, full MCP server v0.2.0
v0.3.0 PyPI packaging, eval framework, i18n scaffold v0.3.0
v0.4.0 AI + human dual format — per-page .txt/.json siblings, llms.txt, JSON-LD graph, sitemap, RSS, schema.org microdata, reading time, related pages, activity heatmap, deep-link anchors, build manifest, link checker, wiki_export MCP tool v0.4.0
v0.5.0 – v0.9.0 Internal sprint milestones — features (_context.md, auto-ingest, qmd export, model-profile schema, activity heatmap, Copilot adapters, etc.) shipped consolidated under the v0.9.x line. No standalone tags were published. —
v0.9.1 Sprint 1 & 2 foundation — link-obsidian CLI, 4-factor confidence scoring, 5-state lifecycle machine, llmbook-reference skill, 7 entity types, flat raw/ naming, pending ingest queue, _context.md stubs, meeting + Jira adapters, configurable Web Clipper intake, rich log format v0.9.1
v0.9.2 Sprint 3 quality — 11 lint rules (8 basic + 3 LLM-powered), Auto Dream MEMORY.md consolidation, Dataview dashboard template, category pages (Dataview + static), auto-build on sync + configurable lint schedule v0.9.2
v0.9.3 Sprint 3 polish — Obsidian Templater templates, integration guide, two-way editing tests, MCP server 7→12 tools, adapter config validation, pipeline fix (sigstore, PyPI gate) v0.9.3
v0.9.4 Session C1 (Sprint 4) — multi-agent skill installer, enhanced search with facets, configurable scheduled sync (launchd/systemd/Task Scheduler), CI wiki-checks workflow v0.9.4
v0.9.5 Docs polish + consistency audit before v1.0.0 v0.9.5
v1.0.0 Production-ready Obsidian integration — full v1.0 scope v1.0.0
v1.1.0-rc1 Solo quick-win sprint — candidates workflow, Ollama scaffold, prompt-cache scaffold v1.1.0-rc1
v1.1.0-rc2 Session E — interactive graph viewer + remaining code-only v1.1 work v1.1.0-rc2
v1.1.0-rc3 Gap-sweep bundle — state portability, quarantine, sync --status, log CLI, synthesize --estimate breakdown, tag family, stale references, graph context menu, raw immutability, AI-sessions default v1.1.0-rc3
v1.1.0-rc4 Navigation + quality — graph site_url resolver (99.7% → 0% dead clicks), llmwiki backlinks CLI (95% → 0% orphan pages), source-code → GitHub link rewriter (471 → 100 broken), verify-before-fixing contribution rule v1.1.0-rc4
v1.1.0-rc5 Site audit + 5 closed batches — session-local ref stripping (351 → 247 broken), cheatsheet, README/CONTRIBUTING compile, expanded E2E, slash-CLI parity test, 4 adapter docs, Ollama tutorial, dual-mode docs skeleton, /wiki-synthesize slash v1.1.0-rc5
v1.1.0-rc6 rc6 batch — fixed adapter tag hardcoded to claude-code for every adapter (#346), tutorial UX polish with in-page TOC + prev/next + edit-on-GitHub (#282), command palette now indexes 107 doc pages + 17 slash commands (#277), content-hash cache for md_to_html (#283) v1.1.0-rc6
v1.1.0-rc7 rc7 batch — automatic AI-suggested tags during synthesis (#351), link-checker config fix (#348, #350, #353) v1.1.0-rc7
v1.1.0-rc8 rc8 batch — complete Mode B agent-delegate backend (#316): new llmwiki synthesize --list-pending + --complete <uuid> CLI subcommands, /wiki-sync step 6 auto-detects pending prompts, Mode B ships end-to-end without an API key v1.1.0-rc8

Roadmap

Shipped milestones:

  • v0.5.0 — Folder-level _context.md, auto-ingest, adapter graduations, lazy search index, scheduled sync, WCAG, E2E tests (milestone)
  • v0.6.0 — qmd export, GitLab Pages CI, PyPI release automation, maintainer governance scaffold (milestone)
  • v0.7.0 — Structured model-profile schema, vs-comparison pages, append-only changelog timeline (milestone)
  • v0.8.0 — 365-day activity heatmap, tool-calling bar chart, token usage card, session metrics frontmatter (milestone)
  • v0.9.0 — Project topics, agent labels, Copilot adapters, image pipeline, highlight.js, public demo deployment
  • v0.9.x — Sprint 1-4 foundation for v1.0.0 Obsidian integration: confidence scoring, lifecycle state machine, 9 navigation files, 11 lint rules, Auto Dream, Dataview dashboard, multi-agent skills, 12-tool MCP server, meeting + Jira adapters

Active milestones:

Milestone Focus Tracking
v1.0.0 Final docs polish + PyPI trusted publisher + release Milestone
v1.1.0 Ollama backend, prompt caching, interactive graph viewer, Homebrew tap Milestone
v1.2.0 ChatGPT + OpenCode adapters, vault-overlay mode, tree-aware search, cache tiers Milestone

Deployment targets

Acknowledgements

License

MIT © Pratiyush

Release History

VersionChangesUrgencyDate
v1.1.0-rc8## Headline **Mode B ships end-to-end.** `llmwiki synthesize` now has `--list-pending` + `--complete <uuid>` subcommands, and `/wiki-sync` auto-detects pending agent-delegate prompts and drives them through completion. Zero API key needed. Mode A (API, #315) remains deferred per user request. ## What's new | Surface | Change | |---|---| | CLI | `llmwiki synthesize --list-pending` prints pending prompts as a table, exit 0 even when empty | | CLI | `llmwiki synthesize --complete <uuid> --page High4/21/2026
v1.1.0-rc7## Headline **Tags are now AI-generated automatically during synthesis** β€” every `/wiki-synthesize` call produces topical tags (prompt-caching, rag, github-actions, …) alongside the deterministic baseline. Zero extra API round-trips. ## Issues closed | # | Kind | Description | |---|---|---| | [#351](https://github.com/Pratiyush/llm-wiki/issues/351) | feat | Automatic AI-suggested tags during synthesis | | [#348](https://github.com/Pratiyush/llm-wiki/issues/348), [#350](https://github.com/PraHigh4/21/2026
v1.1.0-rc6## rc6 batch β€” 4 issues closed in one release | # | Kind | Description | |---|---|---| | [#346](https://github.com/Pratiyush/llm-wiki/issues/346) | fix | `tags:` was hardcoded to `claude-code` for every adapter. Sessions from Codex / Cursor / Copilot / Gemini / OpenCode / ChatGPT / Obsidian were all grouped under the Claude chip on the compiled site. | | [#282](https://github.com/Pratiyush/llm-wiki/issues/282) | feat | Tutorial UX polish: in-page TOC (collapsed `<details>`), prev/next footer, "High4/21/2026
v1.1.0-rc5## Highlights Autonomous long-run closing almost every non-manual open issue in one shot. 5 batches, 3 PRs, **12 issues closed**. ### πŸ”— Session-local ref stripping (#336) Anchors pointing at `tasks.md`, `user_profile.md`, `settings.gradle.kts`, `.kiro/steering/*`, absolute `/Users/…` paths, `../../sources/*` wikilinks, and 20+ other categories now unwrap to inline `<span class=\"session-ref dead-link\">` with the original href in the `title` attribute. Text stays visible, but the compiled sHigh4/21/2026
v1.1.0-rc4## Highlights Major usability fixes surfaced by end-to-end testing. ### 🎯 Graph viewer clicks actually work now (#331, #332) Previous builds sent **99.7%** of node clicks to 404. The viewer rewrote `wiki/entities/Foo.md` β†’ `entities/Foo.html`, but `site/entities/` doesn't exist. New `_compute_site_url()` maps each wiki page β†’ its real compiled URL: - Sources use the `source_file:` frontmatter to derive the date-prefixed `sessions/<proj>/<stem>.html` path. - Entities / concepts / syntheses /High4/20/2026
v1.1.0-rc3## Highlights Seven merged PRs closing **21 gap issues** surfaced by an end-to-end QA pass. ### Bundle 1 β€” Gap sweep (#308) 10 P0/P1/P2 bugs fixed in one PR: - πŸ”΄ **G-06** β€” slug collisions silently overwrote source pages (12Γ— `flickering-orbiting-fern` in one corpus) - πŸ”΄ **G-09** β€” `synthesize` didn't rebuild `wiki/index.md` β†’ 703 lint errors per run - πŸ”΄ **G-18** β€” home "Recently updated" card was hardcoded, now reads from `wiki/log.md` - 🟠 G-10, G-12 β€” log archive frontmatter, dummy synthHigh4/20/2026
v1.1.0-rc2Closes out the code-only tickets in the v1.1.0 scope (Session E). ## What's new since v1.1.0-rc1 ### Added - **#51 β€” `wiki/candidates/` approval workflow.** New `llmwiki/candidates.py` module (`list` / `promote` / `merge` / `discard` / `stale_candidates`), `/wiki-review` slash command, `llmwiki candidates <action>` CLI, 12th lint rule (`stale_candidates`). Non-destructive discard archives to `wiki/archive/candidates/<timestamp>/` with a `.reason.txt` audit file. - **#35 β€” Ollama backend scaffHigh4/17/2026
v1.1.0-rc1Pre-release tag for the first batch of v1.1.0 work. All 6 "solo quick wins" from the post-v1.0 roadmap shipped. ## Features - **#43** OpenCode / OpenClaw adapter - **#44** ChatGPT conversation-export adapter - **#123** Docker container + GHCR publish workflow - **#216** Shell completion (bash / zsh / fish) ## Refactor - **#217** Split `llmwiki/build.py` (3,378 β†’ 1,799 lines, 47% smaller). CSS + JS extracted into `llmwiki/render/`. Byte-identical output. ## Governance - **#215** `.editorconfigHigh4/17/2026
v1.0.0**llmwiki 1.0 is here.** The tool graduates from a session-archive into a full LLM-maintained knowledge base with quality metrics, lifecycle states, Obsidian-native UX, and a 12-tool MCP server. ## Headline features ### Obsidian-native experience - **`link-obsidian` CLI** β€” symlinks the whole project into an Obsidian vault. Graph view, backlinks, full-text search just work. - **4 Templater templates** for source/entity/concept/synthesis pages β€” one-keystroke page creation with confidence + lifHigh4/16/2026
v0.9.5Intermediate checkpoint before v1.0.0. 4 PRs merged covering docs, CSS polish, and repo hygiene. ### Added - **End-to-end setup guide** (#120) β€” `docs/tutorials/setup-guide.md`: 5 parts covering local setup β†’ GitHub Pages deploy β†’ customization β†’ multi-agent wiring ### Changed - **README refresh** (#122) β€” v0.9.0 β†’ v0.9.4, 472 β†’ 1206 tests, new v0.9.1-v0.9.4 entries, 12-tool MCP server, v1.0/1.1/1.2 roadmap - **Light-mode polish** (#119) β€” darker borders, card shadows, visible heatmap level-High4/16/2026
v0.9.45 PRs merged. All code-only work for Sprint 4 landed. 1194 tests passing. ### Added - **Multi-agent skill installer** (#160) β€” new `llmwiki install-skills` mirrors .claude/skills/ into .codex/skills/ + .agents/skills/ - **Enhanced static site search with facets** (#161) β€” confidence-weighted ranking + entity_type/lifecycle/tags filters - **Configurable scheduled sync task** (#162) β€” launchd/systemd/Task Scheduler generator from config - **CI wiki-checks workflow** (#163) β€” eval + lint + build +High4/16/2026
## v0.9.3 (4/16/2026)

### Sprint 3 — Session B

6 PRs merged — Obsidian experience polish + first batch of Sprint 4 (MCP enhancement).

### Added
- **Obsidian Templater templates** (#152) — 4 templates for source/entity/concept/synthesis pages
- **Obsidian integration guide** (#151) — `docs/obsidian-integration.md` with 6-plugin setup
- **5 new MCP tools** (#159) — confidence, lifecycle, dashboard, entity search, category browse
- **Adapter config validation** (#177) — pdf/meeting/jira/web_clipper schemas + CLI state

## v0.9.2 (4/16/2026)

### Sprint 3 — Session A

5 PRs merged; quality & Obsidian-native features landed.

### Added
- **All 11 lint rules** (#155) — new `llmwiki/lint/` package with 8 basic + 3 LLM-powered rules, `@register` decorator pattern, new `llmwiki lint` CLI
- **Auto-build on sync + configurable lint schedule** (#157) — `on-sync`/`daily`/`weekly`/`manual`/`never`
- **Auto Dream** (#156) — MEMORY.md consolidation at 24h + 5-session thresholds
- **Dataview dashboard** (#153) — 10-query template,

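The `@register` decorator pattern named in #155 is a common way to build a pluggable rule set. The rule id, rule body, and `lint` driver below are illustrative of the pattern only, not the shipped `llmwiki/lint/` API:

```python
RULES = {}

def register(rule_id: str):
    """Decorator that files a lint rule in the global registry under its id."""
    def wrap(fn):
        RULES[rule_id] = fn
        return fn
    return wrap

@register("no-empty-page")
def no_empty_page(page_text: str) -> list[str]:
    # Each rule returns a list of finding messages; empty list means clean.
    return [] if page_text.strip() else ["page body is empty"]

def lint(page_text: str) -> dict[str, list[str]]:
    """Run every registered rule; keep only rules that reported findings."""
    findings = {rid: fn(page_text) for rid, fn in RULES.items()}
    return {rid: msgs for rid, msgs in findings.items() if msgs}
```

New rules then need only a decorated function; nothing else has to know they exist.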
## v0.9.1 (4/16/2026)

### Sprint 1 & 2 Foundation Release

16 PRs merged for the Obsidian integration effort. All 913 tests passing.

### Added
- `link-obsidian` CLI command (#132) — symlink llm-wiki into an Obsidian vault
- 4-factor confidence scoring module (#135) — source count, quality, recency, cross-refs
- 5-state lifecycle machine with auto-stale at 90 days (#136)
- `llmbook-reference` bidirectional Claude Code skill (#138)
- 7 entity types in frontmatter schema (#137) — person, org, tool, concept, api, library,

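A 4-factor score like #135 could plausibly be a weighted blend of its inputs. The weights, saturation points, and the 90-day window below echo the release notes but are assumptions, not the shipped formula:

```python
from datetime import date

def confidence(sources: int, avg_quality: float, last_updated: date,
               cross_refs: int, today: date) -> float:
    """Blend source count, quality, recency, and cross-refs into a 0-1 score.
    Weights and curves here are illustrative only."""
    source_factor = min(sources / 5, 1.0)            # saturates at 5 sources
    quality_factor = max(0.0, min(avg_quality, 1.0))
    age_days = (today - last_updated).days
    recency_factor = max(0.0, 1.0 - age_days / 90)   # hits zero at the 90-day stale mark
    xref_factor = min(cross_refs / 10, 1.0)
    return round(0.3 * source_factor + 0.3 * quality_factor
                 + 0.25 * recency_factor + 0.15 * xref_factor, 3)
```

The same recency term would also drive the auto-stale transition in the lifecycle machine (#136).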
## v0.4.0 (4/8/2026)

Fourth release of **llmwiki**. Every wiki page now ships as **both** HTML for humans **and** machine-readable `.txt` + `.json` siblings for AI agents, alongside site-level exports that follow open standards.

### Part A — AI-consumable exports

New module `llmwiki/exporters.py`. Every build now writes:

| File | What |
|---|---|
| `llms.txt` | Short index per the [llmstxt.org spec](https://llmstxt.org) with project list and AI-agent entry points |
| `llms-full.txt` | Flattened plain-text dump, capped |

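The llms.txt shape is small enough to sketch: an H1 title, a blockquote summary, then H2 sections of markdown links. The function name and inputs below are hypothetical, not the real `llmwiki/exporters.py` API:

```python
def render_llms_txt(site_name: str, summary: str,
                    pages: list[tuple[str, str, str]]) -> str:
    """Emit a minimal llms.txt index in the llmstxt.org shape:
    H1 title, blockquote summary, one H2 section of links. (Sketch only.)"""
    lines = [f"# {site_name}", "", f"> {summary}", "", "## Pages"]
    for title, url, desc in pages:
        lines.append(f"- [{title}]({url}): {desc}")
    return "\n".join(lines) + "\n"
```

An agent that fetches `llms.txt` first gets the index, then follows the per-page `.txt` links for detail.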
## v0.3.0 (4/8/2026)

Third release of **llmwiki**. PyPI-ready packaging, a structural eval framework with 7 quality checks, the Codex CLI adapter graduated to production, and an i18n documentation scaffold in 3 languages.

### Added

#### Packaging
- **`pyproject.toml`** — PEP 621 metadata, PyPI-ready
- Optional dep groups: `[highlight]` (pygments), `[pdf]` (pypdf), `[dev]` (pytest+ruff), `[all]`
- Entry point: `llmwiki = llmwiki.cli:main`
- Ruff + pytest config

#### Eval framework
- **`llmwiki/eval.py`** — 7 structural

## v0.2.0 (4/8/2026)

Second release of **llmwiki**. More commands, new adapters, bidirectional Obsidian sync, a full MCP server, and interactive viewer features.

### Added

#### Slash commands
- **`/wiki-update`** — surgical in-place update of one wiki page without a full re-ingest
- **`/wiki-graph`** — walks every `[[wikilink]]` across `wiki/` and produces `graph/graph.json` + interactive `graph/graph.html` (vis.js)
- **`/wiki-reflect`** — higher-order self-reflection across the whole wiki

#### Adapters
- **Cursor** (

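The `/wiki-graph` walk can be approximated in a few lines. The regex and the nodes/edges shape below are assumptions about what a vis.js-friendly `graph/graph.json` might look like, not the command's actual implementation:

```python
import re

# Capture the link target before any | alias or # heading anchor.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def build_graph(pages: dict[str, str]) -> dict:
    """Walk every [[wikilink]] in a {page_name: markdown} mapping and
    return a nodes/edges structure vis.js can render. (Sketch only.)"""
    nodes = [{"id": name} for name in pages]
    edges = []
    for name, text in pages.items():
        for target in WIKILINK.findall(text):
            edges.append({"from": name, "to": target.strip()})
    return {"nodes": nodes, "edges": edges}
```

Aliased links like `[[Page|shown text]]` resolve to the page name, so the graph stays deduplicated by target.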
## v0.1.0 (4/8/2026)

First public release of **llmwiki** — a Karpathy-style LLM Wiki that compiles Claude Code session transcripts into a beautiful, searchable, self-hosted knowledge base.

### Highlights
- **Claude Code adapter** with privacy-by-default redaction (username, API keys, tokens, emails)
- **Obsidian vault input adapter**
- **Codex CLI adapter** (stub)
- **God-level static HTML builder** — Inter + JetBrains Mono, Cmd+K command palette, keyboard shortcuts, dark mode, copy buttons, breadcrumbs, filter bar


## Similar Packages

- **zotero-mcp-lite** (main@2026-04-21): 🚀 Run a high-performance MCP server for Zotero, enabling customizable workflows without cloud dependency or API keys.
- **git-notes-memory** (main@2026-04-21): 🧠 Store and search your notes effectively with Git-native memory storage, enhancing productivity for Claude Code users.
- **claude-code-plugins-plus-skills** (v4.26.0): 423 plugins, 2,849 skills, 177 agents for Claude Code. Open-source marketplace at tonsofskills.com with the ccpi CLI package manager.
- **rails-ai-context** (v5.10.0): Auto-introspect your Rails app and expose it to AI assistants. 38 tools, zero config, works with Claude, Cursor, Copilot, and any MCP client.
- **ctxray** (v2.2.1): See how you really use AI — X-ray your AI coding sessions locally.