llmwiki
LLM-powered knowledge base from your Claude Code, Codex CLI, Cursor, Gemini CLI, and Obsidian sessions. Built on Andrej Karpathy's LLM Wiki pattern.
Live demo: pratiyush.github.io/llm-wiki
Rebuilt on every master push from the synthetic sessions in examples/demo-sessions/. No personal data. Shows every feature of the real tool (activity heatmap, tool charts, token usage, model info cards, vs-comparisons, project topics) running against safe reference data.
Every Claude Code, Codex CLI, Copilot, Cursor, and Gemini CLI session writes a full transcript to disk. You already have hundreds of them and never look at them again.
llmwiki turns that dormant history into a beautiful, searchable, interlinked knowledge base – locally, in two commands. Plus, it produces AI-consumable exports (llms.txt, llms-full.txt, JSON-LD graph, per-page .txt + .json siblings) so other AI agents can query your wiki directly.
./setup.sh # one-time install
./build.sh && ./serve.sh # build + serve at http://127.0.0.1:8765
Contributing in one line: read CONTRIBUTING.md, keep PRs focused (one concern each), use feat: / fix: / docs: / chore: / test: commit prefixes, never commit real session data (raw/ is gitignored), no new runtime deps. CI must be green to merge.
Screenshots
All screenshots below are from the public demo site, which is rebuilt on every master push from the dummy example sessions. Your own wiki will look identical – just with your real work.
Home – projects overview with activity heatmap
All sessions – filterable table across every project
Session detail – full conversation + tool calls
Changelog – renders CHANGELOG.md as a first-class page
Projects index – freshness badges + per-project stats
What you get
Human-readable
- All your sessions, converted from `.jsonl` to clean, redacted markdown
- A Karpathy-style wiki – `sources/`, `entities/`, `concepts/`, `syntheses/`, `comparisons/`, `questions/` linked with `[[wikilinks]]`
- A beautiful static site you can browse locally or deploy to GitHub Pages
- Global search (Cmd+K command palette with fuzzy match over pre-built index)
- highlight.js client-side syntax highlighting (light + dark themes)
- Dark mode (system-aware + manual toggle with `data-theme`)
- Keyboard shortcuts: `/` search · `g h/p/s` nav · `j/k` rows · `?` help
- Collapsible tool-result sections (auto-expand > 500 chars)
- Copy-as-markdown + copy-code buttons
- Breadcrumbs + reading progress bar
- Filter bar on sessions table (project/model/date/text)
- Reading time estimates (`X min read`)
- Related pages panel at the bottom of every session
- Activity heatmap on the home page
- Model info cards with structured schema (provider, pricing, benchmarks)
- Auto-generated vs-comparison pages between AI models
- Append-only changelog timeline with pricing sparkline
- Project topic chips (GitHub-style tags on project cards)
- Agent labels (colored badges: Claude/Codex/Copilot/Cursor/Gemini)
- Recently-updated card on the home page
- Dataview-style structured queries in the command palette
- Hover-to-preview wikilinks
- Deep-link icons next to every heading
- Mobile-responsive + print-friendly
AI-consumable (v0.4)
Every HTML page has sibling machine-readable files at the same URL:
- `<page>.html` – human HTML with schema.org microdata
- `<page>.txt` – plain text version (no HTML tags)
- `<page>.json` – structured metadata + body
Site-level AI-agent entry points:
| File | What |
|---|---|
| `/llms.txt` | Short index per llmstxt.org spec |
| `/llms-full.txt` | Flattened plain-text dump (~5 MB cap) – paste into any LLM's context |
| `/graph.jsonld` | Schema.org JSON-LD entity/concept/source graph |
| `/sitemap.xml` | Standard sitemap with lastmod |
| `/rss.xml` | RSS 2.0 feed of newest sessions |
| `/robots.txt` | AI-friendly robots with llms.txt reference |
| `/ai-readme.md` | AI-specific navigation instructions |
| `/manifest.json` | Build manifest with SHA-256 hashes + perf budget |
Every page also includes an <!-- llmwiki:metadata --> HTML comment that AI agents can parse without fetching the separate .json sibling.
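As a quick illustration of the dual-format contract, the sketch below fetches the `.json` sibling for any page by swapping the extension. It assumes only what is stated above (a `.json` sibling at the same URL); the example URL and the printed keys are illustrative, not a documented schema.

```python
# Minimal sketch: fetch the machine-readable sibling of any wiki page.
# Assumption: every <page>.html has a <page>.json sibling at the same URL.
# The example URL below is illustrative; point it at a page from your own site.
import json
import urllib.request

def fetch_json_sibling(page_url: str) -> dict:
    """Swap .html for .json and download the structured sibling."""
    sibling_url = page_url.rsplit(".html", 1)[0] + ".json"
    with urllib.request.urlopen(sibling_url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_json_sibling("http://127.0.0.1:8765/index.html")
    print(sorted(data.keys()))  # inspect whatever metadata the build emitted
```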
Recipe – query graph.jsonld from your terminal
The JSON-LD graph isn't just for crawlers – you can ask quick questions about your wiki without leaving the shell. Example: print every session as a tree, grouped by project:
python3 examples/scripts/tree_from_graph.py
Output:
8 sessions across 3 projects
(site/graph.jsonld v1.3.0)
llmwiki/
├── demo-blog-engine/ (4 sessions)
│   ├── 2026-03-12 scaffolding-the-rust-blog-engine
│   ├── 2026-03-18 adding-syntax-highlighting
│   ├── 2026-03-25 rss-feed-and-sitemap
│   └── 2026-04-01 dark-mode-toggle
├── demo-ml-pipeline/ (2 sessions)
│   ├── 2026-01-20 training-data-pipeline
│   └── 2026-02-02 model-training-loop
└── demo-todo-api/ (2 sessions)
    ├── 2026-02-08 fastapi-project-bootstrap
    └── 2026-02-15 adding-oauth-login
The full script is stdlib-only at examples/scripts/tree_from_graph.py. The same recipe pattern works for any aggregation question – count sessions per model, find the largest project by token usage, list every entity that appears in 3+ sessions, etc. The graph is yours to slice.
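The sketch below shows the same idea for one of those questions – counting sessions per project straight from graph.jsonld. The node-walking logic is an assumption about the graph's shape (an `@graph` array whose session nodes reference a project via `isPartOf`), so treat it as a starting point and adjust the key names after inspecting your own graph.

```python
# Illustrative sketch: count sessions per project from site/graph.jsonld.
# Assumptions (not guaranteed by this README): nodes live in an "@graph"
# array and each session node references its project under "isPartOf".
import json
from collections import Counter
from pathlib import Path

doc = json.loads(Path("site/graph.jsonld").read_text())
nodes = doc.get("@graph", []) if isinstance(doc, dict) else doc

per_project = Counter()
for node in nodes:
    project = node.get("isPartOf")          # assumed session -> project link
    if isinstance(project, dict):
        project = project.get("name") or project.get("@id")
    if project:
        per_project[str(project)] += 1

for name, count in per_project.most_common():
    print(f"{name}: {count} sessions")
```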
Quality & governance (v1.0)
- 4-factor confidence scoring – source count, source quality, recency, cross-references; with Ebbinghaus-inspired decay per content-type (a toy illustration follows this list)
- 5-state lifecycle machine – draft → reviewed → verified → stale → archived, with 90-day auto-stale
- 16 lint rules – 8 structural (frontmatter, link integrity, orphans, freshness, duplicates, index sync…) + 3 LLM-powered (contradictions, claim verification, summary accuracy) + stale_candidates (#51) + tags_topics_convention (#302) + stale_reference_detection (#303) + frontmatter_count_consistency (#378) + tools_consistency (#378)
- Auto Dream – MEMORY.md consolidation after 24h + 5 sessions: resolve relative dates, prune outdated entries, 200-line cap
- 9 navigation files – CLAUDE.md, AGENTS.md, MEMORY.md, SOUL.md, CRITICAL_FACTS.md, hints.md, hot.md + per-project hot caches
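A toy illustration of the 4-factor idea, assuming nothing about llmwiki's actual weights or field names – it only shows how source count, source quality, recency decay, and cross-references could be folded into a single 0–1 score:

```python
# Toy sketch of a 4-factor confidence score with exponential recency decay.
# The factor names come from the list above; the weights, half-life, and the
# blending formula are illustrative assumptions, not llmwiki's real scorer.
from datetime import date

def recency_factor(last_verified: date, half_life_days: float = 90.0) -> float:
    """Ebbinghaus-style decay: halve the recency credit every half_life_days."""
    age = (date.today() - last_verified).days
    return 0.5 ** (age / half_life_days)

def confidence(source_count: int, source_quality: float,
               last_verified: date, cross_refs: int) -> float:
    """Blend the four factors into a 0-1 score (weights are made up)."""
    sources = min(source_count / 5.0, 1.0)   # saturate at 5 sources
    refs = min(cross_refs / 10.0, 1.0)       # saturate at 10 wikilinks
    recency = recency_factor(last_verified)
    return round(0.35 * sources + 0.25 * source_quality
                 + 0.25 * recency + 0.15 * refs, 2)

print(confidence(3, 0.8, date(2026, 1, 20), 6))
```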
Obsidian-native experience (v1.0)
- `link-obsidian` CLI – symlinks the whole project into an Obsidian vault; graph view + backlinks + full-text search just work
- Dataview dashboard – 10 ready-to-use queries (recently updated, by confidence, by lifecycle, by project, by entity type, open questions, stale pages)
- Templater templates – 4 templates for source/entity/concept/synthesis pages, seeded with confidence + lifecycle + today's date
- Category pages – tag-based index pages in both Dataview (Obsidian) and static markdown (HTML) modes
- Integration guide – `docs/obsidian-integration.md` covers 6 recommended plugins with per-plugin configs
Automation
- SessionStart hook – auto-syncs new sessions in the background on every Claude Code launch
- Auto-build on sync – `/wiki-sync` triggers `/wiki-build` (configurable; default on)
- One-shot pipeline – `llmwiki all` runs build → graph → export → lint in a single command (`--strict` for CI)
- MCP server – 12 production tools (query, search, list, read, lint, sync, export, + confidence, lifecycle, dashboard, entity search, category browse) queryable from any MCP client (Claude Desktop, Cline, Cursor, ChatGPT desktop)
- Pending ingest queue – SessionStart hook converts + queues; `/wiki-sync` processes the queue
- No servers, no database, no npm – Python stdlib + `markdown`. Syntax highlighting loads from a highlight.js CDN at view time.
How it works
┌─────────────────────────────────────┐
│ ~/.claude/projects/*/*.jsonl        │  ← Claude Code sessions
│ ~/.codex/sessions/**/*.jsonl        │  ← Codex CLI sessions
│ ~/Library/.../Cursor/workspaceS…    │  ← Cursor
│ ~/Documents/Obsidian Vault/         │  ← Obsidian
│ ~/.gemini/                          │  ← Gemini CLI
└───────────────┬─────────────────────┘
                │
                ▼  python3 -m llmwiki sync
┌─────────────────────────────────────┐
│ raw/sessions/<project>/             │  ← immutable markdown (Karpathy layer 1)
│   2026-04-08-<slug>.md              │
└───────────────┬─────────────────────┘
                │
                ▼  /wiki-ingest (your coding agent)
┌─────────────────────────────────────┐
│ wiki/sources/<slug>.md              │  ← LLM-generated wiki (Karpathy layer 2)
│ wiki/entities/<Name>.md             │
│ wiki/concepts/<Name>.md             │
│ wiki/syntheses/<Name>.md            │
│ wiki/comparisons/<Name>.md          │
│ wiki/questions/<Name>.md            │
│ wiki/index.md, overview.md, log.md  │
└───────────────┬─────────────────────┘
                │
                ▼  python3 -m llmwiki build
┌─────────────────────────────────────┐
│ site/                               │  ← static HTML + AI exports
│ ├── index.html, style.css, ...      │
│ ├── sessions/<project>/<slug>.html  │
│ ├── sessions/<project>/<slug>.txt   │  ← (AI sibling)
│ ├── sessions/<project>/<slug>.json  │  ← (AI sibling)
│ ├── llms.txt, llms-full.txt         │
│ ├── graph.jsonld                    │
│ ├── sitemap.xml, rss.xml            │
│ ├── robots.txt, ai-readme.md        │
│ ├── manifest.json                   │
│ └── search-index.json               │
└─────────────────────────────────────┘
See docs/architecture.md for the full 3-layer Karpathy + 8-layer build breakdown.
Documentation
Full production documentation lives under docs/. The editorial hub is docs/index.md – tutorials, per-agent guides, reference, and deployment, all in one place.
Start here:
| Goal | Read |
|---|---|
| Install and build your first site in 10 minutes | Tutorial 01 → 02 |
| Use llmwiki with Claude Code | Tutorial 03 |
| Use llmwiki with Codex CLI | Tutorial 04 |
| Query / lint / review your wiki daily | Tutorial 05 |
| Point llmwiki at an existing Obsidian / Logseq vault | Tutorial 06 |
| See four real end-to-end workflows | Tutorial 07 |
Contributing to docs? See the style guide.
Install
macOS / Linux
git clone https://github.com/Pratiyush/llm-wiki.git
cd llm-wiki
./setup.sh
Windows
git clone https://github.com/Pratiyush/llm-wiki.git
cd llm-wiki
setup.bat
With pip (v0.3+)
pip install -e . # basic – everything you need
pip install -e '.[pdf]' # + PDF ingestion
pip install -e '.[dev]' # + pytest + ruff
pip install -e '.[all]' # all of the above
Syntax highlighting is now powered by highlight.js, loaded from a CDN at view time – no optional deps required.
What setup does
- Creates the `raw/`, `wiki/`, `site/` data directories
- Installs the `llmwiki` Python package in-place
- Detects your coding agents and enables matching adapters
- Optionally offers to install the `SessionStart` hook into `~/.claude/settings.json` for auto-sync
- Runs a first sync so you see output immediately
For maintainers
Running the project? The governance scaffold lives under docs/maintainers/ and is loaded by a dedicated skill:
| File | What it's for |
|---|---|
| CONTRIBUTING.md | Short rules for contributors – read this first |
| CODE_OF_CONDUCT.md | Contributor Covenant 2.1 |
| SECURITY.md | Disclosure process for redaction bugs, XSS, data leaks |
| docs/maintainers/ARCHITECTURE.md | One-page system diagram + layer boundaries + what NOT to add |
| docs/maintainers/REVIEW_CHECKLIST.md | Canonical code-review criteria |
| docs/maintainers/RELEASE_PROCESS.md | Version bump → CHANGELOG → tag → build → publish |
| docs/maintainers/TRIAGE.md | Label taxonomy + stale-issue policy |
| docs/maintainers/ROADMAP.md | Near-term plan + release themes |
| docs/maintainers/DECLINED.md | Graveyard of declined ideas with reasons |
Four Claude Code slash commands automate the common ops:
- `/review-pr <N>` – apply the REVIEW_CHECKLIST to a PR and post findings
- `/triage-issue <N>` – label, milestone, and prioritize a new issue
- `/release <version>` – walk the release process step by step
- `/maintainer` – meta-skill that loads every governance doc as context
Running E2E tests
The unit suite (pytest tests/ – 472 tests) runs in milliseconds and
covers every module. The end-to-end suite under tests/e2e/ is
separate: it builds a minimal demo site, serves it on a random port,
drives a real browser via Playwright,
and runs scenarios written in Gherkin
via pytest-bdd.
Why both? Unit tests lock the contract at the module boundary; E2E locks the contract at the user's browser. A diff that passes unit tests but breaks the Cmd+K palette will fail E2E.
Install the extras (one-time, ~300 MB for Chromium):
pip install -e '.[e2e]'
python -m playwright install chromium
Run the suite:
pytest tests/e2e/ --browser=chromium
Run a single feature:
pytest tests/e2e/test_command_palette.py --browser=chromium -v
The E2E suite is excluded from the default pytest tests/ run
(see the --ignore=tests/e2e addopt in pyproject.toml) so you
can iterate on the unit suite without waiting for browser installs.
CI runs the E2E job as a separate workflow (.github/workflows/e2e.yml)
that only fires on PRs touching build.py, the viz modules, or
tests/e2e/**.
Feature files live under tests/e2e/features/ โ one per UI area
(homepage, session page, command palette, keyboard nav, mobile nav,
theme toggle, copy-as-markdown, responsive breakpoints, edge
cases, accessibility, visual regression). Step definitions
are all in tests/e2e/steps/ui_steps.py. Adding a new scenario is
usually a 2-line change to a .feature file plus maybe one new step.
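For orientation, here is a hedged sketch of what one such step definition might look like – the step text, fixture names, and selector are invented for illustration and won't match ui_steps.py exactly:

```python
# Hypothetical pytest-bdd step definitions; names and selectors are
# illustrative only. Real steps live in tests/e2e/steps/ui_steps.py.
from pytest_bdd import parsers, then, when

@when(parsers.parse('I press "{key}"'))
def press_key(page, key):
    # "page" is the Playwright page fixture provided by pytest-playwright.
    page.keyboard.press(key)

@then(parsers.parse("the command palette shows at least {count:d} results"))
def palette_has_results(page, count):
    results = page.locator("[data-palette-result]")  # illustrative selector
    assert results.count() >= count
```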
Run locally with an HTML report:
pytest tests/e2e/ --browser=chromium \
--html=e2e-report/index.html --self-contained-html
open e2e-report/index.html # macOS – opens the browseable report
Where to see test reports:
| What | Where |
|---|---|
| Unit test results | GitHub Actions → ci.yml → latest run → lint-and-test job logs |
| E2E HTML report | GitHub Actions → e2e.yml → latest run → Artifacts → e2e-html-report (14-day retention) |
| Visual regression screenshots | Same run → Artifacts → e2e-screenshots |
| Playwright traces (failed runs only) | Same run → Artifacts → playwright-traces (open with playwright show-trace <zip>) |
| Demo site deploy status | GitHub Actions → pages.yml → latest run |
Locally, the HTML report is one file (e2e-report/index.html) that
you can open in any browser – pass/fail per scenario, duration,
stdout/stderr, screenshot on failure.
Scheduled sync
For a daily / weekly cron-style sync, schedule llmwiki sync directly via your OS's native job runner (launchd on macOS, systemd on Linux, Task Scheduler on Windows). Paths and adapter selection come from examples/sessions_config.json.
CLI reference
llmwiki init # scaffold raw/ wiki/ site/ + seed nav files
llmwiki sync # convert .jsonl → markdown (auto-build + auto-lint if configured)
llmwiki build # compile static HTML + AI exports
llmwiki serve # local HTTP server on 127.0.0.1:8765
llmwiki adapters # list available adapters + configured state (v1.0)
llmwiki graph # build knowledge graph (v0.2)
llmwiki lint # 16-rule wiki lint (v1.2)
llmwiki export <format> # AI-consumable exports (v0.4)
llmwiki synthesize # auto-ingest synthesis pipeline (v0.5)
llmwiki all # build → graph → export → lint in one shot (v1.2)
llmwiki version
Each subcommand has its own --help. All commands are also wrapped in one-click shell/batch scripts: sync.sh/.bat, build.sh/.bat, serve.sh/.bat, upgrade.sh/.bat.
Works with
| Agent | Adapter | Status | Added in |
|---|---|---|---|
| Claude Code | `llmwiki.adapters.claude_code` | ✅ Production | v0.1 |
| Obsidian (input) | `llmwiki.adapters.obsidian` | ✅ Production | v0.1 |
| Obsidian (output) | `llmwiki.obsidian_output` | ✅ Production | v0.2 |
| Codex CLI | `llmwiki.adapters.codex_cli` | ✅ Production | v0.3 |
| Cursor | `llmwiki.adapters.cursor` | ✅ Production | v0.5 |
| Gemini CLI | `llmwiki.adapters.gemini_cli` | ✅ Production | v0.5 |
| Copilot Chat | `llmwiki.adapters.copilot_chat` | ✅ Production | v0.9 |
| Copilot CLI | `llmwiki.adapters.copilot_cli` | ✅ Production | v0.9 |
| OpenCode / OpenClaw | – | ⏸ Deferred | – |
Adding a new agent is one small file – subclass BaseAdapter, declare SUPPORTED_SCHEMA_VERSIONS, ship a fixture + snapshot test.
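To make that concrete, here is a hedged sketch of what such a file might look like. BaseAdapter and SUPPORTED_SCHEMA_VERSIONS come from the sentence above; every import path, method name, and return shape below is an assumption about the adapter contract, so check an existing adapter for the real interface before copying this.

```python
# Hypothetical adapter sketch - the import path, method names, and return
# shape are illustrative guesses, not the documented BaseAdapter contract.
from pathlib import Path

from llmwiki.adapters.base import BaseAdapter  # assumed module path


class MyAgentAdapter(BaseAdapter):
    """Reads ~/.myagent/sessions/*.jsonl and yields normalized sessions."""

    SUPPORTED_SCHEMA_VERSIONS = ("1",)  # declared per the sentence above

    def discover(self):
        # Assumed hook: return the transcript files this adapter owns.
        return sorted(Path.home().glob(".myagent/sessions/*.jsonl"))

    def convert(self, path):
        # Assumed hook: turn one transcript into the converter's normalized
        # session structure (project, slug, date, messages, ...).
        raise NotImplementedError
```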
MCP server
llmwiki ships its own MCP server (stdio transport, no SDK dependency) so any MCP client can query your wiki directly.
python3 -m llmwiki.mcp # runs on stdin/stdout
Twelve production tools (7 core + 5 added in v1.0 #159):
| Tool | What |
|---|---|
| `wiki_query(question, max_pages)` | Keyword search + page content (no LLM synthesis) |
| `wiki_search(term, include_raw)` | Raw grep over wiki/ (+ optional raw/) |
| `wiki_list_sources(project)` | List raw source files with metadata |
| `wiki_read_page(path)` | Read one page (path-traversal guarded) |
| `wiki_lint()` | Orphans + broken-wikilinks report |
| `wiki_sync(dry_run)` | Trigger the converter |
| `wiki_export(format)` | Return any AI-consumable export (llms.txt, jsonld, sitemap, rss, manifest) |
| `wiki_confidence(min, max)` | Pages by confidence range (v1.0) |
| `wiki_lifecycle(state)` | Pages by draft/reviewed/verified/stale/archived (v1.0) |
| `wiki_dashboard()` | Health summary: counts by type, lifecycle, confidence (v1.0) |
| `wiki_entity_search(name, entity_type)` | Search entities by name substring or type (v1.0) |
| `wiki_category_browse(tag)` | Browse tags with counts, drill into a specific tag (v1.0) |
Register in your MCP client's config – e.g. for Claude Desktop, add to ~/Library/Application Support/Claude/claude_desktop_config.json:
{
"mcpServers": {
"llmwiki": {
"command": "python3",
"args": ["-m", "llmwiki.mcp"]
}
}
}
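If you want to poke at the server without a full MCP client, the sketch below speaks raw JSON-RPC over stdio to list the tools. It assumes newline-delimited JSON-RPC 2.0 framing (the usual MCP stdio transport); the protocolVersion string is an example, and a real MCP client library will handle this handshake for you.

```python
# Hedged sketch: talk to the llmwiki MCP server over stdio with raw JSON-RPC.
# Assumes newline-delimited JSON-RPC 2.0 messages (standard MCP stdio framing).
import json
import subprocess

proc = subprocess.Popen(
    ["python3", "-m", "llmwiki.mcp"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

def rpc(method, params=None, msg_id=None):
    msg = {"jsonrpc": "2.0", "method": method, "params": params or {}}
    if msg_id is not None:
        msg["id"] = msg_id
    proc.stdin.write(json.dumps(msg) + "\n")
    proc.stdin.flush()
    if msg_id is not None:
        return json.loads(proc.stdout.readline())

rpc("initialize", {"protocolVersion": "2024-11-05", "capabilities": {},
                   "clientInfo": {"name": "probe", "version": "0"}}, msg_id=1)
rpc("notifications/initialized")                  # notification, no response
tools = rpc("tools/list", msg_id=2)
print([t["name"] for t in tools["result"]["tools"]])
```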
Configuration
Single JSON config at examples/sessions_config.json. Copy to config.json and edit:
{
"filters": {
"live_session_minutes": 60,
"exclude_projects": []
},
"redaction": {
"real_username": "YOUR_USERNAME",
"replacement_username": "USER",
"extra_patterns": [
"(?i)(api[_-]?key|secret|token|bearer|password)...",
"sk-[A-Za-z0-9]{20,}"
]
},
"truncation": {
"tool_result_chars": 500,
"bash_stdout_lines": 5
},
"adapters": {
"obsidian": {
"vault_paths": ["~/Documents/Obsidian Vault"]
}
}
}
All paths, regexes, truncation limits, and per-adapter settings are tunable. See docs/configuration.md.
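As an illustration of what the redaction block drives, the sketch below applies the username swap and extra_patterns to a string the way a simple redactor could. The "[REDACTED]" placeholder and the processing order are assumptions; llmwiki's actual redaction pass may differ.

```python
# Illustrative redaction pass driven by the config above; the placeholder
# text and ordering are assumptions, not llmwiki's documented behaviour.
import json
import re

cfg = json.load(open("config.json"))["redaction"]

def redact(text: str) -> str:
    out = text.replace(cfg["real_username"], cfg["replacement_username"])
    for pattern in cfg["extra_patterns"]:
        out = re.sub(pattern, "[REDACTED]", out)
    return out

print(redact("pushed by YOUR_USERNAME with key sk-abcdefghijklmnopqrstuvwxyz"))
```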
.llmwikiignore
Gitignore-style pattern file at the repo root. Skip entire projects, dates, or specific sessions without touching config:
# Skip a whole project
confidential-client/
# Skip anything before a date
*2025-*
# Keep exception
!confidential-client/public-*
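The matching semantics are gitignore-style. A rough stdlib approximation (last matching pattern wins, `!` re-includes) looks like the sketch below – the real implementation may differ on edge cases such as trailing slashes or nested globs.

```python
# Rough gitignore-style matcher for .llmwikiignore patterns; an approximation
# for illustration, not llmwiki's actual matcher.
from fnmatch import fnmatch
from pathlib import Path

def load_patterns(path=".llmwikiignore"):
    lines = Path(path).read_text().splitlines()
    return [l.strip() for l in lines if l.strip() and not l.startswith("#")]

def is_ignored(rel_path: str, patterns) -> bool:
    ignored = False
    for pat in patterns:
        negate = pat.startswith("!")
        pat = pat.lstrip("!").rstrip("/")
        if fnmatch(rel_path, pat) or rel_path.startswith(pat + "/"):
            ignored = not negate          # last matching pattern wins
    return ignored

pats = load_patterns()
print(is_ignored("confidential-client/2026-01-01-internal.md", pats))   # True
print(is_ignored("confidential-client/public-release-notes.md", pats))  # False
```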
Karpathy's LLM Wiki pattern
This project follows the three-layer structure described in Karpathy's gist:
- Raw sources (`raw/`) – immutable. Session transcripts converted from `.jsonl`.
- The wiki (`wiki/`) – LLM-generated. One page per entity, concept, source. Interlinked via `[[wikilinks]]`.
- The schema (`CLAUDE.md`, `AGENTS.md`) – tells your agent how to ingest and query.
See docs/architecture.md for the full breakdown and how it maps to the file tree.
Design principles
- Stdlib first – the only mandatory runtime dep is `markdown`; `pypdf` is an optional extra for PDF ingestion.
- Works offline – no Google fonts, no external CSS. Syntax highlighting loads from a highlight.js CDN but degrades gracefully without it.
- Redact by default – username, API keys, tokens, emails all get redacted before entering the wiki.
- Idempotent everything – re-running any command is safe and cheap.
- Agent-agnostic core – the converter doesn't know which agent produced the `.jsonl`; adapters translate.
- Privacy by default – localhost-only binding, no telemetry, no cloud calls.
- Dual-format output (v0.4) – every page ships for both humans (HTML) and AI agents (TXT + JSON + JSON-LD + sitemap + llms.txt).
Docs
- Getting started – 5-minute quickstart
- Setup guide – 15-minute end-to-end tutorial: local setup → deploy to GitHub Pages → customization (v1.0)
- Obsidian integration – 5-minute setup, 6 recommended plugins, config recipes (v1.0)
- Architecture – Karpathy 3-layer + 8-layer build breakdown
- Configuration – every tuning knob
- Privacy – redaction rules + `.llmwikiignore` + localhost binding
- Windows setup – Windows-specific gotchas
- Framework – Open Source Framework v4.1 adapted for agent-native dev tools
- Research – Phase 1.25 analysis of 15 prior LLM Wiki implementations
- Feature matrix – all 161 features across 16 categories
- Roadmap – Phase × Layer × Item MoSCoW table
- Translations: i18n/zh-CN, i18n/ja, i18n/es
Per-adapter docs:
- Claude Code adapter
- Codex CLI adapter
- Cursor adapter
- Gemini CLI adapter
- Obsidian adapter
- Copilot adapter (Chat + CLI)
Releases
| Version | Focus | Tag |
|---|---|---|
| v0.1.0 | Core release – Claude Code adapter, god-level HTML UI, schema, CI, plugin scaffolding | v0.1.0 |
| v0.2.0 | Extensions – 3 new slash commands, 3 new adapters, Obsidian bidirectional, full MCP server | v0.2.0 |
| v0.3.0 | PyPI packaging, eval framework, i18n scaffold | v0.3.0 |
| v0.4.0 | AI + human dual format – per-page .txt/.json siblings, llms.txt, JSON-LD graph, sitemap, RSS, schema.org microdata, reading time, related pages, activity heatmap, deep-link anchors, build manifest, link checker, wiki_export MCP tool | v0.4.0 |
| v0.5.0 – v0.9.0 | Internal sprint milestones – features (_context.md, auto-ingest, qmd export, model-profile schema, activity heatmap, Copilot adapters, etc.) shipped consolidated under the v0.9.x line. No standalone tags were published. | – |
| v0.9.1 | Sprint 1 & 2 foundation – link-obsidian CLI, 4-factor confidence scoring, 5-state lifecycle machine, llmbook-reference skill, 7 entity types, flat raw/ naming, pending ingest queue, _context.md stubs, meeting + Jira adapters, configurable Web Clipper intake, rich log format | v0.9.1 |
| v0.9.2 | Sprint 3 quality – 11 lint rules (8 basic + 3 LLM-powered), Auto Dream MEMORY.md consolidation, Dataview dashboard template, category pages (Dataview + static), auto-build on sync + configurable lint schedule | v0.9.2 |
| v0.9.3 | Sprint 3 polish – Obsidian Templater templates, integration guide, two-way editing tests, MCP server 7→12 tools, adapter config validation, pipeline fix (sigstore, PyPI gate) | v0.9.3 |
| v0.9.4 | Session C1 (Sprint 4) – multi-agent skill installer, enhanced search with facets, configurable scheduled sync (launchd/systemd/Task Scheduler), CI wiki-checks workflow | v0.9.4 |
| v0.9.5 | Docs polish + consistency audit before v1.0.0 | v0.9.5 |
| v1.0.0 | Production-ready Obsidian integration – full v1.0 scope | v1.0.0 |
| v1.1.0-rc1 | Solo quick-win sprint – candidates workflow, Ollama scaffold, prompt-cache scaffold | v1.1.0-rc1 |
| v1.1.0-rc2 | Session E – interactive graph viewer + remaining code-only v1.1 work | v1.1.0-rc2 |
| v1.1.0-rc3 | Gap-sweep bundle – state portability, quarantine, sync --status, log CLI, synthesize --estimate breakdown, tag family, stale references, graph context menu, raw immutability, AI-sessions default | v1.1.0-rc3 |
| v1.1.0-rc4 | Navigation + quality – graph site_url resolver (99.7% → 0% dead clicks), llmwiki backlinks CLI (95% → 0% orphan pages), source-code → GitHub link rewriter (471 → 100 broken), verify-before-fixing contribution rule | v1.1.0-rc4 |
| v1.1.0-rc5 | Site audit + 5 closed batches – session-local ref stripping (351 → 247 broken), cheatsheet, README/CONTRIBUTING compile, expanded E2E, slash-CLI parity test, 4 adapter docs, Ollama tutorial, dual-mode docs skeleton, /wiki-synthesize slash | v1.1.0-rc5 |
| v1.1.0-rc6 | rc6 batch – fixed adapter tag hardcoded to claude-code for every adapter (#346), tutorial UX polish with in-page TOC + prev/next + edit-on-GitHub (#282), command palette now indexes 107 doc pages + 17 slash commands (#277), content-hash cache for md_to_html (#283) | v1.1.0-rc6 |
| v1.1.0-rc7 | rc7 batch – automatic AI-suggested tags during synthesis (#351), link-checker config fix (#348, #350, #353) | v1.1.0-rc7 |
| v1.1.0-rc8 | rc8 batch – complete Mode B agent-delegate backend (#316): new llmwiki synthesize --list-pending + --complete <uuid> CLI subcommands, /wiki-sync step 6 auto-detects pending prompts, Mode B ships end-to-end without an API key | v1.1.0-rc8 |
| v1.2.0 | First stable on the 1.x line – llmwiki all one-shot pipeline runner, Playwright + axe-core E2E suite (#384), project-stub auto-seeding, 2 new lint rules, critical export-fidelity + sync-collision fixes, 10 UX-critique items (#387). PyPI distribution name: llm-notebook. | v1.2.0 |
Roadmap
Shipped milestones:
- v0.5.0 – Folder-level _context.md, auto-ingest, adapter graduations, lazy search index, scheduled sync, WCAG, E2E tests (milestone)
- v0.6.0 – qmd export, GitLab Pages CI, PyPI release automation, maintainer governance scaffold (milestone)
- v0.7.0 – Structured model-profile schema, vs-comparison pages, append-only changelog timeline (milestone)
- v0.8.0 – 365-day activity heatmap, tool-calling bar chart, token usage card, session metrics frontmatter (milestone)
- v0.9.0 – Project topics, agent labels, Copilot adapters, image pipeline, highlight.js, public demo deployment
- v0.9.x – Sprint 1-4 foundation for v1.0.0 Obsidian integration: confidence scoring, lifecycle state machine, 9 navigation files, 11 lint rules, Auto Dream, Dataview dashboard, multi-agent skills, 12-tool MCP server, meeting + Jira adapters
Active milestones:
| Milestone | Focus | Tracking |
|---|---|---|
| v1.0.0 | Final docs polish + PyPI trusted publisher + release | Milestone |
| v1.1.0 | Ollama backend, prompt caching, interactive graph viewer, Homebrew tap | Milestone |
| v1.2.0 | ChatGPT + OpenCode adapters, vault-overlay mode, tree-aware search, cache tiers | Milestone |
Deployment targets
- GitHub Pages – shipped in v0.1 via `.github/workflows/pages.yml` (triggers on push to master). See `docs/deploy/github-pages.md`.
- Docker / GHCR – pull and run: `docker compose pull && docker compose up -d`. Image published to `ghcr.io/pratiyush/llm-wiki` on every tag push. See `docs/deploy/docker.md`.
- GitLab Pages – copy `.gitlab-ci.yml.example` → `.gitlab-ci.yml`. See `docs/deploy/gitlab-pages.md`.
- Vercel / Netlify – static deploy after `llmwiki build`. See `docs/deploy/vercel-netlify.md`.
- Any static host – `llmwiki build` writes to `site/`, which you can `rsync`/`scp` anywhere.
Acknowledgements
- Andrej Karpathy for the LLM Wiki idea
- SamurAIGPT/llm-wiki-agent, lucasastorian/llmwiki, xoai/sage-wiki, and bashiraziz/llm-wiki-template – prior art that shaped this.
- Python Markdown for the rendering pipeline, and highlight.js for client-side syntax highlighting.
- llmstxt.org for the llms.txt spec used in v0.4.
License
MIT © Pratiyush