obsidian-notes-rag

MCP server and CLI for semantic search over your Obsidian vault. Generates embeddings with OpenAI, Ollama, or LM Studio, and stores vectors locally in sqlite-vec (a ~200 KB SQLite extension; no telemetry, no network calls).

What it does

Search your notes by meaning, not just keywords:

obsidian-rag search "project architecture decisions" -n 5
obsidian-rag similar "Projects/Platform Hub.md"
obsidian-rag context "Daily Notes/2026-02-14.md"

As an MCP server, it gives any compatible AI assistant the same capabilities — searching your notes, finding related content, and pulling context during conversations.

Requirements

  • Python 3.11+
  • uv (for running and installing)
  • One of: OPENAI_API_KEY, Ollama, or LM Studio for embeddings

Setup

1. Run the setup wizard

uvx obsidian-notes-rag setup

This creates a config at ~/.config/obsidian-notes-rag/config.toml with your vault path, embedding provider, and API key.
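The resulting file might look something like this (key names here are illustrative; run the wizard to get the authoritative schema for your version):

```toml
# ~/.config/obsidian-notes-rag/config.toml (illustrative example --
# the setup wizard writes the real schema)
vault = "/Users/you/Obsidian/Vault"
provider = "openai"    # or "ollama" / "lmstudio"
```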

2. Build the index

uvx obsidian-notes-rag index

Parses your markdown files, chunks them by heading structure (using Chonkie RecursiveChunker), generates embeddings, and stores everything in a local SQLite database.
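The heading-aware chunking step can be sketched roughly as below. This is a simplified illustration of the idea, not the tool's actual Chonkie-based implementation; the function name and fallback logic are hypothetical:

```python
# Simplified sketch of heading-first markdown chunking.
# The real tool uses Chonkie's RecursiveChunker; this only
# illustrates splitting on heading boundaries before falling
# back to smaller units for oversized sections.
import re

def chunk_by_headings(markdown: str, max_chars: int = 1500) -> list[str]:
    # Split before each ATX heading, keeping the heading with its section.
    sections = re.split(r"(?m)^(?=#{1,6} )", markdown)
    chunks = []
    for section in sections:
        section = section.strip()
        if not section:
            continue
        if len(section) <= max_chars:
            chunks.append(section)
        else:
            # Oversized section: fall back to paragraph-level splits.
            for para in section.split("\n\n"):
                if para.strip():
                    chunks.append(para.strip())
    return chunks

doc = "# Notes\nIntro text.\n\n## Ideas\nFirst idea.\n\n## Tasks\n- one\n- two\n"
for chunk in chunk_by_headings(doc):
    print(repr(chunk))
```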

3. Connect to an MCP client

Works with any MCP-compatible client. Examples:

Claude Code:

claude mcp add -s user obsidian-notes-rag -- uvx obsidian-notes-rag serve

Claude Desktop, Cursor, Windsurf, etc. (JSON config):

Add to your client's MCP config file (e.g. ~/Library/Application Support/Claude/claude_desktop_config.json for Claude Desktop on macOS):

{
  "mcpServers": {
    "obsidian-notes-rag": {
      "command": "uvx",
      "args": ["obsidian-notes-rag", "serve"]
    }
  }
}

4. Install the CLI (optional)

If you want obsidian-rag available as a standalone command:

uv tool install obsidian-notes-rag

This installs both obsidian-rag and obsidian-notes-rag to ~/.local/bin/.

Using the CLI with AI coding assistants

Instead of running the MCP server, you can have your AI assistant call the CLI directly via shell commands. This avoids loading MCP tool definitions into the context window, freeing up tokens for your actual work.

To do this, create a rule or skill that tells your assistant when and how to use the CLI:

  • Claude Code: Create a skill with CLI usage instructions
  • Cursor: Add a rule to .cursor/rules/
  • Windsurf: Add a rule to .windsurfrules

The rule should describe when to use each command (search, similar, context) and any project-specific conventions. This gives the assistant enough context to run the right CLI commands without the overhead of an MCP connection.
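A minimal rule file might read like this (wording is illustrative; adapt the commands and conventions to your own vault):

```markdown
# Searching my Obsidian vault (obsidian-rag CLI)

When you need information from my notes, shell out to the CLI
instead of guessing:

- `obsidian-rag search "<query>" -n 5` -- semantic search over the vault
- `obsidian-rag similar "<path to note>.md"` -- related notes
- `obsidian-rag context "<path to note>.md"` -- a note plus surrounding context

Quote paths that contain spaces. Prefer `search` first, then run
`context` on the most relevant hit.
```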

CLI Reference

# Search
obsidian-rag search "query"                  # semantic search
obsidian-rag search "standup" --type daily   # filter by note type
obsidian-rag search "design" -n 10           # more results

# Explore
obsidian-rag similar "Path/To/Note.md"       # find related notes
obsidian-rag context "Path/To/Note.md"       # show note + related context

# Index
obsidian-rag index                            # re-index vault
obsidian-rag index --clear                    # rebuild from scratch
obsidian-rag index --path-filter "Daily Notes/"  # index subset

# Info
obsidian-rag stats                            # show index size

# Services
obsidian-rag serve                            # start MCP server
obsidian-rag watch                            # watch for changes, auto-reindex
obsidian-rag install-service                  # macOS launchd auto-start
obsidian-rag uninstall-service                # remove service
obsidian-rag service-status                   # check service status

MCP Tools

Once connected, your AI assistant has access to:

Tool               What it does
search_notes       Find notes matching a query
get_similar        Find notes similar to a given note
get_note_context   Get a note with related context
get_stats          Show index statistics
reindex            Rebuild the index

Keeping the Index Fresh

Manual: obsidian-rag index

Auto-reindex on file changes: obsidian-rag watch (run it in a terminal or in the background)

macOS background service: obsidian-rag install-service (starts on login, appears in System Settings > Login Items)

Using Ollama (local, no API key)

ollama pull nomic-embed-text
obsidian-rag --provider ollama index

Using LM Studio (local, no API key)

Load an embedding model in LM Studio, then:

obsidian-rag --provider lmstudio index

Configuration

The setup wizard writes to ~/.config/obsidian-notes-rag/config.toml. You can also override with environment variables:

Variable                    Description
OPENAI_API_KEY              OpenAI API key
OBSIDIAN_RAG_PROVIDER       Embedding provider: openai (default), ollama, or lmstudio
OBSIDIAN_RAG_VAULT          Path to the Obsidian vault
OBSIDIAN_RAG_DATA           Index storage path (default: platform-specific)
OBSIDIAN_RAG_OLLAMA_URL     Ollama URL (default: http://localhost:11434)
OBSIDIAN_RAG_LMSTUDIO_URL   LM Studio URL (default: http://localhost:1234)
OBSIDIAN_RAG_MODEL          Override the embedding model
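The precedence (environment variables override config.toml values) can be sketched as follows. The helper is hypothetical and is not the tool's actual configuration code:

```python
# Sketch of env-over-config precedence -- illustrative only,
# not the tool's actual configuration logic.
import os

def resolve(key: str, env_var: str, config: dict, default=None):
    # An environment variable wins; otherwise fall back to the
    # value parsed from config.toml, then to the built-in default.
    return os.environ.get(env_var) or config.get(key, default)

config = {"provider": "openai"}  # pretend this was parsed from config.toml
os.environ["OBSIDIAN_RAG_PROVIDER"] = "ollama"
print(resolve("provider", "OBSIDIAN_RAG_PROVIDER", config))
```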

How it works

  1. Parses markdown files, strips YAML frontmatter
  2. Chunks content using Chonkie's RecursiveChunker (splits by headings > paragraphs > lines > sentences, max 1500 tokens per chunk)
  3. Generates embeddings via your chosen provider
  4. Stores metadata in SQLite, vectors in sqlite-vec (KNN search via vec0 virtual tables)
  5. MCP server and CLI both query the same local database
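Conceptually, the nearest-neighbor lookup in step 4 behaves like the brute-force sketch below. sqlite-vec's vec0 virtual tables do this efficiently inside SQLite; this pure-Python stand-in only shows the underlying similarity math, with made-up example vectors:

```python
# Brute-force cosine-similarity KNN -- a conceptual stand-in for
# the indexed search that sqlite-vec's vec0 virtual tables perform.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def knn(query: list[float], rows: dict[str, list[float]], k: int = 3) -> list[str]:
    # Rank every stored chunk embedding by similarity to the query.
    scored = sorted(rows.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [path for path, _ in scored[:k]]

# Toy 3-dimensional "embeddings"; real ones have hundreds of dimensions.
rows = {
    "Projects/Platform Hub.md": [0.9, 0.1, 0.0],
    "Daily Notes/2026-02-14.md": [0.1, 0.9, 0.0],
    "Ideas/Search.md": [0.8, 0.2, 0.1],
}
print(knn([1.0, 0.0, 0.0], rows, k=2))
```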

Upgrading

If you installed the CLI with uv tool install, upgrade with:

uv tool upgrade obsidian-notes-rag

If you use uvx to run commands or the MCP server, it automatically uses the latest version.

Upgrading to v1.0.0

v1.0.0 replaces ChromaDB with sqlite-vec. After upgrading, rebuild your index:

obsidian-rag index --clear

The old ChromaDB data at ~/.local/share/obsidian-notes-rag/ (or your configured path) can be deleted.

Contributing

See CONTRIBUTING.md for development setup.

Support

Buy Me A Coffee

License

MIT
