
Lightweight semantic code search engine — hybrid vector + FTS + AST graph + regex fusion + MCP server


codexlens-search

Semantic code search engine with MCP server for Claude Code.

Hybrid search: vector + FTS + AST graph + ripgrep regex — with RRF fusion and reranking.

Quick Start

pip install codexlens-search[all]

Add this to your project's .mcp.json:

{
  "mcpServers": {
    "codexlens": {
      "command": "uvx",
      "args": ["--from", "codexlens-search[all]", "codexlens-mcp"],
      "env": {
        "CODEXLENS_EMBED_API_URL": "https://api.openai.com/v1",
        "CODEXLENS_EMBED_API_KEY": "${OPENAI_API_KEY}",
        "CODEXLENS_EMBED_API_MODEL": "text-embedding-3-small",
        "CODEXLENS_EMBED_DIM": "1536"
      }
    }
  }
}

That's it. Claude Code auto-discovers the tools: run index_project to build the index, then Search to query it.

Install

Choose the install that matches your platform:

# Minimal — CPU inference (fastembed bundles onnxruntime CPU)
pip install codexlens-search

# Windows GPU — DirectML, any DirectX 12 GPU (NVIDIA/AMD/Intel)
pip install codexlens-search[directml]

# Linux/Windows NVIDIA GPU — CUDA (requires CUDA + cuDNN)
pip install codexlens-search[cuda]

# Auto-select — DirectML on Windows, CPU elsewhere
pip install codexlens-search[all]

Platform Recommendations

| Platform | Extra | Command |
|---|---|---|
| Windows + any GPU | [directml] | pip install codexlens-search[directml] |
| Windows CPU only | base | pip install codexlens-search |
| Linux + NVIDIA GPU | [cuda] | pip install codexlens-search[cuda] |
| Linux CPU / AMD GPU | base | pip install codexlens-search |
| macOS (Apple Silicon) | base | pip install codexlens-search |
| Don't know / CI | [all] | pip install codexlens-search[all] |

Note: On Windows, if you install the base package without [directml], the MCP server auto-detects the missing GPU runtime and installs onnxruntime-directml on first launch; GPU acceleration takes effect from the second start.

What's Included

All install variants include:

  • MCP server — codexlens-mcp command
  • AST parsing — tree-sitter symbol extraction + graph search
  • USearch — high-performance HNSW ANN backend (default)
  • FAISS — ANN + binary index backend (Hamming coarse search)
  • File watcher — watchdog auto-indexing
  • Gitignore filtering — recursive .gitignore support
  • Focused search — when no index exists, greps relevant files, indexes only those (~10s), then runs semantic search — no waiting for full index build

ANN Backend Selection

Three backends for approximate nearest neighbor search, auto-selected in order:

| Backend | Install | Best for |
|---|---|---|
| usearch (default) | Included | Cross-platform, fastest CPU HNSW |
| faiss | Included | GPU acceleration, binary Hamming search |
| hnswlib | Included | Lightweight fallback |

Override with CODEXLENS_ANN_BACKEND:

CODEXLENS_ANN_BACKEND=faiss    # use FAISS (GPU when available)
CODEXLENS_ANN_BACKEND=usearch  # use USearch (default)
CODEXLENS_ANN_BACKEND=hnswlib  # use hnswlib
CODEXLENS_ANN_BACKEND=auto     # auto-select (usearch > faiss > hnswlib)
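
For reference, the auto setting can be pictured as an availability probe over the three packages in preference order. A minimal sketch of that idea, assuming plain import checks (illustrative, not the package's actual code):

# Illustrative: pick the first ANN backend whose package imports cleanly,
# mirroring the documented auto order (usearch > faiss > hnswlib).
def pick_ann_backend() -> str:
    for name in ("usearch", "faiss", "hnswlib"):
        try:
            __import__(name)
            return name
        except ImportError:
            continue
    raise RuntimeError("no ANN backend available")

print(pick_ann_backend())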

MCP Tools

Search

Hybrid code search combining semantic vector, FTS, AST graph, and ripgrep regex.

| Mode | Description | Requires |
|---|---|---|
| auto (default) | Semantic + regex in parallel; with no index, focused grep-index-search in ~10s | None |
| symbol | Find definitions by exact/fuzzy name match | Index |
| refs | Find cross-references — incoming and outgoing edges | Index |
| regex | Ripgrep regex on live files | rg |

Parameters: project_path, query, mode, scope (restricts auto/regex to subdirectory)

Results capped by CODEXLENS_TOP_K env var (default 10).
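
For illustration, a raw MCP tools/call request for Search could look like the following. Claude Code constructs this automatically; the argument names come from the parameter list above, and the values here are made up:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "Search",
    "arguments": {
      "project_path": "/path/to/project",
      "query": "auth token validation",
      "mode": "auto",
      "scope": "src/"
    }
  }
}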

Cold Start Search

When no index exists, auto mode uses a focused search pipeline instead of waiting for a full index build:

  1. Expand query — split camelCase/snake_case into search terms
  2. Grep files — rg --count finds the top 50 relevant files, ranked by match count
  3. Index — embed only those 50 files (~8-10s with GPU)
  4. Search — semantic vector search on the fresh index
  5. Background — full index builds asynchronously for next queries

This gives semantic results in ~10s vs ~100s for a full index build.
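
As a concrete illustration of step 1, a minimal camelCase/snake_case splitter might look like this (expand_query is a hypothetical helper, not part of the package's API):

import re

# Split identifiers into lowercase search terms:
# "getUserById" -> ['get', 'user', 'by', 'id']
def expand_query(query: str) -> list[str]:
    terms = []
    for part in re.split(r"[_\s]+", query):
        terms.extend(re.findall(r"[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+|\d+", part))
    return [t.lower() for t in terms if t]

print(expand_query("getUserById"))        # ['get', 'user', 'by', 'id']
print(expand_query("parse_http_request")) # ['parse', 'http', 'request']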

index_project

Build, update, or inspect the search index.

| Action | Description |
|---|---|
| sync (default) | Incremental — only changed files |
| rebuild | Full re-index from scratch |
| status | Index statistics (files, chunks, symbols, refs) |

Parameters: project_path, action, scope

find_files

Glob-based file discovery. Parameters: project_path, pattern (default **/*)

Max results controlled by CODEXLENS_FIND_MAX_RESULTS env var (default 100).

watch_project

Manage the file watcher for automatic re-indexing on file changes.

Parameters: project_path, action (start / stop / status)

AST Features

Enabled by default. Disable with CODEXLENS_AST_CHUNKING=false.

  • Smart chunking — splits at symbol boundaries instead of fixed-size windows
  • Symbol extraction — 12 kinds: function, class, method, module, variable, constant, interface, type_alias, enum, struct, trait, property
  • Cross-references — import, call, inherit, type_ref edges
  • Graph search — seeded from vector/FTS results, BFS expansion with adaptive weights (a generic sketch follows below)

Languages: Python, JavaScript, TypeScript, Go, Java, Rust, C, C++, Ruby, PHP, Scala, Kotlin, Swift, C#, Bash, Lua, Haskell, Elixir, Erlang.
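
As noted in the graph-search bullet above, seeded expansion of this kind is typically a weighted breadth-first traversal out from the vector/FTS hits. A generic sketch of the technique, assuming an adjacency map of symbol IDs with edge weights (not codexlens-search internals):

from collections import deque

# Expand scores outward from seed symbols; each hop multiplies by the
# edge weight and a decay factor, so nearby symbols outrank distant ones.
def expand_from_seeds(graph: dict[str, list[tuple[str, float]]],
                      seeds: dict[str, float],
                      max_hops: int = 2, decay: float = 0.5) -> dict[str, float]:
    scores = dict(seeds)
    frontier = deque((sym, score, 0) for sym, score in seeds.items())
    while frontier:
        sym, score, hops = frontier.popleft()
        if hops >= max_hops:
            continue
        for neighbor, weight in graph.get(sym, []):
            new_score = score * weight * decay
            if new_score > scores.get(neighbor, 0.0):
                scores[neighbor] = new_score
                frontier.append((neighbor, new_score, hops + 1))
    return scores

graph = {"auth.login": [("auth.check_token", 1.0)],
         "auth.check_token": [("jwt.decode", 0.8)]}
print(expand_from_seeds(graph, {"auth.login": 1.0}))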

Configuration Examples

Reranker (best quality)

Add reranker API on top of the Quick Start config:

"CODEXLENS_RERANKER_API_URL": "https://api.jina.ai/v1",
"CODEXLENS_RERANKER_API_KEY": "${JINA_API_KEY}",
"CODEXLENS_RERANKER_API_MODEL": "jina-reranker-v2-base-multilingual"

Multi-Endpoint Load Balancing

"CODEXLENS_EMBED_API_ENDPOINTS": "https://api1.example.com/v1|sk-key1|model,https://api2.example.com/v1|sk-key2|model",
"CODEXLENS_EMBED_DIM": "1536"

Format: url|key|model,url|key|model,... — replaces single-endpoint EMBED_API_URL/KEY/MODEL.
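
To make the decomposition concrete, here is how the string splits into endpoints (parse_endpoints is a hypothetical helper for illustration, not the package's API):

from dataclasses import dataclass

@dataclass
class Endpoint:
    url: str
    key: str
    model: str

# Split the comma-separated list, then each entry on "|".
def parse_endpoints(spec: str) -> list[Endpoint]:
    return [Endpoint(*entry.strip().split("|")) for entry in spec.split(",")]

spec = ("https://api1.example.com/v1|sk-key1|model,"
        "https://api2.example.com/v1|sk-key2|model")
print(parse_endpoints(spec))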

Local Models (Offline)

No API needed — fastembed runs the model locally via ONNX runtime.

# Pre-download models (optional — auto-downloads on first use)
codexlens-search download-models

Then configure the MCP server without any API variables; fastembed serves embeddings locally:

{
  "mcpServers": {
    "codexlens": {
      "command": "codexlens-mcp",
      "env": {
        "CODEXLENS_DEVICE": "directml"
      }
    }
  }
}

Default local model: BAAI/bge-small-en-v1.5 (384d, ~33MB). To use a different model:

{
  "mcpServers": {
    "codexlens": {
      "command": "codexlens-mcp",
      "env": {
        "CODEXLENS_EMBED_MODEL": "BAAI/bge-base-en-v1.5",
        "CODEXLENS_EMBED_DIM": "768",
        "CODEXLENS_DEVICE": "directml"
      }
    }
  }
}

Available Local Models

| Model | Dim | Size | Notes |
|---|---|---|---|
| BAAI/bge-small-en-v1.5 | 384 | ~33MB | Default, fastest |
| BAAI/bge-base-en-v1.5 | 768 | ~130MB | Better quality |
| BAAI/bge-large-en-v1.5 | 1024 | ~335MB | Best English quality |
| BAAI/bge-small-zh-v1.5 | 512 | ~46MB | Chinese, fast |
| BAAI/bge-large-zh-v1.5 | 1024 | ~335MB | Chinese, best quality |
| sentence-transformers/all-MiniLM-L6-v2 | 384 | ~23MB | Lightweight general-purpose |

CODEXLENS_EMBED_DIM must match the model's output dimension; a mismatched value causes indexing errors.
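
To check a model's dimension before setting the variable, you can embed a sample string with fastembed and inspect the vector length (a standalone snippet, not a codexlens-search command):

from fastembed import TextEmbedding

# Embed one string and report the vector length; use this value
# for CODEXLENS_EMBED_DIM.
model = TextEmbedding("BAAI/bge-base-en-v1.5")
vector = next(model.embed(["dimension check"]))
print(len(vector))  # 768 for bge-base-en-v1.5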

China Mirror

"CODEXLENS_HF_MIRROR": "https://hf-mirror.com"

Custom Model Cache

"CODEXLENS_MODEL_CACHE_DIR": "/path/to/cache"

GPU

Windows: pip install codexlens-search[directml] — works with any DirectX 12 GPU (NVIDIA/AMD/Intel). No CUDA needed. Even without [directml], the server auto-installs it on first launch.

Linux: pip install codexlens-search[cuda] adds CUDA support (requires CUDA + cuDNN).

Auto-detection priority: CUDA > DirectML > CPU

  • Embedding — ONNX runtime selects best available GPU provider, ~12x faster than CPU
  • FAISS — index auto-transfers to GPU 0 (CUDA only)

Force a specific device with CODEXLENS_DEVICE=directml / cuda / cpu.
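
The override also works inline with the CLI (commands described in the next section), for example to force CPU inference for a single run:

CODEXLENS_DEVICE=cpu codexlens-search --db-path .codexlens search -q "auth handler" -k 10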

CLI

codexlens-search --db-path .codexlens sync --root ./src
codexlens-search --db-path .codexlens search -q "auth handler" -k 10
codexlens-search --db-path .codexlens status
codexlens-search list-models
codexlens-search download-models

Environment Variables

Local Model

| Variable | Default | Description |
|---|---|---|
| CODEXLENS_EMBED_MODEL | BAAI/bge-small-en-v1.5 | Local fastembed model name |
| CODEXLENS_EMBED_DIM | 384 | Vector dimension (must match model) |
| CODEXLENS_MODEL_CACHE_DIR | fastembed default | Model download cache directory |
| CODEXLENS_HF_MIRROR | (unset) | HuggingFace mirror (e.g. https://hf-mirror.com) |

Embedding API (overrides local model)

| Variable | Description |
|---|---|
| CODEXLENS_EMBED_API_URL | API base URL (e.g. https://api.openai.com/v1) |
| CODEXLENS_EMBED_API_KEY | API key |
| CODEXLENS_EMBED_API_MODEL | Model name (e.g. text-embedding-3-small) |
| CODEXLENS_EMBED_API_ENDPOINTS | Multi-endpoint list: url\|key\|model,... |

Reranker

| Variable | Description |
|---|---|
| CODEXLENS_RERANKER_API_URL | Reranker API base URL |
| CODEXLENS_RERANKER_API_KEY | API key |
| CODEXLENS_RERANKER_API_MODEL | Model name |

Features

| Variable | Default | Description |
|---|---|---|
| CODEXLENS_AST_CHUNKING | true | AST chunking + symbol extraction |
| CODEXLENS_GITIGNORE_FILTERING | true | Recursive .gitignore filtering |
| CODEXLENS_DEVICE | auto | auto / cuda / directml / cpu |
| CODEXLENS_AUTO_WATCH | false | Auto-start file watcher after indexing |

MCP Tool Defaults

| Variable | Default | Description |
|---|---|---|
| CODEXLENS_TOP_K | 10 | Search result limit |
| CODEXLENS_FIND_MAX_RESULTS | 100 | find_files result limit |

Tuning

| Variable | Default | Description |
|---|---|---|
| CODEXLENS_BINARY_TOP_K | 200 | Binary coarse search candidates |
| CODEXLENS_ANN_TOP_K | 50 | ANN fine search candidates |
| CODEXLENS_FTS_TOP_K | 50 | FTS results per method |
| CODEXLENS_FUSION_K | 60 | RRF fusion k parameter |
| CODEXLENS_RERANKER_TOP_K | 20 | Results to rerank |
| CODEXLENS_EMBED_BATCH_SIZE | 32 | Texts per API batch |
| CODEXLENS_EMBED_MAX_TOKENS | 8192 | Max tokens per text (0 = no limit) |
| CODEXLENS_INDEX_WORKERS | 2 | Parallel indexing workers |
| CODEXLENS_MAX_FILE_SIZE | 1000000 | Max file size in bytes |

Architecture

Query -> [Embedder] -> query vector
          |-> [FAISS Binary] -> candidates (Hamming)
          |     +-> [USearch/FAISS HNSW] -> ranked IDs (cosine)
          |-> [FTS exact + fuzzy] -> text matches
          |-> [GraphSearcher] -> symbol neighbors (seeded from vector/FTS)
          +-> [ripgrep] -> regex matches
               +-> [RRF Fusion] -> merged ranking
                     +-> [Reranker] -> final top-k
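
RRF (reciprocal rank fusion) scores each document as the sum of 1/(k + rank) over every ranking it appears in, with k set by CODEXLENS_FUSION_K (default 60). A minimal generic sketch of the algorithm, not the project's implementation:

from collections import defaultdict

# Fuse several ranked lists of doc IDs (best first): a document's score
# sums 1/(k + rank) across lists; k=60 mirrors CODEXLENS_FUSION_K.
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: defaultdict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["a.py:10", "b.py:3", "c.py:7"]
fts_hits = ["b.py:3", "d.py:1"]
regex_hits = ["c.py:7", "a.py:10"]
print(rrf_fuse([vector_hits, fts_hits, regex_hits]))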

Development

git clone https://github.com/catlog22/codexlens-search.git
cd codexlens-search
pip install -e ".[dev,all]"
pytest

License

MIT
