# codexlens-search

Lightweight semantic code search engine with an MCP server for Claude Code.

Hybrid search: vector + FTS + AST graph + ripgrep regex — with RRF fusion and reranking.
## Quick Start

```bash
pip install codexlens-search[all]
```

Add to your project's `.mcp.json`:
```json
{
  "mcpServers": {
    "codexlens": {
      "command": "uvx",
      "args": ["--from", "codexlens-search[all]", "codexlens-mcp"],
      "env": {
        "CODEXLENS_EMBED_API_URL": "https://api.openai.com/v1",
        "CODEXLENS_EMBED_API_KEY": "${OPENAI_API_KEY}",
        "CODEXLENS_EMBED_API_MODEL": "text-embedding-3-small",
        "CODEXLENS_EMBED_DIM": "1536"
      }
    }
  }
}
```
That's it. Claude Code will auto-discover the tools: `index_project` -> `Search`.
## Install

Choose the install that matches your platform:

```bash
# Minimal — CPU inference (fastembed bundles onnxruntime CPU)
pip install codexlens-search

# Windows GPU — DirectML, any DirectX 12 GPU (NVIDIA/AMD/Intel)
pip install codexlens-search[directml]

# Linux/Windows NVIDIA GPU — CUDA (requires CUDA + cuDNN)
pip install codexlens-search[cuda]

# Auto-select — DirectML on Windows, CPU elsewhere
pip install codexlens-search[all]
```
### Platform Recommendations

| Platform | Recommended | Command |
|---|---|---|
| Windows + any GPU | `[directml]` | `pip install codexlens-search[directml]` |
| Windows CPU only | base | `pip install codexlens-search` |
| Linux + NVIDIA GPU | `[cuda]` | `pip install codexlens-search[cuda]` |
| Linux CPU / AMD GPU | base | `pip install codexlens-search` |
| macOS (Apple Silicon) | base | `pip install codexlens-search` |
| Don't know / CI | `[all]` | `pip install codexlens-search[all]` |
Note: On Windows, if you install the base package without `[directml]`, the MCP server will auto-detect the missing GPU runtime and install `onnxruntime-directml` on first launch. GPU takes effect from the second start.
## What's Included

All install variants include:

- MCP server — `codexlens-mcp` command
- AST parsing — tree-sitter symbol extraction + graph search
- USearch — high-performance HNSW ANN backend (default)
- FAISS — ANN + binary index backend (Hamming coarse search)
- File watcher — watchdog auto-indexing
- Gitignore filtering — recursive `.gitignore` support
- Focused search — when no index exists, greps relevant files, indexes only those (~10s), then runs semantic search — no waiting for a full index build
## ANN Backend Selection

Three backends for approximate nearest neighbor search, auto-selected in order:

| Backend | Install | Best for |
|---|---|---|
| `usearch` (default) | Included | Cross-platform, fastest CPU HNSW |
| `faiss` | Included | GPU acceleration, binary Hamming search |
| `hnswlib` | Included | Lightweight fallback |

Override with `CODEXLENS_ANN_BACKEND`:

```bash
CODEXLENS_ANN_BACKEND=faiss    # use FAISS (GPU when available)
CODEXLENS_ANN_BACKEND=usearch  # use USearch (default)
CODEXLENS_ANN_BACKEND=hnswlib  # use hnswlib
CODEXLENS_ANN_BACKEND=auto     # auto-select (usearch > faiss > hnswlib)
```
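The auto-selection order above boils down to a simple fallback chain. Here is a minimal sketch of that logic; the function name, signature, and error messages are illustrative, not the library's actual API:

```python
def select_ann_backend(available: set[str], preferred: str = "auto") -> str:
    """Pick an ANN backend: explicit override wins, else usearch > faiss > hnswlib."""
    if preferred != "auto":
        if preferred not in available:
            raise RuntimeError(f"backend {preferred!r} is not installed")
        return preferred
    for name in ("usearch", "faiss", "hnswlib"):
        if name in available:
            return name
    raise RuntimeError("no ANN backend installed")

print(select_ann_backend({"faiss", "hnswlib"}))          # auto -> faiss
print(select_ann_backend({"faiss"}, preferred="faiss"))  # explicit override
```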
## MCP Tools

### Search

Hybrid code search combining semantic vector, FTS, AST graph, and ripgrep regex.

| Mode | Description | Requires |
|---|---|---|
| `auto` (default) | Semantic + regex in parallel. No index? Focused grep-index-search in ~10s. | |
| `symbol` | Find definitions by exact/fuzzy name match | Index |
| `refs` | Find cross-references — incoming and outgoing edges | Index |
| `regex` | Ripgrep regex on live files | `rg` |

Parameters: `project_path`, `query`, `mode`, `scope` (restricts `auto`/`regex` to a subdirectory).

Results are capped by the `CODEXLENS_TOP_K` env var (default 10).
### Cold Start Search

When no index exists, `auto` mode uses a focused search pipeline instead of waiting for a full index build:

1. Expand query — split camelCase/snake_case into search terms
2. Grep files — `rg --count` finds the top 50 relevant files, ranked by match count
3. Index — embed only those 50 files (~8-10s with GPU)
4. Search — semantic vector search on the fresh index
5. Background — full index builds asynchronously for subsequent queries

This gives semantic results in ~10s vs ~100s for a full index build.
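Step 1 (query expansion) can be sketched with a stdlib regex; `expand_query` is a hypothetical name for illustration, not codexlens's actual function:

```python
import re

def expand_query(query: str) -> list[str]:
    """Split camelCase/PascalCase/snake_case identifiers into lowercase terms."""
    terms = []
    for token in re.split(r"[\s_\-]+", query):
        # split at case boundaries: getUserAuthToken -> get, User, Auth, Token
        parts = re.findall(r"[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+|\d+", token)
        terms.extend(p.lower() for p in parts)
    return [t for t in terms if t]

print(expand_query("getUserAuthToken"))  # ['get', 'user', 'auth', 'token']
print(expand_query("HTTPServer"))        # ['http', 'server']
```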
### index_project

Build, update, or inspect the search index.

| Action | Description |
|---|---|
| `sync` (default) | Incremental — only changed files |
| `rebuild` | Full re-index from scratch |
| `status` | Index statistics (files, chunks, symbols, refs) |

Parameters: `project_path`, `action`, `scope`
### find_files

Glob-based file discovery. Parameters: `project_path`, `pattern` (default `**/*`).

Max results are controlled by the `CODEXLENS_FIND_MAX_RESULTS` env var (default 100).
### watch_project

Manage the file watcher for automatic re-indexing on file changes.

Parameters: `project_path`, `action` (`start` / `stop` / `status`)
## AST Features

Enabled by default. Disable with `CODEXLENS_AST_CHUNKING=false`.

- Smart chunking — splits at symbol boundaries instead of fixed-size windows
- Symbol extraction — 12 kinds: function, class, method, module, variable, constant, interface, type_alias, enum, struct, trait, property
- Cross-references — import, call, inherit, type_ref edges
- Graph search — seeded from vector/FTS results, BFS expansion with adaptive weights

Languages: Python, JavaScript, TypeScript, Go, Java, Rust, C, C++, Ruby, PHP, Scala, Kotlin, Swift, C#, Bash, Lua, Haskell, Elixir, Erlang.
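The idea behind symbol-boundary chunking can be shown with Python's stdlib `ast` module. This is only an illustration of the technique; codexlens itself uses tree-sitter to do this across all the languages listed above:

```python
import ast
import textwrap

def chunk_by_symbols(source: str) -> list[tuple[str, str]]:
    """One chunk per top-level function/class, instead of fixed-size windows."""
    tree = ast.parse(source)
    lines = source.splitlines()
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # slice the exact source lines the symbol spans
            body = "\n".join(lines[node.lineno - 1 : node.end_lineno])
            chunks.append((node.name, body))
    return chunks

src = textwrap.dedent("""
    def add(a, b):
        return a + b

    class Greeter:
        def hello(self):
            return "hi"
""")
for name, chunk in chunk_by_symbols(src):
    print(name)
```

Each chunk now starts and ends at a symbol boundary, so an embedding never straddles two unrelated functions.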
## Configuration Examples

### Reranker (best quality)

Add a reranker API on top of the Quick Start config:

```json
"CODEXLENS_RERANKER_API_URL": "https://api.jina.ai/v1",
"CODEXLENS_RERANKER_API_KEY": "${JINA_API_KEY}",
"CODEXLENS_RERANKER_API_MODEL": "jina-reranker-v2-base-multilingual"
```
### Multi-Endpoint Load Balancing

```json
"CODEXLENS_EMBED_API_ENDPOINTS": "https://api1.example.com/v1|sk-key1|model,https://api2.example.com/v1|sk-key2|model",
"CODEXLENS_EMBED_DIM": "1536"
```

Format: `url|key|model,url|key|model,...` — replaces the single-endpoint `EMBED_API_URL`/`KEY`/`MODEL` variables.
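Parsing that format is straightforward. A minimal sketch (the `parse_endpoints` name is hypothetical, not part of the package):

```python
def parse_endpoints(spec: str) -> list[dict[str, str]]:
    """Parse 'url|key|model,url|key|model,...' into endpoint records."""
    endpoints = []
    for entry in spec.split(","):
        url, key, model = entry.strip().split("|")
        endpoints.append({"url": url, "key": key, "model": model})
    return endpoints

eps = parse_endpoints(
    "https://api1.example.com/v1|sk-key1|text-embedding-3-small,"
    "https://api2.example.com/v1|sk-key2|text-embedding-3-small"
)
# a simple round-robin balancer would then use eps[i % len(eps)] for batch i
print(len(eps))  # 2
```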
### Local Models (Offline)

No API needed — fastembed runs the model locally via ONNX Runtime.

```bash
# Pre-download models (optional — auto-downloads on first use)
codexlens-search download-models
```

```json
{
  "mcpServers": {
    "codexlens": {
      "command": "codexlens-mcp",
      "env": {
        "CODEXLENS_DEVICE": "directml"
      }
    }
  }
}
```

Default local model: `BAAI/bge-small-en-v1.5` (384d, ~33MB). To use a different model:

```json
{
  "mcpServers": {
    "codexlens": {
      "command": "codexlens-mcp",
      "env": {
        "CODEXLENS_EMBED_MODEL": "BAAI/bge-base-en-v1.5",
        "CODEXLENS_EMBED_DIM": "768",
        "CODEXLENS_DEVICE": "directml"
      }
    }
  }
}
```
### Available Local Models

| Model | Dim | Size | Notes |
|---|---|---|---|
| `BAAI/bge-small-en-v1.5` | 384 | ~33MB | Default, fastest |
| `BAAI/bge-base-en-v1.5` | 768 | ~130MB | Better quality |
| `BAAI/bge-large-en-v1.5` | 1024 | ~335MB | Best English quality |
| `BAAI/bge-small-zh-v1.5` | 512 | ~46MB | Chinese, fast |
| `BAAI/bge-large-zh-v1.5` | 1024 | ~335MB | Chinese, best quality |
| `sentence-transformers/all-MiniLM-L6-v2` | 384 | ~23MB | Lightweight general |

`CODEXLENS_EMBED_DIM` must match the model's output dimension; a mismatched dim will cause indexing errors.
### China Mirror

```json
"CODEXLENS_HF_MIRROR": "https://hf-mirror.com"
```

### Custom Model Cache

```json
"CODEXLENS_MODEL_CACHE_DIR": "/path/to/cache"
```
## GPU

Windows: `pip install codexlens-search[directml]` — works with any DirectX 12 GPU (NVIDIA/AMD/Intel). No CUDA needed. Even without `[directml]`, the server auto-installs it on first launch.

Linux: `pip install codexlens-search[cuda]` adds CUDA support (requires CUDA + cuDNN).

Auto-detection priority: CUDA > DirectML > CPU

- Embedding — ONNX Runtime selects the best available GPU provider, ~12x faster than CPU
- FAISS — the index auto-transfers to GPU 0 (CUDA only)

Force a specific device: `CODEXLENS_DEVICE=directml` / `cuda` / `cpu`
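The CUDA > DirectML > CPU priority maps onto ONNX Runtime execution-provider names roughly like this. This is a sketch of the idea, not codexlens's actual detection code:

```python
def pick_device(providers: list[str]) -> str:
    """Map available ONNX Runtime execution providers to a device string."""
    if "CUDAExecutionProvider" in providers:
        return "cuda"
    if "DmlExecutionProvider" in providers:  # DirectML
        return "directml"
    return "cpu"

# in practice the list would come from onnxruntime.get_available_providers()
print(pick_device(["DmlExecutionProvider", "CPUExecutionProvider"]))  # directml
```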
## CLI

```bash
codexlens-search --db-path .codexlens sync --root ./src
codexlens-search --db-path .codexlens search -q "auth handler" -k 10
codexlens-search --db-path .codexlens status
codexlens-search list-models
codexlens-search download-models
```
## Environment Variables

### Local Model

| Variable | Default | Description |
|---|---|---|
| `CODEXLENS_EMBED_MODEL` | `BAAI/bge-small-en-v1.5` | Local fastembed model name |
| `CODEXLENS_EMBED_DIM` | `384` | Vector dimension (must match model) |
| `CODEXLENS_MODEL_CACHE_DIR` | fastembed default | Model download cache directory |
| `CODEXLENS_HF_MIRROR` | | HuggingFace mirror (e.g. `https://hf-mirror.com`) |
### Embedding API (overrides local model)

| Variable | Description |
|---|---|
| `CODEXLENS_EMBED_API_URL` | API base URL (e.g. `https://api.openai.com/v1`) |
| `CODEXLENS_EMBED_API_KEY` | API key |
| `CODEXLENS_EMBED_API_MODEL` | Model name (e.g. `text-embedding-3-small`) |
| `CODEXLENS_EMBED_API_ENDPOINTS` | Multi-endpoint: `url\|key\|model,...` |
### Reranker

| Variable | Description |
|---|---|
| `CODEXLENS_RERANKER_API_URL` | Reranker API base URL |
| `CODEXLENS_RERANKER_API_KEY` | API key |
| `CODEXLENS_RERANKER_API_MODEL` | Model name |
### Features

| Variable | Default | Description |
|---|---|---|
| `CODEXLENS_AST_CHUNKING` | `true` | AST chunking + symbol extraction |
| `CODEXLENS_GITIGNORE_FILTERING` | `true` | Recursive `.gitignore` filtering |
| `CODEXLENS_DEVICE` | `auto` | `auto` / `cuda` / `directml` / `cpu` |
| `CODEXLENS_AUTO_WATCH` | `false` | Auto-start file watcher after indexing |
### MCP Tool Defaults

| Variable | Default | Description |
|---|---|---|
| `CODEXLENS_TOP_K` | `10` | Search result limit |
| `CODEXLENS_FIND_MAX_RESULTS` | `100` | `find_files` result limit |
### Tuning

| Variable | Default | Description |
|---|---|---|
| `CODEXLENS_BINARY_TOP_K` | `200` | Binary coarse search candidates |
| `CODEXLENS_ANN_TOP_K` | `50` | ANN fine search candidates |
| `CODEXLENS_FTS_TOP_K` | `50` | FTS results per method |
| `CODEXLENS_FUSION_K` | `60` | RRF fusion k parameter |
| `CODEXLENS_RERANKER_TOP_K` | `20` | Results to rerank |
| `CODEXLENS_EMBED_BATCH_SIZE` | `32` | Texts per API batch |
| `CODEXLENS_EMBED_MAX_TOKENS` | `8192` | Max tokens per text (0 = no limit) |
| `CODEXLENS_INDEX_WORKERS` | `2` | Parallel indexing workers |
| `CODEXLENS_MAX_FILE_SIZE` | `1000000` | Max file size in bytes |
## Architecture

```text
Query -> [Embedder] -> query vector
           |-> [FAISS Binary] -> candidates (Hamming)
           |     +-> [USearch/FAISS HNSW] -> ranked IDs (cosine)
           |-> [FTS exact + fuzzy] -> text matches
           |-> [GraphSearcher] -> symbol neighbors (seeded from vector/FTS)
           +-> [ripgrep] -> regex matches
                 +-> [RRF Fusion] -> merged ranking
                       +-> [Reranker] -> final top-k
```
## Development

```bash
git clone https://github.com/catlog22/codexlens-search.git
cd codexlens-search
pip install -e ".[dev,all]"
pytest
```

## License

MIT