
Queryable concept map of a codebase for LLM coding agents


combfind

When an AI coding agent gets a ticket like "users get logged out randomly on mobile," it has two failure modes: it reads too many files, burning tokens and time, or it finds a relevant file and patches it locally, missing that the bug actually lives in shared code, an interface, or a sibling implementation.

combfind fixes this. It builds a concept map of a codebase so an agent can query "session token refresh" and get back ranked symbols with files and line ranges. The key is what it tells you about structure: is this an interface, an implementation, or one of several siblings that all need to change together? That context is what prevents a local patch to the wrong layer. In practice it cuts orientation-phase token cost by 50-66% (measured on one dev loop; your mileage will vary): the agent reads 3-5 targeted files instead of scanning dozens.

Runs entirely locally. Doesn't require paid APIs.

Install

# Local LLM (llama.cpp)
pip install "combfind[llm]" \
  --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu

# Apple Silicon (MLX)
pip install "combfind[mlx]"

# Remote OpenAI-compatible API
pip install "combfind[openai]"

# Gleam support
pip install "combfind[gleam]"

Download the default local model (~2.5 GB):

combfind download-model

Quick start

# Build the index
combfind init /path/to/repo --db repo.db

# Query it
combfind query "how does authentication work" --db repo.db

# Inspect a symbol from the results
combfind inspect auth.service.AuthService --db repo.db

Usage

init: build the index

# Basic
combfind init /path/to/repo --db repo.db

# Exclude test files (recommended for cleaner concepts)
combfind init /path/to/repo --db repo.db --exclude-regex '.*test.*'

# OpenAI-compatible API
COMBFIND_LLM_API_KEY=sk-... COMBFIND_LLM_MODEL=gpt-4o-mini \
  combfind init /path/to/repo --db repo.db --llm-mode openai

# Apple Silicon MLX
combfind init /path/to/repo --db repo.db --llm-mode mlx \
  --llm-model mlx-community/Qwen2.5-7B-Instruct-4bit
Flag             Default                   Description
--db             <repo_path>/.combfind.db  Output database path
--llm-mode       local                     LLM backend: local, openai, or mlx
--llm-model      auto-detected             GGUF path (local) or HF repo ID (mlx)
--exclude-paths  (none)                    Paths to skip, relative to repo root (repeatable)
--exclude-regex  (none)                    Regex matched against file paths to skip
--llm-workers    1                         Parallel LLM calls (useful with --llm-mode openai)
--docgen         off                       Generate docstrings for undocumented symbols (slow)
--force          off                       Re-run all stages, ignoring the cache

query: search the index

combfind query "users get logged out randomly" --db repo.db
combfind query "where are database migrations" --db repo.db --format json

Text output:

[1] Token Refresh (implementation) - 0.87
    why: Handles session token validation and refresh logic.
    auth/service.py
      auth.service.AuthService.refresh  :42-67
      auth.service.AuthService.validate  :70-91

JSON output:

[
  {
    "rank": 1,
    "concept": "Token Refresh",
    "role": "implementation",
    "score": 0.87,
    "files": [
      {
        "path": "auth/service.py",
        "symbols": [
          {"name": "refresh", "qualified_name": "auth.service.AuthService.refresh", "start_line": 42, "end_line": 67},
          {"name": "validate", "qualified_name": "auth.service.AuthService.validate", "start_line": 70, "end_line": 91}
        ]
      }
    ],
    "why_relevant": "Handles session token validation and refresh logic.",
    "sibling_implementations": []
  }
]
Flag             Default       Description
--db             .combfind.db  Database to query
--top-k          5             Number of results
--format         text          text or json
--rerank         off           Re-score results with LLM (requires --llm-mode)
--agentic        off           Iterative query loop: LLM steers follow-up searches until satisfied (requires --llm-mode)
--agentic-limit  3             Max iterations for --agentic
--llm-mode       (none)        LLM backend for --rerank / --agentic: local, openai, or mlx
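
An agent would typically consume the `--format json` output programmatically. A minimal sketch of that orientation pass, parsing a payload in the documented schema (here inlined as a string rather than read from the CLI) and collecting the code regions to read:

```python
import json

# Example payload in the documented `--format json` schema.
results = json.loads("""
[{"rank": 1, "concept": "Token Refresh", "role": "implementation",
  "score": 0.87,
  "files": [{"path": "auth/service.py",
             "symbols": [{"name": "refresh",
                          "qualified_name": "auth.service.AuthService.refresh",
                          "start_line": 42, "end_line": 67}]}],
  "why_relevant": "Handles session token validation and refresh logic.",
  "sibling_implementations": []}]
""")

# The agent's orientation pass: collect (path, start, end) regions to read.
regions = [
    (f["path"], s["start_line"], s["end_line"])
    for r in results for f in r["files"] for s in f["symbols"]
]
print(regions)  # [('auth/service.py', 42, 67)]
```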

inspect: look up a symbol

combfind inspect auth.service.AuthService --db repo.db
combfind inspect auth.service.AuthService auth.service.TokenService --db repo.db --format json

Output:

auth.service.AuthService  (class, auth/service.py:10-80)
concept:  Token Refresh  [implementation]
sig:      class AuthService

callers (1):
  auth.mock.MockAuthService  auth/mock.py:5

callees (1):
  auth.service.AuthService.validate  auth/service.py:20

concept siblings (1):
  auth.service.AuthService.validate  [method]  auth/service.py
Flag      Default       Description
--db      .combfind.db  Database to query
--format  text          text or json

How it works

The init pipeline runs six stages, each reading from and writing to a single SQLite database:

  1. parse: tree-sitter extracts files and symbols (signatures, line ranges, docstrings, imports)
  2. index: SCIP or tree-sitter heuristics populate a references table of calls, imports, and inheritance edges
  3. embed: sentence-transformers produces a vector per symbol
  4. cluster: symbols are grouped by package/directory, then sub-clustered with KMeans (~20 symbols per concept)
  5. label: a local LLM names and describes each cluster and assigns a structural role (interface | implementation | orchestrator | entry_point | domain_model | infrastructure | cross_cutting)
  6. embed concepts: sentence-transformers produces a vector per concept description
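
Stage 4 can be pictured with a few lines of scikit-learn. This is a sketch only, with random vectors standing in for the real sentence-transformers embeddings:

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-ins for real symbol embeddings (the pipeline uses sentence-transformers).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 32))  # 100 symbols from one package

# Aim for ~20 symbols per concept, as described in stage 4.
n_clusters = max(1, len(embeddings) // 20)
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embeddings)

# Each cluster is a candidate concept, later named and role-tagged by the LLM.
concepts = {c: np.flatnonzero(labels == c).tolist() for c in range(n_clusters)}
```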

At query time: embed the query, cosine search over concept embeddings, optionally rerank with LLM, expand top concepts to member symbols and 1-hop callers/callees, return ranked symbols and code regions.
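
The cosine search over concept embeddings reduces to a single matrix-vector product. A minimal numpy sketch with toy 2-D vectors in place of real embeddings:

```python
import numpy as np

def cosine_top_k(query_vec, concept_vecs, k=5):
    # Normalise both sides; then one matrix-vector product yields all similarities.
    q = query_vec / np.linalg.norm(query_vec)
    C = concept_vecs / np.linalg.norm(concept_vecs, axis=1, keepdims=True)
    scores = C @ q
    top = np.argsort(scores)[::-1][:k]
    return [(int(i), float(scores[i])) for i in top]

# Toy vectors standing in for concept-description embeddings.
concept_vecs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
query_vec = np.array([0.9, 0.1])
top2 = cosine_top_k(query_vec, concept_vecs, k=2)
print(top2)
```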

Stages are cached by a content hash of their inputs. When you re-run init, only stages affected by changed files are re-executed; the rest are skipped. Pass --force to rebuild from scratch.
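
One way to picture that cache (a hypothetical helper, not combfind's actual implementation): each stage's key is a hash over its name, parameters, and input file contents, so an unchanged key means the stage can be skipped.

```python
import hashlib
import json

def stage_key(stage_name, file_contents, params):
    """Cache key for one pipeline stage: hash of stage name, params,
    and the content of every input file (order-independent)."""
    h = hashlib.sha256()
    h.update(stage_name.encode())
    h.update(json.dumps(params, sort_keys=True).encode())
    for path in sorted(file_contents):
        h.update(path.encode())
        h.update(file_contents[path])
    return h.hexdigest()

# Unchanged inputs produce the same key, so the stage is skipped on re-run.
v1 = stage_key("parse", {"a.py": b"def f(): pass"}, {"lang": "python"})
v2 = stage_key("parse", {"a.py": b"def f(): pass"}, {"lang": "python"})
changed = stage_key("parse", {"a.py": b"def f(): return 1"}, {"lang": "python"})
```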

Performance

On a 50k LOC Go codebase using Qwen2.5:7b via Ollama, the initial index builds in ~5 minutes. Query time is around 7 seconds, most of which is loading the local model.

Incremental reindexing is fast. When a handful of files change, re-running init takes around 30 seconds; only the stages affected by changed files are re-executed. The index is also crash-safe: progress is committed to SQLite in batches within each stage, so if a run is interrupted it picks up close to where it left off rather than starting over.

The goal is not to replace careful code reading. It is to give an agent a cheap orientation pass so it knows which 3-5 files to read rather than all 500. On that goal, combfind achieves file_recall@3 of 0.75 on structural queries with --rerank, evaluated against 10 real bug fixes from a production Go codebase. That puts it above standard retrieval baselines such as BM25 (lexical) and E5-large (dense), which reach NDCG of ~0.57-0.59 (Practical Code RAG at Scale, 2025), with no API costs. The state of the art (Agentless with frontier models) reaches ~90% recall@5, but requires expensive multi-step LLM pipelines per query. combfind trades some accuracy for being fast, cheap, and fully local.

How to query well

combfind matches against concept descriptions, so structural queries outperform symptom descriptions.

"Where are user creation request DTOs and their field definitions?" finds the right code immediately. "EmailVerified boolean gets rejected by the validator" does not, because the symptom vocabulary has no overlap with the code structure.

When an agent receives a bug ticket, the right move is to translate the symptom into a structural question before querying: not what went wrong, but where does this kind of code live.

Supported languages

Python, Go, Java, Gleam, Erlang.

Optional SCIP tools

These are not required but produce more accurate call and import edges than the tree-sitter fallback:

Tool         Language  Install
scip-go      Go        go install github.com/scip-code/scip-go/cmd/scip-go@latest
scip-python  Python    npm install -g @sourcegraph/scip-python
scip-java    Java      scip-java releases

Using a remote LLM

Pass --llm-mode openai to use any OpenAI-compatible API:

export COMBFIND_LLM_BASE_URL=https://api.openai.com/v1
export COMBFIND_LLM_API_KEY=sk-...
export COMBFIND_LLM_MODEL=gpt-4o-mini

combfind init /path/to/repo --db repo.db --llm-mode openai

Works with OpenAI, Ollama (http://localhost:11434/v1), LM Studio (http://localhost:1234/v1), and any other OpenAI-compatible server.

Environment variables

Variable               Default        Description
COMBFIND_LOG_LEVEL     info           Log verbosity: debug, info, warning, error
COMBFIND_MODEL         auto-detected  GGUF path (local) or HF repo ID (mlx); equivalent to --llm-model
COMBFIND_LLM_BASE_URL  (none)         Base URL for OpenAI-compatible API
COMBFIND_LLM_API_KEY   (none)         API key for remote LLM
COMBFIND_LLM_MODEL     gpt-4o-mini    Model name for --llm-mode openai
HF_HUB_OFFLINE         (none)         Set to 1 to use cached embedding models without network access

Contributing

See CONTRIBUTING.md for dev setup, commit conventions, and the release pipeline.
