Symbol lookup / search engine CLI

Project description

Pluk

Git-commit–aware symbol lookup & impact analysis engine


What is a "symbol"?

In Pluk, a symbol is any named entity in your codebase that can be referenced, defined, or impacted by changes. This includes functions, classes, methods, variables, and other identifiers that appear in your source code. Pluk tracks symbols across commits and repositories to enable powerful queries like "go to definition", "find all references", and "impact analysis".
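Pluk's own parser is internal, but the idea of a "symbol" can be illustrated with Python's `ast` module: walk a parse tree and collect the names of functions, classes, and assigned variables. This is a rough sketch, not Pluk's implementation.

```python
import ast

source = """
class MyClass:
    def greet(self, name):
        message = f"hello {name}"
        return message
"""

def extract_symbols(code: str) -> list[str]:
    """Collect function, class, and assigned-variable names from Python source."""
    names = []
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            names.append(node.name)
        elif isinstance(node, ast.Assign):
            names.extend(t.id for t in node.targets if isinstance(t, ast.Name))
    return names

print(extract_symbols(source))  # ['MyClass', 'greet', 'message']
```

Each of these names is something Pluk could define, reference, or report as impacted by a change.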

Pluk gives developers “go-to-definition”, “find-all-references”, and “blast-radius” impact queries across one or more Git repositories. Heavy lifting (indexing, querying, storage) runs in Docker containers; a lightweight host shim (pluk) boots the stack and delegates commands into a thin CLI container (plukd) that talks to an internal API.
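The shim-to-container delegation described above amounts to wrapping each command in a `docker compose exec` call. A minimal sketch of what that forwarding could look like (the helper name and exact argument layout are illustrative, not Pluk's actual code):

```python
from pathlib import Path

# Compose file location used by the host shim (per the docs).
COMPOSE_FILE = Path.home() / ".pluk" / "docker-compose.yml"

def build_exec_cmd(args: list[str]) -> list[str]:
    """Build the `docker compose exec` invocation that forwards a command
    into the idle `cli` container, where `plukd` talks to the internal API."""
    return [
        "docker", "compose", "-f", str(COMPOSE_FILE),
        "exec", "cli", "plukd", *args,
    ]

print(build_exec_cmd(["search", "MyClass"]))
```

Running `pluk search MyClass` on the host would then execute `plukd search MyClass` inside the `cli` container, which has network access to the internal API.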


Key Features

  • Fuzzy symbol search (pluk search) for finding symbols in the current commit
  • Definition lookup (pluk define)
  • Impact analysis (pluk impact) to trace downstream dependents
  • Commit-aware indexing (pluk diff) across Git history
  • Containerized backend: PostgreSQL (graph) + Redis (broker/cache)
  • Strict lifecycle: pluk start (host shim) must run before any containerized command
  • Host controls: pluk status to check services, pluk cleanup to stop them

Quickstart

  1. Install
pip install pluk
  2. Start services (required)
pluk start

This creates/updates ~/.pluk/docker-compose.yml, pulls latest images, and brings up: postgres, redis, api (FastAPI), worker (Celery), and cli (idle exec target). The API stays internal to the Docker network. Note: service lifecycle commands (start, status, cleanup) are implemented in the host shim; run them on your host shell using the pluk command.

  3. Index and query
pluk init /path/to/repo           # queue full index (host shim extracts repo's origin URL and commit and forwards them into the containerized CLI)
pluk search MyClass               # fuzzy lookup; symbol matches branch-wide (cached)
pluk define my_function           # show definition (file:line@commit)
pluk impact computeFoo            # direct dependents; blast radius (cached)
pluk diff symbol abc123 def456    # symbol changes between commits abc123 → def456

Important: the repository you index must be public (or otherwise directly reachable by the worker container). The worker clones repositories inside the container environment using the repository URL; private repositories that require credentials are not supported by the host shim workflow.
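Since only directly reachable repositories are supported, a quick pre-check of the remote URL can save a failed index job. This heuristic is an illustration, not part of Pluk; it only flags URL shapes that typically require credentials:

```python
from urllib.parse import urlparse

def likely_needs_credentials(repo_url: str) -> bool:
    """Heuristic: SSH-style or credential-embedded URLs won't clone
    inside the worker container without extra auth setup."""
    if repo_url.startswith("git@"):  # scp-style SSH remote
        return True
    parsed = urlparse(repo_url)
    return parsed.scheme == "ssh" or parsed.username is not None

print(likely_needs_credentials("https://github.com/user/repo.git"))  # False
print(likely_needs_credentials("git@github.com:user/repo.git"))      # True
```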

Note: CLI commands that poll for job status (such as pluk init) display output in real time because Python output in the CLI container is unbuffered.

  4. Check / stop (host-side)
pluk status     # tells you if services are running
pluk cleanup    # stops services (containers stay; fast restart)

If you want a full teardown (remove containers/network), use:

docker compose -f ~/.pluk/docker-compose.yml down -v

Data Flow

How it works

  • Host shim (pluk) writes the Compose file, pulls images, and runs docker compose up.
  • CLI container (plukd) is the exec target; it calls the API at http://api:8000.
  • API (FastAPI) serves read endpoints (/search, /define, /impact, /diff) and enqueues write jobs (/reindex) to Redis.
  • Worker (Celery) consumes jobs from Redis, clones/pulls repos into a volume (/var/pluk/repos), parses deltas, and writes to Postgres.
  • Reads never block on indexing; write progress can be polled via job status endpoints (planned).
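The read/write split above can be sketched in-process, with `queue.Queue` standing in for Redis and a plain dict standing in for Postgres. All names and values here are toy illustrations of the pattern, not Pluk's data model:

```python
import queue
import threading
import time

# Toy stand-ins: Redis becomes queue.Queue, Postgres becomes a dict.
jobs: queue.Queue = queue.Queue()
index: dict[str, str] = {"computeFoo": "src/foo.py:12@abc123"}

def worker() -> None:
    """Consume reindex jobs and update the store, like the Celery worker."""
    while True:
        repo = jobs.get()
        time.sleep(0.01)                       # pretend to clone and parse
        index["newSymbol"] = f"{repo}:1@HEAD"  # write results
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

jobs.put("https://example.com/repo.git")  # enqueue a write job (POST /reindex)
print(index.get("computeFoo"))            # reads serve existing data immediately
jobs.join()                               # poll/wait for the job to finish
print(index.get("newSymbol"))
```

The key property: the read on `computeFoo` returns at once from already-indexed data, while the reindex job completes asynchronously in the worker.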

Architecture (current)

  • Single image, multiple roles: Compose selects per-service command
    • api: uvicorn pluk.api:app --host 0.0.0.0 --port 8000
    • worker: celery -A pluk.worker worker -l info
    • cli: sleep infinity (keeps the container up for docker compose exec)
  • Internal networking: API is not exposed to the host; CLI calls it over Docker DNS (PLUK_API_URL=http://api:8000).
  • Config: PLUK_DATABASE_URL, PLUK_REDIS_URL injected via Compose; worker uses PLUK_REPOS_DIR=/var/pluk/repos.
  • Images: by default the shim uses jorstors/pluk:latest, postgres:16-alpine, and redis:alpine
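Inside the containers, services could pick up the Compose-injected settings with ordinary environment lookups. The variable names come from the docs above; the database and Redis fallback values shown here are made-up placeholders:

```python
import os

# PLUK_API_URL default and PLUK_REPOS_DIR match the documented values;
# the other two fallbacks are purely illustrative.
API_URL = os.environ.get("PLUK_API_URL", "http://api:8000")
DATABASE_URL = os.environ.get("PLUK_DATABASE_URL", "postgresql://pluk@postgres:5432/pluk")
REDIS_URL = os.environ.get("PLUK_REDIS_URL", "redis://redis:6379/0")
REPOS_DIR = os.environ.get("PLUK_REPOS_DIR", "/var/pluk/repos")

print(API_URL, REPOS_DIR)
```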

Development

  • Project layout (src/pluk):
    • shim.py — host shim entrypoint (pluk)
    • cli.py — container CLI (plukd)
    • api.py — FastAPI app (internal API)
    • worker.py — Celery app & tasks
  • Entry points (pyproject.toml):
[project.scripts]
pluk  = "pluk.shim:main"
plukd = "pluk.cli:main"

Testing

pytest

Docker must be running; services must be started via pluk start for integration tests.


License

MIT License

Download files

Download the file for your platform.

Source Distribution

pluk-0.5.0.tar.gz (22.0 kB)

Uploaded Source

Built Distribution

pluk-0.5.0-py3-none-any.whl (19.6 kB)

Uploaded Python 3

File details

Details for the file pluk-0.5.0.tar.gz.

File metadata

  • Download URL: pluk-0.5.0.tar.gz
  • Upload date:
  • Size: 22.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.5

File hashes

Hashes for pluk-0.5.0.tar.gz
Algorithm Hash digest
SHA256 b22124832bbe5ef1dd1219b35503b3acac717c59d601244bd76c3b8fc6ec8eb0
MD5 086cfd501efb39e987ad2c83eff82559
BLAKE2b-256 44abf1def201044331522c17a5fa4a087a9efcd88a3bfbde7f6928b575cef637


File details

Details for the file pluk-0.5.0-py3-none-any.whl.

File metadata

  • Download URL: pluk-0.5.0-py3-none-any.whl
  • Upload date:
  • Size: 19.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.5

File hashes

Hashes for pluk-0.5.0-py3-none-any.whl
Algorithm Hash digest
SHA256 68a7d09151a0b0f52fa9e8b0af6cc49e54e3f05a89781f73d6395c4a43d42d49
MD5 b5bf3074d01d05809341fb7397c23c2b
BLAKE2b-256 4dc6d13f359a8b5c71c1845bc9c991fea878c6ca2b34f7ca45839ee54076cbdc
