Velociraptor MCP Server

FastMCP server that exposes Velociraptor APIs.

Quickstart (from PyPI)

python3 -m venv .venv
. .venv/bin/activate
pip install velociraptor-mcp-server
velociraptor-mcp --config /absolute/path/to/velociraptor_lab/volumes/api/api.config.yaml

You need a Velociraptor mTLS API config (api.config.yaml). The included velociraptor_lab can generate one (see “Using the lab”).

A FastMCP-based server that exposes Velociraptor capabilities (VQL queries, hunts, artifacts, VFS/file ops, monitoring, alerts) over the MCP protocol for use with Codex/ChatGPT-style agents.
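For orientation, MCP clients such as Codex invoke these tools with JSON-RPC 2.0 `tools/call` requests over the stdio transport. A minimal sketch of what a `query_vql` call might look like on the wire — the tool name comes from this server, but the `query` argument name is an assumption for illustration:

```python
import json

# Hypothetical MCP "tools/call" request an agent might send over stdio.
# The "query" argument name is an assumption, not this server's documented schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_vql",
        "arguments": {"query": "SELECT client_id, os_info.hostname FROM clients()"},
    },
}

# The MCP stdio transport frames each message as one JSON object per line.
wire = json.dumps(request)
print(wire)
```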

Prerequisites

  • Python 3.10+
  • Podman (or Docker) if you want to use the included velociraptor_lab for local testing.
  • Generated Velociraptor mTLS API config (api.config.yaml) – the lab can generate this for you.

Installation (from source or dev)

Create a virtualenv (avoids macOS/Homebrew PEP 668 errors) and install:

python3 -m venv .venv
. .venv/bin/activate
pip install .                 # runtime install from source
pip install -e ".[dev]"       # editable install with dev deps (quoted for zsh)

End users should install from PyPI; use editable mode for development. The legacy workflow still works if you prefer the raw requirements:

pip install -r requirements.txt

Running the MCP server

After installing, you can either call the module directly or use the installed console script:

# installed entry point
velociraptor-mcp --config velociraptor_lab/volumes/api/api.config.yaml \
  --log-level INFO --server-name velociraptor-mcp

# or, from source
python3 main.py --config velociraptor_lab/volumes/api/api.config.yaml \
  --log-level INFO --server-name velociraptor-mcp

Options:

  • --config or env VELOCIRAPTOR_API_CONFIG: path to api.config.yaml (default volumes/api/api.config.yaml)
  • --log-level or env MCP_LOG_LEVEL (default INFO)
  • --server-name or env MCP_SERVER_NAME
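Each option follows the usual precedence: an explicit flag wins over the environment variable, which wins over the default. A minimal stdlib sketch of that resolution logic — the helper name resolve_option is hypothetical, not part of the package:

```python
import os

def resolve_option(flag_value, env_name, default):
    """Hypothetical helper: CLI flag > environment variable > built-in default."""
    if flag_value is not None:
        return flag_value
    return os.environ.get(env_name, default)

# Example: no --config flag given, but the env var is set.
os.environ["VELOCIRAPTOR_API_CONFIG"] = "/tmp/api.config.yaml"
config = resolve_option(None, "VELOCIRAPTOR_API_CONFIG", "volumes/api/api.config.yaml")
print(config)  # -> /tmp/api.config.yaml
```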

Available tools (summary)

  • VQL: query_vql
  • Clients: list_clients, get_client_info, search_clients
  • Hunts: list_hunts, get_hunt_details, create_hunt, stop_hunt, get_hunt_results
  • Artifacts: list_artifacts, collect_artifact, upload_artifact, get_artifact_definition
  • Files/VFS: list_directory, get_file_info, download_file
  • Monitoring/Alerts: get_server_stats, get_client_activity, list_alerts, create_alert
  • Resources/Prompts: artifact catalog, VQL templates, incident-response prompts
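Results from tools like query_vql are naturally consumed as lists of JSON rows on the agent side. A sketch of post-processing such rows — the row shape below is illustrative of VQL clients() output, not the server's exact schema:

```python
import json

# Illustrative payload shaped like "SELECT ... FROM clients()" output;
# the exact field names returned by query_vql may differ.
raw = """
[
  {"client_id": "C.1234", "os_info": {"hostname": "workstation-1", "system": "windows"}},
  {"client_id": "C.5678", "os_info": {"hostname": "server-1", "system": "linux"}}
]
"""

rows = json.loads(raw)
windows_hosts = [
    r["os_info"]["hostname"] for r in rows if r["os_info"]["system"] == "windows"
]
print(windows_hosts)  # -> ['workstation-1']
```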

Using the lab (recommended for development)

velociraptor_lab/ contains a Podman/Docker stack that spins up:

  • Velociraptor server (GUI + gRPC)
  • Test client that should auto-enroll
  • Generated mTLS configs under velociraptor_lab/volumes/{server,client,api,datastore}

Quick start (Podman):

cd velociraptor_lab
podman machine start                     # macOS/AppleHV
podman compose -f podman-compose.yml up --build -d
# or use the manual podman run commands in velociraptor_lab/README.md

Verify API:

. ../.venv/bin/activate
python test_api.py --config volumes/api/api.config.yaml

Then run the MCP server (see above) pointing at the generated api.config.yaml.

Troubleshooting lab enrollment:

  • The client must reach https://VelociraptorServer:8000/ inside the podman network. Keep hostname/alias consistent with the cert CN.
  • If clients() is empty, delete velociraptor_lab/volumes/* and redeploy the lab.
  • For manual probing, run python test_api.py --query "SELECT * FROM clients()".

Development

  • Create a fresh venv and install dev extras: python3 -m venv .venv && . .venv/bin/activate && pip install -e ".[dev]".
  • Run unit tests locally: . .venv/bin/activate && pytest.
  • Validate API wiring against the lab: python test_api.py --config velociraptor_lab/volumes/api/api.config.yaml --query "SELECT * FROM clients()" (after bringing up the lab).
  • Linting is minimal today; focus on tests and keeping tool names/config keys aligned with VQL and env vars.

Tests

. .venv/bin/activate
pytest

Note: the lab API test is skipped automatically if pyvelociraptor or the generated configs are missing.
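Conceptually, that skip is an optional-dependency guard. A stdlib sketch of the pattern — using unittest rather than the project's actual pytest markers, with hypothetical names throughout:

```python
import importlib.util
import os
import unittest

# Guard conditions: the optional dependency and the lab-generated config.
HAVE_PYVELOCIRAPTOR = importlib.util.find_spec("pyvelociraptor") is not None
CONFIG = "velociraptor_lab/volumes/api/api.config.yaml"

class LabApiTest(unittest.TestCase):
    @unittest.skipUnless(HAVE_PYVELOCIRAPTOR and os.path.exists(CONFIG),
                         "pyvelociraptor or lab config missing")
    def test_clients_query(self):
        # Would connect to the lab API here; skipped when deps/config are absent.
        pass

suite = unittest.TestLoader().loadTestsFromTestCase(LabApiTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```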

Codex MCP setup

You can register this server with the Codex CLI (stdio transport).

Fast path (CLI):

codex mcp add velociraptor \
  --env VELOCIRAPTOR_API_CONFIG=/absolute/path/to/velociraptor_lab/volumes/api/api.config.yaml \
  -- python3 main.py --config /absolute/path/to/velociraptor_lab/volumes/api/api.config.yaml \
  --log-level INFO --server-name velociraptor-mcp

  • Run the command from the repo root or use absolute paths so Codex can find main.py.
  • Restart Codex; inside the TUI /mcp shows active servers.

Config file (manual): add to ~/.codex/config.toml if you prefer editing directly:

[mcp_servers.velociraptor]
command = "python3"
args = ["main.py", "--config", "/absolute/path/to/velociraptor_lab/volumes/api/api.config.yaml", "--log-level", "INFO", "--server-name", "velociraptor-mcp"]
env = { VELOCIRAPTOR_API_CONFIG = "/absolute/path/to/velociraptor_lab/volumes/api/api.config.yaml" }
cwd = "/absolute/path/to/velociraptor-mcp-server"
startup_timeout_sec = 15   # optional; defaults to 10
tool_timeout_sec = 120     # optional; defaults to 60

Either approach keeps your api.config.yaml path in one place via VELOCIRAPTOR_API_CONFIG. Codex shares the same MCP config between the CLI and IDE.

Troubleshooting

  • ModuleNotFoundError or missing grpc → ensure you are in the venv and have run pip install . (or pip install -e ".[dev]").
  • Velociraptor API config not found → point --config / VELOCIRAPTOR_API_CONFIG to the generated api.config.yaml.
  • Handshake fails in Codex → check ~/.codex/log/codex-tui.log; most common is the missing config path.
  • PEP 668 “externally managed” error → always use a venv (as above).

Structure

mcp_server/       # server, tools, resources, prompts
main.py           # entrypoint
tests/            # unit tests + fixtures
requirements.txt  # shared deps (MCP + lab)
velociraptor_lab/ # podman/docker lab for local Velociraptor API

Publishing to PyPI (manual)

. .venv/bin/activate
pip install -U pip build twine
python -m build
twine upload dist/*   # requires credentials via TWINE_USERNAME/TWINE_PASSWORD or a token in ~/.pypirc

Tagging a release is recommended (see CI section).

Releasing via GitHub Actions

If your CI publishes on tagged pushes, use these steps:

git add pyproject.toml mcp_server/__init__.py README.md
git commit -m "Release 0.1.5"        # or skip if already committed
git tag -a v0.1.5 -m "v0.1.5"        # bump tag version as needed
git push origin main                  # adjust branch name if different
git push origin v0.1.5                # triggers the release workflow

Trusted Publishing: the workflow authenticates to PyPI via OIDC; register the workflow file (.github/workflows/ci.yml) as a Trusted Publisher in the PyPI project settings. Once linked, no PYPI_API_TOKEN secret is needed.
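A typical trusted-publishing job looks like the sketch below. This is illustrative, not necessarily this repo's ci.yml; the tag filter, Python version, and action versions are assumptions:

```yaml
name: release
on:
  push:
    tags: ["v*"]            # assumed tag filter matching the v0.x.y convention

jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write       # required for PyPI OIDC (Trusted Publishing)
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: |
          python -m pip install build
          python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1   # no API token needed with OIDC
```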

License

This project is licensed under the MIT License. See LICENSE for the full text.
