# Velocity Brain: agent-first memory and execution engine
CLI-native. API-capable. MCP-ready.
## What Is Velocity Brain
Velocity Brain is a local-first brain system for agents. It stores memory in Postgres, retrieves internal context, and runs deterministic agent workflows.
Core value:
- Brain-first retrieval before action
- Persistent memory and timeline model
- Agent loop runtime for planning and execution
- MCP tools for multiple MCP-compatible clients
## Quick Start (Local CLI)

### 1) Install

From PyPI (after you publish):
```powershell
python -m venv .venv
.\.venv\Scripts\Activate.ps1
python -m pip install --upgrade pip
python -m pip install velocitybrain
```
From local repo (dev mode):
```powershell
python -m venv .venv
.\.venv\Scripts\Activate.ps1
python -m pip install --upgrade pip
python -m pip install -r requirements.txt
python -m pip install -e .
```
### 2) Configure env

```powershell
Copy-Item .env.example .env
```

The default local DB URL in `.env.example` is:

```text
DATABASE_URL=postgresql://velocity:velocity@localhost:5432/velocitybrain
```
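If you need the same connection settings from Python, the URL splits into standard components. A small sketch using only the stdlib (the helper name is invented for illustration):

```python
from urllib.parse import urlsplit

def parse_db_url(url: str) -> dict:
    """Split a postgresql:// URL into its connection parts."""
    parts = urlsplit(url)
    return {
        "user": parts.username,
        "password": parts.password,
        "host": parts.hostname,
        "port": parts.port,
        "dbname": parts.path.lstrip("/"),
    }

cfg = parse_db_url("postgresql://velocity:velocity@localhost:5432/velocitybrain")
print(cfg["host"], cfg["port"], cfg["dbname"])
```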
### 3) Start and initialize DB

```powershell
docker compose up db -d
docker compose exec -T db psql -U velocity -d velocitybrain -f /docker-entrypoint-initdb.d/01-schema.sql
```
### 4) Validate

```powershell
velocitybrain init
velocitybrain doctor
```
### 5) Use core flows

```powershell
velocitybrain ingest --source note --content "Met Jane Doe from Acme and discussed GTM"
velocitybrain query "What do I know about Jane Doe?"
velocitybrain run "Prepare me for meeting with Jane Doe tomorrow"
```
## How Answers Work Today

`velocitybrain query` and `velocitybrain run` do not call Claude/OpenAI/Gemini APIs by default. They currently work as follows:

- `query`: keyword + hybrid retrieval from your local `entities` table, then lightweight synthesis from the top internal match.
- `run`: intent detection + deterministic planning + simulated execution actions + local memory writeback.

So yes: if you connect Velocity Brain via MCP to Claude Code (or another client), that external client can use Velocity Brain tools. But Velocity Brain itself is local-first and does not require an LLM API to produce baseline outputs.
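To make the `query` path concrete, here is a toy version of keyword retrieval over an in-memory entities list. The real `entities` table, scoring, and synthesis in Velocity Brain are more involved; every name below is illustrative:

```python
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    summary: str

def keyword_score(query: str, text: str) -> int:
    """Count query terms that appear in the text (toy keyword relevance)."""
    terms = query.lower().split()
    return sum(1 for t in terms if t in text.lower())

def retrieve(query: str, entities: list[Entity], limit: int = 3) -> list[Entity]:
    """Score, filter zero matches, and return the top entities."""
    scored = [(keyword_score(query, e.name + " " + e.summary), e) for e in entities]
    scored = [(s, e) for s, e in scored if s > 0]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [e for _, e in scored[:limit]]

entities = [
    Entity("Jane Doe", "Acme contact, discussed GTM"),
    Entity("John Roe", "Vendor for billing"),
]
top = retrieve("What do I know about Jane Doe?", entities)
print(top[0].name)
```

A production version would combine this keyword signal with vector similarity; the point here is only the shape of brain-first retrieval before action.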
## CLI Commands

```powershell
velocitybrain about
velocitybrain init
velocitybrain doctor
velocitybrain skills --category query --limit 5
velocitybrain ingest --source note --content "..."
velocitybrain query "..."
velocitybrain run "..."
velocitybrain serve api --host 0.0.0.0 --port 8080 --reload
velocitybrain serve mcp
```
Output controls:

```powershell
velocitybrain --json query "What changed this week?"
velocitybrain --color about
velocitybrain --no-color about
```

Behavior notes:

- `--json` prints machine-readable JSON output.
- `--color` forces ANSI color output.
- `--no-color` disables ANSI styling.
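`--json` makes the CLI easy to script. A hedged sketch that builds an invocation and parses its output; it assumes the CLI is on PATH and that `--json` emits a single JSON object, which this README does not fully specify:

```python
import json
import shlex
import subprocess

def build_cmd(*args: str) -> list[str]:
    """Build a velocitybrain invocation with machine-readable output."""
    return ["velocitybrain", "--json", *args]

def run_cli(*args: str) -> dict:
    """Invoke the CLI and parse its JSON stdout (requires velocitybrain on PATH)."""
    out = subprocess.run(build_cmd(*args), capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

print(shlex.join(build_cmd("query", "What do I know about Jane Doe?")))
```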
## MCP Setup (Multi-Client)

Start the MCP server:

```powershell
velocitybrain serve mcp
```
Generic stdio config (works in MCP clients that accept a JSON `mcpServers` block):

```json
{
  "mcpServers": {
    "velocitybrain": {
      "command": "velocitybrain",
      "args": ["serve", "mcp"]
    }
  }
}
```
If the client cannot resolve PATH, use the absolute executable path:

```json
{
  "mcpServers": {
    "velocitybrain": {
      "command": "C:/Path/To/Python/Scripts/velocitybrain.exe",
      "args": ["serve", "mcp"]
    }
  }
}
```
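If you manage several MCP clients, the same entry can be generated programmatically. A minimal sketch, assuming only that the client reads a standard `mcpServers` JSON object (settings file locations vary per client and are not shown):

```python
import json

def mcp_config(command: str = "velocitybrain") -> dict:
    """Build the generic stdio mcpServers entry shown above."""
    return {
        "mcpServers": {
            "velocitybrain": {"command": command, "args": ["serve", "mcp"]}
        }
    }

print(json.dumps(mcp_config(), indent=2))
```

This only builds the JSON; where to write it depends on the client.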
Client notes:

- Claude Code CLI: supports MCP server add/list/get/remove commands. Example: `claude mcp add velocitybrain -- velocitybrain serve mcp`
- OpenAI Codex CLI: supports MCP server registration. Example: `codex mcp add velocitybrain -- velocitybrain serve mcp`
- Gemini CLI: supports `mcpServers` in its settings JSON.
- Cline: supports MCP setup through its MCP settings/marketplace UI.
Available Velocity Brain MCP tools:

- `ingest_text`
- `query`
- `run_agent`
- `list_skills`
- `healthz`
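Conceptually, the server maps these tool names to handlers. An illustrative dispatch table; the handler bodies and argument shapes below are invented, not Velocity Brain's actual implementation:

```python
from typing import Callable

def ingest_text(args: dict) -> dict:
    """Placeholder: a real handler would write to the memory store."""
    return {"status": "stored", "chars": len(args.get("content", ""))}

def healthz(args: dict) -> dict:
    """Placeholder health check."""
    return {"status": "ok"}

TOOLS: dict[str, Callable[[dict], dict]] = {
    "ingest_text": ingest_text,
    "healthz": healthz,
    # "query", "run_agent", "list_skills" would be registered the same way.
}

def dispatch(name: str, args: dict) -> dict:
    """Route a tool call by name, with a structured error for unknown tools."""
    if name not in TOOLS:
        return {"error": f"unknown tool: {name}"}
    return TOOLS[name](args)

print(dispatch("healthz", {}))
```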
## API Usage

Start the API:

```powershell
velocitybrain serve api --host 0.0.0.0 --port 8080 --reload
```

Endpoints:

- Health: http://localhost:8080/v1/healthz
- Docs: http://localhost:8080/docs
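A quick way to probe the health endpoint from Python, using only the stdlib. The route follows the README; the shape of the JSON response body is an assumption:

```python
import json
from urllib.request import urlopen

def healthz_url(host: str = "localhost", port: int = 8080) -> str:
    """Build the health-check URL for a local API instance."""
    return f"http://{host}:{port}/v1/healthz"

def check_health(host: str = "localhost", port: int = 8080) -> dict:
    """Fetch and decode the health endpoint (requires the API to be running)."""
    with urlopen(healthz_url(host, port), timeout=5) as resp:
        return json.loads(resp.read())
```

`check_health()` will only succeed once `velocitybrain serve api` is running locally.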
## Publish to PyPI

### 1) Prepare release metadata

```powershell
python -m pip install --upgrade build twine
```

- Bump `version` in `pyproject.toml` for every release.
- Keep the package name as `velocitybrain` in `pyproject.toml`.
### 2) Build clean artifacts

```powershell
Remove-Item -Recurse -Force dist,build,*.egg-info -ErrorAction SilentlyContinue
python -m build
```
### 3) Validate artifacts

```powershell
python -m twine check dist/*
```
### 4) Test on TestPyPI first (recommended)

```powershell
python -m twine upload --repository testpypi dist/*
python -m pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple velocitybrain==0.1.0
velocitybrain about
```
### 5) Upload to PyPI

Use API token auth:

```powershell
setx TWINE_USERNAME "__token__"
setx TWINE_PASSWORD "pypi-<your-token>"
python -m twine upload dist/*
```

Then open a new terminal and run:

```powershell
python -m pip install --upgrade velocitybrain
velocitybrain about
```
Notes:

- If the `velocitybrain` name is already taken on PyPI, publish under a new project name (for example `velocitybrain-ai`) and keep the same console script name if desired.
- You can switch to Trusted Publishing (GitHub Actions + PyPI trusted publisher) later to avoid long-lived API tokens.
## Testing

```powershell
python -m pytest -q
```
## Backward Compatibility

Legacy commands still work:

```powershell
velocityx ...
python velocityx.py ...
```
## Documentation

- `docs/ARCHITECTURE.md`
- `docs/FOLDER_STRUCTURE.md`
- `docs/DB_SCHEMA.md`
- `docs/API_DESIGN.md`
- `docs/SKILL_SYSTEM.md`
- `docs/AGENT_LOOP.md`
- `docs/WORKFLOWS.md`
## Reference Links
- Claude Code MCP docs: https://docs.claude.com/en/docs/claude-code/mcp
- Gemini CLI MCP docs: https://google-gemini.github.io/gemini-cli/docs/tools/mcp-server.html
- OpenAI Codex MCP docs: https://developers.openai.com/codex/mcp
- Cline MCP docs: https://docs.cline.bot/mcp/mcp-overview
## License
MIT
## Runtime Identity

Velocity Brain now supports a runtime identity specification layer via `identity.spec.json` (configurable with `IDENTITY_SPEC_PATH`) that is loaded independently of `AGENTS.md`.

CLI:

```powershell
velocitybrain identity
```

API:

```text
GET /v1/identity/spec
```
## Org-mode Support

You can ingest Org files directly:

```powershell
velocitybrain ingest --source notes --org-file ./notes/daily.org
```

API:

```text
POST /v1/ingest/org
```
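For a sense of what Org ingestion can extract, here is a toy headline parser. The real ingest pipeline's behavior is not documented here, so this is only illustrative:

```python
import re

# An Org headline is one or more leading asterisks, a space, then the title.
HEADLINE = re.compile(r"^(\*+)\s+(.*)$")

def org_headlines(text: str) -> list[tuple[int, str]]:
    """Return (depth, title) pairs for each Org headline in the text."""
    out = []
    for line in text.splitlines():
        m = HEADLINE.match(line)
        if m:
            out.append((len(m.group(1)), m.group(2)))
    return out

sample = "* Daily\n** Met Jane Doe\nDiscussed GTM\n"
print(org_headlines(sample))  # prints [(1, 'Daily'), (2, 'Met Jane Doe')]
```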
## Sync (Dry-run + Multi-repo)

By default, sync is dry-run only and does not mutate registry/state.

```powershell
velocitybrain sync --repo .
velocitybrain sync --repo C:/repo-a --repo C:/repo-b --apply
```

API:

```text
POST /v1/sync/push
POST /v1/sync/pull
POST /v1/sync/full
```
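The dry-run-by-default behavior can be pictured as building a plan and only applying it on request. A hypothetical sketch (all names invented):

```python
from dataclasses import dataclass, field

@dataclass
class SyncPlan:
    actions: list[str] = field(default_factory=list)
    applied: bool = False

def sync(repos: list[str], apply: bool = False) -> SyncPlan:
    """Collect planned actions; mutate state only when apply=True."""
    plan = SyncPlan(actions=[f"push {r}" for r in repos])
    if apply:
        # A real implementation would mutate registry/state here.
        plan.applied = True
    return plan

print(sync(["C:/repo-a", "C:/repo-b"]).applied)  # prints False
```

The separation lets users inspect the plan before opting into mutation with `--apply`.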
## Security and Policy

- Destructive MCP tools (`delete_page`, `put_page`, `sync_brain`) are policy-gated by default.
- Set `MCP_ALLOW_DESTRUCTIVE_TOOLS=true` and pass an explicit `approve=true` in tool args to allow them.
- File-based ingestion is workspace-bounded unless `ALLOW_UNSAFE_FILE_READS=true`.
## Evaluation and Governance

- Retrieval evaluation endpoint: `POST /v1/eval/query`
- Access token minting: `POST /v1/access/token`
- Encrypted legacy plan storage: `POST /v1/legacy/plan`, `GET /v1/legacy/plan/{owner}`
## Project details
### Source distribution: velocitybrain-0.1.0.tar.gz

File metadata:

- Download URL: velocitybrain-0.1.0.tar.gz
- Upload date:
- Size: 31.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.1

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `946dd91108855bb6c6d1f922fd756cab4b640157ed9e1f726b79ed7f818d20ef` |
| MD5 | `da91859d315ed7422105fcb335576461` |
| BLAKE2b-256 | `84b1347cdf5bb5a9972428ab070fcaffe3602e4017777c333f6321b35ef0ea9e` |
### Built distribution: velocitybrain-0.1.0-py3-none-any.whl

File metadata:

- Download URL: velocitybrain-0.1.0-py3-none-any.whl
- Upload date:
- Size: 37.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.1

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `7127105324d85a05ffee8a18e172aaa113cfe8f737322440a3a1947db5e47117` |
| MD5 | `0c60feb2be04126ac7c2c5e02f3ee8d4` |
| BLAKE2b-256 | `ad4f34d2a64d1e19006be29815e174243aa1aecdc182dc258b2c2c48b8483a16` |