
Maestro

maestro-nerve

Shared intelligence memory layer for procurement, compliance, and tender-review agents.

Python 3.11+ · MIT license


maestro-nerve gives AI agents a shared, evidence-backed memory over fragmented work sources. It is built for procurement, compliance, and tender-review workflows where Gmail, Drive, docs, sheets, notes, and agent outputs need to converge into one inspectable intelligence layer.

Why nerve?

Feature                                                                            nerve  Vector DB       SQLite + embeddings
Structured knowledge (entities + edges)                                            Yes    No              Manual
Multi-channel recall (BM25 + embedding + graph + co-access + temporal + session)   Yes    Embedding only  Manual
Online learning (Thompson Sampling)                                                Yes    No              No
Batch API (multiple ops in one call)                                               Yes    No              No
Remote semantic embeddings                                                         Yes    Varies          No

Quickstart

For AI Agents (Claude Code skill)

# nerve is available as a Claude Code skill
mnerve understand "your query"

For Developers

Neon is the required V1 database. Point NEON_DATABASE_URL at a Neon Postgres database with sslmode=require, then initialize and serve:

# Install
pip install maestro-nerve

# Point nerve at Neon
export NEON_DATABASE_URL="postgresql://USER:PASS@HOST.neon.tech/DB?sslmode=require"
mnerve init

# Start daemon
mnerve serve
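
Once the daemon is up, you can sanity-check it over HTTP. A minimal sketch, assuming the daemon listens on the default 127.0.0.1:7420 base URL noted below and that the REST surface exposing /health is enabled:

# Check daemon health (assumes the default 127.0.0.1:7420 base URL)
curl http://127.0.0.1:7420/health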

Architecture

┌─────────────────────────────────────────────────────┐
│         External Sources + Heterogeneous Agents     │
├───────────────────────┬─────────────────────────────┤
│  source intake        │  agent-authored intake      │
│  Gmail · Drive · ...  │  fact · conclusion · ...    │
├───────────────────────┴─────────────────────────────┤
│                 Evidence Ledger                     │
│   source_objects · facts · conclusions · events     │
├─────────────────────────────────────────────────────┤
│                 Semantic Layer                      │
│     entities · claims · evidence · decisions        │
├─────────────────────────────────────────────────────┤
│                 Serve Layer                         │
│        MCP · HTTP · CLI · workspace surfaces        │
└─────────────────────────────────────────────────────┘

Legacy compatibility CLI

The repo still contains legacy remember/understand/feedback surfaces. They are compatibility-only and do not define the long-term product contract. The authoritative V1 contract and cross-agent naming live in:

  • docs/superpowers/specs/2026-04-21-mcp-v1-tools.md
  • docs/superpowers/plans/2026-04-23-cross-agent-memory-broker.md
  • docs/superpowers/plans/2026-04-23-unified-agent-source-intake-architecture.md

Legacy examples:

# Store knowledge (structured)
echo '{"nodes":[{"name":"Dr. Chen","type":"person","properties":{"affiliation":"NUS"}}]}' \
  | mnerve remember --structured

# Recall
mnerve understand "climate data gaps"

# Feedback: query id, then the selected result ids (improves future ranking)
mnerve feedback q-abc123 548 515

# Batch (preferred — one call instead of many)
# "$0.query_id" refers to the query_id returned by the first op (index 0)
mnerve batch '[
  {"op":"understand","params":{"question":"ERA5 climate"}},
  {"op":"remember","params":{"content":"ERA5 has gaps pre-1979","type":"experience"}},
  {"op":"feedback","params":{"query_id":"$0.query_id","selected":[548]}}
]'
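
The same payloads work over the legacy REST surface. A sketch, assuming a local daemon on 127.0.0.1:7420 with NERVE_LEGACY=1 and NERVE_API_KEY set:

# POST a batch to the legacy REST surface (assumed local setup)
curl -X POST http://127.0.0.1:7420/batch \
  -H "Authorization: Bearer $NERVE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '[
    {"op":"understand","params":{"question":"ERA5 climate"}},
    {"op":"remember","params":{"content":"ERA5 has gaps pre-1979","type":"experience"}}
  ]'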

Legacy compatibility API reference

Method  Endpoint                   Description
GET     /health                    Health check with PG status + cache stats
GET     /understand?question=...   6-channel recall with fusion ranking
POST    /remember                  Legacy compatibility write surface; not the long-term shared-memory contract
POST    /feedback                  Legacy compatibility ranking feedback surface
POST    /batch                     Multiple operations in one request
GET     /schema-hint?preview=...   Existing types + matches before storing
POST    /learn                     Record rules, aliases, authorities
POST    /register                  Register a data source
GET     /sources                   List registered data sources
GET     /discover                  Discover tables and columns
POST    /query                     Query data with semantic context
GET     /search?q=...              Semantic column search
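
As a concrete example, a recall query over HTTP (a sketch under the same local-daemon assumptions as the batch example above):

# 6-channel recall via the legacy REST surface
curl -G http://127.0.0.1:7420/understand \
  -H "Authorization: Bearer $NERVE_API_KEY" \
  --data-urlencode "question=climate data gaps"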

Configuration

Env var                Default                        Description
NEON_DATABASE_URL      (none)                         Canonical cloud Postgres / Neon connection string; required for V1
NERVE_DSN              (none)                         Explicit daemon / CLI override; if set, it must also be a Neon DSN
NERVE_LEGACY           0                              Set to 1 to temporarily enable the deprecated nerve-v1 REST surface
NERVE_API_KEY          (none)                         Bearer token for API auth
NERVE_CORS_ORIGINS     http://localhost:3000          Comma-separated CORS origins
CLOUDFLARE_ACCOUNT_ID  (none)                         Cloudflare account ID for Workers AI embeddings
CLOUDFLARE_API_TOKEN   (none)                         Cloudflare API token with AI:Run permission
NERVE_EMBED_MODEL      @cf/qwen/qwen3-embedding-0.6b  Override the Workers AI embedding model
NERVE_ENCRYPTION_KEY   (none)                         Fernet key for credential encryption

Local V1 env setup

The repo includes a root env template at .env.example.

What you need right now for the current V1 scaffold:

  • NEON_DATABASE_URL for the Python daemon / CLI
  • NERVE_LEGACY=1 only if you still need the deprecated nerve-v1 REST surface during migration
  • INTERNAL_API_BASE_URL only if apps/web should talk to a daemon URL other than http://127.0.0.1:7420
  • repo-root SQL migrations under migrations/ are applied by mnerve init and on daemon startup
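
A minimal .env sketch for that setup (placeholder values; uncomment the optional lines only if you need them):

# Required for the Python daemon / CLI
NEON_DATABASE_URL=postgresql://USER:PASS@HOST.neon.tech/DB?sslmode=require

# Optional, per the notes above
# NERVE_LEGACY=1
# INTERNAL_API_BASE_URL=http://127.0.0.1:7420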

What is documented but not yet required for the current scaffold:

  • NERVE_AUTH_SECRET
  • GOOGLE_CLIENT_ID
  • GOOGLE_CLIENT_SECRET
  • MICROSOFT_CLIENT_ID
  • MICROSOFT_CLIENT_SECRET
  • STRIPE_SECRET_KEY
  • STRIPE_WEBHOOK_SECRET
  • NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY
  • R2_*
  • NERVE_DSN only if you intentionally override NEON_DATABASE_URL with another Neon DSN

Those V1 cloud variables are part of the target architecture, but the current web and desktop passes are still scaffold-stage and contract-first for auth, billing, and companion flows. Request them only once the corresponding runtime path is actually wired, not before.

For Google OAuth, keep the two redirect URIs distinct:

  • Sign-in auth callback: http://localhost:3000/api/auth/google/callback
  • Connector grant callback: http://127.0.0.1:7420/api/connectors/google/callback

For Microsoft connector OAuth:

  • Connector grant callback: http://127.0.0.1:7420/api/connectors/microsoft/callback

In production, replace those origins with the real web and daemon base URLs.

Workspace RLS note

V1 tables use real PostgreSQL row-level security with current_setting('app.workspace_id')::uuid. The schema is in place, but the current daemon does not yet inject app.workspace_id on every connection checkout. Until workspace-scoped request plumbing lands, callers that query V1 tables directly must SET app.workspace_id = '<workspace-uuid>' on the session before issuing workspace-bound SQL.
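
For example, via psql (a sketch; the workspace UUID is a placeholder, and entities stands in for any workspace-bound V1 table):

# Scope the session to a workspace before issuing workspace-bound SQL
psql "$NEON_DATABASE_URL" <<'SQL'
SET app.workspace_id = '00000000-0000-0000-0000-000000000000';
SELECT count(*) FROM entities;
SQL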

Performance

Benchmarked on an M4 Pro with a single uvicorn worker:

Metric               Value
understand (cold)    58ms p50
understand (cached)  1ms p50
remember             65ms
batch (3 ops)        183ms
Cache hit rate       74.5%
Sustained QPS        17+ (cold) / 251 (cached)

Embedding inference runs through Cloudflare Workers AI in the V1 runtime; the Python daemon no longer loads a local embedding model on startup.
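
If you want to sanity-check the Cloudflare credentials on their own, you can call the public Workers AI REST route (accounts/{account_id}/ai/run/{model}) with the default model from NERVE_EMBED_MODEL. A sketch; the exact request body shape for this embedding model is an assumption based on the usual Workers AI embedding convention:

# Smoke-test Workers AI embedding credentials (request body shape is assumed)
curl "https://api.cloudflare.com/client/v4/accounts/$CLOUDFLARE_ACCOUNT_ID/ai/run/@cf/qwen/qwen3-embedding-0.6b" \
  -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  -d '{"text": ["hello world"]}'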

Contributing

# Dev setup
git clone https://github.com/maestro-ai-stack/maestro-nerve-internal.git maestro-nerve
cd maestro-nerve
pip install -e ".[dev,onnx]"

# Run tests
pytest tests/ -x -v

# Lint
ruff check src/ tests/

Harness

Canonical repo-level harness entrypoints:

scripts/harness-fast.sh
scripts/harness-python-fast.sh
scripts/harness-web-fast.sh
scripts/harness-web-smoke.sh
scripts/harness-desktop-fast.sh
scripts/harness-install-hooks.sh

What they do:

  • scripts/harness-fast.sh: hygiene, import boundaries, Python fast lane, web fast lane, desktop fast lane
  • scripts/harness-python-fast.sh: ruff + mypy + pytest-cov for the stable runtime lane
  • scripts/harness-web-fast.sh: eslint + typecheck + vitest + next build
  • scripts/harness-web-smoke.sh: Playwright smoke suite for landing, 404, and health route
  • scripts/harness-desktop-fast.sh: desktop typecheck + build:ui + cargo check
  • scripts/harness-install-hooks.sh: installs local pre-commit and pre-push hooks
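
A typical local flow is to install the hooks once, then run the full fast lane before pushing:

# One-time hook install, then the full fast lane
scripts/harness-install-hooks.sh
scripts/harness-fast.sh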

License

MIT — see LICENSE.

Built by Maestro — Singapore AI product studio.

Download files


Source Distribution

maestro_nerve-0.1.2.tar.gz (2.5 MB, Source)

Built Distribution

maestro_nerve-0.1.2-py3-none-any.whl (339.0 kB, Python 3)

File details

Details for the file maestro_nerve-0.1.2.tar.gz.

File metadata

  • Download URL: maestro_nerve-0.1.2.tar.gz
  • Size: 2.5 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for maestro_nerve-0.1.2.tar.gz

Algorithm    Hash digest
SHA256       ae0bede09ea7775d9743959f3b17cb31094f2c3794d36a80f62497cf9d1c131a
MD5          a6d32f858aace320e3a7727c756cda81
BLAKE2b-256  7fa35d78448fd263f1f15a6d13866356f2b2dcf0c54e8eec64c28ce1fdc3c78e


Provenance

The following attestation bundles were made for maestro_nerve-0.1.2.tar.gz:

Publisher: publish-pypi.yml on maestro-ai-stack/maestro-nerve-internal


File details

Details for the file maestro_nerve-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: maestro_nerve-0.1.2-py3-none-any.whl
  • Size: 339.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for maestro_nerve-0.1.2-py3-none-any.whl

Algorithm    Hash digest
SHA256       c3153a33bd1ea78166d255a039c1bc69705756ee30c429114298bc62dd034dfe
MD5          2750644b740ccfeff11a728e10d65964
BLAKE2b-256  a169a28891cec4290d8c0686431526ad560fde22622f804d48d242590097da64


Provenance

The following attestation bundles were made for maestro_nerve-0.1.2-py3-none-any.whl:

Publisher: publish-pypi.yml on maestro-ai-stack/maestro-nerve-internal

