Reduce Claude, GPT, and Gemini token costs on code questions by 40-70x. Semantic codebase indexing, MCP server, and PostToolUse compression hooks for Claude Code, Cursor, Cline, and Continue.
🧠 NeuralMind
Semantic code intelligence for AI coding agents – smart context retrieval + tool-output compression in one package.
NeuralMind turns a code repository into a queryable neural index. AI agents use it to answer code questions in ~800 tokens instead of loading 50,000+ tokens of raw source.
🌐 Visit the landing page • 📖 Read the About page • ⚠️ Not affiliated with NeuralMind.ai
🔒 Security & Compliance
For enterprises and regulated industries:
- 100% Local Processing – Your code never leaves your machine. All embeddings are generated and stored locally using ChromaDB.
- No External APIs – NeuralMind runs completely offline. No cloud services, no telemetry, no data exfiltration.
- Explainable AI – Every context decision is auditable. Know exactly which code was retrieved (extracted) vs. inferred by the model.
- Open-Source & MIT Licensed – Full transparency. No hidden clauses, no vendor lock-in. Audit the code yourself.
- GDPR/HIPAA-Friendly – Process sensitive code without compliance concerns. All data stays under your control.
For CTOs & Security Teams:
- ✅ Zero external dependencies for code storage
- ✅ Runs behind your firewall or on-premise
- ✅ No collection of usage metrics or code patterns
- ✅ Predictable, reproducible results (not a black box)
- ✅ Works with proprietary and sensitive code
📖 For AI Coding Agents
If you are Claude Code, Codex, Cursor, Cline, or any AI agent with NeuralMind installed, start here.
Check if NeuralMind is active
neuralmind stats .
Expected output when ready:
Project: your-project
Built: True
Nodes: 241
If Built: False, run graphify update . && neuralmind build . first.
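If you are scripting this check, the gate can be sketched in a few lines of Python. The `built` field mirrors the `neuralmind stats . --json` example later in this README; treat the exact schema as an assumption:

```python
import json

# Sketch: decide whether the index needs (re)building from the JSON
# that `neuralmind stats . --json` is documented to emit. The field
# name ("built") is taken from this README's example output.
def needs_build(stats_json: str) -> bool:
    stats = json.loads(stats_json)
    return not stats.get("built", False)

print(needs_build('{"built": false}'))                     # True -> build first
print(needs_build('{"built": true, "total_nodes": 241}'))  # False -> ready
```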
Startup workflow
Every new session, do this first:
neuralmind wakeup .
Or via MCP:
neuralmind_wakeup(project_path=".")
This returns ~365–600 tokens of structured project context:
- Project name and description (from `CLAUDE.md`, `mempalace.yaml`, or the first line of `README.md`)
- How many code entities and clusters are indexed
- Architecture overview: the top 10 code clusters with their entity types and sample names
- Sections from `graphify-out/GRAPH_REPORT.md`, if present
Use this output as your orientation before writing any code. It replaces reading the entire repository.
Decision tree – which tool to call
Need to understand the project?
└─► neuralmind wakeup . (MCP: neuralmind_wakeup) ~400 tokens
Answering a specific code question?
└─► neuralmind query . "question" (MCP: neuralmind_query) ~800–1,100 tokens
About to open a source file?
└─► neuralmind skeleton <file> (MCP: neuralmind_skeleton) ~5–15× cheaper than Read
    ↳ Only fall back to Read when you need the actual implementation body
    ↳ Use NEURALMIND_BYPASS=1 when you truly need raw source
Answering a complex, multi-part question?
└─► neuralmind recursive-query . "q" (MCP: neuralmind_recursive_query) decomposes + synthesizes
Question about reference documents (PDFs, legal, clinical)?
└─► neuralmind query-docs . "q" (MCP: neuralmind_query_docs) searches doc index only
Searching for a specific function/class/entity?
└─► neuralmind search . "term" (MCP: neuralmind_search) ranked by semantic similarity
Made code changes and need to update the index?
└─► neuralmind build . (MCP: neuralmind_build) incremental – only re-embeds changed nodes
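For agent scaffolding, the decision tree above can also be written down as data. This mapping is purely illustrative: the intent labels are invented here, and only the MCP tool names come from the list above.

```python
# Illustrative dispatch table for the decision tree above.
# Intent labels are hypothetical; tool names are the documented MCP tools.
ROUTES = {
    "orient": "neuralmind_wakeup",              # understand the project
    "code_question": "neuralmind_query",        # specific code question
    "open_file": "neuralmind_skeleton",         # about to read a source file
    "complex_question": "neuralmind_recursive_query",
    "doc_question": "neuralmind_query_docs",    # reference documents
    "find_entity": "neuralmind_search",         # locate a function/class
    "after_changes": "neuralmind_build",        # refresh the index
}

def pick_tool(intent: str) -> str:
    # Fall back to a plain query when the intent is unrecognized.
    return ROUTES.get(intent, "neuralmind_query")

print(pick_tool("open_file"))   # neuralmind_skeleton
```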
Understanding the output
wakeup / query output format
## Project: myapp
Full-stack web app for task management. Uses React 18, Node.js, and PostgreSQL.
Knowledge Graph: 241 entities, 23 clusters
Type: Code repository with semantic indexing
## Architecture Overview
### Code Clusters
- Cluster 5 (45 entities): function → authenticate_user, hash_password, verify_token
- Cluster 12 (23 entities): class → UserController, AuthMiddleware, SessionStore
- Cluster 3 (18 entities): function → createTask, updateTask, deleteTask
...
## Relevant Code Areas – query only; absent from wakeup
### Cluster 5 (relevance: 1.73)
Contains: function entities
- authenticate_user (code) → auth.py
- verify_token (code) → auth.py
## Search Results – query only
- AuthMiddleware (score: 0.91) → middleware.py
- jwt_handler (score: 0.85) → auth/jwt.py
---
Tokens: 847 | 59.0x reduction | Layers: L0, L1, L2, L3 | Communities: [5, 12]
Layer meanings:
| Layer | Name | Always loaded | Content |
|---|---|---|---|
| L0 | Identity | ✅ yes | Project name, description, graph size |
| L1 | Summary | ✅ yes | Architecture, top clusters, GRAPH_REPORT sections |
| L2 | On-demand | query only | Top 3 clusters most relevant to the query |
| L3 | Search | query only | Semantic search hits (up to 10) |
skeleton output format
# src/auth/handlers.py (community 5, 8 functions)
## Functions
L12 authenticate_user – Validates credentials and issues JWT
L45 verify_token – Checks JWT signature and expiry
L78 refresh_token – Issues new JWT from a valid refresh token
L102 logout – Revokes refresh token in DB
## Call graph (within this file)
authenticate_user → verify_token, hash_password
refresh_token → verify_token
## Cross-file
verify_token imports_from → utils/jwt.py (high 0.95)
authenticate_user shares_data_with → models/user.py (high 0.91)
[Full source available: Read this file with NEURALMIND_BYPASS=1]
Use skeleton to understand what a file does, how its functions relate, and which other files it depends on – without consuming tokens on the full source body.
search output format
1. authenticate_user (function) - score: 0.92
File: auth/handlers.py Community: 5
2. AuthMiddleware (class) - score: 0.87
File: auth/middleware.py Community: 5
3. hash_password (function) - score: 0.81
File: utils/crypto.py Community: 5
PostToolUse hooks โ what happens automatically
If neuralmind install-hooks has been run for this project (check for .claude/settings.json), Claude Code automatically compresses tool outputs before you see them:
| Tool | What happens | Typical savings |
|---|---|---|
| Read | Raw source → graph skeleton (functions, rationales, call graph) | ~88% |
| Bash | Full output → error lines + warning lines + last 3 lines + summary | ~91% |
| Grep | Unlimited matches → capped at 25 + "N more hidden" pointer | varies |
This is fully automatic – you do not need to call any extra tools.
To bypass compression for a single command (e.g., when you need the full file body):
NEURALMIND_BYPASS=1 <your command>
After making code changes
The index does not auto-update unless a git post-commit hook was installed with `neuralmind init-hook .`. After significant code changes, rebuild manually:
neuralmind build .          # incremental – only re-embeds changed nodes
neuralmind build . --force  # full rebuild – re-embeds everything
MCP tool quick reference
| Tool | When to call | Required params | Returns |
|---|---|---|---|
| `neuralmind_wakeup` | Session start | `project_path` | L0+L1 context string, token count |
| `neuralmind_query` | Code question | `project_path`, `question` | L0–L3 context string, token count, reduction ratio |
| `neuralmind_search` | Find entity | `project_path`, `query` | List of nodes with scores, file paths |
| `neuralmind_skeleton` | Explore file | `project_path`, `file_path` | Functions + rationales + call graph + cross-file edges |
| `neuralmind_recursive_query` | Complex question | `project_path`, `question` | Synthesized answer, sub-queries, gaps, sources |
| `neuralmind_query_docs` | Reference docs | `project_path`, `question` | Relevant doc chunks with source files and relevance scores |
| `neuralmind_stats` | Check status | `project_path` | Built status, node count, community count |
| `neuralmind_build` | Rebuild index | `project_path` | Build stats dict |
| `neuralmind_benchmark` | Measure savings | `project_path` | Per-query token counts and reduction ratios |
⚡ Two-phase optimization
┌───────────────────────────────────────────────────────┐
│ Phase 1: Retrieval – what to fetch                    │
│   neuralmind wakeup .  → ~365 tokens (vs 50K raw)     │
│   neuralmind query "?" → ~800 tokens (vs 2,700 raw)   │
│   neuralmind_skeleton  → graph-backed file view       │
├───────────────────────────────────────────────────────┤
│ Phase 2: Consumption – what the agent actually sees   │
│   PostToolUse hooks compress Read/Bash/Grep output    │
│   File reads → graph skeleton (~88% reduction)        │
│   Bash output → errors + summary (~91% reduction)     │
│   Search results → capped at 25 matches               │
└───────────────────────────────────────────────────────┘
Combined effect: 5–10× total reduction vs baseline Claude Code.
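A back-of-envelope sketch of why per-call reductions of ~60× compose into only ~5–10× per session: the hooks and retrieval only shrink tool output, while prompts and model answers pass through untouched. The numbers below are illustrative, not measurements:

```python
# Session-level arithmetic (illustrative numbers, not benchmarks).
conversation = 10_000   # prompts + model answers: incompressible
tool_raw = 100_000      # raw Read/Bash/Grep output over a session
tool_reduction = 20     # assumed effective reduction on tool output

baseline = conversation + tool_raw
optimized = conversation + tool_raw // tool_reduction
overall = baseline / optimized
print(f"{overall:.1f}x overall")   # lands in the 5-10x band
```

This is the usual Amdahl's-law shape: the incompressible conversation tokens bound the session-level speedup no matter how aggressive the per-call reduction is.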
🎯 The Problem
You: "How does authentication work in my codebase?"
❌ Traditional: Load entire codebase → 50,000 tokens → $0.15–$3.75/query
✅ NeuralMind: Smart context → 766 tokens → $0.002–$0.06/query
💰 Real Savings
| Model | Without NeuralMind | With NeuralMind | Monthly Savings |
|---|---|---|---|
| Claude 3.5 Sonnet | $450/month | $7/month | $443 |
| GPT-4o | $750/month | $12/month | $738 |
| GPT-4.5 | $11,250/month | $180/month | $11,070 |
| Claude Opus | $2,250/month | $36/month | $2,214 |
Based on 100 queries/day. Pricing sources
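The table's method is reproducible with simple arithmetic. A sketch, assuming 100 queries/day, ~50K raw vs ~800 NeuralMind tokens per query, and an assumed $3 per million input tokens (roughly the Sonnet-class rate implied by the table; check current pricing before quoting it):

```python
# Illustrative cost arithmetic behind the savings table.
def monthly_cost(tokens_per_query: float, usd_per_mtok: float,
                 queries_per_day: int = 100, days: int = 30) -> float:
    tokens = tokens_per_query * queries_per_day * days
    return tokens / 1_000_000 * usd_per_mtok

baseline = monthly_cost(50_000, 3.00)   # assumed $3 / 1M input tokens
optimized = monthly_cost(800, 3.00)
print(round(baseline), round(optimized))   # 450 7
```

Swap in your own per-query token counts (from `neuralmind benchmark`) and your model's current rate to get a figure for your team.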
✅ Does it work on your code? Prove it in 5 minutes.
NeuralMind benchmarks itself in CI on every PR. But your codebase isn't our fixture. The only way to know what it does for you is to measure it on your code.
pip install neuralmind graphifyy
cd /path/to/your-project
graphify update . && neuralmind build .
neuralmind benchmark .
You'll get back your actual reduction ratio and per-query token count – typically 30–80× on real repos. No telemetry, nothing uploaded, nothing committed. If the numbers don't justify it, pip uninstall neuralmind and move on – 5 minutes lost.
Want the dollar figure for your team?
neuralmind benchmark . --contribute
That flag produces a ready-to-share JSON blob with your project's numbers, the exact command that produced them, and an estimated monthly savings at your query volume. Paste it into Slack, a design doc, a PR โ or optionally contribute it to the public leaderboard.
Full walkthrough: Does NeuralMind work on your codebase?
🔨 When do I reach for NeuralMind?
Two ways to decide: start with what's annoying you (symptoms), or start with what you're trying to achieve (goals).
Symptoms – "This is happening to me"
| What you notice | Reach for | Why it fixes it |
|---|---|---|
| Claude Code hits context limits mid-task | `neuralmind install-hooks .` | Auto-compresses Read/Bash/Grep before the agent sees them (~88–91%) |
| My monthly LLM bill is climbing | `neuralmind query` + hooks | 40–70× fewer tokens per code question |
| I start every session re-pasting project structure | `neuralmind wakeup .` | ~400 tokens of orientation; pipe into any chat |
| Agent reads a 2,000-line file to answer about one function | `neuralmind skeleton <file>` | Functions + call graph, no body; ~88% cheaper than Read |
| `grep` floods the agent with hundreds of matches | `neuralmind install-hooks .` | Caps at 25 matches with a "N more hidden" pointer |
| The agent is confidently wrong about what my code does | Start session with `wakeup`; ask with `query` | Grounds the model in real structure instead of guessing |
| I want to query my codebase from ChatGPT / Gemini | `neuralmind wakeup . \| pbcopy` | Model-agnostic output; paste into any chat |
| Retrieval feels random across similar questions | `neuralmind learn .` | Cooccurrence-based reranking adapts to your patterns |
| Index feels out of date after a refactor | `neuralmind build .` (or `init-hook` once) | Incremental – only re-embeds changed nodes |
Goals – "What am I trying to solve for?"
| If your goal is… | Do this | Expected outcome |
|---|---|---|
| Cut LLM spend on code Q&A | `install-hooks` + use `query` for questions | 5–10× total reduction vs baseline agent |
| Faster, more grounded agent responses | `wakeup` at session start → `query` / `skeleton` during | Fewer hallucinations; less re-exploration |
| Keep all code local (no SaaS, no telemetry) | Default install – no extra config | 100% offline; nothing leaves the machine |
| Work across Claude + GPT + Gemini with one index | Build once, pipe output into any model | Same context quality, model-agnostic |
| Make retrieval adapt to how your team queries | Enable memory (TTY prompt) + `neuralmind learn .` | Relevance improves on repeat patterns |
| Measure savings for a manager or stakeholder | `neuralmind benchmark . --json` | Per-query tokens, reduction ratios, dollar estimate |
| Auto-refresh the index as code changes | `neuralmind init-hook .` (git post-commit) | Every commit rebuilds incrementally |
Still not sure?
You probably don't need NeuralMind if:
- Your codebase is under ~5K tokens total (just paste the whole thing in).
- You don't use an AI coding agent.
- You only want inline completions – use Copilot or Cursor directly.
You almost certainly want NeuralMind if any row above describes a recurring frustration, or if your LLM bill has crossed the point where a 40–70× reduction is worth 5 minutes of setup.
See the use-case walkthroughs for step-by-step guides matched to your situation.
🤔 Who is NeuralMind for?
| You are… | NeuralMind gives you… |
|---|---|
| A Claude Code user watching your token bill climb | PostToolUse compression on every Read/Bash/Grep + ~60× smaller query context |
| A Cursor user who wants semantic retrieval outside Cursor too | CLI + MCP server that works in any agent with the same index |
| A Cline / Continue user without a built-in codebase index | Drop-in MCP neuralmind_query and neuralmind_skeleton tools |
| Running OpenAI / Gemini / local models | Model-agnostic context – pipe wakeup / query output into any chat |
| A solo developer with a growing monorepo | Incremental rebuilds + learning that adapts to your query patterns |
| A team tech lead worried about LLM spend | Measurable per-query token reduction with neuralmind benchmark |
| A security-conscious engineer or in a regulated industry | 100% local, offline, no code leaves the machine |
| A researcher / hobbyist exploring LLM cost optimization | Open-source reference implementation of two-phase token optimization |
Not a fit if: you need cross-repo search across a whole organization (use Sourcegraph Cody), or you only want inline completions (use Copilot).
🏢 Enterprise Use Cases
NeuralMind solves specific pain points for companies at scale:
Regulated Industries (Finance, Healthcare, Legal, Government)
Challenge: AI tools can't be trusted if they can't explain decisions.
NeuralMind Solution:
- Every recommendation is traceable to extracted code (auditable, not guessed)
- Works 100% on-premise – no cloud, no data transfer, zero exfiltration risk
- Supports GDPR, HIPAA, SOC 2, and ISO 27001 compliance efforts – data never leaves your control
- Explainability by design – see what code fed each decision
Enterprises with Proprietary / Sensitive Code
Challenge: Sending code to external APIs or SaaS models is a legal no-go.
NeuralMind Solution:
- All processing stays on your hardware or internal network
- No ChromaDB cloud – uses local SQLite-compatible storage
- No API keys, no authentication to external vendors
- Process trade secrets, algorithms, and confidential code safely
Large Organizations Scaling AI Coding Assistant Spend
Challenge: 100 developers × Claude Sonnet queries = $50K+/month LLM bill
NeuralMind Solution:
- 40–70× token reduction per query → cut budget by 95%+
- Explicit benchmarking (`neuralmind benchmark`) to show ROI to finance
- Measurable savings: baseline vs. optimized (in dollars)
- Deploy once, benefit across all teams using the same codebase
Internal Platform Teams & Shared Infrastructure
Challenge: Different teams query the same codebase; results are inconsistent.
NeuralMind Solution:
- Build the index once – share across all teams
- Cooccurrence learning adapts to your org's query patterns (`neuralmind learn`)
- Consistent, reproducible context for every question
- Single source of truth for "how does this system work?"
Teams Needing Offline/Disconnected Development
Challenge: Regulated environments, air-gapped networks, or unreliable connectivity.
NeuralMind Solution:
- No internet required after the initial install
- Pre-build the index on a connected machine, ship it in source control
- Works in submarines, rural offices, flight-mode development
- No API rate limiting or service outages
🤖 Why NeuralMind vs. Heuristic-Only
Both approaches are valid; the tradeoff is retrieval quality vs. simplicity.
| Approach | Token Reduction | Accuracy | Deps | Learns Over Time |
|---|---|---|---|---|
| Heuristic-only (no embeddings) | ~33x (~97% fewer tokens) | 70-80% top-5 (community baseline) | None | No |
| NeuralMind | 40-70x | Project-dependent; evaluate against the same top-5 query set | ChromaDB | Yes (cooccurrence patterns) |
NeuralMind does include a dependency (ChromaDB), but it still runs entirely offline – no API calls, no cloud services, no data leaves your machine.
If your priority is strict zero-dependency operation, heuristic-only is the simplest path. If your priority is stronger semantic retrieval and adaptive relevance, NeuralMind is the better fit.
⚖️ NeuralMind vs. Alternatives
Short answers to "why not just use X?". Each row links to a deeper page.
| Compared against | Short verdict |
|---|---|
| Cursor `@codebase` | Works only in Cursor; NeuralMind works in any agent and adds tool-output compression |
| Aider repo-map | Aider is syntactic only; NeuralMind adds semantic retrieval and compression |
| Sourcegraph Cody | Cody is server-hosted and org-wide; NeuralMind is local and per-project |
| Continue / Cline | Those are agent runtimes; NeuralMind is the context/compression layer underneath |
| GitHub Copilot | Copilot is hosted completions; NeuralMind is local context for any agent |
| Windsurf / Codeium | Vertically integrated IDE; NeuralMind is editor- and model-agnostic |
| Claude Projects | Projects reload all files every turn; NeuralMind retrieves only what the query needs |
| Prompt caching | Caching amortizes a big prompt; NeuralMind makes the prompt small – combine both |
| LangChain / LlamaIndex for code | Frameworks you assemble; NeuralMind is the assembled default for code agents |
| Long context windows (1M/2M) | Possible ≠ cheap – NeuralMind gives ~60× cost reduction on the same model |
| Generic RAG over a codebase | Text chunking loses structure; NeuralMind keeps the call graph |
| Tree-sitter / ctags / grep | Deterministic but syntactic; use alongside NeuralMind, not instead of |
Full comparison index: docs/comparisons/.
🚀 Quick Start (humans)
# Install (includes the CLI, semantic indexing, and the MCP server
# for Claude Code, Cursor, Cline, Continue, and any MCP client)
pip install neuralmind graphifyy
# Go to your project
cd your-project
# Generate knowledge graph (requires graphify)
graphify update .
# Build neural index
neuralmind build .
# (Optional) Install Claude Code PostToolUse compression hooks
neuralmind install-hooks .
# (Optional) Auto-rebuild on every git commit
neuralmind init-hook .
# Start using
neuralmind wakeup .
neuralmind query . "How does authentication work?"
neuralmind skeleton src/auth/handlers.py
🧠 How It Works
NeuralMind wraps a graphify knowledge graph (graphify-out/graph.json) in a ChromaDB vector store.
When you query it, a 4-layer progressive disclosure system loads only the context relevant to
your question.
┌───────────────────────────────────────────────────────────────┐
│ Layer 0: Project Identity (~100 tokens) – ALWAYS LOADED       │
│ Source: CLAUDE.md / mempalace.yaml / README first line        │
├───────────────────────────────────────────────────────────────┤
│ Layer 1: Architecture Summary (~500 tokens) – ALWAYS LOADED   │
│ Source: Community distribution + GRAPH_REPORT.md              │
├───────────────────────────────────────────────────────────────┤
│ Layer 2: Relevant Modules (~300–500 tokens) – QUERY-AWARE     │
│ Source: Top 3 clusters semantically matching the query        │
├───────────────────────────────────────────────────────────────┤
│ Layer 3: Semantic Search (~300–500 tokens) – QUERY-AWARE      │
│ Source: ChromaDB similarity search over all graph nodes       │
└───────────────────────────────────────────────────────────────┘
Total: ~800–1,100 tokens vs 50,000+ for the full codebase
Prerequisites: NeuralMind requires `graphify update .` to have been run first. This produces:
- `graphify-out/graph.json` – the knowledge graph (required)
- `graphify-out/GRAPH_REPORT.md` – architecture summary (enriches L1, optional)
- `graphify-out/neuralmind_db/` – ChromaDB vector store (created by `neuralmind build`)
🖥️ Complete CLI Reference
neuralmind build
Build or incrementally update the neural index from graphify-out/graph.json.
neuralmind build [project_path] [--force]
| Argument/Option | Default | Description |
|---|---|---|
| `project_path` | `.` | Project root containing `graphify-out/graph.json` |
| `--force`, `-f` | off | Re-embed every node even if unchanged |
neuralmind build .
neuralmind build /path/to/project --force
Output: nodes processed, added, updated, skipped, communities indexed, build duration.
neuralmind wakeup
Get minimal project context for starting a session (~400–600 tokens, L0 + L1 only).
neuralmind wakeup <project_path> [--json]
neuralmind wakeup .
neuralmind wakeup . --json
neuralmind wakeup . > CONTEXT.md
neuralmind query
Query the codebase with natural language (~800–1,100 tokens, all 4 layers).
neuralmind query <project_path> "<question>" [--json]
neuralmind query . "How does authentication work?"
neuralmind query . "What are the main API endpoints?" --json
neuralmind query /path/to/project "Explain the database schema"
On first run from a TTY, you will be prompted once to enable local query memory logging.
Disable with NEURALMIND_MEMORY=0.
neuralmind search
Direct semantic search – returns code entities ranked by similarity to the query.
neuralmind search <project_path> "<query>" [--n N] [--json]
| Option | Default | Description |
|---|---|---|
| `--n` | 10 | Maximum number of results |
| `--json`, `-j` | off | Machine-readable JSON output |
neuralmind search . "authentication"
neuralmind search . "database connection" --n 5
neuralmind search . "PaymentController" --json
neuralmind skeleton
Print a compact graph-backed view of a file without loading full source (~88% cheaper than Read).
neuralmind skeleton <file_path> [--project-path .] [--json]
| Option | Default | Description |
|---|---|---|
| `--project-path` | `.` | Project root (where the index lives) |
| `--json`, `-j` | off | Machine-readable JSON output |
neuralmind skeleton src/auth/handlers.py
neuralmind skeleton src/auth/handlers.py --project-path /my/project
neuralmind skeleton src/auth/handlers.py --json
Output: function list with line numbers and rationales, internal call graph, cross-file edges (imports, data sharing), and a pointer to the full source for when you need it.
neuralmind benchmark
Measure token reduction using a set of sample queries.
neuralmind benchmark <project_path> [--json]
neuralmind benchmark .
neuralmind benchmark . --json
neuralmind stats
Show index status and statistics.
neuralmind stats <project_path> [--json]
neuralmind stats .
neuralmind stats . --json # {"built": true, "total_nodes": 241, "communities": 23, ...}
neuralmind learn
Analyze logged query history to discover module cooccurrence patterns. Improves future query relevance automatically.
neuralmind learn <project_path>
neuralmind learn .
Reads .neuralmind/memory/query_events.jsonl, writes .neuralmind/learned_patterns.json.
The next neuralmind query applies boosted reranking automatically.
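The cooccurrence step can be pictured in a few lines of Python. This is a sketch, not NeuralMind's implementation: the event shape (`{"modules": [...]}`) is an assumption, and only the file paths above come from the docs.

```python
import collections
import itertools

# Hypothetical sketch of cooccurrence counting over the query log.
# Each event is assumed to record which modules a query surfaced.
def cooccurrence(events: list[dict]) -> collections.Counter:
    pairs: collections.Counter = collections.Counter()
    for event in events:
        modules = sorted(set(event["modules"]))
        for a, b in itertools.combinations(modules, 2):
            pairs[(a, b)] += 1
    return pairs

log = [{"modules": ["auth", "users"]},
       {"modules": ["auth", "users", "db"]}]
print(cooccurrence(log))   # ('auth', 'users') seen twice
```

Pairs with high counts are exactly the "modules that appear together" signal the reranker can later exploit.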
neuralmind install-hooks
Install or remove Claude Code PostToolUse compression hooks.
neuralmind install-hooks [project_path] [--global] [--uninstall]
| Option | Description |
|---|---|
| `--global` | Install in `~/.claude/settings.json` (affects all projects) |
| `--uninstall` | Remove NeuralMind hooks only; preserves other tools' hooks |
neuralmind install-hooks . # project-scoped
neuralmind install-hooks --global # all projects
neuralmind install-hooks --uninstall # remove project hooks
neuralmind install-hooks --uninstall --global # remove global hooks
neuralmind init-hook
Install a Git post-commit hook that auto-rebuilds the index after every commit.
Safe and idempotent – coexists with other tools' hook contributions.
neuralmind init-hook [project_path]
neuralmind init-hook .
neuralmind init-hook /path/to/project
🔌 MCP Server
NeuralMind ships a Model Context Protocol server (neuralmind-mcp) that exposes all tools
to MCP-compatible agents.
Starting the server
neuralmind-mcp
# or
python -m neuralmind.mcp_server
Claude Desktop configuration
{
"mcpServers": {
"neuralmind": {
"command": "neuralmind-mcp",
"args": ["/absolute/path/to/project"]
}
}
}
Config file locations:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- Linux: `~/.config/Claude/claude_desktop_config.json`
Claude Code / Cursor project-scoped auto-registration
Drop a .mcp.json at your project root:
{
"mcpServers": {
"neuralmind": {
"command": "neuralmind-mcp",
"args": ["."]
}
}
}
Hermes-Agent (Nous Research)
Hermes-Agent is a self-improving agent framework that supports MCP servers. NeuralMind has been verified end-to-end against Hermes-Agent v0.12.0 (build 2026.4.30) – the agent discovered all 11 NeuralMind tools (4-second handshake) when registered as shown below.
Prerequisite: install NeuralMind. The MCP server (neuralmind-mcp)
ships with the default install:
pip install neuralmind
Older `pip install "neuralmind[mcp]"` commands still work – the `mcp` extra is preserved as a no-op for backwards compatibility.
Two ways to register the server. Both end up in ~/.hermes/config.yaml:
Option A โ CLI (recommended for first-time setup):
hermes mcp add
Option B โ edit the config directly (~/.hermes/config.yaml, add under
the mcp_servers top-level key):
mcp_servers:
neuralmind:
command: "neuralmind-mcp"
args: ["/absolute/path/to/project"]
Verify the server is registered and reachable:
hermes mcp list            # neuralmind should appear, status ✅
hermes mcp test neuralmind # ✅ Connected, ✅ Tools discovered: 11
If you haven't installed Hermes-Agent yet, the upstream installer is:
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
source ~/.bashrc
After editing the YAML directly, run /reload-mcp from the running hermes
CLI to pick up the change without restarting (the hermes mcp add flow does
this automatically). Both stdio (shown above) and HTTP transports are
supported โ see the upstream
MCP integration docs
for the full schema (command, args, env, url, headers, enabled,
per-server tools filtering, timeout, connect_timeout).
OpenClaw
OpenClaw is a personal AI assistant
that registers MCP servers via its CLI. Verified against OpenClaw 2026.5.2 –
mcp set / mcp list / mcp show round-trip the documented JSON schema
into ~/.openclaw/openclaw.json exactly as expected.
Prerequisite: install NeuralMind (the MCP server ships with the default install):
pip install neuralmind
Register NeuralMind:
openclaw mcp set neuralmind '{"command":"neuralmind-mcp","args":["/absolute/path/to/project"]}'
Verify it landed:
openclaw mcp list # neuralmind should appear
openclaw mcp show neuralmind # echoes the JSON you stored
Remove with openclaw mcp unset neuralmind. Definitions are stored under
the mcp.servers key in ~/.openclaw/openclaw.json.
If you haven't installed OpenClaw yet:
npm install -g openclaw@latest # or: pnpm add -g openclaw@latest
openclaw onboard --install-daemon
OpenClaw's MCP support covers stdio (shown above), SSE, HTTP, and
streamable-http transports โ see the upstream
MCP CLI reference for details on
url/transport config and the inverse direction (openclaw mcp serve,
which exposes OpenClaw's own channels as an MCP server to other clients).
Troubleshooting
"Connection closed" / "Connection failed" right after register. Almost
always means an old NeuralMind install (≤ 0.4.x) where the MCP server was
gated behind the [mcp] extra. From 0.5.0 onward the MCP SDK is bundled.
Fix:
pip install --upgrade neuralmind
Then re-run the host's verify step (hermes mcp test neuralmind or
openclaw mcp list).
neuralmind-mcp: command not found. The package installed but the
console script wasn't put on PATH – usually because pip installed into a
user site-packages dir that isn't on PATH. Add ~/.local/bin to PATH or
reinstall in a venv where the entry point is on PATH.
The host shows neuralmind in mcp list but no tools when you query.
Run neuralmind build /path/to/project first โ the index has to exist
before the MCP tools can answer queries. The hooks (SessionStart,
UserPromptSubmit, PreCompact from neuralmind install-hooks) need a
built index too.
MCP tool schemas
neuralmind_wakeup
{
"project_path": "string (required) – absolute path to project root"
}
Returns:
{
"context": "string",
"tokens": 412,
"reduction_ratio": 121.4,
"layers": ["L0", "L1"]
}
neuralmind_query
{
"project_path": "string (required)",
"question": "string (required) – natural language question"
}
Returns:
{
"context": "string",
"tokens": 847,
"reduction_ratio": 59.0,
"layers": ["L0", "L1", "L2", "L3"],
"communities_loaded": [5, 12],
"search_hits": 8
}
neuralmind_search
{
"project_path": "string (required)",
"query": "string (required)",
"n": 10
}
Returns array of:
{ "id": "node_id", "label": "authenticate_user", "file_type": "code",
"source_file": "auth/handlers.py", "score": 0.92 }
neuralmind_skeleton
{
"project_path": "string (required)",
"file_path": "string (required) – absolute or project-relative path"
}
Returns:
{ "file": "src/auth/handlers.py", "skeleton": "# src/auth/handlers.py ...", "chars": 620, "indexed": true }
neuralmind_recursive_query
Recursively decompose and explore complex questions. Breaks multi-part questions into focused sub-queries, executes them, identifies gaps, and synthesizes results. Searches both code and document indexes.
{
"project_path": "string (required)",
"question": "string (required) – compound question to decompose",
"max_depth": 3,
"include_docs": true
}
Returns:
{
"question": "string",
"answer": "string – synthesized answer",
"sub_queries": [{"query": "string", "results": [...], "source": "string"}],
"depth_reached": 2,
"gaps_identified": ["string"],
"total_queries": 6,
"token_estimate": 4156,
"sources": ["file1.ts", "file2.ts", "doc.md"]
}
When to use: Multi-faceted questions spanning multiple files or concepts, like "How does auth work and what security measures are in place?" or "What is the deployment architecture and how do Cloudflare and Render interact?"
Benchmark: 6x more tokens than standard query, but decomposes compound questions and achieves full term coverage on 3/5 test questions. See graphify-out/RECURSIVE_QUERY_BENCHMARK.md after running benchmark_report.py.
neuralmind_query_docs
Search reference documents (legal, clinical, strategic PDFs/DOCX converted to markdown). NOT for code – use neuralmind_query for code questions.
{
"project_path": "string (required)",
"question": "string (required) – question about reference documents",
"n": 5
}
Returns:
{
"results": [
{
"content": "string – relevant text chunk",
"source_file": "docs/reference/filename.md",
"file_name": "filename.md",
"chunk": "3/12",
"relevance": 0.719
}
],
"total_doc_chunks": 241,
"query": "string"
}
Setup: Documents must be converted to markdown and indexed first:
# Convert documents (PDF, DOCX, TXT, HTML → .md)
pip install pypdf mammoth
python doc_indexer.py build /path/to/project
# Or use the doc-ingest skill for batch conversion
Auto-rebuild: A git post-commit hook can rebuild the doc index when files in docs/reference/ change.
Search reference docs via CLI:
python doc_indexer.py query /path/to/project "HIPAA compliance"
python doc_indexer.py stats /path/to/project
neuralmind_build
{
"project_path": "string (required)",
"force": false
}
Returns:
{
"success": true,
"nodes_total": 241,
"nodes_added": 5,
"nodes_updated": 2,
"nodes_skipped": 234,
"communities": 23,
"duration_seconds": 3.1
}
neuralmind_stats
{ "project_path": "string (required)" }
Returns:
{ "built": true, "total_nodes": 241, "communities": 23, "db_path": "..." }
neuralmind_benchmark
{ "project_path": "string (required)" }
Returns:
{
"project": "myapp",
"wakeup_tokens": 341,
"avg_query_tokens": 739,
"avg_reduction_ratio": 65.6,
"results": [...]
}
🪝 PostToolUse Compression
When neuralmind install-hooks has been run, Claude Code automatically applies these transforms
to every tool output before the agent sees it.
Read → skeleton
Raw source files are replaced with the graph skeleton (functions + rationales + call graph + cross-file edges). This is ~88% smaller and contains the structural information agents need most.
To get the full source anyway:
NEURALMIND_BYPASS=1 <command>
Bash → filtered output
Long bash output is reduced to:
- All error/ERROR/FAIL/traceback/warning lines
- All summary lines (=====, passed, failed, Finished, Done in, etc.)
- Last 3 lines verbatim
- Header: [neuralmind: bash compressed, exit=N]
All errors and failures are always preserved. Routine pip/npm/build chatter is dropped.
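An illustrative re-implementation of that filter (the real hook's exact patterns and exit-code handling may differ):

```python
import re

# Heuristics matching the rules listed above; patterns are approximations.
ERROR_RE = re.compile(r"error|fail|traceback|warning", re.IGNORECASE)
SUMMARY_RE = re.compile(r"=====|passed|failed|Finished|Done in")

def compress_bash_output(text: str, tail: int = 3, max_chars: int = 3000) -> str:
    if len(text) < max_chars:          # small outputs pass through untouched
        return text
    lines = text.splitlines()
    kept = [ln for ln in lines[:-tail] if ERROR_RE.search(ln) or SUMMARY_RE.search(ln)]
    kept += lines[-tail:]              # last `tail` lines verbatim
    header = "[neuralmind: bash compressed, exit=N]"  # exit code is illustrative
    return "\n".join([header] + kept)
```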
Grep → capped results
Search results are capped at 25 matches with a [N more hidden] note appended.
Prevents context flooding from repository-wide searches.
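The capping behavior is simple enough to sketch in a few lines (an illustration of the described rule, not the hook's actual code):

```python
def cap_matches(matches: list[str], limit: int = 25) -> list[str]:
    """Cap search results, appending a note about how many were hidden."""
    if len(matches) <= limit:
        return matches
    hidden = len(matches) - limit
    return matches[:limit] + [f"[{hidden} more hidden]"]
```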
Tunable thresholds
| Variable | Default | Description |
|---|---|---|
| NEURALMIND_BYPASS | unset | Set to 1 to disable all compression |
| NEURALMIND_BASH_TAIL | 3 | Lines to keep verbatim from end of bash output |
| NEURALMIND_BASH_MAX_CHARS | 3000 | Below this size, bash output is not compressed |
| NEURALMIND_SEARCH_MAX | 25 | Max grep/search matches before capping |
| NEURALMIND_OFFLOAD_THRESHOLD | 15000 | Chars above which content is written to a temp file |
🧠 Continual Learning
NeuralMind optionally learns from your query patterns to improve future relevance.
How it works
- Collect – each neuralmind query logs which modules appeared in the result to .neuralmind/memory/query_events.jsonl (opt-in, local only, zero overhead)
- Learn – neuralmind learn . analyzes co-occurrence: which clusters appear together across queries
- Improve – the next neuralmind query applies a +0.3 reranking boost to modules that co-occur with the current query's top matches
- Repeat – the system gets smarter as you use it
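A toy version of the co-occurrence analysis (the real neuralmind learn implementation and its event schema may differ; the "modules" field and the minimum count of 2 are assumptions):

```python
import json
from collections import Counter
from itertools import combinations

def learn_patterns(events_path: str) -> dict:
    """Count module pairs that co-occur in the same query's results."""
    pair_counts = Counter()
    with open(events_path) as f:
        for line in f:
            modules = json.loads(line)["modules"]
            for a, b in combinations(sorted(set(modules)), 2):
                pair_counts[(a, b)] += 1
    # pairs seen together at least twice earn the +0.3 reranking boost
    return {pair: 0.3 for pair, n in pair_counts.items() if n >= 2}
```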
Opt-in / consent
On first TTY query:
NeuralMind can keep local query memory (project + global JSONL) to improve future retrieval.
Enable? [y/N]:
Consent saved to ~/.neuralmind/memory_consent.json. Disable at any time:
export NEURALMIND_MEMORY=0 # disable query logging
export NEURALMIND_LEARNING=0 # disable pattern application
File locations
~/.neuralmind/
├── memory_consent.json       # consent flag
└── memory/
    └── query_events.jsonl    # global event log

<project>/.neuralmind/
├── memory/
│   └── query_events.jsonl    # project-specific events
└── learned_patterns.json     # created by: neuralmind learn .
Privacy
100% local – nothing is sent to any server. Delete ~/.neuralmind/ and <project>/.neuralmind/ at any time to remove all learning data.
⏰ Keeping the Index Fresh
Automatic – Git post-commit hook (recommended)
neuralmind init-hook .
After every commit, the hook runs:
neuralmind build . 2>/dev/null && echo "[neuralmind] OK"
Manual
graphify update .
neuralmind build .
Scheduled – cron
0 6 * * * cd /path/to/project && graphify update . && neuralmind build .
CI/CD – GitHub Actions
- run: pip install neuralmind graphifyy
- run: graphify update . && neuralmind build .
- run: neuralmind wakeup . > AI_CONTEXT.md
🔌 Compatibility
| Component | Works With | Notes |
|---|---|---|
| CLI | Any environment | Pure Python, no daemon required |
| MCP Server | Claude Code, Claude Desktop, Cursor, Cline, Continue, any MCP client | Bundled with pip install neuralmind |
| PostToolUse Hooks | Claude Code only | Uses Claude Code's PostToolUse hook system |
| Git hook | Any git workflow | Appends to existing post-commit, idempotent |
| Copy-paste | ChatGPT, Gemini, any LLM | neuralmind wakeup . \| pbcopy |
Quick-start by tool
Claude Code – full two-phase optimization
pip install neuralmind graphifyy
cd your-project
graphify update .
neuralmind build .
neuralmind install-hooks . # PostToolUse compression
neuralmind init-hook . # auto-rebuild on commit (optional)
Then use MCP tools in sessions: neuralmind_wakeup, neuralmind_query, neuralmind_skeleton.
Cursor / Cline / Continue – MCP server
pip install neuralmind graphifyy
graphify update .
neuralmind build .
Add to your MCP config:
{ "mcpServers": { "neuralmind": { "command": "neuralmind-mcp" } } }
ChatGPT / Gemini / any LLM – CLI + copy-paste
neuralmind wakeup . | pbcopy # macOS – paste into chat
neuralmind query . "question" # get context for a specific question
✨ What's New in v0.4.0 – Brain-like Synapse Layer
NeuralMind now runs as a second brain alongside the LLM: a persistent associative memory that learns continuously from how the agent and the codebase actually interact. See the release notes for the full story.
| Feature | Details |
|---|---|
| Synapse store | SQLite-backed weighted graph; Hebbian reinforce, decay, long-term potentiation |
| Spreading activation | mind.synaptic_neighbors(query) – usage-based recall complementing vector search |
| neuralmind watch daemon | File edits become co-activation signals; the brain learns even when no query runs |
| Three new Claude Code hooks | SessionStart (decay+export), UserPromptSubmit (recall injection), PreCompact (hub normalization) |
| Auto-memory export | Writes SYNAPSE_MEMORY.md to Claude Code's auto-memory dir so associations surface natively |
| Four new MCP tools | synaptic_neighbors, synapse_stats, synapse_decay, export_synapse_memory |
| 3× fewer embedder calls per query | Selector caches one search per query and slices for L2/L3/synapses |
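"Hebbian reinforce, decay, long-term potentiation" can be pictured with a tiny toy model (constants and method names here are made up, not the real synapse-store API):

```python
import time

class Synapse:
    """Toy synapse: strengthens on use, decays exponentially when idle."""

    def __init__(self) -> None:
        self.weight = 0.0
        self.last_used = time.time()

    def _decay(self, half_life_s: float = 86_400.0) -> None:
        # exponential decay toward zero since last use (assumed 1-day half-life)
        dt = time.time() - self.last_used
        self.weight *= 0.5 ** (dt / half_life_s)
        self.last_used = time.time()

    def reinforce(self, amount: float = 0.1) -> None:
        # Hebbian update: decay first, then strengthen, capped at 1.0
        self._decay()
        self.weight = min(1.0, self.weight + amount)
```

Frequently co-activated pairs keep high weights (long-term potentiation); unused associations fade.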
Earlier (v0.3.x)
| Feature | Version | Details |
|---|---|---|
| Memory Collection | v0.3.0 | Local JSONL storage for query events |
| Opt-in Consent | v0.3.0 | One-time TTY prompt, env var overrides |
| EmbeddingBackend abstraction | v0.3.1 | Pluggable vector backend (Pinecone/Weaviate ready) |
| Pattern Learning | v0.3.2 | neuralmind learn . analyzes cooccurrence |
| Smart Reranking | v0.3.2 | L3 results boosted by learned patterns |
| Accurate Build Stats | v0.3.3 | Correctly distinguishes added vs updated nodes |
| Documentation polish | v0.3.4 | CLI flags sync, Setup Guide, agent guidance in README |
📊 Benchmarks
NeuralMind benchmarks itself on every pull request. A hermetic fixture (tests/fixtures/sample_project/) plus a committed query set (tests/fixtures/benchmark_queries.json) runs through the full retrieval pipeline, and CI fails if aggregate reduction drops below a conservative floor (currently 4× on the small fixture – the fixture is intentionally tiny; real repos consistently hit 40–70× as shown below).
What CI measures on every PR
- Phase 1 – Reduction. Naive baseline (every .py file in the fixture concatenated) vs NeuralMind.query() output, per query. All tokens counted with tiktoken.
- Phase 2 – Learning uplift. Same queries run cold, then after seeding memory and running neuralmind learn. Reports the delta in reduction ratio and top-k retrieval hit rate. On a 500-line fixture the numerical uplift is modest by design – the test proves the mechanism persists, not that it's magic.
- Per-model breakdown. GPT-4o and GPT-4/3.5 counts are measured via real tiktoken encodings. Claude uses the Anthropic SDK tokenizer when available, else a clearly labeled estimate derived from published vocab ratios. Llama is always estimated. No fabricated numbers anywhere.
- Memory persistence. tests/test_memory_persistence.py asserts events are logged, neuralmind learn produces a patterns file, and subsequent queries load it without error.
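The Phase 1 measurement boils down to one ratio. A rough sketch, approximating tokens as chars/4 (the real benchmark counts them with tiktoken):

```python
def estimate_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def reduction_ratio(all_source_files: list[str], retrieved_context: str) -> float:
    """Naive baseline (all sources concatenated) vs. retrieved context."""
    baseline = sum(estimate_tokens(src) for src in all_source_files)
    return baseline / estimate_tokens(retrieved_context)
```

A 50,000-token baseline answered with an 800-token context gives a 62.5× reduction, in line with the 40–70× figures reported below.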
Community benchmarks
Real-world numbers submitted by users. Your code never leaves your machine – you submit a PR (or an issue, which a maintainer converts to a PR) with only the numbers. CI validates every entry against the schema and re-renders this table automatically.
| Project | Lang | Nodes | Wakeup | Avg Query | Reduction | Model | Submitted |
|---|---|---|---|---|---|---|---|
| cmmc20 | JavaScript | 241 | 341 | 739 | 65.6× | Claude 3.5 Sonnet | @dfrostar · 2025-10-01 |
| mempalace | Python | 1,626 | 412 | 891 | 46.0× | Claude 3.5 Sonnet | @dfrostar · 2025-10-01 |
2 submission(s). See the JSON data for notes and verification commands.
Submit yours:
- Easy path: open a benchmark submission issue – fill out a form, and a maintainer converts it to a PR.
- PR directly: add an entry to docs/community-benchmarks.json and run python scripts/render_community_table.py --inject README.md to regenerate the table. Schema: community-benchmarks.schema.json.
All entries include the exact neuralmind command that produced them, so reviewers (and any reader) can audit the numbers.
Reproduce locally (on our fixture)
pip install . tiktoken matplotlib graphifyy
graphify update tests/fixtures/sample_project
neuralmind build tests/fixtures/sample_project --force
python -m tests.benchmark.run # phase 1 + phase 2
python -m tests.benchmark.multi_model # per-model breakdown
python scripts/generate_chart.py # refreshes the PNG above
Full machine-readable results land in tests/benchmark/results.json, human-readable report in tests/benchmark/report.md.
Reproduce on your code
Don't just trust numbers from our fixture – run it on your repo:
pip install neuralmind graphifyy
graphify update . && neuralmind build .
neuralmind benchmark . --contribute
Output shows your reduction ratio, tokens per query, and estimated monthly savings at Claude 3.5 Sonnet pricing. Full walkthrough: Does NeuralMind work on your codebase?
Retrieval quality baseline
- Heuristic-only baseline (community-reported): 70–80% top-5 retrieval accuracy
- NeuralMind target on the same query set: exceed that baseline via semantic retrieval + learned cooccurrence reranking
The pytest regression gate (tests/test_benchmark_regression.py) currently enforces ≥50% top-k hit rate on the fixture plus ≥4× reduction (low because the fixture is tiny; real repos measure 10× higher).
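The top-k hit-rate metric that gate enforces is straightforward: the fraction of queries whose expected item appears among the top k results. A sketch (names are illustrative, not the test suite's API):

```python
def top_k_hit_rate(results_per_query: list[list[str]],
                   expected: list[str], k: int = 5) -> float:
    """Fraction of queries whose expected item is in the top-k results."""
    hits = sum(1 for exp, results in zip(expected, results_per_query)
               if exp in results[:k])
    return hits / len(expected)
```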
❓ FAQ
How much does NeuralMind reduce Claude / GPT token costs?
Measured on real repos: 40–70× reduction per query (see Benchmarks). For a team running 100 queries/day on Claude Sonnet, that is roughly $450/month → $7/month. Exact savings depend on codebase size and model pricing.
Does NeuralMind work outside Claude Code?
Yes. The CLI works anywhere Python runs; the MCP server works with Cursor, Cline, Continue, Claude Desktop, and any MCP-compatible agent. For non-MCP tools like ChatGPT or Gemini, neuralmind wakeup . | pbcopy pipes context into a regular chat window. Only the PostToolUse compression hooks are Claude-Code-specific.
Does my code leave my machine?
No. NeuralMind is fully offline – no API calls, no cloud services. Embeddings run locally via ChromaDB, and the knowledge graph is stored in graphify-out/ in your project. Query memory (optional, opt-in) is written to .neuralmind/ on disk.
Is this RAG? How is it different from LangChain or LlamaIndex?
It is a form of RAG, but specialized for code. Instead of chunking text, NeuralMind retrieves over a knowledge graph of code entities (functions, classes, clusters) with a fixed 4-layer structure. That keeps the call graph intact and produces a token-budgeted output instead of a flat list of chunks. See vs. LangChain/LlamaIndex.
I have a 1M context window now โ do I still need this?
Long context makes it possible to stuff a whole repo in; it does not make it cheap. You still pay per input token, so a 50K-token repo at Claude Sonnet rates costs ~$0.15 every turn. NeuralMind drops that to ~$0.002. See vs. long context.
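As a back-of-envelope check of those figures, assuming $3 per million input tokens (Sonnet-class pricing; verify against current rates):

```python
# Per-turn input cost: whole repo in context vs. a NeuralMind query result.
price_per_token = 3.00 / 1_000_000
repo_cost = 50_000 * price_per_token   # whole 50K-token repo every turn
context_cost = 800 * price_per_token   # ~800-token retrieved context
print(f"repo: ${repo_cost:.3f}/turn, retrieved context: ${context_cost:.4f}/turn")
```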
Does it support my language?
Any language graphify supports (Python, JavaScript/TypeScript, and others via tree-sitter). NeuralMind consumes graphify-out/graph.json โ if graphify can index it, NeuralMind can query it.
What is the difference between wakeup, query, and skeleton?
- wakeup – ~400 tokens of project orientation (L0 + L1). Run it at session start.
- query – ~800–1,100 tokens for a specific natural-language question (L0–L3).
- skeleton – compact view of a single file (functions + call graph + cross-file edges). Use before Read.
How does the PostToolUse compression work?
When neuralmind install-hooks . has been run, Claude Code invokes NeuralMind after every Read/Bash/Grep tool call but before the agent sees the output. Read becomes a skeleton (~88% smaller), Bash keeps errors + last 3 lines (~91% smaller), Grep caps at 25 matches. Set NEURALMIND_BYPASS=1 on any command to opt out.
Can I use NeuralMind without a knowledge graph?
No โ the knowledge graph (graphify-out/graph.json) is the source of truth. Run graphify update . first, then neuralmind build ..
Does it auto-update when I change code?
Only if you install the git post-commit hook with neuralmind init-hook .. Otherwise run neuralmind build . manually; it is incremental and only re-embeds changed nodes.
What do I do if retrieval quality is poor on my repo?
- Check that neuralmind stats . reports all your nodes indexed.
- Run neuralmind benchmark . to see reduction ratios.
- Enable query memory (it prompts on first TTY run) and periodically run neuralmind learn . – co-occurrence-based reranking improves relevance on your actual queries.
- Open an issue with the query and expected result – retrieval quality is the thing we most want to improve.
📚 Documentation
| Resource | Contents |
|---|---|
| Setup Guide | First-time setup for Claude Code, Claude Desktop, Cursor, any LLM |
| CLI Reference | All commands and options |
| Scheduling Guide | Automate audits with Windows Task Scheduler, GitHub Actions, or cron |
| Version Strategy | Versioning policy, breaking changes, support timeline, upgrade path |
| Compatibility Matrix | Version compatibility, Python/platform support, known issues, migration guides |
| Learning Guide | Continual learning details |
| API Reference | Python API (NeuralMind, ContextResult, TokenBudget) |
| Architecture | 4-layer progressive disclosure design |
| Integration Guide | MCP, CI/CD, VS Code, JetBrains |
| Troubleshooting | Common issues and fixes |
| Future-Proofing Plan | 8-initiative roadmap for sustainability and scale |
| Brain-like Learning | Design rationale for the learning system |
| Use Cases | Step-by-step walkthroughs: Claude Code, cost optimization, any-LLM, offline/regulated, growing monorepo |
| Comparisons | NeuralMind vs. Cursor, Copilot, Cody, Aider, Claude Projects, LangChain, long context, prompt caching, RAG, tree-sitter |
| USAGE.md | Extended usage examples |
🤝 Contributing
See CONTRIBUTING.md for guidelines.
📄 License
MIT License – see LICENSE for details.
⭐ Star this repo if NeuralMind saves you money!