Adaptive Neural Knowledge System + PostToolUse compression hooks. Two-phase token optimization (retrieval + consumption) for Claude Code.
Project description
🧠 NeuralMind
Semantic code intelligence for AI coding agents: smart context retrieval + tool-output compression in one package.
NeuralMind turns a code repository into a queryable neural index. AI agents use it to answer code questions in ~800 tokens instead of loading 50,000+ tokens of raw source.
👉 For AI Coding Agents
If you are Claude Code, Codex, Cursor, Cline, or any AI agent with NeuralMind installed, start here.
Check if NeuralMind is active
neuralmind stats .
Expected output when ready:
Project: your-project
Built: True
Nodes: 241
If Built: False, run graphify update . && neuralmind build . first.
Startup workflow
Every new session, do this first:
neuralmind wakeup .
Or via MCP:
neuralmind_wakeup(project_path=".")
This returns ~365–600 tokens of structured project context:
- Project name and description (from CLAUDE.md, mempalace.yaml, or the first line of README.md)
- How many code entities and clusters are indexed
- Architecture overview: top 10 code clusters with their entity types and sample names
- Sections from graphify-out/GRAPH_REPORT.md, if present
Use this output as your orientation before writing any code. It replaces reading the entire repository.
Decision tree: which tool to call
Need to understand the project?
└──▸ neuralmind wakeup . (MCP: neuralmind_wakeup) ~400 tokens
Answering a specific code question?
└──▸ neuralmind query . "question" (MCP: neuralmind_query) ~800–1,100 tokens
About to open a source file?
└──▸ neuralmind skeleton <file> (MCP: neuralmind_skeleton) ~5–15× cheaper than Read
     ⚠ Only fall back to Read when you need the actual implementation body
     ⚠ Use NEURALMIND_BYPASS=1 when you truly need raw source
Searching for a specific function/class/entity?
└──▸ neuralmind search . "term" (MCP: neuralmind_search) ranked by semantic similarity
Made code changes and need to update the index?
└──▸ neuralmind build . (MCP: neuralmind_build) incremental: only re-embeds changed nodes
Understanding the output
wakeup / query output format
## Project: myapp
Full-stack web app for task management. Uses React 18, Node.js, and PostgreSQL.
Knowledge Graph: 241 entities, 23 clusters
Type: Code repository with semantic indexing
## Architecture Overview
### Code Clusters
- Cluster 5 (45 entities): function → authenticate_user, hash_password, verify_token
- Cluster 12 (23 entities): class → UserController, AuthMiddleware, SessionStore
- Cluster 3 (18 entities): function → createTask, updateTask, deleteTask
...
## Relevant Code Areas (query only; absent from wakeup)
### Cluster 5 (relevance: 1.73)
Contains: function entities
- authenticate_user (code) → auth.py
- verify_token (code) → auth.py
## Search Results (query only)
- AuthMiddleware (score: 0.91) → middleware.py
- jwt_handler (score: 0.85) → auth/jwt.py
---
Tokens: 847 | 59.0x reduction | Layers: L0, L1, L2, L3 | Communities: [5, 12]
Layer meanings:
| Layer | Name | Always loaded | Content |
|---|---|---|---|
| L0 | Identity | ✓ yes | Project name, description, graph size |
| L1 | Summary | ✓ yes | Architecture, top clusters, GRAPH_REPORT sections |
| L2 | On-demand | query only | Top 3 clusters most relevant to the query |
| L3 | Search | query only | Semantic search hits (up to 10) |
skeleton output format
# src/auth/handlers.py (community 5, 8 functions)
## Functions
L12 authenticate_user – Validates credentials and issues JWT
L45 verify_token – Checks JWT signature and expiry
L78 refresh_token – Issues new JWT from a valid refresh token
L102 logout – Revokes refresh token in DB
## Call graph (within this file)
authenticate_user → verify_token, hash_password
refresh_token → verify_token
## Cross-file
verify_token imports_from → utils/jwt.py (high 0.95)
authenticate_user shares_data_with → models/user.py (high 0.91)
[Full source available: Read this file with NEURALMIND_BYPASS=1]
Use skeleton to understand what a file does, how its functions relate, and which other files it depends on, without consuming tokens on the full source body.
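For intuition, a rough approximation of such a skeleton can be derived from Python source alone with the standard ast module. This is an illustrative sketch only: NeuralMind's real skeletons come from the graphify knowledge graph, and sketch_skeleton is a hypothetical helper name, not part of the package.

```python
import ast

def sketch_skeleton(source: str, filename: str) -> str:
    """Build a compact skeleton (names, line numbers, docstring summaries,
    intra-file call graph) from Python source. Illustrative only."""
    tree = ast.parse(source)
    funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    names = {f.name for f in funcs}
    lines = [f"# {filename}", "## Functions"]
    for f in funcs:
        doc = (ast.get_docstring(f) or "").splitlines()
        entry = f"L{f.lineno} {f.name}"
        if doc:
            entry += f" - {doc[0]}"  # first docstring line as rationale
        lines.append(entry)
    lines.append("## Call graph (within this file)")
    for f in funcs:
        # calls to other functions defined in this same file
        called = {n.func.id for n in ast.walk(f)
                  if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
                  and n.func.id in names}
        if called:
            lines.append(f"{f.name} -> {', '.join(sorted(called))}")
    return "\n".join(lines)
```

Cross-file edges (imports, data sharing) are the part a per-file parse cannot see; that is what the graph index adds.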
search output format
1. authenticate_user (function) - score: 0.92
File: auth/handlers.py Community: 5
2. AuthMiddleware (class) - score: 0.87
File: auth/middleware.py Community: 5
3. hash_password (function) - score: 0.81
File: utils/crypto.py Community: 5
PostToolUse hooks: what happens automatically
If neuralmind install-hooks has been run for this project (check for .claude/settings.json), Claude Code automatically compresses tool outputs before you see them:
| Tool | What happens | Typical savings |
|---|---|---|
| Read | Raw source → graph skeleton (functions, rationales, call graph) | ~88% |
| Bash | Full output → error lines + warning lines + last 3 lines + summary | ~91% |
| Grep | Unlimited matches → capped at 25 + "N more hidden" pointer | varies |
This is fully automatic: you do not need to call any extra tools.
To bypass compression for a single command (e.g., when you need the full file body):
NEURALMIND_BYPASS=1 <your command>
After making code changes
The index does not auto-update unless a git post-commit hook was installed with neuralmind init-hook (see below). After significant code changes, rebuild manually:
neuralmind build . # incremental: only re-embeds changed nodes
neuralmind build . --force # full rebuild: re-embeds everything
MCP tool quick reference
| Tool | When to call | Required params | Returns |
|---|---|---|---|
| neuralmind_wakeup | Session start | project_path | L0+L1 context string, token count |
| neuralmind_query | Code question | project_path, question | L0–L3 context string, token count, reduction ratio |
| neuralmind_search | Find entity | project_path, query | List of nodes with scores, file paths |
| neuralmind_skeleton | Explore file | project_path, file_path | Functions + rationales + call graph + cross-file edges |
| neuralmind_stats | Check status | project_path | Built status, node count, community count |
| neuralmind_build | Rebuild index | project_path | Build stats dict |
| neuralmind_benchmark | Measure savings | project_path | Per-query token counts and reduction ratios |
⚡ Two-phase optimization
┌──────────────────────────────────────────────────────┐
│ Phase 1: Retrieval (what to fetch)                   │
│   neuralmind wakeup .  → ~365 tokens (vs 50K raw)    │
│   neuralmind query "?" → ~800 tokens (vs 2,700 raw)  │
│   neuralmind_skeleton  → graph-backed file view      │
├──────────────────────────────────────────────────────┤
│ Phase 2: Consumption (what the agent actually sees)  │
│   PostToolUse hooks compress Read/Bash/Grep output   │
│   File reads     → graph skeleton (~88% reduction)   │
│   Bash output    → errors + summary (~91% reduction) │
│   Search results → capped at 25 matches              │
└──────────────────────────────────────────────────────┘
Combined effect: 5–10× total reduction vs baseline Claude Code.
🎯 The Problem
You: "How does authentication work in my codebase?"
❌ Traditional: Load entire codebase → 50,000 tokens → $0.15–$3.75/query
✅ NeuralMind: Smart context → 766 tokens → $0.002–$0.06/query
💰 Real Savings
| Model | Without NeuralMind | With NeuralMind | Monthly Savings |
|---|---|---|---|
| Claude 3.5 Sonnet | $450/month | $7/month | $443 |
| GPT-4o | $750/month | $12/month | $738 |
| GPT-4.5 | $11,250/month | $180/month | $11,070 |
| Claude Opus | $2,250/month | $36/month | $2,214 |
Based on 100 queries/day (see pricing sources).
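The table's arithmetic can be reproduced directly. The token counts and the example price ($3 per million input tokens for Claude 3.5 Sonnet) are illustrative assumptions for this sketch, not quoted rates:

```python
# Rough monthly-cost arithmetic behind the savings table.
# Assumptions: 100 queries/day, 30 days/month, pricing per 1M input tokens.
QUERIES_PER_MONTH = 100 * 30

def monthly_cost(tokens_per_query: int, usd_per_million_tokens: float) -> float:
    return QUERIES_PER_MONTH * tokens_per_query * usd_per_million_tokens / 1_000_000

# Assuming ~$3 / 1M input tokens:
without = monthly_cost(50_000, 3.0)   # full-codebase context every query
with_nm = monthly_cost(800, 3.0)      # NeuralMind context every query
print(f"${without:.0f}/month vs ${with_nm:.2f}/month")  # $450/month vs $7.20/month
```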
🚀 Quick Start (humans)
# Install
pip install neuralmind graphifyy
# Go to your project
cd your-project
# Generate knowledge graph (requires graphify)
graphify update .
# Build neural index
neuralmind build .
# (Optional) Install Claude Code PostToolUse compression hooks
neuralmind install-hooks .
# (Optional) Auto-rebuild on every git commit
neuralmind init-hook .
# Start using
neuralmind wakeup .
neuralmind query . "How does authentication work?"
neuralmind skeleton src/auth/handlers.py
🧠 How It Works
NeuralMind wraps a graphify knowledge graph (graphify-out/graph.json) in a ChromaDB vector store.
When you query it, a 4-layer progressive disclosure system loads only the context relevant to
your question.
┌─────────────────────────────────────────────────────────────┐
│ Layer 0: Project Identity (~100 tokens) → ALWAYS LOADED     │
│   Source: CLAUDE.md / mempalace.yaml / README first line    │
├─────────────────────────────────────────────────────────────┤
│ Layer 1: Architecture Summary (~500 tokens) → ALWAYS LOADED │
│   Source: Community distribution + GRAPH_REPORT.md          │
├─────────────────────────────────────────────────────────────┤
│ Layer 2: Relevant Modules (~300–500 tokens) → QUERY-AWARE   │
│   Source: Top 3 clusters semantically matching the query    │
├─────────────────────────────────────────────────────────────┤
│ Layer 3: Semantic Search (~300–500 tokens) → QUERY-AWARE    │
│   Source: ChromaDB similarity search over all graph nodes   │
└─────────────────────────────────────────────────────────────┘
Total: ~800–1,100 tokens vs 50,000+ for the full codebase
Prerequisites: NeuralMind requires graphify update . to have been run first. This produces:
- graphify-out/graph.json – the knowledge graph (required)
- graphify-out/GRAPH_REPORT.md – architecture summary (enriches L1, optional)
- graphify-out/neuralmind_db/ – ChromaDB vector store (created by neuralmind build)
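The always-loaded vs query-aware split above can be sketched in a few lines. This is a hypothetical structure for illustration; the per-layer budgets are the nominal figures from the diagram, not exact values:

```python
# Sketch of 4-layer progressive disclosure: L0/L1 always load,
# L2/L3 only when a query is present. Budgets are nominal.
LAYERS = {
    "L0": {"tokens": 100, "query_only": False},  # project identity
    "L1": {"tokens": 500, "query_only": False},  # architecture summary
    "L2": {"tokens": 400, "query_only": True},   # relevant modules
    "L3": {"tokens": 400, "query_only": True},   # semantic search hits
}

def select_layers(query=None):
    """Return (layer names, total nominal token budget) for this call."""
    chosen = [name for name, spec in LAYERS.items()
              if not spec["query_only"] or query is not None]
    return chosen, sum(LAYERS[n]["tokens"] for n in chosen)

print(select_layers())                        # wakeup: L0+L1 only
print(select_layers("How does auth work?"))   # query: all four layers
```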
🖥️ Complete CLI Reference
neuralmind build
Build or incrementally update the neural index from graphify-out/graph.json.
neuralmind build [project_path] [--force]
| Argument/Option | Default | Description |
|---|---|---|
| project_path | . | Project root containing graphify-out/graph.json |
| --force, -f | off | Re-embed every node even if unchanged |
neuralmind build .
neuralmind build /path/to/project --force
Output: nodes processed, added, updated, skipped, communities indexed, build duration.
neuralmind wakeup
Get minimal project context for starting a session (~400–600 tokens, L0 + L1 only).
neuralmind wakeup <project_path> [--json]
neuralmind wakeup .
neuralmind wakeup . --json
neuralmind wakeup . > CONTEXT.md
neuralmind query
Query the codebase with natural language (~800–1,100 tokens, all 4 layers).
neuralmind query <project_path> "<question>" [--json]
neuralmind query . "How does authentication work?"
neuralmind query . "What are the main API endpoints?" --json
neuralmind query /path/to/project "Explain the database schema"
On first run from a TTY, you will be prompted once to enable local query memory logging.
Disable with NEURALMIND_MEMORY=0.
neuralmind search
Direct semantic search: returns code entities ranked by similarity to the query.
neuralmind search <project_path> "<query>" [--n N] [--json]
| Option | Default | Description |
|---|---|---|
| --n | 10 | Maximum number of results |
| --json, -j | off | Machine-readable JSON output |
neuralmind search . "authentication"
neuralmind search . "database connection" --n 5
neuralmind search . "PaymentController" --json
neuralmind skeleton
Print a compact graph-backed view of a file without loading full source (~88% cheaper than Read).
neuralmind skeleton <file_path> [--project-path .] [--json]
| Option | Default | Description |
|---|---|---|
| --project-path | . | Project root (where the index lives) |
| --json, -j | off | Machine-readable JSON output |
neuralmind skeleton src/auth/handlers.py
neuralmind skeleton src/auth/handlers.py --project-path /my/project
neuralmind skeleton src/auth/handlers.py --json
Output: function list with line numbers and rationales, internal call graph, cross-file edges (imports, data sharing), and a pointer to the full source for when you need it.
neuralmind benchmark
Measure token reduction using a set of sample queries.
neuralmind benchmark <project_path> [--json]
neuralmind benchmark .
neuralmind benchmark . --json
neuralmind stats
Show index status and statistics.
neuralmind stats <project_path> [--json]
neuralmind stats .
neuralmind stats . --json # {"built": true, "total_nodes": 241, "communities": 23, ...}
neuralmind learn
Analyze logged query history to discover module cooccurrence patterns. Improves future query relevance automatically.
neuralmind learn <project_path>
neuralmind learn .
Reads .neuralmind/memory/query_events.jsonl, writes .neuralmind/learned_patterns.json.
The next neuralmind query applies boosted reranking automatically.
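Conceptually, the co-occurrence analysis resembles the following sketch. The JSONL schema (a "communities" list per event) and the function name are assumptions for illustration, not NeuralMind's actual format:

```python
import itertools
import json
from collections import Counter

def learn_patterns(events_path: str) -> dict:
    """Count which community pairs co-occur across logged queries.
    Sketch only: the real query_events.jsonl schema may differ."""
    pair_counts = Counter()
    with open(events_path) as fh:
        for line in fh:
            if not line.strip():
                continue
            communities = sorted(set(json.loads(line).get("communities", [])))
            # every unordered pair of communities seen in one query event
            for a, b in itertools.combinations(communities, 2):
                pair_counts[(a, b)] += 1
    return {f"{a},{b}": n for (a, b), n in pair_counts.items()}
```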
neuralmind install-hooks
Install or remove Claude Code PostToolUse compression hooks.
neuralmind install-hooks [project_path] [--global] [--uninstall]
| Option | Description |
|---|---|
| --global | Install in ~/.claude/settings.json (affects all projects) |
| --uninstall | Remove NeuralMind hooks only; preserves other tools' hooks |
neuralmind install-hooks . # project-scoped
neuralmind install-hooks --global # all projects
neuralmind install-hooks --uninstall # remove project hooks
neuralmind install-hooks --uninstall --global # remove global hooks
neuralmind init-hook
Install a Git post-commit hook that auto-rebuilds the index after every commit.
Safe and idempotent; coexists with other tools' hook contributions.
neuralmind init-hook [project_path]
neuralmind init-hook .
neuralmind init-hook /path/to/project
🔌 MCP Server
NeuralMind ships a Model Context Protocol server (neuralmind-mcp) that exposes all tools
to MCP-compatible agents.
Starting the server
neuralmind-mcp
# or
python -m neuralmind.mcp_server
Claude Desktop configuration
{
"mcpServers": {
"neuralmind": {
"command": "neuralmind-mcp",
"args": ["/absolute/path/to/project"]
}
}
}
Config file locations:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json
Claude Code / Cursor project-scoped auto-registration
Drop a .mcp.json at your project root:
{
"mcpServers": {
"neuralmind": {
"command": "neuralmind-mcp",
"args": ["."]
}
}
}
MCP tool schemas
neuralmind_wakeup
{
"project_path": "string (required) – absolute path to project root"
}
Returns:
{
"context": "string",
"tokens": 412,
"reduction_ratio": 121.4,
"layers": ["L0", "L1"]
}
neuralmind_query
{
"project_path": "string (required)",
"question": "string (required) – natural language question"
}
Returns:
{
"context": "string",
"tokens": 847,
"reduction_ratio": 59.0,
"layers": ["L0", "L1", "L2", "L3"],
"communities_loaded": [5, 12],
"search_hits": 8
}
neuralmind_search
{
"project_path": "string (required)",
"query": "string (required)",
"n": 10
}
Returns array of:
{ "id": "node_id", "label": "authenticate_user", "file_type": "code",
"source_file": "auth/handlers.py", "score": 0.92 }
neuralmind_skeleton
{
"project_path": "string (required)",
"file_path": "string (required) – absolute or project-relative path"
}
Returns:
{ "file": "src/auth/handlers.py", "skeleton": "# src/auth/handlers.py ...", "chars": 620, "indexed": true }
neuralmind_build
{
"project_path": "string (required)",
"force": false
}
Returns:
{
"success": true,
"nodes_total": 241,
"nodes_added": 5,
"nodes_updated": 2,
"nodes_skipped": 234,
"communities": 23,
"duration_seconds": 3.1
}
neuralmind_stats
{ "project_path": "string (required)" }
Returns:
{ "built": true, "total_nodes": 241, "communities": 23, "db_path": "..." }
neuralmind_benchmark
{ "project_path": "string (required)" }
Returns:
{
"project": "myapp",
"wakeup_tokens": 341,
"avg_query_tokens": 739,
"avg_reduction_ratio": 65.6,
"results": [...]
}
🪝 PostToolUse Compression
When neuralmind install-hooks has been run, Claude Code automatically applies these transforms
to every tool output before the agent sees it.
Read → skeleton
Raw source files are replaced with the graph skeleton (functions + rationales + call graph + cross-file edges). This is ~88% smaller and contains the structural information agents need most.
To get the full source anyway:
NEURALMIND_BYPASS=1 <command>
Bash → filtered output
Long bash output is reduced to:
- All error/ERROR/FAIL/traceback/warning lines
- All summary lines (=====, passed, failed, Finished, Done in, etc.)
- Last 3 lines verbatim
- Header: [neuralmind: bash compressed, exit=N]
All errors and failures are always preserved. Routine pip/npm/build chatter is dropped.
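A minimal sketch of that Bash transform, assuming the heuristics described above (the exact patterns NeuralMind matches may differ):

```python
import re

# Illustrative keep-patterns: error-ish lines and test/build summary lines.
ERROR_RE = re.compile(r"error|fail|traceback|warning", re.IGNORECASE)
SUMMARY_RE = re.compile(r"={5}|passed|failed|Finished|Done in")

def compress_bash(output: str, exit_code: int, tail: int = 3,
                  max_chars: int = 3000) -> str:
    """Keep error/summary lines plus the last `tail` lines verbatim.
    Sketch of the described transform, not the actual implementation."""
    if len(output) <= max_chars:
        return output  # small outputs pass through untouched
    lines = output.splitlines()
    keep = [ln for ln in lines[:-tail]
            if ERROR_RE.search(ln) or SUMMARY_RE.search(ln)]
    header = f"[neuralmind: bash compressed, exit={exit_code}]"
    return "\n".join([header, *keep, *lines[-tail:]])
```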
Grep → capped results
Search results are capped at 25 matches with a [N more hidden] note appended.
Prevents context flooding from repository-wide searches.
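The capping transform is simple enough to sketch in full (illustrative; cap_matches is a hypothetical name, not a NeuralMind API):

```python
def cap_matches(matches: list, limit: int = 25) -> list:
    """Keep the first `limit` matches and append a pointer to how
    many were hidden. Sketch of the described Grep transform."""
    if len(matches) <= limit:
        return matches
    return matches[:limit] + [f"[{len(matches) - limit} more hidden]"]
```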
Tunable thresholds
| Variable | Default | Description |
|---|---|---|
| NEURALMIND_BYPASS | unset | Set to 1 to disable all compression |
| NEURALMIND_BASH_TAIL | 3 | Lines to keep verbatim from the end of bash output |
| NEURALMIND_BASH_MAX_CHARS | 3000 | Below this size, bash output is not compressed |
| NEURALMIND_SEARCH_MAX | 25 | Max grep/search matches before capping |
| NEURALMIND_OFFLOAD_THRESHOLD | 15000 | Chars above which content is written to a temp file |
🧠 Continual Learning
NeuralMind optionally learns from your query patterns to improve future relevance.
How it works
- Collect: each neuralmind query logs which modules appeared in the result to .neuralmind/memory/query_events.jsonl (opt-in, local only, zero overhead)
- Learn: neuralmind learn . analyzes co-occurrence, i.e. which clusters appear together across queries
- Improve: the next neuralmind query applies a +0.3 reranking boost to modules that co-occur with the current query's top matches
- Repeat: the system gets smarter as you use it
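The boost in the Improve step could be applied as in this sketch, assuming hits arrive as (label, community, score) tuples and learned patterns are stored as sorted community-ID pairs; both are assumptions for illustration:

```python
def rerank(hits, learned_pairs, boost=0.3):
    """Boost hits whose community co-occurs (per learned_pairs) with the
    top hit's community, then re-sort by score. Illustrative sketch."""
    if not hits:
        return hits
    top_comm = hits[0][1]
    boosted = []
    for label, comm, score in hits:
        pair = (min(comm, top_comm), max(comm, top_comm))
        if comm != top_comm and pair in learned_pairs:
            score += boost  # learned co-occurrence bonus
        boosted.append((label, comm, score))
    return sorted(boosted, key=lambda h: h[2], reverse=True)
```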
Opt-in / consent
On first TTY query:
NeuralMind can keep local query memory (project + global JSONL) to improve future retrieval.
Enable? [y/N]:
Consent saved to ~/.neuralmind/memory_consent.json. Disable at any time:
export NEURALMIND_MEMORY=0 # disable query logging
export NEURALMIND_LEARNING=0 # disable pattern application
File locations
~/.neuralmind/
├── memory_consent.json        # consent flag
└── memory/
    └── query_events.jsonl     # global event log
<project>/.neuralmind/
├── memory/
│   └── query_events.jsonl     # project-specific events
└── learned_patterns.json      # created by: neuralmind learn .
Privacy
100% local: nothing is sent to any server. Delete ~/.neuralmind/ and <project>/.neuralmind/
at any time to remove all learning data.
⏰ Keeping the Index Fresh
Automatic: Git post-commit hook (recommended)
neuralmind init-hook .
After every commit, the hook runs:
neuralmind build . 2>/dev/null && echo "[neuralmind] OK"
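The idempotent install can be pictured as in this sketch (illustrative; the real init-hook may differ in details such as shell detection or hook ordering):

```python
import os
import stat

HOOK_LINE = 'neuralmind build . 2>/dev/null && echo "[neuralmind] OK"\n'

def install_post_commit(git_dir: str) -> None:
    """Append the rebuild line to .git/hooks/post-commit only if absent,
    creating the hook file if needed. Sketch, not the actual code."""
    path = os.path.join(git_dir, "hooks", "post-commit")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    existing = open(path).read() if os.path.exists(path) else "#!/bin/sh\n"
    if HOOK_LINE not in existing:  # idempotent: never append twice
        with open(path, "w") as fh:
            fh.write(existing + HOOK_LINE)
    # ensure the hook is executable
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
```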
Manual
graphify update .
neuralmind build .
Scheduled: cron
0 6 * * * cd /path/to/project && graphify update . && neuralmind build .
CI/CD: GitHub Actions
- run: pip install neuralmind graphifyy
- run: graphify update . && neuralmind build .
- run: neuralmind wakeup . > AI_CONTEXT.md
🔗 Compatibility
| Component | Works With | Notes |
|---|---|---|
| CLI | Any environment | Pure Python, no daemon required |
| MCP Server | Claude Code, Claude Desktop, Cursor, Cline, Continue, any MCP client | pip install "neuralmind[mcp]" |
| PostToolUse Hooks | Claude Code only | Uses Claude Code's PostToolUse hook system |
| Git hook | Any git workflow | Appends to existing post-commit, idempotent |
| Copy-paste | ChatGPT, Gemini, any LLM | neuralmind wakeup . \| pbcopy |
Quick-start by tool
Claude Code: full two-phase optimization
pip install neuralmind graphifyy
cd your-project
graphify update .
neuralmind build .
neuralmind install-hooks . # PostToolUse compression
neuralmind init-hook . # auto-rebuild on commit (optional)
Then use MCP tools in sessions: neuralmind_wakeup, neuralmind_query, neuralmind_skeleton.
Cursor / Cline / Continue: MCP server
pip install "neuralmind[mcp]" graphifyy
graphify update .
neuralmind build .
Add to your MCP config:
{ "mcpServers": { "neuralmind": { "command": "neuralmind-mcp" } } }
ChatGPT / Gemini / any LLM: CLI + copy-paste
neuralmind wakeup . | pbcopy # macOS: paste into chat
neuralmind query . "question" # get context for a specific question
✨ What's New in v0.3.x
| Feature | Version | Details |
|---|---|---|
| Memory Collection | v0.3.0 | Local JSONL storage for query events |
| Opt-in Consent | v0.3.0 | One-time TTY prompt, env var overrides |
| EmbeddingBackend abstraction | v0.3.1 | Pluggable vector backend (Pinecone/Weaviate ready) |
| Pattern Learning | v0.3.2 | neuralmind learn . analyzes cooccurrence |
| Smart Reranking | v0.3.2 | L3 results boosted by learned patterns |
| Accurate Build Stats | v0.3.3 | Correctly distinguishes added vs updated nodes |
| Documentation polish | v0.3.4 | CLI flags sync, Setup Guide, agent guidance in README |
📊 Benchmarks
| Project | Nodes | Wakeup | Avg Query | Avg Reduction |
|---|---|---|---|---|
| cmmc20 (React/Node) | 241 | 341 tokens | 739 tokens | 65.6x |
| mempalace (Python) | 1,626 | 412 tokens | 891 tokens | 46.0x |
📚 Documentation
| Resource | Contents |
|---|---|
| Setup Guide | First-time setup for Claude Code, Claude Desktop, Cursor, any LLM |
| CLI Reference | All commands and options |
| Learning Guide | Continual learning details |
| API Reference | Python API (NeuralMind, ContextResult, TokenBudget) |
| Architecture | 4-layer progressive disclosure design |
| Integration Guide | MCP, CI/CD, VS Code, JetBrains |
| Troubleshooting | Common issues and fixes |
| Brain-like Learning | Design rationale for the learning system |
| USAGE.md | Extended usage examples |
🤝 Contributing
See CONTRIBUTING.md for guidelines.
📄 License
MIT License; see LICENSE for details.
⭐ Star this repo if NeuralMind saves you money!
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file neuralmind-0.3.4.tar.gz.
File metadata
- Download URL: neuralmind-0.3.4.tar.gz
- Upload date:
- Size: 52.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c3758513f94f3bf18f58b20d6b2569e248966de434e20006eb29afe4c576dbea |
| MD5 | 30bcf36d729c23eb76783bd38365d5d6 |
| BLAKE2b-256 | 377c7e6fa774408b3a0182d265d9e34c074cb0b9e81e46c95934f2060cc586e6 |
Provenance
The following attestation bundles were made for neuralmind-0.3.4.tar.gz:
Publisher: release.yml on dfrostar/neuralmind
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: neuralmind-0.3.4.tar.gz
- Subject digest: c3758513f94f3bf18f58b20d6b2569e248966de434e20006eb29afe4c576dbea
- Sigstore transparency entry: 1343431058
- Sigstore integration time:
- Permalink: dfrostar/neuralmind@4c7fa03a7cc627d47fb6a3ddebd4fe970bed2ccb
- Branch / Tag: refs/tags/v0.3.3.3
- Owner: https://github.com/dfrostar
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@4c7fa03a7cc627d47fb6a3ddebd4fe970bed2ccb
- Trigger Event: push
File details
Details for the file neuralmind-0.3.4-py3-none-any.whl.
File metadata
- Download URL: neuralmind-0.3.4-py3-none-any.whl
- Upload date:
- Size: 48.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 130494ed5100b6be2b5acaa94ce97632a154c284f7af1fc968a67ff53c1ff63c |
| MD5 | 47de140d8e49f92b9057a57673554133 |
| BLAKE2b-256 | dd7244dc91f700c1319df4d14531b481dab50bc7668a5e92e731e58df0a41ee3 |
Provenance
The following attestation bundles were made for neuralmind-0.3.4-py3-none-any.whl:
Publisher: release.yml on dfrostar/neuralmind
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: neuralmind-0.3.4-py3-none-any.whl
- Subject digest: 130494ed5100b6be2b5acaa94ce97632a154c284f7af1fc968a67ff53c1ff63c
- Sigstore transparency entry: 1343431072
- Sigstore integration time:
- Permalink: dfrostar/neuralmind@4c7fa03a7cc627d47fb6a3ddebd4fe970bed2ccb
- Branch / Tag: refs/tags/v0.3.3.3
- Owner: https://github.com/dfrostar
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@4c7fa03a7cc627d47fb6a3ddebd4fe970bed2ccb
- Trigger Event: push