
Asynchronous Self-Healing KV Cache for Silicon-Native LLMs by GDI Nexus

Project description

ASH-KV: Hardware-Native Neural Integrity Middleware


ASH-KV (Asynchronous Self-Healing KV Cache) is a high-performance middleware layer designed for Runtime Neural Integrity Enforcement. It leverages silicon-native kernels to monitor the mathematical uncertainty of the Attention Manifold and surgically prunes logical drift at the hardware level.


🔬 Technical Core

⚡ Deterministic Manifold Monitoring

Instead of heuristic text-scanning, ASH-KV monitors Attention Varentropy: the variance of token surprisal across the model's next-token distribution. By computing this variance over the KV-cache in real time, the system identifies the exact moment a model's transition probability distribution collapses, the mathematical precursor to hallucination.
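The README does not show the internal kernel, but varentropy has a standard definition: the variance of the surprisal -log p(x) under the distribution p itself. A minimal NumPy sketch (the function name and thresholding usage are illustrative assumptions, not the library's API):

```python
import numpy as np

def varentropy(logits: np.ndarray) -> float:
    """Variance of surprisal -log p(x) under the softmax distribution p.

    A sharply peaked or uniform distribution has near-zero varentropy;
    a distribution split between confident and uncertain continuations
    has high varentropy -- one plausible drift signal.
    """
    z = logits - logits.max()                # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum()
    surprisal = -np.log(p + 1e-12)           # -log p(x) per token
    entropy = float((p * surprisal).sum())   # E[-log p]
    return float((p * (surprisal - entropy) ** 2).sum())  # Var[-log p]

# Peaked distributions score low; mixed confident/uncertain ones score higher.
peaked = np.array([10.0, 0.0, 0.0, 0.0])
mixed = np.array([5.0, 5.0, 0.0, 0.0])
```

A monitor would compare this value against a threshold derived from the `sensitivity` parameter; the exact mapping is internal to the library.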

🛡️ Fused Kernel Mutation

When drift is detected, ASH-KV executes a Gaussian Penalty Mask directly within the model's compute graph.

  • Apple Silicon: Uses @mx.compile fused Metal kernels for sub-millisecond, in-graph mutation.
  • NVIDIA: Uses PyTorch/CUDA-synchronized tensor operations.
  • Latency: Measured at < 0.9ms on Apple M4 hardware.
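The exact mask shape is not specified above; as an illustrative sketch (function name, parameters, and defaults are all assumptions), a "Gaussian Penalty Mask" can be read as subtracting a Gaussian-shaped penalty from the attention scores around the position flagged as drifting, so that softmax smoothly down-weights it rather than hard-masking:

```python
import numpy as np

def gaussian_penalty_mask(scores: np.ndarray, center: int,
                          sigma: float = 2.0, strength: float = 4.0) -> np.ndarray:
    """Subtract a Gaussian-shaped penalty from attention scores,
    centred on the position identified as drifting.  The downstream
    softmax renormalises, so nearby positions lose influence
    gradually with distance from the flagged token."""
    positions = np.arange(scores.shape[-1])
    penalty = strength * np.exp(-0.5 * ((positions - center) / sigma) ** 2)
    return scores - penalty

scores = np.zeros(8)                      # toy pre-softmax attention scores
masked = gaussian_penalty_mask(scores, center=3)
```

In the real system this mutation would be fused into the compute graph (Metal or CUDA) rather than executed eagerly in NumPy.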

📖 API Reference

protect(model, sensitivity=0.85, critic_model_path=None)

Wraps an existing model with the ASH-KV Hypervisor.

Parameter           Type       Default   Description
model               nn.Module  Required  An MLX or PyTorch model instance.
sensitivity         float      0.85      The drift threshold (0.0 to 1.0). Lower is stricter.
critic_model_path   str        None      Optional path to a CoreML .mlpackage for ANE offloading.

Returns: (protected_model, cache, adapter, proxies)

  • cache: The ASHCache instance managing the manifold.
  • adapter: The AdaptiveSensitivity agent for dynamic scaling.
  • proxies: A list of KV-cache proxies to be passed to the model's forward pass.

🚀 Usage with mlx-lm

ASH-KV is designed to be a drop-in upgrade for the mlx-lm ecosystem.

from mlx_lm import load
from mlx_ash_kv.api import protect, generate_stream

# 1. Load your model natively
model, tokenizer = load("mlx-community/Meta-Llama-3-8B-Instruct-4bit")

# 2. Apply the ASH-KV Shield
model, cache, adapter, proxies = protect(model, sensitivity=0.85)

# 3. Stream with Real-Time Healing
gen = generate_stream(model, tokenizer, cache, proxies, prompt="Explain quantum gravity.")

for token, health_score in gen:
    print(token, end="", flush=True)
    # health_score < 0.1 indicates an ASH-KV intervention occurred

📊 Benchmarks & Reproducibility

Our performance claims are verifiable using the included benchmarking suite.

ash-kv install    # Hardware Stress Test
ash-kv benchmark  # Run 100-case Latency/Integrity suite

The benchmark script is located at scripts/publish_benchmarks.py. The methodology uses time.perf_counter_ns() to measure the Fused Metal Kernel overhead.
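The measurement approach can be reproduced in miniature with the standard library alone. This harness is a sketch of the stated methodology, not the published script; the workload is a stand-in for the fused healing call:

```python
import time

def measure_overhead_ms(fn, warmup: int = 10, iters: int = 100) -> float:
    """Median wall-clock latency of fn() in milliseconds, timed with
    time.perf_counter_ns() as the methodology describes."""
    for _ in range(warmup):      # discard warm-up runs (caches, JIT, etc.)
        fn()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter_ns()
        fn()
        samples.append(time.perf_counter_ns() - t0)
    samples.sort()
    return samples[len(samples) // 2] / 1e6   # ns -> ms, median

# Stand-in workload; substitute the real healing call to benchmark it.
latency = measure_overhead_ms(lambda: sum(range(1000)))
```

Using the median rather than the mean keeps one-off scheduler hiccups from skewing sub-millisecond measurements.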


🏗️ Architecture (HAL)

The Hardware Abstraction Layer ensures the same code runs across disparate architectures:

  • MLXHealer: Fused Metal operations for Apple Silicon.
  • CudaHealer: Synchronized PyTorch operations for NVIDIA.
  • UniversalTensorCritic: Zero-shot mathematical manifold evaluation.
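The HAL pattern described above can be sketched as a common interface with per-backend implementations. The class names come from the list above, but the interface and dispatch function are assumptions for illustration:

```python
from abc import ABC, abstractmethod

class Healer(ABC):
    """Hypothetical HAL contract: every backend exposes the same
    heal() signature, so calling code stays architecture-agnostic."""
    @abstractmethod
    def heal(self, cache_state: list) -> list: ...

class MLXHealer(Healer):
    def heal(self, cache_state: list) -> list:
        # Placeholder for fused Metal operations on Apple Silicon.
        return cache_state

class CudaHealer(Healer):
    def heal(self, cache_state: list) -> list:
        # Placeholder for synchronized PyTorch/CUDA tensor operations.
        return cache_state

def select_healer(backend: str) -> Healer:
    """Dispatch on the detected hardware backend."""
    return {"mlx": MLXHealer, "cuda": CudaHealer}[backend]()
```

The real implementations would operate on device tensors rather than Python lists; only the dispatch shape is shown here.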

⚠️ DISCLAIMER

ASH-KV is a probabilistic reliability layer for assisting professionals. It is NOT a substitute for professional clinical or legal judgment. All AI outputs must be verified by qualified humans.


© 2026 GDI Nexus Software Solutions LLP. All rights reserved.


Download files

Download the file for your platform.

Source Distribution

mlx_ash_kv-8.2.5.tar.gz (15.7 kB)

Uploaded Source

Built Distribution


mlx_ash_kv-8.2.5-py3-none-any.whl (16.9 kB)

Uploaded Python 3

File details

Details for the file mlx_ash_kv-8.2.5.tar.gz.

File metadata

  • Download URL: mlx_ash_kv-8.2.5.tar.gz
  • Upload date:
  • Size: 15.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for mlx_ash_kv-8.2.5.tar.gz
Algorithm Hash digest
SHA256 a9ef09b5056937f014c0832c70041f39c0f8ac25f88b369ab617d88b54b7f8b6
MD5 3117de43543a890974e7fad5fc27973d
BLAKE2b-256 620a6fb3b108dd337746827431150f6481f5546ffc30f2d880f0aac084520b79

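A downloaded artifact can be checked against the digests published above using only the standard library:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hex SHA-256 digest of a file, read in chunks to keep
    memory use flat for large archives."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the SHA256 value published above, e.g.:
# sha256_of("mlx_ash_kv-8.2.5.tar.gz") == "a9ef09b5..."
```

If the computed digest differs from the published one, the download is corrupt or has been tampered with and should be discarded.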

File details

Details for the file mlx_ash_kv-8.2.5-py3-none-any.whl.

File metadata

  • Download URL: mlx_ash_kv-8.2.5-py3-none-any.whl
  • Upload date:
  • Size: 16.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for mlx_ash_kv-8.2.5-py3-none-any.whl
Algorithm Hash digest
SHA256 e1ea6781fdd900bcc2903fe3c5d575a080e8607fa7050c951aec3e7ecc62f7e0
MD5 9031557232e6cf98f1daf87fe1ff5167
BLAKE2b-256 8e014166882decb95b46f4c4ee346550d3d23fa8e8fa07702c701895755eebec

