nanomind-analyst

Installer for the NanoMind Analyst daemon. The daemon serves the Qwen3-1.7B security analyst NLM behind an input-classifier gate over a Unix socket at /tmp/nanomind-guard.sock. Consumers (hackmyagent, opena2a-cli, ai-trust) connect to that socket for generative threat analysis on individual findings.

This package writes a per-user launchd LaunchAgent, fetches and verifies the model artifacts from Hugging Face, and manages the daemon lifecycle. Apple Silicon (Darwin arm64) only in v0.1.

Install

pip install nanomind-analyst
nanomind-analyst install

The install step:

  1. Verifies platform (Apple Silicon required).
  2. Copies the input-classifier-v1 artifacts (bundled in the wheel) into ~/Library/Application Support/nanomind-analyst/artifacts/input-classifier-v1/. SHA256 verified before copy.
  3. Fetches the Analyst NLM (~3.4 GB) from opena2a/nanomind-security-analyst at the pinned v3.0.0 commit. SHA256 verified after fetch.
  4. Writes a launchd plist to ~/Library/LaunchAgents/org.opena2a.nanomind-analyst.plist.
  5. Bootstraps the LaunchAgent into the user's gui session.
  6. Waits up to 60 seconds for the daemon to bind the socket and pass its healthz probe.

The fetch step is the long one (several minutes on first run; cached on subsequent runs).
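The last install step is effectively a poll-with-deadline on the socket path. A minimal sketch of that wait loop (`wait_for_socket` is our name, not the package's API; the real installer also runs the healthz probe, which is omitted here):

```python
import os
import time

SOCKET_PATH = "/tmp/nanomind-guard.sock"

def wait_for_socket(path: str = SOCKET_PATH, timeout: float = 60.0,
                    poll_interval: float = 0.5) -> bool:
    """Poll until the daemon binds its Unix socket, or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(poll_interval)
    return False
```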

Commands

  • nanomind-analyst install: Full install flow. Idempotent.
  • nanomind-analyst uninstall: Stop, unload, and remove the plist. --remove-artifacts also deletes the 3.4 GB NLM.
  • nanomind-analyst start: Kickstart the loaded LaunchAgent.
  • nanomind-analyst stop: SIGTERM the daemon. The agent stays loaded; no auto-restart on clean exit.
  • nanomind-analyst restart: Stop, then start.
  • nanomind-analyst status: Report whether the agent is loaded and healthz returns ready.
  • nanomind-analyst logs: Tail ~/Library/Logs/nanomind-analyst.log. --no-follow for a one-shot dump.
  • nanomind-analyst --version: Print the package version.

What gets written where

  • ~/Library/LaunchAgents/org.opena2a.nanomind-analyst.plist: the LaunchAgent plist.
  • ~/Library/Application Support/nanomind-analyst/artifacts/input-classifier-v1/: classifier joblib + meta.json (~5 KB).
  • ~/Library/Application Support/nanomind-analyst/artifacts/nanomind-security-analyst/: NLM weights, tokenizer, and configs (~3.4 GB).
  • ~/Library/Logs/nanomind-analyst.log: daemon stdout + stderr.
  • /tmp/nanomind-guard.sock: the Unix socket the daemon binds (0600, owner-only).

No root, no /opt, no sudo.
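A consumer may want to sanity-check the socket before connecting. A minimal sketch of the 0600 owner-only check, assuming a POSIX stat (the helper name is ours; a real check would also confirm the path is a socket via stat.S_ISSOCK):

```python
import os
import stat

def socket_is_private(path: str) -> bool:
    """True when the path is mode 0600 and owned by the current user."""
    st = os.stat(path)
    owner_only = stat.S_IMODE(st.st_mode) == 0o600
    owned_by_me = st.st_uid == os.getuid()
    return owner_only and owned_by_me
```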

Trust chain

The install does not depend on Hugging Face being trustworthy. The wheel manifest is the authoritative source of artifact identity:

  1. PyPI Trusted Publishing (OIDC) attests the wheel was built by the workflow at .github/workflows/release-nanomind-analyst.yml from a commit in opena2a-org/nanomind (SLSA v1 provenance).
  2. The wheel bakes the EXPECTED_NLM_SAFETENSORS_SHA256 and EXPECTED_NLM_TOKENIZER_SHA256 constants into artifacts.py.
  3. At install, the fetched NLM files are SHA256-verified against those constants; if a Hugging Face artifact has been tampered with, the install refuses to proceed.
  4. At daemon boot, INPUT_CLASSIFIER_JOBLIB_SHA256 is re-verified before joblib.load() runs (joblib uses pickle, so deserialization can execute arbitrary code).
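
Steps 2 and 3 boil down to streaming a SHA256 over each fetched file and comparing it to the baked constant. A minimal sketch, with a placeholder digest standing in for the real pinned value (the helper names are ours, not the artifacts.py API):

```python
import hashlib

# A constant like this is baked into artifacts.py at build time.
# The value here is a placeholder, not the real pinned digest.
EXPECTED_NLM_SAFETENSORS_SHA256 = "0" * 64

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so the ~3.4 GB NLM never sits in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, expected: str) -> None:
    """Refuse the install on any mismatch."""
    actual = sha256_of(path)
    if actual != expected:
        raise SystemExit(
            f"refusing to install: {path} sha256 {actual} != expected {expected}"
        )
```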

Verify wheel provenance after publish:

python -m pip download nanomind-analyst --no-deps --dest /tmp/verify
# Then inspect the wheel's METADATA and RECORD; PyPI surfaces attestations
# at https://pypi.org/project/nanomind-analyst/#files

Linux / cloud daemon

Not supported in v0.1. The daemon runs in bf16 on Apple MPS; fp16 yields 0% accuracy on Qwen3-1.7B. Cross-platform inference is tracked in opena2a-org/nanomind issues under the labels nanomind-analyst and platform-linux.
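
The platform gate from install step 1 is a simple tuple check. A sketch with the predicate factored out so it can be exercised off-device (the function name is hypothetical, not the package's API):

```python
import platform

def platform_supported(system: str, machine: str) -> bool:
    """v0.1 gate: Apple Silicon (Darwin arm64) only."""
    return (system, machine) == ("Darwin", "arm64")

# At install time the gate is evaluated against the running host:
supported_here = platform_supported(platform.system(), platform.machine())
```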

Known limitations

  • Cold-boot latency. Daemon takes ~30 seconds to load the NLM on first request after a system reboot. The install flow waits up to 60 seconds for this; subsequent restarts are faster (the launchd-managed process stays warm).
  • NLM latency floor. The Analyst NLM emits ~400 tokens of structured output per request at ~15 ms/token on bf16 MPS. Floor is ~6 seconds per finding. Consumers (HMA, opena2a-cli) should batch or filter before invoking. The input-classifier gate bypasses the NLM on off-topic inputs (~92% bypass rate on benign user input).
  • Single-instance. The daemon binds a single Unix socket. Running nanomind-analyst install more than once on the same machine reuses that socket; the LaunchAgent label is unique per user.
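
The latency floor above is straight arithmetic: 400 tokens × 15 ms/token = 6 s per finding that reaches the NLM. A sketch that also folds in the classifier's bypass rate (constants taken from the text; the function is illustrative, not part of the package):

```python
TOKENS_PER_REQUEST = 400   # structured output per finding, from the text
MS_PER_TOKEN = 15          # bf16 MPS decode speed, from the text

def latency_floor_seconds(n_findings: int, bypass_rate: float = 0.0) -> float:
    """Lower bound on wall time: only findings that reach the NLM pay the cost."""
    nlm_calls = n_findings * (1.0 - bypass_rate)
    return nlm_calls * TOKENS_PER_REQUEST * MS_PER_TOKEN / 1000.0
```

With the ~92% bypass rate on benign input, 100 findings cost roughly 8 NLM calls, or about 48 seconds, which is why batching or filtering before invocation matters.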

Companion package

The hackmyagent npm package's --nanomind flag uses this daemon. After this package is installed, hackmyagent nanomind setup detects the nanomind-analyst CLI on PATH and shells out to nanomind-analyst install. If the CLI is missing, it prints the pip install instructions.
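
The detection described above amounts to a PATH lookup. A sketch of that logic, assuming shutil.which semantics (the function is illustrative, not hackmyagent's actual code):

```python
import shutil

PIP_HINT = "pip install nanomind-analyst"

def setup_command(cli_name: str = "nanomind-analyst") -> str:
    """Return the command to shell out to, or an install hint if absent."""
    cli = shutil.which(cli_name)
    if cli is None:
        return f"{cli_name} not found on PATH; run: {PIP_HINT}, then {cli_name} install"
    return f"{cli} install"
```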

Model card

opena2a/nanomind-security-analyst. v3.0.0 (Qwen3-1.7B SFT LoRA r=64), Apache-2.0.

License

MIT.
