
MLX Inference Quality Diagnostic Toolkit




mlx-triage

PyPI version · CI · Python 3.11+ · License: MIT · macOS Apple Silicon · Validated: 32 models across 10 families

Your MLX model is producing garbage. Is it the weights? A known MLX bug? Your quantization settings?

mlx-triage answers that in 30 seconds — without loading the model into memory.

pip install mlx-triage
mlx-triage check ./my-model

mlx-triage demo

What It Checks

Tested against 32 models across 10 families (Qwen, Gemma, GLM, Mistral/Devstral, LiquidAI, GPT-OSS, Nemotron, Llama, Phi, Nanbeige), 7 quantization formats (bf16 through QAT 4-bit and MXFP4), from 0.6B to 35B parameters. Zero false negatives. Full validation results ->

Tier 0 — Sanity Checks (no MLX needed, < 30 seconds)

| Check | What it catches |
| --- | --- |
| Dtype Compatibility | BF16->FP16 precision loss, training/storage dtype mismatches |
| Tokenizer & EOS Config | Missing EOS tokens, chat template issues, Llama 3 dual-stop-token edge cases |
| Weight File Integrity | NaN/Inf values, all-zero layers, corrupt safetensors headers |
| MLX Version & Known Bugs | Outdated MLX with documented bugs affecting your model architecture |
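To illustrate how a header-only weight check can work without loading a model into memory (a sketch, not mlx-triage's actual implementation): a safetensors file begins with an 8-byte little-endian header length followed by a JSON header describing each tensor's dtype, shape, and byte offsets, so dtype mismatches are detectable from the first few kilobytes of the file. The `check_dtypes` helper below is hypothetical:

```python
import json
import struct

def read_safetensors_header(path):
    """Read only the JSON header of a safetensors file (no tensor data)."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))  # u64 little-endian
        header = json.loads(f.read(header_len))
    return header

def check_dtypes(header, expected="BF16"):
    """Flag tensors whose stored dtype differs from the expected one."""
    return [
        name for name, info in header.items()
        if name != "__metadata__" and info["dtype"] != expected
    ]
```

A corrupt header surfaces here as a `json.JSONDecodeError` rather than a silent failure later, which is the point of running this tier first.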

Tier 1 — Statistical Smoke Tests (MLX required)

| Check | What it catches |
| --- | --- |
| Determinism | Non-reproducible outputs at temp=0 (infrastructure issue, not model) |
| Reference Divergence | MLX output diverging from PyTorch/Transformers reference |
| Quantization Quality | Excessive perplexity indicating broken quantization |
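The shape of the determinism check can be sketched independently of MLX: run the same greedy (temp=0) generation repeatedly and count distinct outputs. This is a simplified illustration, not mlx-triage's code; the `generate` callable stands in for whatever backend produces text:

```python
def check_determinism(generate, prompt, runs=10):
    """Call a greedy-decoding generation function repeatedly and report
    whether every run produced byte-identical output."""
    outputs = [generate(prompt) for _ in range(runs)]
    unique = set(outputs)
    return {
        "deterministic": len(unique) == 1,
        "distinct_outputs": len(unique),
        "runs": runs,
    }
```

At temperature 0 any spread in `distinct_outputs` points at the runtime (kernels, threading, memory), not at the model weights.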

Install

Requires Python 3.11+ and macOS on Apple Silicon (M1-M4).

# From PyPI
pip install mlx-triage

# With MLX for Tier 1 checks
pip install "mlx-triage[mlx]"

# With reference comparison (Tier 1, Test 1.2)
pip install "mlx-triage[reference]"

# Development
git clone https://github.com/swaylenhayes/mlx-triage.git
cd mlx-triage
uv sync --extra dev

Usage

# Tier 0 only (default — no MLX needed)
mlx-triage check /path/to/model

# Tier 0 + Tier 1
mlx-triage check /path/to/model --tier 1

# JSON output
mlx-triage check /path/to/model --format json

# Require full execution (fail if any check is skipped)
mlx-triage check /path/to/model --tier 1 --format json --strict

# Save report to file
mlx-triage check /path/to/model --tier 1 --output report.json

Tier 0 runs in under 30 seconds on any model. Tier 1 requires MLX and takes 5-15 minutes depending on model size.

Reliability Claims in JSON Output

Each JSON report includes:

  • claim_level: runtime-qualified when all checks executed, preflight-only when any check was skipped
  • checks_executed: Number of checks that ran
  • checks_skipped: Number of checks skipped
  • skipped_check_ids: IDs of skipped checks

Use --strict in CI or external reporting workflows to enforce full execution. In strict mode, mlx-triage exits with a non-zero status if any check is skipped.
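For pipelines that consume the report file rather than the exit code, the same gate can be applied by reading the fields listed above. A minimal sketch (the helper name is ours, only the field names come from the report format):

```python
import json

def assert_runtime_qualified(report_path):
    """Fail a CI step unless every check in the report actually executed."""
    with open(report_path) as f:
        report = json.load(f)
    if report["claim_level"] != "runtime-qualified":
        skipped = ", ".join(report.get("skipped_check_ids", []))
        raise SystemExit(f"preflight-only report; skipped checks: {skipped}")
    return report
```

Listing the skipped check IDs in the failure message makes the CI log actionable without re-running the triage.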

How It Works

mlx-triage uses a tiered diagnostic protocol — each tier increases in depth and cost:

  1. Tier 0 reads model files directly (safetensors headers, config JSON, tokenizer config) without loading the model into memory. This catches the most common issues instantly.

  2. Tier 1 loads the model via MLX and runs statistical tests — determinism checks (10 runs at temp=0), perplexity measurement against a fixed eval corpus, and optional comparison against a PyTorch reference backend.

  3. Tiers 2-3 (planned) will add isolation tests (batch invariance, memory pressure, context length stress) and deep diagnostics (layer-wise activation comparison, cross-runtime analysis).

If Tier 0 finds critical issues, Tier 1 is skipped — fix the fundamentals first.
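The tier-gating logic described above can be sketched in a few lines (an illustration of the protocol, with a hypothetical check interface, not the tool's internals):

```python
def run_triage(tier0_checks, tier1_checks, max_tier=1):
    """Run Tier 0 first; only proceed to Tier 1 if nothing critical failed.

    Each check is a callable returning (severity, message), where severity
    is one of "ok", "warn", "critical".
    """
    results = {"tier0": [check() for check in tier0_checks], "tier1": []}
    critical = any(sev == "critical" for sev, _ in results["tier0"])
    if max_tier >= 1 and not critical:
        results["tier1"] = [check() for check in tier1_checks]
    results["tier1_skipped"] = critical and max_tier >= 1
    return results
```

Recording *why* Tier 1 was skipped (rather than silently omitting it) is what lets the report downgrade its claim level honestly.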

Known Bugs Database

mlx-triage ships with a curated database of documented MLX bugs (known_bugs.yaml), cross-referenced against your installed MLX version and model architecture. Running MLX < 0.22.0 with float16 weights? It flags the known qmv kernel overflow. Got a 4-bit Llama model looping on long prompts? There's a documented bug for that. Safetensors file looks valid but weights are numerically garbage? That's a known silent bfloat16 corruption path.
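Cross-referencing a bug database against an installed version reduces to a version comparison plus an architecture filter. A sketch under an assumed entry schema (`fixed_in`, `architectures` are illustrative field names, not necessarily those of known_bugs.yaml):

```python
def parse_version(v):
    """Turn "0.22.0" into (0, 22, 0) for correct numeric comparison."""
    return tuple(int(part) for part in v.split("."))

def match_known_bugs(bugs, mlx_version, architecture):
    """Return bugs not yet fixed in the installed MLX version that apply
    to the given model architecture."""
    installed = parse_version(mlx_version)
    return [
        bug for bug in bugs
        if parse_version(bug["fixed_in"]) > installed
        and architecture in bug["architectures"]
    ]
```

Tuple comparison matters here: comparing version strings lexicographically would rank "0.9.0" above "0.22.0".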

Contributing a bug report to the database is the easiest way to help — see CONTRIBUTING.md.

Research Basis

The diagnostic protocol is grounded in systematic analysis of MLX infrastructure defects across multiple model architectures and quantization levels. See METHODOLOGY.md for the evidence basis, including infrastructure defect taxonomy, first-party experiments, and cross-model synthesis.

Contributing

Contributions welcome — especially to the known bugs database. See CONTRIBUTING.md.

License

MIT


If mlx-triage saved you a debugging session, star it — it helps other MLX developers find the tool.

Project details


Download files

Download the file for your platform.

Source Distribution

mlx_triage-0.2.0.tar.gz (239.6 kB)

Uploaded Source

Built Distribution


mlx_triage-0.2.0-py3-none-any.whl (37.7 kB)

Uploaded Python 3

File details

Details for the file mlx_triage-0.2.0.tar.gz.

File metadata

  • Download URL: mlx_triage-0.2.0.tar.gz
  • Upload date:
  • Size: 239.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for mlx_triage-0.2.0.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 85fe5eeb9bd5dc421f817a228daf40f3f31047b1229731f3fcae93432b2a1cd6 |
| MD5 | fb009051d06f501a5926f070997f03b3 |
| BLAKE2b-256 | 460246dd2d2e063f80468696c4bcdfebd30cd71a2ed4bea892b2f3c46c2034c9 |

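A downloaded archive can be checked against the published SHA256 digest above before installation; a minimal sketch using Python's standard hashlib:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large archives are never fully in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Compare the returned hex string to the table entry for the file you downloaded; any mismatch means a corrupt or tampered download.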

Provenance

The following attestation bundles were made for mlx_triage-0.2.0.tar.gz:

Publisher: ci.yml on swaylenhayes/mlx-triage

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file mlx_triage-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: mlx_triage-0.2.0-py3-none-any.whl
  • Upload date:
  • Size: 37.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for mlx_triage-0.2.0-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | d4e23857dea9a9a96b1f62937a4213af6d243f79169bfe4fed29d99bdf31fd05 |
| MD5 | 7e5f766be419c1b1086f3a81c3b2f91f |
| BLAKE2b-256 | 9394cbf7b04ed73ac2226642b6ae828ef996bf64a058c45529ff3c3a8638fac6 |


Provenance

The following attestation bundles were made for mlx_triage-0.2.0-py3-none-any.whl:

Publisher: ci.yml on swaylenhayes/mlx-triage

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
