asiai

Apple Silicon AI — Multi-engine LLM benchmark & monitoring CLI

asiai compares inference engines side by side on your Mac. Load the same model on Ollama and LM Studio, run asiai bench, get the numbers. No guessing, no vibes — just tok/s, TTFT, power efficiency, and stability per engine.

Share your results with the community (--share), compare against other Apple Silicon users (asiai compare), and get smart engine recommendations (asiai recommend).

Born from the OpenClaw project, where we needed hard data to pick the fastest engine for multi-agent swarms on Mac Mini M4 Pro.

Quick start

pipx install asiai        # Recommended: isolated install

Or via Homebrew:

brew tap druide67/tap
brew install asiai

Other options:

uvx asiai detect           # Run without installing (requires uv)
pip install asiai           # Standard pip install

Then benchmark and share:

asiai bench --quick --card --share    # Bench + shareable card in ~15 seconds

Commands

asiai detect

Auto-detect running inference engines across 7 ports.

$ asiai detect

Detected engines:

  ● ollama 0.17.4
    URL: http://localhost:11434

  ● lmstudio 0.4.5
    URL: http://localhost:1234
    Running: 1 model(s)
      - qwen3.5-35b-a3b  MLX

asiai bench

Cross-engine benchmark with standardized prompts. Runs 3 iterations per prompt by default, reports median tok/s (SPEC standard) with stability classification.

$ asiai bench -m qwen3.5 --runs 3 --power

  Mac Mini M4 Pro — Apple M4 Pro  RAM: 64.0 GB (42% used)  Pressure: normal

Benchmark: qwen3.5

  Engine      tok/s (±stddev)      Tokens   Duration   TTFT     VRAM  Thermal
  ──────────  ───────────────────  ──────  ─────────  ─────  ───────  ───────
  lmstudio    72.6 ± 0.0 (stable)     435      6.20s  0.28s        —  nominal
  ollama      30.4 ± 0.1 (stable)     448     15.28s  0.25s  26.0 GB  nominal

  Winner: lmstudio (2.4x faster)
  Power: lmstudio 13.2W (5.52 tok/s/W) — ollama 16.0W (1.89 tok/s/W)

Options:

-m, --model MODEL          Model to benchmark (default: auto-detect)
-e, --engines LIST         Filter engines (e.g. ollama,lmstudio,mlxlm)
-p, --prompts LIST         Prompt types: code, tool_call, reasoning, long_gen
-r, --runs N               Runs per prompt (default: 3, for median + stddev)
    --power                Cross-validate power with sudo powermetrics (IOReport always-on)
    --context-size SIZE    Context fill prompt: 4k, 16k, 32k, 64k
    --share                Share results with the community (anonymous, opt-in)
-Q, --quick                Quick benchmark: 1 prompt, 1 run (~15 seconds)
    --card                 Generate shareable benchmark card (SVG + PNG with --share)
-H, --history PERIOD       Show past benchmarks (e.g. 7d, 24h)

Cross-model comparison — benchmark multiple models in one run and get a ranked summary:

# Cross-model comparison
asiai bench --compare qwen3.5:4b deepseek-r1:7b -e ollama --card

The runner resolves model names across engines automatically — gemma2:9b (Ollama) and gemma-2-9b (LM Studio) are matched as the same model.
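As an illustration of that matching, a crude normalizer can collapse both spellings to a single comparison key (this is a sketch of the idea, not asiai's actual resolver):

```python
import re

def normalize(name: str) -> str:
    """Collapse engine-specific spellings into one comparison key
    by lowercasing and stripping every separator character."""
    return re.sub(r"[^a-z0-9]", "", name.lower())

# Ollama's "gemma2:9b" and LM Studio's "gemma-2-9b" both map to "gemma29b":
normalize("gemma2:9b") == normalize("gemma-2-9b")  # True
```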

asiai models

List loaded models across all engines. Use --json for machine-readable output.

$ asiai models

ollama  http://localhost:11434
  ● qwen3.5:35b-a3b                             26.0 GB Q4_K_M

lmstudio  http://localhost:1234
  ● qwen3.5-35b-a3b                                 MLX

asiai monitor

System and inference metrics snapshot, stored in SQLite. Use --json for machine-readable output.

$ asiai monitor

System
  Uptime:    3d 12h
  CPU Load:  2.45 / 3.12 / 2.89  (1m / 5m / 15m)
  Memory:    45.2 GB / 64.0 GB  71%
  Pressure:  normal
  Thermal:   nominal  (100%)

Inference  ollama 0.17.4
  Models loaded: 1  VRAM total: 26.0 GB

  Model                                        VRAM   Format  Quant
  ──────────────────────────────────────── ────────── ──────── ──────
  qwen3.5:35b-a3b                            26.0 GB     gguf Q4_K_M

Options:

-w, --watch SEC            Refresh every SEC seconds
-q, --quiet                Collect and store without output (for daemon use)
    --json                 Output as JSON (for scripting)
-H, --history PERIOD       Show history (e.g. 24h, 1h)
-a, --analyze HOURS        Comprehensive analysis with trends
-c, --compare TS TS        Compare two timestamps
    --alert-webhook URL    POST alerts on state transitions (memory, thermal, engine down)
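The webhook target receives a JSON POST on each state transition. A minimal stdlib receiver for local testing could look like this (the payload field names in the comment are assumptions; inspect a real alert before relying on them):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        alert = json.loads(self.rfile.read(length) or b"{}")
        # "type" and "message" are hypothetical field names.
        print(f"[alert] {alert.get('type', '?')}: {alert.get('message', '')}")
        self.send_response(204)  # acknowledge with no body
        self.end_headers()

def serve(port: int = 9700) -> None:
    """Block and print every alert POSTed to this port."""
    HTTPServer(("127.0.0.1", port), AlertHandler).serve_forever()
```

Run `serve()` in one terminal, then point the flag at it: `asiai monitor --alert-webhook http://127.0.0.1:9700`.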

asiai doctor

Diagnose installation, engines, system health, and database.

$ asiai doctor

Doctor

  System
    ✓ Apple Silicon       Mac Mini M4 Pro — Apple M4 Pro
    ✓ RAM                 64 GB total, 42% used
    ✓ Memory pressure     normal
    ✓ Thermal             nominal (100%)

  Engine
    ✓ Ollama              v0.17.4 — 1 model(s): qwen3.5:35b-a3b
    ✓ LM Studio           v0.4.5 — 1 model(s): qwen3.5-35b-a3b
    ✗ mlx-lm              not installed
    ✗ llama.cpp           not installed
    ✗ vllm-mlx            not installed

  Database
    ✓ SQLite              2.4 MB, last entry: 1m ago

  7 ok, 0 warning(s), 3 failed

asiai daemon

Background monitoring via macOS launchd. Collects metrics every minute.

asiai daemon start              # Install and start the daemon
asiai daemon start --interval 30  # Custom interval (seconds)
asiai daemon status             # Check if running
asiai daemon logs               # View recent logs
asiai daemon stop               # Stop and uninstall

asiai web

Web dashboard with real-time monitoring, benchmark controls, and interactive charts. Requires pip install asiai[web].

asiai web                    # Opens browser at http://127.0.0.1:8899
asiai web --port 9000        # Custom port
asiai web --host 0.0.0.0     # Listen on all interfaces
asiai web --no-open          # Don't auto-open browser

Features: system overview, engine status, live benchmark with SSE progress, history charts, doctor checks, dark/light theme.

asiai leaderboard

Browse community benchmarks. Filter by chip or model.

asiai leaderboard                      # All results
asiai leaderboard --chip "M4 Pro"      # Filter by chip
asiai leaderboard --model qwen2.5      # Filter by model

asiai compare

Compare your local results against community medians.

asiai compare --chip "Apple M1 Max" --model qwen2.5:7b

asiai recommend

Get engine recommendations based on your hardware and benchmarks.

asiai recommend                                # Best engine for your Mac
asiai recommend --use-case latency             # Optimize for TTFT
asiai recommend --model qwen2.5 --community    # Include community data

asiai setup

Interactive setup wizard — detects hardware, engines, models, and suggests next steps.

asiai setup

asiai mcp

Start the MCP server for AI agent integration. 11 tools, 3 resources.

asiai mcp                          # stdio (Claude Code, Cursor)
asiai mcp --transport sse          # SSE (network agents)

asiai tui

Interactive terminal dashboard with auto-refresh. Requires pip install asiai[tui].

asiai tui

Benchmark Card — share your results

Generate a shareable benchmark card image with one flag:

asiai bench --card                    # SVG saved locally (zero dependencies)
asiai bench --card --share            # SVG + PNG via community API
asiai bench --quick --card --share    # Quick bench + card + share


A 1200×630 dark-themed card with your model, chip, specs banner (quantization, RAM, GPU cores, context size), engine comparison bar chart, winner highlight, and metric chips (tok/s, TTFT, power, engine version). Optimized for Reddit, X, Discord, and GitHub READMEs.

Every shared card includes asiai branding — think Speedtest.net, but for local LLM inference.

Supported engines

Engine     Port   Install                                    API
Ollama     11434  brew install ollama                        Native
LM Studio  1234   brew install --cask lm-studio              OpenAI-compatible
mlx-lm     8080   brew install mlx-lm                        OpenAI-compatible
llama.cpp  8080   brew install llama.cpp                     OpenAI-compatible
oMLX       8000   brew tap jundot/omlx && brew install omlx  OpenAI-compatible
vllm-mlx   8000   pip install vllm-mlx                       OpenAI-compatible
Exo        52415  pip install exo                            OpenAI-compatible
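Detection starts with probing those ports. A minimal sketch of the idea (a real detector must also query each engine's API, since 8080 and 8000 are shared between engines):

```python
import socket

# Ports from the table above; the shared ones are listed once here,
# since a bare TCP probe cannot tell the engines on them apart.
ENGINE_PORTS = {
    "ollama": 11434,
    "lmstudio": 1234,
    "mlx-lm": 8080,
    "omlx": 8000,
    "exo": 52415,
}

def listening(port: int, host: str = "127.0.0.1", timeout: float = 0.25) -> bool:
    """True if something accepts TCP connections on the port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def detect() -> list[str]:
    return [name for name, port in ENGINE_PORTS.items() if listening(port)]
```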

What it measures

Metric     Description
tok/s      Generation speed (tokens/sec), excluding prompt processing (TTFT)
TTFT       Time to first token — prompt-processing latency
Power      GPU, CPU, ANE, and DRAM power in watts (IOReport, no sudo)
tok/s/W    Energy efficiency — tokens per second per watt
Stability  Run-to-run variance: stable (CV < 5%), variable (5–10%), unstable (> 10%)
VRAM       GPU memory footprint (Ollama; LM Studio via the lms CLI)
Thermal    CPU throttling state and speed-limit percentage
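The stability label follows directly from the coefficient of variation across runs. A sketch using the thresholds above:

```python
from statistics import mean, stdev

def classify_stability(tok_s_runs: list[float]) -> str:
    """Classify run-to-run variance by coefficient of variation
    (stddev / mean), using the thresholds from the table."""
    cv = stdev(tok_s_runs) / mean(tok_s_runs)
    if cv < 0.05:
        return "stable"
    if cv < 0.10:
        return "variable"
    return "unstable"

classify_stability([72.5, 72.6, 72.7])  # -> "stable"
```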

All metrics are stored in SQLite (~/.local/share/asiai/metrics.db) with 90-day retention and automatic regression detection.
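The regression check can be pictured as a median comparison against stored history. In this sketch the table and column names are invented for illustration; asiai's actual schema may differ:

```python
import sqlite3
from statistics import median

def is_regression(db_path: str, model: str, engine: str,
                  latest_tok_s: float, threshold: float = 0.10) -> bool:
    """Flag the latest run if it falls more than `threshold` below the
    historical median.  Table/column names here are hypothetical."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT tok_s FROM benchmarks WHERE model = ? AND engine = ?",
        (model, engine),
    ).fetchall()
    con.close()
    if not rows:
        return False  # no baseline yet
    baseline = median(r[0] for r in rows)
    return latest_tok_s < baseline * (1 - threshold)
```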

Benchmark methodology

Following MLPerf, SPEC CPU 2017, and NVIDIA GenAI-Perf standards:

  • Warmup: 1 non-timed generation per engine before measured runs
  • Runs: 3 iterations per prompt (configurable), median as primary metric
  • Sampling: temperature=0 (greedy decoding) for deterministic results
  • Power: Always-on via IOReport (no sudo). Per-engine, not session-wide average
  • Variance: Pooled intra-prompt stddev (isolates run-to-run noise)
  • Metadata: Engine version, model quantization, hardware chip, macOS version stored per result

See docs/benchmark-best-practices.md for the full conformance audit.
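The median-of-runs and pooled-stddev steps reduce to a few lines. A sketch, assuming an equal number of runs per prompt:

```python
from math import sqrt
from statistics import median, variance

def summarize(runs_per_prompt: dict[str, list[float]]) -> tuple[float, float]:
    """Return (median tok/s over all runs, pooled intra-prompt stddev).
    Averaging the per-prompt variances keeps cross-prompt speed
    differences out of the noise estimate; assumes equal run counts."""
    all_runs = [r for runs in runs_per_prompt.values() for r in runs]
    pooled = sum(variance(runs) for runs in runs_per_prompt.values()) / len(runs_per_prompt)
    return median(all_runs), sqrt(pooled)
```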

Benchmark prompts

Four standardized prompts test different generation patterns:

Name Tokens Tests
code 512 Structured code generation (BST in Python)
tool_call 256 JSON function calling / instruction following
reasoning 384 Multi-step math problem
long_gen 1024 Sustained throughput (bash script)

Use --context-size 4k|16k|32k|64k to test with large context fill prompts instead.

API & Prometheus

When running asiai web, six REST API endpoints are available for programmatic access. Interactive API documentation (Swagger UI) is available at http://localhost:8899/docs.

Endpoint                    Description
GET /api/status             Lightweight health check (< 500 ms) — engine reachability, memory pressure, thermal
GET /api/snapshot           Full system + engine snapshot with loaded models, VRAM, versions
GET /api/benchmarks         Benchmark results with tok/s, TTFT, power, context_size, engine_version
GET /api/engine-history     Engine status history (TCP, KV cache, tokens predicted)
GET /api/benchmark-process  Process CPU/RSS metrics from benchmark runs (7-day retention)
GET /api/metrics            Prometheus exposition format — system, engine, model, and benchmark gauges
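In keeping with the stdlib-only philosophy, a client needs nothing beyond urllib (the health-gate field in the comment is an assumption; check a real response first):

```python
import json
import urllib.request

def api_get(path: str, base: str = "http://127.0.0.1:8899"):
    """GET an asiai web endpoint and decode the JSON body."""
    with urllib.request.urlopen(base + path, timeout=5) as resp:
        return json.load(resp)

# e.g. gate a script on system health ("mem_pressure" is assumed here):
# if api_get("/api/status").get("mem_pressure") != "normal":
#     raise SystemExit("memory pressure high; skipping benchmark")
```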

Prometheus integration

# prometheus.yml
scrape_configs:
  - job_name: 'asiai'
    static_configs:
      - targets: ['localhost:8899']
    metrics_path: '/api/metrics'
    scrape_interval: 30s

CLI JSON output

asiai monitor --json | jq '.mem_pressure'
asiai models --json | jq '.engines[].models[].name'

Requirements

  • macOS on Apple Silicon (M1 / M2 / M3 / M4 families)
  • Python 3.11+
  • At least one inference engine running locally

Zero dependencies

The core uses only the Python standard library — urllib, sqlite3, subprocess, argparse. No requests, no psutil, no rich. Just stdlib.

Optional extras:

  • asiai[web] — FastAPI web dashboard with charts
  • asiai[tui] — Textual terminal dashboard
  • asiai[all] — Web + TUI
  • asiai[dev] — pytest, ruff

Roadmap

Version  Scope                                                                                  Status
v0.1     detect + bench + monitor + models (CLI, stdlib)                                        Done
v0.2     mlx-lm + doctor + daemon + TUI (Textual)                                               Done
v0.3     5 engines, power metrics, multi-run variance, regression detection                     Done
v0.4     CI, MkDocs, JSON export, thermal drift, web dashboard                                  Done
v0.5     REST API, Prometheus /metrics, CLI --json, engine uptime tracking                      Done
v0.6     Multi-service LaunchAgent (daemon start web), daemon status/logs/stop --all            Done
v0.7     Alert webhooks, LM Studio VRAM, Ollama config in doctor                                Done
v1.0     Community benchmark DB, smart recommendations, Exo engine, leaderboard                 Done
v1.0.1   MCP server (11 tools), benchmark card, --quick mode, setup wizard, agent integration   Done
v1.1     Fleet mode (multi-Mac), macOS notifications, MCP prompts, VRAM predictor               Planned

License

Apache 2.0
