Detect hardware and estimate LLM model inference capability

Project description

tamebi

Detect your hardware. Know what you can run.

tamebi is a CLI tool that automatically detects your machine's hardware (CPU, RAM, GPU, disk) and tells you which LLM models you can run, with estimated memory usage, throughput, and time to first token.

Install

pip install tamebi

or with uv:

uv pip install tamebi

NVIDIA, AMD, and Apple Silicon are all detected automatically — no extra flags or extras needed.

Quick Start

tamebi check

CLI Reference

tamebi check

Detect hardware and estimate which LLM models can run.

--json, -j (default: false)
    Output as JSON instead of rich tables.
--context-length, -c (default: 4096)
    Context length in tokens. KV cache scales linearly with this; 4K vs 128K context changes memory dramatically.
--batch-size, -b (default: 1)
    Concurrent requests, each with its own KV cache. Set above 1 if you plan to serve multiple users.
--verbose (default: false)
    Show detailed detection info (driver versions, etc.).
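The linear KV-cache scaling behind --context-length is easy to see with a quick back-of-envelope calculation. The model dimensions below are illustrative (in the ballpark of an 8B GQA model), not taken from tamebi's catalog:

```python
def kv_cache_bytes(layers, kv_heads, head_dim, context_len,
                   batch_size=1, bytes_per_elem=2):
    """KV cache size: 2 (K and V) x layers x KV heads x head_dim
    x context tokens x bytes per element x batch size."""
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_elem * batch_size

# Illustrative 8B-class model: 32 layers, 8 KV heads (GQA), head_dim 128, FP16
print(f"4K:   {kv_cache_bytes(32, 8, 128, 4_096) / 1e9:.2f} GB")    # prints 4K:   0.54 GB
print(f"128K: {kv_cache_bytes(32, 8, 128, 131_072) / 1e9:.2f} GB")  # prints 128K: 17.18 GB
```

A 32x jump in context length means a 32x jump in KV-cache memory, which is why the default is a conservative 4096 tokens.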

tamebi update

Pull the latest model catalog from the remote. The catalog updates automatically in the background, but you can force a refresh with this command.

Examples

# Basic hardware check
tamebi check

# JSON output for scripting
tamebi check --json

# Estimate for serving 4 concurrent users with 8K context
tamebi check --batch-size 4 --context-length 8192

# Use each model's native max context window instead of the 4K default
tamebi check --context-length 0

# Force-refresh the model catalog
tamebi update

Supported Hardware

NVIDIA: detected via nvidia-ml-py (NVML). Reports model, VRAM, CUDA version, compute capability.
AMD: detected via rocm-smi (subprocess); requires ROCm. Reports model, VRAM.
Apple Silicon: detected via system_profiler. Reports chip model (M1/M2/M3/M4), unified memory.
CPU-only: detected via psutil + py-cpuinfo. Reports cores, threads, frequency, architecture.
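For the CPU-only case, the general approach can be sketched with nothing but the standard library. tamebi itself uses psutil and py-cpuinfo for richer data (frequencies, per-core detail); this is only a simplified illustration:

```python
import os
import platform

def detect_cpu():
    """Minimal CPU-only hardware detection using only the standard library.

    A simplified sketch: tamebi's real detection (psutil + py-cpuinfo)
    also reports frequency and physical-vs-logical core counts.
    """
    return {
        "arch": platform.machine(),       # e.g. "x86_64" or "arm64"
        "logical_cores": os.cpu_count(),  # logical (SMT) core count
        "system": platform.system(),      # "Linux", "Darwin", "Windows"
    }

print(detect_cpu())
```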

Model Catalog

The catalog is automatically updated weekly and covers the latest releases from major labs including Meta, Mistral, Google, Qwen, DeepSeek, GLM, MiniMax, Kimi, Liquid, and AllenAI. Models are fetched directly from HuggingFace Hub — no manual maintenance required.

Run tamebi update at any time to pull the latest catalog.

How Estimation Works

Memory is estimated per model and precision:

Total VRAM = Model Weights + KV Cache + Overhead

Model Weights = params (billions) × bytes_per_param
  FP16: 2 bytes | INT8: 1 byte | INT4: 0.5 bytes

KV Cache = 2 × layers × num_kv_heads × head_dim × context_len × bytes × batch_size
  (GQA-aware: uses KV heads, not Q heads)

Overhead = 15% of weights (activations + fragmentation) + 0.5 GB (NVIDIA only)
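Put together, the estimate can be sketched as a small function. The constants mirror the formulas above; the example model dimensions at the bottom are assumptions for illustration, not catalog values:

```python
def estimate_vram_gb(params_b, layers, kv_heads, head_dim,
                     context_len=4096, batch_size=1,
                     bytes_per_param=2, kv_bytes=2, nvidia=True):
    """Estimate total VRAM in GB per the formulas above.

    params_b: parameter count in billions.
    bytes_per_param: 2 (FP16), 1 (INT8), or 0.5 (INT4).
    """
    weights = params_b * 1e9 * bytes_per_param
    kv_cache = 2 * layers * kv_heads * head_dim * context_len * kv_bytes * batch_size
    overhead = 0.15 * weights + (0.5e9 if nvidia else 0.0)
    return (weights + kv_cache + overhead) / 1e9

# Illustrative 8B model in FP16 at 4K context (dimensions are assumptions):
print(f"{estimate_vram_gb(8, 32, 8, 128):.1f} GB")  # prints 19.4 GB
```

Note that weights plus the fixed 15% overhead dominate at short contexts, while the KV cache takes over as context length or batch size grows.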

Performance estimates (tokens/sec, time to first token) are based on hardware-class lookup tables. They show ranges, not exact numbers — actual performance depends on drivers, software stack, and workload.
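A hardware-class lookup of this kind can be sketched as a plain dictionary keyed on (hardware class, model size, precision). The classes and numbers below are illustrative placeholders, not tamebi's actual tables:

```python
# Throughput lookup returning (low, high) ranges rather than point estimates.
# All entries here are made-up placeholders for illustration only.
THROUGHPUT_TOK_S = {
    ("datacenter_gpu", "8B", "fp16"): (60, 120),
    ("consumer_gpu",   "8B", "fp16"): (25, 60),
    ("apple_silicon",  "8B", "int4"): (15, 40),
    ("cpu_only",       "8B", "int4"): (2, 8),
}

def estimate_throughput(hw_class, size, precision):
    """Return a (low, high) tokens/sec range, or None if no table entry exists."""
    return THROUGHPUT_TOK_S.get((hw_class, size, precision))

low, high = estimate_throughput("consumer_gpu", "8B", "fp16")
print(f"{low}-{high} tok/s")  # prints 25-60 tok/s
```

Reporting a range is a deliberate design choice: two machines in the same class can differ substantially in drivers and thermal limits, so a single number would overstate precision.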

License

Copyright (c) 2026 Tamebi. All rights reserved. Proprietary and confidential.

Download files

Download the file for your platform.

Source Distribution

tamebi-0.2.2.tar.gz (20.8 kB)

Built Distribution

tamebi-0.2.2-py3-none-any.whl (20.9 kB)

File details

Details for the file tamebi-0.2.2.tar.gz.

File metadata

  • Download URL: tamebi-0.2.2.tar.gz
  • Size: 20.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.3

File hashes

Hashes for tamebi-0.2.2.tar.gz
Algorithm Hash digest
SHA256 533f51d6aead0368722789829646df2fb663280148b147a8ae215dae82bde4a6
MD5 0c9b3e6629da18a481f3f4f3054911a2
BLAKE2b-256 18b1ab8838d92c2bf7f142c5e4c8f14dbd4e7af4f5d9a148f9bbc202fe1a5523


File details

Details for the file tamebi-0.2.2-py3-none-any.whl.

File metadata

  • Download URL: tamebi-0.2.2-py3-none-any.whl
  • Size: 20.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.3

File hashes

Hashes for tamebi-0.2.2-py3-none-any.whl
Algorithm Hash digest
SHA256 943f6c2e6d84cdcae091d8cf01c3702643646e66f21f99d5c32fc87a647023d0
MD5 195e272e7ffdccc264e460d2269e96f0
BLAKE2b-256 1aa81d414c08905c9c59be4064b869641d0c4ae999a631ac5c8c868e2f2d462e
