Detect hardware and estimate LLM model inference capability
tamebi

Detect your hardware. Know what you can run.

tamebi is a CLI tool that automatically detects your machine's hardware (CPU, RAM, GPU, disk) and tells you exactly which LLM models you can run — with estimated memory usage, throughput, and time to first token.

Install

pip install tamebi

or with uv:

uv pip install tamebi

NVIDIA, AMD, and Apple Silicon are all detected automatically — no extra flags or extras needed.

Quick Start

tamebi check

CLI Reference

tamebi check

Detect hardware and estimate which LLM models can run.

| Flag | Short | Default | Description |
|------|-------|---------|-------------|
| `--json` | `-j` | `false` | Output as JSON instead of rich tables |
| `--context-length` | `-c` | `4096` | Context length in tokens. KV cache scales linearly with this; 4K vs. 128K changes memory dramatically |
| `--batch-size` | `-b` | `1` | Concurrent requests, each with its own KV cache. Set >1 if planning to serve multiple users |
| `--verbose` | | `false` | Show detailed detection info (driver versions, etc.) |
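To see why `--context-length` matters so much, here is a back-of-the-envelope sketch of the KV-cache sizing described under "How Estimation Works" below, using Llama-3-8B-class dimensions (32 layers, 8 KV heads via GQA, head dim 128) as an illustrative example. This is not tamebi's actual code; it assumes an FP16 (2-byte) KV cache:

```python
def kv_cache_gib(layers, kv_heads, head_dim, context_len,
                 batch_size=1, kv_bytes=2):
    """KV cache size in GiB: 2 (K and V) x layers x KV heads x head dim
    x tokens x bytes per entry x concurrent requests."""
    return (2 * layers * kv_heads * head_dim * context_len
            * kv_bytes * batch_size) / 2**30

# Llama-3-8B-class dims: 32 layers, 8 KV heads (GQA), head_dim 128
print(kv_cache_gib(32, 8, 128, 4096))    # → 0.5 GiB at the 4K default
print(kv_cache_gib(32, 8, 128, 131072))  # → 16.0 GiB at 128K (32x larger)
```

The same linearity applies to `--batch-size`: four concurrent 8K-context requests need four separate caches, i.e. `kv_cache_gib(32, 8, 128, 8192, batch_size=4)` is 4.0 GiB.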

tamebi update

Pull the latest model catalog from the remote. The catalog updates automatically in the background, but you can force a refresh with this command.

tamebi update

Examples

# Basic hardware check
tamebi check

# JSON output for scripting
tamebi check --json

# Estimate for serving 4 concurrent users with 8K context
tamebi check --batch-size 4 --context-length 8192

# Use each model's native max context window instead of the 4K default
tamebi check --context-length 0

# Force-refresh the model catalog
tamebi update
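For scripting, `--json` output can be consumed directly. The snippet below parses a hypothetical payload; the actual JSON schema is not documented here, so the field names (`hardware`, `models`, `fits`) are assumptions to illustrate the pattern, and you should inspect real output before relying on them:

```python
import json

# Hypothetical output shape for `tamebi check --json` — field names
# are illustrative only and may differ from the real schema.
sample = """
{
  "hardware": {"gpu": "NVIDIA RTX 4090", "vram_gb": 24},
  "models": [
    {"name": "llama-3.1-8b", "precision": "fp16", "fits": true},
    {"name": "llama-3.1-70b", "precision": "int4", "fits": false}
  ]
}
"""

report = json.loads(sample)
runnable = [m["name"] for m in report["models"] if m["fits"]]
print(runnable)  # → ['llama-3.1-8b']
```

In a real script you would replace `sample` with the command's stdout, e.g. `subprocess.run(["tamebi", "check", "--json"], capture_output=True, text=True).stdout`.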

Supported Hardware

| Vendor | Detection Method | Details |
|--------|------------------|---------|
| NVIDIA | `nvidia-ml-py` (NVML) | Model, VRAM, CUDA version, compute capability |
| AMD | `rocm-smi` (subprocess) | Model, VRAM (requires ROCm) |
| Apple Silicon | `system_profiler` | Chip model (M1/M2/M3/M4), unified memory |
| CPU-only | `psutil` + `py-cpuinfo` | Cores, threads, frequency, architecture |
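The CPU-only path boils down to asking the OS about itself. This stdlib-only sketch shows the idea; tamebi itself uses `psutil` and `py-cpuinfo` (and NVML / `rocm-smi` / `system_profiler` for GPUs), so treat this as an illustration, not its implementation:

```python
import os
import platform

def detect_cpu():
    """Minimal CPU-only detection using only the standard library."""
    return {
        "arch": platform.machine(),   # e.g. "x86_64" or "arm64"
        "cores": os.cpu_count(),      # logical core count
        "system": platform.system(),  # "Linux", "Darwin", "Windows"
    }

info = detect_cpu()
print(info)
```

Richer details (per-core frequency, cache sizes, brand string) are exactly why libraries like `py-cpuinfo` exist on top of this.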

Model Catalog

The catalog is automatically updated weekly and covers the latest releases from major labs including Meta, Mistral, Google, Qwen, DeepSeek, GLM, MiniMax, Kimi, Liquid, and AllenAI. Models are fetched directly from HuggingFace Hub — no manual maintenance required.

Run tamebi update at any time to pull the latest catalog.

How Estimation Works

Memory is estimated per model and precision:

Total VRAM = Model Weights + KV Cache + Overhead

Model Weights = params (billions) × bytes_per_param
  FP16: 2 bytes | INT8: 1 byte | INT4: 0.5 bytes

KV Cache = 2 × layers × num_kv_heads × head_dim × context_len × bytes × batch_size
  (GQA-aware: uses KV heads, not Q heads)

Overhead = 15% of weights (activations + fragmentation) + 0.5 GB (NVIDIA only)
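The formula above translates directly into a few lines of Python. This is a re-derivation from the documented formula, not tamebi's source code; it assumes a 2-byte (FP16) KV cache and decimal gigabytes, and the example dimensions are Llama-3-8B-class:

```python
def estimate_vram_gb(params_b, bytes_per_param, layers, kv_heads, head_dim,
                     context_len, batch_size=1, kv_bytes=2, nvidia=True):
    """Total VRAM estimate in GB: weights + KV cache + overhead."""
    weights = params_b * bytes_per_param                       # params in billions -> GB
    kv_cache = (2 * layers * kv_heads * head_dim * context_len
                * kv_bytes * batch_size) / 1e9                 # GQA-aware: KV heads only
    overhead = 0.15 * weights + (0.5 if nvidia else 0.0)       # activations + fragmentation
    return weights + kv_cache + overhead

# 8B model at FP16, 4K context: 16 GB weights + ~0.5 GB KV + ~2.9 GB overhead
print(round(estimate_vram_gb(8, 2, 32, 8, 128, 4096), 1))  # → 19.4
```

Note how the weights term dominates at short contexts, while at 128K the KV cache term overtakes it, which is why `--context-length` changes the verdict on what fits.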

Performance estimates (tokens/sec, time to first token) are based on hardware-class lookup tables. They show ranges, not exact numbers — actual performance depends on drivers, software stack, and workload.

License

Copyright (c) 2026 Tamebi. All rights reserved. Proprietary and confidential.
