
Detect hardware and estimate LLM model inference capability


tamebi

Detect your hardware. Know what you can run.

tamebi is a CLI tool that automatically detects your machine's hardware (CPU, RAM, GPU, disk) and tells you exactly which LLM models you can run — with estimated memory usage, throughput, and time to first token.

Install

pip install tamebi

or with uv:

uv pip install tamebi

NVIDIA, AMD, and Apple Silicon are all detected automatically — no extra flags or extras needed.

Quick Start

tamebi check

CLI Reference

tamebi check

Detect hardware and estimate which LLM models can run.

| Flag | Short | Default | Description |
| --- | --- | --- | --- |
| --json | -j | false | Output as JSON instead of rich tables |
| --context-length | -c | 4096 | Context length in tokens. The KV cache scales linearly with this; 4K vs. 128K changes memory requirements dramatically |
| --batch-size | -b | 1 | Number of concurrent requests. Each request gets its own KV cache; set this above 1 if you plan to serve multiple users |
| --verbose | | false | Show detailed detection info (driver versions, etc.) |
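To see why --context-length dominates memory use, the KV-cache term can be computed by hand. The model dimensions below (32 layers, 8 KV heads, head dimension 128, a typical 8B-class shape) are illustrative assumptions, not values tamebi reports:

```python
def kv_cache_gb(layers, num_kv_heads, head_dim, context_len,
                batch_size=1, kv_bytes=2):
    """KV cache size in decimal GB: 2x for keys and values,
    kv_bytes=2 assumes FP16 cache entries."""
    return (2 * layers * num_kv_heads * head_dim
            * context_len * kv_bytes * batch_size) / 1e9

# Assumed dims: 32 layers, 8 KV heads, head_dim 128
print(f"4K context:   {kv_cache_gb(32, 8, 128, 4096):.2f} GB")    # 0.54 GB
print(f"128K context: {kv_cache_gb(32, 8, 128, 131072):.2f} GB")  # 17.18 GB
```

The scaling is linear, so a 32x longer context means a 32x larger cache, and each additional concurrent request (--batch-size) multiplies it again.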

tamebi update

Pull the latest model catalog from the remote source. The catalog updates automatically in the background, but you can force an immediate refresh with this command.

tamebi update

Examples

# Basic hardware check
tamebi check

# JSON output for scripting
tamebi check --json

# Estimate for serving 4 concurrent users with 8K context
tamebi check --batch-size 4 --context-length 8192

# Use each model's native max context window instead of the 4K default
tamebi check --context-length 0

# Force-refresh the model catalog
tamebi update

Supported Hardware

| Vendor | Detection Method | Details |
| --- | --- | --- |
| NVIDIA | nvidia-ml-py (NVML) | Model, VRAM, CUDA version, compute capability |
| AMD | rocm-smi (subprocess) | Model, VRAM (requires ROCm) |
| Apple Silicon | system_profiler | Chip model (M1/M2/M3/M4), unified memory |
| CPU-only | psutil + py-cpuinfo | Cores, threads, frequency, architecture |
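tamebi relies on psutil and py-cpuinfo for the CPU-only path. As a rough illustration (not tamebi's actual implementation), a standard-library-only analog of that detection step might look like:

```python
import os
import platform

def detect_cpu():
    """Minimal CPU detection sketch using only the standard library.
    tamebi itself uses psutil + py-cpuinfo, which additionally expose
    physical core count, clock frequency, and detailed CPU flags."""
    return {
        "logical_cores": os.cpu_count(),
        "architecture": platform.machine(),       # e.g. x86_64, arm64
        "processor": platform.processor() or "unknown",
        "system": platform.system(),              # Linux, Darwin, Windows
    }

print(detect_cpu())
```

On Apple Silicon, platform.machine() reports arm64; tamebi goes further and queries system_profiler to distinguish M1/M2/M3/M4 and read unified memory size.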

Model Catalog

The catalog is automatically updated weekly and covers the latest releases from major labs including Meta, Mistral, Google, Qwen, DeepSeek, GLM, MiniMax, Kimi, Liquid, and AllenAI. Models are fetched directly from HuggingFace Hub — no manual maintenance required.

Run tamebi update at any time to pull the latest catalog.

How Estimation Works

Memory is estimated per model and precision:

Total VRAM = Model Weights + KV Cache + Overhead

Model Weights = params (billions) × bytes_per_param
  FP16: 2 bytes | INT8: 1 byte | INT4: 0.5 bytes

KV Cache = 2 × layers × num_kv_heads × head_dim × context_len × bytes × batch_size
  (GQA-aware: uses KV heads, not Q heads)

Overhead = 15% of weights (activations + fragmentation) + 0.5 GB (NVIDIA only)
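The formula above can be sketched directly in Python. The model shape used here (8B parameters, 32 layers, 8 KV heads of dimension 128, roughly a Llama-3-8B-class model) is an illustrative assumption, not a value from tamebi's catalog:

```python
def estimate_vram_gb(params_b, layers, num_kv_heads, head_dim,
                     context_len, batch_size=1, bytes_per_param=2,
                     kv_bytes=2, nvidia=True):
    """Total VRAM = Model Weights + KV Cache + Overhead, in decimal GB.
    bytes_per_param: 2 for FP16, 1 for INT8, 0.5 for INT4."""
    weights = params_b * 1e9 * bytes_per_param
    # 2x for keys and values; GQA-aware, so KV heads rather than Q heads
    kv_cache = (2 * layers * num_kv_heads * head_dim
                * context_len * kv_bytes * batch_size)
    # 15% of weights for activations/fragmentation, plus a flat
    # 0.5 GB CUDA-context cost on NVIDIA
    overhead = 0.15 * weights + (0.5e9 if nvidia else 0.0)
    return (weights + kv_cache + overhead) / 1e9

# FP16 weights, 4K context, single request
print(round(estimate_vram_gb(8, 32, 8, 128, 4096), 2))  # 19.44
```

At FP16 the weights dominate (16 GB of the ~19.4 GB total); dropping to INT4 (bytes_per_param=0.5) shrinks both the weights and the 15% overhead term, which is why quantization changes which hardware classes qualify.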

Performance estimates (tokens/sec, time to first token) are based on hardware-class lookup tables. They show ranges, not exact numbers — actual performance depends on drivers, software stack, and workload.
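A hardware-class lookup table of this kind can be sketched as a plain dictionary. The classes, keys, and ranges below are illustrative placeholders, not tamebi's calibration data:

```python
# Hypothetical (hardware class, model size, precision) -> tokens/sec range.
# All numbers are illustrative placeholders for the lookup mechanism only.
THROUGHPUT_RANGES = {
    ("datacenter_gpu", "8B", "fp16"): (60, 120),
    ("consumer_gpu",   "8B", "fp16"): (25, 60),
    ("apple_silicon",  "8B", "int4"): (15, 40),
    ("cpu_only",       "8B", "int4"): (2, 8),
}

def estimate_tokens_per_sec(hw_class, size, precision):
    """Return a throughput range string, or 'unknown' for
    combinations not covered by the table."""
    lo, hi = THROUGHPUT_RANGES.get((hw_class, size, precision), (None, None))
    if lo is None:
        return "unknown"
    return f"{lo}-{hi} tok/s"

print(estimate_tokens_per_sec("consumer_gpu", "8B", "fp16"))  # 25-60 tok/s
```

Reporting a range rather than a point estimate is the design choice that keeps the tool honest: two machines in the same class can differ meaningfully in drivers, thermals, and inference software.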

License

Copyright (c) 2026 Tamebi. All rights reserved. Proprietary and confidential.

Download files

Download the file for your platform.

Source Distribution

tamebi-1.0.0.tar.gz (20.9 kB)

Built Distribution

tamebi-1.0.0-py3-none-any.whl (21.1 kB)

File details

Details for the file tamebi-1.0.0.tar.gz.

File metadata

  • Download URL: tamebi-1.0.0.tar.gz
  • Upload date:
  • Size: 20.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.3

File hashes

Hashes for tamebi-1.0.0.tar.gz
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 53dab3138709dab3df06f9cc217dfe73e0dde2fb6914f1094039ff7307da9447 |
| MD5 | 3707caa5de0dd85133b508e0b44735a7 |
| BLAKE2b-256 | 95cdd87d41cb01eee98cece91690e1fdcfad8e6e6f588fd3acd5d57467a802b8 |


File details

Details for the file tamebi-1.0.0-py3-none-any.whl.

File metadata

  • Download URL: tamebi-1.0.0-py3-none-any.whl
  • Upload date:
  • Size: 21.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.3

File hashes

Hashes for tamebi-1.0.0-py3-none-any.whl
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | c45ebaf199ed611d39d9add46ba29dcee260f2c01d4a489e49f6f7c6b4731480 |
| MD5 | f5b342651af902d892db4d0ca4f0cc4d |
| BLAKE2b-256 | 98acd53e9d6f5a3504f1e40576fe60a488f9622009944b3d94f1ad26bd97b1a4 |

