
Project description

fitmyllm

Find the best local AI model for your GPU — full-featured terminal UI.

Install

pip install fitmyllm

Or run without installing:

pipx run fitmyllm

Setup

Get your free API key at fitmyllm.com/?tab=mcp, then:

fitmyllm setup
# Paste your API key (starts with fml_)

Or set it as an environment variable:

export FITMYLLM_API_KEY=fml_your_key_here

Run

fitmyllm

Features

  • Find Models: Auto-detect GPU; 11 filters (use case, context, size, family, quant, speed...); 30+ models ranked by score
  • Find GPU: GPU recommendations for any model, with budget, speed, vendor, and quant filters
  • Enterprise: 10-tab deployment analysis covering overview, risk, checklist, TCO, scaling, SLA, GPU matrix, performance, fine-tuning, and architecture
  • Compare: Side-by-side comparison of up to 4 models with all metrics
  • Install: Choose a quantization, pick an engine (7 supported), and install with a live progress bar
  • Chat: Talk to models via Ollama with real-time streaming and collapsible thinking blocks
  • Tier List: Models and GPUs ranked S-F, with cloud GPU alternatives
  • Benchmarks: Leaderboard sortable by 8 benchmark metrics
  • GPU Prices: Search and compare GPU pricing with a vendor filter
  • Command Simulator: Interactive parameter tuning for 7 engines
  • Charts: ASCII score/speed/VRAM bars and a quality-vs-speed scatter plot

Keyboard Shortcuts

Key Action
f Toggle filter panel
g Search/change GPU
Space Mark model for comparison
c Compare marked models / Chat
i Install model
t Command simulator
s Save/unsave model
r Show HuggingFace README
e Export results as Markdown
v Show ASCII charts
Ctrl+S Save current filters as defaults
Ctrl+T Toggle thinking blocks in chat
Esc Go back
q Quit

Supported Engines

Ollama, vLLM, LM Studio, llama.cpp, KoboldCpp, Jan, Docker Model Runner

Offline Mode

API responses are cached in ~/.fitmyllm/cache/ (24h TTL). If you lose internet, the CLI falls back to cached data automatically.
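A 24-hour TTL cache with an offline fallback, as described above, can be sketched in a few lines of Python (the per-entry JSON file layout is an illustrative assumption; only the cache directory and TTL come from this README):

```python
import json
import time
from pathlib import Path

CACHE_DIR = Path.home() / ".fitmyllm" / "cache"  # location per the README
TTL_SECONDS = 24 * 60 * 60                       # 24-hour TTL per the README

def cached_get(name: str, fetch, cache_dir: Path = CACHE_DIR, ttl: int = TTL_SECONDS):
    """Return API data, serving the cached copy when fresh or when offline."""
    path = cache_dir / f"{name}.json"
    # Serve the cache while it is younger than the TTL.
    if path.exists() and time.time() - path.stat().st_mtime < ttl:
        return json.loads(path.read_text())
    try:
        data = fetch()  # e.g. an HTTP request to the fitmyllm API
    except OSError:
        # Lost connectivity: fall back to stale cached data if any exists.
        if path.exists():
            return json.loads(path.read_text())
        raise
    cache_dir.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(data))
    return data
```

Note the fallback deliberately ignores the TTL: stale data beats no data when the network is down.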

Requirements

  • Python 3.10+
  • API key from fitmyllm.com
  • Ollama (optional, for install/chat features)

Project details


Release history

This version

0.3.0

Download files

Download the file for your platform.

Source Distribution

fitmyllm-0.3.0.tar.gz (54.4 kB)


Built Distribution


fitmyllm-0.3.0-py3-none-any.whl (78.3 kB)


File details

Details for the file fitmyllm-0.3.0.tar.gz.

File metadata

  • Download URL: fitmyllm-0.3.0.tar.gz
  • Size: 54.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

Hashes for fitmyllm-0.3.0.tar.gz
Algorithm Hash digest
SHA256 84a1c3b7d716d0f1aa35d79c12e0b883aeee22e1f991267fdc9a3cc014f56c3a
MD5 d5b2a76b336de5138ab9cac7452b4a55
BLAKE2b-256 91f7530aa5016f23afecae3447527dc749a30b3a491bae21a20aae267ecf801a

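To verify a downloaded file against the digests in the table above, compute its SHA-256 locally and compare. A small, generic helper (standard library only):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare sha256_of("fitmyllm-0.3.0.tar.gz") against the SHA256
# value published in the table above; any mismatch means the file
# is corrupt or has been tampered with.
```

Chunked reads keep memory use constant regardless of file size.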

File details

Details for the file fitmyllm-0.3.0-py3-none-any.whl.

File metadata

  • Download URL: fitmyllm-0.3.0-py3-none-any.whl
  • Size: 78.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

Hashes for fitmyllm-0.3.0-py3-none-any.whl
Algorithm Hash digest
SHA256 e63372b028bf753ddd92d0e818eb0085f645d76c89575d9159b4c681ea0ea8d0
MD5 647a86e86fc070b97abb49f7277b4ece
BLAKE2b-256 f98da40b2749755712a060e5f34f6b9726404c47f51c4eb34a567a9ebe7f0131

