# fitmyllm

Find the best local AI model for your GPU — full-featured terminal UI.
## Install

```shell
pip install fitmyllm
```

Or run without installing:

```shell
pipx run fitmyllm
```
## Setup

Get your free API key at fitmyllm.com/?tab=mcp, then:

```shell
fitmyllm setup
# Paste your API key (starts with fml_)
```

Or set it as an environment variable:

```shell
export FITMYLLM_API_KEY=fml_your_key_here
```
## Run

```shell
fitmyllm
```
## Features
| Screen | Description |
|---|---|
| Find Models | Auto-detect GPU, 11 filters (use case, context, size, family, quant, speed...), 30+ models ranked by score |
| Find GPU | GPU recommendations for any model with budget, speed, vendor, and quant filters |
| Enterprise | 10-tab deployment analysis: overview, risk, checklist, TCO, scaling, SLA, GPU matrix, performance, fine-tuning, architecture |
| Compare | Side-by-side comparison of up to 4 models with all metrics |
| Install | Choose quantization, pick engine (7 supported), install with live progress bar |
| Chat | Talk to models via Ollama with real-time streaming and collapsible thinking blocks |
| Tier List | Models and GPUs ranked S-F with cloud GPU alternatives |
| Benchmarks | Leaderboard sortable by 8 benchmark metrics |
| GPU Prices | Search and compare GPU pricing with vendor filter |
| Command Simulator | Interactive parameter tuning for 7 engines |
| Charts | ASCII score/speed/VRAM bars and quality-vs-speed scatter plot |
## Keyboard Shortcuts
| Key | Action |
|---|---|
| `f` | Toggle filter panel |
| `g` | Search/change GPU |
| `Space` | Mark model for comparison |
| `c` | Compare marked models / Chat |
| `i` | Install model |
| `t` | Command simulator |
| `s` | Save/unsave model |
| `r` | Show HuggingFace README |
| `e` | Export results as Markdown |
| `v` | Show ASCII charts |
| `Ctrl+S` | Save current filters as defaults |
| `Ctrl+T` | Toggle thinking blocks in chat |
| `Esc` | Go back |
| `q` | Quit |
## Supported Engines
Ollama, vLLM, LM Studio, llama.cpp, KoboldCpp, Jan, Docker Model Runner
## Offline Mode

API responses are cached in `~/.fitmyllm/cache/` (24h TTL). If you lose internet connectivity, the CLI falls back to cached data automatically.
## Requirements
- Python 3.10+
- API key from fitmyllm.com
- Ollama (optional, for install/chat features)
## File details

### fitmyllm-0.2.0.tar.gz

- Download URL: fitmyllm-0.2.0.tar.gz
- Upload date:
- Size: 41.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3

| Algorithm | Hash digest |
|---|---|
| SHA256 | `43377f5cc3ceba435017596fd34cfae9dc09396429c5b6ab9e761bad3579fa88` |
| MD5 | `c2d7855e5c5b040807451d7937228d06` |
| BLAKE2b-256 | `3f0ee229cd78765a4a1c62f1c56afe3539c463eb844494b988bba1b2363a0b5c` |
### fitmyllm-0.2.0-py3-none-any.whl

- Download URL: fitmyllm-0.2.0-py3-none-any.whl
- Upload date:
- Size: 58.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3

| Algorithm | Hash digest |
|---|---|
| SHA256 | `2b0b043d578b16b2c91f501f827f25374e397d8c820927f3e91fcd90fa44c0af` |
| MD5 | `f1be9e775f1fd954770ae12633271f44` |
| BLAKE2b-256 | `bbd1a6e7dc271b2cc59c54407f809268eb8501edbdde1b3270dc80baff808a42` |
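If you download a distribution file manually, you can verify it against the SHA256 digests listed above before installing. A small streaming helper using only the standard library:

```python
import hashlib


def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of a file in streaming chunks,
    so even large wheels never need to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


# Usage: compare the result with the digest published above, e.g.
#   sha256_of("fitmyllm-0.2.0.tar.gz")
# should equal the SHA256 entry in the sdist table.
```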