A CLI to estimate inference memory requirements for Hugging Face models, written in Python.
Project description
[!WARNING]
hf-mem is still experimental and therefore subject to breaking changes across releases until v1.0.0.
hf-mem is a CLI to estimate inference memory requirements for Hugging Face models, written in Python. hf-mem is lightweight and depends only on httpx, since it pulls the Safetensors and/or GGUF metadata via HTTP Range requests. Running it with uv is recommended for a better experience.
hf-mem lets you estimate the inference memory requirements to run any model from the Hugging Face Hub, including Transformers, Diffusers, and Sentence Transformers models, or really any model, as long as the repository contains Safetensors or GGUF weights.
Read more about hf-mem in this short-form post, but note it's not fully up to date, as it was written in January 2026.
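To make the metadata-over-Range-requests idea concrete, here is a minimal sketch (not hf-mem's actual implementation) of reading only the Safetensors header: the first 8 bytes of a .safetensors file are a little-endian uint64 holding the JSON header length, so two small Range requests are enough to size the weights without downloading any tensors. The URL below is just an example; gated repositories would additionally require an auth token.

```python
import json
import struct

import httpx

# Any .safetensors file on the Hub works here; this URL is illustrative.
URL = "https://huggingface.co/google/embeddinggemma-300m/resolve/main/model.safetensors"

with httpx.Client(follow_redirects=True) as client:
    # Bytes 0-7: little-endian uint64 with the length of the JSON header
    head = client.get(URL, headers={"Range": "bytes=0-7"})
    (header_len,) = struct.unpack("<Q", head.content)
    # Fetch only the JSON header, never the tensor data itself
    resp = client.get(URL, headers={"Range": f"bytes=8-{8 + header_len - 1}"})
    header = json.loads(resp.content)

# Each entry maps a tensor name to its dtype, shape, and byte offsets,
# so the total weight size falls out of the header alone.
total_bytes = sum(
    meta["data_offsets"][1] - meta["data_offsets"][0]
    for name, meta in header.items()
    if name != "__metadata__"
)
print(f"~{total_bytes / 1e9:.2f} GB of weights")
```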
Usage
CLI (Recommended)
Transformers
uvx hf-mem --model-id MiniMaxAI/MiniMax-M2
Diffusers
uvx hf-mem --model-id Qwen/Qwen-Image
Sentence Transformers
uvx hf-mem --model-id google/embeddinggemma-300m
Python
You can also run it programmatically from Python:
from hf_mem import run
result = run(model_id="MiniMaxAI/MiniMax-M2", experimental=True)
print(result)
# Result(model_id='MiniMaxAI/MiniMax-M2', revision='main', filename=None, memory=230121630720, kv_cache=24964497408, total_memory=255086128128, details=False)
If you're already inside an async application, use arun(...) instead:
from hf_mem import arun
result = await arun(model_id="MiniMaxAI/MiniMax-M2", experimental=True)
print(result)
# Result(model_id='MiniMaxAI/MiniMax-M2', revision='main', filename=None, memory=230121630720, kv_cache=24964497408, total_memory=255086128128, details=False)
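The memory, kv_cache, and total_memory fields appear to be raw byte counts (an interpretation that matches MiniMax-M2's roughly 230B parameters at one byte per FP8 weight), so a quick conversion makes them readable:

```python
# Interpreting the Result fields above as byte counts (an assumption that
# lines up with MiniMax-M2's parameter count stored in FP8):
for label, n in [
    ("weights", 230121630720),
    ("kv_cache", 24964497408),
    ("total", 255086128128),
]:
    print(f"{label}: {n / 2**30:.1f} GiB")
# weights: 214.3 GiB, kv_cache: ~23.2 GiB, total: 237.6 GiB
```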
Experimental
The --experimental flag enables KV cache memory estimation for LLMs (...ForCausalLM) and VLMs (...ForConditionalGeneration). It also accepts a custom --max-model-len (defaults to the value in config.json), --batch-size (defaults to 1), and --kv-cache-dtype (defaults to auto, i.e., the data type set in config.json under torch_dtype or dtype, or taken from quantization_config when applicable).
uvx hf-mem --model-id MiniMaxAI/MiniMax-M2 --experimental
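For intuition, the sketch below shows the standard KV cache sizing formula. This is a back-of-the-envelope calculation with illustrative dimensions, not MiniMax-M2's, and hf-mem's exact accounting may differ:

```python
def kv_cache_bytes(
    num_layers: int,
    num_kv_heads: int,
    head_dim: int,
    max_model_len: int,
    batch_size: int = 1,
    dtype_bytes: int = 2,  # e.g. 2 for float16/bfloat16
) -> int:
    # 2x for the separate key and value tensors cached per attention layer
    return 2 * num_layers * num_kv_heads * head_dim * max_model_len * batch_size * dtype_bytes

# Made-up Llama-like dimensions, purely for illustration:
print(kv_cache_bytes(num_layers=32, num_kv_heads=8, head_dim=128, max_model_len=8192))
# 1073741824 -> 1 GiB for an 8K context at batch size 1
```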
GGUF
If the repository contains GGUF model weights, those are listed by default (only when there are no Safetensors weights; otherwise the GGUF files are ignored) and the memory is estimated for each of them. If a specific file is provided, the estimation targets that file instead.
uvx hf-mem --model-id TheBloke/deepseek-llm-7B-chat-GGUF --experimental
Or, if you only want the estimation for a given file:
uvx hf-mem --model-id TheBloke/deepseek-llm-7B-chat-GGUF --gguf-file deepseek-llm-7b-chat.Q2_K.gguf --experimental
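The GGUF path works on the same Range-request principle. As an illustration (again a sketch, not hf-mem's code), a single small request is enough to read the GGUF header, which starts with the magic bytes GGUF, a uint32 version, and uint64 tensor and metadata counts:

```python
import struct

import httpx

# Example GGUF file from the repository above
URL = (
    "https://huggingface.co/TheBloke/deepseek-llm-7B-chat-GGUF/resolve/main/"
    "deepseek-llm-7b-chat.Q2_K.gguf"
)

with httpx.Client(follow_redirects=True) as client:
    # Bytes 0-23: magic, version, tensor count, metadata key-value count
    head = client.get(URL, headers={"Range": "bytes=0-23"}).content

magic = head[:4]
(version,) = struct.unpack("<I", head[4:8])
tensor_count, kv_count = struct.unpack("<QQ", head[8:24])
assert magic == b"GGUF", "not a GGUF file"
print(f"GGUF v{version}: {tensor_count} tensors, {kv_count} metadata key-values")
```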
(Optional) Agent Skills
Optionally, you can add hf-mem as an agent skill, which lets the underlying coding agent discover and use it when provided as a SKILL.md, e.g., .claude/skills/hf-mem/SKILL.md.
More information on Anthropic Agent Skills and how to use them can be found in Anthropic's documentation.
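For illustration, a minimal SKILL.md could look like the following. The name and description wording here are assumptions, not an official skill shipped by the project:

```markdown
---
name: hf-mem
description: Estimate inference memory requirements for Hugging Face models. Use when asked how much (V)RAM a model needs.
---

# hf-mem

Run `uvx hf-mem --model-id <org>/<model>` to estimate inference memory.
Add `--experimental` to include a KV cache estimate for LLMs and VLMs.
```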
Download files
File details
Details for the file hf_mem-0.5.0.tar.gz.
File metadata
- Download URL: hf_mem-0.5.0.tar.gz
- Upload date:
- Size: 23.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: uv/0.10.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 51061ed2e68c556d561728094310e92fb3924e5494e4c572b8f33964d9594896 |
| MD5 | d720a2621382e14d777e3acd31c8bfa1 |
| BLAKE2b-256 | 9bc80cbea8bf4095baacc69fa53fe68830c9f68d71baec029f0989905d85d21d |
File details
Details for the file hf_mem-0.5.0-py3-none-any.whl.
File metadata
- Download URL: hf_mem-0.5.0-py3-none-any.whl
- Upload date:
- Size: 31.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: uv/0.10.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | b7fbd5720a76bb9d239cb87ad4f26b76359d396de956f45ba9e86c010b587240 |
| MD5 | 838ffbef41b14b91bdef091702c64dff |
| BLAKE2b-256 | 8b93793458e0d4a6673f54bd24262658b8c58b3828e277130546f53f1c186593 |