
A CLI to estimate inference memory requirements for Hugging Face models, written in Python.



[!WARNING] hf-mem is still experimental and therefore subject to major changes across releases; keep in mind that breaking changes may occur until v1.0.0.

hf-mem is a CLI to estimate inference memory requirements for Hugging Face models, written in Python. It is lightweight and depends only on httpx, since it pulls the Safetensors and/or GGUF metadata via HTTP Range requests rather than downloading the weights. Running it with uv is recommended for a better experience.
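The trick that keeps hf-mem lightweight is the Safetensors header layout: the file starts with an 8-byte little-endian length prefix followed by a JSON header describing every tensor's dtype and shape, so a small Range request is enough to estimate memory. A minimal sketch of parsing such a header, using a synthetic in-memory buffer instead of a live request (the helper name is illustrative, not hf-mem's API):

```python
import json
import struct

def parse_safetensors_header(raw: bytes) -> dict:
    # The first 8 bytes are a little-endian u64 holding the JSON header length.
    (header_len,) = struct.unpack("<Q", raw[:8])
    return json.loads(raw[8:8 + header_len].decode("utf-8"))

# Synthetic header describing a single fp16 tensor of shape (2, 3); a real
# request would fetch these bytes from the Hub with an HTTP Range header.
header = {"weight": {"dtype": "F16", "shape": [2, 3], "data_offsets": [0, 12]}}
blob = json.dumps(header).encode("utf-8")
raw = struct.pack("<Q", len(blob)) + blob

parsed = parse_safetensors_header(raw)

# Estimated bytes for the tensor: element count times bytes per element (2 for F16).
numel = 1
for dim in parsed["weight"]["shape"]:
    numel *= dim
print(numel * 2)  # 12
```

Summing this estimate over all tensors in the header yields the model's weight memory without ever touching the tensor data itself.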

hf-mem lets you estimate the inference requirements of any model on the Hugging Face Hub, including Transformers, Diffusers, and Sentence Transformers models, or really any model whose repository contains Safetensors or GGUF weights.

More information about hf-mem is available in this short-form post, though note it is not up to date, as it was written in January 2026.

Usage

CLI (Recommended)

Transformers

uvx hf-mem --model-id MiniMaxAI/MiniMax-M2

Diffusers

uvx hf-mem --model-id Qwen/Qwen-Image

Sentence Transformers

uvx hf-mem --model-id google/embeddinggemma-300m

Python

You can also run it programmatically from Python:

from hf_mem import run

result = run(model_id="MiniMaxAI/MiniMax-M2", experimental=True)
print(result)
# Result(model_id='MiniMaxAI/MiniMax-M2', revision='main', filename=None, memory=230121630720, kv_cache=24964497408, total_memory=255086128128, details=False)

If you're already inside an async application, use arun(...) instead:

from hf_mem import arun

result = await arun(model_id="MiniMaxAI/MiniMax-M2", experimental=True)
print(result)
# Result(model_id='MiniMaxAI/MiniMax-M2', revision='main', filename=None, memory=230121630720, kv_cache=24964497408, total_memory=255086128128, details=False)
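Assuming the memory fields in Result are reported in bytes (which the magnitudes in the example output suggest), converting them to human-readable units is a one-liner. A small sketch using the values from the example above:

```python
# Values taken from the example Result above, assumed to be in bytes.
memory = 230121630720
kv_cache = 24964497408
total_memory = 255086128128

gib = total_memory / 1024**3
print(f"{gib:.2f} GiB")  # 237.57 GiB
```

This makes it easy to sanity-check the estimate against the VRAM of a target GPU or node.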

Experimental

The --experimental flag enables KV Cache memory estimation for LLMs (...ForCausalLM) and VLMs (...ForConditionalGeneration). It also unlocks the following options:

  • --max-model-len: the maximum sequence length (defaults to the value in config.json)
  • --batch-size: the batch size (defaults to 1)
  • --kv-cache-dtype: the KV Cache data type (defaults to auto, meaning the data type set in config.json under torch_dtype or dtype, or taken from quantization_config when applicable)

uvx hf-mem --model-id MiniMaxAI/MiniMax-M2 --experimental
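To build intuition for what the estimation does with those options, the commonly used KV Cache size formula multiplies two tensors (keys and values) per layer, per KV head, per token. The sketch below uses illustrative config values, not numbers read from any real config.json, and is not necessarily the exact formula hf-mem implements:

```python
# Illustrative config values (not from a real model's config.json).
num_layers = 32
num_kv_heads = 8       # grouped-query attention: KV heads, not attention heads
head_dim = 128
max_model_len = 4096   # corresponds to --max-model-len
batch_size = 1         # corresponds to --batch-size
dtype_bytes = 2        # e.g. bf16, per --kv-cache-dtype

# Keys and values each store one head_dim-sized vector per layer,
# per KV head, per token, per sequence in the batch.
kv_cache_bytes = (
    2 * num_layers * num_kv_heads * head_dim
    * max_model_len * batch_size * dtype_bytes
)
print(kv_cache_bytes / 1024**3)  # 0.5 (GiB)
```

Doubling --max-model-len or --batch-size doubles this figure, which is why the flag matters for long-context deployments.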

GGUF

If the repository contains GGUF model weights, those are listed by default (only when there are no Safetensors weights; otherwise the GGUFs are ignored) and the memory is estimated for each of them. If a specific file is provided, the estimation targets that file instead.

uvx hf-mem --model-id TheBloke/deepseek-llm-7B-chat-GGUF --experimental

Or, to get the estimation for a single file only:

uvx hf-mem --model-id TheBloke/deepseek-llm-7B-chat-GGUF --gguf-file deepseek-llm-7b-chat.Q2_K.gguf --experimental
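GGUF files are also amenable to metadata-only reads: per the GGUF specification, the file begins with the 4-byte magic "GGUF" followed by a little-endian uint32 version, with tensor metadata following in the header. A minimal sketch validating that prefix on a synthetic buffer (a real check would fetch the first bytes with an HTTP Range request):

```python
import struct

# Synthetic first 8 bytes of a GGUF file: magic + little-endian uint32 version.
raw = b"GGUF" + struct.pack("<I", 3)

magic = raw[:4]
(version,) = struct.unpack("<I", raw[4:8])

# Validate the format before attempting to parse the full metadata header.
assert magic == b"GGUF"
print(version)  # 3
```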

Skills

Optionally, you can add hf-mem as an agent skill, which allows the underlying coding agent to discover and use it when provided as a SKILL.md, e.g., .claude/skills/hf-mem/SKILL.md.

More information can be found at Anthropic Agent Skills, along with guidance on how to use them.

Extensions

Optionally, you can also add hf-mem as an extension to the Hugging Face Hub CLI (hf), invoked as hf mem .... First, install hf as explained in Hugging Face CLI - "Getting started". Then, to add hf-mem as an extension, run:

hf extensions add alvarobartt/hf-mem

More information can be found at Hugging Face Hub CLI - "How Python extensions are installed".

Once installed, you can use hf-mem via hf as hf mem ..., alongside the rest of the Hugging Face extensions you have installed (which you can list via hf extensions list).

