
Optimum-NVIDIA is the interface between Hugging Face Transformers and NVIDIA GPUs.

Project description

Optimum-NVIDIA

Optimized inference with NVIDIA and Hugging Face



Optimum-NVIDIA delivers the best inference performance on the NVIDIA platform through Hugging Face. Run LLaMA 2 at 1,200 tokens/second (up to 28x faster than the stock transformers framework) by changing just a single line in your existing transformers code.

Installation

Pip

The pip installation flow has been validated only on Ubuntu at this stage.

apt-get update && apt-get -y install python3.10 python3-pip openmpi-bin libopenmpi-dev
python -m pip install --pre --extra-index-url https://pypi.nvidia.com optimum-nvidia
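
After installation, a quick import check confirms the package is visible to your interpreter. This is a minimal sanity check only; actually running a model still requires a supported NVIDIA GPU and driver:

python -c "from optimum.nvidia import AutoModelForCausalLM; print('optimum-nvidia is installed')"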

For developers who want the best possible performance, see the installation methods below.

Docker container

You can use a Docker container to try Optimum-NVIDIA today. Images are available on the Hugging Face Docker Hub.

docker pull huggingface/optimum-nvidia
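
Once pulled, you can start an interactive session inside the container. A minimal sketch; it assumes the NVIDIA Container Toolkit is installed so Docker can expose your GPUs:

docker run -it --rm --gpus all huggingface/optimum-nvidia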

Building from source

Instead of using the pre-built Docker container, you can build Optimum-NVIDIA from source. Set TARGET_SM to the CUDA architectures you want to compile for; the values below target Hopper (SM 9.0, 90-real) and Ada Lovelace (SM 8.9, 89-real):

TARGET_SM="90-real;89-real"
git clone --recursive --depth=1 https://github.com/huggingface/optimum-nvidia.git
cd optimum-nvidia/third-party/tensorrt-llm
make -C docker release_build CUDA_ARCHS=$TARGET_SM
cd ../.. && docker build -t <organisation_name/image_name>:<version> -f docker/Dockerfile .

Quickstart Guide

Pipelines

Hugging Face pipelines provide a simple yet powerful abstraction to quickly set up inference. If you already have a pipeline from transformers, you can unlock the performance benefits of Optimum-NVIDIA by just changing one line.

- from transformers.pipelines import pipeline
+ from optimum.nvidia.pipelines import pipeline

pipe = pipeline('text-generation', 'meta-llama/Llama-2-7b-chat-hf', use_fp8=True)
pipe("Describe a real-world application of AI in sustainable energy.")

Generate

If you want control over advanced features like quantization and token selection strategies, we recommend using the generate() API. Just like with pipelines, switching over from existing transformers code is just as simple.

- from transformers import AutoModelForCausalLM
+ from optimum.nvidia import AutoModelForCausalLM
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf", padding_side="left")

model = AutoModelForCausalLM.from_pretrained(
  "meta-llama/Llama-2-7b-chat-hf",
+ use_fp8=True,  
)

model_inputs = tokenizer(["How is autonomous vehicle technology transforming the future of transportation and urban planning?"], return_tensors="pt").to("cuda")

generated_ids = model.generate(
    **model_inputs, 
    top_k=40, 
    top_p=0.7, 
    repetition_penalty=10,
)

print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])

To learn more about text generation with LLMs, check out this guide!

Support Matrix

We test Optimum-NVIDIA on 4090, L40S, and H100 Tensor Core GPUs, though it is expected to work on any GPU based on the following architectures:

  • Turing (with experimental support for T4 / RTX Quadro x000)
  • Ampere (A100/A30 are supported. Experimental support for A10, A40, RTX Ax000)
  • Hopper
  • Ada-Lovelace

Note that FP8 support is only available on GPUs based on Hopper and Ada-Lovelace architectures.
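
If you are unsure which architecture your GPU uses, you can query its CUDA compute capability with PyTorch. A small sketch; FP8 corresponds to SM 8.9 (Ada-Lovelace) and SM 9.0 (Hopper):

import torch

# (major, minor) compute capability of the current CUDA device
major, minor = torch.cuda.get_device_capability()
print(f"SM {major}.{minor} -> FP8 capable: {(major, minor) >= (8, 9)}")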

Optimum-NVIDIA works on Linux and will support Windows soon.

Optimum-NVIDIA currently accelerates text-generation with LLaMAForCausalLM, and we are actively working to expand support to include more model architectures and tasks.

Contributing

Check out our Contributing Guide.

Download files

Download the file for your platform.

Source Distribution

optimum-nvidia-0.1.0b5.tar.gz (52.3 kB)


Built Distribution

optimum_nvidia-0.1.0b5-py3-none-any.whl (72.2 kB)


File details

Details for the file optimum-nvidia-0.1.0b5.tar.gz.

File metadata

  • Download URL: optimum-nvidia-0.1.0b5.tar.gz
  • Size: 52.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.10.14

File hashes

Hashes for optimum-nvidia-0.1.0b5.tar.gz:

  • SHA256: 552df189f0586e1c5b9ba4df2053c63a2f405762d553fdc3ea388518ef2b79e5
  • MD5: 1fb5a77c7c478d5b306f7738fa4f0d5c
  • BLAKE2b-256: 5442902dfcfe5175655cc92fc03b30a05ed21ed10cff753461978f5d4c678da2

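To verify a downloaded file against the digests above, you can compute the hash locally. A minimal sketch using only the Python standard library:

import hashlib

# SHA256 digest published above for the source distribution
expected = "552df189f0586e1c5b9ba4df2053c63a2f405762d553fdc3ea388518ef2b79e5"
with open("optimum-nvidia-0.1.0b5.tar.gz", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()
print("hash OK" if actual == expected else "hash MISMATCH")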

File details

Details for the file optimum_nvidia-0.1.0b5-py3-none-any.whl.

File hashes

Hashes for optimum_nvidia-0.1.0b5-py3-none-any.whl:

  • SHA256: fa70e7694769d7f1f99cf90aee03f23f917c213a974792d2a8d7d24caf72d0a0
  • MD5: edf3184f2f1c51351cb1b6c3ce922ab3
  • BLAKE2b-256: 26d7b73fc9f3039bdd0ef766cb20c8df65709dd4e2af952ab0079aa86fb342c2

