
A high-throughput and memory-efficient inference and serving engine for LLMs

Project description

nm-vllm

Overview

This repository, nm-vllm-ent, contains the source for the Neural Magic Enterprise Edition of vLLM. The nm-vllm packages built from it are supported enterprise distributions of vLLM, shipped as versioned Python wheels and Docker images. They are released in two tracks: "production level" official releases and "beta level" nightlies.

Official releases are made at Neural Magic's discretion and typically track upstream vLLM releases. These wheels are available on public PyPI as well as nm-pypi.

Nightlies are released every night when the automation runs are green. These wheels are available on nm-pypi.

Installation

PyPI

The nm-vllm PyPI package includes pre-compiled kernel binaries for CUDA 12.1. For other PyTorch or CUDA versions, please compile the package from source.

Install it using pip:

pip install nm-vllm --extra-index-url https://pypi.neuralmagic.com/simple

To use the weight-sparsity features, install the optional sparse extras:

pip install nm-vllm[sparse] --extra-index-url https://pypi.neuralmagic.com/simple
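Once the sparse extras are installed, a sparsified checkpoint can be loaded with sparse kernels enabled. A minimal sketch, assuming nm-vllm exposes a `sparsity` keyword on vLLM's `LLM` constructor and that `"sparse_w16a16"` is a valid scheme name for the checkpoint (both are assumptions about the nm-vllm extension; check the repo for the schemes your model supports):

```python
# Assumption: nm-vllm extends vllm.LLM with a `sparsity` keyword, and
# "sparse_w16a16" names a supported sparsity scheme for the checkpoint.
SPARSITY_SCHEME = "sparse_w16a16"  # hypothetical scheme name


def load_sparse_llm(model_id: str):
    """Create an LLM with sparse kernels enabled.

    Requires nm-vllm[sparse] and a CUDA GPU, so the vllm import is
    deferred until the function is actually called.
    """
    from vllm import LLM  # lazy import: only available after installation
    return LLM(model=model_id, sparsity=SPARSITY_SCHEME)
```

On a GPU machine you would then call, e.g., `load_sparse_llm("neuralmagic/<some-sparse-model>")` with one of the sparse checkpoints from Neural Magic's Hugging Face profiles.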

You can also build and install nm-vllm from source (this will take ~10 minutes):

git clone https://github.com/neuralmagic/nm-vllm.git
cd nm-vllm
pip install -e .[sparse] --extra-index-url https://pypi.neuralmagic.com/simple
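With either install route, offline batch inference goes through vLLM's standard Python API (`LLM` and `SamplingParams`). A minimal sketch; running it needs nm-vllm installed plus a CUDA GPU, so the heavy work is wrapped in a function rather than executed on import, and the model ID is just the one reused from the Docker example below:

```python
# Offline-inference sketch using vLLM's standard Python entry points.
PROMPTS = ["Hello, my name is"]
SAMPLING = {"temperature": 0.8, "top_p": 0.95, "max_tokens": 32}


def generate(model_id: str = "Qwen/Qwen2-0.5B-Instruct"):
    """Run batch generation and return the completion strings.

    Deferred import: vllm requires a GPU environment to load.
    """
    from vllm import LLM, SamplingParams
    llm = LLM(model=model_id)
    outputs = llm.generate(PROMPTS, SamplingParams(**SAMPLING))
    return [out.outputs[0].text for out in outputs]
```

On a GPU box, `generate()` returns one completion per prompt in `PROMPTS`.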

Docker

The nm-vllm container registry includes premade Docker images.

Launch the OpenAI-compatible server with:

MODEL_ID=Qwen/Qwen2-0.5B-Instruct
docker run --gpus all --shm-size 2g -p 8000:8000 ghcr.io/neuralmagic/nm-vllm-openai:latest --model $MODEL_ID
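Once the container is up, it can be queried like any OpenAI-compatible endpoint. A stdlib-only sketch, assuming the server is reachable on localhost port 8000 (vLLM's default serving port) and is serving the model passed via `--model`:

```python
# Query vLLM's OpenAI-compatible /v1/chat/completions endpoint.
# Assumption: the server is published on http://localhost:8000.
import json
import urllib.request


def build_chat_request(model: str, prompt: str,
                       base_url: str = "http://localhost:8000"):
    """Return the (url, payload) pair for a chat-completions call."""
    url = f"{base_url}/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return url, payload


def chat(model: str, prompt: str) -> str:
    """Send the request and return the assistant's reply.

    Only works while the server container is running.
    """
    url, payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

For example, `chat("Qwen/Qwen2-0.5B-Instruct", "Say hello.")` returns the model's reply once the container above is serving.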

Models

Neural Magic maintains a variety of optimized models on its Hugging Face organization profiles.



Download files


Source Distributions

No source distribution files are available for this release.

Built Distribution

nm_vllm-0.5.3.0-cp38-abi3-manylinux_2_17_x86_64.whl (158.3 MB)

Uploaded for CPython 3.8+ on manylinux (glibc 2.17+), x86-64

File details

Details for the file nm_vllm-0.5.3.0-cp38-abi3-manylinux_2_17_x86_64.whl.

File metadata

File hashes

Hashes for nm_vllm-0.5.3.0-cp38-abi3-manylinux_2_17_x86_64.whl
Algorithm Hash digest
SHA256 2410d25b10282aa49d3890dcc65cf781b9f4133f7fe6c97c42c6085d1e71ec4b
MD5 6e257f9c9dbe5e3f1d989e2b058bacf0
BLAKE2b-256 31177e2304d3f1b7cdc717445cc3c08327f9999fc766db2c16e7b0360dd1ccc2

