
Efficient LLM inference on Slurm clusters using vLLM.

Project description

Vector Inference: Easy inference on Slurm clusters

This repository provides an easy-to-use solution for running inference servers on Slurm-managed computing clusters using vLLM. All scripts in this repository run natively on the Vector Institute cluster environment. To adapt it to other environments, update launch_server.sh, vllm.slurm, multinode_vllm.slurm, and models.csv accordingly.

Installation

If you are using the Vector cluster environment and don't need any customization to the inference server environment, run the following to install the package:

pip install vec-inf

Otherwise, we recommend using the provided Dockerfile to set up your own environment with the package.

Launch an inference server

We will use the Llama 3.1 model as an example. To launch an OpenAI-compatible inference server for Meta-Llama-3.1-8B-Instruct, run:

vec-inf launch Meta-Llama-3.1-8B-Instruct

You should see an output like the following:

[screenshot of the launch command output]

The model will be launched using the default parameters; you can override these values by providing additional options (use --help to see the full list). You can also launch your own customized model as long as the model architecture is supported by vLLM, but you'll need to specify all model launch options yourself for the launch to succeed.
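For example, the launch defaults could be overridden on the command line like this. The option names below are illustrative assumptions only; run vec-inf launch --help to see the exact options supported by your installed version:

vec-inf launch Meta-Llama-3.1-8B-Instruct --num-gpus 2 --max-model-len 8192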

You can check the inference server status by providing the Slurm job ID to the status command:

vec-inf status 13014393

You should see an output like the following:

[screenshot of the status command output]

There are 5 possible states:

  • PENDING: Job submitted to Slurm but not yet running. The reason the job is pending is shown.
  • LAUNCHING: Job is running but the server is not ready yet.
  • READY: Inference server is running and ready to take requests.
  • FAILED: Inference server is in an unhealthy state. The reason the job failed is shown.
  • SHUTDOWN: Inference server has been shut down or cancelled.

Note that the base URL is only available when the model is in the READY state. If you've changed the Slurm log directory path, you also need to specify it when using the status command.
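If you want to wait for the server from a script, here is a minimal polling sketch built on the status command. It assumes the --json-mode output is a JSON object containing model_status and base_url fields; those field names are assumptions, so inspect the actual output on your cluster before relying on them.

import json
import subprocess
import time

# Poll `vec-inf status` until the server reports READY.
# NOTE: the keys "model_status" and "base_url" are assumptions about the
# structure of the --json-mode output; verify them on your cluster.
job_id = "13014393"
while True:
    result = subprocess.run(
        ["vec-inf", "status", job_id, "--json-mode"],
        capture_output=True, text=True, check=True,
    )
    status = json.loads(result.stdout)
    if status.get("model_status") == "READY":
        print("Base URL:", status.get("base_url"))
        break
    if status.get("model_status") in ("FAILED", "SHUTDOWN"):
        raise RuntimeError(f"Server did not start: {status}")
    time.sleep(30)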

Finally, when you're finished using a model, you can shut it down by providing the Slurm job ID:

vec-inf shutdown 13014393

> Shutting down model with Slurm Job ID: 13014393

You can view the full list of available models by running the list command:

vec-inf list
[screenshot of the list command output]

You can also view the default setup for a specific supported model by providing the model name, for example Meta-Llama-3.1-70B-Instruct:

vec-inf list Meta-Llama-3.1-70B-Instruct
[screenshot of the model's default setup]

The launch, list, and status commands all support --json-mode, which structures the command output as a JSON string.
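This makes the CLI easy to drive from other tools. Below is a minimal sketch that reads the model list programmatically; the exact structure of the returned JSON is not documented here, so print it once before depending on specific fields.

import json
import subprocess

# Fetch the list of available models as a JSON string and parse it.
result = subprocess.run(
    ["vec-inf", "list", "--json-mode"],
    capture_output=True, text=True, check=True,
)
models = json.loads(result.stdout)
print(models)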

Send inference requests

Once the inference server is ready, you can start sending inference requests. We provide example scripts for sending inference requests in the examples folder. Make sure to update the model server URL and the model weights location in the scripts. For example, you can run python examples/inference/llm/completions.py, and you should expect to see an output like the following:

{"id":"cmpl-bdf43763adf242588af07af88b070b62","object":"text_completion","created":2983960,"model":"/model-weights/Llama-2-7b-hf","choices":[{"index":0,"text":"\nCanada is close to the actual continent of North America. Aside from the Arctic islands","logprobs":null,"finish_reason":"length"}],"usage":{"prompt_tokens":8,"total_tokens":28,"completion_tokens":20}}

NOTE: For multimodal models, currently only ChatCompletion is available, and only one image can be provided for each prompt.

SSH tunnel from your local device

If you want to run inference from your local device, you can open an SSH tunnel to your cluster environment like the following:

ssh -L 8081:172.17.8.29:8081 username@v.vectorinstitute.ai -N

Here the last octet of the node's IP address corresponds to the GPU node number (gpu029 in this case). The example above is for the Vector cluster; change the variables accordingly for your environment.
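With the tunnel in place, requests sent to localhost:8081 on your local machine are forwarded to the GPU node, so the same client code works locally by only changing the base URL. Again a sketch, with a placeholder model path and no real API key assumed:

from openai import OpenAI

# Traffic to localhost:8081 is forwarded through the SSH tunnel to the GPU node.
client = OpenAI(base_url="http://localhost:8081/v1", api_key="EMPTY")
completion = client.completions.create(
    model="/model-weights/Llama-2-7b-hf",
    prompt="Where is Canada?",
    max_tokens=20,
)
print(completion.choices[0].text)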

Download files

Download the file for your platform.

Source Distribution

vec_inf-0.3.2.tar.gz (14.1 kB)


Built Distribution

vec_inf-0.3.2-py3-none-any.whl (15.3 kB)


File details

Details for the file vec_inf-0.3.2.tar.gz.

File metadata

  • Download URL: vec_inf-0.3.2.tar.gz
  • Upload date:
  • Size: 14.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.0

File hashes

Hashes for vec_inf-0.3.2.tar.gz:

  • SHA256: 568afa4184f54973e31d2bf3ea56e9913f2311e7ec87ecedb4180613ce590927
  • MD5: 26d08f07de89dcb6f0165e79af1b9a16
  • BLAKE2b-256: d2f5e89538c3b904219bf8b0c5f8730648b9b2a5a786b85f915940f9258eeb64


File details

Details for the file vec_inf-0.3.2-py3-none-any.whl.

File metadata

  • Download URL: vec_inf-0.3.2-py3-none-any.whl
  • Upload date:
  • Size: 15.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.0

File hashes

Hashes for vec_inf-0.3.2-py3-none-any.whl:

  • SHA256: beda2bf2576764a52095d980e6a2e9c2a7e0308388fba97553f79e583cc4e233
  • MD5: faa4c5034da0834fc80a012910c66f1d
  • BLAKE2b-256: 35adbcf1d54e977adbcacfe8b4b7bf20c5186fbf78e7230c25539185398e198a

