SuperLaser

⚠️ Not yet ready for primetime ⚠️

An MLOps library for LLM deployment with the vLLM engine on RunPod's infrastructure.
SuperLaser provides a suite of tools and scripts for deploying Large Language Models (LLMs) onto RunPod's pod and serverless infrastructure. At runtime, deployments use a containerized vLLM engine for memory-efficient, high-performance inference.
Features
- Scalable Deployment: Easily scale your LLM inference tasks with vLLM and RunPod serverless capabilities.
- Cost-Effective: Optimize resource and hardware usage with tensor parallelism.
- Easy Integration: Seamless integration with existing LLM workflows.
Install
pip install superlaser
Before you begin, ensure you have:
- A RunPod account.
RunPod Config
The first step is to obtain an API key from RunPod. In your account console, go to the Settings section and click API Keys.
After obtaining a key, set it as an environment variable:
export RUNPOD_API_KEY=<YOUR-API-KEY>
Configure Template
Before spinning up a serverless endpoint, let's first configure a template that we'll pass to the endpoint during staging. The template lets you select a serverless or pod asset, your Docker image name, and the disk space for the container and volume.
Configure your template with the following attributes:
import os
from superlaser import RunpodHandler as runpod

api_key = os.environ.get("RUNPOD_API_KEY")

template_data = runpod.set_template(
    serverless="true",                                      # "false" spins up a pod instead
    template_name="superlaser-inf",                         # give a name to your template
    container_image="runpod/worker-vllm:0.3.1-cuda12.1.0",  # Docker image stub
    model_name="mistralai/Mistral-7B-v0.1",                 # Hugging Face model stub
    max_model_length=340,                                   # max tokens the engine handles per request
    container_disk=15,
    volume_disk=15,
)
Push template to your RunPod account:
template = runpod(api_key=api_key, data=template_data)
print(template().text)
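The printed response should include the new template's ID, which you'll need in the next step. As a rough sketch (assuming the response body is JSON with an `id` field — check the printed output to confirm the actual shape), you can pull the ID out programmatically:

```python
import json

def extract_template_id(response_text: str) -> str:
    """Parse a template-creation response and return the template ID.

    Assumes the API returns a JSON object with an ``id`` field; the
    exact response shape may differ, so verify against the printed output.
    """
    payload = json.loads(response_text)
    return payload["id"]

# Example with a made-up response body:
sample = '{"id": "abc123xyz", "name": "superlaser-inf"}'
print(extract_template_id(sample))  # -> abc123xyz
```

The same pattern applies to any of the creation responses that return an ID.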
Configure Endpoint
After your template is created, the API returns a data dictionary that includes your template ID. We will pass this template ID when configuring the serverless endpoint in the section below:
endpoint_data = runpod.create_serverless_endpoint(
    gpu_ids="AMPERE_24",        # options: "AMPERE_16", "AMPERE_24", "AMPERE_48", "AMPERE_80", "ADA_24"
    idle_timeout=5,
    name="vllm_endpoint",
    scaler_type="QUEUE_DELAY",
    scaler_value=1,
    template_id="template-id",  # template ID returned in the previous step
    workers_max=1,
    workers_min=0,
)
Boot up your endpoint on RunPod:
endpoint = runpod(api_key=api_key, data=endpoint_data)
print(endpoint().text)
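Before sending inference requests, it can help to confirm the endpoint is reachable. A minimal sketch, assuming RunPod's public serverless `/health` route and bearer-token authentication (verify the route and response fields against your account):

```python
import json
import urllib.request

def health_url(endpoint_id: str) -> str:
    """Build the health-check URL for a RunPod serverless endpoint."""
    return f"https://api.runpod.ai/v2/{endpoint_id}/health"

def check_health(endpoint_id: str, api_key: str) -> dict:
    """Fetch the endpoint's health status (worker and job counts)."""
    req = urllib.request.Request(
        health_url(endpoint_id),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

print(health_url("endpoint-id"))
```

Swap in your real endpoint ID and API key before calling `check_health`.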
Call Endpoint
After your endpoint is staged, it will return a dictionary with your endpoint ID. Pass this endpoint ID to the SuperLaser client and start making API requests!
from superlaser import SuperLaser

superlaser = SuperLaser(endpoint_id="endpoint-id", model_name="mistralai/Mistral-7B-v0.1")
superlaser("Why is SuperLaser awesome?")
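If you'd rather not use the client, the same call can be sketched over raw HTTP. The URL and payload shape below follow RunPod's public serverless API (`/runsync`); treat the field names as assumptions and verify them against your endpoint's worker:

```python
import json

def build_request(endpoint_id: str, prompt: str, api_key: str):
    """Assemble the URL, headers, and JSON body for a synchronous
    RunPod serverless request. Field names follow RunPod's public
    API docs; adjust if your worker expects a different input schema."""
    url = f"https://api.runpod.ai/v2/{endpoint_id}/runsync"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"input": {"prompt": prompt}})
    return url, headers, body

url, headers, body = build_request("endpoint-id", "Why is SuperLaser awesome?", "KEY")
print(url)
```

Send the assembled request with any HTTP client (e.g. `requests.post(url, headers=headers, data=body)`).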