Lightweight AI server.
LitServe
High-throughput serving engine for AI models
✅ Batching ✅ Streaming ✅ Multi-GPU ✅ PyTorch/JAX/TF ✅ Full control ✅ Auth
Deploy AI models Lightning fast ⚡
LitServe is a high-throughput serving engine for deploying AI models at scale. It generates an API endpoint for a model and handles batching, streaming, autoscaling across CPUs/GPUs, and more.
Why we wrote LitServe:
- Works with any model: LLMs, vision, time series, and more.
- Zero-abstraction, minimal, hackable code base without bloat.
- Built for enterprise scale (not just demos).
- Easy enough for researchers, scalable and hackable for engineers.
- Works on any hardware (GPU/TPU) automatically.
- Lets you focus on model performance, not serving boilerplate.
Think of LitServe as PyTorch Lightning for model serving (if you're familiar with Lightning), but it supports every framework, including PyTorch, JAX, TensorFlow, and more.
Examples
Explore various examples that show different models deployed with LitServe:
Example | Description |
---|---|
Hello world | Hello world model |
Any Hugging Face model | Deploy any Hugging Face model |
Hugging Face BERT model | Deploy a BERT model for tasks like text generation and more |
OpenAI CLIP | Deploy OpenAI CLIP (computer vision model) for tasks like image understanding |
OpenAI Whisper | Deploy OpenAI Whisper (audio model) for tasks like speech to text |
Install LitServe
Install LitServe via pip:
```bash
pip install litserve
```
Advanced install options
Install the main branch:
```bash
pip install git+https://github.com/Lightning-AI/litserve.git@main
```
Install from source:
```bash
git clone https://github.com/Lightning-AI/litserve
cd litserve
pip install -e '.[all]'
```
Get started
LitServe is an inference server for AI/ML models that is minimal and highly scalable.
It has two simple, minimal APIs: LitAPI and LitServer.
Implement a server
Here's a hello world example:
```python
# server.py
import litserve as ls


# STEP 1: DEFINE YOUR MODEL API
class SimpleLitAPI(ls.LitAPI):
    def setup(self, device):
        # Setup the model so it can be called in `predict`.
        self.model = lambda x: x**2

    def decode_request(self, request):
        # Convert the request payload to your model input.
        return request["input"]

    def predict(self, x):
        # Run the model on the input and return the output.
        return self.model(x)

    def encode_response(self, output):
        # Convert the model output to a response payload.
        return {"output": output}


# STEP 2: START THE SERVER
if __name__ == "__main__":
    api = SimpleLitAPI()
    server = ls.LitServer(api, accelerator="gpu")
    server.run(port=8000)
```
Now run the server from the command line:

```bash
python server.py
```
Use the server
LitServe automatically generates a client when it starts. Use this client to test the server:
```bash
python client.py
```
Or ping the server yourself directly:

```python
import requests

response = requests.post("http://127.0.0.1:8000/predict", json={"input": 4.0})
```
The server expects the client to send a `POST` request to the `/predict` URL with a JSON payload.
How the payload is structured is up to the implementation of the `LitAPI` subclass.
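Under the hello world API above, the model squares the input, so a full round trip looks roughly like this (a sketch, assuming the server is running locally on port 8000):

```python
import requests

# decode_request extracts 4.0, predict squares it, encode_response wraps it
response = requests.post("http://127.0.0.1:8000/predict", json={"input": 4.0})
print(response.json())  # expected: {"output": 16.0}
```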
Features
LitServe supports multiple advanced state-of-the-art features.
Feature | Description |
---|---|
Accelerators | CPU, GPU, multi-GPU |
Model types | LLMs, vision, time series, any model type |
ML frameworks | PyTorch, JAX, TensorFlow, NumPy, etc. |
Batching | ✅ |
API authentication | ✅ |
Full request/response control | ✅ |
Automatic schema validation | ✅ |
Handle timeouts | ✅ |
Handle disconnects | ✅ |
Streaming | in progress... |
[!NOTE] Our goal is not to jump on every hype train, but instead to support features that scale under the most demanding enterprise deployments.
Feature details
Explore each feature in detail:
Automatic schema validation
Define the request and response as Pydantic models to automatically validate the request.
```python
from pydantic import BaseModel

from litserve import LitAPI, LitServer


class PredictRequest(BaseModel):
    input: float


class PredictResponse(BaseModel):
    output: float


class SimpleLitAPI(LitAPI):
    def setup(self, device):
        self.model = lambda x: x**2

    def decode_request(self, request: PredictRequest) -> float:
        return request.input

    def predict(self, x):
        return self.model(x)

    def encode_response(self, output: float) -> PredictResponse:
        return PredictResponse(output=output)


if __name__ == "__main__":
    api = SimpleLitAPI()
    server = LitServer(api, accelerator="cpu")
    server.run(port=8888)
```
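With the Pydantic models in place, a payload that doesn't match `PredictRequest` is rejected before it reaches `predict`. A quick sketch of the behavior (assuming the server above is running on port 8888; the exact error body is FastAPI's standard validation response):

```python
import requests

# Valid payload: passes PredictRequest validation
ok = requests.post("http://127.0.0.1:8888/predict", json={"input": 2.0})
print(ok.json())  # expected: {"output": 4.0}

# Invalid payload: "input" is not a float, so validation fails and
# the server responds with an error instead of calling predict()
bad = requests.post("http://127.0.0.1:8888/predict", json={"input": "not a number"})
print(bad.status_code)
```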
Serve on GPUs
`LitServer` has the ability to coordinate serving from multiple GPUs.
For example, running the API server on a 4-GPU machine, with a PyTorch model served by each GPU:
```python
import torch
import torch.nn as nn
from fastapi import Request, Response

from litserve import LitAPI, LitServer


class Linear(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(1, 1)
        self.linear.weight.data.fill_(2.0)
        self.linear.bias.data.fill_(1.0)

    def forward(self, x):
        return self.linear(x)


class SimpleLitAPI(LitAPI):
    def setup(self, device):
        # move the model to the correct device
        # keep track of the device for moving data accordingly
        self.model = Linear().to(device)
        self.device = device

    def decode_request(self, request: Request):
        # get the input and create a 1D tensor on the correct device
        content = request["input"]
        return torch.tensor([content], device=self.device)

    def predict(self, x):
        # the model expects a batch dimension, so create it
        return self.model(x[None, :])

    def encode_response(self, output) -> Response:
        # float will take the output value directly onto CPU memory
        return {"output": float(output)}


if __name__ == "__main__":
    # accelerator="cuda", devices=4 will lead to 4 workers serving the
    # model from "cuda:0", "cuda:1", "cuda:2", "cuda:3" respectively
    server = LitServer(SimpleLitAPI(), accelerator="cuda", devices=4)
    server.run(port=8000)
```
The `devices` argument can also be a list specifying which device IDs to run the model on:

```python
server = LitServer(SimpleLitAPI(), accelerator="cuda", devices=[0, 3])
```
Lastly, you can run multiple copies of the same model on the same device if the model is small. The following will load two copies of the model on each of the 4 GPUs:

```python
server = LitServer(SimpleLitAPI(), accelerator="cuda", devices=4, workers_per_device=2)
```
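Client code is unchanged regardless of how many devices or workers serve the model; scaling happens entirely on the server side. A minimal sketch against the `Linear` server above (assuming it is running on port 8000):

```python
import requests

# The Linear model computes 2*x + 1, so an input of 3.0 should return 7.0
response = requests.post("http://127.0.0.1:8000/predict", json={"input": 3.0})
print(response.json())  # expected: {"output": 7.0}
```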
Timeouts and disconnections
The server will remove a queued request if the client requesting it disconnects.
You can configure a timeout (in seconds) after which clients will receive a `504` HTTP response (Gateway Timeout) indicating that their request has timed out.
For example, this is how you can configure the server with a timeout of 30 seconds per response:

```python
server = LitServer(SimpleLitAPI(), accelerator="cuda", devices=4, timeout=30)
```
This is useful to avoid requests queuing up beyond the ability of the server to respond.
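On the client side, a timed-out request shows up as an HTTP error rather than a hung connection. A rough sketch of handling it (the `504` status is the one described above; the fallback behavior is just an illustration):

```python
import requests

response = requests.post("http://127.0.0.1:8000/predict", json={"input": 4.0})
if response.status_code == 504:
    # the server dropped the request after the configured timeout (30 s above);
    # back off and retry, or fail gracefully
    print("Request timed out on the server; try again later")
else:
    print(response.json())
```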
Use API key authentication
To secure the API behind an API key, define the environment variable when starting the server:

```bash
LIT_SERVER_API_KEY=supersecretkey python main.py
```

Clients are expected to authenticate by setting the same API key in the `X-API-Key` HTTP header.
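For example, a client would pass the key in that header (a sketch using the example key above):

```python
import requests

response = requests.post(
    "http://127.0.0.1:8000/predict",
    json={"input": 4.0},
    headers={"X-API-Key": "supersecretkey"},
)
print(response.json())
```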
Contribute
LitServe is a community project that welcomes contributions. Let's make this the world's most advanced AI inference engine.
Run Tests
Use `pytest` to run tests locally.
First, install test dependencies:

```bash
pip install -r _requirements/test.txt
```

Then run the tests:

```bash
pytest tests
```
License
LitServe is released under the Apache 2.0 license. See the LICENSE file for details.