
LitServe: Deploy AI models Lightning fast ⚡


High-throughput serving engine for AI models.
Friendly interface. Enterprise scale.


LitServe is an engine for scalable AI model deployment built on FastAPI. Features like batching, streaming, and GPU autoscaling eliminate the need to rebuild a FastAPI server for each model.

✅ Batching       ✅ Streaming          ✅ Auto-GPU, multi-GPU
✅ Multi-modal    ✅ PyTorch/JAX/TF     ✅ Full control
✅ Auth           ✅ Built on FastAPI   ✅ Custom specs (OpenAI)


Performance

Our benchmarks show that LitServe (built on FastAPI) handles more simultaneous requests than FastAPI and TorchServe (higher is better).

Reproduce the full benchmarks here.

[Benchmark chart: requests per second handled by LitServe vs. FastAPI vs. TorchServe]

These results are for image and text classification tasks; the same performance relationships hold for other ML tasks (embedding, LLM serving, audio, segmentation, object detection, summarization, etc.).

💡 Note on LLM serving: For high-performance LLM serving (like Ollama/vLLM), use LitGPT or build your own vLLM-like server with LitServe. Optimizations like KV caching, which can be implemented with LitServe, are needed to maximize LLM performance.

 

Featured examples

Use LitServe to deploy any type of model or AI service (embeddings, LLMs, vision, audio, multi-modal, etc).

Featured examples
Toy model: Hello world
LLMs: Llama 3 (8B), LLM proxy server
NLP: Hugging Face, BERT
Multimodal: OpenAI CLIP
Audio: Whisper, AudioCraft, StableAudio
Vision: Stable Diffusion 2
Speech: Text-to-speech (XTTS V2)

Key features
✅ Serve all models: LLMs, vision, etc.
✅ All frameworks: PyTorch/JAX/sklearn/...
✅ Dev friendly: build AI, not infra
✅ Easy interface: no abstractions
✅ Enterprise scale: scale huge models
✅ Auto GPU scaling: zero code changes
✅ Self host: or run on Studios

 

Quick start

Install LitServe via pip (or advanced installs):

pip install litserve

Define a server

Here's a hello world example (explore real examples):

# server.py
import litserve as ls

# STEP 1: DEFINE A MODEL API
class SimpleLitAPI(ls.LitAPI):
    # Called once at startup. Set up models, DB connections, etc.
    def setup(self, device):
        self.model = lambda x: x**2  

    # Convert the request payload to model input.
    def decode_request(self, request):
        return request["input"] 

    # Run inference on the model, return the output.
    def predict(self, x):
        return self.model(x) 

    # Convert the model output to a response payload.
    def encode_response(self, output):
        return {"output": output} 

# STEP 2: START THE SERVER
if __name__ == "__main__":
    api = SimpleLitAPI()
    server = ls.LitServer(api, accelerator="auto")
    server.run(port=8000)

Now run the server from the command line:

python server.py

The LitAPI class gives you full control and hackability.
LitServer handles optimizations like batching and auto-GPU scaling.
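
For example, enabling batching is a constructor-level change. Here's a minimal sketch, assuming LitServer's max_batch_size and batch_timeout arguments and that predict then receives a list of decoded inputs (check the LitServe docs for your installed version):

# batched_server.py
import litserve as ls

class BatchedLitAPI(ls.LitAPI):
    def setup(self, device):
        # Toy "model" that squares a whole batch at once.
        self.model = lambda batch: [x**2 for x in batch]

    def decode_request(self, request):
        return request["input"]

    # With batching enabled, predict receives a list of decoded inputs
    # and must return a list of outputs in the same order.
    def predict(self, batch):
        return self.model(batch)

    # encode_response is called once per item of the returned list.
    def encode_response(self, output):
        return {"output": output}

if __name__ == "__main__":
    api = BatchedLitAPI()
    # Group requests arriving within 50 ms, up to 8 per predict() call.
    server = ls.LitServer(api, accelerator="auto", max_batch_size=8, batch_timeout=0.05)
    server.run(port=8000)

Clients don't change at all; grouping happens server-side, and accelerator="auto" still lets LitServe pick CPUs or GPUs automatically.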

Query the server

Use the automatically generated LitServe client or write your own:

Option A - Use the generated client:

python client.py

Option B - Write a custom client:

import requests

response = requests.post(
    "http://127.0.0.1:8000/predict",
    json={"input": 4.0},
)
print(response.json())  # prints {'output': 16.0} for the hello world server

 

Deployment options

Self-manage LitServe deployments (just run it on any machine!), or deploy with one click on Lightning AI.

LitServe is developed by Lightning AI, which provides infrastructure for deploying AI models.

 

 

Feature                      Self Managed                  Fully Managed on Studios
Deployment                   ✅ Do it yourself deployment  ✅ One-button cloud deploy
Load balancing               ❌                            ✅
Autoscaling                  ❌                            ✅
Scale to zero                ❌                            ✅
Multi-machine inference      ❌                            ✅
Authentication               ❌                            ✅
Own VPC                      ❌                            ✅
AWS, GCP                     ❌                            ✅
Use your own cloud commits   ❌                            ✅

 

Features

LitServe supports many advanced, state-of-the-art features.

All model types: LLMs, vision, time series, etc.
Auto-GPU scaling
Authentication
Autoscaling
Batching
Streaming (see the sketch below)
All ML frameworks: PyTorch, JAX, TensorFlow, Hugging Face...
OpenAI spec

10+ features...

Note: Our goal is not to jump on every hype train, but instead to support features that scale under the most demanding enterprise deployments.
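
To illustrate the streaming feature above, here's a minimal sketch, assuming LitServer's stream=True flag and generator-style predict/encode_response hooks (verify against the LitServe docs for your version):

# stream_server.py
import litserve as ls

class StreamLitAPI(ls.LitAPI):
    def setup(self, device):
        # Stand-in "model" that emits partial results one at a time.
        self.model = lambda x: (x * i for i in range(10))

    def decode_request(self, request):
        return request["input"]

    # With stream=True, predict yields partial outputs instead of
    # returning a single result.
    def predict(self, x):
        yield from self.model(x)

    # encode_response receives the generator and yields one encoded
    # chunk per partial output.
    def encode_response(self, outputs):
        for out in outputs:
            yield {"output": out}

if __name__ == "__main__":
    server = ls.LitServer(StreamLitAPI(), stream=True)
    server.run(port=8000)

A client can then consume the chunks incrementally, for example with requests.post(..., stream=True) and response.iter_content().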

 

Community

LitServe is a community project accepting contributions. Let's build the world's most advanced AI inference engine.

💬 Get help on Discord
📋 License: Apache 2.0
