
LitServe

High-throughput serving engine for AI models.
Friendly interface. Enterprise scale.

✅ Batching       ✅ Streaming          ✅ Auto-GPU, multi-GPU   
✅ Multi-modal    ✅ PyTorch/JAX/TF     ✅ Full control          
✅ Auth           ✅ Built on FastAPI   ✅ Custom specs (OpenAI)

Lightning AI • Quick start • Examples • Deploy • Features • Benchmarks • Docs

Deploy AI models Lightning fast ⚡

LitServe is a high-throughput serving engine designed to deploy AI models at scale. It creates an API endpoint for a model and manages batching, streaming, and autoscaling across CPUs and GPUs.

Think of LitServe as PyTorch Lightning for model serving, but with support for every framework: PyTorch, JAX, TensorFlow, and more.

Key features:

  • Supports all models: LLMs, vision, time series, and more.
  • All ML frameworks: Use PyTorch, JAX, scikit-learn, and more.
  • Developer friendly: Focus on AI deployment, not infrastructure.
  • Minimal interface: Zero-abstraction, hackable codebase.
  • Enterprise scale: Designed to handle large models with low latency.
  • Auto GPU scaling: Scale to multi-GPU with zero code changes (see the sketch after this list).
  • Run anywhere: Run it yourself on any machine, or fully managed on Lightning Studios.
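
For instance, moving a server from one device to several is a configuration change, not a model change. Below is a minimal sketch, assuming LitServer's accelerator and devices parameters (verify the exact names against the docs for your installed version); the LitAPI is the same shape as the hello-world example in the Quick start:

# multi_gpu_server.py -- illustrative sketch, not an official recipe
import litserve as ls

class SquareAPI(ls.LitAPI):
    def setup(self, device):
        # the server assigns the device, e.g. "cpu", "cuda:0", "cuda:1"
        self.model = lambda x: x**2

    def decode_request(self, request):
        return request["input"]

    def predict(self, x):
        return self.model(x)

    def encode_response(self, output):
        return {"output": output}

if __name__ == "__main__":
    # accelerator="auto" picks GPUs when available; devices=2 (assumed
    # parameter) replicates the model across two GPUs with no changes
    # to the LitAPI code above.
    server = ls.LitServer(SquareAPI(), accelerator="auto", devices=2)
    server.run(port=8000)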

Examples:

Model type    Links
-----------   ---------------------------------------------
Toy example   Hello world
LLMs          Llama 3 (8B), LLM Proxy server
NLP models    Any Hugging Face model, BERT model
Multimodal    OpenAI CLIP
Audio         OpenAI Whisper, Meta AudioCraft, Stable Audio
Vision        Stable Diffusion 2
Speech        Text-to-speech (XTTS V2)

Quick start

Install LitServe via pip (or see advanced install options):

pip install litserve

Define a server

Here's a hello world example (explore real examples):

# server.py
import litserve as ls

# STEP 1: DEFINE A MODEL API
class SimpleLitAPI(ls.LitAPI):
    # Called once at startup. Setup models, DB connections, etc...
    def setup(self, device):
        self.model = lambda x: x**2  

    # Convert the request payload to model input.
    def decode_request(self, request):
        return request["input"] 

    # Run inference on the model, return the output.
    def predict(self, x):
        return self.model(x) 

    # Convert the model output to a response payload.
    def encode_response(self, output):
        return {"output": output} 

# STEP 2: START THE SERVER
if __name__ == "__main__":
    api = SimpleLitAPI()
    server = ls.LitServer(api, accelerator="auto")
    server.run(port=8000)

Now run the server from the command line:

python server.py

The LitAPI class gives you full control and hackability. The LitServer handles advanced optimizations like batching, streaming and auto-GPU scaling.
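
For example, batching is enabled with server arguments rather than model rewrites. A minimal sketch, assuming LitServer's max_batch_size and batch_timeout parameters (verify against the docs for your installed version):

# batched_server.py -- illustrative sketch of request batching
import litserve as ls

class BatchedLitAPI(ls.LitAPI):
    def setup(self, device):
        # toy "model" that squares a whole batch at once
        self.model = lambda xs: [x**2 for x in xs]

    def decode_request(self, request):
        return request["input"]

    # With max_batch_size > 1, concurrent requests are grouped and the
    # decoded inputs arrive here together (as a list by default).
    def predict(self, batch):
        return self.model(batch)

    # Called per item after the server splits the batched output.
    def encode_response(self, output):
        return {"output": output}

if __name__ == "__main__":
    # Group up to 8 requests that arrive within 50 ms into one predict
    # call -- assumed parameters; check the litserve docs.
    server = ls.LitServer(BatchedLitAPI(), max_batch_size=8, batch_timeout=0.05)
    server.run(port=8000)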

Query the server

LitServe automatically generates a client when it starts. Use this client to test the server:

python client.py

Or query the server directly:

import requests
response = requests.post("http://127.0.0.1:8000/predict", json={"input": 4.0})
print(response.json())  # expect {"output": 16.0} from the hello-world server
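
Or from the command line with curl (same endpoint and payload as the Python example above):

curl -X POST http://127.0.0.1:8000/predict -H "Content-Type: application/json" -d '{"input": 4.0}'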

 

Deployment options

LitServe is developed by Lightning AI, an AI development platform that provides infrastructure for deploying AI models.
Self-manage your own deployments, or use Lightning Studios to deploy production-grade models without cloud headaches.

Feature                     Self Managed                  Fully Managed on Studios
--------------------------  ----------------------------  --------------------------
Deployment                  ✅ Do it yourself deployment  ✅ One-button cloud deploy
Load balancing              ❌                            ✅
Autoscaling                 ❌                            ✅
Multi-machine inference     ❌                            ✅
Authentication              ❌                            ✅
Own VPC                     ❌                            ✅
AWS, GCP                    ❌                            ✅
Use your own cloud commits  ❌                            ✅

Features

LitServe supports many advanced, state-of-the-art features.

  • All model types: LLMs, vision, time series, and more.
  • Auto-GPU scaling.
  • Authentication.
  • Autoscaling.
  • Batching.
  • Streaming (see the sketch after this list).
  • All ML frameworks: PyTorch, JAX, TensorFlow, Hugging Face, and more.
  • OpenAI spec.
  • 10+ features...
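
For streaming, predict yields partial results and encode_response forwards each chunk as it is produced. A minimal sketch, assuming a stream=True server option (the exact flag and hooks may vary between litserve versions):

# streaming_server.py -- illustrative sketch; verify the streaming API
# against the litserve docs for your installed version
import litserve as ls

class StreamingLitAPI(ls.LitAPI):
    def setup(self, device):
        # toy "model" that emits one token at a time
        self.model = lambda prompt: (f"token-{i} " for i in range(10))

    def decode_request(self, request):
        return request["prompt"]

    # Yield partial outputs instead of returning one final result.
    def predict(self, prompt):
        yield from self.model(prompt)

    # Each yielded chunk is encoded and sent to the client immediately.
    def encode_response(self, outputs):
        for out in outputs:
            yield {"output": out}

if __name__ == "__main__":
    server = ls.LitServer(StreamingLitAPI(), stream=True)  # assumed flag
    server.run(port=8000)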

Note: Our goal is not to jump on every hype train; instead, we support features that scale under the most demanding enterprise deployments.

Community

LitServe is a community project that welcomes contributions. Let's build the world's most advanced AI inference engine.

💬 Get help on Discord
📋 License: Apache 2.0
