
Lightweight AI server.

Project description

Easily serve AI models Lightning fast ⚡


Lightning-fast serving engine for AI models.
Easy. Flexible. Enterprise-scale.


LitServe is an easy-to-use, flexible serving engine for AI models built on FastAPI. It augments FastAPI with features like batching, streaming, and GPU autoscaling, eliminating the need to rebuild a FastAPI server per model.

LitServe is at least 2x faster than plain FastAPI due to AI-specific multi-worker handling.

✅ (2x)+ faster serving  ✅ Easy to use          ✅ LLMs, non-LLMs and more
✅ Bring your own model  ✅ PyTorch/JAX/TF/...   ✅ Built on FastAPI
✅ GPU autoscaling       ✅ Batching, Streaming  ✅ Self-host or ⚡️ managed
✅ Compound AI           ✅ Integrate with vLLM and more


 

 

Quick start

Install LitServe via pip (more options):

pip install litserve

Define a server

This toy example with two models (a compound AI system) shows LitServe's flexibility (see real examples):

# server.py
import litserve as ls

# (STEP 1) - DEFINE THE API (compound AI system)
class SimpleLitAPI(ls.LitAPI):
    def setup(self, device):
        # setup is called once at startup. Build a compound AI system (1+ models), connect DBs, load data, etc...
        self.model1 = lambda x: x**2
        self.model2 = lambda x: x**3

    def decode_request(self, request):
        # Convert the request payload to model input.
        return request["input"] 

    def predict(self, x):
        # Easily build compound systems. Run inference and return the output.
        squared = self.model1(x)
        cubed = self.model2(x)
        output = squared + cubed
        return {"output": output}

    def encode_response(self, output):
        # Convert the model output to a response payload.
        return {"output": output["output"]}

# (STEP 2) - START THE SERVER
if __name__ == "__main__":
    # scale with advanced features (batching, GPUs, etc...)
    server = ls.LitServer(SimpleLitAPI(), accelerator="auto", max_batch_size=1)
    server.run(port=8000)

Now run the server from the command line:

python server.py

Test the server

Run the auto-generated test client:

python client.py    

Or use this terminal command:

curl -X POST http://127.0.0.1:8000/predict -H "Content-Type: application/json" -d '{"input": 4.0}'
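The same request can be made from Python using only the standard library. This is a sketch assuming the server from server.py is running on port 8000; the `predict` helper name is illustrative, not part of LitServe:

```python
import json
from urllib import request

def predict(value, url="http://127.0.0.1:8000/predict"):
    # POST {"input": value} as JSON and return the decoded response payload.
    payload = json.dumps({"input": value}).encode()
    req = request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# With server.py running, predict(4.0) returns the toy system's result:
# 4.0**2 + 4.0**3 = 16.0 + 64.0 = 80.0.
```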

LLM serving

LitServe is not limited to LLMs the way dedicated LLM servers like vLLM or Ollama are; it serves any AI model with full control over the server internals (learn more).
For easy LLM serving, integrate vLLM with LitServe, or use LitGPT (built on LitServe).

litgpt serve microsoft/phi-2

Summary

  • LitAPI lets you easily build complex AI systems with one or more models (docs).
  • Use the setup method for one-time tasks like connecting models, DBs, and loading data (docs).
  • LitServer handles optimizations like batching, GPU autoscaling, streaming, etc... (docs).
  • Self host on your own machines or use Lightning Studios for a fully managed deployment (learn more).

Learn how to make this server 200x faster.
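To see what batching buys you, here is a minimal, server-free sketch of the collate/split flow a batched predict performs. The function names mirror LitServe's batch/unbatch hooks, but this is plain Python illustrating the idea, not the LitServe API:

```python
def batch(inputs):
    # Collate the inputs of several concurrent requests into one list.
    return list(inputs)

def predict(xs):
    # One "vectorized" model call handles the whole batch at once.
    return [x**2 + x**3 for x in xs]

def unbatch(outputs):
    # Split the batched output back into one result per request.
    return list(outputs)

# Three concurrent requests flow through a single model call.
results = unbatch(predict(batch([1.0, 2.0, 4.0])))
# results == [2.0, 12.0, 80.0]
```

Batching amortizes per-call overhead (and keeps GPUs busy) by trading a small wait for much higher throughput.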

 

Featured examples

Use LitServe to deploy any model or AI service (compound AI, gen AI, classic ML, embeddings, LLMs, vision, audio, etc.):

Examples

Toy model:      Hello world
LLMs:           Llama 3.2, LLM Proxy server, Agent with tool use
RAG:            vLLM RAG (Llama 3.2), RAG API (LlamaIndex)
NLP:            Hugging Face, BERT, Text embedding API
Multimodal:     OpenAI Clip, MiniCPM, Phi-3.5 Vision Instruct, Qwen2-VL, Pixtral
Audio:          Whisper, AudioCraft, StableAudio, Noise cancellation (DeepFilterNet)
Vision:         Stable diffusion 2, AuraFlow, Flux, Image Super Resolution (Aura SR),
                Background Removal, Control Stable Diffusion (ControlNet)
Speech:         Text-to-speech (XTTS V2), Parler-TTS
Classical ML:   Random forest, XGBoost
Miscellaneous:  Media conversion API (ffmpeg), PyTorch + TensorFlow in one API

Browse 100+ community-built templates

 

Features

State-of-the-art features:

(2x)+ faster than plain FastAPI
Bring your own model
Build compound systems (1+ models)
GPU autoscaling
Batching
Streaming
Worker autoscaling
Self-host on your machines
Host fully managed on Lightning AI
Serve all models (LLMs, vision, etc.)
Scale to zero (serverless)
Supports PyTorch, JAX, TF, etc...
OpenAPI compliant
OpenAI compatibility
Authentication
Dockerization

10+ features...

Note: We prioritize scalable, enterprise-level features over hype.
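Streaming, for instance, means predict yields chunks instead of returning one blob. A toy, framework-free sketch of the pattern (in LitServe, the server forwards each yielded chunk to the client as it is produced):

```python
def predict(prompt):
    # Yield one "token" at a time instead of waiting for the full output.
    for word in prompt.split():
        yield word + " "

# A client receives these chunks incrementally rather than all at once.
chunks = list(predict("streaming keeps latency low"))
# "".join(chunks) == "streaming keeps latency low "
```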

 

Performance

LitServe is designed for AI workloads. Specialized multi-worker handling delivers a minimum 2x speedup over FastAPI.

Additional features like batching and GPU autoscaling can drive performance well beyond 2x, scaling efficiently to handle more simultaneous requests than FastAPI and TorchServe.

Reproduce the full benchmarks here (higher is better).


These results are for image and text classification ML tasks. The performance relationships hold for other ML tasks (embedding, LLM serving, audio, segmentation, object detection, summarization etc...).

💡 Note on LLM serving: For high-performance LLM serving (like Ollama/vLLM), integrate vLLM with LitServe, use LitGPT, or build your custom vLLM-like server with LitServe. Optimizations like kv-caching, which can be done with LitServe, are needed to maximize LLM performance.
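The kv-caching idea in one toy sketch: cache expensive per-token work so each decoding step only pays for the new token. This memoization example illustrates the principle only; real kv-caching stores per-layer attention keys/values, and all names here are illustrative:

```python
cache = {}
calls = 0

def expensive_step(token):
    # Stand-in for computing attention keys/values for one token.
    global calls
    calls += 1
    return token.upper()

def encode(tokens):
    # Reuse cached per-token work; only new tokens pay the cost.
    out = []
    for t in tokens:
        if t not in cache:
            cache[t] = expensive_step(t)
        out.append(cache[t])
    return out

encode(["a", "b"])       # 2 expensive calls
encode(["a", "b", "c"])  # only 1 new call, for "c"
# calls == 3
```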

 

Hosting options

LitServe can be hosted independently on your own machines or fully managed via Lightning Studios.

Self-hosting is ideal for hackers, students, and DIY developers. Fully managed hosting suits enterprise developers who need easy autoscaling, security, release management, observability, and 99.995% uptime.

 

 

Feature                     Self Managed                  Fully Managed on Studios
Deployment                  ✅ Do it yourself deployment  ✅ One-button cloud deploy
Load balancing                                            ✅
Autoscaling                                               ✅
Scale to zero                                             ✅
Multi-machine inference                                   ✅
Authentication                                            ✅
Own VPC                                                   ✅
AWS, GCP                                                  ✅
Use your own cloud commits                                ✅

 

Community

LitServe is a community project that welcomes contributions. Let's build the world's most advanced AI inference engine.

💬 Get help on Discord
📋 License: Apache 2.0

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

litserve-0.2.5.dev0.tar.gz (39.6 kB)

Uploaded Source

Built Distribution

litserve-0.2.5.dev0-py3-none-any.whl (44.0 kB)

Uploaded Python 3

File details

Details for the file litserve-0.2.5.dev0.tar.gz.

File metadata

  • Download URL: litserve-0.2.5.dev0.tar.gz
  • Upload date:
  • Size: 39.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.7

File hashes

Hashes for litserve-0.2.5.dev0.tar.gz
Algorithm Hash digest
SHA256 8ef2fc08e5bed353e3f0e7fe190355c1a115d02da6053af0110515c792a8f2b9
MD5 9c70e233daaf786f7a84d5e4558e1bdd
BLAKE2b-256 91cd60b578f5f1e09238f80b0fcf5cea4317a567203cb2d8ce3467103b07b911

See more details on using hashes here.

File details

Details for the file litserve-0.2.5.dev0-py3-none-any.whl.

File hashes

Hashes for litserve-0.2.5.dev0-py3-none-any.whl
Algorithm Hash digest
SHA256 96ec306b85af09f9f57254ef23d682f3efe397f4481fff551abfe5de7fd6558f
MD5 ad1ddf5fbb34ec83c78f1f4ccec23fdd
BLAKE2b-256 7ee94606ba42a22d7d186e2a598ff8461b8e1d68b9c0c0d8e5016ff3b8d6a060

See more details on using hashes here.
