
Lightweight AI server.

Project description

The easiest way to deploy agents, RAG, pipelines, any model.
No MLOps. No YAML.


Most serving engines serve one model with rigid abstractions. LitServe lets you serve any model (vision, audio, text) and build full AI systems - agents, chatbots, RAG, pipelines - with full control, batching, multi-GPU, streaming, custom logic, multi-model support, and zero YAML.

Self-host or deploy in one click to Lightning AI.

 

✅ Build full AI systems   ✅ 2× faster than FastAPI     ✅ Agents, RAG, pipelines, more
✅ Custom logic + control  ✅ Any PyTorch model          ✅ Self-host or managed        
✅ Multi-GPU autoscaling   ✅ Batching + streaming       ✅ BYO model or vLLM           
✅ No MLOps glue code      ✅ Easy setup in Python       ✅ Serverless support          


 

 

Quick start

Install LitServe via pip (more options):

pip install litserve

Example 1: Toy inference pipeline with multiple models.
Example 2: Minimal agent to fetch the news (with OpenAI API).
(See the featured examples below for more advanced ones.)

Inference pipeline example

import litserve as ls

# define the API to include any number of models, DBs, etc...
class InferencePipeline(ls.LitAPI):
    def setup(self, device):
        self.model1 = lambda x: x**2
        self.model2 = lambda x: x**3

    def predict(self, request):
        x = request["input"]    
        # perform calculations using both models
        a = self.model1(x)
        b = self.model2(x)
        c = a + b
        return {"output": c}

if __name__ == "__main__":
    # 12+ features like batching, streaming, etc...
    server = ls.LitServer(InferencePipeline(max_batch_size=1), accelerator="auto")
    server.run(port=8000)

Deploy for free to Lightning cloud (or self-host anywhere):

# Deploy for free with autoscaling, monitoring, etc...
lightning deploy server.py --cloud

# Or run locally (self-host anywhere)
lightning deploy server.py
# python server.py

Test the server: simulate an HTTP request (run this in any terminal):

curl -X POST http://127.0.0.1:8000/predict -H "Content-Type: application/json" -d '{"input": 4.0}'
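
Or call it from Python with the requests library (a minimal sketch, assuming the server above is running locally on port 8000; the expected value follows from the toy models):

import requests

# send the same payload the curl command uses
response = requests.post(
    "http://127.0.0.1:8000/predict",
    json={"input": 4.0},
    timeout=10,
)
print(response.json())  # expected: {"output": 80.0}, since 4**2 + 4**3 = 80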

Agent example

import os, re, requests, openai
import litserve as ls

class NewsAgent(ls.LitAPI):
    def setup(self, device):
        # read the key from the environment (set OPENAI_API_KEY before running)
        self.openai_client = openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

    def predict(self, request):
        website_url = request.get("website_url", "https://text.npr.org/")
        website_text = re.sub(r'<[^>]+>', ' ', requests.get(website_url).text)

        # ask the LLM to tell you about the news
        llm_response = self.openai_client.chat.completions.create(
           model="gpt-3.5-turbo", 
           messages=[{"role": "user", "content": f"Based on this, what is the latest: {website_text}"}],
        )
        output = llm_response.choices[0].message.content.strip()
        return {"output": output}

if __name__ == "__main__":
    server = ls.LitServer(NewsAgent())
    server.run(port=8000)
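
Before running, export your OpenAI key (the value shown is a placeholder; setup() above reads it from the environment):

export OPENAI_API_KEY="sk-..."
python server.py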

Test it:

curl -X POST http://127.0.0.1:8000/predict -H "Content-Type: application/json" -d '{"website_url": "https://text.npr.org/"}'

 

Key benefits

A few key benefits:

  • Deploy any pipeline or model: Agents, pipelines, RAG, chatbots, image models, video, speech, text, etc...
  • No MLOps glue: LitAPI lets you build full AI systems (multi-model, agent, RAG) in one place (more).
  • Instant setup: Connect models, DBs, and data in a few lines with setup() (more).
  • Optimized: autoscaling, GPU support, and fast inference included (more).
  • Deploy anywhere: self-host or one-click deploy with Lightning (more).
  • FastAPI for AI: Built on FastAPI but optimized for AI - 2× faster with AI-specific multi-worker handling (more).
  • Expert-friendly: Use vLLM, or build your own with full control over batching, caching, and logic (more).

⚠️ Not a vLLM or Ollama alternative out of the box. LitServe gives you lower-level flexibility to build what they do (and more) if you need it.
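
For a sense of that lower-level control, here is a minimal sketch using the standard LitAPI decode_request and encode_response hooks; the toy callable and the response shape are illustrative, not a prescribed pattern:

import litserve as ls

class CustomLogicAPI(ls.LitAPI):
    def setup(self, device):
        # any model works here; a toy callable keeps the sketch self-contained
        self.model = lambda x: x * 10

    def decode_request(self, request):
        # full control over how the raw JSON payload becomes model input
        return float(request["input"])

    def predict(self, x):
        return self.model(x)

    def encode_response(self, output):
        # full control over the response shape
        return {"output": output, "engine": "custom"}

if __name__ == "__main__":
    ls.LitServer(CustomLogicAPI()).run(port=8000)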

 

Featured examples

Here are examples of inference pipelines for common model types and use cases.

Toy model:      Hello world
LLMs:           Llama 3.2, LLM Proxy server, Agent with tool use
RAG:            vLLM RAG (Llama 3.2), RAG API (LlamaIndex)
NLP:            Hugging Face, BERT, Text embedding API
Multimodal:     OpenAI CLIP, MiniCPM, Phi-3.5 Vision Instruct, Qwen2-VL, Pixtral
Audio:          Whisper, AudioCraft, StableAudio, Noise cancellation (DeepFilterNet)
Vision:         Stable Diffusion 2, AuraFlow, Flux, Image Super Resolution (Aura SR),
                Background Removal, Control Stable Diffusion (ControlNet)
Speech:         Text-to-speech (XTTS V2), Parler-TTS
Classical ML:   Random forest, XGBoost
Miscellaneous:  Media conversion API (ffmpeg), PyTorch + TensorFlow in one API, LLM proxy server

Browse 100+ community-built templates

 

Host anywhere

Self-host with full control, or deploy with Lightning AI in seconds with autoscaling, security, and 99.995% uptime.
Free tier included. No setup required. Run on your own cloud.

lightning deploy server.py --cloud

Demo video: https://github.com/user-attachments/assets/ff83dab9-0c9f-4453-8dcb-fb9526726344

 

Features

Feature | Self Managed | Fully Managed on Lightning
Docker-first deployment | ✅ DIY | ✅ One-click deploy
Cost | ✅ Free (DIY) | ✅ Generous free tier with pay as you go
Full control | ✅ | ✅
Use any engine (vLLM, etc.) | ✅ | ✅ vLLM, Ollama, LitServe, etc.
Own VPC | ✅ (manual setup) | ✅ Connect your own VPC
(2x)+ faster than plain FastAPI | ✅ | ✅
Bring your own model | ✅ | ✅
Build compound systems (1+ models) | ✅ | ✅
GPU autoscaling | ✅ | ✅
Batching | ✅ | ✅
Streaming | ✅ | ✅
Worker autoscaling | ✅ | ✅
Serve all models (LLMs, vision, etc.) | ✅ | ✅
Supports PyTorch, JAX, TF, etc... | ✅ | ✅
OpenAPI compliant | ✅ | ✅
OpenAI compatibility | ✅ | ✅
Authentication | ❌ DIY | ✅ Token, password, custom
GPUs | ❌ DIY | ✅ 8+ GPU types, H100s from $1.75
Load balancing | ❌ | ✅ Built-in
Scale to zero (serverless) | ❌ | ✅ No machine runs when idle
Autoscale up on demand | ❌ | ✅ Auto scale up/down
Multi-node inference | ❌ | ✅ Distribute across nodes
Use AWS/GCP credits | ❌ | ✅ Use existing cloud commits
Versioning | ❌ | ✅ Make and roll back releases
Enterprise-grade uptime (99.95%) | ❌ | ✅ SLA-backed
SOC2 / HIPAA compliance | ❌ | ✅ Certified & secure
Observability | ❌ | ✅ Built-in, connect 3rd party tools
CI/CD ready | ❌ | ✅ Lightning SDK
24/7 enterprise support | ❌ | ✅ Dedicated support
Cost controls & audit logs | ❌ | ✅ Budgets, breakdowns, logs
Debug on GPUs | ❌ | ✅ Studio integration
20+ features | - | -

 

Performance

LitServe is designed for AI workloads. Specialized multi-worker handling delivers a minimum 2x speedup over FastAPI.

Additional features like batching and GPU autoscaling can drive performance well beyond 2x, scaling efficiently to handle more simultaneous requests than FastAPI and TorchServe.
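
To get a rough feel for concurrent throughput on your own hardware, here is a small load-test sketch in plain Python (no LitServe-specific API; the URL and payload match the quick-start server, and the worker and request counts are arbitrary):

from concurrent.futures import ThreadPoolExecutor
import time, requests

URL = "http://127.0.0.1:8000/predict"

def call(_):
    # fire one request and report its HTTP status code
    return requests.post(URL, json={"input": 4.0}, timeout=30).status_code

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=32) as pool:
    codes = list(pool.map(call, range(256)))
elapsed = time.perf_counter() - start

print(f"{len(codes)} requests, {codes.count(200)} OK, {len(codes) / elapsed:.1f} req/s")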

Reproduce the full benchmarks here (higher is better).


These results are for image and text classification ML tasks. The performance relationships hold for other ML tasks (embedding, LLM serving, audio, segmentation, object detection, summarization etc...).

💡 Note on LLM serving: For high-performance LLM serving (like Ollama/vLLM), integrate vLLM with LitServe, use LitGPT, or build your custom vLLM-like server with LitServe. Optimizations like kv-caching, which can be done with LitServe, are needed to maximize LLM performance.
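
As a rough starting point, a vLLM engine can be wrapped in a LitAPI like any other model (a hedged sketch, not an official recipe: the model name and sampling settings are placeholders, and vLLM's offline LLM class is assumed rather than its OpenAI-compatible server):

import litserve as ls
from vllm import LLM, SamplingParams

class VLLMServer(ls.LitAPI):
    def setup(self, device):
        # placeholder model name: pick any model your GPU can hold
        self.llm = LLM(model="meta-llama/Llama-3.2-1B-Instruct")
        self.sampling = SamplingParams(temperature=0.7, max_tokens=256)

    def predict(self, request):
        prompt = request["prompt"]
        outputs = self.llm.generate([prompt], self.sampling)
        return {"output": outputs[0].outputs[0].text}

if __name__ == "__main__":
    ls.LitServer(VLLMServer(), accelerator="auto").run(port=8000)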

 

Community

LitServe is a community project accepting contributions. Let's make the world's most advanced AI inference engine.

💬 Get help on Discord
📋 License: Apache 2.0



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

litserve-0.2.12.tar.gz (69.4 kB view details)

Uploaded Source

Built Distribution

litserve-0.2.12-py3-none-any.whl (83.1 kB view details)

Uploaded Python 3

File details

Details for the file litserve-0.2.12.tar.gz.

File metadata

  • Download URL: litserve-0.2.12.tar.gz
  • Upload date:
  • Size: 69.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.12.8

File hashes

Hashes for litserve-0.2.12.tar.gz
Algorithm Hash digest
SHA256 a427779e9310c94c3c036b8b8075538e299530800ee9e75b2a27b6437ad74c42
MD5 a92494c9ed3924d9978fe433a0a8802f
BLAKE2b-256 f5152d5a5c714d75c673e0553a7162bbc0e9ef4369e7646460fb1fd9b15259dc

See more details on using hashes here.
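
For example, to check a downloaded sdist against the SHA256 listed above (assumes the file is in the current directory):

python -c "import hashlib; print(hashlib.sha256(open('litserve-0.2.12.tar.gz', 'rb').read()).hexdigest())"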

File details

Details for the file litserve-0.2.12-py3-none-any.whl.

File metadata

  • Download URL: litserve-0.2.12-py3-none-any.whl
  • Upload date:
  • Size: 83.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.12.8

File hashes

Hashes for litserve-0.2.12-py3-none-any.whl
Algorithm Hash digest
SHA256 dd58f7a459d68a03954b411052f634a2c9ca3552de4afc49d264f22f698fc6b7
MD5 d6bea0c885b0bfbca18f0e62d461c548
BLAKE2b-256 f8afde05cfa9b7792ca95bb281bd6e3b9d3be871cee7b17287d0cbef25b1a4ce

See more details on using hashes here.
