
ServerlessLLM

| Documentation | Paper | Discord | WeChat |


ServerlessLLM (sllm, pronounced "slim") is an open-source serverless framework designed to make custom and elastic LLM deployment easy, fast, and affordable. As LLMs grow in size and complexity, deploying them on AI hardware has become increasingly costly and technically challenging, limiting custom LLM deployment to only a select few. ServerlessLLM solves these challenges with a full-stack, LLM-centric serverless system design, optimizing everything from checkpoint formats and inference runtimes to the storage layer and cluster scheduler.

News

  • [10/24] ServerlessLLM was invited to present at a global AI tech vision forum in Singapore.
  • [10/24] We hosted the first ServerlessLLM developer meetup in Edinburgh, attracting more than 50 attendees both in person and online. Together, we brainstormed many exciting new features. If you have great ideas, we’d love for you to join us!
  • [10/24] We made the first public release of ServerlessLLM. Check out the details of the release here.
  • [09/24] ServerlessLLM now supports embedding-based RAG + LLM deployment. We’re preparing a blog and demo—stay tuned!
  • [08/24] ServerlessLLM added support for vLLM.
  • [07/24] We presented ServerlessLLM at Nvidia's headquarters.
  • [06/24] ServerlessLLM officially went public.

Goals

ServerlessLLM is designed to support multiple LLMs in efficiently sharing limited AI hardware and dynamically switching between them on demand, which can increase hardware utilization and reduce the cost of LLM services. This multi-LLM scenario, commonly referred to as Serverless, is highly sought after by AI practitioners, as seen in solutions like Serverless Inference, Inference Endpoints, and Model Endpoints. However, these existing offerings often face performance overhead and scalability challenges, which ServerlessLLM effectively addresses through three key capabilities:

ServerlessLLM is Fast:

  • Supports leading LLM inference libraries like vLLM and HuggingFace Transformers. Through vLLM, ServerlessLLM can run on various types of AI hardware (summarized by vLLM here).
  • Achieves 5-10X faster loading speeds compared to Safetensors and the PyTorch Checkpoint Loader.
  • Features an optimized model loading scheduler, offering 5-100X lower start-up latency than Ray Serve and KServe.
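
The loading-speed claim above is easiest to appreciate with a side-by-side timing. The sketch below is a minimal, illustrative comparison, not a rigorous benchmark: it assumes the checkpoint has already been converted to the ServerlessLLM Store format, and the sllm_store.transformers.load_model import and its storage_path argument are assumptions drawn from the ServerlessLLM Store Guide; follow the guide for the authoritative API.

import time

import torch
from transformers import AutoModelForCausalLM

# Assumed import based on the ServerlessLLM Store Guide; the exact module
# path and signature may differ in your installed version.
from sllm_store.transformers import load_model


def timed(label, fn):
    """Run fn once and print the wall-clock time it took."""
    start = time.perf_counter()
    result = fn()
    print(f"{label}: {time.perf_counter() - start:.2f}s")
    return result


# Baseline: standard Transformers loading from a Safetensors checkpoint
# (assumes the model is already cached locally so no download is timed).
timed(
    "transformers / safetensors",
    lambda: AutoModelForCausalLM.from_pretrained(
        "facebook/opt-1.3b", torch_dtype=torch.float16
    ),
)

# ServerlessLLM Store loading (assumed API; requires the checkpoint to have
# been saved in the sllm-store format, and any store server the guide requires).
timed(
    "sllm-store",
    lambda: load_model(
        "facebook/opt-1.3b",
        torch_dtype=torch.float16,
        storage_path="./models",  # assumed parameter name and layout
    ),
)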

ServerlessLLM is Cost-Efficient:

  • Allows multiple LLM models to share GPUs with minimal model switching overhead and supports seamless inference live migration.
  • Maximizes the use of local storage on multi-GPU servers, reducing the need for expensive storage servers and excessive network bandwidth.

ServerlessLLM is Easy-to-Use:

  • Install with a single pip command and start a local cluster by following the Quick Start Guide.
  • Manage deployments with simple CLI tools and Python APIs for loading and unloading checkpoints (see Getting Started and Documentation below).

Getting Started

  1. Install ServerlessLLM with pip or from source.
# On the head node
conda create -n sllm python=3.10 -y
conda activate sllm
pip install serverless-llm

# On a worker node
conda create -n sllm-worker python=3.10 -y
conda activate sllm-worker
pip install "serverless-llm[worker]"  # quotes keep some shells from globbing the extras bracket
  2. Start a local ServerlessLLM cluster using the Quick Start Guide.

  3. Want to try fast checkpoint loading in your own code? Check out the ServerlessLLM Store Guide.
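
For a feel of what the Store Guide covers, here is a minimal sketch of the typical convert-then-load flow: save a HuggingFace model into the store's checkpoint format once, then load it with the fast loader at start-up. The sllm_store.transformers save_model/load_model names, the storage_path argument, and the ./models directory layout are assumptions drawn from the guide; consult the guide for the exact API and for any store server that must be running first.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed imports based on the ServerlessLLM Store Guide.
from sllm_store.transformers import load_model, save_model

model_name = "facebook/opt-1.3b"
storage_path = "./models"  # assumed local checkpoint directory

# Step 1 (one-off): convert a HuggingFace checkpoint into the store format.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
save_model(model, f"{storage_path}/{model_name}")  # assumed signature

# Step 2 (at start-up): load the converted checkpoint with the fast loader.
model = load_model(
    model_name,
    device_map="auto",
    torch_dtype=torch.float16,
    storage_path=storage_path,  # assumed parameter name
)

# Quick sanity check that the loaded model generates text.
tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))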

Documentation

To install ServerlessLLM, please follow the steps outlined in our documentation. ServerlessLLM also offers Python APIs for loading and unloading checkpoints, as well as CLI tools to launch an LLM cluster. Both the CLI tools and APIs are demonstrated in the documentation.

Benchmark

Benchmark results for ServerlessLLM can be found here.

Community

ServerlessLLM is maintained by a growing global team of more than 10 developers. If you're interested in learning more or getting involved, we invite you to join our community on Discord and WeChat: share your ideas, ask questions, and contribute to the development of ServerlessLLM. To become a contributor, please refer to our Contributor Guide.

Citation

If you use ServerlessLLM for your research, please cite our paper:

@inproceedings{fu2024serverlessllm,
  title={ServerlessLLM: Low-Latency Serverless Inference for Large Language Models},
  author={Fu, Yao and Xue, Leyang and Huang, Yeqi and Brabete, Andrei-Octavian and Ustiugov, Dmitrii and Patel, Yuvraj and Mai, Luo},
  booktitle={18th USENIX Symposium on Operating Systems Design and Implementation (OSDI 24)},
  pages={135--153},
  year={2024}
}

