ServerlessLLM

| Documentation | Paper | Discord |

ServerlessLLM (sllm, pronounced "slim") is a fast, affordable, and easy-to-use library for multi-LLM serving, also known as Serverless Inference, Inference Endpoints, or Model Endpoints. The library is ideal for environments with limited GPU resources (the "GPU poor"), as it dynamically loads models onto GPUs on demand. By supporting a high degree of GPU multiplexing, it maximizes GPU utilization without dedicating GPUs to individual models.

News

  • [07/24] We are working towards the first release and getting the documentation ready. Stay tuned!

About

ServerlessLLM is Fast:

  • Supports leading LLM inference libraries, including vLLM and HuggingFace Transformers.
  • Loads model checkpoints 5-10X faster than Safetensors and the PyTorch checkpoint loader.
  • Provides a start-time-optimized model loading scheduler, achieving 5-100X lower LLM start-up latency than Ray Serve and KServe.

ServerlessLLM is Affordable:

  • Lets many LLMs share a few GPUs, with low model-switching overhead and seamless live migration of inference.
  • Fully utilizes the local storage available on multi-GPU servers, reducing the need for costly storage servers and network bandwidth.

ServerlessLLM is Easy:

Getting Started

  1. Install ServerlessLLM by following the Installation Guide.

  2. Start a local ServerlessLLM cluster by following the Quick Start Guide.

  3. Just want to try out fast checkpoint loading in your own code? Check out the ServerlessLLM Store Guide; a short sketch follows this list.
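As a rough illustration of step 3, the sketch below saves a HuggingFace checkpoint in the ServerlessLLM Store format and loads it back with the fast loader. It assumes the sllm_store.transformers module with save_model/load_model as described in the ServerlessLLM Store Guide, the facebook/opt-1.3b checkpoint, and a local ./models directory; module paths and argument names may differ across releases, so treat this as a sketch and follow the guide for the exact API in your installed version.

# Minimal sketch of fast checkpoint saving/loading with ServerlessLLM Store.
# The import path, storage path, and keyword arguments below are assumptions
# taken from the Store Guide; verify them against your installed version.
import torch
from transformers import AutoModelForCausalLM

from sllm_store.transformers import save_model, load_model

# 1. Download a HuggingFace checkpoint and convert it to the
#    ServerlessLLM loading format on local storage.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-1.3b", torch_dtype=torch.float16
)
save_model(model, "./models/facebook/opt-1.3b")

# 2. Later, load it back with the fast checkpoint loader (the ServerlessLLM
#    Store server must be running and pointed at the same storage path;
#    see the Store Guide for how to start it).
model = load_model(
    "facebook/opt-1.3b",
    device_map="auto",
    torch_dtype=torch.float16,
    storage_path="./models/",
    fully_parallel=True,
)

Once loaded, the model behaves like a regular HuggingFace Transformers model, so the usual tokenizer-plus-generate() workflow applies unchanged.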

Performance

A detailed analysis of ServerlessLLM's performance is available here.

Contributing

ServerlessLLM is actively maintained and developed by these Contributors. We welcome new contributors to join us in making ServerlessLLM faster, better, and easier to use. Please check the Contributing Guide for details.

Citation

If you use ServerlessLLM for your research, please cite our paper:

@inproceedings{fu2024serverlessllm,
  title={ServerlessLLM: Low-Latency Serverless Inference for Large Language Models},
  author={Fu, Yao and Xue, Leyang and Huang, Yeqi and Brabete, Andrei-Octavian and Ustiugov, Dmitrii and Patel, Yuvraj and Mai, Luo},
  booktitle={18th USENIX Symposium on Operating Systems Design and Implementation (OSDI 24)},
  pages={135--153},
  year={2024}
}
