
SGLang is yet another fast serving framework for large language models and vision language models.

Project description



| Blog | Documentation | Join Slack | Join Bi-Weekly Development Meeting | Slides |

News

  • [2024/10] 🔥 The First SGLang Online Meetup (slides).
  • [2024/09] SGLang v0.3 Release: 7x Faster DeepSeek MLA, 1.5x Faster torch.compile, Multi-Image/Video LLaVA-OneVision (blog).
  • [2024/07] Faster Llama3 Serving with SGLang Runtime (vs. TensorRT-LLM, vLLM) (blog).
More
  • [2024/02] SGLang enables 3x faster JSON decoding with compressed finite state machine (blog).
  • [2024/04] SGLang is used by the official LLaVA-NeXT (video) release (blog).
  • [2024/01] SGLang provides up to 5x faster inference with RadixAttention (blog).
  • [2024/01] SGLang powers the serving of the official LLaVA v1.6 release demo (usage).

About

SGLang is a fast serving framework for large language models and vision language models. It makes your interaction with models faster and more controllable by co-designing the backend runtime and frontend language. The core features include:

  • Fast Backend Runtime: Provides efficient serving with RadixAttention for prefix caching, jump-forward constrained decoding, continuous batching, token attention (paged attention), tensor parallelism, FlashInfer kernels, chunked prefill, and quantization (INT4/FP8/AWQ/GPTQ).
  • Flexible Frontend Language: Offers an intuitive interface for programming LLM applications, including chained generation calls, advanced prompting, control flow, multi-modal inputs, parallelism, and external interactions.
  • Extensive Model Support: Supports a wide range of generative models (Llama, Gemma, Mistral, Qwen, DeepSeek, LLaVA, etc.), embedding models (e5-mistral, gte), and reward models (Skywork), with easy extensibility for integrating new models.
  • Active Community: SGLang is open-source and backed by an active community with industry adoption.

Getting Started

Install SGLang: See https://sgl-project.github.io/start/install.html

Send requests: See https://sgl-project.github.io/start/send_request.html
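As a quick illustration of the request flow (not a replacement for the linked guide), the sketch below posts a prompt to a locally running SGLang server through its native HTTP API. The port, the /generate path, and the payload fields are assumptions based on common SGLang examples and may vary by version.

```python
import requests

# Sketch only: assumes an SGLang server is already running locally on port 30000
# (see the install and send_request links above for the actual launch command).
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "The capital of France is",
        "sampling_params": {
            "max_new_tokens": 32,   # cap the generated length
            "temperature": 0,       # greedy decoding for a deterministic reply
        },
    },
)
print(response.json())
```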

Backend: SGLang Runtime (SRT)

See https://sgl-project.github.io/backend/backend.html
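The runtime also serves OpenAI-compatible endpoints, so existing clients can point at it with only a base-URL change. Below is a hedged sketch using the openai Python package; the port, model name, and dummy API key are placeholders for whatever server you launched, so adjust them to your setup.

```python
from openai import OpenAI

# Sketch only: assumes a local SGLang server exposing the OpenAI-compatible API
# on port 30000; the model name must match the model the server was launched with.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # placeholder model name
    messages=[{"role": "user", "content": "Explain prefix caching in one sentence."}],
    temperature=0,
    max_tokens=64,
)
print(response.choices[0].message.content)
```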

Frontend: Structured Generation Language (SGLang)

See https://sgl-project.github.io/frontend/frontend.html
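To give a flavor of the frontend language, here is a minimal sketch of a program with two chained generation calls, executed against a local SGLang endpoint. The primitives (@sgl.function, sgl.gen, sgl.RuntimeEndpoint, set_default_backend) follow the SGLang frontend API of this release series; treat the endpoint URL and parameters as placeholders and see the frontend docs above for the authoritative usage.

```python
import sglang as sgl

@sgl.function
def two_step_qa(s, question):
    # First generation call: answer the question.
    s += "Question: " + question + "\n"
    s += "Answer: " + sgl.gen("answer", max_tokens=64, stop="\n")
    # Second, chained call: the follow-up sees the generated answer in its prefix.
    s += "\nOne-sentence summary: " + sgl.gen("summary", max_tokens=32, stop="\n")

# Sketch only: assumes an SGLang server is already running at this address.
sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

state = two_step_qa.run(question="What does RadixAttention cache?")
print(state["answer"])
print(state["summary"])
```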

Benchmark And Performance

Learn more in our release blogs: v0.2 blog, v0.3 blog

Roadmap

Development Roadmap (2024 Q4)

Citation And Acknowledgment

Please cite our paper, SGLang: Efficient Execution of Structured Language Model Programs, if you find the project useful. We also learned from the designs of, and reused code from, the following projects: Guidance, vLLM, LightLLM, FlashInfer, Outlines, and LMQL.

Download files

Download the file for your platform.

Source Distribution

sglang-0.3.5.post2.tar.gz (320.0 kB)

Built Distribution

sglang-0.3.5.post2-py3-none-any.whl (447.5 kB)

File details

Details for the file sglang-0.3.5.post2.tar.gz.

File metadata

  • Download URL: sglang-0.3.5.post2.tar.gz
  • Upload date:
  • Size: 320.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.20

File hashes

Hashes for sglang-0.3.5.post2.tar.gz
  • SHA256: 1e80fa8f442766575bf05d736a9e45eaa5769f4c509679191f4f4040d17e1752
  • MD5: 63aea31fcecde794f8b89408aff9b0eb
  • BLAKE2b-256: 5697843060356a7cb2a2ba3eab0a26963d2446bb993830867f366361e030013b


File details

Details for the file sglang-0.3.5.post2-py3-none-any.whl.

File metadata

  • Download URL: sglang-0.3.5.post2-py3-none-any.whl
  • Upload date:
  • Size: 447.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.20

File hashes

Hashes for sglang-0.3.5.post2-py3-none-any.whl
  • SHA256: c5bb8bf4c5718f7097a30900d389e3a817360a6862508f8cd506654ca9aba106
  • MD5: 72b49a2e8953b7266db64b219a194113
  • BLAKE2b-256: 7bc0403c2f74dce9ef607fd43fe4004e659bee68019e1f850e9f50a9dada43cb

