
SGLang is a fast serving framework for large language models and vision language models.

Project description



Blog | Documentation | Roadmap | Join Slack | Weekly Dev Meeting | Slides

News

  • [2026/02] 🔥 Unlocking 25x Inference Performance with SGLang on NVIDIA GB300 NVL72 (blog).
  • [2026/01] 🔥 SGLang Diffusion accelerates video and image generation (blog).
  • [2025/12] SGLang provides day-0 support for latest open models (MiMo-V2-Flash, Nemotron 3 Nano, Mistral Large 3, LLaDA 2.0 Diffusion LLM, MiniMax M2).
  • [2025/10] 🔥 SGLang now runs natively on TPU with the SGLang-Jax backend (blog).
  • [2025/09] Deploying DeepSeek on GB200 NVL72 with PD and Large Scale EP (Part II): 3.8x Prefill, 4.8x Decode Throughput (blog).
  • [2025/09] SGLang Day 0 Support for DeepSeek-V3.2 with Sparse Attention (blog).
  • [2025/08] SGLang x AMD SF Meetup on 8/22: Hands-on GPU workshop, tech talks by AMD/xAI/SGLang, and networking (Roadmap, Large-scale EP, Highlights, AITER/MoRI, Wave).
  • [2025/11] SGLang Diffusion accelerates video and image generation (blog).
  • [2025/10] PyTorch Conference 2025 SGLang Talk (slide).
  • [2025/10] SGLang x Nvidia SF Meetup on 10/2 (recap).
  • [2025/08] SGLang provides day-0 support for the OpenAI gpt-oss model (instructions).
  • [2025/06] SGLang, the high-performance serving infrastructure powering trillions of tokens daily, has been awarded the third batch of the Open Source AI Grant by a16z (a16z blog).
  • [2025/06] Deploying DeepSeek on GB200 NVL72 with PD and Large Scale EP (Part I): 2.7x Higher Decoding Throughput (blog).
  • [2025/05] Deploying DeepSeek with PD Disaggregation and Large-scale Expert Parallelism on 96 H100 GPUs (blog).
  • [2025/03] Supercharge DeepSeek-R1 Inference on AMD Instinct MI300X (AMD blog).
  • [2025/03] SGLang Joins PyTorch Ecosystem: Efficient LLM Serving Engine (PyTorch blog).
  • [2025/02] Unlock DeepSeek-R1 Inference Performance on AMD Instinct™ MI300X GPU (AMD blog).
  • [2025/01] SGLang provides day one support for DeepSeek V3/R1 models on NVIDIA and AMD GPUs with DeepSeek-specific optimizations. (instructions, AMD blog, 10+ other companies)
  • [2024/12] v0.4 Release: Zero-Overhead Batch Scheduler, Cache-Aware Load Balancer, Faster Structured Outputs (blog).
  • [2024/10] The First SGLang Online Meetup (slides).
  • [2024/09] v0.3 Release: 7x Faster DeepSeek MLA, 1.5x Faster torch.compile, Multi-Image/Video LLaVA-OneVision (blog).
  • [2024/07] v0.2 Release: Faster Llama3 Serving with SGLang Runtime (vs. TensorRT-LLM, vLLM) (blog).
  • [2024/02] SGLang enables 3x faster JSON decoding with compressed finite state machine (blog).
  • [2024/01] SGLang provides up to 5x faster inference with RadixAttention (blog).
  • [2024/01] SGLang powers the serving of the official LLaVA v1.6 release demo (usage).

About

SGLang is a high-performance serving framework for large language models and multimodal models. It is designed to deliver low-latency and high-throughput inference across a wide range of setups, from a single GPU to large distributed clusters. Its core features include:

  • Fast Runtime: Provides efficient serving with RadixAttention for prefix caching, a zero-overhead CPU scheduler, prefill-decode disaggregation, speculative decoding, continuous batching, paged attention, tensor/pipeline/expert/data parallelism, structured outputs, chunked prefill, quantization (FP4/FP8/INT4/AWQ/GPTQ), and multi-LoRA batching.
  • Broad Model Support: Supports a wide range of language models (Llama, Qwen, DeepSeek, Kimi, GLM, GPT, Gemma, Mistral, etc.), embedding models (e5-mistral, gte, mcdse), reward models (Skywork), and diffusion models (WAN, Qwen-Image), with easy extensibility for adding new models. Compatible with most Hugging Face models and OpenAI APIs.
  • Extensive Hardware Support: Runs on NVIDIA GPUs (GB200/B300/H100/A100/Spark/5090), AMD GPUs (MI355/MI300), Intel Xeon CPUs, Google TPUs, Ascend NPUs, and more.
  • Active Community: SGLang is open-source and supported by a vibrant community with widespread industry adoption, powering over 400,000 GPUs worldwide.
  • RL & Post-Training Backbone: SGLang is a proven rollout backend used for training many frontier models, with native RL integrations and adoption by well-known post-training frameworks such as AReaL, Miles, slime, Tunix, verl and more.

Getting Started
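Since SGLang exposes an OpenAI-compatible API (see the model support notes above), a client talks to a launched server over standard `/v1/chat/completions` requests. The sketch below builds such a payload; the model name and port are placeholders for illustration, not SGLang defaults.

```python
import json

# Hypothetical local endpoint; replace with the host/port your server uses.
BASE_URL = "http://localhost:30000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, temperature: float = 0.0) -> dict:
    """Assemble an OpenAI-style chat-completions payload for an SGLang server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

payload = build_chat_request("meta-llama/Llama-3.1-8B-Instruct", "Say hello.")
print(json.dumps(payload, indent=2))
```

In practice you would POST this payload (e.g. with `requests` or the `openai` client pointed at `BASE_URL`) to a server started per the SGLang installation docs.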

Benchmark and Performance

Learn more in the release blogs: v0.2 blog, v0.3 blog, v0.4 blog, Large-scale expert parallelism, GB200 rack-scale parallelism, GB300 long context.

Adoption and Sponsorship

SGLang has been deployed at large scale, generating trillions of tokens in production each day. It is trusted and adopted by a wide range of leading enterprises and institutions, including xAI, AMD, NVIDIA, Intel, LinkedIn, Cursor, Oracle Cloud, Google Cloud, Microsoft Azure, AWS, Atlas Cloud, Voltage Park, Nebius, DataCrunch, Novita, InnoMatrix, MIT, UCLA, the University of Washington, Stanford, UC Berkeley, Tsinghua University, Jam & Tea Studios, Baseten, and other major technology organizations. As an open-source LLM inference engine, SGLang has become the de facto industry standard, with deployments running on over 400,000 GPUs worldwide. SGLang is currently hosted under the non-profit open-source organization LMSYS.


Contact Us

For enterprises interested in adopting or deploying SGLang at scale, including technical consulting, sponsorship opportunities, or partnership inquiries, please contact us at sglang@lmsys.org.

Long-term active SGLang contributors are eligible for coding agent sponsorship, such as Cursor, Claude Code, or OpenAI Codex. Email sglang@lmsys.org with your most important commits or pull requests.

Acknowledgment

We learned the design and reused code from the following projects: Guidance, vLLM, LightLLM, FlashInfer, Outlines, and LMQL.

Project details


Release history

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

  • sglang-0.5.11-cp313-cp313-manylinux_2_34_x86_64.whl (7.3 MB): CPython 3.13, manylinux (glibc 2.34+), x86-64
  • sglang-0.5.11-cp313-cp313-manylinux_2_34_aarch64.whl (7.3 MB): CPython 3.13, manylinux (glibc 2.34+), ARM64
  • sglang-0.5.11-cp312-cp312-manylinux_2_34_x86_64.whl (7.3 MB): CPython 3.12, manylinux (glibc 2.34+), x86-64
  • sglang-0.5.11-cp312-cp312-manylinux_2_34_aarch64.whl (7.3 MB): CPython 3.12, manylinux (glibc 2.34+), ARM64
  • sglang-0.5.11-cp311-cp311-manylinux_2_34_x86_64.whl (7.3 MB): CPython 3.11, manylinux (glibc 2.34+), x86-64
  • sglang-0.5.11-cp311-cp311-manylinux_2_34_aarch64.whl (7.3 MB): CPython 3.11, manylinux (glibc 2.34+), ARM64
  • sglang-0.5.11-cp310-cp310-manylinux_2_34_x86_64.whl (7.3 MB): CPython 3.10, manylinux (glibc 2.34+), x86-64
  • sglang-0.5.11-cp310-cp310-manylinux_2_34_aarch64.whl (7.3 MB): CPython 3.10, manylinux (glibc 2.34+), ARM64

File details

Hashes for each built distribution (see the PyPI documentation for more details on using hashes):

sglang-0.5.11-cp313-cp313-manylinux_2_34_x86_64.whl
  SHA256      db716eb7081082e2fb1e0542cd860a553243ea1bb1387925aa9d517d44e03b96
  MD5         356157e9cf1cb0367899fd44edf20201
  BLAKE2b-256 4c03d9fe8402d18232c3099e3b502903dd150b74ce199aed7348568b3d2fdffa

sglang-0.5.11-cp313-cp313-manylinux_2_34_aarch64.whl
  SHA256      2a455ff867cf2b5cce6b2930483a3cd64506ba2cdf12a375d8d731ae45ad1b93
  MD5         b6c95a39721b19c9b76e854e28dc3f8a
  BLAKE2b-256 e6c8ccc83d76bc86f3cb1295c548f42115db5c25f15672a9daa47f00b503154f

sglang-0.5.11-cp312-cp312-manylinux_2_34_x86_64.whl
  SHA256      0cdf73e8d463df53010c33017c7baa68ce5f2d0bf7d015923d6b37b809ca7c50
  MD5         0fc3d70c29624b089c5bc7d1b508de68
  BLAKE2b-256 f0e933bfdbbf010843ce96e69be6711d221a5569429bea0fc8645fba7494b0ac

sglang-0.5.11-cp312-cp312-manylinux_2_34_aarch64.whl
  SHA256      c985b22712333a356c257553196e2aa8276995608e3f7d90a8ee22aa79ec000d
  MD5         bdc133f26e746f4d1f5dab8d276f8ad5
  BLAKE2b-256 916c78a114bfd178008ddd759a44b56d045f6a560ead1fe5ba65224b4a12b964

sglang-0.5.11-cp311-cp311-manylinux_2_34_x86_64.whl
  SHA256      fe0748beaff218d3e16c9018a5e19f018db06fc3c7ac4f90d2f93da1cf074b6f
  MD5         ce845e3c1b3caa5701a74eda0ff9fb62
  BLAKE2b-256 223869c189787023034a3c594d8bb9eefbac86cfa5cd4114c29836b97fce8a10

sglang-0.5.11-cp311-cp311-manylinux_2_34_aarch64.whl
  SHA256      85b84d05a2423c69153daaea0f4745b545e805131d6bf7d420358f4e394b65f2
  MD5         cafb13663e8974d3887d9fd56d9c286e
  BLAKE2b-256 a891d15e051f8fcf60d9f16eb831f088f764bb79947a02c99ffdb2d96dbda4a4

sglang-0.5.11-cp310-cp310-manylinux_2_34_x86_64.whl
  SHA256      a0a1b9db9b03c2b4176257dfb2ac58b878db5b8a551828c641c672b8f9ee9147
  MD5         919eed8a53fe14bcffac0d116aa2e031
  BLAKE2b-256 8fde26fdb022d8d2fc5e85fd497752f9b954c8c9abd236b400869c754250b8ac

sglang-0.5.11-cp310-cp310-manylinux_2_34_aarch64.whl
  SHA256      a9002c6377e09c2dc6c21053c0076cfcbc0dc8f3a767005b51e092e335907faf
  MD5         50336fc84b1bd538579b015316bc3b88
  BLAKE2b-256 be7e0833223c64205168c83f2d9f278a35417e64799d6a2b89acfc9fdbf8f5aa
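To check a downloaded wheel against the SHA256 digests listed above, you can hash it locally with Python's standard `hashlib`. A minimal sketch; since the wheel itself can't be bundled here, the demo hashes a throwaway file instead:

```python
import hashlib
import os
import tempfile

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 in chunks, so large wheels
    don't need to fit in memory, and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo on a temporary file; for a real check, pass the wheel's path
# and compare the result to the digest published on PyPI.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello sglang")
    path = tmp.name
digest = sha256_of_file(path)
os.remove(path)
print(digest)
```

`pip` performs this verification automatically when a requirements file pins hashes with `--hash=sha256:...`.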
