A high-throughput and memory-efficient inference and serving engine for LLMs