Last released Apr 21, 2026
CLI for benchmarking LLM inference servers (vLLM, SGLang, llama.cpp)
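For context on what such a benchmark reports: vLLM, SGLang, and llama.cpp all expose HTTP inference endpoints, and a benchmarking CLI typically aggregates per-request latencies and generated-token counts into throughput and latency percentiles. The sketch below is illustrative only, not this tool's actual implementation; the metric names and the sequential-request assumption are assumptions:

```python
def summarize(latencies_s, tokens_per_request):
    """Aggregate per-request latencies (seconds) and generated-token
    counts into typical benchmark metrics (hypothetical names)."""
    total_tokens = sum(tokens_per_request)
    total_time = sum(latencies_s)  # assumes requests ran sequentially
    ordered = sorted(latencies_s)
    p50 = ordered[len(ordered) // 2]
    p99 = ordered[min(len(ordered) - 1, int(len(ordered) * 0.99))]
    return {
        "requests": len(latencies_s),
        "throughput_tok_per_s": total_tokens / total_time,
        "latency_p50_s": p50,
        "latency_p99_s": p99,
    }

# Example with synthetic measurements: four requests, 128 tokens each
metrics = summarize([0.8, 1.0, 1.2, 0.9], [128, 128, 128, 128])
```

A real run would collect `latencies_s` and `tokens_per_request` by timing requests against the server under test.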