kani-ext-vllm

vLLM backend for kani

This kani extension adds three engines for using vLLM to deploy LLMs on local hardware.

vLLM is an LLM deployment platform optimized for GPU memory efficiency and throughput. Depending on your use case, this extension provides kani engines that run vLLM in offline mode, manage a vLLM server for you, or connect to an existing vLLM server.

Install this package from PyPI:

$ pip install kani-ext-vllm

Alternatively, you can install it from the git source:

$ pip install git+https://github.com/zhudotexe/kani-ext-vllm.git@main

See https://docs.vllm.ai/en/latest/index.html for more information on vLLM.

Usage

This package provides three main ways of serving models with vLLM:

  • Offline mode
  • vLLM-Native API mode
  • OpenAI-Compatible API mode

These modes are broadly equivalent, but each offers slightly different options:

| Mode | Communication | Multiple Parallel Models? | Prompt Template/Parsing | Best For |
|---|---|---|---|---|
| Offline | Local | No | kani | Low-level control over the model |
| vLLM API | HTTP | Yes | kani | Running multiple different models in parallel |
| OpenAI API | HTTP | Yes | vLLM | Fast iteration and testing multiple models; multimodal models |

Offline Mode

from kani import Kani, chat_in_terminal
from kani.ext.vllm import VLLMEngine

engine = VLLMEngine(model_id="meta-llama/Meta-Llama-3-8B-Instruct")
ai = Kani(engine)
chat_in_terminal(ai)
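
chat_in_terminal is a convenience helper for quick demos; in an application you would typically drive the kani asynchronously. A minimal sketch, assuming kani's standard chat_round_str and engine close() APIs:

import asyncio

from kani import Kani
from kani.ext.vllm import VLLMEngine

async def main():
    engine = VLLMEngine(model_id="meta-llama/Meta-Llama-3-8B-Instruct")
    ai = Kani(engine)
    # one user message in, one assistant reply out (as a string)
    reply = await ai.chat_round_str("Summarize vLLM in one sentence.")
    print(reply)
    # release engine resources (GPU memory, connections) when done
    await engine.close()

asyncio.run(main())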

vLLM-Native API Mode

The API mode can be used to connect to an existing running vLLM server or to start a managed vLLM server.

Connecting to a Running Server

from kani import Kani, chat_in_terminal
from kani.ext.vllm import VLLMServerEngine

engine = VLLMServerEngine(
    model_id="meta-llama/Meta-Llama-3-8B-Instruct",
    vllm_host="127.0.0.1",
    vllm_port=8000,
    use_managed_server=False,
)
ai = Kani(engine)
chat_in_terminal(ai)
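
For reference, a compatible server can be started ahead of time with vLLM's CLI; the host and port here are illustrative and should match the engine arguments above:

$ vllm serve meta-llama/Meta-Llama-3-8B-Instruct --host 127.0.0.1 --port 8000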

Managed Server

[!NOTE] The vLLM server will be started on a random free port. It will not be exposed to the wider internet (i.e., it binds to localhost).

from kani import Kani, chat_in_terminal
from kani.ext.vllm import VLLMServerEngine

engine = VLLMServerEngine(model_id="meta-llama/Meta-Llama-3-8B-Instruct")
ai = Kani(engine)
chat_in_terminal(ai)

OpenAI-Compatible API Mode

Connecting to a Running Server

from kani import Kani, chat_in_terminal
from kani.ext.vllm import VLLMOpenAIEngine

engine = VLLMOpenAIEngine(
    model_id="meta-llama/Meta-Llama-3-8B-Instruct",
    vllm_host="127.0.0.1",
    vllm_port=8000,
    use_managed_server=False,
)
ai = Kani(engine)
chat_in_terminal(ai)

Managed Server

[!NOTE] The vLLM server will be started on a random free port. It will not be exposed to the wider internet (i.e., it binds to localhost).

from kani import Kani, chat_in_terminal
from kani.ext.vllm import VLLMOpenAIEngine

engine = VLLMOpenAIEngine(model_id="meta-llama/Meta-Llama-3-8B-Instruct")
ai = Kani(engine)
chat_in_terminal(ai)

Using Multiple GPUs

For multi-GPU support (usually needed for models that do not fit on a single GPU), pass model_load_kwargs={"tensor_parallel_size": 4} to the engine constructor, replacing 4 with the number of GPUs you have available; see the sketch after the note below.

[!NOTE] If you are loading in an API mode, use vllm_args={"tensor_parallel_size": 4} instead.
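
As a sketch (the model ID is illustrative; the option names are the ones documented above):

from kani.ext.vllm import VLLMEngine, VLLMServerEngine

# offline mode: model_load_kwargs is forwarded to vLLM's model loader
offline_engine = VLLMEngine(
    model_id="meta-llama/Meta-Llama-3-8B-Instruct",
    model_load_kwargs={"tensor_parallel_size": 4},
)

# API modes: the same option is passed through vllm_args instead
server_engine = VLLMServerEngine(
    model_id="meta-llama/Meta-Llama-3-8B-Instruct",
    vllm_args={"tensor_parallel_size": 4},
)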

Examples

Offline Mode

from kani.ext.vllm import VLLMEngine
from vllm import SamplingParams

model = VLLMEngine(
    model_id="mistralai/Mistral-Small-Instruct-2409",
    model_load_kwargs={"tensor_parallel_size": 2, "tokenizer_mode": "auto"},
    sampling_params=SamplingParams(temperature=0, max_tokens=2048),
)
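
As in the quickstart above, wrap the engine in a Kani to chat with it:

from kani import Kani, chat_in_terminal

ai = Kani(model)
chat_in_terminal(ai)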

vLLM-Native API Mode

from kani.ext.vllm import VLLMServerEngine

model = VLLMServerEngine(
    model_id="mistralai/Mistral-Small-Instruct-2409",
    vllm_args={"tensor_parallel_size": 2, "tokenizer_mode": "auto"},
    # note that these should not be wrapped in SamplingParams!
    temperature=0,
    max_tokens=2048,
)

See https://docs.vllm.ai/en/stable/serving/openai_compatible_server.html#completions-api_1 for a list of valid decoding parameters that can be specified in the engine constructor.

See https://docs.vllm.ai/en/stable/cli/serve/ for a list of valid arguments to vllm_args.

OpenAI-Compatible API Mode

from kani.ext.vllm import VLLMOpenAIEngine

model = VLLMOpenAIEngine(
    model_id="Qwen/Qwen3-Omni-30B-A3B-Instruct",
    vllm_args={"tensor_parallel_size": 2, "allowed_local_media_path": "/"},
    # note that these should not be wrapped in SamplingParams!
    temperature=0,
    max_tokens=2048,
)
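
Here, allowed_local_media_path is a vLLM server option that lets the server read media files (e.g., images) from local paths for multimodal prompts; "/" permits any path, so consider narrowing it outside of trusted environments.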

See https://docs.vllm.ai/en/stable/serving/openai_compatible_server.html#chat-api_1 for a list of valid decoding parameters that can be specified in the engine constructor.

See https://docs.vllm.ai/en/stable/cli/serve/ for a list of valid arguments to vllm_args.
