
Project description

vllm-mock

Provide a mock instance to test vLLM without CUDA or any GPUs.

Features

  • vllm.LLM.generate mock
  • vllm.LLM.chat mock

Usage

It is highly recommended to use the mock instance with pytest-mock.

from vllm_mock import LLM
from vllm import SamplingParams

def test_vllm(mocker):
    # Patch vllm.LLM so no CUDA/GPU is needed, then swap in the mock instance.
    mock_class = mocker.patch("vllm.LLM")
    mock_class.return_value = LLM(model="mock-model")

    llm = mock_class()
    sampling_params = SamplingParams(temperature=0.8, top_p=0.9, logprobs=1)
    response = llm.generate("Hello, world!", sampling_params=sampling_params)
    assert isinstance(response[0].outputs[0].text, str)

    chat_response = llm.chat([
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, world!"},
    ], sampling_params=sampling_params)
    assert isinstance(chat_response[0].outputs[0].text, str)
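For readers curious what such a mock has to reproduce: the assertions above depend only on vLLM's nested response shape (a list of request outputs, each holding an `outputs` list whose items carry `.text`). Here is a stdlib-only sketch of that contract using hypothetical stand-in classes, not vllm-mock's actual implementation:

```python
from dataclasses import dataclass, field
from unittest.mock import MagicMock

# Hypothetical stand-ins mirroring vLLM's output layout
# (request output -> .outputs -> completion output -> .text);
# these are sketches, not the real vllm.outputs classes.
@dataclass
class CompletionOutput:
    text: str
    logprobs: list = field(default_factory=list)

@dataclass
class RequestOutput:
    outputs: list

def fake_generate(prompt, sampling_params=None):
    # Return one RequestOutput per call, each holding one completion.
    return [RequestOutput(outputs=[CompletionOutput(text=f"echo: {prompt}")])]

# MagicMock lets caller code invoke .generate exactly as it would on vllm.LLM.
llm = MagicMock()
llm.generate.side_effect = fake_generate

response = llm.generate("Hello, world!")
assert isinstance(response[0].outputs[0].text, str)
```

Because the shape matches, test assertions written against the mock stay valid when the real `vllm.LLM` is restored.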

Installation

pip install vllm-mock pytest-mock

or, in a uv environment:

uv add --dev vllm-mock pytest-mock

To-do List

  • Mock vLLM API server
  • Mock Reasoning model features
  • Mock quantization features
  • Mock LoRA features
  • VLM (vision-language) models mock
  • vllm.LLM.beam_search mock
  • vllm.LLM.embed mock
  • vllm.LLM.classify mock
  • vllm.LLM.encode mock
  • vllm.LLM.reward mock

For Contributors

1. Setup Environment

First, clone the repository:

git clone https://github.com/NomaDamas/vllm-mock.git
cd vllm-mock

Then, install the environment and the pre-commit hooks with

make install

This will also generate your uv.lock file.

2. Run the pre-commit hooks

Initially, the CI/CD pipeline might fail due to formatting issues. To resolve them, run:

uv run pre-commit run -a

You can create any issue or PR to support this project. Thank you!

Builder of this repository

  • Jeffrey is the creator of this repo. He made it because he desperately needed it for his research.
  • NomaDamas is an AI open-source Hacker House in Seoul, Korea.

Repository initiated with fpgmaas/cookiecutter-uv.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

vllm_mock-0.0.3.tar.gz (301.6 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

vllm_mock-0.0.3-py3-none-any.whl (6.5 kB)

Uploaded Python 3

File details

Details for the file vllm_mock-0.0.3.tar.gz.

File metadata

  • Download URL: vllm_mock-0.0.3.tar.gz
  • Upload date:
  • Size: 301.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.6.14

File hashes

Hashes for vllm_mock-0.0.3.tar.gz
Algorithm Hash digest
SHA256 948634eca0f8cf49dfcaca5d4f7be64cc3c9d5d07a7da4a3eb00f3b3f73f254c
MD5 b92f4d471f816e48bda43c2a9536fd15
BLAKE2b-256 7e6fa74e1dfd605665fb8b3efbc5d347bf6c40f17c29b8cee3e9f87e9084fd13

See more details on using hashes here.

File details

Details for the file vllm_mock-0.0.3-py3-none-any.whl.

File metadata

  • Download URL: vllm_mock-0.0.3-py3-none-any.whl
  • Upload date:
  • Size: 6.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.6.14

File hashes

Hashes for vllm_mock-0.0.3-py3-none-any.whl
Algorithm Hash digest
SHA256 a8a824313913457719f87be392783ca14a4032832a7aa0a1f8f56f33a7bf68a1
MD5 35d9dd60b847bff51be02db593711e32
BLAKE2b-256 24f7bcd0e755a9b56b3172ee20e87165445d276927cd4dc11840187059b61979

See more details on using hashes here.
