
Mock LLM responses for unit tests


LLM Mocks

Welcome to the llm-mocks Library! This library is designed to provide mock responses for Large Language Models (LLMs) from popular providers. Whether you are developing, testing, or integrating LLM-based solutions, this library offers a reliable and efficient way to simulate LLM responses without needing direct access to the actual services.

Key Features

  • Mock Responses: Predefined responses for various queries to simulate interactions with LLMs from providers like Anthropic and OpenAI.

  • Seamless Integration: Easy to integrate with your existing development and testing workflows such as Pytest.

  • Extensibility: Ability to customize mock responses to better match your specific use cases.

  • Performance: no network requests, ensuring smooth development and testing processes.

Getting Started

To get started with llm-mocks, follow the installation instructions and explore our examples to see how you can simulate LLM interactions in your projects. We provide detailed guides to help you quickly integrate the library and start benefiting from its features.

Installation

You can install llm-mocks using pip:

pip install llm-mocks

For more information on installation and setup, please refer to our Installation Guide.

Supported providers

  • OpenAI (Chat Completions, Image Generations, Embeddings)
  • Anthropic (Messages)
  • Cohere (Chat, Embed, Rerank)
  • Groq specs
  • Azure specs
  • AWS Bedrock specs
  • Gemini specs
  • VertexAI specs

Usage

The library returns a mock response when the provider and API are supported; otherwise it falls back to vcrpy-based network mocking.
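To illustrate the mock-or-fallback dispatch described above, here is a minimal sketch. `SUPPORTED`, `canned_response`, and `fallback_to_vcr` are hypothetical names for illustration only, not part of the llm-mocks API:

```python
# Toy sketch of "return a mock if supported, else fall back to vcrpy".
# SUPPORTED, canned_response, and fallback_to_vcr are illustrative names.

SUPPORTED = {
    ("openai", "chat.completions"),
    ("anthropic", "messages"),
    ("cohere", "chat"),
}

def canned_response(provider: str, api: str) -> dict:
    """Return a predefined response; no network involved."""
    return {"provider": provider, "api": api, "content": "mocked"}

def fallback_to_vcr(provider: str, api: str) -> dict:
    """Stand-in for vcrpy record/replay of the real request."""
    return {"provider": provider, "api": api, "content": "replayed"}

def handle(provider: str, api: str) -> dict:
    # Known (provider, api) pairs get a canned response; everything
    # else is delegated to the vcrpy-style fallback.
    if (provider, api) in SUPPORTED:
        return canned_response(provider, api)
    return fallback_to_vcr(provider, api)

print(handle("openai", "chat.completions")["content"])  # mocked
print(handle("mistral", "chat")["content"])             # replayed
```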

With Pytest Recording

pytest-recording already implements vcrpy integration. Simply initialise LLMMock to mock responses.

from llm_mocks import LLMMock

...

LLMMock()

Normal mock

Follow the vcrpy usage documentation to set up vcr, then initialise the LLMMock class for response mocking.

Customisation

By default, the mock client always returns the same response. There are a few ways to customise responses for testing and development:

Randomise response

You can pass a Faker instance to LLMMock to randomise responses. If you want the randomness to be reproducible, seed the generator with Faker.seed(...).

from llm_mocks import LLMMock
from faker import Faker

faker = Faker()
LLMMock(faker=faker)
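What seeding buys you is reproducibility: the responses vary, but identically across test runs. The same idea, illustrated with the standard library's `random` module rather than Faker (Faker wraps a pseudo-random generator in much the same way):

```python
import random

# Two generators seeded identically produce identical "random" output.
# This is why seeding the faker makes randomised mock responses
# reproducible across test runs.
a = random.Random(42)
b = random.Random(42)

words = ["alpha", "bravo", "charlie", "delta"]
sentence_a = " ".join(a.choice(words) for _ in range(5))
sentence_b = " ".join(b.choice(words) for _ in range(5))

assert sentence_a == sentence_b  # same seed, same sequence
print(sentence_a)
```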

Provide your own response

If you would like the mock client to return a specific response, you can override the response it generates using the reponse_overwrite hook:

from llm_mocks import LLMMock

def response_overwrite(request, response):
    ...

LLMMock(reponse_overwrite=response_overwrite)
...
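To illustrate the shape of such a hook: it receives the request and the generated response, and returns the response to use. The `MockClient` below is a toy stand-in for illustration, not the llm-mocks implementation:

```python
# Toy stand-in showing how a (request, response) overwrite hook plugs in.
# MockClient is illustrative only, not the llm-mocks implementation.

class MockClient:
    def __init__(self, response_overwrite=None):
        self.response_overwrite = response_overwrite

    def complete(self, request: dict) -> dict:
        response = {"content": "default mocked answer"}
        if self.response_overwrite is not None:
            # The hook may replace or tweak the generated response.
            response = self.response_overwrite(request, response)
        return response

def response_overwrite(request, response):
    # Pin the response for one specific prompt; pass everything else through.
    if request.get("prompt") == "ping":
        return {"content": "pong"}
    return response

client = MockClient(response_overwrite=response_overwrite)
print(client.complete({"prompt": "ping"})["content"])   # pong
print(client.complete({"prompt": "hello"})["content"])  # default mocked answer
```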

Use vcr for recording and replay

This library monkey-patches vcrpy to return a mock response before the vcr cassette reads a local file or makes a network request. To restore vcr functionality, simply disable the mock service:

LLMMock(mock_disabled=True)

Contributing

We welcome contributions from the community! If you have ideas for new features, improvements, or bug fixes, please check out our Contributing Guide to get started.

Support

If you encounter any issues or have questions, please visit our Support Page or open an issue on our GitHub repository.

Thank you for using llm-mocks! We hope it makes your development and testing processes smoother and more efficient.

