
Mock clients for your favorite LLM APIs


MockAI

Fake LLM endpoints for testing

MockAI provides a local server that interoperates with multiple LLM SDKs, so you can call these APIs as usual but receive mock or pre-determined responses at no cost!

The package currently provides full support for OpenAI and Anthropic. It patches these libraries directly under the hood, so it will always be up to date.

Installation

# With pip
pip install ai-mock 

# With poetry
poetry add ai-mock

# With uv
uv add ai-mock

Usage

Start the MockAI server

This is the server that the mock clients will communicate with; we'll see later how to configure our own pre-determined responses :).

# After installing MockAI 
$ mockai 

Chat Completions

To use a mock version of these providers, you only have to change a single line of code (and just barely!):

- from openai import OpenAI         # Real Client
+ from mockai.openai import OpenAI  # Fake Client
# Rest of the code remains the exact same!
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5",  # Model can be whatever you want
    messages=[
        {
            "role": "user",
            "content": "Hi Mock!"
        }
    ],
    # All other kwargs are accepted, but ignored (except for stream ;))
    temperature=0.7,
    top_p=0.95,
)

print(response.choices[0].message.content)
# >> "Hi Mock!"

# By default, the response will be a copy of the
# content of the last message in the conversation

Alternatively, you can use the real SDK and point its base URL at the MockAI server address:

from openai import OpenAI         # Real Client

# The mockai server runs on port 8100 by default
client = OpenAI(api_key="not used but required", base_url="http://localhost:8100/openai")

response = client.chat.completions.create(
    model="gpt-5",
    messages=[
        {
            "role": "user",
            "content": "Hi Mock!"
        }
    ],
    temperature=0.7,
    top_p=0.95,
)

print(response.choices[0].message.content)
# >> "Hi Mock!"

MockAI also provides clients for Anthropic:

# from anthropic import Anthropic
from mockai.anthropic import Anthropic

client = Anthropic()

response = client.messages.create(
    model="claude-3.5-opus",
    messages=[{"role": "user", "content": "What's up!"}],
    max_tokens=1024,
)

print(response.content)
# >> "What's up!"

And of course the async versions of all clients are supported:

from mockai.openai import AsyncOpenAI
from mockai.anthropic import AsyncAnthropic

Streaming is supported as well:

from mockai.openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Hi mock!"}],
    stream=True,
)

# Streaming mock responses will yield one letter per chunk
for chunk in response:
    if chunk.choices:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content)
# >> H
# >> i
# >>  
# >> m
# >> o
# >> c
# >> k
# >> !
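The one-letter-per-chunk behavior above can be pictured as a simple generator over the echoed content (an illustrative sketch, not the server's actual streaming code):

```python
# Illustrative: a mock stream yields the reply one character at a time,
# mirroring how MockAI's streaming responses arrive letter by letter.
def stream_mock_reply(content: str):
    for char in content:
        yield char

chunks = list(stream_mock_reply("Hi mock!"))
print(chunks)
# >> ['H', 'i', ' ', 'm', 'o', 'c', 'k', '!']
```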

To learn more about the usage of each client, you can look at the docs of the respective provider; the mock clients behave exactly the same!

Tool Calling

All mock clients also work with tool calling! By default, the mock response includes a placeholder tool call; to trigger a specific tool call, you must specify it in a pre-determined response.

from mockai.openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Function!"}],
)

print(response.choices[0].message.tool_calls[0].function.name)
# >> "mock"
print(response.choices[0].message.tool_calls[0].function.arguments)
# >> '{"mock_arg": "mock_val"}'

Configure responses

The MockAI server takes an optional path to a JSON file where we can define our responses for both completions and tool calls. The structure of the JSON is simple: each object must have a "type" key with a value of "text" or "function", an "input" key, whose value is matched against the user input, and an "output" key, whose value is returned when the input matches.

// mock_responses.json
[
  {
    "type": "text",
    "input": "How are ya?",
    "output": "I'm fine, thank u 😊. How about you?"
  },
  {
    "type": "function",
    "input": "Where's my order?",
    "output": {
      "name": "get_delivery_date",
      "arguments": {
        "order_id": "1337"
      }
    }
  }
]

When creating your .json file, please follow these rules:

  1. Each response must have a type key whose value is either text or function; this determines the response object of the client.
  2. Responses of type text must have an output key with a string value.
  3. Responses of type function must have a name key with the name of the function, and an arguments key with a dict of args and values (example: {"weather": "42 degrees Fahrenheit"}).
  4. Responses of type function can also accept a list of such objects, to simulate parallel tool calls.
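Since the file is plain JSON, you can also generate it from Python. The sketch below writes a file following the rules above, including a parallel tool-call entry with a list output; the second input and the function names here are invented for illustration:

```python
import json

# Hypothetical responses illustrating the rules above; the second input
# and the function names are made up for this example.
responses = [
    {
        "type": "text",
        "input": "How are ya?",
        "output": "I'm fine, thank u 😊. How about you?",
    },
    {
        # Rule 4: a list output simulates parallel tool calls.
        "type": "function",
        "input": "Ship my order and email me the invoice",
        "output": [
            {"name": "create_shipment", "arguments": {"order_id": "1337"}},
            {"name": "send_invoice", "arguments": {"order_id": "1337"}},
        ],
    },
]

with open("mock_responses.json", "w") as f:
    json.dump(responses, f, ensure_ascii=False, indent=2)
```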

Load the JSON file

To create a MockAI server with our JSON file, we just need to pass it to the mockai command.

$ mockai mock_responses.json

# The full file path can also be passed
$ mockai /home/foo/bar/mock_responses.json

With this, our mock clients will have access to our pre-determined responses!

from mockai.openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "How are ya?"}],
)

print(response.choices[0].message.content)
# >> "I'm fine, thank u 😊. How about you?"

response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Where's my order?"}],
)

print(response.choices[0].message.tool_calls[0].function.name)
# >> "get_delivery_date"

print(response.choices[0].message.tool_calls[0].function.arguments)
# >> '{"order_id": "1337"}'
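Under the hood, resolution can be thought of as matching the user's message against each entry's input and falling back to the echo default. A rough sketch, assuming exact-match semantics (an assumption for illustration; the real server may match differently):

```python
# Rough sketch of response resolution, assuming exact-match semantics.
# (Illustrative only, not MockAI's actual source.)
def resolve_response(responses: list[dict], user_content: str):
    for entry in responses:
        if entry["input"] == user_content:
            return entry["output"]
    return user_content  # default: echo the user's message

responses = [
    {"type": "text", "input": "How are ya?", "output": "I'm fine, thank u 😊. How about you?"},
]

print(resolve_response(responses, "How are ya?"))
# >> I'm fine, thank u 😊. How about you?
print(resolve_response(responses, "Unmatched message"))
# >> Unmatched message
```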
