
A tiny library for managing calls to LLMs across different services (Paris-Saclay Aristote included).

Project description

Unified Model Caller

A lightweight library that provides a single, unified interface for calling LLMs from different providers. Instead of learning each provider's SDK separately, you instantiate one LLMCaller and swap the service name.

Supported services

Service name         Provider
openai               OpenAI (GPT models)
anthropic            Anthropic (Claude models)
google               Google (Gemini models)
xai                  xAI (Grok models)
ilaas                Ilaas
aristoteonmydocker   Aristote on MyDocker

Installation

Via pip:

pip install unified-model-caller

Via uv:

uv add unified-model-caller

Usage

from unified_model_caller import LLMCaller

caller = LLMCaller("google", "gemini-2.0-flash", api_key="<your-api-key>")
response = caller.call("What is a matrix?")
print(response)

The constructor signature is:

LLMCaller(service: str, model: str, api_key: str = "")
  • service — case-insensitive service name (see table above)
  • model — model identifier string passed directly to the provider
  • api_key — API key; can be omitted for services that don't require one

Rate limiting

Call wait_cooldown() between requests to respect each service's built-in cooldown:

caller.wait_cooldown()
response = caller.call("Next prompt")
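Under the hood this is time-based throttling. The class below is a self-contained, illustrative sketch of the logic wait_cooldown() plausibly implements; the name CooldownTimer and its structure are assumptions, not the library's internals:

```python
import time

class CooldownTimer:
    """Illustrative throttle: block until the per-service cooldown has elapsed."""

    def __init__(self, cooldown_ms: int):
        self.cooldown_s = cooldown_ms / 1000.0
        self.last_call = None  # monotonic timestamp of the previous call

    def wait(self) -> None:
        # Sleep for whatever remains of the cooldown since the previous call,
        # then record the current time as the new reference point.
        if self.last_call is not None:
            remaining = self.cooldown_s - (time.monotonic() - self.last_call)
            if remaining > 0:
                time.sleep(remaining)
        self.last_call = time.monotonic()
```

Using time.monotonic() rather than time.time() keeps the throttle correct even if the system clock is adjusted between calls.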

Listing available services

LLMCaller.get_services()
# ['openai', 'anthropic', 'google', 'xai', 'ilaas', 'aristoteonmydocker']

Adding an external service

You can register a new service at runtime from any Python file — no changes to the library are needed.

1. Create a service file

The file must define a class that inherits from BaseService and implements four methods:

# my_service.py
import requests

from unified_model_caller import BaseService

class MyService(BaseService):
    def get_name(self) -> str:
        """Unique lowercase name used to identify this service."""
        return "myservice"

    def requires_token(self) -> bool:
        """Return True if the service needs an API key."""
        return True

    def service_cooldown(self) -> int:
        """Minimum delay between calls, in milliseconds."""
        return 1000

    def call(self, model: str, prompt: str) -> str:
        """Send the prompt to the model and return the response text."""
        response = requests.post(
            "https://api.myservice.example/v1/completions",
            json={"model": model, "prompt": prompt},
            headers={"Authorization": f"Bearer {self.api_key}"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["text"]

The api_key passed to LLMCaller(...) is available as self.api_key inside your class.

2. Register and use it

from unified_model_caller import LLMCaller

LLMCaller.add_service("/path/to/my_service.py")

caller = LLMCaller("myservice", "my-model-name", api_key="<your-api-key>")
response = caller.call("Hello!")
print(response)

add_service loads the file, finds the BaseService subclass inside it, and registers it globally under the name returned by get_name(). The service is then available to all subsequent LLMCaller instances in the same process.
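That loading step can be pictured as an ordinary dynamic import. The helper below is a hypothetical sketch (load_service_classes is not part of the library, and the "has a get_name method" check is a loose stand-in for the real BaseService subclass check) of how a Python file can be imported at runtime and scanned for service classes:

```python
import importlib.util
import inspect
from pathlib import Path

def load_service_classes(path: str) -> list:
    """Import a Python file at runtime and return the classes it defines
    that expose a get_name method (stand-in for the BaseService check)."""
    spec = importlib.util.spec_from_file_location(Path(path).stem, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return [
        cls
        for _, cls in inspect.getmembers(module, inspect.isclass)
        # Keep only classes defined in this file, not ones it imported.
        if cls.__module__ == module.__name__ and hasattr(cls, "get_name")
    ]
```

Filtering on cls.__module__ avoids accidentally registering classes the service file merely imports.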

BaseService contract

Method                     Return type   Description
get_name(self)             str           Unique service identifier (lowercase). Used as the service argument to LLMCaller.
requires_token(self)       bool          Whether the service needs an API key.
service_cooldown(self)     int           Cooldown between calls, in milliseconds.
call(self, model, prompt)  str           Perform the API call and return the response text.
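To make the contract concrete, here is a minimal conforming class. BaseService below is a local stand-in so the snippet runs without the library installed (its __init__ signature is an assumption); a real service would inherit from unified_model_caller.BaseService instead:

```python
class BaseService:
    """Local stand-in for unified_model_caller.BaseService (illustration only;
    the api_key-taking constructor is an assumption about the real class)."""
    def __init__(self, api_key: str = ""):
        self.api_key = api_key

class EchoService(BaseService):
    """Toy service satisfying the four-method contract without any network I/O."""
    def get_name(self) -> str:
        return "echo"

    def requires_token(self) -> bool:
        return False  # no API key needed

    def service_cooldown(self) -> int:
        return 0  # no delay between calls

    def call(self, model: str, prompt: str) -> str:
        # Echo the inputs back instead of hitting a real API.
        return f"[{model}] {prompt}"
```

An echo service like this is also handy as a test double when exercising code that goes through LLMCaller.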

Project details


Download files

Download the file for your platform.

Source Distribution

unified_model_caller-0.2.2.tar.gz (6.5 kB)

Uploaded Source

Built Distribution


unified_model_caller-0.2.2-py3-none-any.whl (8.8 kB)

Uploaded Python 3

File details

Details for the file unified_model_caller-0.2.2.tar.gz.

File metadata

  • Download URL: unified_model_caller-0.2.2.tar.gz
  • Upload date:
  • Size: 6.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for unified_model_caller-0.2.2.tar.gz
Algorithm Hash digest
SHA256 2cbb9380d32982856412da1270cd7b05806108243e015c891d91229e2a92bcb2
MD5 7e9d0d8967c40ac7b4d2ed4670c3d011
BLAKE2b-256 4d0e9fc9ed4b4d9d82d48a0cf3dd54f66860ec0ef84084476a66b6ea884d9da5


Provenance

The following attestation bundles were made for unified_model_caller-0.2.2.tar.gz:

Publisher: release.yml on DobbiKov/unified-model-caller

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file unified_model_caller-0.2.2-py3-none-any.whl.

File metadata

File hashes

Hashes for unified_model_caller-0.2.2-py3-none-any.whl
Algorithm Hash digest
SHA256 51c293e44b6d5798007d01c4ee1c4ab866932af6887c7fa92de6c768133ed488
MD5 179a84ff0a7298c8b493239bca7ff47a
BLAKE2b-256 41965775e59a4deb4d068ef06145c24a8cf485a5128a956d451641a19a193722


Provenance

The following attestation bundles were made for unified_model_caller-0.2.2-py3-none-any.whl:

Publisher: release.yml on DobbiKov/unified-model-caller

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
