
A tiny library for managing calls to the LLMs of different services (Paris-Saclay Aristote included).

Project description

Unified Model Caller

A lightweight library that provides a single unified interface for calling LLMs from different providers. Instead of learning each provider's SDK separately, you instantiate one LLMCaller and swap out the service name.

Supported services

Service name         Provider
openai               OpenAI (GPT models)
anthropic            Anthropic (Claude models)
google               Google (Gemini models)
xai                  xAI (Grok models)
ilaas                Ilaas
aristoteonmydocker   Aristote on MyDocker

Installation

Via pip:

pip install unified-model-caller

Via uv:

uv add unified-model-caller

Usage

from unified_model_caller import LLMCaller

caller = LLMCaller("google", "gemini-2.0-flash", api_key="<your-api-key>")
response = caller.call("What is a matrix?")
print(response)

The constructor signature is:

LLMCaller(service: str, model: str, api_key: str = "")
  • service — case-insensitive service name (see table above)
  • model — model identifier string passed directly to the provider
  • api_key — API key; can be omitted for services that don't require one
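Since the service name is case-insensitive, "Google", "GOOGLE", and "google" all select the same backend. As an illustration only, a lookup like that can be sketched with a normalized registry (the registry contents and function name here are hypothetical, not the library's internals):

```python
# Hypothetical registry mapping normalized service names to backends.
_REGISTRY = {
    "openai": "OpenAIService",
    "anthropic": "AnthropicService",
}

def resolve_service(service: str) -> str:
    """Look up a service by name, ignoring case."""
    name = service.lower()  # normalize before the lookup
    if name not in _REGISTRY:
        raise ValueError(f"unknown service: {service!r}")
    return _REGISTRY[name]
```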

Rate limiting

Call wait_cooldown() between requests to respect each service's built-in cooldown:

caller.wait_cooldown()
response = caller.call("Next prompt")
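Conceptually, such a cooldown only needs to remember when the previous call happened and sleep for the remainder of the window. The following is a minimal sketch of that idea under the millisecond convention used by service_cooldown() below; it is not the library's actual implementation:

```python
import time

class CooldownGate:
    """Sketch of a per-service cooldown gate (illustration only)."""

    def __init__(self, cooldown_ms: int):
        self.cooldown_s = cooldown_ms / 1000.0  # store the window in seconds
        self._last_call = None  # monotonic timestamp of the previous call

    def wait_cooldown(self) -> None:
        """Sleep just long enough so calls are at least cooldown_ms apart."""
        if self._last_call is not None:
            remaining = self.cooldown_s - (time.monotonic() - self._last_call)
            if remaining > 0:
                time.sleep(remaining)
        self._last_call = time.monotonic()
```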

Listing available services

LLMCaller.get_services()
# ['openai', 'anthropic', 'google', 'xai', 'ilaas', 'aristoteonmydocker']

Adding an external service

You can register a new service at runtime from any Python file — no changes to the library are needed.

1. Create a service file

The file must define a class that inherits from BaseService and implements four methods:

# my_service.py
import requests

from unified_model_caller import BaseService

class MyService(BaseService):
    def get_name(self) -> str:
        """Unique lowercase name used to identify this service."""
        return "myservice"

    def requires_token(self) -> bool:
        """Return True if the service needs an API key."""
        return True

    def service_cooldown(self) -> int:
        """Minimum delay between calls, in milliseconds."""
        return 1000

    def call(self, model: str, prompt: str) -> str:
        """Send the prompt to the model and return the response text."""
        response = requests.post(
            "https://api.myservice.example/v1/completions",
            json={"model": model, "prompt": prompt},
            headers={"Authorization": f"Bearer {self.api_key}"},
            timeout=30,
        )
        response.raise_for_status()  # surface HTTP errors early
        return response.json()["text"]

The api_key passed to LLMCaller(...) is available as self.api_key inside your class.

2. Register and use it

from unified_model_caller import LLMCaller

LLMCaller.add_service("/path/to/my_service.py")

caller = LLMCaller("myservice", "my-model-name", api_key="<your-api-key>")
response = caller.call("Hello!")
print(response)

add_service loads the file, finds the BaseService subclass inside it, and registers it globally under the name returned by get_name(). The service is then available to all subsequent LLMCaller instances in the same process.

BaseService contract

Method                      Return type   Description
get_name(self)              str           Unique service identifier (lowercase); used as the service argument to LLMCaller.
requires_token(self)        bool          Whether the service needs an API key.
service_cooldown(self)      int           Cooldown between calls, in milliseconds.
call(self, model, prompt)   str           Perform the API call and return the response text.
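To make the contract concrete, here is a toy offline service that needs no API key and no cooldown. The abstract base class below is a local stand-in mirroring the table above (the real class is unified_model_caller.BaseService, and its constructor signature is an assumption here; the documented guarantee is only that self.api_key is available):

```python
from abc import ABC, abstractmethod

class BaseService(ABC):
    """Local stand-in for unified_model_caller.BaseService (illustration only)."""

    def __init__(self, api_key: str = ""):
        self.api_key = api_key

    @abstractmethod
    def get_name(self) -> str: ...

    @abstractmethod
    def requires_token(self) -> bool: ...

    @abstractmethod
    def service_cooldown(self) -> int: ...

    @abstractmethod
    def call(self, model: str, prompt: str) -> str: ...

class EchoService(BaseService):
    """Toy service: no token, no cooldown, echoes the prompt back."""

    def get_name(self) -> str:
        return "echo"

    def requires_token(self) -> bool:
        return False  # no API key needed

    def service_cooldown(self) -> int:
        return 0  # no delay between calls

    def call(self, model: str, prompt: str) -> str:
        return f"[{model}] {prompt}"
```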

Project details


Download files

Source Distribution

unified_model_caller-0.2.1.tar.gz (6.5 kB, source)

Built Distribution

unified_model_caller-0.2.1-py3-none-any.whl (8.8 kB, Python 3 wheel)

File details

Details for the file unified_model_caller-0.2.1.tar.gz.

File metadata

  • Download URL: unified_model_caller-0.2.1.tar.gz
  • Upload date:
  • Size: 6.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for unified_model_caller-0.2.1.tar.gz
Algorithm Hash digest
SHA256 51a76bbb92427e17f631b060b46dd3a681281e1e302209be4c52e86d72910894
MD5 45dd8965515c7cec12f46075cab8c842
BLAKE2b-256 110f5d54be0468ad8807eee15df4975d965c67be9def996bc58ff3ba438ea75c


Provenance

The following attestation bundles were made for unified_model_caller-0.2.1.tar.gz:

Publisher: release.yml on DobbiKov/unified-model-caller

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file unified_model_caller-0.2.1-py3-none-any.whl.

File metadata

File hashes

Hashes for unified_model_caller-0.2.1-py3-none-any.whl
Algorithm Hash digest
SHA256 31e6e59812289f3715f8768551b477410c258a4dd40b89defddc69aac6ef653c
MD5 5100b3794c5bc0310a7a884addca5f48
BLAKE2b-256 214cee5858229f1424a2fd6300b6a93536602dccb625a6a5585df216860ec438


Provenance

The following attestation bundles were made for unified_model_caller-0.2.1-py3-none-any.whl:

Publisher: release.yml on DobbiKov/unified-model-caller

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
