
Universal interface for different LLM providers


univllm


A universal Python package that provides a standardised interface for different LLM providers including OpenAI, Anthropic, Deepseek, and Mistral.

Features

  • Universal Interface: Single API to interact with multiple LLM providers
  • Auto-Detection: Automatically detect the appropriate provider based on model name
  • Streaming Support: Stream completions from all supported providers
  • Model Capabilities: Query model capabilities like context window, function calling support, etc.
  • Error Handling: Comprehensive error handling with provider-specific exceptions
  • Async Support: Fully asynchronous API for better performance

Supported Providers

  • OpenAI: GPT-4o & GPT-5 family models
  • Anthropic: Claude 3.x / 4.x family models
  • Deepseek: Deepseek Chat, Deepseek Coder
  • Mistral: Mistral, Magistral & Codestral models

Supported Model Prefixes

The library validates models using simple prefix matching (see SUPPORTED_MODELS lists). Any model string that begins with one of these prefixes will be accepted. Provider-specific suffixes or date/version tags (e.g. -20240229, -latest, -0125, minor patch tags) are allowed but not individually validated.

  • OpenAI: gpt-5, gpt-oss-120b, gpt-oss-20b, gpt-vision-1, gpt-4o. Any extended suffix (e.g. gpt-4o-mini-2024-xx) will pass if it starts with a listed prefix.
  • Anthropic: claude-3-7-sonnet-, claude-4-opus-, claude-4-sonnet-, claude-opus-4.1, claude-code. Older variants (e.g. dated claude-3-* forms) can be added by extending the list in supported_models.py.
  • Deepseek: deepseek-chat, deepseek-coder. The former is chat-optimised, the latter code-optimised.
  • Mistral: mistral-small-, mistral-medium-, magistral-small-, magistral-medium-, codestral-, mistral-ocr- (e.g. mistral-small-latest).

Note: If you need additional model prefixes, you can locally extend the corresponding SUPPORTED_MODELS list in univllm/providers/supported_models.py or contribute a PR.

Installation

pip install univllm

Quick Start

import asyncio
from univllm import UniversalLLMClient


async def main():
    client = UniversalLLMClient()

    # Auto-detects provider based on model name
    response = await client.complete(
        messages=["What is the capital of France?"],
        model="gpt-4o"
    )

    print(response.content)


asyncio.run(main())

Configuration

Set your API keys as environment variables:

export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export DEEPSEEK_API_KEY="your-deepseek-key"
export MISTRAL_API_KEY="your-mistral-key"

Or pass them directly:

from univllm import UniversalLLMClient, ProviderType

client = UniversalLLMClient(
    provider=ProviderType.OPENAI,
    api_key="your-api-key"
)

Usage Examples

Basic Completion

import asyncio
from univllm import UniversalLLMClient


async def main():
    client = UniversalLLMClient()

    response = await client.complete(
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Explain quantum computing briefly."}
        ],
        model="gpt-4o",
        max_tokens=150,
        temperature=0.7
    )

    print(f"Response: {response.content}")
    print(f"Provider: {response.provider}")
    print(f"Model: {response.model}")
    print(f"Usage: {response.usage}")


asyncio.run(main())

Streaming Completion

import asyncio
from univllm import UniversalLLMClient


async def main():
    client = UniversalLLMClient()

    async for chunk in client.stream_complete(
            messages=["Tell me a short story about a robot."],
            model="gpt-4o",
            max_tokens=200
    ):
        print(chunk, end="", flush=True)


asyncio.run(main())

Model Capabilities

import asyncio
from univllm import UniversalLLMClient


async def main():
    client = UniversalLLMClient()

    # Get capabilities for a specific model
    capabilities = client.get_model_capabilities("gpt-4o")

    print(f"Supports function calling: {capabilities.supports_function_calling}")
    print(f"Context window: {capabilities.context_window}")
    print(f"Max tokens: {capabilities.max_tokens}")

    # Get all supported models
    all_models = client.get_supported_models()
    for provider, models in all_models.items():
        print(f"{provider}: {len(models)} models")


asyncio.run(main())

Multiple Providers

import asyncio
from univllm import UniversalLLMClient
from univllm.models import ProviderType


async def main():
    client = UniversalLLMClient()

    question = "What is machine learning?"

    # OpenAI
    openai_response = await client.complete(
        messages=[question],
        model="gpt-4o"
    )

    # Anthropic  
    anthropic_response = await client.complete(
        messages=[question],
        model="claude-4-sonnet"
    )

    print(f"OpenAI: {openai_response.content[:100]}...")
    print(f"Anthropic: {anthropic_response.content[:100]}...")


asyncio.run(main())

API Reference

UniversalLLMClient

Main client class for interacting with LLM providers.

Methods

  • complete(): Generate a completion
  • stream_complete(): Generate a streaming completion
  • get_model_capabilities(): Get model capabilities
  • get_supported_models(): Get supported models for all providers
  • set_provider(): Set or change the provider

Models

  • CompletionRequest: Request object for completions
  • CompletionResponse: Response object from completions
  • ModelCapabilities: Information about model capabilities
  • Message: Individual message in a conversation
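The usage examples above read response.content, response.provider, response.model, and response.usage, and pass messages with role and content keys. A minimal sketch of what these objects might look like, assuming plain dataclasses (the package's actual field types and defaults may differ):

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class Message:
    role: str      # "system", "user", or "assistant"
    content: str


@dataclass
class CompletionResponse:
    content: str                # generated text
    provider: str               # e.g. "openai"
    model: str                  # e.g. "gpt-4o"
    usage: dict[str, Any] = field(default_factory=dict)  # token counts, etc.
```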

Providers

  • ProviderType: Enum of supported providers
  • BaseLLMProvider: Base class for provider implementations

Exceptions

  • UniversalLLMError: Base exception
  • ProviderError: Provider-related errors
  • ModelNotSupportedError: Unsupported model errors
  • AuthenticationError: Authentication failures
  • ConfigurationError: Configuration issues
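Because every exception listed above derives from UniversalLLMError, callers can catch specific failures first and fall back to the base class. A sketch of the hierarchy and the catch pattern (the real classes may carry extra attributes):

```python
# Illustrative exception hierarchy: all errors share one base class,
# so a single `except UniversalLLMError` catches anything the library raises.
class UniversalLLMError(Exception):
    """Base exception."""


class ProviderError(UniversalLLMError):
    """Provider-related errors."""


class ModelNotSupportedError(UniversalLLMError):
    """Unsupported model errors."""


class AuthenticationError(UniversalLLMError):
    """Authentication failures."""


class ConfigurationError(UniversalLLMError):
    """Configuration issues."""


def classify(exc_type: type[Exception], model: str) -> str:
    """Raise the given error and report which handler caught it."""
    try:
        raise exc_type(f"problem with {model!r}")
    except ModelNotSupportedError:
        return "unsupported model"
    except UniversalLLMError:
        return "other library error"
```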

Development

git clone https://github.com/nihilok/univllm.git
cd univllm
pip install -e ".[dev]"

Run tests:

pytest

Licence

MIT Licence



Download files

Download the file for your platform.

Source Distribution

univllm-0.1.1.tar.gz (13.7 kB)

Uploaded Source

Built Distribution


univllm-0.1.1-py3-none-any.whl (16.9 kB)

Uploaded Python 3

File details

Details for the file univllm-0.1.1.tar.gz.

File metadata

  • Download URL: univllm-0.1.1.tar.gz
  • Size: 13.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.7

File hashes

Hashes for univllm-0.1.1.tar.gz
  • SHA256: 20520fbb7389a614ecff8f42a0a7ada36485d3745dc0e6a73bf2d02af1ff44fe
  • MD5: 0f4106cd2465d3d245447bbed9c5444f
  • BLAKE2b-256: b245ed65c5bce6a7737df9beb08f501dd9928274c5e5ada8f9827ced1435189c

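To check a downloaded distribution against the published digests before installing, the file can be streamed through SHA-256 with the standard library; the helper name here is illustrative:

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 16) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Usage: compare against the SHA256 value published above, e.g.
# sha256_of("univllm-0.1.1.tar.gz") should equal the listed digest.
```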

File details

Details for the file univllm-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: univllm-0.1.1-py3-none-any.whl
  • Size: 16.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.7

File hashes

Hashes for univllm-0.1.1-py3-none-any.whl
  • SHA256: 8490f3956620fe09b31e49d3a887106f3d839790393be054607ef4d0744da8e0
  • MD5: f394bcb36b1dc88ee59aa22bc5b0139e
  • BLAKE2b-256: 11ecdfdb7b7124ea092b00a4f90fa13aaa87a005fa1dea1a0e4f1fc2cb3c0e9e

