
JustLLMs

A production-ready Python library for multi-provider LLM management with a unified API.


Why JustLLMs?

Managing multiple LLM providers is complex. You need to handle different APIs, manage authentication, and ensure reliability. JustLLMs solves these challenges by providing a unified interface across all major providers with automatic fallbacks and consistent error handling.

Installation

pip install justllms

Package size: ~113 KB | Lines of code: ~4.3K | Minimal, production-focused dependencies

Quick Start

from justllms import JustLLM

# Initialize with your API keys
client = JustLLM({
    "providers": {
        "openai": {"api_key": "your-openai-key"},
        "google": {"api_key": "your-google-key"},
        "anthropic": {"api_key": "your-anthropic-key"}
    }
})

# Simple completion - uses configured fallback or first available provider
response = client.completion.create(
    messages=[{"role": "user", "content": "Explain quantum computing briefly"}]
)
print(response.content)

Core Features

Multi-Provider Support

Connect to all major LLM providers with a single, consistent interface:

  • OpenAI (GPT-5, GPT-4, etc.)
  • Google (Gemini 2.5, Gemini 1.5 models)
  • Anthropic (Claude 4, Claude 3.5 models)
  • Azure OpenAI (with deployment mapping)
  • xAI Grok, DeepSeek
  • Ollama (local Llama/Mistral/Phi models hosted on your machine)

# Switch between providers seamlessly
client = JustLLM({
    "providers": {
        "openai": {"api_key": "your-key"},
        "google": {"api_key": "your-key"},
        "anthropic": {"api_key": "your-key"},
        "ollama": {"base_url": "http://localhost:11434"}
    }
})

# Explicitly specify provider and model
response1 = client.completion.create(
    messages=[{"role": "user", "content": "Explain AI"}],
    model="openai/gpt-4o"  # Format: "provider/model"
)

Ollama runs locally and requires no API key. Set OLLAMA_API_BASE (defaults to http://localhost:11434) and JustLLMs automatically discovers every installed model via the Ollama /api/tags endpoint.

Automatic Fallbacks

Configure fallback providers and models for reliability:

client = JustLLM({
    "providers": {
        "openai": {"api_key": "your-key"},
        "anthropic": {"api_key": "your-key"}
    },
    "routing": {
        "fallback_provider": "anthropic",
        "fallback_model": "claude-3-5-sonnet-20241022"
    }
})

# If no model specified, uses fallback
response = client.completion.create(
    messages=[{"role": "user", "content": "Hello"}]
)
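Conceptually, fallback routing is try-the-primary-then-the-next. A minimal sketch of that pattern; the provider list and call signature here are illustrative, not JustLLMs' internal representation:

```python
def complete_with_fallback(providers, messages):
    """Try each (label, call) pair in order; return the first success.

    `providers` is an ordered list of (label, callable) pairs.
    """
    errors = []
    for label, call in providers:
        try:
            return call(messages)
        except Exception as exc:  # any provider error triggers the next fallback
            errors.append((label, exc))
    raise RuntimeError(f"All providers failed: {errors}")
```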

Side-by-Side Model Comparison

Compare multiple LLM providers and models simultaneously with our interactive SXS (Side-by-Side) comparison tool. Perfect for evaluating model performance, testing prompts, and making informed decisions about which models to use.

Features

  • Interactive CLI: Select providers and models using checkbox interface
  • Parallel Execution: All models run simultaneously for fair comparison
  • Real-time Results: Live display with loading animation until all models complete
  • Comprehensive Metrics: Compare latency, token usage, response quality and costs across models
  • Multiple Providers: Test OpenAI, Google, Anthropic, xAI, DeepSeek models side-by-side

Usage

# Run the interactive SXS comparison
justllms sxs

The tool will guide you through:

  1. Provider Selection: Choose which LLM providers to compare
  2. Model Selection: Pick specific models from each provider
  3. Prompt Input: Enter your test prompt
  4. Real-time Comparison: View all responses and metrics simultaneously
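The "parallel execution" step above can be sketched with a thread pool, so every model starts at the same moment and latency is measured per model. The `call_model` function and return shape are placeholders for illustration, not the SXS tool's internals:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def compare_side_by_side(call_model, model_ids, prompt):
    """Run the same prompt against every model concurrently.

    Returns {model_id: (response, latency_seconds)}.
    """
    def timed(model_id):
        start = time.perf_counter()
        response = call_model(model_id, prompt)
        return model_id, (response, time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=len(model_ids)) as pool:
        return dict(pool.map(timed, model_ids))
```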

Example Output

================================================================================
Prompt: Which programming language is better for beginners: Python or JavaScript?
================================================================================

┌─ openai/gpt-5          ─────────────────────────────────────────────────────┐
│ Python is generally better for beginners due to its clean, readable syntax │
│ that resembles natural language. It has fewer confusing concepts like       │
│ hoisting or prototypes, excellent learning resources, and is widely used    │
│ in education. Python's "batteries included" philosophy means beginners can  │
│ accomplish tasks without learning complex setups, making it ideal for       │
│ building confidence early in programming.                                   │
└─────────────────────────────────────────────────────────────────────────────┘

┌─ google/gemini-2.5-pro ─────────────────────────────────────────────────────┐
│ JavaScript has advantages for beginners because it runs everywhere - in     │
│ browsers, servers, and mobile apps. You can see immediate visual results    │
│ when building web pages, which is motivating. The job market heavily favors │
│ JavaScript developers, and modern frameworks make it powerful. While syntax │
│ can be tricky, the instant feedback and versatility make JavaScript a       │
│ practical first language for aspiring developers.                           │
└─────────────────────────────────────────────────────────────────────────────┘

================================================================================
Metrics Summary:

| Model                   |  Status   | Latency (s) | Tokens | Cost ($) |
|-------------------------|-----------|-------------|--------|----------|
| openai/gpt-5            | ✓ Success |        5.69 |    715 |   0.0000 |
| google/gemini-2.5-pro   | ✓ Success |        8.50 |    868 |   0.0003 |

🏆 Comparison with Alternatives

| Feature                 | JustLLMs               | LangChain            | LiteLLM              | OpenAI SDK        |
|-------------------------|------------------------|----------------------|----------------------|-------------------|
| Package Size            | Minimal                | ~50 MB               | ~5 MB                | ~1 MB             |
| Setup Complexity        | Simple config          | Complex chains       | Medium               | Simple            |
| Multi-Provider          | ✅ 7+ providers        | ✅ Many integrations | ✅ 100+ providers    | ❌ OpenAI only    |
| Unified API             | ✅ Single interface    | ⚠️ Different patterns | ⚠️ Provider-specific | ❌ OpenAI only    |
| Side-by-Side Comparison | ✅ Interactive CLI tool | ❌ None             | ❌ None              | ❌ None           |
| Automatic Fallbacks     | ✅ Built-in            | ❌ Manual            | ⚠️ Basic             | ❌ None           |
| Production Ready        | ✅ Out of the box      | ⚠️ Requires setup    | ✅ Minimal setup     | ⚠️ Basic features |

Provider-Specific Parameters

JustLLMs supports common generation parameters across all providers, plus provider-specific configurations:

Common Parameters (All Providers)

These parameters work across OpenAI, Gemini, Anthropic, and other providers:

response = client.completion.create(
    messages=[{"role": "user", "content": "Hello"}],
    # Common parameters
    temperature=0.7,        # 0.0-2.0: Controls randomness
    top_p=0.9,             # 0.0-1.0: Nucleus sampling
    top_k=40,              # Integer: Top-k sampling (Gemini only)
    max_tokens=1024,       # Maximum tokens to generate
    stop=["END"],          # Stop sequence(s)
    n=1,                   # Number of completions (OpenAI only)
    presence_penalty=0.1,  # -2.0 to 2.0: Penalize new topics
    frequency_penalty=0.2  # -2.0 to 2.0: Penalize repetition
)

Gemini-Specific Parameters

Use generation_config for Gemini-only features:

response = client.completion.create(
    messages=[{"role": "user", "content": "Explain quantum computing"}],
    provider="google",
    model="gemini-2.5-flash",
    # Common parameters
    temperature=0.7,
    top_k=40,
    max_tokens=1024,
    # Gemini-specific configuration
    generation_config={
        "candidateCount": 2,                    # Generate multiple responses
        "responseMimeType": "application/json", # JSON output
        "responseSchema": {...},                # Structured output schema
        "thinkingConfig": {                     # Control thinking budget
            "thinkingBudget": 100               # 0-24000 tokens
        }
    }
)

# Access multiple candidates when candidateCount > 1
print(f"Candidate 1: {response.choices[0].message.content}")
print(f"Candidate 2: {response.choices[1].message.content}")

Notes:

  • Common parameters (temperature, top_k, etc.) should be set at the top level. The generation_config dict is for Gemini-exclusive features.
  • If a parameter is specified in both places, the top-level value takes precedence.
  • When candidateCount > 1, all candidates are returned in response.choices[] with proper indices.
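The precedence rule in the notes above can be expressed as a simple dictionary merge, where top-level values win over `generation_config` on conflict. A sketch of the rule, not JustLLMs' actual merge code:

```python
def merge_generation_params(top_level: dict, generation_config: dict) -> dict:
    """Merge generation settings: top-level keys override generation_config
    on conflict, per the documented precedence."""
    merged = dict(generation_config)
    merged.update(top_level)  # top-level wins
    return merged
```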

OpenAI-Specific Parameters

OpenAI parameters are passed directly:

response = client.completion.create(
    messages=[{"role": "user", "content": "Hello"}],
    provider="openai",
    model="gpt-4o",
    # Common parameters
    temperature=0.7,
    max_tokens=100,
    n=1,
    presence_penalty=0.1,
    frequency_penalty=0.2
)

Note: top_k is not supported by OpenAI and will be silently ignored. Use generation_config only with Gemini.
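Silently dropping unsupported parameters per provider can be sketched as a whitelist filter. The supported-parameter sets below are illustrative, not the library's actual tables:

```python
# Illustrative per-provider parameter whitelists -- not JustLLMs internals.
SUPPORTED = {
    "openai": {"temperature", "top_p", "max_tokens", "stop", "n",
               "presence_penalty", "frequency_penalty"},
    "google": {"temperature", "top_p", "top_k", "max_tokens", "stop"},
}

def filter_params(provider: str, params: dict) -> dict:
    """Silently drop parameters the target provider does not accept."""
    allowed = SUPPORTED.get(provider, set())
    return {k: v for k, v in params.items() if k in allowed}
```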

Production Configuration

For production deployments:

production_config = {
    "providers": {
        "azure_openai": {
            "api_key": os.getenv("AZURE_OPENAI_API_KEY"),
            "endpoint": os.getenv("AZURE_OPENAI_ENDPOINT"),
            "resource_name": "my-enterprise-resource",
            "deployment_mapping": {
                "gpt-4": "my-gpt4-deployment",
                "gpt-3.5-turbo": "my-gpt35-deployment"
            }
        },
        "anthropic": {"api_key": os.getenv("ANTHROPIC_KEY")},
        "google": {"api_key": os.getenv("GOOGLE_KEY")},
        "ollama": {
            "base_url": os.getenv("OLLAMA_API_BASE", "http://localhost:11434"),
            "enabled": True,
        }
    },
    "routing": {
        "fallback_provider": "azure_openai",
        "fallback_model": "gpt-3.5-turbo"
    }
}

client = JustLLM(production_config)
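Since the config above reads keys from the environment, it's worth failing fast on missing variables before constructing the client. A small illustrative helper, not a JustLLMs feature:

```python
import os

def require_env(*names: str) -> dict:
    """Return the named environment variables, raising if any is unset."""
    missing = [n for n in names if not os.getenv(n)]
    if missing:
        raise EnvironmentError(f"Missing required env vars: {missing}")
    return {n: os.environ[n] for n in names}
```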
