
Simplified United LLM


A streamlined, lightweight LLM client library that provides unified access to OpenRouter and Ollama providers with structured output generation, multi-image vision processing, and schema prompt enhancement.

✨ Key Features

  • 🤖 Three Generation Methods: gen_text(), gen_dict(), gen_pydantic()
  • 🔌 Dual Provider Support: OpenRouter (cloud) + Ollama (local)
  • 👁️ Multi-Image Vision: Process multiple images simultaneously
  • 🎯 Schema Enhancement: Optional prompt enhancement for better compliance
  • 🔗 String-Schema Integration: Automatic validation and conversion
  • 📝 Comprehensive Logging: Daily organized logs with detailed tracking
  • ⚙️ Simple Configuration: JSON-based configuration
  • 🔍 Provider Auto-Detection: Automatic provider selection from model prefixes
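
Provider auto-detection presumably splits the model string on its first colon, since model names like `qwen3:8b` contain colons of their own. A minimal sketch (the helper name `split_model` is illustrative, not part of the library's API):

```python
from typing import Tuple

def split_model(model: str) -> Tuple[str, str]:
    """Split a prefixed model string into (provider, model_name).

    Only the FIRST colon separates the provider prefix; everything
    after it belongs to the model name.
    """
    provider, _, name = model.partition(":")
    if provider not in ("openrouter", "ollama"):
        raise ValueError(f"Unknown provider prefix: {provider!r}")
    return provider, name

print(split_model("ollama:qwen3:8b"))  # ('ollama', 'qwen3:8b')
print(split_model("openrouter:google/gemini-2.5-flash-lite"))
```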

📦 Installation

pip install simplified-united-llm

Requirements

  • Python: 3.8+
  • Dependencies: instructor>=1.3.0, pydantic>=2.0.0, string-schema>=0.1.0, openai>=1.0.0

🚀 Quick Start

1. Configuration Setup

Copy the example configuration and add your API keys:

cp united_llm.json.example united_llm.json

Edit united_llm.json:

{
  "api_keys": {
    "openrouter": "your-openrouter-api-key-here",
    "ollama": null
  },
  "base_urls": {
    "openrouter": "https://openrouter.ai/api/v1",
    "ollama": "http://localhost:11434/v1"
  },
  "log_dir": "logs/llm_calls"
}

2. Basic Usage

from united_llm import LLMClient
from pydantic import BaseModel
from typing import List

# Initialize client (auto-loads united_llm.json)
client = LLMClient()

# 1. Text Generation
text = client.gen_text(
    model="ollama:qwen3:8b",
    prompt="Explain quantum computing in simple terms."
)

# 2. Dictionary Generation with Schema Enhancement
data = client.gen_dict(
    model="ollama:qwen3:8b",
    prompt="Extract: John Smith, 30, Engineer in Boston",
    schema="{name: str, age: int, job: str, city: str}",
    add_schema_to_prompt=True  # 🎯 Enhanced compliance!
)
# Output: {"name": "John Smith", "age": 30, "job": "Engineer", "city": "Boston"}

# 3. Pydantic Model Generation
class Person(BaseModel):
    name: str
    age: int
    occupation: str
    city: str

person = client.gen_pydantic(
    model="ollama:qwen3:8b",
    prompt="Extract: Alice Johnson, 25, Designer in Seattle",
    response_model=Person,
    add_schema_to_prompt=True  # 🎯 Enhanced compliance!
)
# Output: Person(name="Alice Johnson", age=25, occupation="Designer", city="Seattle")

3. Multi-Image Vision Processing

from united_llm.utils.image_input import ImageInput

# Load multiple images
images = [
    ImageInput("image1.jpg", name="first"),
    ImageInput("image2.png", name="second")
]

# Analyze multiple images with structured output
class ImageAnalysis(BaseModel):
    description: str
    text_content: str
    objects_detected: List[str]
    image_type: str

result = client.gen_pydantic(
    model="openrouter:google/gemini-2.5-flash-lite",
    prompt="Analyze these images and extract key information",
    response_model=ImageAnalysis,
    images=images,
    add_schema_to_prompt=True
)

🎯 Core Methods

gen_text(model, prompt, images=None)

Generate plain text responses.

gen_dict(model, prompt, schema, images=None, add_schema_to_prompt=False)

Generate structured dictionaries with schema validation.

gen_pydantic(model, prompt, response_model, images=None, add_schema_to_prompt=False)

Generate validated Pydantic model instances.

📋 Schema Syntax

Uses string-schema format:

# Basic types
"{name: str, age: int, active: bool, score: float}"

# Arrays
"{tags: [str], scores: [float]}"

# Nested objects
"{user: {name: str, email: str}, posts: [{title: str, content: str}]}"
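
To make the flat form of the syntax concrete, here is a toy parser that maps a basic-types schema string to Python types. This is an illustration only; the actual parsing and validation are handled by the string-schema package, and this sketch does not cover arrays or nesting:

```python
TYPE_NAMES = {"str": str, "int": int, "bool": bool, "float": float}

def parse_flat_schema(schema: str) -> dict:
    """Parse a flat '{name: str, age: int}' schema into a field->type map."""
    inner = schema.strip().strip("{}")
    fields = {}
    for part in inner.split(","):
        name, type_name = (s.strip() for s in part.split(":"))
        fields[name] = TYPE_NAMES[type_name]
    return fields

parse_flat_schema("{name: str, age: int, active: bool, score: float}")
# {'name': str, 'age': int, 'active': bool, 'score': float}
```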

🔌 Supported Providers

OpenRouter (Cloud)

  • Prefix: openrouter:
  • Example: openrouter:google/gemini-2.5-flash-lite
  • Requirements: API key required
  • Vision Support: ✅ (with compatible models)
  • Models: All OpenRouter supported models

Ollama (Local)

  • Prefix: ollama:
  • Example: ollama:qwen3:8b
  • Requirements: Local Ollama server running
  • Vision Support: ❌ (text only)
  • Models: Any model available in your Ollama installation

⚙️ Configuration

Required Keys

  • api_keys: Provider API keys
  • base_urls: Provider endpoints

Optional Keys

  • log_dir: Log directory (default: "logs/llm_calls")

Provider Setup

OpenRouter Setup

  1. Sign up at OpenRouter
  2. Get your API key
  3. Add to united_llm.json

Ollama Setup

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull qwen3:8b

# Start server (runs on localhost:11434)
ollama serve

📝 Logging

Automatic logging to daily files:

logs/llm_calls/2024-12-27.log

Log format:

2024-12-27 10:30:15 | ollama:qwen3:8b | gen_dict | Extract: John... | {"name": "John", "age": 30} | 1.25s
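
Given that pipe-delimited format, a log line can be split back into its fields for quick analysis. A sketch (the field labels below are my own, not names the library defines):

```python
def parse_log_line(line: str) -> dict:
    """Split one pipe-delimited log line into its six fields:
    timestamp | model | method | prompt preview | response preview | duration.
    """
    fields = [f.strip() for f in line.split(" | ")]
    keys = ["timestamp", "model", "method", "prompt", "response", "duration"]
    return dict(zip(keys, fields))

line = '2024-12-27 10:30:15 | ollama:qwen3:8b | gen_dict | Extract: John... | {"name": "John", "age": 30} | 1.25s'
record = parse_log_line(line)
print(record["model"])     # ollama:qwen3:8b
print(record["duration"])  # 1.25s
```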

🧪 Testing

Run the comprehensive test suite:

# Basic unit tests
python -m pytest tests/test_basic.py -v

# Integration tests (requires API keys and running services)
python -m pytest tests/ -v

📚 Examples

Comprehensive examples are available in the examples/ directory:

  • basic_usage.py - Perfect starting point for new users
  • advanced_features.py - Complex use cases and schema enhancement
  • vision_examples.py - Multi-image processing examples

cd examples
python basic_usage.py

🔧 Advanced Features

Schema Prompt Enhancement

The add_schema_to_prompt=True parameter appends your schema to the prompt, which can significantly improve structured output compliance:

# Without enhancement
result = client.gen_dict(
    model="ollama:qwen3:8b",
    prompt="Complex extraction task...",
    schema="{complex: {nested: {schema: str}}}"
)

# With enhancement - better compliance!
result = client.gen_dict(
    model="ollama:qwen3:8b", 
    prompt="Complex extraction task...",
    schema="{complex: {nested: {schema: str}}}",
    add_schema_to_prompt=True  # 🎯 Adds schema to prompt
)
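
Conceptually, the enhancement works by showing the model the expected output shape alongside the task. A hypothetical sketch of the idea (the function name and exact wording are illustrative, not the library's actual implementation):

```python
def append_schema(prompt: str, schema: str) -> str:
    """Append the target schema to the prompt (illustration only)."""
    return (
        f"{prompt}\n\n"
        f"Respond with JSON matching this schema exactly:\n{schema}"
    )

enhanced = append_schema(
    "Extract: John Smith, 30, Engineer in Boston",
    "{name: str, age: int, job: str, city: str}",
)
print(enhanced)
```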

Vision Capabilities

Process multiple images with structured output:

# Single image
image = ImageInput("document.jpg")
text = client.gen_text(
    model="openrouter:google/gemini-2.5-flash-lite",
    prompt="Extract all text from this document",
    images=[image]
)

# Multiple images with comparison
images = [ImageInput("before.jpg"), ImageInput("after.jpg")]
comparison = client.gen_dict(
    model="openrouter:google/gemini-2.5-flash-lite",
    prompt="Compare these two images",
    schema="{similarities: [str], differences: [str], summary: str}",
    images=images,
    add_schema_to_prompt=True
)

🤝 Contributing

Contributions welcome! Please feel free to submit a Pull Request.

📄 License

MIT License - see LICENSE file for details.
