SchemaChat

A unified Python interface for multiple LLM chat providers with structured output support using Pydantic models.

Features

  • Multi-provider support: OpenAI, Ollama, and OpenRouter APIs
  • Structured output: Generate validated responses using Pydantic models
  • Factory pattern: Easy provider instantiation and switching
  • Advanced context management: Intelligent context size optimization for Ollama
  • Fallback model support: Switch models dynamically for error recovery
  • Type safety: Full type hints and validation throughout

Installation

pip install schemachat

Quick Start

Basic Text Generation

from schemachat.core.configs.openai import OpenAIConfig
from schemachat.providers.factory import ChatProviderFactory

# Configure OpenAI provider
config = OpenAIConfig(
    api_key="your-openai-api-key",
    base_url="https://api.openai.com/v1",
    model_name="gpt-4",
    fallback_model_name="gpt-3.5-turbo"
)

# Create provider instance
client = ChatProviderFactory.create_provider(config)

# Generate response
response = client.invoke("Hello, world!")
print(response)

Structured Output Generation

from pydantic import BaseModel
from typing import List

class Person(BaseModel):
    name: str
    age: int
    skills: List[str]

# Generate structured response
person = client.invoke_structured(
    "Generate a profile for a Python developer",
    response_model=Person
)

print(f"Name: {person.name}")
print(f"Age: {person.age}")
print(f"Skills: {', '.join(person.skills)}")
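Under the hood, structured output of this kind typically works by deriving a JSON schema from the Pydantic model, steering the LLM toward JSON, and validating the reply. The sketch below illustrates that flow with a hypothetical `parse_structured` helper; it is not SchemaChat's actual implementation.

```python
# Hypothetical sketch of a structured-output round trip: derive a JSON
# schema from the Pydantic model, then validate the raw LLM reply.
# This is an illustration, not SchemaChat's real code.
import json
from typing import List

from pydantic import BaseModel, ValidationError


class Person(BaseModel):
    name: str
    age: int
    skills: List[str]


def parse_structured(raw_reply: str, model: type) -> BaseModel:
    """Validate a raw JSON reply against the target Pydantic model."""
    try:
        return model.model_validate_json(raw_reply)
    except ValidationError as exc:
        # A real client could retry here, feeding the error back to the LLM.
        raise ValueError(f"LLM reply did not match schema: {exc}") from exc


# The JSON schema that would be embedded in the prompt:
schema = Person.model_json_schema()

# Simulated LLM reply:
reply = json.dumps({"name": "Ada", "age": 36, "skills": ["python", "apis"]})
person = parse_structured(reply, Person)
```

Validation failure raises immediately instead of returning malformed data, which is what makes the responses "validated" rather than merely parsed.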

Using Ollama Provider

from schemachat.core.configs.ollama import OllamaConfig

# Configure Ollama provider
config = OllamaConfig(
    base_url="http://localhost:11434",
    model_name="llama3.1",
    fallback_model_name="llama3",
    max_num_ctx=128,  # Context size in KB
    num_predict=8192  # Max prediction tokens
)

client = ChatProviderFactory.create_provider(config)
response = client.invoke("Explain quantum computing")

Supported Providers

OpenAI (and OpenAI-compatible APIs)

  • Official OpenAI API
  • OpenRouter
  • Any OpenAI-compatible endpoint
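"OpenAI-compatible" means the endpoint accepts the same chat-completions payload as the official API, so a single client can target any of them by swapping only the base URL and API key. A minimal sketch (the endpoint URLs are the providers' documented public bases; the model name is just an example):

```python
# OpenAI-compatible APIs share one request shape, so only the base URL
# (and credentials) differ per provider.
import json

ENDPOINTS = {
    "openai": "https://api.openai.com/v1",
    "openrouter": "https://openrouter.ai/api/v1",
}


def build_chat_request(provider: str, model: str, prompt: str):
    """Return (url, body) for a chat-completions call to a compatible API."""
    url = f"{ENDPOINTS[provider]}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body


url, body = build_chat_request(
    "openrouter", "meta-llama/llama-3.1-70b-instruct", "Hi"
)
```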

Ollama

  • Local Ollama installations
  • Advanced context size management
  • Automatic token calculation and optimization
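The idea behind context-size optimization is to estimate how many tokens a request needs and pick the smallest window that fits, rather than always allocating the configured maximum. The sketch below illustrates that idea with a crude character-based heuristic; SchemaChat's actual algorithm (which uses a real tokenizer) may differ.

```python
# Illustration of context-size optimization: estimate the prompt's token
# count, then choose the smallest context window that fits, capped at a
# configured maximum. Not SchemaChat's actual algorithm.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # A real implementation would use a tokenizer such as tiktoken.
    return max(1, len(text) // 4)


def pick_num_ctx(prompt: str, num_predict: int, max_num_ctx_k: int) -> int:
    """Choose a context size in tokens from power-of-two K steps."""
    needed = estimate_tokens(prompt) + num_predict
    for k in (2, 4, 8, 16, 32, 64, 128):  # 2K, 4K, ... 128K windows
        if k > max_num_ctx_k:
            break
        if k * 1024 >= needed:
            return k * 1024
    return max_num_ctx_k * 1024  # fall back to the configured maximum


ctx = pick_num_ctx("Explain quantum computing", num_predict=8192,
                   max_num_ctx_k=128)
```

Keeping the window as small as possible matters for local Ollama models, where a larger context directly increases memory use and latency.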

Architecture

SchemaChat uses a factory pattern with provider-specific implementations:

  • BaseLLMClient: Abstract interface for all providers
  • BaseConfig: Configuration base class with validation
  • ChatProviderFactory: Factory for creating provider instances
  • Provider-specific optimizations: Each provider implementation tunes behavior for its backend (e.g., context sizing for Ollama)
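The pattern above can be sketched with simplified stand-in classes (not SchemaChat's real code): the factory maps each config type to its client class, so callers switch providers by swapping the config object alone.

```python
# Minimal factory-pattern sketch with hypothetical stand-in classes.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class BaseConfig:
    model_name: str


@dataclass
class OpenAIStyleConfig(BaseConfig):
    api_key: str = ""


@dataclass
class OllamaStyleConfig(BaseConfig):
    base_url: str = "http://localhost:11434"


class BaseLLMClient(ABC):
    def __init__(self, config: BaseConfig):
        self.config = config

    @abstractmethod
    def invoke(self, prompt: str) -> str: ...


class OpenAIStyleClient(BaseLLMClient):
    def invoke(self, prompt: str) -> str:
        return f"[openai:{self.config.model_name}] {prompt}"


class OllamaStyleClient(BaseLLMClient):
    def invoke(self, prompt: str) -> str:
        return f"[ollama:{self.config.model_name}] {prompt}"


class ProviderFactory:
    # Config type -> client class; registering a new provider is one entry.
    _registry = {
        OpenAIStyleConfig: OpenAIStyleClient,
        OllamaStyleConfig: OllamaStyleClient,
    }

    @classmethod
    def create_provider(cls, config: BaseConfig) -> BaseLLMClient:
        return cls._registry[type(config)](config)


client = ProviderFactory.create_provider(OllamaStyleConfig(model_name="llama3.1"))
```

Because dispatch keys off the config's type, calling code never names a concrete client class, which is what makes provider switching a one-line change.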

Configuration Options

OpenAI Configuration

OpenAIConfig(
    api_key="your-key",
    base_url="https://api.openai.com/v1",
    model_name="gpt-4",
    fallback_model_name="gpt-3.5-turbo",
    max_tokens=8192,
    temperature=0.7,
    top_p=0.9
)

Ollama Configuration

OllamaConfig(
    base_url="http://localhost:11434",
    model_name="llama3.1",
    fallback_model_name="llama3",
    max_num_ctx=128,  # KB
    num_ctx=32768,    # Specific context size
    num_predict=8192  # Max prediction tokens
)

Advanced Features

Provider Information

info = client.get_provider_info()
print(f"Provider: {info['provider_type']}")
print(f"Model: {info['model_name']}")
print(f"Base URL: {info['base_url']}")

Model Switching

# Switch to fallback model
client.use_fallback_model()

# Switch to specific model
client.use_fallback_model("gpt-4-turbo")

Error Handling

try:
    response = client.invoke("Your prompt")
except Exception as e:
    print(f"Error: {e}")
    # Switch to the fallback model and retry once
    client.use_fallback_model()
    response = client.invoke("Your prompt")

Development

Setup Development Environment

# Clone the repository
git clone https://github.com/yourusername/schemachat.git
cd schemachat

# Install dependencies
uv sync

# Install development dependencies
uv sync --group dev

Running Tests

uv run pytest

Code Formatting

uv run black .
uv run ruff check .

Type Checking

uv run mypy .

Requirements

  • Python 3.10+
  • pydantic>=2.12.0
  • openai>=2.3.0 (for OpenAI providers)
  • ollama>=0.6.0 (for Ollama providers)
  • tiktoken>=0.12.0 (for token counting)

License

MIT License - see LICENSE file for details.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

