
Registry for OpenAI models with capability and parameter validation

Project description

OpenAI Model Registry


A Python package that provides information about OpenAI models and validates parameters before API calls.

📚 View the Documentation

What This Package Does

  • Helps you avoid invalid API calls by validating parameters ahead of time
  • Provides accurate information about model capabilities (context windows, token limits)
  • Handles model aliases and different model versions
  • Works offline with locally stored model information
  • Keeps model information up-to-date with optional updates

Installation

pip install openai-model-registry

Simple Example

from openai_model_registry import ModelRegistry

# Get information about a model
registry = ModelRegistry.get_instance()
model = registry.get_capabilities("gpt-4o")

# Access model limits
print(f"Context window: {model.context_window} tokens")
print(f"Max output: {model.max_output_tokens} tokens")

# Check if parameter values are valid
model.validate_parameter("temperature", 0.7)  # Valid - no error
try:
    model.validate_parameter("temperature", 3.0)  # Invalid - raises ValueError
except ValueError as e:
    print(f"Error: {e}")

# Check model features
if model.supports_structured:
    print("This model supports Structured Output")

Practical Use Cases

Validating Parameters Before API Calls

def call_openai(model, messages, **params):
    # Assumes registry = ModelRegistry.get_instance() and an OpenAI
    # client have been created at module level
    capabilities = registry.get_capabilities(model)

    # Validate parameters before making the API call
    for param_name, value in params.items():
        capabilities.validate_parameter(param_name, value)

    # Parameters are valid; make the API call
    return client.chat.completions.create(model=model, messages=messages, **params)
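The value of this fail-fast pattern is that invalid requests are rejected locally instead of costing a round trip to the API. A self-contained illustration of the same idea, using a hypothetical `NUMERIC_RANGES` table as a stand-in for the registry's real constraint data (the actual package derives constraints from its local registry files):

```python
# Hypothetical stand-in for registry-backed parameter constraints.
NUMERIC_RANGES = {
    "temperature": (0.0, 2.0),
    "top_p": (0.0, 1.0),
}


def validate_params(**params):
    """Raise ValueError for any parameter outside its allowed range."""
    for name, value in params.items():
        if name in NUMERIC_RANGES:
            low, high = NUMERIC_RANGES[name]
            if not (low <= value <= high):
                raise ValueError(
                    f"{name}={value} outside allowed range [{low}, {high}]"
                )


validate_params(temperature=0.7)  # OK - within range
try:
    validate_params(temperature=3.0)
except ValueError as e:
    print(f"Rejected before any API call: {e}")
```

With the real package, `capabilities.validate_parameter` plays the role of `validate_params`, backed by per-model constraint data rather than a hard-coded table.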

Managing Token Limits

def prepare_prompt(model_name, prompt, max_output=None):
    # Assumes registry = ModelRegistry.get_instance() at module level
    capabilities = registry.get_capabilities(model_name)

    # Use the model's maximum output if not specified
    max_output = max_output or capabilities.max_output_tokens

    # Calculate the token budget remaining for input
    available_tokens = capabilities.context_window - max_output

    # truncate_prompt is a caller-supplied helper that trims the
    # prompt to fit the remaining budget
    return truncate_prompt(prompt, available_tokens)
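`truncate_prompt` is left to the caller. A minimal sketch, using whitespace-delimited words as a crude proxy for tokens (an assumption for brevity; production code should count real tokens for the target model, e.g. with a tokenizer library such as tiktoken):

```python
def truncate_prompt(prompt: str, available_tokens: int) -> str:
    """Trim a prompt to roughly `available_tokens` tokens.

    Sketch only: words stand in for tokens here. Real code should
    use an actual tokenizer for the target model, since token and
    word counts can differ substantially.
    """
    words = prompt.split()
    if len(words) <= available_tokens:
        return prompt
    return " ".join(words[:available_tokens])
```

For example, `truncate_prompt("one two three four", 2)` returns `"one two"`, while a prompt already within budget is returned unchanged.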

Key Features

  • Model Information: Get context window size, token limits, and supported features
  • Parameter Validation: Check if parameter values are valid for specific models
  • Version Support: Works with date-based models (e.g., "o3-mini-2025-01-31")
  • Offline Usage: Functions without internet using local registry data
  • Updates: Optional updates to keep model information current

Command Line Usage

Update your local registry data:

openai-model-registry-update

Configuration

The registry uses local files for model information:

# Default locations (platform conventions; XDG Base Directory spec on Linux)
Linux: ~/.config/openai-model-registry/
macOS: ~/Library/Application Support/openai-model-registry/
Windows: %LOCALAPPDATA%\openai-model-registry\
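These defaults can be resolved programmatically. A sketch of the lookup, assuming the platform conventions listed above (`default_registry_dir` is a hypothetical helper for illustration, not part of the package's API - the package performs this resolution internally):

```python
import os
import sys
from pathlib import Path


def default_registry_dir(app: str = "openai-model-registry") -> Path:
    """Resolve the platform-default config directory for the registry."""
    if sys.platform == "win32":
        # %LOCALAPPDATA%, falling back to the conventional location
        base = Path(os.environ.get(
            "LOCALAPPDATA", str(Path.home() / "AppData" / "Local")))
    elif sys.platform == "darwin":
        base = Path.home() / "Library" / "Application Support"
    else:
        # Linux and other POSIX: XDG Base Directory spec
        base = Path(os.environ.get(
            "XDG_CONFIG_HOME", str(Path.home() / ".config")))
    return base / app
```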

You can specify custom locations:

import os

# Use custom registry files
os.environ["MODEL_REGISTRY_PATH"] = "/path/to/custom/models.yml"
os.environ["PARAMETER_CONSTRAINTS_PATH"] = "/path/to/custom/parameter_constraints.yml"

# Then initialize registry
from openai_model_registry import ModelRegistry
registry = ModelRegistry.get_instance()

Documentation

For more details, see the project documentation linked above.

Development

# Install dependencies (requires Poetry)
poetry install

# Run tests
poetry run pytest

# Run linting
poetry run pre-commit run --all-files

License

MIT License - See LICENSE for details.

Download files

Source Distribution

openai_model_registry-0.6.1.tar.gz (26.6 kB)

Built Distribution

openai_model_registry-0.6.1-py3-none-any.whl (30.1 kB)

File details

Details for the file openai_model_registry-0.6.1.tar.gz.

File metadata

  • Download URL: openai_model_registry-0.6.1.tar.gz
  • Size: 26.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.5.1 CPython/3.11.12 Linux/6.11.0-1014-azure

File hashes

Hashes for openai_model_registry-0.6.1.tar.gz:

SHA256:      c5e7af4a97f0b9ea4c96fe52380f1d8c16aafe6b2d4028b566bed838af4372ce
MD5:         2e698f263408d5a139a992e07f9f21e7
BLAKE2b-256: 0b04bc55913aa7e5b3974498abc25ab4f1c9da888ce20194c1c42bc1dcb76a46

File details

Details for the file openai_model_registry-0.6.1-py3-none-any.whl.

File metadata

  • Download URL: openai_model_registry-0.6.1-py3-none-any.whl
  • Size: 30.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.5.1 CPython/3.11.12 Linux/6.11.0-1014-azure

File hashes

Hashes for openai_model_registry-0.6.1-py3-none-any.whl:

SHA256:      8f3d20c02de0b87f160c9fae5ac2d7fdf36e50c5cfb2495e1dd47cd80fcd1abc
MD5:         9caad90d10406391a7f87f12ffebae51
BLAKE2b-256: 2f03826c0725fc04f4c261d12206b43884b8a5800739b51217761844920a9f23
