OpenAI Model Registry
A Python package that provides information about OpenAI models and validates parameters before API calls.
📚 View the Documentation
🤖 AI Assistant Documentation - LLM-optimized reference following llmstxt.org
Why Use OpenAI Model Registry?
OpenAI's models have different context-window sizes, parameter ranges, and feature support. If you guess wrong, the API returns an error—often in production.
OpenAI Model Registry keeps an up-to-date, local catalog of every model's limits and capabilities, letting you validate calls before you send them.
Typical benefits:
- Catch invalid `temperature`, `top_p`, and `max_tokens` values locally.
- Swap models confidently by comparing context windows and features.
- Work fully offline—perfect for CI or air-gapped environments.
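The model-swap comparison can be sketched with a small helper. This is illustrative only: `pick_model` and the hard-coded context windows below are not part of the library; in real code you would read each window from the registry.

```python
def pick_model(candidates, needed_tokens):
    """Return the cheapest (smallest-window) candidate that fits the request.

    `candidates` maps model name -> context window size. The values here
    are illustrative - look them up via the registry in practice.
    """
    for name, context_window in sorted(candidates.items(), key=lambda kv: kv[1]):
        if context_window >= needed_tokens:
            return name
    raise ValueError("No candidate model fits the requested token count")


# Illustrative context windows
windows = {"model-small": 16_384, "model-large": 128_000}
print(pick_model(windows, 50_000))  # model-large
```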
What This Package Does
- Helps you avoid invalid API calls by validating parameters ahead of time
- Provides accurate information about model capabilities (context windows, token limits)
- Handles model aliases and different model versions
- Works offline with locally stored model information
- Keeps model information up-to-date with optional updates
- Programmatic model cards: structured access to each model's capabilities, parameters, pricing (including per-image tiers), and deprecation metadata (OpenAI and Azure providers)
- Coverage and freshness: includes all OpenAI models as of 2025-08-16; pricing and data are kept current automatically via CI using ostruct
Installation
Core Library (Recommended)
```shell
pip install openai-model-registry
```
With CLI Tools
```shell
pip install openai-model-registry[cli]
```
The core library provides all programmatic functionality. Add the [cli] extra if you want to use the omr command-line tools.
💡 Which installation should I choose?
- Core only (`pip install openai-model-registry`) - Perfect for programmatic use in applications, scripts, or libraries
- With CLI (`pip install openai-model-registry[cli]`) - Adds command-line tools for interactive exploration and debugging
Simple Example
```python
from openai_model_registry import ModelRegistry

# Get information about a model
registry = ModelRegistry.get_default()
model = registry.get_capabilities("gpt-4o")

# Access model limits
print(f"Context window: {model.context_window} tokens")
print(f"Max output: {model.max_output_tokens} tokens")
# Expected output: Context window: 128000 tokens
#                  Max output: 16384 tokens

# Check if parameter values are valid
model.validate_parameter("temperature", 0.7)  # Valid - no error

try:
    model.validate_parameter("temperature", 3.0)  # Invalid - raises ValueError
except ValueError as e:
    print(f"Error: {e}")
# Expected output: Error: Parameter 'temperature' must be between 0 and 2...

# Check model features
if model.supports_structured:
    print("This model supports Structured Output")
# Expected output: This model supports Structured Output
```
➡️ Keeping it fresh: run `openai-model-registry-update` (CLI) or `registry.refresh_from_remote()` whenever OpenAI ships new models.
🔵 Azure OpenAI Users: If you're using Azure OpenAI endpoints, be aware of platform-specific limitations, especially for web search capabilities. See our Azure OpenAI documentation for guidance.
Practical Use Cases
Validating Parameters Before API Calls
```python
import openai
from openai_model_registry import ModelRegistry

# Initialize registry and client
registry = ModelRegistry.get_default()
client = openai.OpenAI()  # Requires OPENAI_API_KEY environment variable


def call_openai(model, messages, **params):
    # Validate parameters before making the API call
    capabilities = registry.get_capabilities(model)
    for param_name, value in params.items():
        capabilities.validate_parameter(param_name, value)
    # Now make the API call
    return client.chat.completions.create(model=model, messages=messages, **params)


# Example usage
messages = [{"role": "user", "content": "Hello!"}]
response = call_openai("gpt-4o", messages, temperature=0.7, max_tokens=100)
# Expected output: Successful API call with validated parameters
```
Managing Token Limits
```python
from openai_model_registry import ModelRegistry

# Initialize registry
registry = ModelRegistry.get_default()


def truncate_prompt(prompt, max_tokens):
    """Simple truncation function (you'd implement proper tokenization)."""
    # This is a simplified example - use tiktoken for real tokenization
    words = prompt.split()
    if len(words) <= max_tokens:
        return prompt
    return " ".join(words[:max_tokens])


def prepare_prompt(model_name, prompt, max_output=None):
    capabilities = registry.get_capabilities(model_name)
    # Use the model's max output if not specified
    max_output = max_output or capabilities.max_output_tokens
    # Calculate available tokens for input
    available_tokens = capabilities.context_window - max_output
    # Ensure the prompt fits within the available tokens
    return truncate_prompt(prompt, available_tokens)


# Example usage
long_prompt = "This is a very long prompt that might exceed token limits..."
safe_prompt = prepare_prompt("gpt-4o", long_prompt, max_output=1000)
# Expected output: Truncated prompt that fits within token limits
```
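The budgeting arithmetic is simple enough to check by hand. A standalone sketch using the gpt-4o figures from the Simple Example above (128,000-token context window, 16,384-token max output); the helper name is illustrative, not a library API:

```python
def input_budget(context_window, reserved_output):
    """Tokens left for the prompt once output space is reserved."""
    if reserved_output > context_window:
        raise ValueError("Reserved output exceeds the context window")
    return context_window - reserved_output


# Reserving the full gpt-4o max output from its context window
print(input_budget(128_000, 16_384))  # 111616
# Reserving only 1,000 tokens of output leaves more room for input
print(input_budget(128_000, 1_000))  # 127000
```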
Key Features
- Model Information: Get context window size, token limits, and supported features
- Parameter Validation: Check if parameter values are valid for specific models
- Version Support: Works with date-based models (e.g., "o3-mini-2025-01-31")
- Offline Usage: Functions without internet using local registry data
- Updates: Optional updates to keep model information current
Command Line Usage
OMR CLI
The omr CLI provides comprehensive tools for inspecting and managing your model registry.
Note: CLI tools require the [cli] extra: pip install openai-model-registry[cli]
```shell
# List all models
omr models list

# Show data source paths
omr data paths

# Check for updates
omr update check

# Get detailed model info
omr models get gpt-4o
```
See the CLI Reference for complete documentation.
Note on updates: `omr update apply` and `omr update refresh` write updated data files to your user data directory by default (or `OMR_DATA_DIR` if set). The `OMR_MODEL_REGISTRY_PATH` environment variable is a read-only override for loading `models.yaml` and is never modified by update commands.
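The precedence described in this note can be sketched as a pure function (a hypothetical helper, not the library's internals): reads honor `OMR_MODEL_REGISTRY_PATH`, while writes go to `OMR_DATA_DIR` or the default user data directory.

```python
def resolve_paths(env, default_data_dir):
    """Where models.yaml is read from vs. where updates are written.

    Mirrors the documented behavior; this helper is illustrative only.
    """
    read_override = env.get("OMR_MODEL_REGISTRY_PATH")  # read-only override
    write_dir = env.get("OMR_DATA_DIR", default_data_dir)  # update target
    return read_override, write_dir


# With no overrides, there is no read override and updates land in the default dir
print(resolve_paths({}, "/home/u/.local/share/openai-model-registry"))
```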
Legacy Update Command
Update your local registry data:
```shell
openai-model-registry-update
```
Configuration
The registry uses local files for model information:
```text
# Default locations (XDG Base Directory spec)
Linux:   ~/.local/share/openai-model-registry/
macOS:   ~/Library/Application Support/openai-model-registry/
Windows: %LOCALAPPDATA%\openai-model-registry\
```
You can specify custom locations:
```python
import os

# Use custom registry files
os.environ["OMR_MODEL_REGISTRY_PATH"] = "/path/to/custom/models.yaml"
os.environ["OMR_PARAMETER_CONSTRAINTS_PATH"] = (
    "/path/to/custom/parameter_constraints.yml"
)

# Then initialize the registry
from openai_model_registry import ModelRegistry

registry = ModelRegistry.get_default()
```
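The default locations listed above follow each platform's conventions. A stdlib-only sketch of how that mapping could look (illustrative; this is not the library's actual resolution code):

```python
import os


def default_data_dir(platform, home, local_app_data=None):
    """Map a platform to the documented default data directory."""
    if platform.startswith("linux"):
        return os.path.join(home, ".local", "share", "openai-model-registry")
    if platform == "darwin":
        return os.path.join(
            home, "Library", "Application Support", "openai-model-registry"
        )
    if platform.startswith("win"):
        return os.path.join(local_app_data or "", "openai-model-registry")
    raise ValueError(f"Unsupported platform: {platform}")


print(default_data_dir("linux", "/home/u"))
# /home/u/.local/share/openai-model-registry
```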
Environment variables
```text
OMR_DATA_DIR              # Override user data dir where updates are written
OMR_MODEL_REGISTRY_PATH   # Read-only override for models.yaml load path
OMR_DISABLE_DATA_UPDATES  # Set to 1/true to disable automatic data update checks
```
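The docs name `1` and `true` as the values that disable update checks; a minimal sketch of such a flag parser (accepting case-insensitive variants is this sketch's assumption, not documented behavior):

```python
def updates_disabled(value):
    """Interpret an OMR_DISABLE_DATA_UPDATES-style flag value.

    The documentation names "1" and "true"; case-insensitivity is an
    assumption of this sketch.
    """
    if value is None:  # variable unset: updates stay enabled
        return False
    return value.strip().lower() in {"1", "true"}


print(updates_disabled("TRUE"))  # True
print(updates_disabled(None))    # False
```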
Documentation
For more details, see:
Development
```shell
# Install dependencies with CLI tools (requires Poetry)
poetry install --extras cli

# Run tests
poetry run pytest

# Run linting
poetry run pre-commit run --all-files
```
Next Steps
- 📚 Examples – real-world scripts in `examples/`.
- 🤝 Contributing – see CONTRIBUTING.md.
- 📝 Changelog – see CHANGELOG.md for recent updates.
Contributing
We 💜 external contributions! Start with CONTRIBUTING.md and our Code of Conduct.
Need Help?
Open an issue or start a discussion—questions, ideas, and feedback are welcome!
License
MIT License - See LICENSE for details.