OpenAI-compatible provider base for Metorial

metorial-openai-compatible

Base package for OpenAI-compatible provider integrations for Metorial. This package provides shared functionality for providers that use OpenAI's function calling format.

Installation

pip install metorial-openai-compatible
# or
uv add metorial-openai-compatible
# or
poetry add metorial-openai-compatible

Features

  • 🔧 OpenAI Format: Standard OpenAI function calling format
  • 📡 Session Management: Automatic tool lifecycle handling
  • 🔄 Format Conversion: Converts Metorial tools to OpenAI function format
  • ⚡ Async Support: Full async/await support

Usage

Quick Start (Recommended)

This package serves as a base for provider-specific implementations. For end-user usage, prefer a provider-specific package such as metorial-xai, metorial-deepseek, or metorial-togetherai.

Direct Usage (Advanced)

import asyncio
from openai import AsyncOpenAI
from metorial import Metorial
from metorial_openai_compatible import MetorialOpenAICompatibleSession

async def main():
  # Initialize clients
  metorial = Metorial(api_key="...your-metorial-api-key...") # async by default
  compatible_client = AsyncOpenAI(
    api_key="...your-provider-api-key...", 
    base_url="https://your-provider-url/v1"
  )
  
  # Run with automatic session management
  response = await metorial.run(
    "What are the latest commits in the metorial/websocket-explorer repository?",
    "...your-mcp-server-deployment-id...", # can also be a list of deployment IDs
    compatible_client,
    model="your-model-name",
    max_iterations=25
  )
  
  print("Response:", response)

asyncio.run(main())

Streaming Chat

import asyncio
from openai import AsyncOpenAI
from metorial import Metorial
from metorial.types import StreamEventType

async def example():
  # Initialize clients
  metorial = Metorial(api_key="...your-metorial-api-key...")
  compatible_client = AsyncOpenAI(
    api_key="...your-provider-api-key...",
    base_url="https://your-provider-url/v1"
  )
  
  # Streaming chat with real-time responses
  async def stream_action(session):
    messages = [
      {"role": "user", "content": "Explain quantum computing"}
    ]
    
    async for event in metorial.stream(
      compatible_client, session, messages, 
      model="your-model-name",
      max_iterations=25
    ):
      if event.type == StreamEventType.CONTENT:
        print(f"🤖 {event.content}", end="", flush=True)
      elif event.type == StreamEventType.TOOL_CALL:
        print(f"\n🔧 Executing {len(event.tool_calls)} tool(s)...")
      elif event.type == StreamEventType.COMPLETE:
        print("\n✅ Complete!")
  
  await metorial.with_session("...your-server-deployment-id...", stream_action)

asyncio.run(example())

Advanced Usage with Session Management

import asyncio
from metorial import Metorial
from metorial_openai_compatible import MetorialOpenAICompatibleSession

async def main():
  # Initialize Metorial
  metorial = Metorial(api_key="...your-metorial-api-key...")
  
  # Create session with your server deployments
  async with metorial.session(["...your-server-deployment-id..."]) as session:
    # Create OpenAI-compatible wrapper
    openai_session = MetorialOpenAICompatibleSession(
      session.tool_manager,
      with_strict=True  # Enable strict mode
    )
    
    # Use with any OpenAI-compatible client
    tools = openai_session.tools
    
    # Handle tool calls from a provider response
    # (tool_calls comes from your chat completion's choices[0].message.tool_calls)
    tool_responses = await openai_session.call_tools(tool_calls)

asyncio.run(main())

As Base Class

This package is primarily used as a base for provider-specific packages:

from metorial_openai_compatible import MetorialOpenAICompatibleSession

class MyProviderSession(MetorialOpenAICompatibleSession):
  def __init__(self, tool_mgr):
    # Configure strict mode based on provider capabilities
    super().__init__(tool_mgr, with_strict=False)

Using Convenience Functions

from metorial_openai_compatible import build_openai_compatible_tools, call_openai_compatible_tools

async def example():
  # Get tools in OpenAI format (tool_manager comes from a Metorial session)
  tools = build_openai_compatible_tools(tool_manager, with_strict=True)
  
  # Call tools from OpenAI-compatible response
  tool_messages = await call_openai_compatible_tools(tool_manager, tool_calls)

API Reference

MetorialOpenAICompatibleSession

Main session class for OpenAI-compatible integration.

session = MetorialOpenAICompatibleSession(tool_manager, with_strict=False)

Parameters:

  • tool_manager: Metorial tool manager instance
  • with_strict: Enable strict parameter validation (default: False)

Properties:

  • tools: List of tools in OpenAI function calling format

Methods:

  • async call_tools(tool_calls): Execute tool calls and return tool messages
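For reference, each entry in tool_calls follows the standard OpenAI chat-completions tool-call shape (the values below are illustrative):

```python
import json

# Shape of one entry in `tool_calls`, as returned by an OpenAI-compatible
# chat completion (values are illustrative)
tool_call = {
  "id": "call_abc123",
  "type": "function",
  "function": {
    "name": "tool_name",
    # Arguments arrive as a JSON-encoded string, not a dict
    "arguments": '{"query": "latest commits"}'
  }
}

# Decode the arguments before inspecting them
args = json.loads(tool_call["function"]["arguments"])
```

Note that function.arguments is a JSON string, so decode it with json.loads if you need to inspect the parameters yourself.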

build_openai_compatible_tools(tool_mgr, with_strict=False)

Build OpenAI-compatible tool definitions.

Parameters:

  • tool_mgr: Tool manager instance
  • with_strict: Enable strict mode (default: False)

Returns: List of tool definitions in OpenAI format

call_openai_compatible_tools(tool_mgr, tool_calls)

Execute tool calls from an OpenAI-compatible response. This is a coroutine and must be awaited.

Returns: List of tool messages

Tool Format

Tools are converted to OpenAI's function calling format:

{
  "type": "function",
  "function": {
    "name": "tool_name",
    "description": "Tool description",
    "parameters": {
      "type": "object",
      "properties": {...},
      "required": [...]
    },
    "strict": True  # Only if with_strict=True
  }
}

Strict Mode

When with_strict=True, the strict field is added to function definitions for providers that support strict parameter validation (like OpenAI and XAI).
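As an illustration of how the strict flag fits into the conversion, here is a minimal, self-contained sketch. This is not the package's actual implementation (use build_openai_compatible_tools for real code); it only shows where the field is attached:

```python
def to_openai_tool(name, description, parameters, with_strict=False):
  """Convert a tool spec into OpenAI function-calling format.

  Simplified sketch only; the real conversion is done by
  build_openai_compatible_tools in metorial_openai_compatible.
  """
  function = {
    "name": name,
    "description": description,
    "parameters": parameters
  }
  if with_strict:
    # Added only for providers that support strict parameter validation
    function["strict"] = True
  return {"type": "function", "function": function}
```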

Provider Implementations

This package serves as the base for:

  • metorial-xai: XAI (Grok) with strict mode enabled
  • metorial-deepseek: DeepSeek without strict mode
  • metorial-togetherai: Together AI without strict mode

Error Handling

try:
  tool_messages = await session.call_tools(tool_calls)
except Exception as e:
  print(f"Tool execution failed: {e}")

Tool errors are returned as tool messages with error content.
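A minimal sketch of that behavior, assuming a hypothetical helper (the package's internals may differ):

```python
import json

async def call_tool_safely(execute, tool_call_id, args):
  """Run one tool call; on failure, return a tool message with error content.

  Illustrative sketch of the error-handling behavior described above;
  `execute` stands in for whatever actually runs the tool.
  """
  try:
    result = await execute(args)
    content = result if isinstance(result, str) else json.dumps(result)
  except Exception as e:
    # Errors become ordinary tool messages instead of raising
    content = f"Error: {e}"
  return {"role": "tool", "tool_call_id": tool_call_id, "content": content}
```

Surfacing failures as tool messages lets the model see the error and retry or recover, instead of aborting the whole conversation turn.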

License

MIT License - see LICENSE file for details.
