
metorial-openai-compatible

Base package for OpenAI-compatible provider integrations for Metorial. This package provides shared functionality for providers that use OpenAI's function calling format.

Installation

pip install metorial-openai-compatible
# or
uv add metorial-openai-compatible
# or
poetry add metorial-openai-compatible

Features

  • 🔧 OpenAI Format: Standard OpenAI function calling format
  • 📡 Session Management: Automatic tool lifecycle handling
  • 🔄 Format Conversion: Converts Metorial tools to OpenAI function format
  • ⚡ Async Support: Full async/await support

Usage

Quick Start (Recommended)

This package serves as a base for provider-specific implementations. For end-user applications, install a provider-specific package such as metorial-xai, metorial-deepseek, or metorial-togetherai instead.

Direct Usage (Advanced)

import asyncio
from openai import AsyncOpenAI
from metorial import Metorial

async def main():
  # Initialize clients
  metorial = Metorial(api_key="...your-metorial-api-key...") # async by default
  compatible_client = AsyncOpenAI(
    api_key="...your-provider-api-key...", 
    base_url="https://your-provider-url/v1"
  )
  
  # Run with automatic session management
  response = await metorial.run(
    "What are the latest commits in the metorial/websocket-explorer repository?",
    "...your-mcp-server-deployment-id...", # can also be a list of deployment IDs
    compatible_client,
    model="your-model-name",
    max_iterations=25
  )
  
  print("Response:", response)

asyncio.run(main())

Streaming Chat

import asyncio
from openai import AsyncOpenAI
from metorial import Metorial
from metorial.types import StreamEventType

async def example():
  # Initialize clients
  metorial = Metorial(api_key="...your-metorial-api-key...")
  compatible_client = AsyncOpenAI(
    api_key="...your-provider-api-key...",
    base_url="https://your-provider-url/v1"
  )
  
  # Streaming chat with real-time responses
  async def stream_action(session):
    messages = [
      {"role": "user", "content": "Explain quantum computing"}
    ]
    
    async for event in metorial.stream(
      compatible_client, session, messages, 
      model="your-model-name",
      max_iterations=25
    ):
      if event.type == StreamEventType.CONTENT:
        print(f"🤖 {event.content}", end="", flush=True)
      elif event.type == StreamEventType.TOOL_CALL:
        print(f"\n🔧 Executing {len(event.tool_calls)} tool(s)...")
      elif event.type == StreamEventType.COMPLETE:
        print("\n✅ Complete!")
  
  await metorial.with_session("...your-server-deployment-id...", stream_action)

asyncio.run(example())

Advanced Usage with Session Management

import asyncio
from metorial import Metorial
from metorial_openai_compatible import MetorialOpenAICompatibleSession

async def main():
  # Initialize Metorial
  metorial = Metorial(api_key="...your-metorial-api-key...")
  
  # Create session with your server deployments
  async with metorial.session(["...your-server-deployment-id..."]) as session:
    # Create OpenAI-compatible wrapper
    openai_session = MetorialOpenAICompatibleSession(
      session.tool_manager,
      with_strict=True  # Enable strict mode
    )
    
    # Use with any OpenAI-compatible client
    tools = openai_session.tools
    
    # Handle tool calls taken from your provider's chat completion response
    tool_responses = await openai_session.call_tools(tool_calls)

asyncio.run(main())

As Base Class

This package is primarily used as a base for provider-specific packages:

from metorial_openai_compatible import MetorialOpenAICompatibleSession

class MyProviderSession(MetorialOpenAICompatibleSession):
  def __init__(self, tool_mgr):
    # Configure strict mode based on provider capabilities
    super().__init__(tool_mgr, with_strict=False)

Using Convenience Functions

from metorial_openai_compatible import build_openai_compatible_tools, call_openai_compatible_tools

async def example():
  # tool_manager comes from an active Metorial session (session.tool_manager)
  # Get tools in OpenAI format
  tools = build_openai_compatible_tools(tool_manager, with_strict=True)
  
  # Call tools from an OpenAI-compatible response's tool_calls
  tool_messages = await call_openai_compatible_tools(tool_manager, tool_calls)

API Reference

MetorialOpenAICompatibleSession

Main session class for OpenAI-compatible integration.

session = MetorialOpenAICompatibleSession(tool_manager, with_strict=False)

Parameters:

  • tool_manager: Metorial tool manager instance
  • with_strict: Enable strict parameter validation (default: False)

Properties:

  • tools: List of tools in OpenAI function calling format

Methods:

  • async call_tools(tool_calls): Execute tool calls and return tool messages
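
Conceptually, call_tools follows the standard OpenAI tool-call loop: read each tool call from the assistant response, run the named tool with its JSON-encoded arguments, and emit one role: "tool" message per call. Here is a simplified, self-contained sketch of that dispatch pattern — the fake_tool_call helper and the registry dict are illustrative stand-ins, not part of this package's API:

```python
import asyncio
import json
from types import SimpleNamespace

def fake_tool_call(call_id, name, arguments):
  # Illustrative stand-in for an OpenAI-style tool call object.
  return SimpleNamespace(
    id=call_id,
    function=SimpleNamespace(name=name, arguments=json.dumps(arguments)),
  )

async def dispatch_tool_calls(registry, tool_calls):
  """Execute each tool call and return OpenAI-format tool messages."""
  messages = []
  for call in tool_calls:
    handler = registry[call.function.name]
    result = await handler(**json.loads(call.function.arguments))
    messages.append({
      "role": "tool",
      "tool_call_id": call.id,
      "content": json.dumps(result),
    })
  return messages

async def demo():
  async def get_weather(city):
    return {"city": city, "temp_c": 21}

  calls = [fake_tool_call("call_1", "get_weather", {"city": "Berlin"})]
  return await dispatch_tool_calls({"get_weather": get_weather}, calls)

print(asyncio.run(demo()))
```

In the real package, the tool manager plays the role of the registry and executes tools on your Metorial server deployments.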

build_openai_compatible_tools(tool_mgr, with_strict=False)

Build OpenAI-compatible tool definitions.

Parameters:

  • tool_mgr: Tool manager instance
  • with_strict: Enable strict mode (default: False)

Returns: List of tool definitions in OpenAI format

call_openai_compatible_tools(tool_mgr, tool_calls)

Execute tool calls from an OpenAI-compatible response (async; must be awaited).

Parameters:

  • tool_mgr: Tool manager instance
  • tool_calls: Tool calls taken from the provider's response

Returns: List of tool messages

Tool Format

Tools are converted to OpenAI's function calling format:

{
  "type": "function",
  "function": {
    "name": "tool_name",
    "description": "Tool description",
    "parameters": {
      "type": "object",
      "properties": {...},
      "required": [...]
    },
    "strict": True  # Only if with_strict=True
  }
}

Strict Mode

When with_strict=True, the strict field is added to function definitions for providers that support strict parameter validation (like OpenAI and XAI).
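As a rough illustration of what the conversion does — a hand-rolled sketch, not the package's actual implementation — adding the strict field only when requested might look like this:

```python
def to_openai_function(name, description, parameters, with_strict=False):
  """Build an OpenAI-format function tool definition (illustrative sketch)."""
  function = {
    "name": name,
    "description": description,
    "parameters": parameters,
  }
  if with_strict:
    # Providers with strict parameter validation (e.g. OpenAI, XAI) honor this flag.
    function["strict"] = True
  return {"type": "function", "function": function}

schema = {
  "type": "object",
  "properties": {"city": {"type": "string"}},
  "required": ["city"],
}
print(to_openai_function("get_weather", "Look up weather", schema, with_strict=True))
```

Providers that do not support strict validation simply never see the field, which is why the DeepSeek and Together AI packages leave it off.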

Provider Implementations

This package serves as the base for:

  • metorial-xai: XAI (Grok) with strict mode enabled
  • metorial-deepseek: DeepSeek without strict mode
  • metorial-togetherai: Together AI without strict mode

Error Handling

try:
  tool_messages = await session.call_tools(tool_calls)
except Exception as e:
  print(f"Tool execution failed: {e}")

Tool errors are returned as tool messages with error content.
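For example, a failed call might come back shaped like the sketch below. The role/tool_call_id layout is OpenAI's standard tool-message format; the exact error wording here is an assumption for illustration:

```python
def error_tool_message(tool_call_id, error):
  """Illustrative sketch of a tool error reported as a tool message."""
  return {
    "role": "tool",
    "tool_call_id": tool_call_id,
    # The model sees the error text and can recover or retry.
    "content": f"Tool execution failed: {error}",
  }

print(error_tool_message("call_1", "connection timed out"))
```

Returning errors as tool messages (rather than raising) lets the conversation continue, so the model can report or work around the failure.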

License

MIT License - see LICENSE file for details.
