
Metorial Python SDK

The official Python SDK for Metorial, providing MCP session management and tool calling across AI providers.

Available Providers

| Provider   | Import              | Format                       | Description                   |
|------------|---------------------|------------------------------|-------------------------------|
| OpenAI     | metorial_openai     | OpenAI function calling      | GPT-4, GPT-3.5, etc.          |
| Anthropic  | metorial_anthropic  | Claude tool format           | Claude 3.5, Claude 3, etc.    |
| Google     | metorial_google     | Gemini function declarations | Gemini Pro, Gemini Flash      |
| Mistral    | metorial_mistral    | Mistral function calling     | Mistral Large, Codestral      |
| DeepSeek   | metorial_deepseek   | OpenAI-compatible            | DeepSeek Chat, DeepSeek Coder |
| TogetherAI | metorial_togetherai | OpenAI-compatible            | Llama, Mixtral, etc.          |
| XAI        | metorial_xai        | OpenAI-compatible            | Grok models                   |
| AI SDK     | metorial_ai_sdk     | Framework tools              | Vercel AI SDK, etc.           |
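
The MetorialOpenAI helper used in the Advanced Usage section below suggests each adapter is also exposed as a class on the main package. A hedged sketch, assuming the other adapters follow the same naming pattern:

from metorial import MetorialOpenAI     # OpenAI function-calling format (shown below)
from metorial import MetorialAnthropic  # Claude tool format (assumed name)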

Installation

# Install the core metorial package (includes all provider adapters)
pip install metorial

# Install with specific providers (includes the provider client libraries)
pip install "metorial[openai,anthropic,google,mistral,deepseek,togetherai,xai]"

# Or install individual providers (quotes keep shells like zsh from expanding the brackets)
pip install "metorial[openai]"     # includes the openai client
pip install "metorial[anthropic]"  # includes the anthropic client
# ... etc
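
To confirm the install, a quick import check (no API calls are made):

python -c "import metorial"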

Quick Start

import asyncio
from metorial import Metorial
from openai import AsyncOpenAI

async def main():
  metorial = Metorial(api_key="your-metorial-api-key")
  openai_client = AsyncOpenAI(api_key="your-openai-api-key")
  
  response = await metorial.run(
    "What are the latest commits in the metorial/websocket-explorer repository?",
    "your-server-deployment-id",  # can also be a list
    openai_client,
    model="gpt-4o",
    max_iterations=25
  )
  
  print("Response:", response)

asyncio.run(main())

That's it! metorial.run() automatically:

  • Creates a session with your MCP server
  • Formats tools for your AI provider
  • Handles the execution loop
  • Manages tool execution
  • Returns the final response
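
As the comment in the example notes, the deployment argument also accepts a list, so one run can pull tools from several MCP server deployments. A minimal sketch (both IDs are placeholders):

response = await metorial.run(
  "Summarize the latest commits and open issues.",
  ["your-commits-deployment-id", "your-issues-deployment-id"],  # placeholder IDs
  openai_client,
  model="gpt-4o"
)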

Synchronous Usage

For synchronous applications, use MetorialSync:

from metorial import MetorialSync
from openai import OpenAI

metorial = MetorialSync(api_key="your-metorial-api-key")
openai_client = OpenAI(api_key="your-openai-api-key")

response = metorial.run(
  "What are the latest commits in the metorial/websocket-explorer repository?",
  "your-server-deployment-id",  # can also be a list
  openai_client,
  model="gpt-4o",
  max_iterations=25
)

print("Response:", response)

Provider Examples

Metorial works with all major AI providers. Here are examples using metorial.run():

OpenAI (GPT-4, GPT-3.5)

from metorial import Metorial
from openai import AsyncOpenAI

metorial = Metorial(api_key="your-metorial-api-key")
openai_client = AsyncOpenAI(api_key="your-openai-api-key")

response = await metorial.run(
  "What are the latest commits?",
  "your-deployment-id",
  openai_client,
  model="gpt-4o"
)

Anthropic (Claude)

from metorial import Metorial
import anthropic

metorial = Metorial(api_key="your-metorial-api-key")
anthropic_client = anthropic.AsyncAnthropic(api_key="your-anthropic-api-key")

response = await metorial.run(
  "What are the latest commits?",
  "your-deployment-id", 
  anthropic_client,
  model="claude-3-5-sonnet-20241022"
)

Google (Gemini)

from metorial import Metorial
import google.generativeai as genai

metorial = Metorial(api_key="your-metorial-api-key")
genai.configure(api_key="your-google-api-key")
google_client = genai.GenerativeModel('gemini-pro')

response = await metorial.run(
  "What are the latest commits?",
  "your-deployment-id",
  google_client,
  model="gemini-pro"
)

Mistral AI

from metorial import Metorial
from mistralai import AsyncMistral

metorial = Metorial(api_key="your-metorial-api-key")
mistral_client = AsyncMistral(api_key="your-mistral-api-key")

response = await metorial.run(
  "What are the latest commits?",
  "your-deployment-id",
  mistral_client,
  model="mistral-large-latest"
)

DeepSeek

from metorial import Metorial
from openai import AsyncOpenAI

metorial = Metorial(api_key="your-metorial-api-key")
deepseek_client = AsyncOpenAI(
  api_key="your-deepseek-api-key",
  base_url="https://api.deepseek.com"
)

response = await metorial.run(
  "What are the latest commits?",
  "your-deployment-id",
  deepseek_client,
  model="deepseek-chat"
)

Together AI

from metorial import Metorial
from openai import AsyncOpenAI

metorial = Metorial(api_key="your-metorial-api-key")
together_client = AsyncOpenAI(
  api_key="your-together-api-key",
  base_url="https://api.together.xyz/v1"
)

response = await metorial.run(
  "What are the latest commits?",
  "your-deployment-id",
  together_client,
  model="meta-llama/Llama-2-70b-chat-hf"
)

XAI (Grok)

from metorial import Metorial
from openai import AsyncOpenAI

metorial = Metorial(api_key="your-metorial-api-key")
xai_client = AsyncOpenAI(
  api_key="your-xai-api-key",
  base_url="https://api.x.ai/v1"
)

response = await metorial.run(
  "What are the latest commits?",
  "your-deployment-id",
  xai_client,
  model="grok-beta"
)

Advanced Usage

Session Management

For more control over the conversation flow, use session management directly:

import asyncio
from metorial import Metorial, MetorialOpenAI
from openai import AsyncOpenAI

async def main():
  metorial = Metorial(api_key="your-metorial-api-key")
  openai_client = AsyncOpenAI(api_key="your-openai-api-key")
  
  async def session_callback(session):
    messages = [{"role": "user", "content": "What are the latest commits?"}]
    
    for _ in range(10):  # cap the tool-calling loop at 10 rounds
      # Call OpenAI with Metorial tools
      response = await openai_client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=session.tools
      )
      
      choice = response.choices[0]
      tool_calls = choice.message.tool_calls
      
      if not tool_calls:
        print(choice.message.content)
        return
      
      # Execute tools through Metorial
      tool_responses = await session.call_tools(tool_calls)
      
      # Add to conversation
      messages.append({
        "role": "assistant",
        "tool_calls": [
          {
            "id": tc.id,
            "type": tc.type,
            "function": {
              "name": tc.function.name,
              "arguments": tc.function.arguments
            }
          } for tc in tool_calls
        ]
      })
      messages.extend(tool_responses)

  await metorial.with_provider_session(
    MetorialOpenAI.chat_completions(openai_client),
    "your-deployment-id",
    session_callback
  )

asyncio.run(main())
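
Inside the callback, session.tools is already formatted for the chosen provider and can be passed straight to the model, as above. A minimal sketch that only lists the available tools, assuming the OpenAI function-calling schema that the MetorialOpenAI adapter implies:

async def list_tools(session):
  # Assumes an OpenAI-style tools array: each entry wraps a function
  # with a name, description, and JSON-schema parameters.
  for tool in session.tools:
    print(tool["function"]["name"])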

Streaming Responses

For real-time streaming responses:

import asyncio
from metorial import Metorial, MetorialOpenAI
from metorial.types import StreamEventType
from openai import AsyncOpenAI

async def stream_chat():
  metorial = Metorial(api_key="your-metorial-api-key")
  openai_client = AsyncOpenAI(api_key="your-openai-api-key")
  
  async def stream_action(session):
    messages = [{"role": "user", "content": "What are the latest commits?"}]
    
    async for event in metorial.stream(
      openai_client, session, messages, max_iterations=10
    ):
      if event.type == StreamEventType.CONTENT:
        print(f"🤖 {event.content}", end="", flush=True)
      elif event.type == StreamEventType.TOOL_CALL:
        print(f"\n🔧 Executing {len(event.tool_calls)} tool(s)...")
      elif event.type == StreamEventType.COMPLETE:
        print(f"\n✅ Complete! Duration: {event.metadata.get('duration', 0):.2f}s")
      elif event.type == StreamEventType.ERROR:
        print(f"\n❌ Error: {event.error}")
        break
  
  await metorial.with_provider_session(
    MetorialOpenAI.chat_completions(openai_client),
    "your-deployment-id",
    stream_action
  )

asyncio.run(stream_chat())

Batch Processing

Process multiple messages concurrently:

import asyncio
from metorial import Metorial
from openai import AsyncOpenAI

async def batch_example():
  metorial = Metorial(api_key="your-metorial-api-key")
  openai_client = AsyncOpenAI(api_key="your-openai-api-key")
  
  messages = [
    "What are the latest commits?",
    "What are the main features?",
    "How do I get started?"
  ]
  
  results = await metorial.batch_run(
    messages,
    "your-deployment-id",
    openai_client,
    max_iterations=25
  )
  
  for i, result in enumerate(results):
    print(f"Response {i+1}: {result}")

asyncio.run(batch_example())

Error Handling

from metorial import MetorialAPIError

try:
  response = await metorial.run(
    "What are the latest commits?",
    "your-deployment-id",
    openai_client,
    model="gpt-4o"
  )
except MetorialAPIError as e:
  print(f"API Error: {e.message} (Status: {e.status_code})")
except Exception as e:
  print(f"Unexpected error: {e}")

Examples

Check out the examples/ directory for more comprehensive examples.

License

MIT License - see LICENSE file for details.
