Core components for Metorial Python SDK

Project description

Metorial Python SDK

The official Python SDK for Metorial.

Available Providers

Provider   | Format                        | Description
---------- | ----------------------------- | ------------------------------
OpenAI     | OpenAI function calling       | GPT-4, GPT-3.5, etc.
Anthropic  | Claude tool format            | Claude 3.5, Claude 3, etc.
Google     | Gemini function declarations  | Gemini Pro, Gemini Flash
Mistral    | Mistral function calling      | Mistral Large, Codestral
DeepSeek   | OpenAI-compatible             | DeepSeek Chat, DeepSeek Coder
TogetherAI | OpenAI-compatible             | Llama, Mixtral, etc.
XAI        | OpenAI-compatible             | Grok models
AI SDK     | Framework tools               | Vercel AI SDK, etc.

Installation

# Install core metorial package (includes all provider adapters)
pip install metorial

# Install with specific providers (includes provider client libraries)
pip install metorial[openai,anthropic,google,mistral,deepseek,togetherai,xai]
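
To confirm the installation, you can query the resolved version from Python. This is a minimal check using only the standard library; it assumes the distribution is named metorial, as installed above:

from importlib.metadata import version

# Prints the installed version of the metorial distribution
print(version("metorial"))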

Quick Start

Simple Usage

import asyncio
from metorial import Metorial
from openai import AsyncOpenAI

async def main():
  metorial = Metorial(api_key="your-metorial-api-key")
  openai = AsyncOpenAI(api_key="your-openai-api-key")
  
  response = await metorial.run(
    message="Search Hackernews for the latest AI discussions.",
    server_deployments=["hacker-news-server-deployment"],
    client=openai,
    model="gpt-4o",
    max_steps=25    # optional
  )
  
  print("Response:", response.text)

asyncio.run(main())

💡 Tip for Jupyter/Colab Users: If you're running in a Jupyter notebook or Google Colab, you can skip the async def main(): wrapper and asyncio.run() and just use await directly at the top level.
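
For example, the Quick Start above collapses to the following in a notebook cell (same placeholder keys and deployment ID as before):

from metorial import Metorial
from openai import AsyncOpenAI

metorial = Metorial(api_key="your-metorial-api-key")
openai = AsyncOpenAI(api_key="your-openai-api-key")

# Top-level await works directly in Jupyter/Colab cells
response = await metorial.run(
  message="Search Hackernews for the latest AI discussions.",
  server_deployments=["hacker-news-server-deployment"],
  client=openai,
  model="gpt-4o",
)
print(response.text)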

OAuth + Multiple Deployments

For integrations requiring OAuth authentication (like Google Calendar) and multiple server deployments:

import asyncio
import os
from metorial import Metorial
from anthropic import AsyncAnthropic

async def main():
  metorial = Metorial(api_key=os.getenv("METORIAL_API_KEY"))
  anthropic = AsyncAnthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))

  # Create OAuth session for authenticated services
  google_cal_deployment_id = os.getenv("GOOGLE_CALENDAR_DEPLOYMENT_ID")
  
  print("🔗 Creating OAuth session...")
  oauth_session = metorial.oauth.sessions.create(
    server_deployment_id=google_cal_deployment_id
  )

  print("OAuth URLs for user authentication:")
  print(f"   Google Calendar: {oauth_session.url}")

  print("\n⏳ Waiting for OAuth completion...")
  await metorial.oauth.wait_for_completion([oauth_session])
  print("✅ OAuth session completed!")

  # Use multiple server deployments with mixed auth
  hackernews_deployment_id = os.getenv("HACKERNEWS_DEPLOYMENT_ID")
  
  result = await metorial.run(
    message="""Search Hackernews for the latest AI discussions using the available tools. 
    Then create a calendar event using Google Calendar tools with my@email.address for tomorrow at 2pm to discuss AI trends.""",
    server_deployments=[
      { "serverDeploymentId": google_cal_deployment_id, "oauthSessionId": oauth_session.id },
      { "serverDeploymentId": hackernews_deployment_id },
    ],
    client=anthropic,
    model="claude-sonnet-4-20250514",
    max_tokens=4096,
    max_steps=25,
  )
  print(result.text)

asyncio.run(main())

That's it! metorial.run() automatically:

  • Creates a session with your MCP server
  • Formats tools for your AI provider
  • Handles the execution loop
  • Manages tool execution
  • Returns the final response

Advanced Usage with Provider Sessions

For more control over the conversation flow, you can use with_provider_session:

import asyncio
from metorial import Metorial, MetorialOpenAI
from openai import AsyncOpenAI

async def main():
  metorial = Metorial(api_key="your-metorial-api-key")
  openai = AsyncOpenAI(api_key="your-openai-api-key")

  messages = [
    {"role": "user", "content": "What are the top hackernews posts?"}
  ]

  async def session_action(session):
    for i in range(10):
      response = await openai.chat.completions.create(
        messages=messages,
        model="gpt-4o",
        tools=session["tools"]
      )

      choice = response.choices[0]
      tool_calls = choice.message.tool_calls

      if not tool_calls:
        print(choice.message.content)
        return

      # Execute tools through Metorial
      tool_responses = await session["callTools"](tool_calls)

      # Add to conversation
      messages.append({
        "role": "assistant",
        "tool_calls": tool_calls
      })
      messages.extend(tool_responses)

  await metorial.with_provider_session(
    MetorialOpenAI.chat_completions,
    [{"serverDeploymentId": "your-deployment-id"}],
    session_action
  )

asyncio.run(main())

This approach gives you full control over the conversation loop while still benefiting from Metorial's tool management.

Provider Examples

Metorial works with all major AI providers. Here are examples using metorial.run():

OpenAI (GPT-4, GPT-3.5)

from metorial import Metorial
from openai import AsyncOpenAI

metorial = Metorial(api_key="your-metorial-api-key")
openai = AsyncOpenAI(api_key="your-openai-api-key")

response = await metorial.run(
  message="What are the latest commits?",
  server_deployments=["your-deployment-id"],
  client=openai,
  model="gpt-4o"
)

Anthropic (Claude)

from metorial import Metorial
from anthropic import AsyncAnthropic

metorial = Metorial(api_key="your-metorial-api-key")
anthropic = AsyncAnthropic(api_key="your-anthropic-api-key")

response = await metorial.run(
  message="What are the latest commits?",
  server_deployments=["your-deployment-id"],
  client=anthropic,
  model="claude-3-5-sonnet-20241022"
)

Google (Gemini)

from metorial import Metorial
import google.generativeai as genai

metorial = Metorial(api_key="your-metorial-api-key")
genai.configure(api_key="your-google-api-key")
gemini = genai.GenerativeModel('gemini-pro')

response = await metorial.run(
  message="What are the latest commits?",
  server_deployments=["your-deployment-id"],
client=gemini,
  model="gemini-pro"
)

Mistral AI

from metorial import Metorial
from mistralai import Mistral

metorial = Metorial(api_key="your-metorial-api-key")
mistral = Mistral(api_key="your-mistral-api-key")

response = await metorial.run(
  message="What are the latest commits?",
  server_deployments=["your-deployment-id"],
  client=mistral,
  model="mistral-large-latest"
)

DeepSeek

from metorial import Metorial
from openai import AsyncOpenAI

metorial = Metorial(api_key="your-metorial-api-key")
deepseek = AsyncOpenAI(
  api_key="your-deepseek-api-key",
  base_url="https://api.deepseek.com"
)

response = await metorial.run(
  message="What are the latest commits?",
  server_deployments=["your-deployment-id"],
  client=deepseek,
  model="deepseek-chat"
)

Together AI

from metorial import Metorial
from openai import AsyncOpenAI

metorial = Metorial(api_key="your-metorial-api-key")
together = AsyncOpenAI(
  api_key="your-together-api-key",
  base_url="https://api.together.xyz/v1"
)

response = await metorial.run(
  message="What are the latest commits?",
  server_deployments=["your-deployment-id"],
  client=together,
  model="meta-llama/Llama-2-70b-chat-hf"
)

XAI (Grok)

from metorial import Metorial
from openai import AsyncOpenAI

metorial = Metorial(api_key="your-metorial-api-key")
xai = AsyncOpenAI(
  api_key="your-xai-api-key",
  base_url="https://api.x.ai/v1"
)

response = await metorial.run(
  message="What are the latest commits?",
  server_deployments=["your-deployment-id"],
  client=xai,
  model="grok-beta"
)

Error Handling

from metorial import MetorialAPIError

try:
  response = await metorial.run(
    message="What are the latest commits?",
    server_deployments=["your-deployment-id"],
    client=openai,
    model="gpt-4o"
  )
except MetorialAPIError as e:
  print(f"API Error: {e.message} (Status: {e.status_code})")
except Exception as e:
  print(f"Unexpected error: {e}")

Examples

Check out the examples/ directory for more comprehensive examples.

License

MIT License - see LICENSE file for details.

Support


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

metorial_core-1.0.8.tar.gz (31.3 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

metorial_core-1.0.8-py3-none-any.whl (45.3 kB)

Uploaded Python 3

File details

Details for the file metorial_core-1.0.8.tar.gz.

File metadata

  • Download URL: metorial_core-1.0.8.tar.gz
  • Size: 31.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for metorial_core-1.0.8.tar.gz

Algorithm   | Hash digest
----------- | ----------------------------------------------------------------
SHA256      | 66647a4466d41c721fe8768026bff5367d4ed4a9ca926300ce136fd876910036
MD5         | 0e546469f99719571be2fa5c7de05ff6
BLAKE2b-256 | bd5624c25cfa78d1b6daa621a88b623148bfad4fe80b62e5dece3db1766f9844

See more details on using hashes here.
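
As a quick local check, you can recompute the SHA256 of a downloaded file with the standard library and compare it against the digest above (a sketch; adjust the path to wherever the archive was saved):

import hashlib

# Recompute the SHA256 of the downloaded sdist and compare with the published digest
expected = "66647a4466d41c721fe8768026bff5367d4ed4a9ca926300ce136fd876910036"
with open("metorial_core-1.0.8.tar.gz", "rb") as f:
  digest = hashlib.sha256(f.read()).hexdigest()
print("OK" if digest == expected else "MISMATCH")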

Provenance

The following attestation bundles were made for metorial_core-1.0.8.tar.gz:

Publisher: release.yml on metorial/metorial-python

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file metorial_core-1.0.8-py3-none-any.whl.

File metadata

  • Download URL: metorial_core-1.0.8-py3-none-any.whl
  • Size: 45.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for metorial_core-1.0.8-py3-none-any.whl

Algorithm   | Hash digest
----------- | ----------------------------------------------------------------
SHA256      | 0e2b970a58080f9a1daf0298cc6ee1af005e9ee74d421d8fc8cd619cc10ce2f6
MD5         | 316fef5e53031000afe4b70f9b0f84be
BLAKE2b-256 | 847e5b2c6df22c8a800952a8fbcd32a9df7cf1ae057eef86c80927fc57902cc8

See more details on using hashes here.

Provenance

The following attestation bundles were made for metorial_core-1.0.8-py3-none-any.whl:

Publisher: release.yml on metorial/metorial-python

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
