MeshAI Python SDK — Agent Control Plane client
Python client for the MeshAI Agent Control Plane. Register agents, send telemetry, query anomalies, manage governance policies, and track EU AI Act compliance.
Install
```
pip install meshai-sdk
```
With framework auto-tracking:
```
pip install meshai-sdk[openai]           # OpenAI auto-tracking
pip install meshai-sdk[anthropic]        # Anthropic auto-tracking
pip install meshai-sdk[crewai]           # CrewAI auto-tracking
pip install meshai-sdk[langchain]        # LangChain/LangGraph auto-tracking
pip install meshai-sdk[autogen]          # AutoGen auto-tracking
pip install meshai-sdk[gemini]           # Google Gemini
pip install meshai-sdk[bedrock]          # AWS Bedrock
pip install meshai-sdk[llamaindex]       # LlamaIndex
pip install meshai-sdk[agno]             # Agno (ex-Phidata)
pip install meshai-sdk[pydantic-ai]      # Pydantic AI
pip install meshai-sdk[semantic-kernel]  # Microsoft Semantic Kernel
```
Quick Start
```python
from meshai import MeshAI

client = MeshAI(api_key="msh_...", agent_name="my-agent")
client.register(framework="crewai", model_provider="openai", model_name="gpt-4o")

# Automatic heartbeats every 60s
client.start_heartbeat()

# Track token usage (buffered, batched automatically)
client.track_usage(
    model_provider="openai",
    model_name="gpt-4o",
    input_tokens=1500,
    output_tokens=800,
)

# Graceful shutdown (also registered via atexit)
client.shutdown()
```
Auto-Tracking Integrations
OpenAI
```python
from meshai import MeshAI
from meshai.integrations.openai import wrap_openai
import openai

meshai = MeshAI(api_key="msh_...", agent_name="my-agent")
meshai.register(model_provider="openai", model_name="gpt-4o")

oai = wrap_openai(openai.OpenAI(), meshai=meshai)
response = oai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
# Usage automatically tracked!
```
Anthropic
```python
from meshai import MeshAI
from meshai.integrations.anthropic import wrap_anthropic
import anthropic

meshai = MeshAI(api_key="msh_...", agent_name="my-agent")
meshai.register(model_provider="anthropic", model_name="claude-sonnet-4-6")

ant = wrap_anthropic(anthropic.Anthropic(), meshai=meshai)
response = ant.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}],
)
```
CrewAI
```python
from meshai import MeshAI
from meshai.integrations.crewai import track_crewai

meshai = MeshAI(api_key="msh_...", agent_name="my-crew")
meshai.register(framework="crewai")

# Enable global tracking — all crews auto-track usage
track_crewai(meshai)

# Run your crew as normal — model extracted from each LLM call
crew.kickoff()
```
LangChain / LangGraph
```python
from meshai import MeshAI
from meshai.integrations.langchain import MeshAICallbackHandler
from langchain_openai import ChatOpenAI

meshai = MeshAI(api_key="msh_...", agent_name="my-agent")
meshai.register(framework="langchain")

handler = MeshAICallbackHandler(meshai)

# Use with any LangChain model — model extracted automatically
llm = ChatOpenAI(model="gpt-4o", callbacks=[handler])

# Or with LangGraph
config = {"callbacks": [handler]}
result = graph.stream(input, config=config)
```
AutoGen
```python
from meshai import MeshAI
from meshai.integrations.autogen import track_autogen

meshai = MeshAI(api_key="msh_...", agent_name="my-agent")
meshai.register(framework="autogen")

# Enable global tracking
track_autogen(meshai)

# Run agents as normal — all LLM calls tracked
```
Google Gemini
```python
from meshai import MeshAI
from meshai.integrations.gemini import wrap_gemini
from google import genai

meshai = MeshAI(api_key="msh_...", agent_name="my-agent")
meshai.register(framework="custom", model_provider="google")

client = genai.Client(api_key="...")
tracked = wrap_gemini(client, meshai=meshai)
response = tracked.models.generate_content(model="gemini-2.5-pro", contents="Hello")
```
AWS Bedrock
```python
from meshai import MeshAI
from meshai.integrations.bedrock import wrap_bedrock
import boto3

meshai = MeshAI(api_key="msh_...", agent_name="my-agent")
meshai.register(framework="custom", model_provider="bedrock")

bedrock = boto3.client("bedrock-runtime")
tracked = wrap_bedrock(bedrock, meshai=meshai)
response = tracked.converse(modelId="anthropic.claude-3-sonnet", messages=[...])
```
LlamaIndex
```python
from meshai import MeshAI
from meshai.integrations.llamaindex import MeshAILlamaHandler
from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager

meshai = MeshAI(api_key="msh_...", agent_name="my-agent")
meshai.register(framework="llamaindex")

handler = MeshAILlamaHandler(meshai)
Settings.callback_manager = CallbackManager([handler])
# All LlamaIndex LLM calls now auto-track usage
```
Agno
```python
from meshai import MeshAI
from meshai.integrations.agno import track_agno

meshai = MeshAI(api_key="msh_...", agent_name="my-agent")
meshai.register(framework="agno")

track_agno(meshai)
# All Agno agents now auto-track usage
```
Pydantic AI
```python
from meshai import MeshAI
from meshai.integrations.pydantic_ai import track_pydantic_ai

meshai = MeshAI(api_key="msh_...", agent_name="my-agent")
meshai.register(framework="pydantic-ai")

track_pydantic_ai(meshai)
# All Pydantic AI agents now auto-track usage
```
Semantic Kernel
```python
from meshai import MeshAI
from meshai.integrations.semantic_kernel import track_semantic_kernel
import semantic_kernel as sk

meshai = MeshAI(api_key="msh_...", agent_name="my-agent")
meshai.register(framework="semantic-kernel")

kernel = sk.Kernel()
track_semantic_kernel(meshai, kernel)
# All Semantic Kernel function calls now auto-track usage
```
Agent Queries

```python
# List all agents
agents = client.list_agents(status="healthy", page=1, limit=50)

# Get single agent
agent = client.get_agent("01AGENT_ID_HERE")

# Update agent
client.update_agent("01AGENT_ID", description="Updated description")

# Delete agent (soft delete)
client.delete_agent("01AGENT_ID")
```
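To collect every agent across pages, a simple loop works. This is a sketch of the pattern, not an SDK method: it assumes `list_agents(page=..., limit=...)` returns a plain list and that a short page signals the last one. Shown with a stub fetcher so it runs standalone:

```python
def fetch_all(fetch_page, limit=50):
    """Drain a paginated endpoint; a page shorter than `limit` ends the loop."""
    items, page = [], 1
    while True:
        batch = fetch_page(page, limit)
        items.extend(batch)
        if len(batch) < limit:
            return items
        page += 1

# Stub standing in for client.list_agents (hypothetical response shape)
data = [{"id": i} for i in range(120)]
agents = fetch_all(lambda page, limit: data[(page - 1) * limit:page * limit])
```

In real use the lambda would be `lambda page, limit: client.list_agents(page=page, limit=limit)`, adjusted to whatever envelope the API actually returns.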
Cost Intelligence
```python
# Cost summary
summary = client.get_cost_summary(start="2026-03-01T00:00:00Z", end="2026-03-17T00:00:00Z")

# Breakdown by agent or model
by_agent = client.get_cost_by_agent()
by_model = client.get_cost_by_model()
```
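The `start`/`end` parameters are ISO 8601 UTC timestamps. A small stdlib helper (not part of the SDK) can build a trailing window instead of hard-coding dates:

```python
from datetime import datetime, timedelta, timezone

def trailing_window(days: int) -> tuple[str, str]:
    """Return (start, end) as ISO 8601 UTC strings covering the last `days` days."""
    end = datetime.now(timezone.utc).replace(microsecond=0)
    start = end - timedelta(days=days)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return start.strftime(fmt), end.strftime(fmt)

start, end = trailing_window(14)
# then: client.get_cost_summary(start=start, end=end)
```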
Anomaly Detection
```python
# List active anomalies
anomalies = client.list_anomalies(severity="critical")

# Get summary
summary = client.get_anomaly_summary()

# Acknowledge or resolve
client.acknowledge_anomaly(event_id=42)
client.resolve_anomaly(event_id=42)
```
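For programmatic triage you can rank anomalies most-severe first before acknowledging them. The `severity` and `event_id` field names here are assumptions inferred from the filters above:

```python
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def triage(anomalies):
    """Sort anomalies most-severe first (assumed 'severity' field)."""
    return sorted(anomalies, key=lambda a: SEVERITY_RANK.get(a["severity"], 99))

sample = [
    {"event_id": 1, "severity": "low"},
    {"event_id": 2, "severity": "critical"},
    {"event_id": 3, "severity": "high"},
]
ordered = triage(sample)
```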
Governance
Risk Classification
```python
# AI-assisted risk suggestion
suggestion = client.get_risk_suggestion("01AGENT_ID")

# Classify agent risk (EU AI Act Article 6)
client.classify_risk(
    agent_id="01AGENT_ID",
    risk_level="high",
    justification="Handles PII in production",
    assessed_by="security-team",
)

# Get classification
risk = client.get_risk_classification("01AGENT_ID")
```
Policies
```python
# Create a policy
client.create_policy(
    name="Production models only",
    policy_type="model_allowlist",
    rules={"allowed_models": ["gpt-4o", "claude-3-sonnet"]},
    conditions={"environments": ["production"]},
)

# List policies
policies = client.list_policies(enabled=True)

# Dry-run evaluate
results = client.evaluate_policies(
    agent_id="01AGENT_ID",
    provider="openai",
    model="gpt-4o",
)

# Update or delete
client.update_policy(policy_id=1, enabled=False)
client.delete_policy(policy_id=1)
```
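Server-side evaluation is authoritative, but as a mental model, a `model_allowlist` policy of the shape above might be checked roughly like this. This is a sketch of the rule semantics, not the SDK's or the control plane's actual logic:

```python
def allows(policy: dict, model: str, environment: str) -> bool:
    """Rough approximation of a model_allowlist check.

    If the policy is scoped to environments and this one isn't listed,
    the policy doesn't apply (assumed semantics).
    """
    envs = policy.get("conditions", {}).get("environments")
    if envs and environment not in envs:
        return True
    return model in policy["rules"]["allowed_models"]

policy = {
    "rules": {"allowed_models": ["gpt-4o", "claude-3-sonnet"]},
    "conditions": {"environments": ["production"]},
}
```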
Approvals (HITL)
```python
# Check pending approvals
count = client.get_pending_count()

# List pending
pending = client.list_approvals(status="pending")

# Approve or deny
client.decide_approval(
    request_id=1,
    decision="approved",
    reviewer_id="admin",
    reason="Reviewed and approved",
)
```
Compliance (EU AI Act)
```python
# Readiness score (0-120)
readiness = client.get_readiness_score()

# FRIA template (Article 27)
fria = client.get_fria("01AGENT_ID")

# Transparency card
card = client.get_transparency_card("01AGENT_ID")
```
Incident Reporting (Article 73)
```python
# Report incident
client.create_incident(
    agent_id="01AGENT_ID",
    title="Data leak detected",
    description="Agent exposed PII in response",
    severity="critical",
    reported_by="security-team",
    is_widespread=False,  # True = 2-day deadline, False = 15-day
)

# List and update
incidents = client.list_incidents(status="reported")
client.update_incident(
    incident_id=1,
    root_cause="Model hallucination",
    corrective_actions="Added PII filter policy",
    authority_notified=True,
)
```
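The `is_widespread` flag maps to the Article 73 reporting deadline noted in the comment above; computing the due date is plain date arithmetic. The helper below is illustrative, not an SDK method:

```python
from datetime import datetime, timedelta, timezone

def reporting_deadline(reported_at: datetime, is_widespread: bool) -> datetime:
    """Article 73 deadline: 2 days if widespread, otherwise 15 days."""
    return reported_at + timedelta(days=2 if is_widespread else 15)

reported = datetime(2026, 3, 1, tzinfo=timezone.utc)
widespread_due = reporting_deadline(reported, True)   # March 3
standard_due = reporting_deadline(reported, False)    # March 16
```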
Billing
```python
# Current plan and agent usage
billing = client.get_billing_info()
# Returns: {plan, price_usd, max_agents, current_agents, at_limit}
```
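Given that shape, a pre-registration guard can avoid hitting the agent cap. Field names are taken from the comment above; treat the shape as indicative:

```python
def can_add_agent(billing: dict) -> bool:
    """True if the current plan has headroom for one more agent."""
    if billing.get("at_limit"):
        return False
    return billing["current_agents"] < billing["max_agents"]

full = {"plan": "team", "max_agents": 25, "current_agents": 25, "at_limit": True}
roomy = {"plan": "team", "max_agents": 25, "current_agents": 3, "at_limit": False}
```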
Kill Switch
```python
# Block an agent immediately (enforced at proxy layer)
client.block_agent(
    agent_id="01AGENT_ID",
    reason="Anomalous behavior detected — cost spike 10x above baseline",
)

# Unblock when resolved
client.unblock_agent(agent_id="01AGENT_ID")
```
Agent Relationships
```python
# Get an agent's model/provider dependencies
relationships = client.get_agent_relationships("01AGENT_ID")

# Get the full organization-wide relationship graph (nodes + edges)
graph = client.get_relationship_graph()
# Returns: {nodes: [...], edges: [...]} — ready for D3.js visualization
```
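Besides feeding D3.js, a `{nodes, edges}` payload is easy to walk in Python, for example to list what each node depends on. The `source`/`target` edge keys are assumptions (they match D3's force-layout convention, but verify against the actual response):

```python
from collections import defaultdict

def adjacency(graph: dict) -> dict:
    """Map each source node id to the ids it points at."""
    adj = defaultdict(list)
    for edge in graph["edges"]:
        adj[edge["source"]].append(edge["target"])
    return dict(adj)

graph = {
    "nodes": [{"id": "agent-a"}, {"id": "openai"}, {"id": "gpt-4o"}],
    "edges": [
        {"source": "agent-a", "target": "openai"},
        {"source": "openai", "target": "gpt-4o"},
    ],
}
deps = adjacency(graph)
```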
ABAC (Agent Owners)
```python
# Assign an owner with permissions
client.assign_owner(
    agent_id="01AGENT_ID",
    owner_type="team",
    owner_id="ml-platform-team",
    owner_name="ML Platform Team",
    permissions={"can_invoke": True, "can_configure": True, "can_delete": False},
)

# List owners of an agent
owners = client.list_agent_owners("01AGENT_ID")

# List agents owned by a specific owner
agents = client.list_owner_agents("ml-platform-team")

# Remove an owner
client.remove_owner(agent_id="01AGENT_ID", owner_id=1)
```
Agent Lifecycle
```python
# Set expiry, review frequency, and sponsor
client.set_agent_lifecycle(
    agent_id="01AGENT_ID",
    expires_at="2026-06-30T23:59:59Z",
    review_frequency="quarterly",
    sponsor_id="ml-platform-team",
)

# List expired agents
expired = client.list_expired_agents()

# List agents due for review
due = client.list_agents_due_review()
```
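A `review_frequency` like `"quarterly"` implies a next-review date you may want to compute locally, e.g. for your own reminders. Only `quarterly` appears in this document; the other frequency names and the day counts below are guesses, not SDK values:

```python
from datetime import datetime, timedelta, timezone

# Assumed frequency vocabulary; adapt to whatever the API actually accepts
FREQUENCY_DAYS = {"monthly": 30, "quarterly": 91, "annually": 365}

def next_review(last_review: datetime, frequency: str) -> datetime:
    """Approximate next review date from the last one."""
    return last_review + timedelta(days=FREQUENCY_DAYS[frequency])

last = datetime(2026, 1, 1, tzinfo=timezone.utc)
upcoming = next_review(last, "quarterly")
```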
Quarantine
```python
# Quarantine a shadow agent
client.quarantine_agent(
    agent_id="01AGENT_ID",
    reason="Unknown agent detected — not in registry",
)

# List quarantined agents
quarantined = client.list_quarantined_agents()

# Release from quarantine after review
client.release_quarantine(agent_id="01AGENT_ID")
```
Security Posture
```python
# Get security posture score (0-100) across 6 dimensions
posture = client.get_security_posture()
# Returns: {score, dimensions: {agent_ownership, expiry_coverage, access_reviews, ...}}
```
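With that shape, weak dimensions can be flagged for follow-up. That each dimension carries its own 0-100 score is an assumption mirroring the overall score; check the real payload before relying on it:

```python
def weak_dimensions(posture: dict, threshold: int = 70) -> list[str]:
    """Names of posture dimensions scoring below `threshold`, alphabetically."""
    return sorted(
        name for name, score in posture["dimensions"].items() if score < threshold
    )

posture = {
    "score": 72,
    "dimensions": {"agent_ownership": 90, "expiry_coverage": 55, "access_reviews": 60},
}
flagged = weak_dimensions(posture)
```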
Configuration
```python
client = MeshAI(
    api_key="msh_...",                  # Required
    agent_name="my-agent",              # Agent name (or pass to register())
    base_url="https://api.meshai.dev",
    environment="production",           # production, staging, dev
    batch_size=100,                     # Events per batch
    flush_interval_seconds=5.0,         # Auto-flush interval
    heartbeat_interval_seconds=60,      # Background heartbeat interval
    max_retries=3,                      # Retry count on failure
    timeout_seconds=10.0,               # HTTP request timeout
)
```
Design Principles
- Never crashes the host — all SDK errors are caught and logged
- Buffered batching — events flush every 5s or 100 events
- Background heartbeat — daemon thread, auto-stops on shutdown
- Minimal dependencies — only httpx
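The buffered-batching principle above (flush every 5s or 100 events) can be sketched with a minimal buffer. This illustrates the pattern only, not the SDK's internal implementation:

```python
import time

class EventBuffer:
    """Flush when `batch_size` events accumulate or `interval` seconds pass."""

    def __init__(self, flush_fn, batch_size=100, interval=5.0):
        self.flush_fn = flush_fn
        self.batch_size = batch_size
        self.interval = interval
        self.events = []
        self.last_flush = time.monotonic()

    def add(self, event):
        self.events.append(event)
        if (len(self.events) >= self.batch_size
                or time.monotonic() - self.last_flush >= self.interval):
            self.flush()

    def flush(self):
        if self.events:
            self.flush_fn(self.events)
            self.events = []
        self.last_flush = time.monotonic()

sent = []
buf = EventBuffer(sent.append, batch_size=3, interval=60.0)
for i in range(7):
    buf.add(i)
# Two full batches sent; one event still buffered until the next flush
```

A real implementation would also flush from a background timer and on `shutdown()`; this sketch only flushes from `add()` and explicit `flush()` calls.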
License
MIT