AgentMark Pydantic AI Adapter
Pydantic AI adapter for AgentMark - integrate AgentMark prompts with Pydantic AI for type-safe LLM interactions in Python.
Installation
pip install agentmark-pydantic-ai-v0
For specific providers, install with extras:
pip install agentmark-pydantic-ai-v0[openai]
pip install agentmark-pydantic-ai-v0[anthropic]
pip install agentmark-pydantic-ai-v0[gemini]
Quick Start
from agentmark_pydantic_ai_v0 import create_pydantic_ai_client, run_text_prompt
# Create client
client = create_pydantic_ai_client()
# Load and format a prompt
prompt = await client.load_text_prompt(ast) # AST from AgentMark compiler
params = await prompt.format(props={"name": "Alice"})
# Execute with runner utility
result = await run_text_prompt(params)
print(result.output)
# Or use Pydantic AI directly
from pydantic_ai import Agent
agent = Agent(params.model, system_prompt=params.system_prompt)
result = await agent.run(params.user_prompt)
print(result.output)
Features
- Type-safe integration: Full type safety from AgentMark prompts to Pydantic AI
- Model registry: Flexible model name resolution with pattern matching
- Native tools: Pass pydantic-ai Tool objects or callables directly
- MCP support: Model Context Protocol integration for external tool servers
- Structured output: Automatic JSON Schema to Pydantic model conversion
- Runner utilities: Convenience functions for common execution patterns
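The "structured output" feature above converts a JSON Schema into a typed model. A minimal pure-Python sketch of the underlying idea, mapping a flat schema's properties to Python type annotations; the helper name and the `(type, default)` field tuples are illustrative, not the adapter's internal API:

```python
# Illustrative: map a flat JSON Schema's property types to Python types.
# The adapter's real conversion targets Pydantic models; this sketch only
# shows the type-mapping step.
JSON_TO_PY = {"string": str, "integer": int, "number": float, "boolean": bool}

def schema_to_annotations(schema: dict) -> dict:
    """Return {field_name: (python_type, default)} for a flat object schema."""
    required = set(schema.get("required", []))
    fields = {}
    for name, spec in schema.get("properties", {}).items():
        py_type = JSON_TO_PY.get(spec.get("type"), object)
        # Required fields get Ellipsis (no default), optional ones get None.
        fields[name] = (py_type, ... if name in required else None)
    return fields

fields = schema_to_annotations({
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name"],
})
```

Field tuples in this shape can be fed to `pydantic.create_model` to build the final typed model.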
API Reference
Factory Function
from agentmark_pydantic_ai_v0 import create_pydantic_ai_client
client = create_pydantic_ai_client(
    model_registry=None,  # Optional custom model registry
    tools=None,           # Optional list of native Tool objects or callables
    mcp_registry=None,    # Optional MCP server registry
    eval_registry=None,   # Optional eval registry
    loader=None,          # Optional prompt loader
)
Model Registry
from agentmark_pydantic_ai_v0 import PydanticAIModelRegistry
import re
# Default registry (passthrough — model names forwarded as-is to Pydantic AI)
registry = PydanticAIModelRegistry.create_default()
# Register providers you need (user-driven, not pre-registered)
registry = (
    PydanticAIModelRegistry.create_default()
    .register_providers({"openai": "openai", "anthropic": "anthropic"})
)
# Or register specific models
registry = PydanticAIModelRegistry()
registry.register_models("gpt-4o", lambda name, opts: f"openai:{name}")
registry.register_models(
    re.compile(r"^claude-"),
    lambda name, opts: f"anthropic:{name}",
)
Tools
Tools are passed as native pydantic-ai Tool objects or plain callables:
from pydantic_ai import Tool
# Using callables (name is inferred from function name)
def search(query: str) -> str:
    return search_web(query)
client = create_pydantic_ai_client(tools=[search])
# Using Tool objects for more control
tool = Tool(function=search, name="search", description="Search the web")
client = create_pydantic_ai_client(tools=[tool])
The MDX config references tools by name. Only tools whose names match entries
in the MDX tools list will be included at adapt time.
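The name matching described above can be sketched as a simple filter; `select_tools` is a hypothetical helper shown only to illustrate the rule, not part of the adapter's API:

```python
# Keep only registered tools whose names appear in the MDX `tools` list.
# Plain callables are matched by function name; Tool-like objects by their
# `name` attribute.
def select_tools(registered, mdx_tool_names):
    wanted = set(mdx_tool_names)

    def tool_name(tool):
        return getattr(tool, "name", None) or getattr(tool, "__name__", None)

    return [t for t in registered if tool_name(t) in wanted]

def search(query: str) -> str: ...
def unused(x: int) -> int: ...

selected = select_tools([search, unused], ["search"])  # only `search` remains
```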
MCP Server Registry
from agentmark_pydantic_ai_v0 import McpServerRegistry
registry = McpServerRegistry()
# Register HTTP-based MCP server
registry.register("search-server", {
    "url": "http://localhost:8000/mcp",
    "headers": {"Authorization": "Bearer token"},  # Optional
})
# Register stdio-based MCP server
registry.register("python-runner", {
    "command": "python",
    "args": ["-m", "mcp_server"],
    "cwd": "/app",
    "env": {"API_KEY": "secret"},
})
# Use with client
client = create_pydantic_ai_client(mcp_registry=registry)
Then in your AgentMark prompt, reference MCP tools:
tools:
  - mcp://search-server/web-search  # Single MCP tool
  - mcp://search-server/*           # All tools from the MCP server
  - search                          # Native tool by name
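The three reference forms above follow a simple naming scheme. A minimal parser sketch for illustration (the adapter's actual resolution logic is internal; the function name and return shape here are assumptions):

```python
# Parse a tool reference from the MDX `tools` list:
#   "mcp://<server>/<tool>"  -> MCP tool on a registered server
#   "mcp://<server>/*"       -> all tools from that server
#   "<name>"                 -> native tool by name
def parse_tool_ref(ref):
    if ref.startswith("mcp://"):
        server, _, tool = ref[len("mcp://"):].partition("/")
        return ("mcp", server, tool)  # tool == "*" means all tools
    return ("native", None, ref)

parse_tool_ref("mcp://search-server/web-search")  # ("mcp", "search-server", "web-search")
parse_tool_ref("mcp://search-server/*")           # ("mcp", "search-server", "*")
parse_tool_ref("search")                          # ("native", None, "search")
```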
Runner Utilities
from agentmark_pydantic_ai_v0 import run_text_prompt, run_object_prompt, stream_text_prompt
# Run text prompt
result = await run_text_prompt(params)
print(result.output) # str
print(result.usage) # Usage stats
# Run object prompt (structured output)
result = await run_object_prompt(params)
print(result.output) # Typed Pydantic model
# Stream text prompt
async for chunk in stream_text_prompt(params):
    print(chunk, end="", flush=True)
Webhook Handler
For building HTTP servers that execute AgentMark prompts (used by the CLI dev server):
from agentmark_pydantic_ai_v0 import create_pydantic_ai_client, PydanticAIWebhookHandler
# Create client and handler
client = create_pydantic_ai_client()
handler = PydanticAIWebhookHandler(client)
# Execute a prompt (non-streaming)
result = await handler.run_prompt(prompt_ast, {"shouldStream": False})
print(result["result"]) # "Hello, world!"
# Execute a prompt (streaming)
result = await handler.run_prompt(prompt_ast, {"shouldStream": True})
async for chunk in result["stream"]:
    print(chunk)  # NDJSON chunks
The webhook handler implements the AgentMark webhook protocol, producing NDJSON responses compatible with the CLI.
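NDJSON is simply one JSON object per line. A minimal sketch of emitting and parsing chunks in that framing; the `type`/`delta` field names are hypothetical placeholders, not the exact AgentMark webhook protocol:

```python
import json

# Encode a sequence of chunk dicts as NDJSON: one JSON object per line.
def encode_chunks(chunks):
    return "".join(json.dumps(c) + "\n" for c in chunks)

# Decode an NDJSON body back into a list of chunk dicts.
def decode_chunks(body):
    return [json.loads(line) for line in body.splitlines() if line]

body = encode_chunks([{"type": "text", "delta": "Hello"},
                      {"type": "text", "delta": ", world!"}])
decoded = decode_chunks(body)
```

Line-delimited framing is what lets the CLI consume partial results as they stream, without waiting for a complete JSON document.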
License
MIT
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file agentmark_pydantic_ai_v0-0.1.4.tar.gz.
File metadata
- Download URL: agentmark_pydantic_ai_v0-0.1.4.tar.gz
- Upload date:
- Size: 41.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 44c46ce032fccb062d42692921dba9ba40ee90b266b9f65c81b5da795b159e59 |
| MD5 | df814204f14949b5be6745d040eb5c4e |
| BLAKE2b-256 | 2a862e86938900b961795304f34b0b122d61febcc58fc8fdddb6b51ca42a8921 |
File details
Details for the file agentmark_pydantic_ai_v0-0.1.4-py3-none-any.whl.
File metadata
- Download URL: agentmark_pydantic_ai_v0-0.1.4-py3-none-any.whl
- Upload date:
- Size: 24.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 8e98bb8586a28c1c9dcd439969f038a879ad5fbfd188a9462a98389a6f62ac77 |
| MD5 | 4984c077d081760219808c2563a89825 |
| BLAKE2b-256 | 5754ed64e8c453e9582f32b92a36d44e3f7ec0d0512e2b9f8f9e65bc80a8c92b |