
Lightweight framework for building tool-using AI agents with MCP support


agentory

A lightweight Python library for building tool-calling agents.

Installation

pip install agentory

Install only the providers you actually need:

pip install "agentory[openai]"    # also enables ChatAzureOpenAI
pip install "agentory[anthropic]"
pip install "agentory[all]"

Requires Python 3.12+.

Quickstart

import asyncio
from llmify import ChatOpenAI
from agentory import Agent, Tools, ToolCallEvent
from agentory.views import AgentResult

tools = Tools()

@tools.action("Return the current UTC time as an ISO-8601 string.")
def get_time() -> str:
    from datetime import datetime, timezone
    return datetime.now(timezone.utc).isoformat()

async def main():
    llm = ChatOpenAI(model="gpt-5.4-mini")
    agent = Agent(
        instructions="You are a helpful assistant.",
        llm=llm,
        tools=tools,
    )
    async for event in agent.run("What time is it?"):
        if isinstance(event, ToolCallEvent):
            print(f"[tool] {event.tool_name}: {event.status}")
        elif isinstance(event, AgentResult):
            print(event.output)

asyncio.run(main())

Core API

Agent

Agent(
    instructions: str,
    llm: ChatOpenAI | ChatAzureOpenAI | ChatAnthropic,
    tools: Tools | None = None,
    mcp_servers: list[MCPServer] | None = None,
    skills: list[Skill] | None = None,
    max_iterations: int = 10,
    context: Any | None = None,
    message_store: MessageStore | None = None,
    injectables: tuple[Any, ...] | list[Any] | None = None,
    use_done_tool: bool = False,
)

The main agent class. Call agent.run(task) to get an AsyncIterator[StreamEvent] that yields ToolCallEvent and AgentResult objects.

  • context is an optional shared dependency object injected into tools via Inject[YourContextType].
  • message_store is an optional custom message store implementation.
  • injectables lets you register additional dependencies for Inject[...] resolution.
  • use_done_tool registers a built-in done tool that agents can call to terminate explicitly.

MCP servers are connected automatically on the first run() call. Call await agent.close() when done to clean up MCP connections. The async context manager (async with) is also supported as an alternative.

agent = Agent(instructions="...", llm=llm, mcp_servers=[server])
async for event in agent.run("Do something"):
    print(event)
await agent.close()
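
The two cleanup styles are interchangeable: entering the agent with async with returns it, and exiting calls close() for you. A minimal sketch of that pattern with a stand-in class (DemoAgent is hypothetical, not part of agentory):

```python
import asyncio


class DemoAgent:
    """Stand-in illustrating the close() / async-with equivalence."""

    def __init__(self):
        self.closed = False

    async def close(self):
        # In agentory this is where MCP connections would be torn down.
        self.closed = True

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        await self.close()


async def main():
    # Style 1: explicit close()
    a = DemoAgent()
    await a.close()

    # Style 2: async context manager
    async with DemoAgent() as b:
        pass

    return a.closed, b.closed


print(asyncio.run(main()))  # (True, True)
```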

Tools

A registry that turns plain functions into LLM-callable tools.

tools = Tools()

@tools.action("Fetch the content of a URL.", status_label="Fetching URL")
async def fetch(url: str) -> str:
    ...
  • description – shown to the LLM in the tool schema.
  • name – overrides the function name.
  • status_label – either a string, or a callable that receives the typed params model and returns a human-readable status shown during streaming.
  • status – backward-compatible alias for status_label.
  • params – optional Pydantic model used for argument validation/schema generation.

Type hints on parameters are automatically converted to JSON Schema. Use Annotated[str, "description"] to attach per-parameter descriptions.
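
As a sketch of what that conversion can look like (illustrative only, not agentory's actual implementation), Annotated metadata is readable via typing.get_type_hints(..., include_extras=True):

```python
from typing import Annotated, get_args, get_origin, get_type_hints

# Python-to-JSON-Schema type mapping (illustrative subset).
TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}


def fetch(url: Annotated[str, "The URL to download"], retries: int = 3) -> str:
    ...


def to_schema(fn) -> dict:
    """Build a JSON Schema 'properties' object from a function signature."""
    hints = get_type_hints(fn, include_extras=True)
    hints.pop("return", None)
    props = {}
    for name, hint in hints.items():
        if get_origin(hint) is Annotated:
            base, *meta = get_args(hint)
            props[name] = {"type": TYPE_MAP[base], "description": meta[0]}
        else:
            props[name] = {"type": TYPE_MAP[hint]}
    return {"type": "object", "properties": props}


print(to_schema(fetch))
# {'type': 'object', 'properties': {'url': {'type': 'string',
#  'description': 'The URL to download'}, 'retries': {'type': 'integer'}}}
```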

When params is provided, tool functions can receive a single typed model parameter:

from pydantic import BaseModel


class SearchParams(BaseModel):
    query: str
    limit: int = 5


@tools.action(
    "Search docs.",
    params=SearchParams,
    status_label=lambda p: f"Searching {p.query}",
)
def search(params: SearchParams) -> str:
    ...

Dependency Injection with Inject

Use Inject[...] to mark parameters for dependency injection. The Agent wires tool context automatically and always provides the active message store as an injectable. You can register your own dependencies via injectables=[...] on Agent.

from agentory import Agent, Inject, Tools
from agentory.history import MessageStore

class SpotifyClient: ...
class UnsplashClient: ...

tools = Tools()

@tools.action("Search tracks on Spotify.")
async def search_tracks(spotify: Inject[SpotifyClient], query: str) -> str:
    ...

@tools.action("Search photos on Unsplash.")
async def search_photos(unsplash: Inject[UnsplashClient], query: str) -> str:
    ...

@tools.action("Count stored messages.")
def message_count(store: Inject[MessageStore]) -> int:
    return len(store.messages())

agent = Agent(
    instructions="...",
    llm=llm,
    tools=tools,
    injectables=[SpotifyClient(), UnsplashClient()],
)

For explicit context management, you can still set or replace a ToolContext directly on Tools:

from agentory import ToolContext

context = ToolContext().provide(spotify_client, unsplash_client)
tools.set_context(context)

Tool

Low-level dataclass representing a single tool. Usually created via Tools.action; useful when constructing tools manually or from MCP servers.

Skill

A reusable block of instructions injected into the system prompt inside a <skill> block.

from pathlib import Path
from agentory import Skill

skill = Skill.from_path(Path("my_skill.md"))
# or load SKILL.md from a directory:
skill = Skill.from_directory(Path("skills/my_skill/"))

agent = Agent(instructions="...", llm=llm, skills=[skill])

Skill files use optional YAML frontmatter for name and description:

---
name: web-search
description: Search the web for information
---

Use the search tool whenever the user asks about recent events...
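
To make the format concrete, here is an illustrative sketch of how such frontmatter could be split from the body. This is not agentory's actual parser (which may support full YAML); parse_skill is a hypothetical helper handling simple key: value pairs:

```python
def parse_skill(text: str) -> tuple[dict, str]:
    """Split optional '---' frontmatter from a skill body.

    Illustrative only: handles flat 'key: value' lines, not full YAML.
    """
    meta: dict[str, str] = {}
    body = text
    if text.startswith("---\n"):
        header, _, body = text[4:].partition("\n---\n")
        for line in header.splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta, body.strip()


meta, body = parse_skill(
    "---\nname: web-search\ndescription: Search the web\n---\n\nUse the search tool...\n"
)
print(meta["name"])  # web-search
```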

MCPServerStdio

Connects to any Model Context Protocol server over stdio and exposes its tools to the agent.

from agentory import Agent, MCPServerStdio

server = MCPServerStdio(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
)

agent = Agent(instructions="...", llm=llm, mcp_servers=[server])
async for event in agent.run("List files in /tmp"):
    print(event)
await agent.close()

Options:

  • command (required) – Executable to spawn.
  • args (default []) – Arguments for the command.
  • env (default None) – Extra environment variables; inherits the current environment when None.
  • cache_tools_list (default True) – Cache tool discovery after the first call.
  • allowed_tools (default None) – Whitelist of tool names to expose.
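
Putting the options together, a restricted server might look like the sketch below. The tool names passed to allowed_tools are an assumption about what the filesystem MCP server exposes; check that server's own documentation for the exact list.

```python
from agentory import MCPServerStdio

server = MCPServerStdio(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
    # Extra variables are merged on top of the inherited environment.
    env={"NODE_OPTIONS": "--max-old-space-size=256"},
    # Hypothetical tool names: expose only read-oriented tools.
    allowed_tools=["read_file", "list_directory"],
    cache_tools_list=True,
)
```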

StreamEvent

type StreamEvent = ToolCallEvent | AgentResult

Events yielded by agent.run(). A ToolCallEvent signals that a tool is being called. AgentResult is emitted when the run finishes.

@dataclass
class ToolCallEvent:
    tool_name: str
    status: str | None


@dataclass
class AgentResult:
    output: str
    finish_reason: Literal["done", "max_iterations_reached"]
