A framework for creating AI agents.
Project description
Agenty
A Pythonic framework for building AI agents and LLM pipelines, powered by pydantic-ai. The framework emphasizes simplicity and maintainability without sacrificing power, making it an ideal choice for rapid prototyping.
[!Caution] Initial Development: Agenty is under active development. Expect frequent breaking changes until we reach a stable release.
Agenty provides a clean, type-safe interface for creating:
- Conversational AI agents with structured inputs and outputs
- LLM pipelines
- Complex agent interactions with minimal boilerplate
Key Features
- Built on pydantic-ai for type validation
- Automatic conversation history management
- Intuitive Pythonic interfaces
The framework is currently only officially tested against the OpenAI API (through a proxy such as LiteLLM or OpenRouter), although it should in principle support every model that pydantic-ai supports.
[!TIP] Looking for a more mature alternative? Check out atomic-agents, which heavily inspired this project.
Installation
```shell
pip install agenty
```
Or with uv:
```shell
uv add agenty
```
Quick Preview
Here's a simple example to get started:
```python
import asyncio

from agenty import Agent
from pydantic_ai.models.openai import OpenAIModel


async def main():
    agent = Agent(
        model=OpenAIModel("gpt-4o", api_key="your-api-key"),
        system_prompt="You are a helpful and friendly AI assistant.",
    )
    response = await agent.run("Hello, how are you?")
    print(response)


asyncio.run(main())
```
In most cases, you'll want to build a custom AI agent by creating your own class that inherits from `Agent`. The code below is functionally equivalent to the example above, and this is the recommended way to use the framework:
```python
import asyncio

from agenty import Agent
from pydantic_ai.models.openai import OpenAIModel


class Assistant(Agent):
    model = OpenAIModel("gpt-4o", api_key="your-api-key")
    system_prompt = "You are a helpful and friendly AI assistant."


async def main():
    agent = Assistant()
    response = await agent.run("Hello, how are you?")
    print(response)


asyncio.run(main())
```
Tools
Add capabilities to your agents with simple decorators:
```python
from agenty import Agent, tool


class WeatherAgent(Agent):
    system_prompt = "You help users check the weather."

    def __init__(self, location: str):
        super().__init__()
        self.location = location
        self.temperature = 72  # Simulated temperature

    @tool
    def get_temperature(self) -> float:
        """Get the current temperature."""
        return self.temperature

    @tool
    def get_location(self) -> str:
        """Get the configured location."""
        return self.location
```
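Under the hood, a decorator like `@tool` typically just marks the method and records its docstring so the model can be told which functions it may call. Here is a minimal pure-Python sketch of that registration mechanism (an illustration only, not Agenty's actual implementation):

```python
import inspect


def tool(fn):
    """Mark a method as a tool the agent may expose to the model."""
    fn._is_tool = True
    return fn


class SketchAgent:
    def tools(self):
        """Collect every @tool-decorated method with its docstring."""
        found = {}
        for name, member in inspect.getmembers(self, predicate=inspect.ismethod):
            if getattr(member.__func__, "_is_tool", False):
                found[name] = (member.__doc__ or "").strip()
        return found


class WeatherSketch(SketchAgent):
    def __init__(self, location: str):
        self.location = location

    @tool
    def get_location(self) -> str:
        """Get the configured location."""
        return self.location


agent = WeatherSketch("Tokyo")
print(agent.tools())         # {'get_location': 'Get the configured location.'}
print(agent.get_location())  # Tokyo
```

The docstrings matter: they are what a framework would hand to the model as the tool's description, which is why each `@tool` method above carries one.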
Structured I/O
Define type-safe inputs and outputs for predictable behavior:
```python
from typing import List

from agenty import Agent
from agenty.types import BaseIO


class User(BaseIO):
    name: str
    age: int
    hobbies: List[str]


class UserExtractor(Agent[str, User]):
    input_schema = str
    output_schema = User
    system_prompt = "Extract user information from the text"
```
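Conceptually, a structured-output agent asks the model to reply in JSON and then validates that reply against the output schema before returning a typed object. The rough sketch below uses a plain dataclass to illustrate that validation step (Agenty itself uses pydantic models via `BaseIO`, which do far more than this):

```python
import json
from dataclasses import dataclass, fields
from typing import List


@dataclass
class User:
    name: str
    age: int
    hobbies: List[str]


def validate(schema, raw: str):
    """Parse the model's JSON reply and check the expected fields exist."""
    data = json.loads(raw)
    expected = {f.name for f in fields(schema)}
    missing = expected - data.keys()
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return schema(**{k: data[k] for k in expected})


# A hypothetical model reply:
reply = '{"name": "Ada", "age": 36, "hobbies": ["chess", "math"]}'
user = validate(User, reply)
print(user.name, user.age)  # Ada 36
```

The payoff is the same in either case: downstream code receives a `User` with known fields and types instead of a free-form string.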
Agent Pipelines
Chain multiple agents together for complex workflows:
```python
from agenty import Agent
from pydantic_ai.models.openai import OpenAIModel


class TextCleaner(Agent[str, str]):
    model = OpenAIModel("gpt-4o-mini", api_key="your-api-key")
    system_prompt = "Clean and format the input text"


class SentimentAnalyzer(Agent[str, str]):
    model = OpenAIModel("gpt-4o-mini", api_key="your-api-key")
    system_prompt = "Analyze the sentiment of the text"


# Create and use the pipeline (inside an async function)
pipeline = TextCleaner() | SentimentAnalyzer()
result = await pipeline.run("This is my input text!")
```
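The `|` operator can be read as composition over `run`: each agent's output becomes the next agent's input. A minimal sketch of how such chaining might be wired up (hypothetical `Step`/`Pipeline` names, not Agenty internals):

```python
import asyncio


class Step:
    """A pipeline stage; subclasses override run()."""

    async def run(self, text: str) -> str:
        return text

    def __or__(self, other: "Step") -> "Pipeline":
        return Pipeline([self, other])


class Pipeline(Step):
    def __init__(self, steps):
        self.steps = steps

    def __or__(self, other: Step) -> "Pipeline":
        # Chaining onto a pipeline appends a stage rather than nesting.
        return Pipeline(self.steps + [other])

    async def run(self, text: str) -> str:
        # Feed each stage's output into the next stage.
        for step in self.steps:
            text = await step.run(text)
        return text


class Upper(Step):
    async def run(self, text: str) -> str:
        return text.upper()


class Exclaim(Step):
    async def run(self, text: str) -> str:
        return text + "!"


result = asyncio.run((Upper() | Exclaim()).run("hello"))
print(result)  # HELLO!
```

Because `__or__` on a pipeline appends rather than nests, `a | b | c` stays a flat list of three stages, which keeps execution order obvious.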
Templates
Create dynamic prompts with Jinja templates:
```python
from agenty import Agent


class DynamicGreeter(Agent):
    system_prompt = """
    You are a greeter who:
    - Speaks in a {{TONE}} tone
    - Gives {{LENGTH}} responses
    """
    TONE: str = "friendly"
    LENGTH: str = "concise"
```
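Rendering amounts to substituting the agent's template variables (the upper-case class attributes) into the prompt. The simplified stand-in below uses a plain regex in place of the Jinja engine, just to show the substitution step:

```python
import re


class GreeterSketch:
    system_prompt = """
    You are a greeter who:
    - Speaks in a {{TONE}} tone
    - Gives {{LENGTH}} responses
    """
    TONE = "friendly"
    LENGTH = "concise"


def render(agent) -> str:
    """Replace each {{NAME}} placeholder with the matching attribute."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(getattr(agent, m.group(1))),
        agent.system_prompt,
    )


print(render(GreeterSketch()))
```

Real Jinja templates add conditionals, loops, and filters on top of this, but the core idea is the same: the prompt is data until render time.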
Hooks
Transform inputs and outputs with hooks:
```python
from agenty import Agent, hook


class MyAgent(Agent[str, str]):
    @hook.input
    def add_prefix(self, input: str) -> str:
        return f"prefix_{input}"

    @hook.output
    def add_suffix(self, output: str) -> str:
        return f"{output}_suffix"
```
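Hooks are just transformations wrapped around `run`: input hooks rewrite the prompt before it reaches the model, and output hooks rewrite the reply before it is returned. A pure-Python sketch of that wrapping (the `hook` markers here are hypothetical stand-ins, not Agenty's internals):

```python
class hook:
    """Decorators that tag a method as an input or output transform."""

    @staticmethod
    def input(fn):
        fn._hook = "input"
        return fn

    @staticmethod
    def output(fn):
        fn._hook = "output"
        return fn


class SketchAgent:
    def _hooks(self, kind):
        """Find methods tagged with the given hook kind."""
        return [
            getattr(self, name)
            for name in dir(type(self))
            if getattr(getattr(type(self), name), "_hook", None) == kind
        ]

    def run(self, text: str) -> str:
        for h in self._hooks("input"):
            text = h(text)
        reply = text.upper()  # stand-in for the actual model call
        for h in self._hooks("output"):
            reply = h(reply)
        return reply


class HookedAgent(SketchAgent):
    @hook.input
    def add_prefix(self, s: str) -> str:
        return f"prefix_{s}"

    @hook.output
    def add_suffix(self, s: str) -> str:
        return f"{s}_suffix"


print(HookedAgent().run("hi"))  # PREFIX_HI_suffix
```

The ordering shown is the important part: input hooks fire before the model sees anything, output hooks fire after, and neither needs to know the other exists.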
📚 Like what you see? Read the Documentation to learn more!
Requirements
- Python >= 3.12
License
MIT License - see the LICENSE file for details.
Author
Jonathan Chun (@jonchun)
Project details
File details
Details for the file agenty-0.2.1.tar.gz.
File metadata
- Download URL: agenty-0.2.1.tar.gz
- Upload date:
- Size: 121.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.8
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 757743a2a821a93135ca259b9ff7cea93b92bbeb4492376a6e66908c7d653ad9 |
| MD5 | 5b8cf2d2f7c0df2a93727014b63488e6 |
| BLAKE2b-256 | 57f36cf59f4abcb6d061c0e547cb15fdda348ffac0a3079c97e0615f099f9285 |
Provenance
The following attestation bundles were made for agenty-0.2.1.tar.gz:

Publisher: pypi.yml on jonchun/agenty

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: agenty-0.2.1.tar.gz
- Subject digest: 757743a2a821a93135ca259b9ff7cea93b92bbeb4492376a6e66908c7d653ad9
- Sigstore transparency entry: 170337834
- Sigstore integration time:
- Permalink: jonchun/agenty@f58892326a06c437c2ea41c125050049130a2eec
- Branch / Tag: refs/heads/main
- Owner: https://github.com/jonchun
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: pypi.yml@f58892326a06c437c2ea41c125050049130a2eec
- Trigger Event: push
File details
Details for the file agenty-0.2.1-py3-none-any.whl.
File metadata
- Download URL: agenty-0.2.1-py3-none-any.whl
- Upload date:
- Size: 23.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: uv/0.5.30
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 8be1b1ba74442b4a59c9e32351f1dc6b36f8de3c1a06d4ce75785f1eb4ca5757 |
| MD5 | 0c9810ddb0cb479193d15a17196ef011 |
| BLAKE2b-256 | 66ae11f898242e242db9a99d57c6609710d46555635e55c2038902d6ac2f8236 |