🤖 PyBotchi
A deterministic, intent-based AI agent orchestrator with no restrictions—it supports any framework and prioritizes a human-reasoning approach.
🎯 Core Philosophy
Humans should handle the reasoning. AI should detect intent and translate natural language into processable data.
Traditional development has successfully solved complex problems across every industry using deterministic code, APIs, and events. The real limitation isn't in execution—it's in translation. What if we could accept natural language and automatically route to the right logic?
PyBotchi takes a different approach from most AI frameworks: LLMs excel at understanding intent and translating between human and computer language—not at business logic, calculations, or deterministic execution. Let each do what it does best.
The PyBotchi Workflow
- Detect & Translate (LLM Layer) - Process natural language to extract intents and identify appropriate Actions with arguments
- Execute Logic (Your Code) - Traditional code handles business logic, calculations, and data processing
- Generate Response (LLM Layer) - Transform processed results back into natural language
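The division of labor can be sketched in plain Python. The `detect_intent` and `phrase_reply` stubs below are hypothetical stand-ins for LLM calls (not part of PyBotchi); the middle step is ordinary deterministic code:

```python
def detect_intent(text: str) -> dict:
    # LLM layer (stubbed): map natural language to an intent plus arguments.
    return {"intent": "multiply", "args": {"a": 4, "b": 4}}


def execute(intent: dict) -> int:
    # Your code: deterministic business logic, no LLM involved.
    if intent["intent"] == "multiply":
        return intent["args"]["a"] * intent["args"]["b"]
    raise ValueError(f"unknown intent: {intent['intent']}")


def phrase_reply(result: int) -> str:
    # LLM layer (stubbed): turn the computed result back into natural language.
    return f"The answer is {result}."


print(phrase_reply(execute(detect_intent("what is 4 x 4?"))))  # The answer is 16.
```

The LLM never does the arithmetic; it only translates in and out of the deterministic core.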
⚡ Core Architecture
Nested Intent-Based Supervisor Agent Architecture built on just 3 core classes:
- Action - The central agent with a defined lifecycle for intent and execution logic
- Context - Universal container for conversational state, metadata, and execution context
- LLM - Singleton client for managing your model connection
This minimal foundation ensures extreme speed, efficiency, and maximum customizability.
🌟 Key Features
🪶 Ultra-Lightweight
Only 3 core classes to master. The entire system rests on a minimal foundation that keeps overhead low and performance high.
🏗️ Object-Oriented Design
Built on Pydantic BaseModel for rigorous data validation and industry-standard type hinting. Every component is inherently overridable and extendable.
🔧 JSON Schema Native
Automatic JSON Schema conformance for OpenAI, Gemini, and other LLM providers. Easily adaptable to any provider's specification.
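As a rough illustration, an Action with a single string field might surface to an OpenAI-compatible API as a tool definition shaped like the following. This is a hand-written sketch of the general tool-calling format; the exact schema PyBotchi emits is an assumption here and may differ in detail:

```python
import json

# Illustrative OpenAI-style tool definition for an Action named
# "MathProblem" with one "answer" field. Sketch only; not the
# literal output of PyBotchi's schema generation.
tool = {
    "type": "function",
    "function": {
        "name": "MathProblem",
        "description": "Solve math problems.",
        "parameters": {
            "type": "object",
            "properties": {
                "answer": {
                    "type": "string",
                    "description": "The answer to the math problem",
                }
            },
            "required": ["answer"],
        },
    },
}
print(json.dumps(tool, indent=2))
```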
🎣 Action Lifecycle Hooks
Fine-grained control over execution stages with overridable hooks: pre, post, on_error, fallback, child_selection, and commit_context.
⚡ Highly Scalable
Async-first architecture with built-in support for distributed execution via gRPC. Deploy agents remotely or across machines for massive parallel workloads.
🧱 Truly Modular
Agents are isolated, self-contained units. Different teams can independently develop, improve, or modify specific agents without impacting core logic.
🔗 Graph By Design
Structured parent-child relationships provide clear visibility into system execution and state, simplifying debugging and testing.
🌍 Framework & Model Agnostic
Works with any LLM client, third-party framework, or business requirement. True agnosticism through complete overridability.
🔌 MCP Protocol Support
Full integration with Model Context Protocol—expose your Actions as MCP tools or consume external MCP servers within your workflows.
🚀 Quick Start
Installation
PyBotchi requires Python 3.12 or higher.
```shell
pip install pybotchi

# With gRPC support for distributed execution
pip install pybotchi[grpc]

# With MCP support for Model Context Protocol
pip install pybotchi[mcp]

# With both
pip install pybotchi[grpc,mcp]
```
Setup LLM
```python
from langchain_openai import ChatOpenAI
from pybotchi import LLM

LLM.add(
    base=ChatOpenAI(
        api_key="your-api-key",
        model="gpt-4",
        temperature=0.7,
    )
)
```
Simple Agent
```python
from pybotchi import Action, ActionReturn


class Translation(Action):
    """Translate to specified language."""

    async def pre(self, context):
        message = await context.llm.ainvoke(context.prompts)
        await context.add_response(self, message.text)
        return ActionReturn.GO
```
Agent with Fields
```python
from pybotchi import Action, ActionReturn
from pydantic import Field


class MathProblem(Action):
    """Solve math problems."""

    answer: str = Field(description="The answer to the math problem")

    async def pre(self, context):
        await context.add_response(self, self.answer)
        return ActionReturn.GO
```
Multi-Agent Declaration
```python
from pybotchi import Action


class MultiAgent(Action):
    """AI Assistant for solving math problems and translation."""

    class SolveMath(MathProblem):
        pass

    class Translate(Translation):
        pass
```
Execution
```python
import asyncio

from pybotchi import Context


async def test():
    context = Context(
        prompts=[
            {"role": "system", "content": "You're an AI that can solve math problems and translate requests."},
            {"role": "user", "content": "4 x 4 and explain in Filipino"},
        ],
    )
    await context.start(MultiAgent)
    print(context.prompts[-1]["content"])


asyncio.run(test())
```
Result:

```
Ang sagot sa 4 x 4 ay 16.

Paliwanag: Kapag sinabi nating 4 x 4, ibig sabihin ay apat na grupo ng apat. Kung bibilangin natin ito, makakakuha tayo ng kabuuang labing-anim (16).

Ibig sabihin, 4 + 4 + 4 + 4 = 16.
```
Visualize Your Graph
```python
import asyncio

from pybotchi import graph


async def print_mermaid_graph():
    multi_agent_graph = await graph(MultiAgent)
    print(multi_agent_graph.flowchart())


asyncio.run(print_mermaid_graph())
```
Result:

```
flowchart TD
        __main__.MultiAgent.SolveMath[SolveMath]
        __main__.MultiAgent{MultiAgent}
        __main__.MultiAgent.Translate[Translate]
        __main__.MultiAgent --> __main__.MultiAgent.SolveMath
        __main__.MultiAgent --> __main__.MultiAgent.Translate
        style __main__.MultiAgent fill:#4CAF50,color:#000000
```
🧩 Action Lifecycle
Every Action follows a structured lifecycle that gives you complete control over execution flow:
Core Lifecycle Hooks
pre - Pre-Execution
Executes before child agents run. Use for:
- Guardrails and validation
- Data gathering (RAG, knowledge graphs)
- Business logic and preprocessing
- Tool execution
child_selection - Agent Selection
Determines which child agents to execute. Override with:
- Traditional control flow (if/else, switch/case)
- Custom LLM routing logic
- Dynamic agent selection
post - Post-Processing
Executes after all child agents complete. Use for:
- Result consolidation
- Data persistence
- Cleanup and recording
- Logging and notifications
on_error - Error Handling
Handle errors during execution with:
- Retry mechanisms
- Custom error handling
- Logging and alerts
- Re-raise for parent handling
fallback - Non-Tool Results
Executes when no child agent is selected:
- Process text content results
- Handle non-tool-call responses
- Default behaviors
commit_context - Context Control
Controls context merging with main execution:
- Selective data propagation
- Isolated execution contexts
- Custom synchronization rules
Extended Lifecycle Hooks
- pre_mcp - MCP connection setup (authentication, config)
- pre_grpc - gRPC connection setup (credentials, metadata)
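The ordering of the core hooks can be illustrated with a minimal stand-in. The classes below only mimic the lifecycle described above (pre runs first, child_selection picks children, each child runs, then post, with on_error catching failures); they are local toys, not the real PyBotchi API, and fallback/commit_context are omitted for brevity:

```python
import asyncio


class MiniAction:
    """Toy stand-in for an Action lifecycle; not the PyBotchi API."""

    children: list = []

    async def pre(self, trace):
        trace.append(f"{type(self).__name__}.pre")

    async def post(self, trace):
        trace.append(f"{type(self).__name__}.post")

    async def on_error(self, trace, exc):
        trace.append(f"{type(self).__name__}.on_error")

    async def child_selection(self, trace):
        # Default: run every declared child; override for custom routing.
        return [cls() for cls in self.children]

    async def run(self, trace):
        try:
            await self.pre(trace)
            for child in await self.child_selection(trace):
                await child.run(trace)
            await self.post(trace)
        except Exception as exc:
            await self.on_error(trace, exc)


class Child(MiniAction):
    pass


class Parent(MiniAction):
    children = [Child]


trace = []
asyncio.run(Parent().run(trace))
print(trace)  # ['Parent.pre', 'Child.pre', 'Child.post', 'Parent.post']
```

The trace shows the nesting: a parent's pre wraps its children, and post only fires after every child completes.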
🎨 Everything is Overridable & Extendable
```python
class CustomAgent(MultiAgent):
    SolveMath = None  # Remove action

    class NewAction(Action):  # Add new action
        pass

    class Translate(Translation):  # Override existing
        async def pre(self, context):
            # Custom translation logic
            pass
```
🔄 Execution Patterns
Sequential Execution
Multiple agents execute in order via iteration or multi-tool calls.
Concurrent Execution
Parallel execution using async patterns or threading:
```python
class ParallelAgent(Action):
    __concurrent__ = True  # Enable concurrent execution

    class Task1(Action):
        pass

    class Task2(Action):
        pass
```
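Under the hood, this kind of fan-out corresponds to standard asyncio patterns. A plain-Python equivalent (no PyBotchi involved) looks like:

```python
import asyncio


async def task(name: str, delay: float) -> str:
    # Simulate I/O-bound work (an LLM call, an API request, etc.).
    await asyncio.sleep(delay)
    return f"{name} done"


async def main() -> list:
    # gather() awaits both coroutines concurrently on the event loop,
    # rather than one after the other.
    return await asyncio.gather(task("Task1", 0.01), task("Task2", 0.01))


results = asyncio.run(main())
print(results)  # ['Task1 done', 'Task2 done']
```

Because the children are I/O-bound, running them concurrently costs roughly one delay instead of the sum of both.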
Nested Architectures
Build complex hierarchical structures:
```python
class ComplexAgent(Action):
    class StoryTelling(Action):
        class HorrorStory(Action):
            pass

        class ComedyStory(Action):
            pass

    class JokeTelling(Action):
        pass
```
🌐 Distributed Systems with gRPC
Scale your agents across multiple servers with real-time context synchronization:
server.py
```python
from pybotchi import Action, ActionReturn
from pydantic import Field


class MathProblem(Action):
    """Solve math problems."""

    __groups__ = {"grpc": {"group-1"}}

    answer: str = Field(description="The answer to the math problem")

    async def pre(self, context):
        await context.add_response(self, self.answer)
        return ActionReturn.GO
```
client.py
```python
from asyncio import run

from pybotchi import ActionReturn
from pybotchi.grpc import GRPCAction, GRPCConnection, graph


class MultiAgent(GRPCAction):
    __grpc_connections__ = [GRPCConnection("remote", "localhost:50051", ["group-1"])]

    async def pre_grpc(self, context):
        # Setup authentication, refresh tokens, etc.
        return ActionReturn.GO


async def print_mermaid_graph():
    multi_agent_graph = await graph(MultiAgent, integrations={"remote": {}})
    print(multi_agent_graph.flowchart())


run(print_mermaid_graph())
```
Key Benefits
- Unified Graph Execution - Remote Actions integrate seamlessly
- Zero-Overhead Synchronization - No polling loops or coordination overhead
- Database-Free Architecture - Context syncs directly through gRPC
- Concurrent Remote Execution - True distributed parallel processing
- Resource Isolation - Separate compute resources per Action group
Start gRPC server:

```shell
pybotchi-grpc server.py
```
Result:

```
#-------------------------------------------------------#
# Agent ID: agent_b6c9ada82c7444818356a6338e975c09
# Agent Path: server.py
# Starting None worker(s) on 0.0.0.0:50051
#-------------------------------------------------------#
# Agent Path: server.py
# Agent Handler: PyBotchiGRPC
# gRPC server running on 0.0.0.0:50051
#-------------------------------------------------------#
```
gRPC client print graph:

```shell
python3 client.py
```

```
flowchart TD
        __main__.MultiAgent[MultiAgent]
        grpc.agent_b6c9ada82c7444818356a6338e975c09.MathProblem[MathProblem]
        __main__.MultiAgent --**GRPC** : remote--> grpc.agent_b6c9ada82c7444818356a6338e975c09.MathProblem
        style __main__.MultiAgent fill:#4CAF50,color:#000000
```
🔌 Model Context Protocol (MCP)
Integrate with the MCP ecosystem—expose Actions as MCP tools or consume external MCP servers:
As MCP Server
```python
from pybotchi import Action
from pybotchi.mcp import build_mcp_app


class MyAction(Action):
    __groups__ = {"mcp": {"group-1"}}

    # Your action implementation


app = build_mcp_app(transport="streamable-http")
```
As MCP Client
```python
from pybotchi.mcp import MCPAction, MCPConnection


class Agent(MCPAction):
    __mcp_connections__ = [
        MCPConnection("jira", "SSE", "https://mcp.atlassian.com/v1/sse")
    ]
```
Key Benefits
- Standard Protocol Support - Full MCP specification compatibility
- Group-Based Organization - Fine-grained access control per endpoint
- Bidirectional Integration - Serve or consume MCP tools
- Transport Flexibility - SSE and Streamable HTTP support
📚 Examples & Use Cases
Explore practical examples demonstrating PyBotchi's capabilities:
🚀 Getting Started
- tiny.py - Minimal implementation
- full_spec.py - Complete feature demonstration
🔄 Flow Control
- sequential.py - Sequential action execution
- nested_combination.py - Complex nested structures
⚡ Concurrency
- concurrent_combination.py - Async parallel execution
- concurrent_threading_combination.py - Multi-threaded processing
🌐 Distributed Systems
- grpc/grpc_pybotchi_agent.py - gRPC server setup
- grpc/grpc_pybotchi_client.py - Distributed orchestration
🔌 MCP Integration
- mcp/mcp_pybotchi_agent.py - MCP server implementation
- mcp/mcp_pybotchi_client.py - MCP client integration
- mcp/mcp_pybotchi_client_for_mcp_atlassian.py - Atlassian MCP integration
💼 Real-World Applications
- interactive_action.py - Real-time WebSocket communication
⚔️ Framework Comparison
- vs/pybotchi_approach.py - PyBotchi implementation
- vs/langgraph_approach.py - LangGraph comparison
🚀 Why Choose PyBotchi?
Maximum flexibility, zero lock-in. Build agents that combine human intelligence with AI precision.
Perfect for teams that need:
- ✅ Modular, maintainable agent architectures
- ✅ Framework flexibility and migration capabilities
- ✅ Community-driven agent development
- ✅ Enterprise-grade customization and control
- ✅ Real-time interactive agent communication
- ✅ Distributed execution without complexity
- ✅ Standard protocol integration (MCP)
📖 Documentation
Visit our full documentation for:
- Detailed lifecycle hook explanations
- Advanced patterns and best practices
- gRPC and MCP integration guides
- Complete API reference
🤝 Contributing
We welcome contributions! Whether it's:
- Bug reports and feature requests
- Documentation improvements
- Code contributions
- Example applications
Check out our contributing guidelines to get started.
📄 License
PyBotchi is released under the MIT License. See LICENSE for details.
🌟 Community
- GitHub: amadolid/pybotchi
- Issues: Report bugs or request features
- Discussions: Join the conversation
Ready to build smarter agents? Start with the examples and join the community building the future of human-AI collaboration.
File details
Details for the file pybotchi-3.2.0.tar.gz.
File metadata
- Download URL: pybotchi-3.2.0.tar.gz
- Upload date:
- Size: 49.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.2.1 CPython/3.13.11 Linux/6.6.87.2-microsoft-standard-WSL2
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 53c866f91b2a40dc85ecbd6e7f67025bb2233aee58170209fb6a8bb0eeaa1338 |
| MD5 | afe7288e7c4494f39fcf1fb8c6153986 |
| BLAKE2b-256 | 0d015ad9a560927638974e5b64a395f2589f591163652235de591f90a8589970 |
File details
Details for the file pybotchi-3.2.0-py3-none-any.whl.
File metadata
- Download URL: pybotchi-3.2.0-py3-none-any.whl
- Upload date:
- Size: 61.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.2.1 CPython/3.13.11 Linux/6.6.87.2-microsoft-standard-WSL2
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 90ae7f31c2fcd9ede8c062f3cb6b441645edac03de0b4d06dced2bc35144b64d |
| MD5 | 0c9c9e3677ec3e5dd143da572a576f4c |
| BLAKE2b-256 | fd47f09bab19412a4a9aa4f595f3b53d62ce89b6eb7ae10afe63ec8d12ddf9bc |