A simplified agent framework for building AI agents with tools, MCP servers, and memory/dependency management.
T_AI - AI Agent Framework
A powerful framework for building AI agents with MCP (Model Context Protocol) integration, tools, memory management, and dependency handling. T_AI simplifies the creation of sophisticated AI agents by providing a unified interface for multiple LLMs, external tools, and advanced conversation management.
🚀 Key Features
- 🔗 MCP Integration: Seamless connection to external tools and services via Model Context Protocol
- 🛠️ Tool System: Built-in Google tools (image search, code execution) and easy custom tool integration
- 🧠 Memory Management: Intelligent conversation summarization for long-running sessions
- 📦 Dependency Management: Clean state management and user context handling
- 🤖 Multi-LLM Support: Compatible with Google Gemini, OpenAI, and Anthropic models
- 📱 Media Support: Handle text, audio, images, and PDF files seamlessly
- ⚡ Async/Await: Full asynchronous support for optimal performance
- 🔌 Extensible: Easy to extend with custom tools and integrations
📦 Installation
Using UV (Recommended)
```bash
# Clone the repository
git clone <repository-url>
cd T_AI

# Install using UV
uv sync
```
Using pip
```bash
pip install -e .
```
Using pip with requirements.txt
```bash
pip install -r requirements.txt
```
🔧 Core Dependencies
- pydantic-ai >= 0.4.0: Core AI framework
- tavily-python >= 0.5.1: Web search capabilities
🚀 Quick Start
Basic Agent Creation
```python
import asyncio
from t_ai.t_agent import TAgent, Deps
from pydantic_ai.models.google import GoogleModel
from pydantic_ai.providers.google import GoogleProvider

# Initialize with Google Gemini
llm = GoogleModel('gemini-2.5-flash', provider=GoogleProvider(api_key="your-api-key"))
agent = TAgent(llm=llm)

# Simple conversation
async def main():
    async with agent:
        response = await agent.chat(["Hello, what can you help me with?"])
        print(f"UI Version: {response.ui_version}")
        print(f"Voice Version: {response.voice_version}")

asyncio.run(main())
```
With OpenAI
```python
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

llm = OpenAIModel('gpt-4o-mini', provider=OpenAIProvider(api_key="your-openai-key"))
agent = TAgent(llm=llm)
```
🛠️ Framework Configuration
TAgent Parameters
```python
agent = TAgent(
    llm=your_llm,                       # Required: pydantic-ai compatible model
    deps=custom_deps,                   # Optional: Custom dependencies object
    instructions="Custom instructions", # Optional: Agent instructions
    tools=[],                           # Optional: List of custom tools
    mcp_servers=[],                     # Optional: List of MCP servers
    summarizer=False,                   # Optional: Enable conversation summarization
    custom_summarizer_agent=None,       # Optional: Custom summarizer agent
    memory_length=20,                   # Optional: Messages before summarization
    memory_summarizer_length=15,        # Optional: Messages to summarize
    use_memory=True                     # Optional: Enable/disable memory
)
```
🔗 MCP Server Integration
Using MCP Helper (Recommended)
```python
from t_ai.utils.helper_functions import MCP_server_helper
from t_ai.t_agent import TAgent

# Create MCP helper
mcp_helper = MCP_server_helper()

# Add different types of MCP servers
mcp_helper.add_mpc_server(type='http', mpc_server_url='https://mcp.notion.com/mcp')
mcp_helper.add_mpc_server(type='sse', mpc_server_url='https://mcp.notion.com/sse')
mcp_helper.add_mpc_server(type='stdio', command='npx', args=['-y', 'mcp-remote', 'https://mcp.notion.com/mcp'])

# Initialize agent with MCP servers
agent = TAgent(llm=llm, mcp_servers=mcp_helper.get_mpc_servers())
```
Direct MCP Server Setup
```python
from pydantic_ai.mcp import MCPServerStreamableHTTP, MCPServerSSE, MCPServerStdio

mcp_servers = [
    MCPServerStreamableHTTP(url='https://mcp.notion.com/mcp'),
    MCPServerSSE(url='https://mcp.notion.com/sse'),
    MCPServerStdio(command='npx', args=['-y', 'mcp-remote', 'https://mcp.notion.com/mcp'])
]
agent = TAgent(llm=llm, mcp_servers=mcp_servers)
```
🛠️ Built-in Tools
Google Image Search Tool
```python
from t_ai.PrebuiltTools.google_tools import search_images_tool

# Setup image search
image_tool = search_images_tool(
    api_key="your-google-api-key",
    search_engine_id="your-custom-search-engine-id"
)
agent = TAgent(llm=llm, tools=[image_tool])

# Usage
response = await agent.chat(["Find me an image of a sunset"])
```
Google Code Execution Tool
```python
from t_ai.PrebuiltTools.google_tools import code_execution_tool

# Setup code execution
code_tool = code_execution_tool(api_key="your-gemini-api-key")
agent = TAgent(llm=llm, tools=[code_tool])

# Usage
response = await agent.chat(["Calculate the factorial of 10 using Python"])
```
Combined Tools Example
```python
from t_ai.PrebuiltTools.google_tools import search_images_tool, code_execution_tool

tools = [
    search_images_tool(api_key=google_api_key, search_engine_id=search_engine_id),
    code_execution_tool(api_key=google_api_key)
]
agent = TAgent(llm=llm, tools=tools)
```
💾 Memory Management
Enable Automatic Summarization
```python
agent = TAgent(
    llm=llm,
    summarizer=True,              # Enable summarization
    memory_length=20,             # Summarize after 20 messages
    memory_summarizer_length=15   # Summarize the oldest 15 messages
)
```
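The two thresholds imply a simple rolling policy: once the history grows past `memory_length`, the oldest `memory_summarizer_length` messages are collapsed into a single summary entry, keeping the most recent messages verbatim. A minimal pure-Python sketch of that policy (an illustration of the idea, not T_AI's actual internals; `compact_history` and the placeholder `summarize` callback are hypothetical names):

```python
def compact_history(messages, memory_length=20, memory_summarizer_length=15,
                    summarize=lambda chunk: f"[summary of {len(chunk)} messages]"):
    """Illustrative sketch: collapse the oldest messages once the
    history exceeds memory_length. Not T_AI's real implementation."""
    if len(messages) <= memory_length:
        return messages  # under the threshold: nothing to do
    oldest = messages[:memory_summarizer_length]   # messages to fold into a summary
    recent = messages[memory_summarizer_length:]   # messages kept verbatim
    return [summarize(oldest)] + recent

history = [f"msg {i}" for i in range(25)]
compacted = compact_history(history)
# 25 messages -> 1 summary entry + 10 recent messages
```

With the defaults above, a 25-message history compacts to 11 entries; anything at or under 20 messages is left untouched.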
Custom Summarizer Agent
```python
from pydantic_ai import Agent

custom_summarizer = Agent(
    llm,
    instructions='Create detailed technical summaries focusing on code and solutions.'
)

agent = TAgent(
    llm=llm,
    summarizer=True,
    custom_summarizer_agent=custom_summarizer
)
```
Memory and State Management
```python
# Access conversation history
messages = agent.memory.messages

# Access agent dependencies
deps = agent.deps
user_name = agent.deps.user
agents_output = agent.deps.agents_output

# Reset agent state
agent.reset()
```
📱 Media Support
Text Input
```python
response = await agent.chat(["What's the weather like today?"])
```
Image Input
```python
from pydantic_ai.messages import BinaryContent

# From file
with open("image.png", "rb") as f:
    image_data = f.read()

response = await agent.chat([
    "What do you see in this image?",
    BinaryContent(data=image_data, media_type='image/png')
])
```
Audio Input
```python
# Audio file
with open("audio.wav", "rb") as f:
    audio_data = f.read()

response = await agent.chat([
    "Transcribe this audio",
    BinaryContent(data=audio_data, media_type='audio/wav')
])
```
PDF Input
```python
# PDF file
with open("document.pdf", "rb") as f:
    pdf_data = f.read()

response = await agent.chat([
    "Summarize this document",
    BinaryContent(data=pdf_data, media_type='application/pdf')
])
```
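Instead of hard-coding the `media_type` string for each file, it can be guessed from the filename with the standard library. A small stdlib helper (an illustrative sketch, not part of T_AI; note `mimetypes` can return platform-dependent values for some formats, e.g. `.wav` may map to `audio/x-wav` rather than `audio/wav`):

```python
import mimetypes
from pathlib import Path

def guess_media_type(path):
    """Best-effort media type for a file, derived from its extension."""
    media_type, _ = mimetypes.guess_type(path)
    return media_type

def read_binary(path):
    """Return a file's raw bytes together with its guessed media type,
    ready to feed into BinaryContent(data=..., media_type=...)."""
    return Path(path).read_bytes(), guess_media_type(path)

# guess_media_type("image.png") -> "image/png"
# guess_media_type("document.pdf") -> "application/pdf"
```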
🔧 Advanced Usage
Context Manager (Recommended)
```python
async def main():
    async with TAgent(llm=llm, mcp_servers=mcp_servers) as agent:
        # MCP servers are automatically connected
        response = await agent.chat(["Help me with my Notion workspace"])
        print(response.ui_version)
    # MCP servers are automatically disconnected
```
Manual Connection Management
```python
agent = TAgent(llm=llm, mcp_servers=mcp_servers)

# Connect manually
await agent.connect()
try:
    response = await agent.chat(["Hello"])
finally:
    # Disconnect manually
    await agent.disconnect()
```
Custom Dependencies
```python
from t_ai.t_agent import Deps

# Create custom dependencies
custom_deps = Deps(
    agents_output={"previous_results": []},
    user="Alice"
)
agent = TAgent(llm=llm, deps=custom_deps)
```
Custom Tools
```python
from pydantic_ai.tools import Tool

def custom_weather_tool(location: str) -> str:
    """Get weather information for a location"""
    # Your weather API logic here
    return f"Weather in {location}: Sunny, 25°C"

weather_tool = Tool(
    custom_weather_tool,
    name='get_weather',
    description='Get current weather for any location'
)
agent = TAgent(llm=llm, tools=[weather_tool])
```
📝 Complete Framework Example
```python
import asyncio
import os

from dotenv import load_dotenv
from t_ai.t_agent import TAgent, Deps
from t_ai.utils.helper_functions import MCP_server_helper
from t_ai.PrebuiltTools.google_tools import search_images_tool, code_execution_tool
from pydantic_ai.models.google import GoogleModel
from pydantic_ai.providers.google import GoogleProvider
from pydantic_ai.messages import BinaryContent

# Load environment variables
load_dotenv()

async def main():
    # Setup LLM
    llm = GoogleModel('gemini-2.5-flash',
                      provider=GoogleProvider(api_key=os.getenv('GOOGLE_API_KEY')))

    # Setup MCP servers
    mcp_helper = MCP_server_helper()
    mcp_helper.add_mcp_server(type='stdio', command='npx',
                              args=['-y', '@modelcontextprotocol/server-filesystem', '/tmp'])

    # Setup tools
    tools = [
        search_images_tool(
            api_key=os.getenv('GOOGLE_API_KEY'),
            search_engine_id=os.getenv('GOOGLE_SEARCH_ENGINE_ID')
        ),
        code_execution_tool(api_key=os.getenv('GOOGLE_API_KEY'))
    ]

    # Setup custom dependencies
    deps = Deps(agents_output={}, user="Alice")

    # Initialize T_AI agent
    agent = TAgent(
        llm=llm,
        deps=deps,
        tools=tools,
        mcp_servers=mcp_helper.get_mcp_servers(),
        summarizer=True,
        memory_length=20,
        instructions="You are a helpful AI assistant with access to various tools and services."
    )

    # Use context manager for automatic connection handling
    async with agent:
        # Text conversation
        response = await agent.chat(["Hello, what can you help me with?"])
        print("Agent:", response.voice_version)

        # Math problem with code execution
        response = await agent.chat(["Calculate the sum of squares from 1 to 100"])
        print("Math Result:", response.ui_version)

        # Image search
        response = await agent.chat(["Find me an image of a beautiful landscape"])
        print("Image Search:", response.ui_version)

        # Check conversation history
        print(f"Total messages in memory: {len(agent.memory.messages)}")

if __name__ == "__main__":
    asyncio.run(main())
```
🧪 Testing and Development
Run the included Jupyter notebooks to test different features:
- notebooks/cortana_test.ipynb: Basic functionality testing
- notebooks/cort_mcp_test.ipynb: MCP server integration testing
- notebooks/cortana_voice_test.ipynb: Voice/audio capabilities testing
- notebooks/memory_handling.ipynb: Memory management testing
🔑 Environment Variables
Create a .env file in your project root:
```env
GOOGLE_API_KEY=your_google_api_key
GOOGLE_SEARCH_ENGINE_ID=your_custom_search_engine_id
OPENAI_API_KEY=your_openai_api_key
TAVILY_API_KEY=your_tavily_api_key
```
🏗️ Architecture
T_AI is built with a modular architecture:
- Core Agent (`TAgent`): Main framework class handling LLM interactions, memory, and coordination
- MCP Integration: Support for Model Context Protocol servers (HTTP, SSE, stdio)
- Tool System: Extensible tool framework with built-in Google tools
- Memory Management: Intelligent conversation summarization and state management
- Dependencies: Clean dependency injection for user context and shared state
🤝 Contributing
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Add tests if applicable
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🆘 Support
For issues and questions:
- Check the notebooks in the `notebooks/` directory for examples
- Review the docstrings in the source code
- Open an issue on GitHub
🙏 Acknowledgments
- Built on top of pydantic-ai
- MCP (Model Context Protocol) integration
- Google AI and OpenAI API support