A dynamic and flexible AI agent framework for building intelligent, multi-modal AI agents
GRAMI-AI: Dynamic AI Agent Framework
📋 Table of Contents
- Overview
- Key Features
- Installation
- Quick Start
- Example Configurations
- Configuration
- Memory Providers
- Working with Tools
- Communication Interfaces
- AsyncAgent Configuration
- Development Roadmap
- Contributing
- License
🌟 Overview
GRAMI-AI is a cutting-edge, async-first AI agent framework designed to solve complex computational challenges through intelligent, collaborative agent interactions. Built with unprecedented flexibility, this library empowers developers to create sophisticated, context-aware AI systems that can adapt, learn, and collaborate across diverse domains.
Why GRAMI-AI?
- Async-First Architecture: Built from the ground up for asynchronous operations, ensuring optimal performance in high-concurrency environments
- Multi-Modal Capabilities: Seamlessly handle text, images, and other data types through a unified interface
- Provider Agnostic: Switch between different LLM providers (Gemini, OpenAI, Anthropic, Ollama) without changing your application code (see the sketch after these lists)
- Enterprise Ready: Built with security, scalability, and maintainability in mind
- Developer Friendly: Intuitive API design with comprehensive documentation and examples
Core Design Principles
- Modularity: Every component is designed to be replaceable and extensible
- Type Safety: Comprehensive type hints and runtime checking for reliable code
- Performance: Optimized for both speed and resource efficiency
- Security: Built-in security best practices and configurable security policies
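As a concrete illustration of the provider-agnostic design, the sketch below builds the same agent against two different providers. Only GeminiProvider's import path appears in this README; the OpenAIProvider import path and constructor are assumptions for illustration.

import os
from grami.agent import AsyncAgent
from grami.providers.gemini_provider import GeminiProvider
# Hypothetical import path, shown for illustration only:
# from grami.providers.openai_provider import OpenAIProvider

def build_agent(llm) -> AsyncAgent:
    # The agent-side code stays identical no matter which provider backs it
    return AsyncAgent(
        name="AssistantAI",
        llm=llm,
        system_instructions="You are a helpful digital assistant."
    )

agent = build_agent(GeminiProvider(api_key=os.getenv("GEMINI_API_KEY")))
# Swapping providers is a one-line change:
# agent = build_agent(OpenAIProvider(api_key=os.getenv("OPENAI_API_KEY")))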
🚀 Key Features
- Async AI Agent Creation
- Multi-LLM Support (Gemini, OpenAI, Anthropic, Ollama)
- Extensible Tool Ecosystem
- Multiple Communication Interfaces
- Flexible Memory Management
- Secure and Scalable Architecture
💻 Installation
Using pip
pip install grami-ai==0.3.133
From Source
git clone https://github.com/YAFATEK/grami-ai.git
cd grami-ai
pip install -e .
🎬 Quick Start
import asyncio
from grami.agent import AsyncAgent
from grami.providers.gemini_provider import GeminiProvider

async def main():
    agent = AsyncAgent(
        name="AssistantAI",
        llm=GeminiProvider(api_key="YOUR_API_KEY"),
        system_instructions="You are a helpful digital assistant."
    )
    response = await agent.send_message("Hello, how can you help me today?")
    print(response)

asyncio.run(main())
🔧 Example Configurations
1. Async Agent with Memory
from grami.memory.lru import LRUMemory

agent = AsyncAgent(
    name="MemoryAgent",
    llm=provider,
    memory=LRUMemory(capacity=100)
)
2. Async Agent with Streaming
# Must run inside an async function
async for token in agent.stream_message("Tell me a story"):
    print(token, end='', flush=True)
🔧 Configuration
Environment Variables
# Required for different LLM providers
GEMINI_API_KEY=your_gemini_api_key
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
# Optional configuration
GRAMI_LOG_LEVEL=INFO # DEBUG, INFO, WARNING, ERROR
GRAMI_MEMORY_PROVIDER=redis # redis, dynamodb, memory
GRAMI_MAX_TOKENS=2000
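A minimal sketch of reading these variables at startup, using only os.getenv and the GeminiProvider constructor shown in the Quick Start:

import os
from grami.agent import AsyncAgent
from grami.providers.gemini_provider import GeminiProvider

# Fail fast if the required key is missing
api_key = os.getenv("GEMINI_API_KEY")
if not api_key:
    raise RuntimeError("GEMINI_API_KEY is not set")

agent = AsyncAgent(
    name="EnvConfiguredAgent",
    llm=GeminiProvider(api_key=api_key)
)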
Logging and Monitoring
GRAMI-AI uses Python's standard logging module with enhanced formatting:
import logging
from grami.utils.logging import setup_logging

# Set up logging with a custom configuration
setup_logging(
    log_level="INFO",
    log_format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    log_file="grami.log"
)
Error Handling
GRAMI-AI provides custom exceptions for better error handling:
import logging

from grami.exceptions import LLMProviderError, MemoryProviderError

logger = logging.getLogger(__name__)

# Inside an async function
try:
    response = await agent.send_message("Hello")
except LLMProviderError as e:
    logger.error(f"LLM provider error: {e}")
except MemoryProviderError as e:
    logger.error(f"Memory provider error: {e}")
💾 Memory Providers
GRAMI-AI supports multiple memory providers:
- LRU Memory: Local in-memory cache
- Redis Memory: Distributed memory storage
LRU Memory Example
from grami.memory import LRUMemory
memory = LRUMemory(capacity=50)
Redis Memory Example
from grami.memory import RedisMemory

memory = RedisMemory(
    host='localhost',
    port=6379,
    capacity=100
)
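Either provider plugs into an agent the same way. A short sketch wiring the RedisMemory instance above into an agent (parameters as documented in the AsyncAgent Configuration section below):

agent = AsyncAgent(
    name="DistributedMemoryAgent",
    llm=provider,  # any configured LLM provider, e.g. GeminiProvider
    memory=memory  # the RedisMemory instance created above
)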
🛠 Working with Tools
Creating Tools
Tools are plain Python functions that the agent can call during an interaction:

from datetime import datetime

def get_current_time() -> str:
    """Return the current time as a formatted string."""
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")

def calculate_age(birth_year: int) -> int:
    """Compute an age from a birth year."""
    current_year = datetime.now().year
    return current_year - birth_year
Adding Tools to AsyncAgent
agent = AsyncAgent(
    name="ToolsAgent",
    llm=gemini_provider,
    tools=[get_current_time, calculate_age]
)
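Once registered, tools are available for the model to call while answering; whether a tool is actually invoked is decided by the underlying LLM provider. A quick usage sketch:

# Inside an async function; the agent may call get_current_time() here
response = await agent.send_message("What time is it right now?")
print(response)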
🌐 Communication Interfaces
GRAMI-AI supports multiple communication interfaces, including WebSocket for real-time, bidirectional communication between agents.
WebSocket Communication
Create a WebSocket-enabled agent using the built-in setup_communication() method:
import os

from grami.agent import AsyncAgent
from grami.providers.gemini_provider import GeminiProvider
from grami.memory.lru import LRUMemory

# Create an agent with WebSocket communication
# (calculate_area and generate_fibonacci are placeholder tool functions)
agent = AsyncAgent(
    name="ToolsAgent",
    llm=GeminiProvider(api_key=os.getenv('GEMINI_API_KEY')),
    memory=LRUMemory(capacity=100),
    tools=[calculate_area, generate_fibonacci]
)

# Set up WebSocket communication (inside an async function)
communication_interface = await agent.setup_communication(
    host='localhost',
    port=0  # Dynamic port selection
)
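To get a feel for the client side, here is a minimal sketch using the third-party websockets package. The port attribute on the returned interface and the plain-text message format are assumptions for illustration, not documented GRAMI-AI API:

import asyncio
import websockets  # third-party client library: pip install websockets

async def talk_to_agent(port: int):
    # URI format is an assumption; adjust to the actual endpoint
    async with websockets.connect(f"ws://localhost:{port}") as ws:
        await ws.send("Compute the area of a 3x4 rectangle")
        reply = await ws.recv()
        print(reply)

# Hypothetical: assumes the interface exposes its bound port
# asyncio.run(talk_to_agent(communication_interface.port))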
Key Features of WebSocket Communication
- Real-time bidirectional messaging
- Dynamic port allocation
- Seamless tool and LLM interaction
- Secure communication channel
Example Use Cases
- Distributed AI systems
- Real-time collaborative agents
- Interactive tool-based services
- Event-driven agent communication
🤖 AsyncAgent Configuration
The AsyncAgent class is the core component of GRAMI-AI, providing a flexible and powerful way to create AI agents. Here's a detailed breakdown of its parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| name | str | Yes | - | Unique identifier for the agent instance |
| llm | BaseLLMProvider | Yes | - | Language model provider (e.g., GeminiProvider, OpenAIProvider) |
| memory | BaseMemoryProvider | No | None | Memory provider for conversation history management |
| system_instructions | str | No | None | System-level instructions to guide the model's behavior |
| tools | List[Callable] | No | None | List of functions the agent can use during interactions |
| communication_interface | Any | No | None | Interface for agent communication (e.g., WebSocket) |
Example Usage with Parameters
from grami.agent import AsyncAgent
from grami.providers.gemini_provider import GeminiProvider
from grami.memory.lru import LRUMemory

# Create an agent with all parameters
# (calculate_area and generate_fibonacci are placeholder tool functions)
agent = AsyncAgent(
    name="AssistantAI",
    llm=GeminiProvider(api_key="YOUR_API_KEY"),
    memory=LRUMemory(capacity=100),
    system_instructions="You are a helpful AI assistant focused on technical tasks.",
    tools=[calculate_area, generate_fibonacci],
    communication_interface=None  # Will be set up later if needed
)
🗺 Development Roadmap
Core Framework Design
- Implement AsyncAgent base class with dynamic configuration
- Create flexible system instruction definition mechanism
- Design abstract LLM provider interface
- Develop dynamic role and persona assignment system
- Comprehensive async example configurations
  - Memory with streaming
  - Memory without streaming
  - No memory with streaming
  - No memory without streaming
- Implement multi-modal agent capabilities (text, image, video)
LLM Provider Abstraction
- Unified interface for diverse LLM providers
- Google Gemini integration (start_chat(), send_message())
  - Basic message sending
  - Streaming support
  - Memory integration
- OpenAI ChatGPT integration
  - Basic message sending
  - Streaming implementation
  - Memory support
- Anthropic Claude integration
- Ollama local LLM support
- Standardize function/tool calling across providers
- Dynamic prompt engineering support
- Provider-specific configuration handling
Communication Interfaces
- WebSocket real-time communication
- REST API endpoint design
- Kafka inter-agent communication
- gRPC support
- Event-driven agent notification system
- Secure communication protocols
Memory and State Management
- Pluggable memory providers
  - In-memory state storage
  - Redis distributed memory
  - DynamoDB scalable storage
  - S3 content storage
- Conversation and task history tracking
- Global state management for agent crews
- Persistent task and interaction logs
- Advanced memory indexing
- Memory compression techniques
Tool and Function Ecosystem
- Extensible tool integration framework
- Default utility tools
  - Kafka message publisher
  - Web search utility
  - Content analysis tool
- Provider-specific function calling support
- Community tool marketplace
- Easy custom tool development
Agent Crew Collaboration
- Inter-agent communication protocol
- Workflow and task delegation mechanisms
- Approval and review workflows
- Notification and escalation systems
- Dynamic team composition
- Shared context and memory management
Use Case Implementations
- Digital Agency workflow template
  - Growth Manager agent
  - Content Creator agent
  - Trend Researcher agent
  - Media Creation agent
- Customer interaction management
- Approval and revision cycles
Security and Compliance
- Secure credential management
- Role-based access control
- Audit logging
- Compliance with data protection regulations
Performance and Scalability
- Async-first design
- Horizontal scaling support
- Performance benchmarking
- Resource optimization
Testing and Quality
- Comprehensive unit testing
- Integration testing for agent interactions
- Mocking frameworks for LLM providers
- Continuous integration setup
Documentation and Community
- Detailed API documentation
- Comprehensive developer guides
- Example use case implementations
- Contribution guidelines
- Community tool submission process
- Regular maintenance and updates
Future Roadmap
- Payment integration solutions
- Advanced agent collaboration patterns
- Specialized industry-specific agents
- Enhanced security features
- Extended provider support
🗺 Advanced Features (Q2 2024)
- Multi-Modal Processing
  - Image generation and analysis
  - Audio processing capabilities
  - Video content analysis
- Advanced RAG Integration
  - Vector store integration
  - Semantic search capabilities
  - Document processing pipeline
- Agent Specialization
  - Domain-specific training
  - Custom personality templates
  - Behavior fine-tuning
🗺 Performance Optimization (Q3 2024)
- Response Caching
  - Intelligent cache invalidation
  - Distributed caching support
- Load Balancing
  - Multiple LLM provider fallback
  - Request rate optimization
- Resource Management
  - Token usage optimization
  - Cost management features
🗺 Enterprise Features (Q4 2024)
- Advanced Security
  - SSO integration
  - End-to-end encryption
  - Audit logging
- Monitoring and Analytics
  - Usage metrics dashboard
  - Performance analytics
  - Cost tracking
🤝 Contributing
We welcome contributions to GRAMI-AI! Here's how you can help:
Ways to Contribute
- Bug Reports: Open detailed issues on GitHub
- Feature Requests: Share your ideas for new features
- Code Contributions: Submit pull requests with improvements
- Documentation: Help improve our docs and examples
- Testing: Add test cases and improve coverage
Development Setup
- Fork the repository
- Create a virtual environment:
  python -m venv venv
  source venv/bin/activate  # or `venv\Scripts\activate` on Windows
- Install development dependencies:
  pip install -e ".[dev]"
- Run tests:
  pytest
Pull Request Guidelines
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
📄 License
This project is licensed under the MIT License.