GRAMI-AI: Dynamic AI Agent Framework
A dynamic and flexible AI agent framework for building intelligent, multi-modal AI agents.
Overview
GRAMI-AI is an async-first AI agent framework for building intelligent, collaborative agent systems. Designed for flexibility, the library lets developers create sophisticated, context-aware AI agents that can adapt, learn, and collaborate across diverse domains.
Key Features
- Async AI Agent Creation
- Multi-LLM Support (Gemini, OpenAI, Anthropic, Ollama)
- Extensible Tool Ecosystem
- Multiple Communication Interfaces
- Flexible Memory Management
- Secure and Scalable Architecture
Installation
Using pip
pip install grami-ai
From Source
git clone https://github.com/YAFATEK/grami-ai.git
cd grami-ai
pip install -e .
Quick Start
Basic Async Agent Creation
import asyncio
from grami.agent import AsyncAgent
from grami.providers.gemini_provider import GeminiProvider

async def main():
    # Initialize a Gemini-powered async agent
    agent = AsyncAgent(
        name="AssistantAI",
        llm=GeminiProvider(api_key="YOUR_API_KEY"),
        system_instructions="You are a helpful digital assistant."
    )

    # Send an async message
    response = await agent.send_message("Hello, how can you help me today?")
    print(response)

    # Stream a response
    async for token in agent.stream_message("Tell me a story"):
        print(token, end='', flush=True)

asyncio.run(main())
Example Configurations
1. Async Agent with Memory and Streaming
from grami.agent import AsyncAgent
from grami.providers.gemini_provider import GeminiProvider
from grami.memory.lru import LRUMemory

provider = GeminiProvider(api_key="YOUR_API_KEY")

agent = AsyncAgent(
    name="MemoryStreamingAgent",
    llm=provider,
    memory=LRUMemory(capacity=100),
    system_instructions="You are a storyteller."
)
2. Async Agent without Memory
agent = AsyncAgent(
    name="NoMemoryAgent",
    llm=provider,
    memory=None,
    system_instructions="You are a concise assistant."
)
3. Async Agent with Streaming Disabled
response = await agent.send_message("Tell me about AI")
4. Async Agent with Streaming Enabled
async for token in agent.stream_message("Explain quantum computing"):
    print(token, end='', flush=True)
Memory Providers
GRAMI-AI supports multiple memory providers to suit different use cases:

- LRU Memory: a local, in-memory cache with a configurable capacity.

from grami.memory import LRUMemory

# Initialize with a 50-item capacity (the default is 100)
memory = LRUMemory(capacity=50)
- Redis Memory: a distributed memory provider using Redis for scalable, shared memory storage.

from grami.memory import RedisMemory

# Initialize with custom Redis configuration
memory = RedisMemory(
    host='localhost',       # Redis server host
    port=6379,              # Redis server port
    capacity=100,           # Maximum number of items
    provider_id='my_agent'  # Optional provider identifier
)

# Store memory items
await memory.store('user_query', 'What is AI?')
await memory.store('agent_response', 'AI is...')

# Retrieve memory items
query = await memory.retrieve('user_query')

# List memory contents
contents = await memory.list_contents()

# Get recent items
recent_items = await memory.get_recent_items(limit=5)

# Clear memory
await memory.clear()
Redis Memory Prerequisites
- Install Redis server locally or use a cloud-hosted Redis instance
- Ensure network accessibility to Redis server
- Install additional dependencies:
pip install grami-ai[redis]
Redis Memory Configuration Options
- host: Redis server hostname (default: 'localhost')
- port: Redis server port (default: 6379)
- db: Redis database number (default: 0)
- capacity: Maximum number of memory items (default: 100)
- provider_id: Unique memory namespace identifier
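For example, a fully specified configuration might look like this (the host, capacity, and provider_id values are illustrative):

from grami.memory import RedisMemory

# All configuration options shown; the values are illustrative
memory = RedisMemory(
    host='redis.example.com',     # remote Redis instance
    port=6379,
    db=0,
    capacity=200,
    provider_id='support_agent_session'
)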
Best Practices
- Use a unique provider_id for each conversation
- Set an appropriate capacity based on your memory requirements
- Handle potential network or connection errors (see the sketch below)
- Consider Redis persistence settings for data durability
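As a minimal sketch of the error-handling practice above, the store call can be wrapped with a retry. This assumes RedisMemory surfaces connection failures as ordinary exceptions; the exact exception types depend on the underlying Redis client:

import asyncio

async def store_with_retry(memory, key, value, retries=3):
    # Retry transient failures with exponential backoff;
    # re-raise the error after the final attempt.
    for attempt in range(retries):
        try:
            await memory.store(key, value)
            return
        except Exception:
            if attempt == retries - 1:
                raise
            await asyncio.sleep(2 ** attempt)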
Memory Usage with LLM Providers
Memory providers can be seamlessly integrated with LLM providers:
from grami.providers.gemini_provider import GeminiProvider
from grami.memory.lru import LRUMemory

memory = LRUMemory(capacity=100)

# Example with Gemini Provider
gemini_provider = GeminiProvider(
    model_name='gemini-pro',
    memory=memory  # Use either LRUMemory or RedisMemory
)
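Putting the pieces together, an end-to-end flow might look like the sketch below; it assumes GeminiProvider accepts an api_key alongside memory, as suggested by the Quick Start example:

import asyncio
from grami.agent import AsyncAgent
from grami.providers.gemini_provider import GeminiProvider
from grami.memory.lru import LRUMemory

async def main():
    # Assumption: GeminiProvider accepts api_key and memory together
    provider = GeminiProvider(api_key="YOUR_API_KEY", memory=LRUMemory(capacity=100))
    agent = AsyncAgent(
        name="MemoryAgent",
        llm=provider,
        system_instructions="You are a helpful assistant."
    )

    # Earlier turns are kept in memory, so follow-ups can refer back to them
    await agent.send_message("My name is Sam.")
    response = await agent.send_message("What is my name?")
    print(response)

asyncio.run(main())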
Development Checklist
Core Framework Design
- Implement AsyncAgent base class with dynamic configuration
- Create flexible system instruction definition mechanism
- Design abstract LLM provider interface
- Develop dynamic role and persona assignment system
- Comprehensive async example configurations
  - Memory with streaming
  - Memory without streaming
  - No memory with streaming
  - No memory without streaming
- Implement multi-modal agent capabilities (text, image, video)
LLM Provider Abstraction
- Unified interface for diverse LLM providers (see the sketch after this checklist)
- Google Gemini integration (start_chat(), send_message())
  - Basic message sending
  - Streaming support
  - Memory integration
- OpenAI ChatGPT integration
  - Basic message sending
  - Streaming implementation
  - Memory support
- Anthropic Claude integration
- Ollama local LLM support
- Standardize function/tool calling across providers
- Dynamic prompt engineering support
- Provider-specific configuration handling
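To illustrate what the unified interface in the checklist above could look like, here is a minimal sketch; the class and method names are hypothetical, not the library's actual abstract base:

from abc import ABC, abstractmethod
from typing import AsyncIterator

# Hypothetical sketch of an abstract LLM provider interface
class BaseLLMProvider(ABC):
    @abstractmethod
    async def send_message(self, message: str) -> str:
        """Send a single message and return the complete response."""

    @abstractmethod
    def stream_message(self, message: str) -> AsyncIterator[str]:
        """Stream the response token by token."""

Concrete providers such as GeminiProvider would then implement these methods against their vendor SDKs, which is what lets AsyncAgent stay provider-agnostic.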
Communication Interfaces
- WebSocket real-time communication
- REST API endpoint design
- Kafka inter-agent communication
- gRPC support
- Event-driven agent notification system
- Secure communication protocols
Memory and State Management
- Pluggable memory providers
  - In-memory state storage
  - Redis distributed memory
  - DynamoDB scalable storage
  - S3 content storage
- Conversation and task history tracking
- Global state management for agent crews
- Persistent task and interaction logs
- Advanced memory indexing
- Memory compression techniques
Tool and Function Ecosystem
- Extensible tool integration framework
- Default utility tools
  - Kafka message publisher
  - Web search utility
  - Content analysis tool
- Provider-specific function calling support
- Community tool marketplace
- Easy custom tool development (see the sketch below)
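The custom tool API itself is not documented here, so the following is a purely hypothetical sketch: a tool written as a plain async function and handed to the agent at construction time. The tools parameter is an assumption, not a confirmed part of the AsyncAgent signature:

# Hypothetical sketch: a custom tool as a plain async function.
# The actual registration mechanism in grami-ai may differ.
async def get_word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

agent = AsyncAgent(
    name="ToolAgent",
    llm=provider,            # a previously constructed provider, e.g. GeminiProvider
    tools=[get_word_count],  # assumed parameter name
    system_instructions="You can count words when asked."
)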
Agent Crew Collaboration
- Inter-agent communication protocol
- Workflow and task delegation mechanisms
- Approval and review workflows
- Notification and escalation systems
- Dynamic team composition
- Shared context and memory management
Use Case Implementations
- Digital Agency workflow template
  - Growth Manager agent
  - Content Creator agent
  - Trend Researcher agent
  - Media Creation agent
- Customer interaction management
- Approval and revision cycles
Security and Compliance
- Secure credential management
- Role-based access control
- Audit logging
- Compliance with data protection regulations
Performance and Scalability
- Async-first design
- Horizontal scaling support
- Performance benchmarking
- Resource optimization
Testing and Quality
- Comprehensive unit testing
- Integration testing for agent interactions
- Mocking frameworks for LLM providers
- Continuous integration setup
Documentation and Community
- Detailed API documentation
- Comprehensive developer guides
- Example use case implementations
- Contribution guidelines
- Community tool submission process
- Regular maintenance and updates
Future Roadmap
- Payment integration solutions
- Advanced agent collaboration patterns
- Specialized industry-specific agents
- Enhanced security features
- Extended provider support
Contributing
Contributions are welcome! Please check our GitHub repository for guidelines.
Support
- Email: support@yafatek.dev
- GitHub: GRAMI-AI Issues
© 2024 YAFATEK. All Rights Reserved.