A dynamic and flexible AI agent framework for building intelligent, multi-modal AI agents
GRAMI-AI: Dynamic AI Agent Framework
Overview
GRAMI-AI is an async-first AI agent framework for tackling complex tasks through intelligent, collaborative agent interactions. Built for flexibility, it lets developers create sophisticated, context-aware AI systems that can adapt, learn, and collaborate across diverse domains.
Key Features
- Dynamic AI Agent Creation (Sync and Async)
- Multi-LLM Support (Gemini, OpenAI, Anthropic, Ollama)
- Extensible Tool Ecosystem
- Multiple Communication Interfaces
- Flexible Memory Management
- Secure and Scalable Architecture
Installation
Using pip
pip install grami-ai
From Source
git clone https://github.com/YAFATEK/grami-ai.git
cd grami-ai
pip install -e .
Quick Start
Basic Agent Creation
import asyncio

from grami.agent import Agent
from grami.providers import GeminiProvider
# WebSearchTool and CalculatorTool are custom tools; reference
# implementations live in examples/tools.py (adjust the import to your layout)
from examples.tools import WebSearchTool, CalculatorTool

async def main():
    # Initialize a Gemini-powered Agent
    agent = Agent(
        name="AssistantAI",
        role="Helpful Digital Assistant",
        llm_provider=GeminiProvider(api_key="YOUR_API_KEY"),
        tools=[WebSearchTool(), CalculatorTool()]
    )

    # Send a message (send_message is awaitable, so run it in an event loop)
    response = await agent.send_message("Help me plan a trip to Paris")
    print(response)

asyncio.run(main())
Async Agent Creation
import asyncio

from grami.agent import AsyncAgent
from grami.providers import GeminiProvider

async def main():
    # Initialize a Gemini-powered AsyncAgent
    async_agent = AsyncAgent(
        name="ScienceExplainerAI",
        role="Scientific Concept Explainer",
        llm_provider=GeminiProvider(api_key="YOUR_API_KEY"),
        initial_context=[
            {
                "role": "system",
                "content": "You are an expert at explaining complex scientific concepts clearly."
            }
        ]
    )

    # Send a message
    response = await async_agent.send_message("Explain quantum entanglement")
    print(response)

    # Stream a response token by token
    async for token in async_agent.stream_message("Explain photosynthesis"):
        print(token, end='', flush=True)

asyncio.run(main())
Examples
We provide a variety of example implementations to help you get started:
Basic Agents
- examples/simple_agent_example.py: Basic mathematical calculation agent
- examples/simple_async_agent.py: Async scientific explanation agent
- examples/gemini_example.py: Multi-tool Gemini Agent with various capabilities
Advanced Scenarios
- examples/content_creation_agent.py: AI-Powered Content Creation Agent
  - Generates blog posts
  - Conducts topic research
  - Creates supporting visuals
  - Tailors content to specific audiences
- examples/web_research_agent.py: Advanced Web Research and Trend Analysis Agent
  - Performs comprehensive market research
  - Conducts web searches
  - Analyzes sentiment
  - Predicts industry trends
  - Generates detailed reports
Collaborative Agents
- examples/agent_crew_example.py: Multi-Agent Collaboration
  - Demonstrates inter-agent communication
  - Showcases specialized agent roles
  - Enables complex task delegation
Tool Integration
- examples/tools.py: Collection of custom tools
  - Web Search
  - Weather Information
  - Calculator
  - Sentiment Analysis
  - Image Generation
Environment Variables
API Key Management
GRAMI-AI uses environment variables to manage sensitive credentials securely. To set up your API keys:
- Create a .env file in the project root directory
- Add your API keys in the following format:
GEMINI_API_KEY=your_gemini_api_key_here
Important: Never commit your .env file to version control. The .gitignore is already configured to prevent this.
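Once the key lives in .env, it can be loaded at runtime rather than hard-coded. A minimal sketch, assuming the third-party python-dotenv package (plain os.environ is enough if the variable is already exported in your shell):
import os

from dotenv import load_dotenv  # third-party: pip install python-dotenv
from grami.providers import GeminiProvider

load_dotenv()  # reads .env from the current working directory
provider = GeminiProvider(api_key=os.environ["GEMINI_API_KEY"])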
Development Checklist
Core Framework Design
- Implement AsyncAgent base class with dynamic configuration
- Create flexible system instruction definition mechanism
- Design abstract LLM provider interface (see the sketch after this list)
- Develop dynamic role and persona assignment system
- Implement multi-modal agent capabilities (text, image, video)
LLM Provider Abstraction
- Unified interface for diverse LLM providers
- Google Gemini integration (start_chat(), send_message())
- OpenAI ChatGPT integration
- Anthropic Claude integration
- Ollama local LLM support
- Standardize function/tool calling across providers
- Dynamic prompt engineering support
- Provider-specific configuration handling
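To make the provider abstraction concrete, here is a minimal sketch of what a unified interface could look like. It is illustrative only: the name BaseLLMProvider is an assumption, not GRAMI-AI's actual internals, though send_message and stream_message mirror the agent-level API shown in the Quick Start.
import abc
from typing import AsyncIterator

class BaseLLMProvider(abc.ABC):
    """Hypothetical sketch: each concrete provider (Gemini, OpenAI,
    Anthropic, Ollama) would translate these calls into its own SDK."""

    @abc.abstractmethod
    async def send_message(self, message: str) -> str:
        """Send one message and return the complete response."""

    @abc.abstractmethod
    def stream_message(self, message: str) -> AsyncIterator[str]:
        """Yield response tokens as they arrive."""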
Communication Interfaces
- WebSocket real-time communication (see the sketch after this list)
- REST API endpoint design
- Kafka inter-agent communication
- gRPC support
- Event-driven agent notification system
- Secure communication protocols
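As one illustration of the WebSocket item above, an agent could be exposed over a socket so clients get replies in real time. This is a sketch, not a shipped GRAMI-AI interface: it assumes the third-party websockets package (a recent version, where handlers take a single argument) and reuses the AsyncAgent API from the Quick Start.
import asyncio

import websockets  # third-party: pip install websockets
from grami.agent import AsyncAgent
from grami.providers import GeminiProvider

agent = AsyncAgent(
    name="SocketAgent",
    role="Real-time Assistant",
    llm_provider=GeminiProvider(api_key="YOUR_API_KEY"),
)

async def handle(websocket):
    # Treat each incoming frame as a user message and send the
    # agent's reply back on the same connection.
    async for message in websocket:
        reply = await agent.send_message(message)
        await websocket.send(reply)

async def main():
    async with websockets.serve(handle, "localhost", 8765):
        await asyncio.Future()  # run until cancelled

asyncio.run(main())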
Memory and State Management
- Pluggable memory providers (interface sketched after this list)
- In-memory state storage
- Redis distributed memory
- DynamoDB scalable storage
- S3 content storage
- Conversation and task history tracking
- Global state management for agent crews
- Persistent task and interaction logs
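The "pluggable memory providers" item could be served by a small protocol that LRU, Redis, or DynamoDB backends all implement. The sketch below is an assumption about shape, not GRAMI-AI's real interface; only list_contents is corroborated by the Memory Management examples later in this README.
from typing import Any, Protocol

class MemoryProvider(Protocol):
    """Hypothetical duck-typed contract for swappable memory backends."""

    async def store(self, key: str, value: Any) -> None:
        """Persist one conversation turn under a unique key."""
        ...

    async def retrieve(self, key: str) -> Any:
        """Fetch a previously stored entry by key."""
        ...

    async def list_contents(self) -> list:
        """Return everything currently held (see Memory Management below)."""
        ...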
Tool and Function Ecosystem
- Extensible tool integration framework
- Default utility tools
- Kafka message publisher
- Web search utility
- Content analysis tool
- Provider-specific function calling support
- Community tool marketplace
- Easy custom tool development
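Because tools are handed to agents as plain objects (tools=[WebSearchTool(), CalculatorTool()] in the Quick Start), writing a custom one is mostly a matter of defining such an object. The sketch below is hypothetical: the name and description attributes and the run method are assumptions about the tool contract, not the framework's confirmed API.
class TipCalculatorTool:
    """Hypothetical custom tool: computes the tip for a restaurant bill."""

    name = "tip_calculator"
    description = "Compute the tip and total for a bill amount and tip percentage."

    async def run(self, bill: float, tip_percent: float = 15.0) -> dict:
        tip = bill * tip_percent / 100
        return {"tip": round(tip, 2), "total": round(bill + tip, 2)}

# Attached like any other tool:
# agent = Agent(..., tools=[TipCalculatorTool()])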
Agent Crew Collaboration
- Inter-agent communication protocol (a simple delegation pattern is sketched after this list)
- Workflow and task delegation mechanisms
- Approval and review workflows
- Notification and escalation systems
- Dynamic team composition
- Shared context and memory management
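Pending the full crew protocol, a simple delegation pattern can already be composed from the public AsyncAgent API: one agent's answer becomes the next agent's input. A minimal sketch (the agent names and roles here are invented for illustration):
import asyncio

from grami.agent import AsyncAgent
from grami.providers import GeminiProvider

researcher = AsyncAgent(
    name="Researcher",
    role="Collects facts on a topic",
    llm_provider=GeminiProvider(api_key="YOUR_API_KEY"),
)
writer = AsyncAgent(
    name="Writer",
    role="Turns research notes into a short summary",
    llm_provider=GeminiProvider(api_key="YOUR_API_KEY"),
)

async def main():
    # Manual delegation: hand the researcher's output to the writer.
    notes = await researcher.send_message("List three facts about solar energy")
    summary = await writer.send_message(f"Summarize these notes:\n{notes}")
    print(summary)

asyncio.run(main())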
Use Case Implementations
- Digital Agency workflow template
- Growth Manager agent
- Content Creator agent
- Trend Researcher agent
- Media Creation agent
- Customer interaction management
- Approval and revision cycles
Security and Compliance
- Secure credential management
- Role-based access control
- Audit logging
- Compliance with data protection regulations
Performance and Scalability
- Async-first design (see the concurrency sketch after this list)
- Horizontal scaling support
- Performance benchmarking
- Resource optimization
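The async-first design pays off when independent agent calls can run concurrently. A minimal sketch using only asyncio.gather and the AsyncAgent API shown earlier; it assumes the agent and provider tolerate overlapping requests, and the questions are placeholders.
import asyncio

from grami.agent import AsyncAgent
from grami.providers import GeminiProvider

agent = AsyncAgent(
    name="ConcurrentAgent",
    role="General Assistant",
    llm_provider=GeminiProvider(api_key="YOUR_API_KEY"),
)

async def main():
    questions = ["What is CRISPR?", "What is RAG?", "What is MQTT?"]
    # The three requests are awaited concurrently, not one after another.
    answers = await asyncio.gather(*(agent.send_message(q) for q in questions))
    for q, a in zip(questions, answers):
        print(q, "->", a[:80])

asyncio.run(main())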
Testing and Quality
- Comprehensive unit testing
- Integration testing for agent interactions
- Mocking frameworks for LLM providers
- Continuous integration setup
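For the testing items above, code that consumes an agent can swap the agent itself for an AsyncMock, so unit tests never need an API key or a live LLM. A sketch using only the standard library; plan_trip and the stubbed reply are fabricated for the test.
import asyncio
from unittest.mock import AsyncMock

async def plan_trip(agent) -> str:
    """Example application code under test: it only needs send_message."""
    return await agent.send_message("Help me plan a trip to Paris")

def test_plan_trip_with_mocked_agent():
    fake_agent = AsyncMock()
    fake_agent.send_message.return_value = "Day 1: Louvre. Day 2: Montmartre."

    result = asyncio.run(plan_trip(fake_agent))

    assert "Louvre" in result
    fake_agent.send_message.assert_awaited_once()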
Documentation and Community
- Detailed API documentation
- Comprehensive developer guides
- Example use case implementations
- Contribution guidelines
- Community tool submission process
- Regular maintenance and updates
Future Roadmap
- Payment integration solutions
- Advanced agent collaboration patterns
- Specialized industry-specific agents
- Enhanced security features
- Extended provider support
Memory Management
GRAMI-AI provides flexible memory management for AI agents, allowing seamless storage and retrieval of conversation context.
Memory Storage Examples
Synchronous Memory Storage
from grami.agent import Agent
from grami.providers import GeminiProvider
from grami.memory.lru import LRUMemory

# Initialize memory provider
memory = LRUMemory(capacity=1000)

# Create agent with memory
agent = Agent(
    name="MemoryAssistant",
    llm_provider=GeminiProvider(api_key="YOUR_API_KEY", memory_provider=memory)
)

# Conversation with automatic memory storage
response = agent.send_message("Tell me about quantum physics")
Streaming with Memory
import asyncio

from grami.agent import AsyncAgent
from grami.providers import GeminiProvider
from grami.memory.lru import LRUMemory

# Initialize memory provider
memory = LRUMemory(capacity=1000)

# Create async agent with memory
async_agent = AsyncAgent(
    name="StreamingMemoryBot",
    llm_provider=GeminiProvider(api_key="YOUR_API_KEY", memory_provider=memory)
)

async def main():
    # Stream response with automatic memory storage
    async for token in async_agent.stream_message("Explain machine learning"):
        print(token, end='', flush=True)

    # Retrieve memory contents
    memory_contents = await memory.list_contents()
    print(memory_contents)

asyncio.run(main())
Key Memory Features
- Automatic conversation turn storage
- Unique key generation for each memory entry
- Configurable memory providers (LRU, Redis, etc.)
- Supports both synchronous and streaming interactions
- Secure and efficient memory management
Documentation
For detailed documentation, visit our Documentation Website
Contributing
We welcome contributions! Please see our Contribution Guidelines
License
MIT License - Empowering open-source innovation
About YAFATEK Solutions
Pioneering AI innovation through flexible, powerful frameworks.
Contact & Support
- Email: support@yafatek.dev
- GitHub: GRAMI-AI Issues
Star ⭐ the project if you believe in collaborative AI innovation!