
A dynamic, flexible framework for building intelligent, multi-modal AI agents

Project description

GRAMI-AI: Dynamic AI Agent Framework


🌟 Overview

GRAMI-AI is an async-first AI agent framework designed to solve complex computational challenges through intelligent, collaborative agent interactions. Built for flexibility, the library lets developers create sophisticated, context-aware AI systems that adapt, learn, and collaborate across diverse domains.

🚀 Key Features

  • Async AI Agent Creation
  • Multi-LLM Support (Gemini, OpenAI, Anthropic, Ollama)
  • Extensible Tool Ecosystem
  • Multiple Communication Interfaces
  • Flexible Memory Management
  • Secure and Scalable Architecture

💻 Installation

Using pip

pip install grami-ai==0.3.132

From Source

git clone https://github.com/YAFATEK/grami-ai.git
cd grami-ai
pip install -e .

🎬 Quick Start

import asyncio
from grami.agent import AsyncAgent
from grami.providers.gemini_provider import GeminiProvider

async def main():
    agent = AsyncAgent(
        name="AssistantAI",
        llm=GeminiProvider(api_key="YOUR_API_KEY"),
        system_instructions="You are a helpful digital assistant."
    )

    response = await agent.send_message("Hello, how can you help me today?")
    print(response)

asyncio.run(main())

🔧 Example Configurations

1. Async Agent with Memory

from grami.memory.lru import LRUMemory

agent = AsyncAgent(
    name="MemoryAgent",
    llm=provider,  # any configured LLM provider, e.g. GeminiProvider(...)
    memory=LRUMemory(capacity=100)  # retains the 100 most recent entries
)

2. Async Agent with Streaming

Because stream_message is an async generator, consume it inside an async function:

# inside a coroutine
async for token in agent.stream_message("Tell me a story"):
    print(token, end='', flush=True)

💾 Memory Providers

GRAMI-AI supports multiple memory providers:

  1. LRU Memory: Local in-memory cache
  2. Redis Memory: Distributed memory storage

LRU Memory Example

from grami.memory import LRUMemory

memory = LRUMemory(capacity=50)

Redis Memory Example

from grami.memory import RedisMemory

memory = RedisMemory(
    host='localhost',
    port=6379,
    capacity=100
)
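
As with LRUMemory, the Redis-backed memory is passed to the agent at construction time. A minimal sketch, assuming a Redis server is already running at localhost:6379 and that provider is any configured LLM provider:

# Assumes a running Redis server at localhost:6379
agent = AsyncAgent(
    name="DistributedMemoryAgent",
    llm=provider,  # any configured LLM provider
    memory=RedisMemory(host='localhost', port=6379, capacity=100)
)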

🛠 Working with Tools

Creating Tools

Tools are simple Python functions used by AI agents:

from datetime import datetime

def get_current_time() -> str:
    # Return the current timestamp as a formatted string
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")

def calculate_age(birth_year: int) -> int:
    # Compute an age from a birth year
    current_year = datetime.now().year
    return current_year - birth_year

Adding Tools to AsyncAgent

agent = AsyncAgent(
    name="ToolsAgent",
    llm=gemini_provider,  # a configured GeminiProvider instance
    tools=[get_current_time, calculate_age]
)
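
Once registered, tools are available to the model during conversations. A minimal sketch of triggering one through a plain request (exactly when the model calls a tool depends on the provider's function-calling support):

import asyncio

async def main():
    # The model may call calculate_age to answer this request
    response = await agent.send_message("How old is someone born in 1990?")
    print(response)

asyncio.run(main())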

🌐 Communication Interfaces

GRAMI-AI supports multiple communication interfaces, including WebSocket for real-time, bidirectional communication between agents.

WebSocket Communication

Create a WebSocket-enabled agent using the built-in setup_communication() method:

import os

from grami.agent import AsyncAgent
from grami.providers.gemini_provider import GeminiProvider
from grami.memory.lru import LRUMemory

# Create an agent with WebSocket communication
# (calculate_area and generate_fibonacci are user-defined tool functions)
agent = AsyncAgent(
    name="ToolsAgent",
    llm=GeminiProvider(api_key=os.getenv('GEMINI_API_KEY')),
    memory=LRUMemory(capacity=100),
    tools=[calculate_area, generate_fibonacci]
)

# Setup WebSocket communication (run inside an async function)
communication_interface = await agent.setup_communication(
    host='localhost',
    port=0  # 0 lets the OS pick a free port
)
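
For illustration, a client can then connect to the agent's endpoint with the third-party websockets package. The URI, the hypothetical talk_to_agent helper, and the plain-text message format are assumptions for this sketch; consult the framework's documentation for the actual protocol:

import asyncio
import websockets  # third-party: pip install websockets

async def talk_to_agent(uri: str) -> None:
    # Open a connection, send one request, and print the reply
    async with websockets.connect(uri) as ws:
        await ws.send("What is the area of a 3x5 rectangle?")
        print(await ws.recv())

# port=0 above means the OS picked a free port, so read the actual
# address back from the communication interface at runtime.
asyncio.run(talk_to_agent("ws://localhost:8765"))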

Key Features of WebSocket Communication

  • Real-time bidirectional messaging
  • Dynamic port allocation
  • Seamless tool and LLM interaction
  • Secure communication channel

Example Use Cases

  • Distributed AI systems
  • Real-time collaborative agents
  • Interactive tool-based services
  • Event-driven agent communication

🤖 AsyncAgent Configuration

The AsyncAgent class is the core component of GRAMI-AI, providing a flexible and powerful way to create AI agents. Here's a detailed breakdown of its parameters:

  • name (str, required): Unique identifier for the agent instance
  • llm (BaseLLMProvider, required): Language model provider (e.g., GeminiProvider, OpenAIProvider)
  • memory (BaseMemoryProvider, optional, default None): Memory provider for conversation history management
  • system_instructions (str, optional, default None): System-level instructions to guide the model's behavior
  • tools (List[Callable], optional, default None): List of functions the agent can use during interactions
  • communication_interface (Any, optional, default None): Interface for agent communication (e.g., WebSocket)

Example Usage with Parameters

from grami.agent import AsyncAgent
from grami.providers.gemini_provider import GeminiProvider
from grami.memory.lru import LRUMemory

# Create an agent with all parameters
agent = AsyncAgent(
    name="AssistantAI",
    llm=GeminiProvider(api_key="YOUR_API_KEY"),
    memory=LRUMemory(capacity=100),
    system_instructions="You are a helpful AI assistant focused on technical tasks.",
    tools=[calculate_area, generate_fibonacci],  # user-defined tool functions
    communication_interface=None  # can be set up later if needed
)

🗺 Development Roadmap

Core Framework Design

  • Implement AsyncAgent base class with dynamic configuration
  • Create flexible system instruction definition mechanism
  • Design abstract LLM provider interface
  • Develop dynamic role and persona assignment system
  • Comprehensive async example configurations
    • Memory with streaming
    • Memory without streaming
    • No memory with streaming
    • No memory without streaming
  • Implement multi-modal agent capabilities (text, image, video)

LLM Provider Abstraction

  • Unified interface for diverse LLM providers (see the sketch after this list)
    • Google Gemini integration (start_chat(), send_message())
      • Basic message sending
      • Streaming support
      • Memory integration
    • OpenAI ChatGPT integration
      • Basic message sending
      • Streaming implementation
      • Memory support
    • Anthropic Claude integration
    • Ollama local LLM support
  • Standardize function/tool calling across providers
  • Dynamic prompt engineering support
  • Provider-specific configuration handling
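
To make the abstraction concrete, here is a minimal sketch of what a unified provider interface might look like. The BaseLLMProvider name comes from the parameter table above, and send_message/stream_message mirror the earlier examples; the exact signatures are assumptions for illustration, not the library's confirmed API:

from abc import ABC, abstractmethod
from typing import AsyncIterator

class BaseLLMProvider(ABC):
    """Hypothetical shape of the unified provider interface."""

    @abstractmethod
    async def send_message(self, message: str) -> str:
        """Send one message and return the model's full reply."""

    @abstractmethod
    def stream_message(self, message: str) -> AsyncIterator[str]:
        """Yield the model's reply incrementally, token by token."""

Each concrete provider (Gemini, OpenAI, Anthropic, Ollama) would implement these methods against its own SDK, letting agents swap providers without code changes.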

Communication Interfaces

  • WebSocket real-time communication
  • REST API endpoint design
  • Kafka inter-agent communication
  • gRPC support
  • Event-driven agent notification system
  • Secure communication protocols

Memory and State Management

  • Pluggable memory providers
    • In-memory state storage
    • Redis distributed memory
    • DynamoDB scalable storage
    • S3 content storage
  • Conversation and task history tracking
  • Global state management for agent crews
  • Persistent task and interaction logs
  • Advanced memory indexing
  • Memory compression techniques

Tool and Function Ecosystem

  • Extensible tool integration framework
  • Default utility tools
    • Kafka message publisher
    • Web search utility
    • Content analysis tool
  • Provider-specific function calling support
  • Community tool marketplace
  • Easy custom tool development

Agent Crew Collaboration

  • Inter-agent communication protocol
  • Workflow and task delegation mechanisms
  • Approval and review workflows
  • Notification and escalation systems
  • Dynamic team composition
  • Shared context and memory management

Use Case Implementations

  • Digital Agency workflow template
    • Growth Manager agent
    • Content Creator agent
    • Trend Researcher agent
    • Media Creation agent
  • Customer interaction management
  • Approval and revision cycles

Security and Compliance

  • Secure credential management
  • Role-based access control
  • Audit logging
  • Compliance with data protection regulations

Performance and Scalability

  • Async-first design
  • Horizontal scaling support
  • Performance benchmarking
  • Resource optimization

Testing and Quality

  • Comprehensive unit testing
  • Integration testing for agent interactions
  • Mocking frameworks for LLM providers
  • Continuous integration setup

Documentation and Community

  • Detailed API documentation
  • Comprehensive developer guides
  • Example use case implementations
  • Contribution guidelines
  • Community tool submission process
  • Regular maintenance and updates

Future Roadmap

  • Payment integration solutions
  • Advanced agent collaboration patterns
  • Specialized industry-specific agents
  • Enhanced security features
  • Extended provider support

🤝 Contributing

We welcome contributions to GRAMI-AI! Here's how you can help:

Ways to Contribute

  1. Bug Reports: Open detailed issues on GitHub
  2. Feature Requests: Share your ideas for new features
  3. Code Contributions: Submit pull requests with improvements
  4. Documentation: Help improve our docs and examples
  5. Testing: Add test cases and improve coverage

Development Setup

  1. Fork the repository
  2. Create a virtual environment:
    python -m venv venv
    source venv/bin/activate  # or `venv\Scripts\activate` on Windows
    
  3. Install development dependencies:
    pip install -e ".[dev]"
    
  4. Run tests:
    pytest
    

Pull Request Guidelines

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

grami_ai-0.3.132.tar.gz (21.5 kB, source distribution)

Built Distribution

grami_ai-0.3.132-py3-none-any.whl (28.5 kB, Python 3 wheel)

File details

Details for the file grami_ai-0.3.132.tar.gz.

File metadata

  • Download URL: grami_ai-0.3.132.tar.gz
  • Size: 21.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for grami_ai-0.3.132.tar.gz:

  • SHA256: e651dfeebdec49db652673353f4bce8ae8cf66028bc7f8c8f8c2b06f4fdace6f
  • MD5: 46a53f2c2635c0e2a20ca9ffc938c662
  • BLAKE2b-256: 71732a55dfe71eeab7603fa763b7f91b29081e60949b4b0c4e472b7770eff7f7


File details

Details for the file grami_ai-0.3.132-py3-none-any.whl.

File metadata

  • Download URL: grami_ai-0.3.132-py3-none-any.whl
  • Size: 28.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for grami_ai-0.3.132-py3-none-any.whl:

  • SHA256: a62504b822c9a7b25f4e7acd4d4df7cd77e3abb897d32050a20099ee575ceff7
  • MD5: 641e30e2d187d5e67ec8d3f4274e31bb
  • BLAKE2b-256: 8b1b9980ddeb3b255013073574224adbc061652654c51ee5f0926dfcd8633266

