
GRAMI-AI: Dynamic AI Agent Framework

A dynamic and flexible AI agent framework for building intelligent, multi-modal AI agents

Overview

GRAMI-AI is an async-first AI agent framework designed to solve complex computational challenges through intelligent, collaborative agent interactions. Built for flexibility, it empowers developers to create sophisticated, context-aware AI systems that adapt, learn, and collaborate across diverse domains.

Key Features

  • Async AI Agent Creation
  • Multi-LLM Support (Gemini, OpenAI, Anthropic, Ollama)
  • Extensible Tool Ecosystem
  • Multiple Communication Interfaces
  • Flexible Memory Management
  • Secure and Scalable Architecture

Installation

Using pip

pip install grami-ai

From Source

git clone https://github.com/YAFATEK/grami-ai.git
cd grami-ai
pip install -e .

Quick Start

Basic Async Agent Creation

import asyncio
from grami.agent import AsyncAgent
from grami.providers.gemini_provider import GeminiProvider

async def main():
    # Initialize a Gemini-powered Async Agent
    agent = AsyncAgent(
        name="AssistantAI",
        llm=GeminiProvider(api_key="YOUR_API_KEY"),
        system_instructions="You are a helpful digital assistant."
    )

    # Send an async message
    response = await agent.send_message("Hello, how can you help me today?")
    print(response)

    # Stream a response
    async for token in agent.stream_message("Tell me a story"):
        print(token, end='', flush=True)

asyncio.run(main())

Example Configurations

1. Async Agent with Memory and Streaming

from grami.agent import AsyncAgent
from grami.providers.gemini_provider import GeminiProvider
from grami.memory.lru import LRUMemory

provider = GeminiProvider(api_key="YOUR_API_KEY")

agent = AsyncAgent(
    name="MemoryStreamingAgent",
    llm=provider,
    memory=LRUMemory(capacity=100),
    system_instructions="You are a storyteller."
)

2. Async Agent without Memory

agent = AsyncAgent(
    name="NoMemoryAgent",
    llm=provider,
    memory=None,
    system_instructions="You are a concise assistant."
)

3. Async Agent with Streaming Disabled

response = await agent.send_message("Tell me about AI")

4. Async Agent with Streaming Enabled

async for token in agent.stream_message("Explain quantum computing"):
    print(token, end='', flush=True)

Memory Providers

GRAMI-AI supports multiple memory providers to suit different use cases:

  1. LRU Memory: A local, in-memory cache with a configurable capacity.

    from grami.memory import LRUMemory
    
    # Initialize with a 50-item capacity (the default is 100)
    memory = LRUMemory(capacity=50)
    
  2. Redis Memory: A distributed memory provider using Redis for scalable, shared memory storage.

    from grami.memory import RedisMemory
    
    # Initialize with custom Redis configuration
    memory = RedisMemory(
        host='localhost',  # Redis server host
        port=6379,         # Redis server port
        capacity=100,      # Maximum number of items
        provider_id='my_agent'  # Optional provider identifier
    )
    
    # Store memory items
    await memory.store('user_query', 'What is AI?')
    await memory.store('agent_response', 'AI is...')
    
    # Retrieve memory items
    query = await memory.retrieve('user_query')
    
    # List memory contents
    contents = await memory.list_contents()
    
    # Get recent items
    recent_items = await memory.get_recent_items(limit=5)
    
    # Clear memory
    await memory.clear()
    

    Redis Memory Prerequisites

    • Install Redis server locally or use a cloud-hosted Redis instance
    • Ensure network accessibility to Redis server
    • Install additional dependencies:
      pip install "grami-ai[redis]"
      

    Redis Memory Configuration Options

    • host: Redis server hostname (default: 'localhost')
    • port: Redis server port (default: 6379)
    • db: Redis database number (default: 0)
    • capacity: Maximum memory items (default: 100)
    • provider_id: Unique memory namespace identifier

    Best Practices

    • Use unique provider_id for different conversations
    • Set appropriate capacity based on memory requirements
    • Handle potential network or connection errors
    • Consider Redis persistence settings for data durability
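
    A minimal sketch applying these practices, assuming the RedisMemory API shown above (the helper name and fallback policy are illustrative, not part of GRAMI-AI):

    import uuid
    from typing import Optional

    from grami.memory import RedisMemory

    async def build_session_memory() -> Optional[RedisMemory]:
        """Create a per-conversation Redis memory, or fall back to None."""
        memory = RedisMemory(
            host='localhost',
            port=6379,
            capacity=100,
            provider_id=f"conversation-{uuid.uuid4().hex}"  # unique namespace per conversation
        )
        try:
            # Probe the connection early so failures surface at startup,
            # not mid-conversation.
            await memory.store('healthcheck', 'ok')
            await memory.retrieve('healthcheck')
            return memory
        except Exception as exc:  # e.g. a Redis connection error
            print(f"Redis unavailable, continuing without memory: {exc}")
            return None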

Memory Usage with LLM Providers

Memory providers can be seamlessly integrated with LLM providers:

# Example with Gemini Provider
gemini_provider = GeminiProvider(
    api_key="YOUR_API_KEY",
    model_name='gemini-pro',
    memory=memory  # Use either LRUMemory or RedisMemory
)
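
As a rough usage sketch, assuming the provider accepts both the api_key and memory arguments shown in earlier examples (the recall behavior is the intended outcome, not a guaranteed contract):

import asyncio
from grami.agent import AsyncAgent
from grami.memory.lru import LRUMemory
from grami.providers.gemini_provider import GeminiProvider

async def main():
    provider = GeminiProvider(
        api_key="YOUR_API_KEY",
        model_name='gemini-pro',
        memory=LRUMemory(capacity=100)
    )
    agent = AsyncAgent(
        name="RecallAgent",
        llm=provider,
        system_instructions="You are a helpful assistant."
    )
    # The first turn stores context in memory; the second relies on it.
    await agent.send_message("My favorite language is Python.")
    print(await agent.send_message("What is my favorite language?"))

asyncio.run(main())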

Working with Tools

Creating Tools

Tools in GRAMI-AI are simple Python functions that can be dynamically used by AI agents. Here's how to create and use tools:

from datetime import datetime
from typing import Optional

def get_current_time() -> str:
    """Get the current timestamp."""
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")

def calculate_age(birth_year: int) -> int:
    """Calculate a person's age based on their birth year."""
    current_year = datetime.now().year
    return current_year - birth_year

def generate_advice(age: int, interests: Optional[str] = None) -> str:
    """Generate personalized life advice."""
    base_advice = {
        (0, 18): "Focus on education and personal growth.",
        (18, 30): "Explore career opportunities and build skills.",
        (30, 45): "Balance career and personal life, invest wisely.",
        (45, 60): "Plan for retirement and enjoy life experiences.",
        (60, 100): "Stay active, spend time with loved ones, and pursue hobbies."
    }
    
    # Find appropriate advice based on age
    advice = next((adv for (min_age, max_age), adv in base_advice.items() if min_age <= age < max_age), 
                  "Enjoy life and stay positive!")
    
    # Personalize advice if interests are provided
    if interests:
        advice += f" Consider exploring {interests} to enrich your life."
    
    return advice

Adding Tools to AsyncAgent

You can add tools to an AsyncAgent in two ways:

  1. During Agent Initialization:
agent = AsyncAgent(
    name="AdviceAgent",
    llm=gemini_provider,
    tools=[
        get_current_time,
        calculate_age,
        generate_advice
    ]
)
  2. Adding Tools Dynamically:
# Add a single tool
await agent.add_tool(some_tool)

# Or add multiple tools
for tool in [tool1, tool2, tool3]:
    await agent.add_tool(tool)

Tool Best Practices

  • Keep tools focused and single-purpose
  • Use type hints for better model understanding
  • Return simple, serializable data types
  • Handle potential errors gracefully
  • Provide clear, concise docstrings
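
As a hypothetical illustration of these practices (the function below is not part of GRAMI-AI), a well-behaved tool might look like this:

from typing import Optional

def convert_temperature(celsius: float, to_unit: str = "fahrenheit") -> Optional[float]:
    """Convert a Celsius temperature to Fahrenheit or Kelvin."""
    # Single-purpose, typed, and returning a plain float (or None) so the
    # result is easy for the model to serialize and reason about.
    if to_unit == "fahrenheit":
        return celsius * 9 / 5 + 32
    if to_unit == "kelvin":
        return celsius + 273.15
    return None  # unknown unit: fail soft rather than raise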

Example: Tool-Powered Interaction

async def main():
    agent = AsyncAgent(
        name="PersonalAssistant",
        llm=gemini_provider,
        tools=[get_current_time, calculate_age, generate_advice]
    )

    # The agent can now use these tools dynamically
    response = await agent.send_message(
        "What advice would you give to a 35-year-old interested in technology?"
    )
    print(response)

asyncio.run(main())

Tools provide a powerful way to extend your agent's capabilities, allowing it to perform specific tasks, retrieve information, and generate context-aware responses.

Development Checklist

Core Framework Design

  • Implement AsyncAgent base class with dynamic configuration
  • Create flexible system instruction definition mechanism
  • Design abstract LLM provider interface
  • Develop dynamic role and persona assignment system
  • Comprehensive async example configurations
    • Memory with streaming
    • Memory without streaming
    • No memory with streaming
    • No memory without streaming
  • Implement multi-modal agent capabilities (text, image, video)

LLM Provider Abstraction

  • Unified interface for diverse LLM providers
    • Google Gemini integration (start_chat(), send_message())
      • Basic message sending
      • Streaming support
      • Memory integration
    • OpenAI ChatGPT integration
      • Basic message sending
      • Streaming implementation
      • Memory support
    • Anthropic Claude integration
    • Ollama local LLM support
  • Standardize function/tool calling across providers
  • Dynamic prompt engineering support
  • Provider-specific configuration handling
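
As a sketch of what such a unified interface could look like (the class and method names below are assumptions for illustration, mirroring the agent-level send_message/stream_message API shown earlier, not GRAMI-AI's actual abstraction):

from abc import ABC, abstractmethod
from typing import AsyncIterator

class BaseLLMProvider(ABC):
    """Common surface every provider implementation would share."""

    @abstractmethod
    async def send_message(self, message: str) -> str:
        """Return a complete response for a single message."""

    @abstractmethod
    def stream_message(self, message: str) -> AsyncIterator[str]:
        """Yield response tokens as they arrive."""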

Communication Interfaces

  • WebSocket real-time communication
  • REST API endpoint design
  • Kafka inter-agent communication
  • gRPC support
  • Event-driven agent notification system
  • Secure communication protocols

Memory and State Management

  • Pluggable memory providers
    • In-memory state storage
    • Redis distributed memory
    • DynamoDB scalable storage
    • S3 content storage
  • Conversation and task history tracking
  • Global state management for agent crews
  • Persistent task and interaction logs
  • Advanced memory indexing
  • Memory compression techniques

Tool and Function Ecosystem

  • Extensible tool integration framework
  • Default utility tools
    • Kafka message publisher
    • Web search utility
    • Content analysis tool
  • Provider-specific function calling support
  • Community tool marketplace
  • Easy custom tool development

Agent Crew Collaboration

  • Inter-agent communication protocol
  • Workflow and task delegation mechanisms
  • Approval and review workflows
  • Notification and escalation systems
  • Dynamic team composition
  • Shared context and memory management

Use Case Implementations

  • Digital Agency workflow template
    • Growth Manager agent
    • Content Creator agent
    • Trend Researcher agent
    • Media Creation agent
  • Customer interaction management
  • Approval and revision cycles

Security and Compliance

  • Secure credential management
  • Role-based access control
  • Audit logging
  • Compliance with data protection regulations

Performance and Scalability

  • Async-first design
  • Horizontal scaling support
  • Performance benchmarking
  • Resource optimization

Testing and Quality

  • Comprehensive unit testing
  • Integration testing for agent interactions
  • Mocking frameworks for LLM providers
  • Continuous integration setup

Documentation and Community

  • Detailed API documentation
  • Comprehensive developer guides
  • Example use case implementations
  • Contribution guidelines
  • Community tool submission process
  • Regular maintenance and updates

Future Roadmap

  • Payment integration solutions
  • Advanced agent collaboration patterns
  • Specialized industry-specific agents
  • Enhanced security features
  • Extended provider support

Contributing

Contributions are welcome! Please check our GitHub repository for guidelines.

Support


© 2024 YAFATEK. All Rights Reserved.
