A dynamic and flexible AI agent framework for building intelligent, multi-modal AI agents

GRAMI-AI: Dynamic AI Agent Framework

Overview

GRAMI-AI is an async-first AI agent framework designed to solve complex computational challenges through intelligent, collaborative agent interactions. Built for flexibility, the library lets developers create sophisticated, context-aware AI systems that can adapt, learn, and collaborate across diverse domains.

Key Features

  • Dynamic AI Agent Creation (Sync and Async)
  • Multi-LLM Support (Gemini, OpenAI, Anthropic, Ollama)
  • Extensible Tool Ecosystem
  • Multiple Communication Interfaces
  • Flexible Memory Management
  • Secure and Scalable Architecture

Installation

Using pip

pip install grami-ai

From Source

git clone https://github.com/YAFATEK/grami-ai.git
cd grami-ai
pip install -e .

Quick Start

Basic Agent Creation

import asyncio

from grami.agent import Agent
from grami.providers import GeminiProvider
# WebSearchTool and CalculatorTool are custom tools; see examples/tools.py
from examples.tools import WebSearchTool, CalculatorTool

async def main():
    # Initialize a Gemini-powered Agent
    agent = Agent(
        name="AssistantAI",
        role="Helpful Digital Assistant",
        llm_provider=GeminiProvider(api_key="YOUR_API_KEY"),
        tools=[WebSearchTool(), CalculatorTool()]
    )

    # send_message is a coroutine, so await it inside an async function
    response = await agent.send_message("Help me plan a trip to Paris")
    print(response)

asyncio.run(main())

Async Agent Creation

import asyncio

from grami.agent import AsyncAgent
from grami.providers import GeminiProvider

async def main():
    # Initialize a Gemini-powered AsyncAgent
    async_agent = AsyncAgent(
        name="ScienceExplainerAI",
        role="Scientific Concept Explainer",
        llm_provider=GeminiProvider(api_key="YOUR_API_KEY"),
        initial_context=[
            {
                "role": "system",
                "content": "You are an expert at explaining complex scientific concepts clearly."
            }
        ]
    )

    # Send a message and wait for the full response
    response = await async_agent.send_message("Explain quantum entanglement")
    print(response)

    # Stream a response token by token as it is generated
    async for token in async_agent.stream_message("Explain photosynthesis"):
        print(token, end='', flush=True)

asyncio.run(main())

Examples

We provide a variety of example implementations to help you get started:

Basic Agents

  • examples/simple_agent_example.py: Basic mathematical calculation agent
  • examples/simple_async_agent.py: Async scientific explanation agent
  • examples/gemini_example.py: Multi-tool Gemini Agent with various capabilities

Advanced Scenarios

  • examples/content_creation_agent.py: AI-Powered Content Creation Agent

    • Generates blog posts
    • Conducts topic research
    • Creates supporting visuals
    • Tailors content to specific audiences
  • examples/web_research_agent.py: Advanced Web Research and Trend Analysis Agent

    • Performs comprehensive market research
    • Conducts web searches
    • Analyzes sentiment
    • Predicts industry trends
    • Generates detailed reports

Collaborative Agents

  • examples/agent_crew_example.py: Multi-Agent Collaboration
    • Demonstrates inter-agent communication
    • Showcases specialized agent roles
    • Enables complex task delegation

Tool Integration

  • examples/tools.py: Collection of custom tools
    • Web Search
    • Weather Information
    • Calculator
    • Sentiment Analysis
    • Image Generation

Environment Variables

API Key Management

GRAMI-AI uses environment variables to manage sensitive credentials securely. To set up your API keys:

  1. Create a .env file in the project root directory
  2. Add your API keys in the following format:
    GEMINI_API_KEY=your_gemini_api_key_here

Important: Never commit your .env file to version control. The .gitignore is already configured to prevent this.
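
To use the key in code, one common pattern is to load the .env file with python-dotenv and read the variable from the environment. A minimal sketch, assuming python-dotenv is installed (GRAMI-AI may also pick the variable up for you):

import os

from dotenv import load_dotenv  # assumes python-dotenv is installed
from grami.providers import GeminiProvider

load_dotenv()  # reads .env from the current directory into the environment

# Read the key from the environment instead of hard-coding it
provider = GeminiProvider(api_key=os.environ["GEMINI_API_KEY"])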

Development Checklist

Core Framework Design

  • Implement AsyncAgent base class with dynamic configuration
  • Create flexible system instruction definition mechanism
  • Design abstract LLM provider interface
  • Develop dynamic role and persona assignment system
  • Implement multi-modal agent capabilities (text, image, video)

LLM Provider Abstraction

  • Unified interface for diverse LLM providers (see the sketch after this list)
    • Google Gemini integration (start_chat(), send_message())
    • OpenAI ChatGPT integration
    • Anthropic Claude integration
    • Ollama local LLM support
  • Standardize function/tool calling across providers
  • Dynamic prompt engineering support
  • Provider-specific configuration handling
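
As a rough illustration of where this checklist is headed, a unified provider interface might look like the sketch below. Class and method names are hypothetical (only start_chat() and send_message() appear above), not the library's actual API.

from abc import ABC, abstractmethod
from typing import AsyncIterator

class BaseLLMProvider(ABC):
    """Hypothetical shared surface for Gemini, OpenAI, Anthropic, and Ollama backends."""

    @abstractmethod
    async def send_message(self, message: str) -> str:
        """Send a prompt and return the complete response."""

    @abstractmethod
    def stream_message(self, message: str) -> AsyncIterator[str]:
        """Send a prompt and yield response tokens as they arrive."""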

Communication Interfaces

  • WebSocket real-time communication
  • REST API endpoint design
  • Kafka inter-agent communication
  • gRPC support
  • Event-driven agent notification system
  • Secure communication protocols

Memory and State Management

  • Pluggable memory providers
    • In-memory state storage
    • Redis distributed memory
    • DynamoDB scalable storage
    • S3 content storage
  • Conversation and task history tracking
  • Global state management for agent crews
  • Persistent task and interaction logs

Tool and Function Ecosystem

  • Extensible tool integration framework
  • Default utility tools
    • Kafka message publisher
    • Web search utility
    • Content analysis tool
  • Provider-specific function calling support
  • Community tool marketplace
  • Easy custom tool development (see the sketch below)
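
To illustrate the last point, a custom tool can be as small as the sketch below. The interface shown (a name, a description, and an async run() method) is an assumption for illustration; see examples/tools.py for working tools.

class WordCountTool:
    """Hypothetical custom tool; the actual grami-ai tool contract may differ."""

    name = "word_count"
    description = "Counts the words in a piece of text."

    async def run(self, text: str) -> int:
        return len(text.split())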

Agent Crew Collaboration

  • Inter-agent communication protocol (a manual sketch follows this list)
  • Workflow and task delegation mechanisms
  • Approval and review workflows
  • Notification and escalation systems
  • Dynamic team composition
  • Shared context and memory management
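
Pending the built-in crew mechanics, inter-agent collaboration can already be wired by hand using only the AsyncAgent API from the Quick Start. A minimal sketch (illustrative wiring, not the framework's delegation protocol):

import asyncio

from grami.agent import AsyncAgent
from grami.providers import GeminiProvider

async def main():
    researcher = AsyncAgent(
        name="Researcher",
        role="Trend Researcher",
        llm_provider=GeminiProvider(api_key="YOUR_API_KEY")
    )
    writer = AsyncAgent(
        name="Writer",
        role="Content Creator",
        llm_provider=GeminiProvider(api_key="YOUR_API_KEY")
    )

    # Delegate: the researcher's findings become the writer's input
    findings = await researcher.send_message("Summarize current trends in AI agents")
    post = await writer.send_message(f"Write a short blog post based on: {findings}")
    print(post)

asyncio.run(main())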

Use Case Implementations

  • Digital Agency workflow template
    • Growth Manager agent
    • Content Creator agent
    • Trend Researcher agent
    • Media Creation agent
  • Customer interaction management
  • Approval and revision cycles

Security and Compliance

  • Secure credential management
  • Role-based access control
  • Audit logging
  • Compliance with data protection regulations

Performance and Scalability

  • Async-first design
  • Horizontal scaling support
  • Performance benchmarking
  • Resource optimization

Testing and Quality

  • Comprehensive unit testing
  • Integration testing for agent interactions
  • Mocking frameworks for LLM providers
  • Continuous integration setup

Documentation and Community

  • Detailed API documentation
  • Comprehensive developer guides
  • Example use case implementations
  • Contribution guidelines
  • Community tool submission process
  • Regular maintenance and updates

Future Roadmap

  • Payment integration solutions
  • Advanced agent collaboration patterns
  • Specialized industry-specific agents
  • Enhanced security features
  • Extended provider support

Memory Management

GRAMI-AI provides flexible memory management for AI agents, allowing you to store and retrieve conversation context, user information, and agent state.

import asyncio

from grami.agent import AsyncAgent
from grami.providers import GeminiProvider
from grami.memory import LRUMemory

async def main():
    # Initialize memory with a capacity of 1000 items
    memory = LRUMemory(capacity=1000)

    # Create an agent with memory
    agent = AsyncAgent(
        name="MemoryBot",
        role="AI Assistant with memory capabilities",
        llm_provider=GeminiProvider(api_key="YOUR_API_KEY"),
        memory_provider=memory
    )

    # Conversation with memory tracking
    response = await agent.send_message("Hi, I'm Alice and I love chess!")
    print(response)

    # Retrieve memory contents
    keys = await memory.list_keys()
    for key in keys:
        value = await memory.retrieve(key)
        print(f"Memory Entry: {key} - {value}")

asyncio.run(main())

Memory Providers

  • LRUMemory: Least Recently Used memory with configurable capacity
  • Easy to extend with custom memory providers (see the sketch below)
  • Supports storing and retrieving conversation context
  • Automatic management of memory capacity
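
Extending memory means implementing the same surface the agent and the snippet above rely on. The sketch below mirrors the calls shown (list_keys, retrieve); store is an assumed counterpart and may differ from the library's actual interface.

from typing import Any, Dict, List

class DictMemory:
    """Minimal custom provider backed by a plain dict (no eviction policy)."""

    def __init__(self) -> None:
        self._data: Dict[str, Any] = {}

    async def store(self, key: str, value: Any) -> None:  # assumed method name
        self._data[key] = value

    async def retrieve(self, key: str) -> Any:
        return self._data.get(key)

    async def list_keys(self) -> List[str]:
        return list(self._data.keys())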

Documentation

For detailed documentation, visit our Documentation Website.

Contributing

We welcome contributions! Please see our Contribution Guidelines.

License

MIT License - Empowering open-source innovation

About YAFATEK Solutions

Pioneering AI innovation through flexible, powerful frameworks.

Contact & Support


Star ⭐ the project if you believe in collaborative AI innovation!

