
A dynamic and flexible AI agent framework for building intelligent, multi-modal AI agents


GRAMI-AI: Dynamic AI Agent Framework


Overview

GRAMI-AI is an async-first AI agent framework designed to solve complex computational challenges through intelligent, collaborative agent interactions. Built for flexibility, the library lets developers create sophisticated, context-aware AI systems that can adapt, learn, and collaborate across diverse domains.

Key Features

  • Dynamic AI Agent Creation (Sync and Async)
  • Multi-LLM Support (Gemini, OpenAI, Anthropic, Ollama)
  • Extensible Tool Ecosystem
  • Multiple Communication Interfaces
  • Flexible Memory Management
  • Secure and Scalable Architecture

Installation

Using pip

pip install grami-ai

From Source

git clone https://github.com/YAFATEK/grami-ai.git
cd grami-ai
pip install -e .

Quick Start

Basic Agent Creation

import asyncio

from grami.agent import Agent
from grami.providers import GeminiProvider
# WebSearchTool and CalculatorTool are custom tools (see examples/tools.py below)
from examples.tools import WebSearchTool, CalculatorTool

# Initialize a Gemini-powered Agent
agent = Agent(
    name="AssistantAI",
    role="Helpful Digital Assistant",
    llm_provider=GeminiProvider(api_key="YOUR_API_KEY"),
    tools=[WebSearchTool(), CalculatorTool()]
)

# send_message is a coroutine, so drive it from an event loop
async def main():
    response = await agent.send_message("Help me plan a trip to Paris")
    print(response)

asyncio.run(main())

Async Agent Creation

import asyncio

from grami.agent import AsyncAgent
from grami.providers import GeminiProvider

# Initialize a Gemini-powered AsyncAgent
async_agent = AsyncAgent(
    name="ScienceExplainerAI",
    role="Scientific Concept Explainer",
    llm_provider=GeminiProvider(api_key="YOUR_API_KEY"),
    initial_context=[
        {
            "role": "system",
            "content": "You are an expert at explaining complex scientific concepts clearly."
        }
    ]
)

async def main():
    # Send a message
    response = await async_agent.send_message("Explain quantum entanglement")
    print(response)

    # Stream a response token by token
    async for token in async_agent.stream_message("Explain photosynthesis"):
        print(token, end='', flush=True)

asyncio.run(main())

Examples

We provide a variety of example implementations to help you get started:

Basic Agents

  • examples/simple_agent_example.py: Basic mathematical calculation agent
  • examples/simple_async_agent.py: Async scientific explanation agent
  • examples/gemini_example.py: Multi-tool Gemini Agent with various capabilities

Advanced Scenarios

  • examples/content_creation_agent.py: AI-Powered Content Creation Agent

    • Generates blog posts
    • Conducts topic research
    • Creates supporting visuals
    • Tailors content to specific audiences
  • examples/web_research_agent.py: Advanced Web Research and Trend Analysis Agent

    • Performs comprehensive market research
    • Conducts web searches
    • Analyzes sentiment
    • Predicts industry trends
    • Generates detailed reports

Collaborative Agents

  • examples/agent_crew_example.py: Multi-Agent Collaboration
    • Demonstrates inter-agent communication
    • Showcases specialized agent roles
    • Enables complex task delegation

Tool Integration

  • examples/tools.py: Collection of custom tools
    • Web Search
    • Weather Information
    • Calculator
    • Sentiment Analysis
    • Image Generation

Environment Variables

API Key Management

GRAMI-AI uses environment variables to manage sensitive credentials securely. To set up your API keys:

  1. Create a .env file in the project root directory
  2. Add your API keys in the following format:
    GEMINI_API_KEY=your_gemini_api_key_here
    

Important: Never commit your .env file to version control. The .gitignore is already configured to prevent this.
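Under this setup, reading the key in application code might look like the following sketch. It uses only the standard library; `get_api_key` is a hypothetical helper for illustration, not part of GRAMI-AI (a loader such as python-dotenv would populate the environment from the .env file first):

```python
import os

# Hypothetical helper: read an API key from the environment instead of
# hard-coding it, failing loudly when the variable is missing.
def get_api_key(name: str = "GEMINI_API_KEY") -> str:
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set; add it to your .env file")
    return key
```

The agent can then be constructed with `GeminiProvider(api_key=get_api_key())` rather than an inline string.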

Development Checklist

Core Framework Design

  • Implement AsyncAgent base class with dynamic configuration
  • Create flexible system instruction definition mechanism
  • Design abstract LLM provider interface
  • Develop dynamic role and persona assignment system
  • Implement multi-modal agent capabilities (text, image, video)

LLM Provider Abstraction

  • Unified interface for diverse LLM providers
    • Google Gemini integration (start_chat(), send_message())
    • OpenAI ChatGPT integration
    • Anthropic Claude integration
    • Ollama local LLM support
  • Standardize function/tool calling across providers
  • Dynamic prompt engineering support
  • Provider-specific configuration handling
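One way to picture the unified interface described above is an abstract base class that every provider implements. This is a minimal sketch under assumed names (`LLMProvider`, `EchoProvider` are illustrative, not GRAMI-AI's actual classes):

```python
import asyncio
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Hypothetical unified interface; Gemini, OpenAI, Anthropic, and
    Ollama backends would each implement these coroutines."""

    @abstractmethod
    async def start_chat(self) -> None:
        """Initialize provider-specific chat state."""

    @abstractmethod
    async def send_message(self, message: str) -> str:
        """Send a single message and return the model's reply."""

class EchoProvider(LLMProvider):
    """Stand-in provider for offline testing; a real provider calls an API."""

    async def start_chat(self) -> None:
        pass

    async def send_message(self, message: str) -> str:
        return f"echo: {message}"

reply = asyncio.run(EchoProvider().send_message("ping"))
```

Keeping the interface async-only matches the framework's async-first design, and a stub provider like `EchoProvider` doubles as a mocking aid for tests.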

Communication Interfaces

  • WebSocket real-time communication
  • REST API endpoint design
  • Kafka inter-agent communication
  • gRPC support
  • Event-driven agent notification system
  • Secure communication protocols
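The event-driven notification item above can be sketched with nothing but asyncio queues; the names here are illustrative, not GRAMI-AI API, and a Kafka or WebSocket transport would expose the same publish/subscribe shape:

```python
import asyncio

class EventBus:
    """Hypothetical in-process event bus: agents subscribe to topics
    and receive notifications through per-subscriber queues."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[asyncio.Queue]] = {}

    def subscribe(self, topic: str) -> asyncio.Queue:
        queue: asyncio.Queue = asyncio.Queue()
        self._subscribers.setdefault(topic, []).append(queue)
        return queue

    async def publish(self, topic: str, payload: dict) -> None:
        # Fan the payload out to every queue subscribed to this topic
        for queue in self._subscribers.get(topic, []):
            await queue.put(payload)

async def demo() -> dict:
    bus = EventBus()
    inbox = bus.subscribe("task.completed")
    await bus.publish("task.completed", {"agent": "researcher", "status": "done"})
    return await inbox.get()

event = asyncio.run(demo())
```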

Memory and State Management

  • Pluggable memory providers
    • In-memory state storage
    • Redis distributed memory
    • DynamoDB scalable storage
    • S3 content storage
  • Conversation and task history tracking
  • Global state management for agent crews
  • Persistent task and interaction logs
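The pluggable memory providers listed above suggest a small common interface that each backend implements. A sketch under assumed names (`MemoryProvider`, `InMemoryProvider` are hypothetical; Redis, DynamoDB, or S3 backends would implement the same two methods):

```python
from abc import ABC, abstractmethod
from typing import Any, Optional

class MemoryProvider(ABC):
    """Hypothetical pluggable memory interface."""

    @abstractmethod
    def store(self, key: str, value: Any) -> None:
        """Persist a value under a key."""

    @abstractmethod
    def retrieve(self, key: str) -> Optional[Any]:
        """Return the stored value, or None if the key is unknown."""

class InMemoryProvider(MemoryProvider):
    """Simplest backend: a process-local dict, suitable for tests."""

    def __init__(self) -> None:
        self._data: dict[str, Any] = {}

    def store(self, key: str, value: Any) -> None:
        self._data[key] = value

    def retrieve(self, key: str) -> Optional[Any]:
        return self._data.get(key)

memory = InMemoryProvider()
memory.store("conversation:1", ["Hello!"])
```

Swapping `InMemoryProvider` for a distributed backend then requires no changes to agent code, which is the point of the abstraction.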

Tool and Function Ecosystem

  • Extensible tool integration framework
  • Default utility tools
    • Kafka message publisher
    • Web search utility
    • Content analysis tool
  • Provider-specific function calling support
  • Community tool marketplace
  • Easy custom tool development
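A custom tool in this kind of framework typically bundles a name, a description the LLM uses to decide when to call it, and a callable. The `Tool` dataclass below is a hypothetical shape for illustration, not GRAMI-AI's actual tool API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    """Hypothetical tool shape: name and description are what the LLM
    sees when choosing whether to invoke the tool."""
    name: str
    description: str
    func: Callable[[str], str]

    def run(self, argument: str) -> str:
        return self.func(argument)

def word_count(text: str) -> str:
    # Trivial example tool: count whitespace-separated words
    return str(len(text.split()))

count_tool = Tool(
    name="word_count",
    description="Counts the number of words in a piece of text.",
    func=word_count,
)
```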

Agent Crew Collaboration

  • Inter-agent communication protocol
  • Workflow and task delegation mechanisms
  • Approval and review workflows
  • Notification and escalation systems
  • Dynamic team composition
  • Shared context and memory management

Use Case Implementations

  • Digital Agency workflow template
    • Growth Manager agent
    • Content Creator agent
    • Trend Researcher agent
    • Media Creation agent
  • Customer interaction management
  • Approval and revision cycles

Security and Compliance

  • Secure credential management
  • Role-based access control
  • Audit logging
  • Compliance with data protection regulations

Performance and Scalability

  • Async-first design
  • Horizontal scaling support
  • Performance benchmarking
  • Resource optimization

Testing and Quality

  • Comprehensive unit testing
  • Integration testing for agent interactions
  • Mocking frameworks for LLM providers
  • Continuous integration setup

Documentation and Community

  • Detailed API documentation
  • Comprehensive developer guides
  • Example use case implementations
  • Contribution guidelines
  • Community tool submission process
  • Regular maintenance and updates

Future Roadmap

  • Payment integration solutions
  • Advanced agent collaboration patterns
  • Specialized industry-specific agents
  • Enhanced security features
  • Extended provider support

Documentation

For detailed documentation, visit our Documentation Website.

Contributing

We welcome contributions! Please see our Contribution Guidelines.

License

MIT License - Empowering open-source innovation

About YAFATEK Solutions

Pioneering AI innovation through flexible, powerful frameworks.

Contact & Support


Star ⭐ the project if you believe in collaborative AI innovation!



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

grami_ai-0.3.122.tar.gz (11.1 kB)

Uploaded Source

Built Distribution

grami_ai-0.3.122-py3-none-any.whl (13.8 kB)

Uploaded Python 3

File details

Details for the file grami_ai-0.3.122.tar.gz.

File metadata

  • Download URL: grami_ai-0.3.122.tar.gz
  • Upload date:
  • Size: 11.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for grami_ai-0.3.122.tar.gz
  • SHA256: b00e3e5ffe43fe245216e56e7ca36c7b5063cb34ed7ca9ef646913329c799506
  • MD5: 027f7e766c5b9b7febfcdd2917b01e16
  • BLAKE2b-256: 6a3fc9d6bd1ec888b67e931f10a5a73b0757aa7e211e8f8e533b3b43146b9062

See more details on using hashes here.

File details

Details for the file grami_ai-0.3.122-py3-none-any.whl.

File metadata

  • Download URL: grami_ai-0.3.122-py3-none-any.whl
  • Upload date:
  • Size: 13.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for grami_ai-0.3.122-py3-none-any.whl
  • SHA256: 6a69066c31215bdd518cfdaa81c462a1221c8fa7be87532b037e4d5a5e7e3475
  • MD5: 56ad1785c2b220dbccd2158d11193bdd
  • BLAKE2b-256: 659ccbb178aab5639327f0269716dd9fe20f9e9f2507c31c4dc63c4dfab8d823

