A professional, optimized Python library for creating and managing AI agents with intelligent tool selection and LLM integration

🚀 DAIE — Decentralized AI Ecosystem

Build autonomous AI agents that reason, use tools, communicate over P2P networks, and stream responses — powered by any LLM

The lightweight, offline-first alternative to LangChain for building production-ready AI agents


Why DAIE?

Feature DAIE LangChain CrewAI
Offline-first ✅ Full Ollama support ❌ Cloud-dependent ❌ Cloud-dependent
P2P Networking ✅ Built-in ❌ No ❌ No
Agent Personas ✅ Gender, personality, behavior ❌ Limited ❌ Limited
Intelligent Routing ✅ LLM-based agent selection ❌ No ❌ No
File Transfer ✅ A2A secure transfer ❌ No ❌ No
Vision Support ✅ Camera + vision models ⚠️ Limited ❌ No
Streaming ✅ Library-level ⚠️ Per-call ❌ No
Custom Tools ✅ @tool decorator ⚠️ Complex ⚠️ Complex
Multi-Agent ✅ Orchestrator pattern ⚠️ Chains ✅ Crews
Dependencies 🪶 Ultra-lightweight 📦📦📦 Heavy 📦📦 Medium
Philosophy ✅ Zero-dependency core ❌ Dependency heavy ❌ Dependency heavy

DAIE is for you if you want:

  • ๐Ÿ  Offline-first AI โ€” Run everything locally with Ollama, no cloud required
  • ๐Ÿ”— Decentralized agents โ€” Agents communicate directly over P2P networks
  • ๐ŸŽญ Human-like personas โ€” Configure gender, personality, and behavior traits
  • ๐Ÿง  Intelligent routing โ€” LLM automatically selects the best agent for each message
  • ๐Ÿ“ Secure file transfers โ€” Send files between agents with Base64 encoding
  • ๐Ÿ‘๏ธ Vision capabilities โ€” Camera integration with vision models like Qwen-VL
  • โšก Real-time streaming โ€” Tokens stream as they arrive, no buffering
  • ๐Ÿ’ฌ Pre-configured chat loops โ€” Ready-to-use chat loops for agents, nodes, orchestrators, and hybrid systems

๐Ÿ—๏ธ Architecture

┌─────────────────────────────────────────────────────────────┐
│                    USER / APPLICATION                       │
└─────────────────────────────────────────────────────────────┘
                            │
                            ▼
┌─────────────────────────────────────────────────────────────┐
│                      DAIE FRAMEWORK                         │
│    ┌─────────────┐  ┌─────────────┐  ┌─────────────┐        │
│    │   AGENTS    │  │    TOOLS    │  │   MEMORY    │        │
│    │  • ReAct    │  │  • File     │  │  • Working  │        │
│    │  • Persona  │  │  • API      │  │  • Semantic │        │
│    │  • Config   │  │  • Selenium │  │  • Episodic │        │
│    └─────────────┘  └─────────────┘  └─────────────┘        │
│           │                │                │               │
│           ▼                ▼                ▼               │
│  ┌─────────────────────────────────────────────────────┐    │
│  │              ORCHESTRATOR (Multi-Agent)             │    │
│  │  • Task delegation  • Sub-agent coordination        │    │
│  └─────────────────────────────────────────────────────┘    │
│                            │                                │
│                            ▼                                │
│  ┌─────────────────────────────────────────────────────┐    │
│  │              AGENT ROUTER (Intelligent)             │    │
│  │  • LLM-based routing  • Content analysis            │    │
│  │  • Dynamic agent selection  • Routing history       │    │
│  └─────────────────────────────────────────────────────┘    │
│                            │                                │
│                            ▼                                │
│  ┌─────────────────────────────────────────────────────┐    │
│  │              COMMUNICATION MANAGER                  │    │
│  │  • P2P networking  • WebSocket Support  • Auth      │    │
│  └─────────────────────────────────────────────────────┘    │
│                            │                                │
│                            ▼                                │
│  ┌─────────────────────────────────────────────────────┐    │
│  │                  LLM MANAGER                        │    │
│  │  • Ollama  • OpenAI  • Anthropic  • Google  • Azure │    │
│  └─────────────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────────────┘
                            │
                            ▼
┌─────────────────────────────────────────────────────────────┐
│                    RAG ENGINE (TF-IDF)                      │
│  • Document loading  • Context retrieval  • Knowledge base  │
└─────────────────────────────────────────────────────────────┘

Features

🤖 AI Agents

  • ReAct agent loop — LLM reasons → picks a tool → sees the result → iterates until it gives a final answer
  • Multi-Agent Orchestration — Coordinate main agents and sub-agents for complex goals (e.g. Research Lab, Courtroom)
  • Intelligent Agent Router — LLM-based routing that automatically selects the best agent for each message based on content analysis
  • Agent persona — configure gender, personality, and behavior traits injected directly into the LLM prompt
  • Per-agent LLM overrides — each agent can have its own temperature and max_tokens

๐Ÿ” RAG Systems

  • Decentralized RAG โ€” Every agent can have its own unique knowledge base (TF-IDF retrieval)
  • Document-based knowledge โ€” Load .txt, .pdf, .md files for context-aware responses

โš™๏ธ Automation Tools

  • Pre-built tools โ€” file system, HTTP API calls, Selenium Chrome browser automation
  • Custom tools โ€” decorate any function with @tool and it works identically to built-in tools
  • A2A file transfer โ€” securely send files between agents over the network using Base64 encoding

💬 Chatbots & Vision

  • Streaming tokens — set stream=True once, tokens print as they arrive
  • Vision Capabilities — Support for vision models (e.g. qwen3-vl:2b) with camera integration
  • Camera & audio — optional OpenCV camera capture and PyAudio microphone/speaker support
  • Chat Loop Configs — Pre-configured chat loops for agents, nodes, orchestrators, and hybrid systems

๐ŸŒ Networking & Communication

  • P2P networking โ€” agents communicate across machines via WebSocket with authentication & authorization
  • WebSocket Support โ€” Real-time bidirectional communication
  • Multi-provider LLM โ€” Ollama (default), OpenAI, Anthropic, Google, Azure, OpenRouter

๐Ÿ› ๏ธ Developer Tools

  • CLI โ€” manage agents and the core system from the terminal

Documentation

For detailed documentation, see the docs folder:

🚀 Getting Started

🤖 AI Agents

  • Agents — Agent creation, configuration, and the ReAct loop
  • Orchestrator — Multi-agent coordination and task delegation
  • Memory — Agent memory management (working, semantic, episodic)
  • Agent Router — LLM-based intelligent agent routing

๐Ÿ” RAG Systems

  • RAG โ€” Retrieval-Augmented Generation with TF-IDF

โš™๏ธ Automation Tools

  • Tools โ€” Pre-built tools, custom tools, and the @tool decorator

๐ŸŒ Networking & Communication

  • P2P Networking โ€” Peer-to-peer communication protocol for agents
  • Network Configuration โ€” Detailed guide on network_url and network_connections
  • Node โ€” Node abstraction for managing agents and resources
  • Orchestrator โ€” Multi-agent coordination and task delegation
  • Node vs Orchestrator โ€” Complete comparison guide with 100+ use cases and decision matrix
  • Communication โ€” P2P networking, messaging, and file transfers
  • LLM Configuration โ€” Multi-provider LLM setup and streaming
  • Chat Configs โ€” Pre-configured chat loops for agents, nodes, orchestrators, and hybrid systems

๐Ÿ—๏ธ Architecture Patterns

  • Node Architecture: Distributed infrastructure for multi-location systems
  • Orchestrator Pattern: Hierarchical workflow coordination for complex tasks
  • Hybrid Architecture: Combine Node + Orchestrator for enterprise-scale systems
  • HybridOrchestratorNode: Simplified hybrid setup combining Node + Orchestrator in one class
  • Agent Router: LLM-based intelligent routing for optimal agent selection

๐Ÿ› ๏ธ Developer Tools

  • CLI โ€” Command-line interface for agent and system management
  • Utils โ€” Camera, audio, encryption, and utility functions
  • API Reference โ€” Complete API reference for all modules

⚡ Quick Start (30 seconds)

# 1. Install DAIE
pip install daie

# 2. Install Ollama (local LLM)
curl -fsSL https://ollama.ai/install.sh | sh
ollama pull llama3.2:1b

# 3. Run your first agent
python -c "
import asyncio
from daie import Agent, AgentConfig, set_llm

set_llm(ollama_llm='llama3.2:1b', stream=True)

async def main():
    agent = Agent(config=AgentConfig(name='Alex', personality='helpful and witty'))
    await agent.start()
    response = await agent.send_message('Hello! What can you do?')
    await agent.stop()

asyncio.run(main())
"

That's it! You now have a working AI agent in 30 seconds.


🎯 Real Output Example

$ python examples/01_basic_chat.py

=== Basic Chat Loop ===
Type 'exit' or press Ctrl+C to quit.

You: What's the weather like today?

LUNA: 🌤️ Hey there! I'd love to help with the weather, but I don't have access to real-time data. However, I can tell you that I'm feeling sunny and energetic today! ☀️

If you want actual weather info, you could:
1. Ask me to search the web using my browser tool
2. Tell me your location and I'll look it up
3. Just chat with me about anything else!

What would you like to do? 😊

You: Search for weather in San Francisco

LUNA: 🔍 Let me search that for you!

[Using tool: selenium_chrome]
[Opening browser...]
[Searching: "weather in San Francisco"]

LUNA: 🌐 Found it! San Francisco is currently:
- Temperature: 62°F (17°C)
- Conditions: Partly cloudy
- Humidity: 75%
- Wind: 12 mph from the west

Perfect weather for a walk across the Golden Gate Bridge! 🌉

Installation

pip install daie

Optional extras:

pip install "daie[dev]"      # pytest, black, mypy, flake8, pytest-asyncio, pytest-cov
pip install "daie[docs]"     # sphinx, sphinx-rtd-theme, nbsphinx

Requires Python 3.10+

Core dependencies: pyyaml, selenium, webdriver-manager, uvicorn, websockets, nats-py, pyaudio, zeroconf, kademlia, numpy, pydantic, pydantic-settings

[!TIP] Zero-Dependency Philosophy: DAIE includes in-house, lightweight replacements for requests, python-dotenv, rich, typer, and cryptography to keep the core footprint minimal and avoid dependency hell.


Quick Start

1. Simple streaming chat with persona

import asyncio
from daie import Agent, AgentConfig, set_llm
from daie.agents import AgentRole

set_llm(ollama_llm="wizard-vicuna-uncensored:7b", stream=True)

async def main():
    agent = Agent(config=AgentConfig(
        name="Alex",
        role=AgentRole.GENERAL_PURPOSE,
        system_prompt="You are a helpful and concise AI assistant.",
        gender="female",
        personality="sassy, witty, and very direct",
        behavior="always uses emojis and speaks enthusiastically",
        temperature=0.9,
        max_tokens=1024
    ))
    await agent.start()

    print("=== Chat Loop ===")
    print("Type 'exit' to quit.\n")

    while True:
        try:
            user_input = input("You: ")
            if user_input.lower() in ("exit", "quit"):
                break
        except (KeyboardInterrupt, EOFError):
            print("\nExiting...")
            break

        response = await agent.send_message(user_input)
        print("\n")

    await agent.stop()

asyncio.run(main())

2. Agent with tools (ReAct loop)

import asyncio
from daie import Agent, AgentConfig, set_llm
from daie.agents import AgentRole
from daie.tools import FileManagerTool, APICallTool, tool

set_llm(ollama_llm="llama3.2:1b", stream=True)

# Custom tool via decorator
@tool(name="calculate_math", description="Evaluate a basic math expression.")
async def calculate_math(expression: str) -> str:
    return str(eval(expression))  # demo only: never eval untrusted input

async def main():
    agent = Agent(config=AgentConfig(
        name="MathBot",
        role=AgentRole.GENERAL_PURPOSE,
        system_prompt="You are a capable agent with access to math and file tools.",
    ))

    agent.add_tool(calculate_math)
    agent.add_tool(FileManagerTool())

    await agent.start()

    # LLM autonomously picks the right tools via the ReAct loop
    result = await agent.execute_task(
        "Calculate 25 * 14 and save the result into a file called result.txt"
    )
    print("Final Answer:", result)

    await agent.stop()

asyncio.run(main())

3. P2P multi-agent networking & file transfer

import asyncio
from daie import Agent, AgentConfig, set_llm
from daie.agents import AgentRole
from daie.communication import CommunicationManager
from daie.agents.message import AgentMessage

set_llm(ollama_llm="wizard-vicuna-uncensored:7b")

async def main():
    # Shared communication bus
    comm = CommunicationManager()
    await comm.start()

    # Agent 1
    agent1 = Agent(config=AgentConfig(
        name="NodeAlfa",
        role=AgentRole.GENERAL_PURPOSE,
        network_url="ws://localhost:8000",
    ))
    await agent1.start(communication_manager=comm)

    # Agent 2 (with auth + file transfers)
    agent2 = Agent(config=AgentConfig(
        name="NodeBravo",
        role=AgentRole.GENERAL_PURPOSE,
        network_url="ws://localhost:8001",
        auth_token="secure_token_123",
        allow_file_transfers=True,
    ))
    await agent2.start(communication_manager=comm)

    # Send direct message
    msg = AgentMessage(
        sender_id=agent1.id,
        receiver_id=agent2.id,
        content="Hello from NodeAlfa!",
        message_type="text",
    )
    await comm.send_message(msg)

    # A2A file transfer
    file_tool = agent1.get_tool("a2a_send_file")
    if file_tool:
        await file_tool._execute({
            "receiver_id": agent2.id,
            "file_path": "payload.txt",
            "message": "Secure payload!",
        })

    await agent1.stop()
    await agent2.stop()
    await comm.stop()

asyncio.run(main())

4. Multi-Agent Orchestration

The Orchestrator allows a main agent to coordinate multiple sub-agents to solve complex problems.

import asyncio
from daie import Agent, AgentConfig, Orchestrator
from daie.agents import AgentRole

Professor = Agent(config=AgentConfig(name="Professor", role=AgentRole.COORDINATOR))
Nova = Agent(config=AgentConfig(name="NOVA", goal="Handle technical research"))

orchestrator = Orchestrator(
    main_agent=Professor,
    sub_agents=[Nova],
    context_name="research_lab"
)

async def main():
    await orchestrator.start()
    response = await orchestrator.execute_task("Research decentralized consensus")
    print(response)
    await orchestrator.stop()

asyncio.run(main())

5. Decentralized RAG

Agents can maintain independent knowledge bases using simple directory-based RAG.

config = AgentConfig(
    name="Expert",
    rag_document_path="data/expert_knowledge/"  # Local folder with .txt, .pdf, .md files
)
agent = Agent(config=config)
# The agent will automatically retrieve relevant context before answering

6. Intelligent Agent Routing

The AgentRouter uses an LLM to automatically select the best agent for each message based on content analysis.

import asyncio
from daie import Agent, AgentConfig, set_llm
from daie.agents import AgentRole, AgentRouter

set_llm(ollama_llm="llama3.2:1b", stream=True)

async def main():
    # Create specialized agents
    assistant = Agent(config=AgentConfig(
        name="Assistant",
        role=AgentRole.GENERAL_PURPOSE,
        system_prompt="You are a helpful general-purpose assistant."
    ))
    
    coder = Agent(config=AgentConfig(
        name="Coder",
        role=AgentRole.SPECIALIZED,
        system_prompt="You are an expert programmer. Write clean, efficient code."
    ))
    
    researcher = Agent(config=AgentConfig(
        name="Researcher",
        role=AgentRole.SPECIALIZED,
        system_prompt="You are a research specialist. Analyze and summarize information."
    ))
    
    # Create router from agents list
    router = AgentRouter.from_agents([assistant, coder, researcher])
    
    # Router automatically selects the best agent
    agent_type = await router.route("Write a Python function to sort a list")
    # Returns: "coder"
    
    agent_type = await router.route("Explain quantum computing")
    # Returns: "researcher"
    
    agent_type = await router.route("What's the weather like?")
    # Returns: "assistant"
    
    # Get routing history
    history = router.get_routing_history()
    print(f"Routed {len(history)} messages")

asyncio.run(main())

7. Full-Power One-File Demo (Orchestrator + Tools + Guardrails)

Copy this into a single file (e.g., demo.py) and run it to see the full architecture in action.

import asyncio
from daie import Agent, AgentConfig, Orchestrator, set_llm
from daie.agents import AgentRole
from daie.tools import FileManagerTool, APICallTool, tool

# 1. Setup - Local LLM with streaming enabled
set_llm(ollama_llm="llama3.2:1b", stream=True)

# 2. Define a custom tool for the agents to use
@tool(name="code_executor", description="Executes snippets of python code safely.")
async def execute_code(code: str) -> str:
    # In a real app, use a sandbox!
    return f"Code executed successfully. Output: [Simulated result for {len(code)} chars]"

async def main():
    print("🚀 Initializing Decentralized AI Ecosystem Demo...")

    # 3. Create a specialized Researcher agent
    researcher = Agent(config=AgentConfig(
        name="Researcher",
        role=AgentRole.SPECIALIZED,
        goal="Gather and summarize technical information",
        rag_document_path="docs/",  # Optional: local knowledge base
    ))

    # 4. Create a specialized Coder agent with guardrails
    coder = Agent(config=AgentConfig(
        name="Coder",
        role=AgentRole.SPECIALIZED,
        goal="Write and verify optimized Python code",
        max_tokens_per_task=2000,   # Production guardrail
        max_tool_calls_per_task=5,  # Production guardrail
    ))
    coder.add_tool(execute_code)
    coder.add_tool(FileManagerTool())

    # 5. Create the Orchestrator to coordinate them
    # The Coordinator agent manages the sub-agents autonomously
    boss = Agent(config=AgentConfig(name="Boss", role=AgentRole.COORDINATOR))
    
    system = Orchestrator(
        main_agent=boss,
        sub_agents=[researcher, coder],
        context_name="SoftwareDevelopmentLab"
    )

    # 6. Start the system (Lifecycle management is mandatory)
    await system.start()

    print("\n--- System is online. Executing high-level task ---")
    
    # 7. Execute a complex multi-step task
    task = "Research how to implement uuid7 in Python, then write a sample script and save it to 'uuid_sample.py'."
    result = await system.execute_task(task)

    print("\n--- Task Complete ---")
    print(f"Final Outcome:\n{result}")

    # 8. Shutdown cleanly
    await system.stop()

if __name__ == "__main__":
    asyncio.run(main())

8. Chat Loop Config (Pre-configured Chat Loops)

The daie.chat module provides pre-configured chat loop setups so you don't need to write the full boilerplate code. Simply configure and run!

from daie import Agent, AgentConfig, set_llm
from daie.chat import ChatLoopConfig

set_llm(ollama_llm="llama3.2:1b", stream=True)

# Create your agent
agent = Agent(config=AgentConfig(
    name="LUNA",
    system_prompt="You are a helpful AI assistant.",
    personality="friendly and helpful"
))

# Run the chat loop with minimal code!
chat_loop = ChatLoopConfig(agent=agent)
chat_loop.run()

Available Chat Loop Configs:

Config Target Use Case
ChatLoopConfig Simple Agent Basic chat with an agent
NodeChatConfig Single Node Advanced chat with orchestrator and sub-agents
OrchestratorChatConfig Multi-Node System Multi-node collaboration and task execution
HybridChatConfig Hybrid System Simple chat with hybrid systems

📖 Full guide: Chat Configs — Complete documentation for all chat loop configurations


Agent Configuration

from daie.agents.config import AgentConfig, AgentRole

config = AgentConfig(
    name="MyAgent",        # (ALEX, NOVA, BOB, etc.)
    role=AgentRole.GENERAL_PURPOSE,   # or SPECIALIZED, COORDINATOR, WORKER, ANALYZER, EXECUTOR
    goal="Help users with tasks",
    backstory="A capable AI assistant",
    system_prompt="You are a helpful assistant.",

    # Persona traits (automatically injected into LLM prompts)
    gender="female",                             # Literal["male", "female"] or None
    personality="sarcastic, witty, very direct",  # free-form string
    behavior="always starts sentences with Hmm",  # free-form string

    # Per-agent LLM overrides (take priority over global set_llm settings)
    temperature=0.7,
    max_tokens=1000,

    # Task settings
    task_timeout=30,       # seconds before execute_task times out

    # P2P Networking
    network_url="ws://your-ip-or-devtunnel:8000",
    auth_token="secure_secret_here",
    allow_file_transfers=True,
    allowed_senders=["agent-id-1", "agent-id-2"],   # whitelist (empty = allow all)
)

LLM Configuration

from daie import set_llm, get_llm_config, LLMType

# Ollama (local, default)
set_llm(ollama_llm="llama3.2:latest", temperature=0.7, max_tokens=1000)
set_llm(ollama_llm="gemma3:1b", stream=True)   # enable streaming

# OpenAI
set_llm(llm_type=LLMType.OPENAI, model_name="gpt-4o-mini", api_key="sk-...")

# Anthropic
set_llm(llm_type=LLMType.ANTHROPIC, model_name="claude-3-sonnet-20240229", api_key="...")

# Google
set_llm(llm_type=LLMType.GOOGLE, model_name="gemini-pro", api_key="...")

# Azure OpenAI
set_llm(llm_type=LLMType.AZURE, model_name="gpt-4", api_key="...", base_url="https://<resource>.openai.azure.com")

# OpenRouter
set_llm(llm_type=LLMType.OPENROUTER, model_name="mistralai/mistral-7b-instruct", api_key="...")

# Check current config
cfg = get_llm_config()
print(cfg.llm_type, cfg.model_name, cfg.stream)

Streaming

Streaming is a library-level setting — set it once and it applies everywhere:

set_llm(ollama_llm="llama3.2:latest", stream=True)

When stream=True, send_message() prints tokens as they arrive and returns the full response string when done. execute_task() always runs the reasoning loop without streaming (for reliability), then streams the final answer.
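Conceptually, token streaming is just an async iterator consumed as chunks arrive, printed immediately instead of buffered. A framework-agnostic sketch of that mechanic (fake_llm_stream is a hypothetical stand-in for the model, not a DAIE API):

import asyncio

async def fake_llm_stream(prompt: str):
    """Stand-in for a model that yields tokens as they are generated."""
    for token in ["Hello", ", ", "world", "!"]:
        await asyncio.sleep(0)  # simulate per-token arrival
        yield token

async def stream_reply(prompt: str) -> str:
    """Print tokens as they arrive, then return the full response string."""
    parts = []
    async for token in fake_llm_stream(prompt):
        print(token, end="", flush=True)
        parts.append(token)
    print()
    return "".join(parts)

full = asyncio.run(stream_reply("Hi"))  # prints "Hello, world!" token by token

This is the same contract send_message() follows when stream=True: tokens print live, and the complete string is still returned at the end.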


Tools

Pre-built tools

Tool Description
FileManagerTool Create, read, write, delete, copy, move files and directories
APICallTool HTTP GET / POST / PUT / DELETE / PATCH requests
HTTPGetTool Simplified HTTP GET
HTTPPostTool Simplified HTTP POST
SeleniumChromeTool Full Chrome browser automation
A2ASendFileTool Transfer files securely between agents over P2P network (import from daie.tools.a2a_file)
A2ASendMessageTool Send messages between agents (import from daie.tools.a2a)
A2ADelegateTaskTool Delegate tasks to other agents via ACP (import from daie.tools.a2a)

FileManagerTool actions

from daie.tools import FileManagerTool

fm = FileManagerTool()

# Create
await fm.execute({"action": "create_file", "path": "notes.txt", "content": "hello"})

# Read
result = await fm.execute({"action": "read_file", "path": "notes.txt"})
print(result["content"])

# List directory
result = await fm.execute({"action": "list_contents", "path": ".", "recursive": False})

# Delete
await fm.execute({"action": "delete_file", "path": "notes.txt"})

APICallTool

from daie.tools import APICallTool

api = APICallTool()
result = await api.execute({
    "url": "https://api.github.com/users/octocat",
    "method": "GET",
    "headers": {"Accept": "application/json"},
})
print(result["json"])

SeleniumChromeTool (browser automation)

from daie.tools import SeleniumChromeTool

browser = SeleniumChromeTool()

await browser.execute({"action": "open_url", "url": "https://example.com", "headless": True})
result = await browser.execute({"action": "get_title"})
print(result["page_title"])

await browser.execute({"action": "screenshot", "screenshot_path": "page.png"})

Custom @tool decorator

from daie.tools import tool

@tool(name="calculate", description="Evaluate a math expression")
async def calculate(expression: str) -> str:
    return str(eval(expression))  # caution: eval is unsafe; sandbox it in production

agent.add_tool(calculate)
result = await agent.execute_task("What is 12 * 34?")

P2P Networking & File Transfers

DAIE supports multi-agent communication via its CommunicationManager. Agents can:

  • Discover peers via the built-in NodeRegistry
  • Send direct messages between agents (in-process or via WebSocket for remote agents)
  • Transfer files securely using Base64 encoding with the A2ASendFileTool
  • Authorize senders with allowed_senders whitelists
  • Authenticate connections with auth_token

Setting Up Networked Agents

from daie import Agent, AgentConfig
from daie.communication import CommunicationManager

comm = CommunicationManager()
await comm.start()

config = AgentConfig(
    name="NetworkWorker",
    network_url="ws://<your-public-ip-or-devtunnel>:8000",
    auth_token="secure_cross_machine_token123",
    allow_file_transfers=True
)
agent = Agent(config=config)
await agent.start(communication_manager=comm)

Authorization Whitelist

config = AgentConfig(
    name="SecureNode",
    allowed_senders=["trusted-agent-id-1", "trusted-agent-id-2"],
)
# Only messages from whitelisted sender IDs will be accepted.
# Empty list = allow all senders.
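The whitelist rule stated above boils down to a simple membership check. A sketch of that logic (is_authorized is a hypothetical helper, not DAIE's internal code):

def is_authorized(sender_id: str, allowed_senders: list[str]) -> bool:
    """Empty whitelist means 'allow everyone'; otherwise require membership."""
    return not allowed_senders or sender_id in allowed_senders

assert is_authorized("anyone", [])                                   # empty list allows all
assert is_authorized("trusted-agent-id-1", ["trusted-agent-id-1"])   # whitelisted
assert not is_authorized("stranger", ["trusted-agent-id-1"])         # rejected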

Camera (OpenCV)

pip install opencv-python
from daie.utils import CameraManager, capture_image, list_camera_devices

# List cameras
devices = list_camera_devices()
print("Available cameras:", devices)

# Capture a single image
capture_image("photo.jpg", device_index=0)

# Stream frames
cam = CameraManager()
cam.initialize_camera(device_index=0)

def on_frame(frame):
    print("Got frame:", frame.shape)

cam.start_streaming(callback=on_frame)
# ... do work ...
cam.stop_streaming()
cam.release()

Vision Chat with Qwen-VL

DAIE supports local vision models via Ollama.

import cv2
import base64
from daie import Agent, set_llm
from daie.utils import CameraManager

set_llm(ollama_llm="qwen3-vl:2b")

# Capture and encode image
cam = CameraManager()
frame = cam.get_frame()
_, buffer = cv2.imencode('.jpg', frame)
img_b64 = base64.b64encode(buffer).decode('utf-8')

# Query the vision agent
agent = Agent()
response = await agent.execute_task("What do you see?", images=[img_b64])

Audio (PyAudio)

pip install pyaudio
from daie.utils import AudioManager, record_audio_file, play_audio_file

# List audio devices
am = AudioManager()
am.initialize_audio()
devices = am.list_audio_devices()
print(devices)

# Record 5 seconds to a WAV file
record_audio_file("recording.wav", duration=5.0, sample_rate=16000)

# Play it back
play_audio_file("recording.wav")

CLI

# Agent management
daie agent list
daie agent create --name "MyAgent" --role "general-purpose"
daie agent start <agent-id>
daie agent stop <agent-id>
daie agent status <agent-id>
daie agent delete <agent-id>

# Core system
daie core init
daie core start
daie core stop
daie core status
daie core health
daie core logs

Architecture

src/daie/
├── agents/         Agent, AgentConfig, AgentRole, AgentMessage, Orchestrator, AgentRouter
├── core/           LLMManager, LLMConfig, LLMType, set_llm(), get_llm(), DecentralizedAISystem, Node
├── tools/          Tool base class, @tool decorator, FileManagerTool,
│                   APICallTool, HTTPGetTool, HTTPPostTool, SeleniumChromeTool, ToolRegistry,
│                   A2ASendFileTool, A2ASendMessageTool, A2ADelegateTaskTool
├── utils/          AudioManager, CameraManager, encryption, logging, serialization
├── communication/  CommunicationManager (in-memory + WebSocket P2P)
├── registry/       NodeRegistry (decentralized agent discovery)
├── memory/         MemoryManager (working, semantic, episodic)
├── protocols/      Protocol definitions (ACP - Agent Connect Protocol)
├── rag/            RAGEngine, DocumentLoader (TF-IDF retrieval)
└── cli/            Typer-based CLI (agent management, core system control)

ReAct loop flow:

execute_task("Create notes.txt")
  │
  ├─ LLM: {"tool":"file_manager","params":{"action":"create_file",...}}
  ├─ Run FileManagerTool → {"success":true,...}
  ├─ LLM: {"answer":"Done! File created."}
  └─ return "Done! File created."
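In plain Python, a loop of this shape reduces to: ask the LLM for JSON, dispatch any tool call, feed the result back, stop on a final answer. A minimal framework-free sketch (llm_call and the tool registry are hypothetical stand-ins, not DAIE internals):

import json

def llm_call(history: list[dict]) -> str:
    """Stand-in for the model: requests one tool call, then answers."""
    if any(m["role"] == "tool" for m in history):
        return json.dumps({"answer": "Done! File created."})
    return json.dumps({"tool": "file_manager",
                       "params": {"action": "create_file", "path": "notes.txt"}})

def file_manager(params: dict) -> dict:
    return {"success": True, "path": params["path"]}

TOOLS = {"file_manager": file_manager}

def react_loop(task: str, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = json.loads(llm_call(history))
        if "answer" in decision:                               # final answer -> stop
            return decision["answer"]
        result = TOOLS[decision["tool"]](decision["params"])   # run the chosen tool
        history.append({"role": "tool", "content": json.dumps(result)})
    return "Max steps reached"

print(react_loop("Create notes.txt"))  # -> Done! File created.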

Examples

💬 Chatbots

Level File Description
🟢 Beginner examples/01_basic_chat.py Interactive streaming chat with persona traits (gender, personality, behavior)
🟡 Intermediate examples/05_vision_chat.py Real-time vision-enabled chat using qwen3-vl:2b and local camera
🟡 Intermediate examples/12_chat_loop_config.py Pre-configured chat loops for agents, nodes, orchestrators, and hybrid systems

🤖 AI Agents

Level File Description
🟡 Intermediate examples/02_custom_tools.py Custom @tool decorator + FileManagerTool with ReAct agent loop
🟡 Intermediate examples/09_intelligent_routing.py LLM-based intelligent agent routing with multiple specialized agents
🔴 Advanced examples/classroom_demo.py Multi-agent classroom orchestration with Professor and Student agents
🔴 Advanced examples/courtroom_demo.py Multi-agent courtroom simulation with Judge, Prosecutor, and Defender

๐Ÿ” RAG Systems

Level File Description
๐ŸŸก Intermediate examples/04_rag_chat.py RAG-enabled chat with document-based knowledge retrieval

๐ŸŒ Networking & Communication

Level File Description
๐Ÿ”ด Advanced examples/03_p2p_networking.py Multi-agent P2P messaging, authorization, and A2A file transfer
๐Ÿ”ด Advanced examples/07_node_agents_interactive.py Interactive Node-based chat system with multiple agents
๐Ÿ”ด Advanced examples/08_node_agents_demo.py Automated Node demonstration with resource management

๐Ÿ—๏ธ Architecture Examples

Level File Description
๐Ÿ”ด Advanced examples/classroom_demo.py Multi-agent classroom orchestration with Professor and Student agents
๐Ÿ”ด Advanced examples/courtroom_demo.py Multi-agent courtroom simulation with Judge, Prosecutor, and Defender

Run any example:

source venv/bin/activate
python examples/01_basic_chat.py

๐Ÿ—๏ธ Architecture Patterns

When to Use Node

Use Node when you need:

  • Distributed networks across multiple machines/locations
  • Resource management (GPU, memory, model cache)
  • Peer-to-peer communication between agents
  • Horizontal scalability by adding nodes
  • Edge computing with local processing
  • High availability with no single point of failure
  • Geographic distribution across regions
  • Multi-tenant systems with resource isolation

Don't use Node when:

  • Simple task coordination on a single machine
  • Quick prototyping without infrastructure setup
  • Stateless operations that don't need resource management
  • Team lacks distributed systems expertise

When to Use Orchestrator

Use Orchestrator when you need:

  • Task decomposition into manageable sub-tasks
  • Specialized agents with different skills
  • Result aggregation from multiple agents
  • Workflow coordination with clear hierarchy
  • Research/analysis tasks requiring multiple experts
  • Content creation workflows
  • Customer support routing
  • Multi-step workflows

Don't use Orchestrator when:

  • Flat peer structure with equal agents
  • Direct communication without mediation
  • Resource management is required
  • Distributed network across multiple machines

When to Use Hybrid (Node + Orchestrator)

Use Hybrid when you need:

  • Enterprise-scale systems with multiple teams
  • Distributed teams with local coordination
  • Resource-aware task execution
  • Complex distributed workflows
  • Maximum scalability and flexibility
  • Edge computing with central coordination
  • Multi-location with specialized teams

Decision Matrix:

| Scenario | Node | Orchestrator | Hybrid |
|---|---|---|---|
| Single machine, simple tasks | ❌ | ✅ | ❌ |
| Multiple machines, no coordination | ✅ | ❌ | ❌ |
| Single machine, complex workflows | ❌ | ✅ | ❌ |
| Multiple machines, complex workflows | ❌ | ❌ | ✅ |
| Resource management needed | ✅ | ❌ | ✅ |
| Task delegation needed | ❌ | ✅ | ✅ |
| Geographic distribution | ✅ | ❌ | ✅ |
| Enterprise-scale systems | ✅ | ❌ | ✅ |

📖 Full guide: Node vs Orchestrator, with 100+ use cases, a decision matrix, and real-world examples


๐ŸŒ Real-World Use Cases

Distributed Research Network

  • Multiple labs across different locations
  • Each lab manages its own resources (GPU clusters, specialized hardware)
  • Labs collaborate on research projects
  • Orchestrator within each lab coordinates local tasks

Smart City Traffic Management

  • Multiple districts with local coordination
  • Each district manages traffic cameras and sensors
  • Orchestrator coordinates traffic signals within district
  • Nodes share traffic data across districts

Multi-Location Customer Support

  • Support centers in different time zones
  • 24/7 coverage across time zones
  • Each center manages its own resources
  • Orchestrator routes tickets to appropriate specialist

Autonomous Vehicle Fleet

  • Each vehicle manages its own sensors and compute
  • Orchestrator coordinates navigation decisions
  • Nodes share traffic and road condition data
  • Resource management tracks battery, compute, sensors

Distributed Content Creation

  • Multiple teams (writing, design, video)
  • Each team manages its own tools and resources
  • Orchestrator coordinates content workflow
  • Nodes share assets and drafts

💡 Project Ideas

Beginner Projects

  1. Personal AI Assistant Network: Create a node with multiple specialized assistants (calendar, email, research, coding)
  2. Study Group Simulator: Simulate a study group with a professor and students using Orchestrator

Intermediate Projects

  1. Multi-Location News Network: Create a distributed news network with editorial teams in different locations
  2. E-commerce Support System: Build a distributed customer support system with specialized teams

Advanced Projects

  1. Distributed AI Research Lab: Create a research network with multiple labs, each with specialized equipment and expertise
  2. Smart Factory Automation: Build an automated factory system with multiple production lines

📖 Full project ideas: Node vs Orchestrator guide, with detailed code examples for each project


🔧 Core Components

Agent System

  • ReAct Loop: LLM reasons → picks a tool → sees result → iterates until final answer
  • Persona System: Configure gender, personality, and behavior traits
  • Tool Integration: 8+ pre-built tools with custom @tool decorator
  • Memory Management: Working, semantic, and episodic memory
  • Chat Loop Configs: Pre-configured chat loops for agents, nodes, orchestrators, and hybrid systems
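To illustrate the registration pattern behind a `@tool` decorator, here is a minimal sketch; `TOOL_REGISTRY` and this decorator's signature are assumptions for illustration, not DAIE's actual implementation:

```python
TOOL_REGISTRY = {}

def tool(name=None, description=""):
    """Register a plain function so an agent can look it up by name.
    (Illustrative only; DAIE's real decorator may take other arguments.)"""
    def wrapper(fn):
        TOOL_REGISTRY[name or fn.__name__] = {
            "fn": fn,
            "description": description or (fn.__doc__ or ""),
        }
        return fn
    return wrapper

@tool(description="Add two numbers")
def add(a: int, b: int) -> int:
    return a + b

result = TOOL_REGISTRY["add"]["fn"](2, 3)  # 5
```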

Multi-Agent Coordination

  • Orchestrator: Main agent coordinates sub-agents for complex tasks
  • HybridOrchestratorNode: Simplified hybrid setup combining Node + Orchestrator in one class
  • Agent Router: LLM-based intelligent routing for optimal agent selection
  • Task Delegation: Automatic task decomposition and result aggregation

Networking & Communication

  • P2P Networking: Direct agent-to-agent communication via WebSocket
  • Authentication: Token-based auth with sender whitelists
  • File Transfer: Secure A2A file transfer with Base64 encoding
  • Node Registry: Decentralized agent discovery
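The token-plus-whitelist check described above can be sketched with a constant-time comparison from the standard library; `authorize` and the message shape are assumptions for this guide, not DAIE's wire format:

```python
import hmac

def authorize(message: dict, expected_token: str, allowed_senders: set) -> bool:
    """Accept a message only if its token matches (constant-time compare)
    and its sender appears on the whitelist."""
    if not hmac.compare_digest(message.get("token", ""), expected_token):
        return False
    return message.get("sender") in allowed_senders

msg = {"sender": "alice", "token": "s3cret", "body": "hi"}
ok = authorize(msg, "s3cret", {"alice", "bob"})   # True
bad = authorize(msg, "wrong-token", {"alice"})    # False
```

`hmac.compare_digest` avoids leaking how many leading characters of the token were correct via timing.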

RAG System

  • TF-IDF Retrieval: Simple but effective document retrieval
  • Per-Agent Knowledge: Each agent can have its own knowledge base
  • Document Loading: Support for .txt, .pdf, .md files
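TF-IDF retrieval fits in a few lines of standard-library Python; this sketch (with a smoothed IDF) illustrates the idea only, and is not DAIE's internal retriever:

```python
import math
from collections import Counter

def tfidf_rank(query, docs):
    """Return document indices ranked by summed TF-IDF of the query terms."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    def idf(term):
        df = sum(term in doc for doc in tokenized)   # document frequency
        return math.log((n + 1) / (df + 1)) + 1      # smoothed IDF
    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        scores.append(sum((tf[t] / len(doc)) * idf(t) for t in query.lower().split()))
    return sorted(range(n), key=lambda i: scores[i], reverse=True)

docs = [
    "agents communicate over p2p networks",
    "tf idf retrieval ranks documents",
    "ollama runs models locally",
]
best = tfidf_rank("document retrieval", docs)[0]  # index of the best match
```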

LLM Support

  • Multi-Provider: Ollama (default), OpenAI, Anthropic, Google, Azure, OpenRouter
  • Streaming: Library-level streaming for real-time responses
  • Per-Agent Overrides: Each agent can have its own temperature and max_tokens
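Per-agent overrides typically layer a partial config over shared defaults; `LLMSettings` and `effective_settings` below are hypothetical names used only to illustrate the merge, not DAIE's API:

```python
from dataclasses import dataclass, asdict

@dataclass
class LLMSettings:
    model: str = "llama3.2:1b"
    temperature: float = 0.7
    max_tokens: int = 1024

def effective_settings(defaults: LLMSettings, overrides: dict) -> LLMSettings:
    """Layer a partial per-agent override dict over the shared defaults."""
    merged = asdict(defaults)
    merged.update({k: v for k, v in overrides.items() if v is not None})
    return LLMSettings(**merged)

settings = effective_settings(LLMSettings(), {"temperature": 0.2})
# settings.temperature is overridden; model and max_tokens keep their defaults
```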

📊 Performance & Scalability

Performance Ratings

| Component | Rating | Notes |
|---|---|---|
| Setup Time | ⭐⭐⭐⭐⭐ | Quick to get started |
| Scalability | ⭐⭐⭐⭐⭐ | Horizontal (nodes) + vertical (sub-agents) |
| Resource Efficiency | ⭐⭐⭐⭐⭐ | Built-in resource tracking per node |
| Communication Speed | ⭐⭐⭐⭐ | Direct P2P + A2A messaging |
| Fault Tolerance | ⭐⭐⭐⭐ | Distributed + orchestrator backup |
| Complexity | ⭐⭐⭐ | Moderate learning curve |

Scalability Features

  • Horizontal Scaling: Add nodes to increase capacity
  • Vertical Scaling: Add sub-agents to orchestrators
  • Load Distribution: Distribute work across multiple machines
  • Resource Isolation: Separate resources per node
  • Geographic Distribution: Deploy across multiple regions

Optimization Tips

  1. Use streaming for real-time responses
  2. Enable RAG for context-aware answers
  3. Configure personas for better agent behavior
  4. Use AgentRouter for intelligent task routing
  5. Deploy nodes close to data sources
  6. Monitor resources per node
  7. Implement health checks for node status

🔒 Security Features

  • Authentication: Token-based auth for agent connections
  • Authorization: Sender whitelists for message filtering
  • Encryption: Built-in encryption utilities
  • Secure File Transfer: A2A file transfers use Base64 transport encoding (pair with the encryption utilities for confidentiality; Base64 alone is not encryption)
  • Resource Isolation: Per-node resource isolation
  • Access Control: Per-node and per-agent access control
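For A2A transfers, Base64 turns raw bytes into a JSON-safe string. A minimal round-trip sketch follows; the envelope shape is an assumption for illustration, not DAIE's actual protocol:

```python
import base64
import json

def pack_file(name: str, data: bytes) -> str:
    """Wrap raw bytes in a JSON-safe envelope. Base64 makes the payload
    text-transportable; it does NOT provide confidentiality."""
    return json.dumps({"name": name, "data": base64.b64encode(data).decode("ascii")})

def unpack_file(envelope: str) -> tuple:
    msg = json.loads(envelope)
    return msg["name"], base64.b64decode(msg["data"])

name, data = unpack_file(pack_file("notes.txt", b"hello"))  # round-trips intact
```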

๐Ÿ› ๏ธ Developer Experience

Easy Setup

pip install daie

Simple API

import asyncio

from daie import Agent, AgentConfig, set_llm

set_llm(ollama_llm="llama3.2:1b", stream=True)

async def main():
    agent = Agent(config=AgentConfig(name="Alex", personality="helpful"))
    await agent.start()
    response = await agent.send_message("Hello!")
    print(response)

asyncio.run(main())

Pre-configured Chat Loops

from daie import Agent, AgentConfig
from daie.chat import ChatLoopConfig

agent = Agent(config=AgentConfig(name="LUNA", personality="friendly"))
chat_loop = ChatLoopConfig(agent=agent)
chat_loop.run()  # Start interactive chat with minimal code!

Comprehensive Documentation

Testing

# Run all tests
pytest tests/

# Run specific test file
pytest tests/test_agents.py

# Run with coverage
pytest --cov=src/daie tests/

Development

git clone https://github.com/kanishkkumarsingh2004/DAIE.git
cd DAIE
python -m venv venv
source venv/bin/activate
pip install -e ".[dev]"

# Run tests
pytest tests/

# Run example chat loop
python examples/01_basic_chat.py

Troubleshooting

| Problem | Fix |
|---|---|
| Could not connect to Ollama | Run `ollama serve` and pull a model: `ollama pull wizard-vicuna-uncensored:7b` |
| ModuleNotFoundError: cv2 | `pip install opencv-python` |
| ModuleNotFoundError: pyaudio | `pip install pyaudio` |
| Agent not responding | Call `await agent.start()` before `execute_task()` |
| Task timeout | Increase `task_timeout` in `AgentConfig` |
| LLM returns plain text instead of JSON | Normal: the agent treats plain text as a final answer |
| `execute_task` takes 30–60 s on first call | The local LLM model is loading into memory; subsequent calls are faster |
| "Failed to load registry" warning | Ensure `node_registry.json` contains valid JSON (not empty) |
| Persona traits not applied | Verify `gender`, `personality`, or `behavior` are set in `AgentConfig` |
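For the registry warning, a quick standard-library check can tell you whether `node_registry.json` is missing, empty, or malformed; `check_registry` is a diagnostic helper written for this guide, not part of DAIE:

```python
import json
import os
import tempfile
from pathlib import Path

def check_registry(path="node_registry.json"):
    """Return the parsed registry, or a string explaining why loading
    would fail (missing, empty, or invalid JSON)."""
    p = Path(path)
    if not p.exists():
        return "missing file"
    text = p.read_text().strip()
    if not text:
        return "empty file"
    try:
        return json.loads(text)
    except json.JSONDecodeError as exc:
        return f"invalid JSON: {exc}"

# Demo against a throwaway file
tmp = tempfile.NamedTemporaryFile("w", suffix=".json", delete=False)
tmp.write('{"nodes": []}')
tmp.close()
result = check_registry(tmp.name)  # parses cleanly
os.unlink(tmp.name)
```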

Current Status

✅ Production Ready

DAIE is a mature, production-ready framework with comprehensive features:

  • Core Framework: Fully implemented and tested
  • Agent System: Complete with ReAct loop, personas, and tool integration
  • Multi-Agent Orchestration: Orchestrator pattern for complex task coordination
  • Intelligent Routing: LLM-based agent selection with AgentRouter
  • P2P Networking: Full peer-to-peer communication with authentication
  • RAG System: TF-IDF based retrieval with per-agent knowledge bases
  • Tools: 8+ pre-built tools with custom @tool decorator support
  • Memory Management: Working, semantic, and episodic memory systems
  • CLI: Complete command-line interface for agent and system management
  • Documentation: Comprehensive docs with examples and guides
  • Chat Loop Configs: Pre-configured chat loops for agents, nodes, orchestrators, and hybrid systems

📊 Test Coverage

  • Unit Tests: 20+ test files covering all major components
  • Integration Tests: End-to-end testing for multi-agent scenarios
  • Example Tests: All examples have corresponding test coverage

🔧 Recent Improvements

  • Enhanced Node vs Orchestrator documentation with 100+ use cases
  • Added decision matrix for architecture selection
  • Expanded "When to Use" guides with detailed scenarios
  • Improved error handling and logging throughout

๐Ÿค Community & Support

Getting Help

  • Documentation: Comprehensive docs in the docs folder
  • Examples: Working code examples in the examples folder
  • Issues: Report bugs and request features on GitHub
  • Discussions: Join community discussions

Contributing

We welcome contributions! Here's how to get started:

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Make your changes and add tests
  4. Run tests: pytest tests/
  5. Commit your changes: git commit -m 'Add amazing feature'
  6. Push to the branch: git push origin feature/amazing-feature
  7. Open a Pull Request

Development Setup

git clone https://github.com/kanishkkumarsingh2004/DAIE.git
cd DAIE
python -m venv venv
source venv/bin/activate
pip install -e ".[dev]"
pytest tests/

Code Style

  • Formatter: Black
  • Linter: Flake8
  • Type Checker: MyPy
  • Tests: pytest with pytest-asyncio

📚 Learning Resources

Tutorials

  1. Getting Started: docs/getting-started.md
  2. Building Your First Agent: examples/01_basic_chat.py
  3. Adding Tools: examples/02_custom_tools.py
  4. P2P Networking: examples/03_p2p_networking.py
  5. Multi-Agent Orchestration: examples/classroom_demo.py
  6. Pre-configured Chat Loops: examples/12_chat_loop_config.py

Architecture Guides

Video Tutorials

  • Coming soon!

📊 Statistics

  • Lines of Code: 10,000+
  • Test Files: 20+
  • Examples: 10+
  • Documentation Pages: 15+
  • Supported LLM Providers: 6 (Ollama, OpenAI, Anthropic, Google, Azure, OpenRouter)
  • Pre-built Tools: 8+
  • Architecture Patterns: 3 (Node, Orchestrator, Hybrid)
  • Chat Loop Configs: 4 (ChatLoopConfig, NodeChatConfig, OrchestratorChatConfig, HybridChatConfig)

๐Ÿ™ Acknowledgments

  • Ollama for local LLM support
  • LangChain for inspiration
  • FastAPI for HTTP server
  • Pydantic for data validation
  • Rich for beautiful terminal output
  • Typer for CLI framework

License

MIT; see LICENSE

Author

Built by Kanishk Kumar Singh (kanishkkumar2004@gmail.com)


โญ Star History

If you find DAIE useful, please give it a star on GitHub! It helps others discover the project.


Download files


Source Distribution

daie-1.0.5.tar.gz (187.4 kB view details)

Uploaded Source

Built Distribution


daie-1.0.5-py3-none-any.whl (156.9 kB view details)

Uploaded Python 3

File details

Details for the file daie-1.0.5.tar.gz.

File metadata

  • Download URL: daie-1.0.5.tar.gz
  • Upload date:
  • Size: 187.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

Hashes for daie-1.0.5.tar.gz
Algorithm Hash digest
SHA256 9a8a706701e1d9f208bb4098c01f97f1ef0d580d7fa12e55f8cf2f5fec00bc94
MD5 151a3fcde28382f2454668ef031a2249
BLAKE2b-256 f8ce5ba3740db3a83d71d19dbb35f46c18af8e3eb2865d21678d0ded51b2431c


File details

Details for the file daie-1.0.5-py3-none-any.whl.

File metadata

  • Download URL: daie-1.0.5-py3-none-any.whl
  • Upload date:
  • Size: 156.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

Hashes for daie-1.0.5-py3-none-any.whl
Algorithm Hash digest
SHA256 c9c904eb002d32c8b981c613382b49210a907b216bbd483abaf6b4a78a875ee1
MD5 02ea068662f1d482c93a8fd6ede888c2
BLAKE2b-256 95e5fbd89da4d2939be4571f296e74cb3183944fbb0ee9da78660be965179cba

