A professional, production-ready Python library for building autonomous AI agents with tool use, multi-agent orchestration, P2P networking, and persistent memory
🌐 DAIE – Decentralized AI Ecosystem
Build autonomous AI agents that reason, use tools, communicate over P2P networks, and stream responses – powered by any LLM
The lightweight, offline-first alternative to LangChain for building production-ready AI agents
Why DAIE?
| Feature | DAIE | LangChain | CrewAI |
|---|---|---|---|
| Offline-first | ✅ Full Ollama support | ❌ Cloud-dependent | ❌ Cloud-dependent |
| P2P Networking | ✅ Decentralized / No Central Server | ❌ No | ❌ No |
| Parallel Execution | ✅ Smart Concurrency Controller | ⚠️ Complex/Manual | ⚠️ Basic |
| Persistent Memory | ✅ SQLite/Vector + Shared Namespaces | ⚠️ Manual Storage | ⚠️ Limited |
| Peer Review | ✅ MoA / Parliament Consensus | ❌ No | ❌ No |
| Intelligent Routing | ✅ Content-aware Agent Selection | ❌ No | ❌ No |
| Temporal Context | ✅ Native Date/Time Awareness | ❌ No | ❌ No |
| Agent Personas | ✅ Gender, Personality, Behavior | ⚠️ Limited | ⚠️ Limited |
| File Transfer | ✅ A2A Secure Network Transfer | ❌ No | ❌ No |
| Vision Support | ✅ Camera + Vision Models | ⚠️ Limited | ❌ No |
| Streaming | ✅ Library-level (tokens as they arrive) | ⚠️ Per-call | ❌ No |
| Dependencies | 🪶 Ultra-lightweight (Zero-dep Core) | 📦📦📦 Heavy | 📦📦 Medium |
DAIE is for you if you want:
- 🏠 Offline-first AI – Run everything locally with Ollama/Llama.cpp, no API keys required.
- 🌐 Decentralized Intelligence – Agents communicate directly over P2P networks; build a global "Living LLM" tapestry.
- ⚡ Parallel Efficiency – True parallel agent execution with smart concurrency limits to protect your GPU/VRAM.
- 🧠 Collective Intelligence – Shared persistent memory namespaces let agent clusters learn and evolve together.
- 🏛️ Unbreakable Reasoning – Use the Parliament Architecture to pitch specialists against each other for peer-reviewed consensus.
- 🎭 Rich Personality – Go beyond "helpful assistant" with deeply configured personas, traits, and behavioral guardrails.
- 👁️ Visual Awareness – Direct camera integration and vision model support out of the box.
- 💬 Ready-to-Ship Loops – Professional chat interfaces for individual agents, nodes, and hybrid multi-agent systems.
🏗️ Architecture
┌───────────────────────────────────────────────────────────────┐
│                       USER / APPLICATION                      │
└───────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌───────────────────────────────────────────────────────────────┐
│                         DAIE FRAMEWORK                        │
│   ┌─────────────┐    ┌─────────────┐    ┌─────────────┐       │
│   │   AGENTS    │    │    TOOLS    │    │   MEMORY    │       │
│   │ • ReAct     │    │ • File      │    │ • Working   │       │
│   │ • Persona   │    │ • API       │    │ • Semantic  │       │
│   │ • Config    │    │ • Selenium  │    │ • Episodic  │       │
│   └─────────────┘    └─────────────┘    └─────────────┘       │
│          │                  │                  │              │
│          ▼                  ▼                  ▼              │
│   ┌───────────────────────────────────────────────────────┐   │
│   │       ORCHESTRATOR & PARLIAMENT (Multi-Agent)         │   │
│   │   • Task delegation     • Peer-Review & Consensus     │   │
│   └───────────────────────────────────────────────────────┘   │
│                             │                                 │
│                             ▼                                 │
│   ┌───────────────────────────────────────────────────────┐   │
│   │              AGENT ROUTER (Intelligent)               │   │
│   │   • LLM-based routing         • Content analysis      │   │
│   │   • Dynamic agent selection   • Routing history       │   │
│   └───────────────────────────────────────────────────────┘   │
│                             │                                 │
│                             ▼                                 │
│   ┌───────────────────────────────────────────────────────┐   │
│   │                COMMUNICATION MANAGER                  │   │
│   │   • P2P networking  • WebSocket Support  • Auth       │   │
│   └───────────────────────────────────────────────────────┘   │
│                             │                                 │
│                             ▼                                 │
│   ┌───────────────────────────────────────────────────────┐   │
│   │                     LLM MANAGER                       │   │
│   │   • Ollama • OpenAI • Anthropic • Google • Azure      │   │
│   └───────────────────────────────────────────────────────┘   │
└───────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌───────────────────────────────────────────────────────────────┐
│                      RAG ENGINE (TF-IDF)                      │
│  • Document loading  • Context retrieval  • Knowledge base    │
└───────────────────────────────────────────────────────────────┘
Features
🤖 AI Agents
- ReAct agent loop – the LLM reasons → picks a tool → sees the result → iterates until it gives a final answer
- Parliament Architecture – a "Mixture-of-Agents" peer-review mechanism with configurable `max_review_rounds` and dynamic TF-IDF early stopping to reach Pydantic-enforced consensus.
- Distributed/P2P Parliament – a Parliament can span multiple physical nodes; agents on different machines participate in deliberation via broadcast-based signaling.
- Hybrid Parliament Orchestration – plans abstract high-level tasks in the parliament, then pushes the resulting roadmap into an Orchestrator for execution.
- Multi-Agent Orchestration – coordinate main agents and sub-agents for complex goals (e.g. Research Lab, Courtroom)
- Intelligent Agent Router – LLM-based routing that automatically selects the best agent for each message based on content analysis
- Agent persona – configure `gender`, `personality`, and `behavior` traits injected directly into the LLM prompt
- Per-agent LLM overrides – each agent can have its own `temperature` and `max_tokens`
- Real-Time Temporal Awareness – all agents know the current date, time, and timezone via automatic prompt injection.
- Parallel Execution Layer – smart concurrency management via `ParallelExecutor`, which uses native thread locks to run local agents in parallel without triggering GPU out-of-memory faults.
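The concurrency idea behind the parallel execution layer can be illustrated with plain `asyncio` (this is a sketch of the principle, not the actual `ParallelExecutor` API):

```python
import asyncio

async def run_agents_limited(tasks, max_concurrent=2):
    # A semaphore caps how many agent coroutines run at once,
    # so a single GPU is never hit by too many model calls in parallel.
    sem = asyncio.Semaphore(max_concurrent)

    async def guarded(coro):
        async with sem:
            return await coro

    # gather() preserves input order in its results
    return await asyncio.gather(*(guarded(t) for t in tasks))

async def fake_agent(name, delay=0.01):
    # Stand-in for a real agent's execute_task call
    await asyncio.sleep(delay)
    return f"{name}: done"

results = asyncio.run(run_agents_limited([fake_agent(f"agent{i}") for i in range(4)]))
```

With `max_concurrent=2`, at most two "agents" run simultaneously while the rest wait their turn.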
🧠 Persistent Memory
- Persistent Storage – the `SQLiteStorage` backend ensures agent memories survive restarts.
- Shared Memory Namespaces – agents in an `Orchestrator` or `Parliament` can share a unified memory context for collective intelligence.
- Auto-Summarization – episodic memories are automatically condensed into long-term summaries by the LLM to keep the working context lean.
- Memory Snapshots – manual `save_memory()` and `load_memory()` methods for versioned agent states.
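Conceptually, a memory snapshot is just serialized agent state written to disk. A minimal sketch of the idea (illustrative JSON format, not DAIE's actual storage schema):

```python
import json
import os
import tempfile

def save_memory(path, memories):
    # Persist a list of memory records as JSON so they survive restarts.
    with open(path, "w", encoding="utf-8") as f:
        json.dump(memories, f, indent=2)

def load_memory(path):
    # Restore the records exactly as they were saved.
    with open(path, encoding="utf-8") as f:
        return json.load(f)

path = os.path.join(tempfile.gettempdir(), "agent_memory_demo.json")
save_memory(path, [{"role": "user", "content": "hi"}])
restored = load_memory(path)
```

Versioned states then amount to saving under different file names per snapshot.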
📚 RAG Systems
- TF-IDF RAG Engine – document loading, context retrieval, and per-agent knowledge bases
⚙️ Automation Tools
- Hardened Tools – secure sandbox-ready code execution, robust web search (DuckDuckGo + Tavily), and modern Playwright browser automation.
- Pre-built tools – file system, HTTP API calls, SQLite/PostgreSQL database access, Selenium & Playwright browser automation.
- Custom tools – decorate any function with `@tool` and it works identically to built-in tools
- A2A file transfer – securely send files between agents over the network using Base64 encoding
💬 Chatbots & Vision
- Streaming tokens – set `stream=True` once and tokens print as they arrive
- Vision Capabilities – support for vision models (e.g. `qwen3-vl:2b`) with camera integration
- Camera & audio – optional OpenCV camera capture and PyAudio microphone/speaker support
- Chat Loop Configs – pre-configured chat loops for agents, nodes, orchestrators, and hybrid systems
🌐 Networking & Communication
- P2P networking – agents communicate across machines via WebSocket with authentication & authorization
- WebSocket Support – real-time bidirectional communication
- Multi-provider LLM – Ollama (default), OpenAI, Anthropic, Google, Azure, OpenRouter
🛠️ Developer Tools
- CLI – manage agents and the core system from the terminal
Documentation
For detailed documentation, see the docs folder:
🚀 Getting Started
- Getting Started – Installation, quick start, and basic concepts
🤖 AI Agents
- Agents – Agent creation, configuration, and the ReAct loop
- Parliament – Mixture-of-Agents deliberation, peer review, and consensus synthesis
- Orchestrator – Multi-agent coordination and task delegation
- Hybrid Pipeline – Sequential strategic planning (Parliament) and task execution (Orchestrator)
- Memory – Agent memory management (working, semantic, episodic)
- Agent Router – LLM-based intelligent agent routing
📚 RAG Systems
- RAG – Retrieval-Augmented Generation with TF-IDF
⚙️ Automation Tools
- Tools – Pre-built tools, custom tools, and the @tool decorator
🌐 Networking & Communication
- P2P Networking – Peer-to-peer communication protocol for agents
- Network Configuration – Detailed guide on `network_url` and `network_connections`
- Node – Node abstraction for managing agents and resources
- Orchestrator – Multi-agent coordination and task delegation
- Node vs Orchestrator – Complete comparison guide with 100+ use cases and a decision matrix
- Communication – P2P networking, messaging, and file transfers
- LLM Configuration – Multi-provider LLM setup and streaming
- Chat Configs – Pre-configured chat loops for agents, nodes, orchestrators, and hybrid systems
🏗️ Architecture Patterns
- Parliament Pattern: Strict peer-reviewed consensus engine with structured Pydantic outputs and dynamic early stopping, dramatically reducing hallucination rates.
- Node Architecture: Distributed infrastructure for multi-location systems.
- Orchestrator Pattern: Hierarchical workflow coordination for complex tasks.
- Hybrid Architecture: Combine Node + Orchestrator for enterprise-scale systems.
- HybridOrchestratorNode: Simplified hybrid setup combining Node + Orchestrator in one class.
- HybridParliamentOrchestrator: Combines the Parliament's strategic planning with the Orchestrator's execution delegation.
- Agent Router: LLM-based intelligent routing for optimal agent selection.
🛠️ Developer Tools
- CLI – Command-line interface for agent and system management
- Utils – Camera, audio, encryption, and utility functions
- API Reference – Complete API reference for all modules
⚡ Quick Start (30 seconds)
# 1. Install DAIE
pip install daie
# 2. Install Ollama (local LLM)
curl -fsSL https://ollama.ai/install.sh | sh
ollama pull llama3.2:1b
# 3. Run your first agent
python -c "
import asyncio
from daie import Agent, AgentConfig, set_llm

set_llm(ollama_llm='llama3.2:1b', stream=True)

async def main():
    agent = Agent(config=AgentConfig(name='Alex', personality='helpful and witty'))
    await agent.start()
    response = await agent.send_message('Hello! What can you do?')
    await agent.stop()

asyncio.run(main())
"
That's it! You now have a working AI agent in 30 seconds.
🎯 Real Output Example
$ python examples/01_basic_chat.py
=== Basic Chat Loop ===
Type 'exit' or press Ctrl+C to quit.
You: What's the weather like today?
LUNA: 🌤️ Hey there! I'd love to help with the weather, but I don't have access to real-time data. However, I can tell you that I'm feeling sunny and energetic today! ☀️
If you want actual weather info, you could:
1. Ask me to search the web using my browser tool
2. Tell me your location and I'll look it up
3. Just chat with me about anything else!
What would you like to do? 😊
You: Search for weather in San Francisco
LUNA: 🌐 Connecting to the globe! Let me find that for you.
[Using tool: web_search]
[Querying DuckDuckGo & Tavily fallback...]
[Search completed in 0.8s]
LUNA: 📍 Here's the current situation in San Francisco:
- Temperature: 62°F (17°C)
- Conditions: Partly cloudy
- Humidity: 75%
- Wind: 12 mph from the west
Perfect weather for a walk across the Golden Gate Bridge! 🌉
Installation
pip install daie
Optional extras:
pip install "daie[dev]" # pytest, black, mypy, flake8, pytest-asyncio, pytest-cov
pip install "daie[docs]" # sphinx, sphinx-rtd-theme, nbsphinx
Requires Python 3.10+
Core dependencies: pyyaml, selenium, webdriver-manager, uvicorn, websockets, nats-py, pyaudio, zeroconf, kademlia, numpy, pydantic, pydantic-settings
> [!TIP]
> **Zero-Dependency Philosophy:** DAIE includes in-house, lightweight replacements for `requests`, `python-dotenv`, `rich`, `typer`, and `cryptography` to keep the core footprint minimal and avoid dependency hell.
Quick Start
1. Simple streaming chat with persona
import asyncio
from daie import Agent, AgentConfig, set_llm
from daie.agents import AgentRole
set_llm(ollama_llm="wizard-vicuna-uncensored:7b", stream=True)
async def main():
    agent = Agent(config=AgentConfig(
        name="Alex",
        role=AgentRole.GENERAL_PURPOSE,
        system_prompt="You are a helpful and concise AI assistant.",
        gender="female",
        personality="sassy, witty, and very direct",
        behavior="always uses emojis and speaks enthusiastically",
        temperature=0.9,
        max_tokens=1024,
    ))
    await agent.start()

    print("=== Chat Loop ===")
    print("Type 'exit' to quit.\n")
    while True:
        try:
            user_input = input("You: ")
            if user_input.lower() in ("exit", "quit"):
                break
        except (KeyboardInterrupt, EOFError):
            print("\nExiting...")
            break
        response = await agent.send_message(user_input)
        print("\n")

    await agent.stop()

asyncio.run(main())
2. Agent with tools (ReAct loop)
import asyncio
from daie import Agent, AgentConfig, set_llm
from daie.agents import AgentRole
from daie.tools import FileManagerTool, APICallTool, tool
set_llm(ollama_llm="llama3.2:1b", stream=True)
# Custom tool via decorator
@tool(name="calculate_math", description="Evaluate a basic math expression.")
async def calculate_math(expression: str) -> str:
    return str(eval(expression))  # fine for a demo; prefer a safe parser in production

async def main():
    agent = Agent(config=AgentConfig(
        name="MathBot",
        role=AgentRole.GENERAL_PURPOSE,
        system_prompt="You are a capable agent with access to math and file tools.",
    ))
    agent.add_tool(calculate_math)
    agent.add_tool(FileManagerTool())
    await agent.start()

    # The LLM autonomously picks the right tools via the ReAct loop
    result = await agent.execute_task(
        "Calculate 25 * 14 and save the result into a file called result.txt"
    )
    print("Final Answer:", result)
    await agent.stop()

asyncio.run(main())
3. P2P multi-agent networking & file transfer
import asyncio
from daie import Agent, AgentConfig, set_llm
from daie.agents import AgentRole
from daie.communication import CommunicationManager
from daie.agents.message import AgentMessage
set_llm(ollama_llm="wizard-vicuna-uncensored:7b")
async def main():
    # Shared communication bus
    comm = CommunicationManager()
    await comm.start()

    # Agent 1
    agent1 = Agent(config=AgentConfig(
        name="NodeAlfa",
        role=AgentRole.GENERAL_PURPOSE,
        network_url="ws://localhost:8000",
    ))
    await agent1.start(communication_manager=comm)

    # Agent 2 (with auth + file transfers)
    agent2 = Agent(config=AgentConfig(
        name="NodeBravo",
        role=AgentRole.GENERAL_PURPOSE,
        network_url="ws://localhost:8001",
        auth_token="secure_token_123",
        allow_file_transfers=True,
    ))
    await agent2.start(communication_manager=comm)

    # Send a direct message
    msg = AgentMessage(
        sender_id=agent1.id,
        receiver_id=agent2.id,
        content="Hello from NodeAlfa!",
        message_type="text",
    )
    await comm.send_message(msg)

    # A2A file transfer
    file_tool = agent1.get_tool("a2a_send_file")
    if file_tool:
        await file_tool._execute({
            "receiver_id": agent2.id,
            "file_path": "payload.txt",
            "message": "Secure payload!",
        })

    await agent1.stop()
    await agent2.stop()
    await comm.stop()

asyncio.run(main())
4. Multi-Agent Orchestration
The Orchestrator allows a main agent to coordinate multiple sub-agents to solve complex problems.
from daie import Agent, AgentConfig, Orchestrator
from daie.agents import AgentRole
Professor = Agent(config=AgentConfig(name="Professor", role=AgentRole.COORDINATOR))
Nova = Agent(config=AgentConfig(name="NOVA", goal="Handle technical research"))
orchestrator = Orchestrator(
    main_agent=Professor,
    sub_agents=[Nova],
    context_name="research_lab"
)
# inside an async function:
await orchestrator.start()
response = await orchestrator.execute_task("Research decentralized consensus")
5. Decentralized RAG
Agents can maintain independent knowledge bases using simple directory-based RAG.
config = AgentConfig(
    name="Expert",
    rag_document_path="data/expert_knowledge/"  # local folder with .txt, .pdf, .md files
)
agent = Agent(config=config)
# The agent will automatically retrieve relevant context before answering
6. Intelligent Agent Routing
The AgentRouter uses LLM to automatically select the best agent for each message based on content analysis.
import asyncio
from daie import Agent, AgentConfig, set_llm
from daie.agents import AgentRole, AgentRouter
set_llm(ollama_llm="llama3.2:1b", stream=True)
async def main():
    # Create specialized agents
    assistant = Agent(config=AgentConfig(
        name="Assistant",
        role=AgentRole.GENERAL_PURPOSE,
        system_prompt="You are a helpful general-purpose assistant.",
    ))
    coder = Agent(config=AgentConfig(
        name="Coder",
        role=AgentRole.SPECIALIZED,
        system_prompt="You are an expert programmer. Write clean, efficient code.",
    ))
    researcher = Agent(config=AgentConfig(
        name="Researcher",
        role=AgentRole.SPECIALIZED,
        system_prompt="You are a research specialist. Analyze and summarize information.",
    ))

    # Create a router from the agents list
    router = AgentRouter.from_agents([assistant, coder, researcher])

    # The router automatically selects the best agent
    agent_type = await router.route("Write a Python function to sort a list")
    # Returns: "coder"

    agent_type = await router.route("Explain quantum computing")
    # Returns: "researcher"

    agent_type = await router.route("What's the weather like?")
    # Returns: "assistant"

    # Get routing history
    history = router.get_routing_history()
    print(f"Routed {len(history)} messages")

asyncio.run(main())
7. Parliament Deliberation
The Parliament lets you pitch specialists against one another to peer-review answers iteratively until they converge on a verified consensus answer, halting early (via TF-IDF similarity checks) to save cost when they naturally agree.
from daie import Agent, AgentConfig
from daie.agents import Parliament, AgentRole
from daie.chat import ParliamentChatConfig
coder = Agent(config=AgentConfig(name="Coder", role=AgentRole.SOFTWARE_ENGINEER))
auditor = Agent(config=AgentConfig(name="Security", role=AgentRole.SECURITY_AUDITOR))
manager = Agent(config=AgentConfig(name="Manager", role=AgentRole.GENERAL_PURPOSE))
# Construct a parliament with a maximum of 3 review rounds
parliament = Parliament(sub_agents=[coder, auditor, manager], max_review_rounds=3)

# Start an interactive discussion using the pre-configured chat loop
config = ParliamentChatConfig(parliament=parliament)
config.run()
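The early-stopping idea can be sketched in plain Python. DAIE's actual check is TF-IDF based; the bag-of-words version below (a hypothetical helper, not the library API) shows the same principle: stop reviewing once the specialists' answers converge.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    # Bag-of-words cosine similarity between two answers.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def should_stop(answers, threshold=0.9):
    # Halt the review loop when every pair of answers is near-identical.
    return all(
        cosine_similarity(x, y) >= threshold
        for i, x in enumerate(answers)
        for y in answers[i + 1:]
    )
```

A real TF-IDF check additionally down-weights common words, but the convergence test is the same.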
8. Hybrid Parliament Orchestration
Connects abstract parliamentary debate to concrete execution via the HybridParliamentOrchestrator: sub-agents outline a roadmap, and the orchestrator dynamically delegates the resulting tasks.
from daie.agents import OrchestratorAgent, HybridParliamentOrchestrator
from daie.chat import HybridParliamentChatConfig
# Re-use our abstract parliament setup from above
orchestrator = OrchestratorAgent()
hybrid_pipeline = HybridParliamentOrchestrator(
    parliament=parliament,
    orchestrator=orchestrator,
    min_confidence_threshold=60.0,  # failsafe: abort when plan confidence is too low
)
config = HybridParliamentChatConfig(hybrid_pipeline=hybrid_pipeline)
config.run()
9. Full-Power One-File Demo (Orchestrator + Tools + Guardrails)
Copy this into a single file (e.g., demo.py) and run it to see the full architecture in action.
import asyncio
from daie import Agent, AgentConfig, Orchestrator, set_llm
from daie.agents import AgentRole
from daie.tools import FileManagerTool, APICallTool, tool
# 1. Setup - Local LLM with streaming enabled
set_llm(ollama_llm="llama3.2:1b", stream=True)
# 2. Define a custom tool for the agents to use
@tool(name="code_executor", description="Executes snippets of Python code safely.")
async def execute_code(code: str) -> str:
    # In a real app, use a sandbox!
    return f"Code executed successfully. Output: [Simulated result for {len(code)} chars]"

async def main():
    print("🚀 Initializing Decentralized AI Ecosystem Demo...")

    # 3. Create a specialized Researcher agent
    researcher = Agent(config=AgentConfig(
        name="Researcher",
        role=AgentRole.SPECIALIZED,
        goal="Gather and summarize technical information",
        rag_document_path="docs/",  # optional: local knowledge base
    ))

    # 4. Create a specialized Coder agent with guardrails
    coder = Agent(config=AgentConfig(
        name="Coder",
        role=AgentRole.SPECIALIZED,
        goal="Write and verify optimized Python code",
        max_tokens_per_task=2000,   # production guardrail
        max_tool_calls_per_task=5,  # production guardrail
    ))
    coder.add_tool(execute_code)
    coder.add_tool(FileManagerTool())

    # 5. Create the Orchestrator to coordinate them;
    #    the Coordinator agent manages the sub-agents autonomously
    boss = Agent(config=AgentConfig(name="Boss", role=AgentRole.COORDINATOR))
    system = Orchestrator(
        main_agent=boss,
        sub_agents=[researcher, coder],
        context_name="SoftwareDevelopmentLab",
    )

    # 6. Start the system (lifecycle management is mandatory)
    await system.start()
    print("\n--- System is online. Executing high-level task ---")

    # 7. Execute a complex multi-step task
    task = "Research how to implement uuid7 in Python, then write a sample script and save it to 'uuid_sample.py'."
    result = await system.execute_task(task)

    print("\n--- Task Complete ---")
    print(f"Final Outcome:\n{result}")

    # 8. Shut down cleanly
    await system.stop()

if __name__ == "__main__":
    asyncio.run(main())
10. Chat Loop Config (Pre-configured Chat Loops)
The daie.chat module provides pre-configured chat loop setups so you don't need to write the full boilerplate code. Simply configure and run!
from daie import Agent, AgentConfig, set_llm
from daie.chat import ChatLoopConfig
set_llm(ollama_llm="llama3.2:1b", stream=True)
# Create your agent
agent = Agent(config=AgentConfig(
    name="LUNA",
    system_prompt="You are a helpful AI assistant.",
    personality="friendly and helpful"
))
# Run the chat loop with minimal code!
chat_loop = ChatLoopConfig(agent=agent)
chat_loop.run()
Available Chat Loop Configs:
| Config | Target | Use Case |
|---|---|---|
| `ChatLoopConfig` | Simple Agent | Basic chat with an agent |
| `NodeChatConfig` | Single Node | Advanced chat with orchestrator and sub-agents |
| `OrchestratorChatConfig` | Multi-Node System | Multi-node collaboration and task execution |
| `HybridChatConfig` | Hybrid System | Simple chat with hybrid systems |
📖 Full guide: Chat Configs – Complete documentation for all chat loop configurations
Agent Configuration
from daie.agents.config import AgentConfig, AgentRole
config = AgentConfig(
    name="MyAgent",                     # e.g. ALEX, NOVA, BOB
    role=AgentRole.GENERAL_PURPOSE,     # or SPECIALIZED, COORDINATOR, WORKER, ANALYZER, EXECUTOR
    goal="Help users with tasks",
    backstory="A capable AI assistant",
    system_prompt="You are a helpful assistant.",

    # Persona traits (automatically injected into LLM prompts)
    gender="female",                                # Literal["male", "female"] or None
    personality="sarcastic, witty, very direct",    # free-form string
    behavior="always starts sentences with Hmm",    # free-form string

    # Temporal context
    include_datetime=True,              # inject current date/time context

    # Per-agent LLM overrides (take priority over global set_llm settings)
    temperature=0.7,
    max_tokens=1000,

    # Task settings
    task_timeout=30,                    # seconds before execute_task times out

    # P2P networking
    network_url="ws://your-ip-or-devtunnel:8000",
    auth_token="secure_secret_here",
    allow_file_transfers=True,
    allowed_senders=["agent-id-1", "agent-id-2"],   # whitelist (empty = allow all)
)
LLM Configuration
from daie import set_llm, get_llm_config, LLMType
# Ollama (local, default)
set_llm(ollama_llm="llama3.2:latest", temperature=0.7, max_tokens=1000)
set_llm(ollama_llm="gemma3:1b", stream=True) # enable streaming
# OpenAI
set_llm(llm_type=LLMType.OPENAI, model_name="gpt-4o-mini", api_key="sk-...")
# Anthropic
set_llm(llm_type=LLMType.ANTHROPIC, model_name="claude-3-sonnet-20240229", api_key="...")
# Google
set_llm(llm_type=LLMType.GOOGLE, model_name="gemini-pro", api_key="...")
# Azure OpenAI
set_llm(llm_type=LLMType.AZURE, model_name="gpt-4", api_key="...", base_url="https://<resource>.openai.azure.com")
# OpenRouter
set_llm(llm_type=LLMType.OPENROUTER, model_name="mistralai/mistral-7b-instruct", api_key="...")
# Check current config
cfg = get_llm_config()
print(cfg.llm_type, cfg.model_name, cfg.stream)
Streaming
Streaming is a library-level setting – set it once and it applies everywhere:
set_llm(ollama_llm="llama3.2:latest", stream=True)
When stream=True, send_message() prints tokens as they arrive and returns the full response string when done.
execute_task() always runs the reasoning loop without streaming (for reliability), then streams the final answer.
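The `stream=True` contract can be illustrated with plain Python (a stand-in token source, not the real LLM client): tokens are emitted as they arrive, and the complete response is still returned at the end.

```python
def stream_tokens(tokens):
    # Print each token immediately, then return the assembled response,
    # mirroring how send_message() behaves with stream=True.
    collected = []
    for tok in tokens:
        print(tok, end="", flush=True)  # caller sees output token by token
        collected.append(tok)
    return "".join(collected)

full = stream_tokens(["Hel", "lo", "!"])
```

This is why streaming callers can both display partial output and keep the final string for logging or memory.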
Tools
Pre-built tools
| Tool | Description |
|---|---|
| `FileManagerTool` | Complete file and directory manipulation |
| `APICallTool` | Comprehensive HTTP requests (GET/POST/etc.) |
| `WebSearchTool` | Robust web search (DuckDuckGo + Tavily fallback) |
| `CodeSandboxTool` | Secure Python code execution in a restricted context |
| `DatabaseTool` | Execute SQL queries against SQLite or PostgreSQL |
| `PlaywrightBrowserTool` | Modern, fast browser automation (highly recommended) |
| `SeleniumChromeTool` | Legacy browser automation with deep Chrome support |
| `A2ASendFileTool` | Transfer files securely between agents over the P2P network |
| `A2ASendMessageTool` | Send messages between agents |
| `A2ADelegateTaskTool` | Delegate tasks to other agents via ACP |
| `CalendarEmailTool` | Mock interactions for schedule and mail management |
FileManagerTool actions
from daie.tools import FileManagerTool
fm = FileManagerTool()
# Create
await fm.execute({"action": "create_file", "path": "notes.txt", "content": "hello"})
# Read
result = await fm.execute({"action": "read_file", "path": "notes.txt"})
print(result["content"])
# List directory
result = await fm.execute({"action": "list_contents", "path": ".", "recursive": False})
# Delete
await fm.execute({"action": "delete_file", "path": "notes.txt"})
APICallTool
from daie.tools import APICallTool
api = APICallTool()
result = await api.execute({
    "url": "https://api.github.com/users/octocat",
    "method": "GET",
    "headers": {"Accept": "application/json"},
})
print(result["json"])
SeleniumChromeTool (browser automation)
from daie.tools import SeleniumChromeTool
browser = SeleniumChromeTool()
await browser.execute({"action": "open_url", "url": "https://example.com", "headless": True})
result = await browser.execute({"action": "get_title"})
print(result["page_title"])
await browser.execute({"action": "screenshot", "screenshot_path": "page.png"})
Custom @tool decorator
from daie.tools import tool
@tool(name="calculate", description="Evaluate a math expression")
async def calculate(expression: str) -> str:
    return str(eval(expression))  # demo only; use a safe parser in production

agent.add_tool(calculate)
result = await agent.execute_task("What is 12 * 34?")
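Since `eval()` on model-generated strings is risky, a hardened version of this tool could whitelist arithmetic AST nodes instead. This is a sketch of that idea, not part of DAIE:

```python
import ast
import operator

# Only these operators are allowed; anything else is rejected.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    # Walk the parsed expression tree, permitting only numbers and
    # whitelisted arithmetic operators.
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"Disallowed expression: {expression!r}")
    return walk(ast.parse(expression, mode="eval"))
```

`safe_eval("__import__('os')")` raises `ValueError` instead of executing code, which is the behavior a production tool wants.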
P2P Networking & File Transfers
DAIE supports multi-agent communication via its CommunicationManager. Agents can:
- Discover peers via the built-in `NodeRegistry`
- Send direct messages between agents (in-process, or via WebSocket for remote agents)
- Transfer files securely using Base64 encoding with the `A2ASendFileTool`
- Authorize senders with `allowed_senders` whitelists
- Authenticate connections with `auth_token`
Setting Up Networked Agents
from daie import Agent, AgentConfig
from daie.communication import CommunicationManager
comm = CommunicationManager()
await comm.start()
config = AgentConfig(
    name="NetworkWorker",
    network_url="ws://<your-public-ip-or-devtunnel>:8000",
    auth_token="secure_cross_machine_token123",
    allow_file_transfers=True
)
agent = Agent(config=config)
await agent.start(communication_manager=comm)
Authorization Whitelist
config = AgentConfig(
    name="SecureNode",
    allowed_senders=["trusted-agent-id-1", "trusted-agent-id-2"],
)
# Only messages from whitelisted sender IDs will be accepted.
# An empty list allows all senders.
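Under the hood, A2A file transfer relies on Base64-encoding the file so raw bytes can travel inside a text message over WebSocket. The mechanism can be sketched with the standard library (the payload dict below is illustrative, not DAIE's actual wire format):

```python
import base64
import os
import tempfile

def encode_file_payload(path: str) -> dict:
    # Read the raw bytes and Base64-encode them into a text-safe field.
    with open(path, "rb") as f:
        data = f.read()
    return {"file_name": os.path.basename(path),
            "data_b64": base64.b64encode(data).decode("ascii")}

def decode_file_payload(payload: dict) -> bytes:
    # The receiver reverses the encoding to recover the original bytes.
    return base64.b64decode(payload["data_b64"])

path = os.path.join(tempfile.gettempdir(), "daie_demo_payload.txt")
with open(path, "wb") as f:
    f.write(b"Secure payload!")

payload = encode_file_payload(path)
restored = decode_file_payload(payload)
```

Base64 inflates the payload by roughly a third, which is the usual trade-off for sending binary data over text protocols.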
Camera (OpenCV)
pip install opencv-python
from daie.utils import CameraManager, capture_image, list_camera_devices
# List cameras
devices = list_camera_devices()
print("Available cameras:", devices)
# Capture a single image
capture_image("photo.jpg", device_index=0)
# Stream frames
cam = CameraManager()
cam.initialize_camera(device_index=0)
def on_frame(frame):
    print("Got frame:", frame.shape)
cam.start_streaming(callback=on_frame)
# ... do work ...
cam.stop_streaming()
cam.release()
Vision Chat with Qwen-VL
DAIE supports local vision models via Ollama.
import cv2
import base64
from daie import Agent, set_llm
from daie.utils import CameraManager
set_llm(ollama_llm="qwen3-vl:2b")
# Capture and encode image
cam = CameraManager()
frame = cam.get_frame()
_, buffer = cv2.imencode('.jpg', frame)
img_b64 = base64.b64encode(buffer).decode('utf-8')
# Query the vision agent (inside an async function)
agent = Agent()
response = await agent.execute_task("What do you see?", images=[img_b64])
Audio (PyAudio)
pip install pyaudio
from daie.utils import AudioManager, record_audio_file, play_audio_file
# List audio devices
am = AudioManager()
am.initialize_audio()
devices = am.list_audio_devices()
print(devices)
# Record 5 seconds to a WAV file
record_audio_file("recording.wav", duration=5.0, sample_rate=16000)
# Play it back
play_audio_file("recording.wav")
CLI
# Agent management
daie agent list
daie agent create --name "MyAgent" --role "general-purpose"
daie agent start <agent-id>
daie agent stop <agent-id>
daie agent status <agent-id>
daie agent delete <agent-id>
# Core system
daie core init
daie core start
daie core stop
daie core status
daie core health
daie core logs
Architecture
src/daie/
├── agents/          Agent, AgentConfig, AgentRole, AgentMessage, Orchestrator, AgentRouter
├── core/            LLMManager, LLMConfig, LLMType, set_llm(), get_llm(), DecentralizedAISystem, Node
├── tools/           Tool base class, @tool decorator, FileManagerTool, WebSearchTool,
│                    CodeSandboxTool, DatabaseTool, PlaywrightBrowserTool, APICallTool,
│                    A2ASendFileTool, A2ASendMessageTool, A2ADelegateTaskTool, VisionTool
├── utils/           AudioManager, CameraManager, encryption, logging, serialization
├── communication/   CommunicationManager (in-memory + WebSocket P2P)
├── registry/        NodeRegistry (decentralized agent discovery)
├── memory/          MemoryManager (working, semantic, episodic)
├── protocols/       Protocol definitions (ACP – Agent Connect Protocol)
├── rag/             RAGEngine, DocumentLoader (TF-IDF retrieval)
└── cli/             Typer-based CLI (agent management, core system control)
ReAct loop flow:
execute_task("Create notes.txt")
  │
  ├─ LLM: {"tool":"file_manager","params":{"action":"create_file",...}}
  ├─ Run FileManagerTool → {"success":true,...}
  ├─ LLM: {"answer":"Done! File created."}
  └─ return "Done! File created."
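A stripped-down version of this loop, with a scripted stand-in for the LLM and a fake tool (demonstration only, not DAIE's internals), looks like:

```python
import json

def react_loop(llm, tools, task, max_steps=5):
    # Each step, the LLM either requests a tool call or returns a final
    # answer; tool results are appended to the history and fed back in.
    history = [task]
    for _ in range(max_steps):
        step = json.loads(llm(history))
        if "answer" in step:
            return step["answer"]
        result = tools[step["tool"]](step["params"])
        history.append(json.dumps(result))
    return "Max steps reached"

# Scripted fake LLM: first requests the tool, then gives the final answer.
script = iter([
    '{"tool": "file_manager", "params": {"action": "create_file", "path": "notes.txt"}}',
    '{"answer": "Done! File created."}',
])
tools = {"file_manager": lambda params: {"success": True}}
answer = react_loop(lambda history: next(script), tools, "Create notes.txt")
```

The `max_steps` cap mirrors guardrails like `max_tool_calls_per_task`: a runaway loop terminates instead of burning tokens forever.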
Examples
💬 Chatbots
| Level | File | Description |
|---|---|---|
| 🟢 Beginner | `examples/01_basic_chat.py` | Interactive streaming chat with persona traits (gender, personality, behavior) |
| 🟡 Intermediate | `examples/05_vision_chat.py` | Real-time vision-enabled chat using `qwen3-vl:2b` and a local camera |
| 🟡 Intermediate | `examples/12_chat_loop_config.py` | Pre-configured chat loops for agents, nodes, orchestrators, and hybrid systems |
🤖 AI Agents
| Level | File | Description |
|---|---|---|
| 🟡 Intermediate | `examples/02_custom_tools.py` | Custom `@tool` decorator + `FileManagerTool` with ReAct agent loop |
| 🟡 Intermediate | `examples/09_intelligent_routing.py` | LLM-based intelligent agent routing with multiple specialized agents |
| 🔴 Advanced | `examples/classroom_demo.py` | Multi-agent classroom orchestration with Professor and Student agents |
| 🔴 Advanced | `examples/courtroom_demo.py` | Multi-agent courtroom simulation with Judge, Prosecutor, and Defender |
📚 RAG Systems
| Level | File | Description |
|---|---|---|
| 🟡 Intermediate | `examples/04_rag_chat.py` | RAG-enabled chat with document-based knowledge retrieval |
🌐 Networking & Communication
| Level | File | Description |
|---|---|---|
| 🔴 Advanced | `examples/03_p2p_networking.py` | Multi-agent P2P messaging, authorization, and A2A file transfer |
| 🔴 Advanced | `examples/07_node_agents_interactive.py` | Interactive Node-based chat system with multiple agents |
| 🔴 Advanced | `examples/08_node_agents_demo.py` | Automated Node demonstration with resource management |
🏗️ Architecture Examples
| Level | File | Description |
|---|---|---|
| 🔴 Advanced | `examples/classroom_demo.py` | Multi-agent classroom orchestration with Professor and Student agents |
| 🔴 Advanced | `examples/courtroom_demo.py` | Multi-agent courtroom simulation with Judge, Prosecutor, and Defender |
Run any example:
source venv/bin/activate
python examples/01_basic_chat.py
🏗️ Architecture Patterns
When to Use Node
Use Node when you need:
- Distributed networks across multiple machines/locations
- Resource management (GPU, memory, model cache)
- Peer-to-peer communication between agents
- Horizontal scalability by adding nodes
- Edge computing with local processing
- High availability with no single point of failure
- Geographic distribution across regions
- Multi-tenant systems with resource isolation
Don't use Node when:
- Simple task coordination on a single machine
- Quick prototyping without infrastructure setup
- Stateless operations that don't need resource management
- Team lacks distributed systems expertise
When to Use Orchestrator
Use Orchestrator when you need:
- Task decomposition into manageable sub-tasks
- Specialized agents with different skills
- Result aggregation from multiple agents
- Workflow coordination with clear hierarchy
- Research/analysis tasks requiring multiple experts
- Content creation workflows
- Customer support routing
- Multi-step workflows
Don't use Orchestrator when:
- Flat peer structure with equal agents
- Direct communication without mediation
- Resource management is required
- Distributed network across multiple machines
When to Use Hybrid (Node + Orchestrator)
Use Hybrid when you need:
- Enterprise-scale systems with multiple teams
- Distributed teams with local coordination
- Resource-aware task execution
- Complex distributed workflows
- Maximum scalability and flexibility
- Edge computing with central coordination
- Multi-location with specialized teams
Decision Matrix (reconstructed from the guidance above):
| Scenario | Node | Orchestrator | Hybrid |
|---|---|---|---|
| Single machine, simple tasks | ❌ | ✅ | ❌ |
| Multiple machines, no coordination | ✅ | ❌ | ❌ |
| Single machine, complex workflows | ❌ | ✅ | ❌ |
| Multiple machines, complex workflows | ❌ | ❌ | ✅ |
| Resource management needed | ✅ | ❌ | ✅ |
| Task delegation needed | ❌ | ✅ | ✅ |
| Geographic distribution | ✅ | ❌ | ✅ |
| Enterprise-scale systems | ❌ | ❌ | ✅ |
Full guide: Node vs Orchestrator (100+ use cases, decision matrix, and real-world examples)
Real-World Use Cases
Distributed Research Network
- Multiple labs across different locations
- Each lab manages its own resources (GPU clusters, specialized hardware)
- Labs collaborate on research projects
- Orchestrator within each lab coordinates local tasks
Smart City Traffic Management
- Multiple districts with local coordination
- Each district manages traffic cameras and sensors
- Orchestrator coordinates traffic signals within district
- Nodes share traffic data across districts
Multi-Location Customer Support
- Support centers in different time zones
- 24/7 coverage across time zones
- Each center manages its own resources
- Orchestrator routes tickets to appropriate specialist
Autonomous Vehicle Fleet
- Each vehicle manages its own sensors and compute
- Orchestrator coordinates navigation decisions
- Nodes share traffic and road condition data
- Resource management tracks battery, compute, sensors
Distributed Content Creation
- Multiple teams (writing, design, video)
- Each team manages its own tools and resources
- Orchestrator coordinates content workflow
- Nodes share assets and drafts
Project Ideas
Beginner Projects
- Personal AI Assistant Network: create a node with multiple specialized assistants (calendar, email, research, coding)
- Study Group Simulator: simulate a study group with a professor and students using Orchestrator
Intermediate Projects
- Multi-Location News Network: create a distributed news network with editorial teams in different locations
- E-commerce Support System: build a distributed customer support system with specialized teams
Advanced Projects
- Distributed AI Research Lab: create a research network with multiple labs, each with specialized equipment and expertise
- Smart Factory Automation: build an automated factory system with multiple production lines
Full project ideas: Node vs Orchestrator (detailed code examples for each project)
Core Components
Agent System
- ReAct Loop: the LLM reasons → picks a tool → sees the result → iterates until it reaches a final answer
- Persona System: Configure gender, personality, and behavior traits
- Tool Integration: 8+ pre-built tools plus a custom `@tool` decorator
- Memory Management: Working, semantic, and episodic memory
- Chat Loop Configs: Pre-configured chat loops for agents, nodes, orchestrators, and hybrid systems
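The ReAct loop described above can be sketched in a few lines of plain Python. Everything here (the `TOOLS` registry, `tool`, `react_step`) is a hypothetical stand-in to illustrate the pattern, not DAIE's actual API:

```python
# Illustrative ReAct-style dispatch: register tools, then let each
# reason -> act -> observe step look one up and run it.
# All names here are hypothetical stand-ins, not DAIE's real API.
TOOLS = {}

def tool(fn):
    """Register a plain function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def add(a: int, b: int) -> int:
    return a + b

def react_step(action: str, **kwargs):
    """Run one chosen tool and return its result as the observation."""
    if action not in TOOLS:
        return f"unknown tool: {action}"
    return TOOLS[action](**kwargs)

print(react_step("add", a=2, b=3))  # 5
```

In the real loop, the LLM chooses the action and arguments from the tool descriptions, observes the result, and repeats until it emits a final answer.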
Multi-Agent Coordination
- Orchestrator: Main agent coordinates sub-agents for complex tasks
- HybridOrchestratorNode: Simplified hybrid setup combining Node + Orchestrator in one class
- Agent Router: LLM-based intelligent routing for optimal agent selection
- Task Delegation: Automatic task decomposition and result aggregation
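As rough intuition for what routing does, here is a trivial keyword-overlap stand-in. DAIE's AgentRouter uses an LLM to pick the agent; this sketch, with hypothetical agent names, only mimics the selection step:

```python
# Simplified stand-in for intelligent routing: pick the agent whose
# keyword set best overlaps the task. DAIE's AgentRouter does this
# with an LLM; the agent names here are hypothetical.
AGENTS = {
    "coder":      {"python", "bug", "code", "function"},
    "researcher": {"paper", "study", "research", "summarize"},
}

def route(task: str) -> str:
    words = set(task.lower().split())
    # Score each agent by keyword overlap with the task.
    return max(AGENTS, key=lambda name: len(AGENTS[name] & words))

print(route("fix this python bug"))          # coder
print(route("summarize the research paper")) # researcher
```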
Networking & Communication
- P2P Networking: Direct agent-to-agent communication via WebSocket
- Authentication: Token-based auth with sender whitelists
- File Transfer: Secure A2A file transfer with Base64 encoding
- Node Registry: Decentralized agent discovery
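The Base64 framing mentioned above can be illustrated with a self-contained round trip. The message shape (`type`/`name`/`payload` fields) is an assumption for illustration, not DAIE's actual wire format:

```python
import base64
import json

# Sketch of how a file might be framed for A2A transfer using Base64,
# as described above. The JSON field names are hypothetical; DAIE's
# real wire format is not shown here.
def pack_file(name: str, data: bytes) -> str:
    return json.dumps({
        "type": "file",
        "name": name,
        # Base64 makes arbitrary bytes safe to embed in a text message.
        "payload": base64.b64encode(data).decode("ascii"),
    })

def unpack_file(msg: str) -> tuple[str, bytes]:
    obj = json.loads(msg)
    return obj["name"], base64.b64decode(obj["payload"])

name, data = unpack_file(pack_file("notes.txt", b"hello peer"))
print(name, data)  # notes.txt b'hello peer'
```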
RAG System
- TF-IDF Retrieval: Simple but effective document retrieval
- Per-Agent Knowledge: Each agent can have its own knowledge base
- Document Loading: Support for .txt, .pdf, .md files
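To make the TF-IDF idea concrete, here is a minimal scorer in plain Python. This only illustrates the technique; DAIE's retriever internals may differ:

```python
import math
from collections import Counter

# Minimal TF-IDF retrieval: score each document against a query by
# term frequency weighted with smoothed inverse document frequency.
docs = [
    "agents communicate over p2p networks",
    "rag retrieval augments the llm with documents",
    "ollama runs llms locally",
]

def tfidf_scores(query: str, corpus: list[str]) -> list[float]:
    tokenized = [d.split() for d in corpus]
    n = len(corpus)

    def idf(term: str) -> float:
        df = sum(term in toks for toks in tokenized)
        return math.log((n + 1) / (df + 1)) + 1  # smoothed IDF

    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        scores.append(sum(tf[t] / len(toks) * idf(t) for t in query.split()))
    return scores

scores = tfidf_scores("rag documents", docs)
print(docs[scores.index(max(scores))])  # the RAG document wins
```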
LLM Support
- Multi-Provider: Ollama (default), OpenAI, Anthropic, Google, Azure, OpenRouter
- Streaming: Library-level streaming for real-time responses
- Per-Agent Overrides: Each agent can have its own temperature and max_tokens
Performance & Scalability
Performance Ratings
| Component | Rating | Notes |
|---|---|---|
| Setup Time | ⭐⭐⭐⭐⭐ | Quick to get started |
| Scalability | ⭐⭐⭐⭐⭐ | Horizontal (nodes) + vertical (sub-agents) |
| Resource Efficiency | ⭐⭐⭐⭐⭐ | Built-in resource tracking per node |
| Communication Speed | ⭐⭐⭐⭐ | Direct P2P + A2A messaging |
| Fault Tolerance | ⭐⭐⭐⭐ | Distributed + orchestrator backup |
| Complexity | ⭐⭐⭐ | Moderate learning curve |
Scalability Features
- Horizontal Scaling: Add nodes to increase capacity
- Vertical Scaling: Add sub-agents to orchestrators
- Load Distribution: Distribute work across multiple machines
- Resource Isolation: Separate resources per node
- Geographic Distribution: Deploy across multiple regions
Optimization Tips
- Use streaming for real-time responses
- Enable RAG for context-aware answers
- Configure personas for better agent behavior
- Use AgentRouter for intelligent task routing
- Deploy nodes close to data sources
- Monitor resources per node
- Implement health checks for node status
Security Features
- Authentication: Token-based auth for agent connections
- Authorization: Sender whitelists for message filtering
- Encryption: Built-in encryption utilities
- Secure File Transfer: Base64 encoding for A2A file transfers
- Resource Isolation: Per-node resource isolation
- Access Control: Per-node and per-agent access control
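A minimal sketch of the token-plus-whitelist check described above, with hypothetical names (`SHARED_TOKEN`, `WHITELIST`); DAIE's actual checks live inside the library:

```python
import hmac

# Illustrative token auth plus sender whitelist. The token and
# whitelist values here are hypothetical placeholders.
SHARED_TOKEN = "s3cret-token"           # configured out-of-band
WHITELIST = {"agent-alex", "agent-luna"}

def authorize(sender: str, token: str) -> bool:
    # compare_digest avoids timing side channels on the token check.
    return hmac.compare_digest(token, SHARED_TOKEN) and sender in WHITELIST

print(authorize("agent-alex", "s3cret-token"))     # True
print(authorize("agent-mallory", "s3cret-token"))  # False
```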
Developer Experience
Easy Setup
```bash
pip install daie
```
Simple API
```python
from daie import Agent, AgentConfig, set_llm

set_llm(ollama_llm="llama3.2:1b", stream=True)
agent = Agent(config=AgentConfig(name="Alex", personality="helpful"))
await agent.start()
response = await agent.send_message("Hello!")
```
Pre-configured Chat Loops
```python
from daie import Agent, AgentConfig
from daie.chat import ChatLoopConfig

agent = Agent(config=AgentConfig(name="LUNA", personality="friendly"))
chat_loop = ChatLoopConfig(agent=agent)
chat_loop.run()  # Start an interactive chat with minimal code
```
Comprehensive Documentation
- Getting Started: installation and quick start
- Agents: agent creation and configuration
- Node vs Orchestrator: architecture comparison
- Tools: pre-built and custom tools
- Examples: working code examples
Testing
```bash
# Run all tests
pytest tests/

# Run a specific test file
pytest tests/test_agents.py

# Run with coverage
pytest --cov=src/daie tests/
```
Development
```bash
git clone https://github.com/kanishkkumarsingh2004/DAIE.git
cd DAIE
python -m venv venv
source venv/bin/activate
pip install -e ".[dev]"

# Run tests
pytest tests/

# Run an example chat loop
python examples/01_basic_chat.py
```
Troubleshooting
| Problem | Fix |
|---|---|
| Could not connect to Ollama | Run `ollama serve` and pull a model: `ollama pull wizard-vicuna-uncensored:7b` |
| `ModuleNotFoundError: cv2` | `pip install opencv-python` |
| `ModuleNotFoundError: pyaudio` | `pip install pyaudio` |
| Agent not responding | Call `await agent.start()` before `execute_task()` |
| Task timeout | Increase `task_timeout` in `AgentConfig` |
| LLM returns plain text instead of JSON | Normal: the agent treats plain text as a final answer |
| `execute_task` takes 30-60s on first call | The local LLM model is loading into memory; subsequent calls are faster |
| "Failed to load registry" warning | Ensure `node_registry.json` contains valid JSON (not empty) |
| Persona traits not applied | Verify `gender`, `personality`, or `behavior` are set in `AgentConfig` |
Current Status
✅ Production Ready
DAIE is a mature, production-ready framework with comprehensive features:
- Core Framework: Fully implemented and tested
- Agent System: Complete with ReAct loop, personas, and tool integration
- Multi-Agent Orchestration: Orchestrator pattern for complex task coordination
- Intelligent Routing: LLM-based agent selection with AgentRouter
- P2P Networking: Full peer-to-peer communication with authentication
- RAG System: TF-IDF based retrieval with per-agent knowledge bases
- Tools: 12+ pre-built tools with custom `@tool` decorator support
- Memory Management: Working, semantic, and episodic memory systems
- CLI: Complete command-line interface for agent and system management
- Documentation: Comprehensive docs with examples and guides
- Chat Loop Configs: Pre-configured chat loops for agents, nodes, orchestrators, and hybrid systems
Test Coverage
- Unit Tests: 20+ test files covering all major components
- Integration Tests: End-to-end testing for multi-agent scenarios
- Example Tests: All examples have corresponding test coverage
Recent Improvements
- Zero-Drift Temporal Awareness: Solved frozen-in-time hallucinations via mandatory date/time injection for all agents.
- Tool Hardening: Added sandbox-ready code execution, modern Playwright support, and SQL database interactions.
- Communication Stabilization: Implemented inbound/outbound rate limiting for high-reliability NATS delivery.
- Improved Use Cases: Added decision matrix and 100+ architecture scenarios to documentation.
Community & Support
Getting Help
- Documentation: Comprehensive docs in the docs folder
- Examples: Working code examples in the examples folder
- Issues: Report bugs and request features on GitHub
- Discussions: Join community discussions
Contributing
We welcome contributions! Here's how to get started:
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Make your changes and add tests
- Run tests: `pytest tests/`
- Commit your changes: `git commit -m 'Add amazing feature'`
- Push to the branch: `git push origin feature/amazing-feature`
- Open a Pull Request
Development Setup
```bash
git clone https://github.com/kanishkkumarsingh2004/DAIE.git
cd DAIE
python -m venv venv
source venv/bin/activate
pip install -e ".[dev]"
pytest tests/
```
Code Style
- Formatter: Black
- Linter: Flake8
- Type Checker: MyPy
- Tests: pytest with pytest-asyncio
Learning Resources
Tutorials
- Getting Started: docs/getting-started.md
- Building Your First Agent: examples/01_basic_chat.py
- Adding Tools: examples/02_custom_tools.py
- P2P Networking: examples/03_p2p_networking.py
- Multi-Agent Orchestration: examples/classroom_demo.py
- Pre-configured Chat Loops: examples/12_chat_loop_config.py
Architecture Guides
- Node vs Orchestrator: docs/node-vs-orchestrator.md
- Agent Configuration: docs/agents.md
- Communication: docs/communication.md
- Memory Management: docs/memory.md
- Chat Configs: docs/chat-configs.md
Video Tutorials
- Coming soon!
Statistics
- Lines of Code: 10,000+
- Test Files: 20+
- Examples: 10+
- Documentation Pages: 15+
- Supported LLM Providers: 6 (Ollama, OpenAI, Anthropic, Google, Azure, OpenRouter)
- Pre-built Tools: 12+
- Architecture Patterns: 3 (Node, Orchestrator, Hybrid)
- Chat Loop Configs: 4 (ChatLoopConfig, NodeChatConfig, OrchestratorChatConfig, HybridChatConfig)
Acknowledgments
- Ollama for local LLM support
- LangChain for inspiration
- FastAPI for HTTP server
- Pydantic for data validation
- Rich for beautiful terminal output
- Typer for CLI framework
License
MIT (see LICENSE)
Author
Built by Kanishk Kumar Singh (kanishkkumar2004@gmail.com)
⭐ Star History
If you find DAIE useful, please give it a star on GitHub! It helps others discover the project.
File details
Details for the file daie-1.0.6.tar.gz.
File metadata
- Download URL: daie-1.0.6.tar.gz
- Upload date:
- Size: 251.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `c0884c12847aaf5a41b60ba765d9856d6391ec87702de7777d7266f3b2ac837e` |
| MD5 | `7c6b0e20cf56d7426875fa2d40dfea92` |
| BLAKE2b-256 | `dff24df3741f4c27cd16c1b3123e794d267c9578dc02d1179d4c11432a9ada89` |
File details
Details for the file daie-1.0.6-py3-none-any.whl.
File metadata
- Download URL: daie-1.0.6-py3-none-any.whl
- Upload date:
- Size: 223.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `a7764dbc967a2e563d4e19725ba3c0125da255952cb57e5cb7d3aff5508437b8` |
| MD5 | `78146371d4fb9354b4a9a860b3051f66` |
| BLAKE2b-256 | `286f764a7685a3f2a3c408d34e7d38c74f4e1921f76d792d286eb7c2e3232979` |