PNNL Auto Multi-Agent AI: Dynamic multi-agent system for building applications
AUTOMA-AI - Autonomous Multi-Agent Network (Formerly BEM-AI)
AUTOMA-AI is an open-source framework for building production-ready AI agents powered by modern language models such as Gemini, ChatGPT, and Claude.
It provides a structured way to turn LLMs from simple chat interfaces into task-oriented agents that can plan, reason, and interact with external systems. Out of the box, AUTOMA-AI equips agents with capabilities such as:
- Tool and API integration (via MCP or AUTOMA-AI tool interface)
- Retrieval pipelines for grounding responses in data
- Memory systems for session and long-term context
- Skills and workflows for structured task execution
- Multi-agent orchestration for complex problem-solving
AUTOMA-AI is designed with real-world deployment in mind. It supports AWS-based architectures and integrates with major cloud services, enabling teams to move from prototype to production quickly.
Under the hood, the framework builds on emerging standards like Google’s A2A (Agent-to-Agent) and Anthropic’s MCP (Model Context Protocol), and leverages ecosystems such as LangChain and modern orchestration patterns to coordinate agents in engineering workflows.
Ready to use automa-ai? Run:
pip install automa-ai
Want to start AI development with automa-ai? Don't miss the sim_chat_stream_demo example, which walks you through bootstrapping an AI chatbot.
NOTE: BEM-AI has moved to the examples folder: bem-ai
⚠️ Project Status
This project is in its early development phase and is considered highly unstable. APIs, interfaces, and core functionality are subject to significant changes. Use for development and experimentation only.
🚀 Overview
AUTOMA-AI creates a distributed multi-agent system that enables intelligent agents to communicate, collaborate, and coordinate using industry-standard protocols. The system leverages:
- Google A2A Protocol: For agent-to-agent communication
- Anthropic MCP Protocol: For model context management
- LangChain / LangGraph: For LLM-based agent orchestration and workflow management
- Google GenAI: For AI model integration
🛠️ Technology Stack
Core Dependencies
- LangChain / LangGraph: Agent framework and orchestration
- Google GenAI: AI model integration
- Google A2A: Agent-to-agent communication protocol
- Anthropic MCP: Model context protocol implementation
Development Tools
- uv: Modern Python package management
- Python 3.12: Runtime environment
📁 Project Structure
BEM-AI/
├── examples/ # Example engineering applications built with the foundational framework
├── automa_ai/
│ ├── agent_test/ # Test implementations and examples
│ ├── agents/ # Generic agent classes
│ │ ├── react_langgraph_agent.py # langchain/langgraph based agent
│ │ ├── agent_factor.py # Agent factory - recommended utility to initialize an agent
│ │ ├── orchestrator_agent.py # An agent that orchestrates the task workflow
│ │ └── adk_agent.py # Google ADK based agent
│ ├── client/ # Under development
│ ├── mcp_servers/ # MCP library
│ ├── network/ # Network
│ ├── common/ # Common utilities
│ └── prompt_engineering/ # Under development
├── pyproject.toml # Project configuration
├── uv.lock # Dependency lock file
└── README.md # This file
🔧 Installation
We recommend installing AUTOMA-AI from PyPI:
pip install automa-ai
This installs all packages under the automa_ai folder.
Prerequisites
- Python 3.12+
- uv package manager
Setup
1. Clone the repository

   git clone <repository-url>
   cd bem-ai

2. Install dependencies using uv

   uv sync

3. Activate the virtual environment

   uv shell
🧪 Running Tests
TBD
🏗️ Architecture
- Orchestrator: Assembles the workflow and accesses agent card storage
- Task Memory: Task memory, including a shared blackboard and conversation history
- Planner: A planning agent
- Summary: A summarization agent
- Specialized agents: Domain-specific agents
- Agent Card Service: A RAG pipeline that stores agent cards
- Tools and Resources: External tool and resource access through MCPs
📝 Configuration
Project configuration is managed through pyproject.toml. Key configuration areas include:
- Dependencies: Core and development packages
- Build System: uv-based build configuration
- Project Metadata: Version, description, and author information
- Optional: Optional packages for UI integration and running the examples
Default tools configuration
You can enable built-in tools directly from config using a tools list.
tools:
- type: web_search
config:
provider: auto
serper:
api_key: ${SERPER_API_KEY}
firecrawl:
api_key: ${FIRECRAWL_API_KEY}
scrape:
enabled: true
max_pages: 5
rerank:
provider: opensource
top_k: 5
Then pass this to AgentFactory(..., tools_config=tools) for LANGGRAPHCHAT agents.
See docs/tools.md and examples/web_search_demo.py for a runnable example.
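If you prefer building the configuration in code rather than YAML, the same tools list can be expressed as a plain Python structure. This is a minimal sketch: the structure mirrors the YAML above, and the environment-variable lookups stand in for the `${VAR}` placeholders; the exact `AgentFactory(..., tools_config=tools)` call is the one named in the text.

```python
import os

# Equivalent of the YAML `tools` list as a plain Python structure.
# API keys are read from the environment, mirroring the ${VAR} placeholders.
tools = [
    {
        "type": "web_search",
        "config": {
            "provider": "auto",
            "serper": {"api_key": os.environ.get("SERPER_API_KEY", "")},
            "firecrawl": {"api_key": os.environ.get("FIRECRAWL_API_KEY", "")},
            "scrape": {"enabled": True, "max_pages": 5},
            "rerank": {"provider": "opensource", "top_k": 5},
        },
    }
]

# Then, as described above:
# agent = AgentFactory(..., tools_config=tools)
```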
Checkpointer configuration
LANGGRAPHCHAT agents can also be configured with an explicit checkpointer backend through AgentFactory.
The default backend is in-memory. Redis is opt-in and requires a connection URL.
There are two Redis backends:
- redis_plain: Uses only core Redis commands. Choose this for standard Redis-compatible deployments, including typical Amazon ElastiCache deployments that do not expose RediSearch and RedisJSON.
- redis_stack: Uses LangGraph's Redis saver and requires both RediSearch and RedisJSON support. Choose this only when your Redis deployment supports commands such as FT._LIST and JSON.GET.
Use type: default to force the in-memory saver explicitly.
redis_plain
checkpointer:
type: redis_plain
redis_url: redis://localhost:6379
redis_plain is intended for deployments where you want Redis-backed checkpoint persistence without Redis module dependencies.
This is the safest choice for plain ElastiCache Redis/Valkey deployments.
redis_stack
checkpointer:
type: redis_stack
redis_url: redis://localhost:6379
Then pass this to AgentFactory(..., checkpointer_config=checkpointer).
At startup, AUTOMA-AI validates that the configured Redis server supports:
- FT._LIST for RediSearch
- JSON.GET for RedisJSON
If either command is unavailable, startup fails with a clear error and tells you to switch to redis_plain.
Choosing the backend
- Choose redis_plain when your deployment target is standard Redis or ElastiCache and you do not specifically need Redis Stack modules.
- Choose redis_stack only when the Redis service is known to support RediSearch and RedisJSON.
- Do not use the old ambiguous redis label. The backend must be selected explicitly.
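The selection rules above can be sketched as a small helper that builds the checkpointer config dict. This is an illustrative function, not part of automa_ai; the config keys (`type`, `redis_url`) are the ones shown in the YAML examples above.

```python
def select_checkpointer(redis_url=None, has_redis_modules=False):
    """Build a checkpointer config dict following the selection rules:
    no Redis URL -> in-memory default; Redis without RediSearch/RedisJSON
    -> redis_plain; Redis Stack with both modules -> redis_stack.
    Hypothetical helper for illustration only.
    """
    if redis_url is None:
        return {"type": "default"}
    if has_redis_modules:
        return {"type": "redis_stack", "redis_url": redis_url}
    return {"type": "redis_plain", "redis_url": redis_url}

# e.g. for a plain ElastiCache deployment:
# checkpointer = select_checkpointer("redis://localhost:6379")
# agent = AgentFactory(..., checkpointer_config=checkpointer)
```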
A2A Server Base Path
You can mount an A2A agent server under a URL prefix by passing base_url_path to
A2AAgentServer. This is useful when serving behind a reverse proxy or when you
want a dedicated path segment for the agent.
from automa_ai.common.agent_registry import A2AAgentServer
chatbot_a2a = A2AAgentServer(chatbot, public_agent_card, base_url_path="/permit")
Notes:
- Include a trailing slash in client URLs to avoid 307 redirects (SSE does not follow redirects): e.g., SimpleClient(agent_url=f"{A2A_SERVER_URL}/permit/")
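To avoid forgetting the trailing slash, client URLs can be built with a small helper that normalizes the base URL and mount path. This is a hypothetical convenience function, not part of automa_ai:

```python
def agent_url(base, path):
    """Join a server base URL and a mount path, guaranteeing exactly one
    slash between them and a trailing slash at the end, so the server does
    not answer with a 307 redirect (SSE clients do not follow redirects).
    Hypothetical helper for illustration only.
    """
    return base.rstrip("/") + "/" + path.strip("/") + "/"

# client = SimpleClient(agent_url=agent_url(A2A_SERVER_URL, "/permit"))
```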
Retriever configuration
Automa-AI retrieval uses a provider-based spec (by name or dotted import path). Registry names must
be registered with register_retriever_provider(...), and only the embedding section is standardized;
retrieval_provider_config is passed through to the selected provider.
Registered provider (registry name)
retriever:
enabled: true
provider: "helpdesk_chroma"
top_k: 6
embedding:
provider: "ollama"
model: "nomic-embed-text"
api_key: null
base_url: "http://localhost:11434"
extra: {}
retrieval_provider_config:
db_path: "/data/chroma"
collection_name: "my_collection"
Custom provider (dotted import path)
retriever:
enabled: true
impl: "my_project.retrieval:MyRetrieverProvider"
top_k: 10
embedding:
provider: "openai"
model: "text-embedding-3-large"
api_key: "${OPENAI_API_KEY}"
base_url: null
extra:
dimensions: 3072
retrieval_provider_config:
index_name: "prod-index"
namespace: "tenant-a"
pinecone_api_key: "${PINECONE_API_KEY}"
pinecone_env: "us-west-2"
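The two addressing modes above (a registered name via `provider`, or a dotted import path via `impl`) can be sketched as a resolution step. The `register_retriever_provider` name comes from the text; the registry dict and resolution logic below are illustrative assumptions about the mechanism, not automa_ai's actual implementation:

```python
import importlib

_RETRIEVER_REGISTRY = {}

def register_retriever_provider(name, factory):
    """Register a provider factory under a registry name
    (sketch of the mechanism described above)."""
    _RETRIEVER_REGISTRY[name] = factory

def resolve_provider(spec):
    """Resolve either a registry name (e.g. "helpdesk_chroma") or a
    dotted import path (e.g. "my_project.retrieval:MyRetrieverProvider")
    to a provider object."""
    if spec in _RETRIEVER_REGISTRY:
        return _RETRIEVER_REGISTRY[spec]
    module_name, _, attr = spec.partition(":")
    return getattr(importlib.import_module(module_name), attr)
```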
Examples
Single Agent Chatbot with Streamlit UI interface
This example demonstrates using automa-ai to build a live-streaming chatbot. It uses QWEN3:4B as the language model, and a sample MCP server is connected to the agent, demonstrating streaming and tool calling with a single chatbot. See README
Simple BEM typical building Network
This example is the prototype of BEM-AI, in which multiple agents collaboratively complete a building energy modeling task. See README
EnergyPlus Chatbot with EnergyPlus MCP server
This example shows how automa-ai integrates with the EnergyPlus MCP server developed by LBNL. See README
🔍 Development Guidelines
Code Organization
TBD
Dependency Management
- Use uv add <package> to add new dependencies
- Update uv.lock with uv lock after dependency changes
- Keep dependencies minimal and focused
Testing Strategy
TBD
🤝 Contributing
TBD
📄 License
See LICENSE.
Note: This project is experimental and under active development. Use in production environments is not recommended at this time.
📚 Citation
If you use this framework in your research or projects, please cite the following paper:
@article{xu5447218development,
title={Development of a dynamic multi-agent network for building energy modeling: A case study towards scalable and autonomous energy modeling},
author={Xu, Weili and Wan, Hanlong and Antonopoulos, Chrissi and Goel, Supriya},
journal={Available at SSRN 5447218}
}