Multi-Agent System Framework for AI Agents with Vanilla SDK Wrappers and Custom LangGraph
MASAI - Multi-Agent System AI Framework
A powerful, production-ready framework for building multi-agent AI systems with advanced features like persistent memory, long-context management, and sophisticated agent orchestration.
⭐ Please star this project if you find it useful!
🆕 New Documentation
We've added comprehensive guides to help you get the most out of MASAI:
- Model Parameters Guide - Complete reference for all supported models (Gemini, OpenAI, Anthropic) with ALL parameters, examples, and best practices
- Tools Guide - How to define tools, use them, implement Redis caching, and integrate with agents
- Singular Agent Guide - Complete guide for single agent architecture, execution, memory, and tools
- Multi-Agent System Guide - Comprehensive guide for decentralized and hierarchical multi-agent coordination
- OMAN Guide - Orchestrated Multi-Agent Network for enterprise-level multi-domain systems
📋 Quick Navigation
Getting Started
| Document | Description |
|---|---|
| Quick Start | Get started in 5 minutes |
| Installation | Setup instructions and requirements |
| Configuration | Configuration options and setup |
Core Concepts
| Document | Description |
|---|---|
| Framework Overview | Architecture and design principles |
| Model Parameters | NEW! Complete model configuration guide |
| Tools Guide | NEW! Tool definition, usage, and caching |
| Memory System | Persistent memory and long-context management |
Agent Systems
| Document | Description |
|---|---|
| Agent Manager Detailed | AgentManager API and usage |
| Singular Agent Guide | NEW! Single agent architecture and usage |
| Multi-Agent System Guide | NEW! Decentralized and hierarchical MAS |
| OMAN Guide | NEW! Orchestrated Multi-Agent Network |
Advanced Topics
| Document | Description |
|---|---|
| Advanced Usage | Expert patterns and techniques |
| Multi-Agent Orchestration | Complex multi-agent workflows |
| LangChain Agnostic Guide | Using MASAI without LangChain |
Reference
| Document | Description |
|---|---|
| API Reference | Complete API documentation |
| Troubleshooting | Common issues and solutions |
| Usage Guide | Common usage patterns |
Quick Start
Installation
```shell
pip install masai-framework
```
Basic Usage
```python
from masai.AgentManager import AgentManager, AgentDetails
import asyncio

async def main():
    # Create agent manager
    manager = AgentManager(user_id="user_123")

    # Create an agent
    agent = manager.create_agent(
        agent_name="assistant",
        tools=[],  # Add LangChain tools here
        agent_details=AgentDetails(
            capabilities=["analysis", "reasoning"],
            description="Helpful assistant",
            style="concise"
        )
    )

    # Use the agent - full execution
    result = await agent.initiate_agent(
        query="What is 2+2?",
        passed_from="user"
    )
    print(result["answer"])

    # Or use streaming for real-time updates
    async for state in agent.initiate_agent_astream(
        query="What is 2+2?",
        passed_from="user"
    ):
        node_name, state_dict = state
        state_value = next(iter(state_dict.values()))
        print(f"Node: {state_value['current_node']}")

asyncio.run(main())
```
See docs/QUICK_START.md for detailed examples.
Core Features
🧠 Multi-Agent Architecture
- Router-Evaluator-Reflector Pattern: Sophisticated agent decision-making
- Agent Orchestration: Coordinate multiple agents for complex tasks
- Tool Integration: Seamless integration with LangChain tools
- Streaming Support: Real-time response streaming
💾 Persistent Memory
- Redis Backend: Fast vector storage with RediSearch
- Qdrant Backend: Distributed vector database support
- User Isolation: Multi-user support with automatic filtering
- Deduplication: Automatic duplicate detection and merging
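The deduplication step can be pictured with a small sketch (a hypothetical illustration, not MASAI's actual implementation): hash each document's content and keep only the first occurrence before writing to the vector store.

```python
import hashlib

def deduplicate(documents: list[str]) -> list[str]:
    """Drop exact-duplicate documents by content hash, keeping the first occurrence."""
    seen: set[str] = set()
    unique = []
    for doc in documents:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = ["User likes Python", "User likes Python", "User lives in Paris"]
print(deduplicate(docs))  # ['User likes Python', 'User lives in Paris']
```

A real backend would also merge near-duplicates via embedding similarity, but exact-match hashing is the cheap first pass.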
🔄 Long-Context Management
- Context Summarization: Automatic summarization of long conversations
- Memory Overflow Handling: Intelligent flushing to persistent storage
- Semantic Search: Find relevant memories using embeddings
- Category Filtering: Organize memories by categories
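Overflow handling follows the same idea as the bullets above; here is a minimal sketch (names such as `handle_overflow` and the `summarize` callback are illustrative stand-ins, not MASAI APIs): once the history exceeds `long_context_order`, the oldest messages are folded into a summary and only the recent tail is kept verbatim.

```python
def handle_overflow(messages: list[str], long_context_order: int, summarize) -> list[str]:
    """If history exceeds the threshold, fold the oldest messages into a summary.

    `summarize` stands in for an LLM summarization call.
    """
    if len(messages) <= long_context_order:
        return messages
    overflow, tail = messages[:-long_context_order], messages[-long_context_order:]
    summary = summarize(overflow)
    return [f"[summary] {summary}"] + tail

history = [f"msg{i}" for i in range(8)]
compact = handle_overflow(history, 5, summarize=lambda msgs: f"{len(msgs)} older messages")
print(compact)  # ['[summary] 3 older messages', 'msg3', 'msg4', 'msg5', 'msg6', 'msg7']
```

In a persistent setup, the summarized overflow would also be flushed to the vector store so it stays searchable.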
🎯 Flexible Configuration
- Multiple LLM Providers: OpenAI, Google Gemini, Anthropic Claude
- Custom Embeddings: Support for any embedding model
- Scalable Parameters: Configure all model parameters via config
- Component Customization: Override any component behavior
🤝 Multi-Agent Orchestration
- Sequential Workflow: Fixed agent pipeline
- Hierarchical Workflow: Supervisor-based delegation
- Decentralized Workflow: Peer-to-peer collaboration
- Orchestrated Multi-Agent Network (OMAN): Coordinate multiple MAS instances
- Data & Context Management: Shared memory, context propagation, isolation
- See docs/MULTI_AGENT_ORCHESTRATION.md for details
📚 Documentation
Core Documentation
- Quick Start - Get started in 5 minutes
- Architecture - System design and components
- Memory System - Persistent memory and long-context management
- Installation - Complete setup guide
- Configuration - All configuration options
Advanced Documentation
- Multi-Agent Orchestration - Sequential, hierarchical, decentralized, and OMAN patterns
- Data & Context Management - Data flow, context sharing, and isolation
- Usage Guide - Common usage patterns
- Advanced Topics - Expert patterns and customization
- Troubleshooting - Common issues and solutions
- API Reference - Complete API documentation
Architecture
System Overview
```
User Application
        ↓
AgentManager (Orchestrator)
        ↓
Agent (Router-Evaluator-Reflector)
 ├─ MASGenerativeModel (LLM + Memory)
 ├─ Tool Executor
 └─ State Manager
        ↓
Memory System
 ├─ LongTermMemory
 ├─ Redis/Qdrant Backend
 └─ Embedding Model
```
See docs/ARCHITECTURE.md for detailed architecture.
Installation
Requirements
- Python 3.8+
- Redis (for persistent memory) or Qdrant
- API keys for LLM providers (OpenAI, Google, etc.)
Setup
```shell
# Clone repository
git clone https://github.com/shaunthecomputerscientist/mas-ai.git
cd mas-ai

# Install dependencies
pip install -r requirements.txt

# Set up environment variables
cp .env.example .env
# Edit .env with your API keys
```
See docs/INSTALLATION.md for detailed setup.
Usage Guide
Creating Agents
```python
agent = manager.create_agent(
    agent_name="research_agent",
    agent_details=AgentDetails(
        capabilities=["research", "analysis"],
        description="Research specialist",
        style="detailed"
    ),
    tools=[]  # Add LangChain tools here
)
```
Executing Agent
```python
# Full execution
result = await agent.initiate_agent(
    query="Explain quantum computing",
    passed_from="user"
)
print(result["answer"])
print(f"Reasoning: {result['reasoning']}")
print(f"Satisfied: {result['satisfied']}")
```
Streaming Responses
```python
async for state in agent.initiate_agent_astream(
    query="Tell me a story",
    passed_from="user"
):
    node_name, state_dict = state
    state_value = next(iter(state_dict.values()))
    # Access state information
    if state_value.get("answer"):
        print(state_value["answer"])
```
See docs/USAGE_GUIDE.md for comprehensive examples.
Memory System
Persistent Memory Setup
```python
from masai.AgentManager import AgentManager, AgentDetails
from masai.Memory.LongTermMemory import RedisConfig
from langchain_openai import OpenAIEmbeddings

# Configure memory backend
redis_config = RedisConfig(
    redis_url="redis://localhost:6379",
    index_name="masai_vectors",
    vector_size=1536,
    embedding_model=OpenAIEmbeddings(model="text-embedding-3-small")
)

# Create manager with memory config
manager = AgentManager(
    user_id="user_123",
    model_config_path="model_config.json",
    memory_config=redis_config  # Pass to AgentManager
)

# Create agent with persistent memory enabled
agent = manager.create_agent(
    agent_name="assistant",
    agent_details=AgentDetails(
        capabilities=["reasoning"],
        description="Assistant"
    ),
    persist_memory=True,  # Enable persistence
    long_context=True,
    long_context_order=5
)
```
Memory Operations
```python
from masai.schema.Document import Document

# Access long-term memory through the agent's LLM components

# Save memories
await agent.llm_router.long_term_memory.save(
    user_id="user_123",
    documents=[Document(page_content="User likes Python")]
)

# Search memories
memories = await agent.llm_router.long_term_memory.search(
    user_id="user_123",
    query="What does user like?",
    k=5
)
```
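Under the hood, semantic search of this kind typically ranks stored vectors by cosine similarity to the query embedding. A self-contained sketch (illustrative only, not the actual RediSearch/Qdrant code path):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k(query_vec: list[float], store: list[tuple[str, list[float]]], k: int = 2) -> list[str]:
    """store: (text, vector) pairs; return the k texts most similar to the query."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

store = [
    ("likes Python", [1.0, 0.0]),
    ("lives in Paris", [0.0, 1.0]),
    ("codes daily", [0.9, 0.1]),
]
print(top_k([1.0, 0.0], store, k=2))  # ['likes Python', 'codes daily']
```

Production backends replace the linear scan with an approximate index (HNSW and similar), but the ranking criterion is the same.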
See docs/MEMORY_SYSTEM.md for detailed memory docs.
Configuration
Agent Configuration Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| agent_name | str | Required | Unique agent identifier |
| agent_details | AgentDetails | Required | Agent capabilities and style |
| tools | list | [] | LangChain tools |
| memory_order | int | 10 | Messages to keep in memory |
| long_context | bool | True | Enable long-context mode |
| long_context_order | int | 20 | Context summarization threshold |
| persist_memory | bool | None | Enable persistent storage (requires memory_config in AgentManager) |
| plan | bool | False | Enable planner component |
| temperature | float | 0.2 | Sampling temperature |
See docs/CONFIGURATION.md for all options.
Advanced Topics
Custom Tool Integration
```python
from langchain.tools import tool

@tool
def calculate(expression: str) -> str:
    """Calculate mathematical expressions"""
    # Demo only: eval() executes arbitrary code - never use it on untrusted input
    return str(eval(expression))

agent = manager.create_agent(
    agent_name="calculator",
    tools=[calculate]
)
```
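Because `eval` runs arbitrary code, a hardened variant is worth considering for real deployments. One option (an illustrative alternative, not part of MASAI) is to walk the parsed AST and allow only arithmetic:

```python
import ast
import operator

# Whitelisted binary operators for basic arithmetic
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def safe_calculate(expression: str) -> str:
    """Evaluate basic arithmetic without eval(), rejecting anything else."""
    def visit(node: ast.AST):
        if isinstance(node, ast.Expression):
            return visit(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](visit(node.left), visit(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError(f"Unsupported expression: {expression!r}")
    return str(visit(ast.parse(expression, mode="eval")))

print(safe_calculate("2 + 3 * 4"))  # 14
```

The same function body can be wrapped with `@tool` in place of the `eval` version above.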
Multi-Agent Orchestration
```python
from masai.MultiAgents.MultiAgent import MultiAgentSystem, SupervisorConfig

# Decentralized MAS (peer-to-peer)
mas = MultiAgentSystem(agentManager=manager)
result = await mas.initiate_decentralized_mas(
    query="Complex task",
    set_entry_agent=agent1,
    memory_order=3
)

# Hierarchical MAS (supervisor-based)
supervisor_config = SupervisorConfig(
    model_name="gpt-4o",
    temperature=0.7,
    model_category="openai",
    memory_order=20,
    memory=True,
    extra_context={}
)
mas_hierarchical = MultiAgentSystem(
    agentManager=manager,
    supervisor_config=supervisor_config
)
result = await mas_hierarchical.initiate_hierarchical_mas(query="Complex task")
```
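Conceptually, the hierarchical workflow is a supervisor loop: pick an agent, delegate, evaluate, repeat. A toy sketch with plain callables standing in for the supervisor's LLM calls (all names here are illustrative, not MASAI APIs):

```python
def hierarchical_step(query: str, agents: dict, choose, evaluate):
    """One supervisor round: pick an agent by capability, delegate, retry until satisfied.

    `choose` and `evaluate` stand in for supervisor LLM calls.
    """
    name = None
    answer = None
    for _ in range(3):  # bounded retries so an unsatisfied supervisor cannot loop forever
        name = choose(query, list(agents))
        answer = agents[name](query)
        if evaluate(answer):
            break
    return name, answer

agents = {
    "research": lambda q: f"research: {q}",
    "math": lambda q: f"math: {q}",
}
name, answer = hierarchical_step(
    "sum the data",
    agents,
    choose=lambda q, names: "math" if "sum" in q else "research",
    evaluate=lambda a: True,
)
print(name, "->", answer)  # math -> math: sum the data
```

The real supervisor also carries memory and extra context between rounds (see SupervisorConfig above); this sketch only shows the control flow.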
See docs/ADVANCED.md for advanced patterns.
API Reference
Core Classes
- AgentManager: Orchestrates agent creation and management
- Agent: Router-Evaluator-Reflector architecture
- MASGenerativeModel: LLM with memory management
- LongTermMemory: Persistent memory interface
- RedisConfig/QdrantConfig: Backend configuration
See docs/API_REFERENCE.md for complete API.
Troubleshooting
Redis Connection Refused
```shell
redis-server
# or
docker run -d -p 6379:6379 redis:latest
```
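Before digging into MASAI configuration, it can help to confirm the Redis port is reachable at all. A small stdlib-only check (the `redis_reachable` helper is illustrative, not a MASAI utility):

```python
import socket

def redis_reachable(host: str = "localhost", port: int = 6379, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to the Redis port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if not redis_reachable():
    print("Redis is not reachable - start it before enabling persistent memory")
```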
OpenAI API Key Not Found
```shell
export OPENAI_API_KEY="your-key-here"
```
Memory Not Being Retrieved
```python
# Verify context overflow (access through LLM component)
print(f"Summaries: {len(agent.llm_router.context_summaries)}")
print(f"Long context order: {agent.llm_router.long_context_order}")
print(f"Persist memory: {agent.llm_router.persist_memory}")
```
See docs/TROUBLESHOOTING.md for more solutions.
Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
License
MIT License - see LICENSE for details.
Support
- 📖 Documentation
- 🐛 Issues
- 💬 Discussions
Last Updated: October 31, 2025