
Multi-Agent System Framework for AI Agents with Vanilla SDK Wrappers and Custom LangGraph


MASAI - Multi-Agent System AI Framework

A powerful, production-ready framework for building multi-agent AI systems with advanced features like persistent memory, long-context management, and sophisticated agent orchestration.


Please star this project if you find it useful!

🆕 New Documentation

We've added comprehensive guides to help you get the most out of MASAI:

  • Model Parameters Guide - Complete reference for all supported models (Gemini, OpenAI, Anthropic) with ALL parameters, examples, and best practices
  • Tools Guide - How to define tools, use them, implement Redis caching, and integrate with agents
  • Singular Agent Guide - Complete guide for single agent architecture, execution, memory, and tools
  • Multi-Agent System Guide - Comprehensive guide for decentralized and hierarchical multi-agent coordination
  • OMAN Guide - Orchestrated Multi-Agent Network for enterprise-level multi-domain systems

📋 Quick Navigation

Getting Started

  • Quick Start: Get started in 5 minutes
  • Installation: Setup instructions and requirements
  • Configuration: Configuration options and setup

Core Concepts

  • Framework Overview: Architecture and design principles
  • Model Parameters (NEW!): Complete model configuration guide
  • Tools Guide (NEW!): Tool definition, usage, and caching
  • Memory System: Persistent memory and long-context management

Agent Systems

  • Agent Manager: Detailed AgentManager API and usage
  • Singular Agent Guide (NEW!): Single agent architecture and usage
  • Multi-Agent System Guide (NEW!): Decentralized and hierarchical MAS
  • OMAN Guide (NEW!): Orchestrated Multi-Agent Network

Advanced Topics

  • Advanced Usage: Expert patterns and techniques
  • Multi-Agent Orchestration: Complex multi-agent workflows
  • LangChain Agnostic Guide: Using MASAI without LangChain

Reference

  • API Reference: Complete API documentation
  • Troubleshooting: Common issues and solutions
  • Usage Guide: Common usage patterns

Quick Start

Installation

pip install masai-framework

Basic Usage

from masai.AgentManager import AgentManager, AgentDetails
import asyncio

# Create agent manager
manager = AgentManager(user_id="user_123")

# Create an agent
agent = manager.create_agent(
    agent_name="assistant",
    tools=[],  # Add LangChain tools here
    agent_details=AgentDetails(
        capabilities=["analysis", "reasoning"],
        description="Helpful assistant",
        style="concise"
    )
)

# Use the agent - full execution
# (the awaits below must run inside an async function, e.g. driven by asyncio.run)
result = await agent.initiate_agent(
    query="What is 2+2?",
    passed_from="user"
)
print(result["answer"])

# Or use streaming for real-time updates
async for state in agent.initiate_agent_astream(
    query="What is 2+2?",
    passed_from="user"
):
    node_name, state_dict = state
    state_value = next(iter(state_dict.values()))  # the dict has a single key: the node name
    print(f"Node: {state_value['current_node']}")
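The streaming loop unpacks each yielded (node_name, {node_name: state}) pair; the pattern can be sketched standalone (hypothetical state shape for illustration, not the MASAI API):

```python
# Hypothetical stream items shaped like the (node_name, {node_name: state}) pairs above
stream = [
    ("router", {"router": {"current_node": "router", "answer": None}}),
    ("evaluator", {"evaluator": {"current_node": "evaluator", "answer": "4"}}),
]

for node_name, state_dict in stream:
    # Each dict has a single key (the node name), so take its only value
    state_value = next(iter(state_dict.values()))
    print(f"Node: {state_value['current_node']}, answer: {state_value['answer']}")
```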

See docs/QUICK_START.md for detailed examples.


Core Features

🧠 Multi-Agent Architecture

  • Router-Evaluator-Reflector Pattern: Sophisticated agent decision-making
  • Agent Orchestration: Coordinate multiple agents for complex tasks
  • Tool Integration: Seamless integration with LangChain tools
  • Streaming Support: Real-time response streaming

💾 Persistent Memory

  • Redis Backend: Fast vector storage with RediSearch
  • Qdrant Backend: Distributed vector database support
  • User Isolation: Multi-user support with automatic filtering
  • Deduplication: Automatic duplicate detection and merging
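MASAI handles deduplication internally; the underlying idea can be illustrated with a content-hash sketch (toy code, not the library's implementation):

```python
import hashlib

def dedupe(documents):
    """Keep the first occurrence of each distinct document, keyed by content hash."""
    seen = set()
    unique = []
    for doc in documents:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = ["User likes Python", "User likes Python", "User dislikes Java"]
print(dedupe(docs))  # ['User likes Python', 'User dislikes Java']
```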

🔄 Long-Context Management

  • Context Summarization: Automatic summarization of long conversations
  • Memory Overflow Handling: Intelligent flushing to persistent storage
  • Semantic Search: Find relevant memories using embeddings
  • Category Filtering: Organize memories by categories
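Semantic search boils down to ranking stored embeddings by similarity to a query embedding; a minimal cosine-similarity sketch (toy vectors standing in for a real embedding model):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings" for stored memories and a query
memories = {
    "User likes Python": [0.9, 0.1, 0.0],
    "Meeting at 3pm":    [0.0, 0.2, 0.9],
}
query_vec = [0.8, 0.2, 0.1]

# Rank memories by similarity to the query, most relevant first
ranked = sorted(memories, key=lambda m: cosine(memories[m], query_vec), reverse=True)
print(ranked[0])  # User likes Python
```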

🎯 Flexible Configuration

  • Multiple LLM Providers: OpenAI, Google Gemini, Anthropic Claude
  • Custom Embeddings: Support for any embedding model
  • Scalable Parameters: Configure all model parameters via config
  • Component Customization: Override any component behavior

🤝 Multi Agent Orchestration

  • Sequential Workflow: Fixed agent pipeline
  • Hierarchical Workflow: Supervisor-based delegation
  • Decentralized Workflow: Peer-to-peer collaboration
  • Orchestrated Multi-Agent Network (OMAN): Coordinate multiple MAS instances
  • Data & Context Management: Shared memory, context propagation, isolation
  • See docs/MULTI_AGENT_ORCHESTRATION.md for details
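Conceptually, a sequential workflow threads one agent's output into the next; a toy sketch with plain functions standing in for agents (not the MASAI API):

```python
# Toy agents: each maps input text to output text
def researcher(query):
    return f"facts about: {query}"

def writer(facts):
    return f"report based on ({facts})"

def run_sequential(query, pipeline):
    """Fixed pipeline: each agent consumes the previous agent's output."""
    result = query
    for agent in pipeline:
        result = agent(result)
    return result

print(run_sequential("quantum computing", [researcher, writer]))
# report based on (facts about: quantum computing)
```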


Architecture

System Overview

User Application
       ↓
AgentManager (Orchestrator)
       ↓
Agent (Router-Evaluator-Reflector)
       ├─ MASGenerativeModel (LLM + Memory)
       ├─ Tool Executor
       └─ State Manager
       ↓
Memory System
       ├─ LongTermMemory
       ├─ Redis/Qdrant Backend
       └─ Embedding Model

See docs/ARCHITECTURE.md for detailed architecture.


Installation

Requirements

  • Python 3.8+
  • Redis (for persistent memory) or Qdrant
  • API keys for LLM providers (OpenAI, Google, etc.)

Setup

# Clone repository
git clone https://github.com/shaunthecomputerscientist/mas-ai.git
cd mas-ai

# Install dependencies
pip install -r requirements.txt

# Set up environment variables
cp .env.example .env
# Edit .env with your API keys

See docs/INSTALLATION.md for detailed setup.


Usage Guide

Creating Agents

agent = manager.create_agent(
    agent_name="research_agent",
    agent_details=AgentDetails(
        capabilities=["research", "analysis"],
        description="Research specialist",
        style="detailed"
    ),
    tools=[]  # Add LangChain tools here
)

Executing Agent

# Full execution
result = await agent.initiate_agent(
    query="Explain quantum computing",
    passed_from="user"
)
print(result["answer"])
print(f"Reasoning: {result['reasoning']}")
print(f"Satisfied: {result['satisfied']}")

Streaming Responses

async for state in agent.initiate_agent_astream(
    query="Tell me a story",
    passed_from="user"
):
    node_name, state_dict = state
    state_value = next(iter(state_dict.values()))  # the dict has a single key: the node name

    # Access state information
    if state_value.get("answer"):
        print(state_value["answer"])

See docs/USAGE_GUIDE.md for comprehensive examples.


Memory System

Persistent Memory Setup

from masai.AgentManager import AgentManager
from masai.Memory.LongTermMemory import RedisConfig
from langchain_openai import OpenAIEmbeddings

# Configure memory backend
redis_config = RedisConfig(
    redis_url="redis://localhost:6379",
    index_name="masai_vectors",
    vector_size=1536,
    embedding_model=OpenAIEmbeddings(model="text-embedding-3-small")
)

# Create manager with memory config
manager = AgentManager(
    user_id="user_123",
    model_config_path="model_config.json",
    memory_config=redis_config  # Pass to AgentManager
)

# Create agent with persistent memory enabled
agent = manager.create_agent(
    agent_name="assistant",
    agent_details=AgentDetails(
        capabilities=["reasoning"],
        description="Assistant"
    ),
    persist_memory=True,  # Enable persistence
    long_context=True,
    long_context_order=5
)

Memory Operations

# Access long-term memory through agent's LLM components
from masai.schema.Document import Document

# Save memories
await agent.llm_router.long_term_memory.save(
    user_id="user_123",
    documents=[Document(page_content="User likes Python")]
)

# Search memories
memories = await agent.llm_router.long_term_memory.search(
    user_id="user_123",
    query="What does user like?",
    k=5
)

See docs/MEMORY_SYSTEM.md for detailed memory docs.


Configuration

Agent Configuration Parameters

Parameter           Type          Default   Description
agent_name          str           Required  Unique agent identifier
agent_details       AgentDetails  Required  Agent capabilities and style
tools               list          []        LangChain tools
memory_order        int           10        Messages to keep in memory
long_context        bool          True      Enable long-context mode
long_context_order  int           20        Context summarization threshold
persist_memory      bool          None      Enable persistent storage (requires memory_config in AgentManager)
plan                bool          False     Enable planner component
temperature         float         0.2       Sampling temperature

See docs/CONFIGURATION.md for all options.
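AgentManager also takes a model_config_path (used in the Memory System example above). The file's exact schema lives in docs/MODEL_PARAMETERS.md; purely as a hypothetical sketch, reusing the field names SupervisorConfig exposes (model_name, model_category, temperature), per-component settings might look like:

```json
{
  "router": {
    "model_name": "gpt-4o",
    "model_category": "openai",
    "temperature": 0.2
  },
  "evaluator": {
    "model_name": "gemini-1.5-pro",
    "model_category": "gemini",
    "temperature": 0.2
  }
}
```

Consult the Model Parameters Guide for the actual keys your installed version expects.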


Advanced Topics

Custom Tool Integration

from langchain.tools import tool

@tool
def calculate(expression: str) -> str:
    """Calculate mathematical expressions"""
    # Demo only: eval executes arbitrary code; use a safe math parser for untrusted input
    return str(eval(expression))

agent = manager.create_agent(
    agent_name="calculator",
    tools=[calculate]
)

Multi-Agent Orchestration

from masai.MultiAgents.MultiAgent import MultiAgentSystem, SupervisorConfig

# Decentralized MAS (peer-to-peer)
mas = MultiAgentSystem(agentManager=manager)
result = await mas.initiate_decentralized_mas(
    query="Complex task",
    set_entry_agent=agent1,
    memory_order=3
)

# Hierarchical MAS (supervisor-based)
supervisor_config = SupervisorConfig(
    model_name="gpt-4o",
    temperature=0.7,
    model_category="openai",
    memory_order=20,
    memory=True,
    extra_context={}
)

mas_hierarchical = MultiAgentSystem(
    agentManager=manager,
    supervisor_config=supervisor_config
)
result = await mas_hierarchical.initiate_hierarchical_mas(query="Complex task")

See docs/ADVANCED.md for advanced patterns.


API Reference

Core Classes

  • AgentManager: Orchestrates agent creation and management
  • Agent: Router-Evaluator-Reflector architecture
  • MASGenerativeModel: LLM with memory management
  • LongTermMemory: Persistent memory interface
  • RedisConfig/QdrantConfig: Backend configuration

See docs/API_REFERENCE.md for complete API.


Troubleshooting

Redis Connection Refused

redis-server
# or
docker run -d -p 6379:6379 redis:latest

OpenAI API Key Not Found

export OPENAI_API_KEY="your-key-here"

Memory Not Being Retrieved

# Verify context overflow (access through LLM component)
print(f"Summaries: {len(agent.llm_router.context_summaries)}")
print(f"Long context order: {agent.llm_router.long_context_order}")
print(f"Persist memory: {agent.llm_router.persist_memory}")

See docs/TROUBLESHOOTING.md for more solutions.


Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

License

MIT License - see LICENSE for details.



Last Updated: October 31, 2025
