
Universal MCP Client with multi-transport support and LLM-powered tool routing


🚀 MCPOmni Connect - Complete AI Platform: OmniAgent + Universal MCP Client


MCPOmni Connect is the complete AI platform that evolved from a world-class MCP client into a revolutionary ecosystem. It now includes OmniAgent - the ultimate AI agent builder born from MCPOmni Connect's powerful foundation. Build production-ready AI agents, use the advanced MCP CLI, or combine both for maximum power.

🚀 Quick Start (2 minutes)

New to MCPOmni Connect? Get started in 2 minutes:

Step 1: Install

# Install with uv (recommended)
uv add mcpomni-connect

# Or with pip
pip install mcpomni-connect

Step 2: Set API Key

# Create .env file with your LLM API key
echo "LLM_API_KEY=your_openai_api_key_here" > .env

Step 3: Run Examples

# Try the basic CLI example
python examples/basic.py

# Or try OmniAgent with custom tools
python examples/omni_agent_example.py

# Or use the advanced MCP CLI
python run.py

What Can You Build?

  • Custom AI Agents: Register your Python functions as AI tools
  • MCP Integration: Connect to any Model Context Protocol server
  • Smart Memory: Vector databases for long-term AI memory
  • Background Agents: Self-flying autonomous task execution

โžก๏ธ Next: Check out Examples or jump to Configuration Guide


🌟 Complete AI Platform - Two Powerful Systems:

1. 🤖 OmniAgent System (Revolutionary AI Agent Builder)

Born from MCPOmni Connect's foundation - create intelligent, autonomous agents with:

  • 🛠️ Local Tools System - Register your Python functions as AI tools
  • 🚁 Self-Flying Background Agents - Autonomous task execution
  • 🧠 Multi-Tier Memory - Vector databases, Redis, PostgreSQL, MySQL, SQLite
  • 📡 Real-Time Events - Live monitoring and streaming
  • 🔧 MCP + Local Tool Orchestration - Seamlessly combine both tool types

2. 🔌 Universal MCP Client (World-Class CLI)

Advanced command-line interface for connecting to any Model Context Protocol server with:

  • 🌐 Multi-Protocol Support - stdio, SSE, HTTP, Docker, NPX transports
  • 🔐 Authentication - OAuth 2.0, Bearer tokens, custom headers
  • 🧠 Advanced Memory - Redis, Database, Vector storage with intelligent retrieval
  • 📡 Event Streaming - Real-time monitoring and debugging
  • 🤖 Agentic Modes - ReAct, Orchestrator, and Interactive chat modes

🎯 Perfect for: Developers who want the complete AI ecosystem - build custom agents AND have world-class MCP connectivity.

🚀 NEW: OmniAgent - Build Your Own AI Agents!

🌟 Introducing OmniAgent - A revolutionary AI agent system that brings plug-and-play intelligence to your applications!

✅ OmniAgent Revolutionary Capabilities:

  • 🧠 Multi-tier memory management with vector search and semantic retrieval
  • 🛠️ XML-based reasoning with strict tool formatting for reliable execution
  • 🔧 Advanced tool orchestration - Seamlessly combine MCP server tools + local tools
  • 🚁 Self-flying background agents with autonomous task execution
  • 📡 Real-time event streaming for monitoring and debugging
  • 🏗️ Production-ready infrastructure with error handling and retry logic
  • ⚡ Plug-and-play intelligence - No complex setup required!

🔥 LOCAL TOOLS SYSTEM (MAJOR FEATURE!)

  • 🎯 Easy Tool Registration: @tool_registry.register_tool("tool_name")
  • 🔌 Custom Tool Creation: Register your own Python functions as AI tools
  • 🔄 Runtime Tool Management: Add/remove tools dynamically
  • ⚙️ Type-Safe Interface: Automatic parameter validation and documentation
  • 📖 Rich Examples: Study run_omni_agent.py for 12+ EXAMPLE tool registration patterns
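The mechanism behind @tool_registry.register_tool can be illustrated with a simplified stand-in. This is only a sketch of how a decorator-based registry generally works, not the library's actual ToolRegistry implementation; the class and tool names here are hypothetical:

```python
import inspect

class SimpleToolRegistry:
    """Simplified stand-in for a decorator-based tool registry."""

    def __init__(self):
        self._tools = {}

    def register_tool(self, name):
        def decorator(func):
            # Keep the callable together with its signature and docstring
            # so an agent can build a tool description from them.
            self._tools[name] = {
                "func": func,
                "signature": str(inspect.signature(func)),
                "description": inspect.getdoc(func) or "",
            }
            return func
        return decorator

    def call(self, name, **kwargs):
        return self._tools[name]["func"](**kwargs)

registry = SimpleToolRegistry()

@registry.register_tool("add_numbers")
def add_numbers(a: float, b: float) -> str:
    """Add two numbers."""
    return f"Sum: {a + b}"

print(registry.call("add_numbers", a=2, b=3))  # Sum: 5
```

Because the decorator returns the original function unchanged, registered tools remain ordinary Python functions you can also call directly.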

💡 What Can You Build? (See Real Examples)

🤖 Custom AI Agents

# Complete OmniAgent demo with custom tools
python examples/omni_agent_example.py

# Agent with existing memory integration
python examples/agent_with_existing_memory.py

๐Ÿš Background Automation

# Self-flying background agents
python examples/background_agent_example.py

🔌 MCP Server Integration

# Basic MCP client usage
python examples/basic.py

# Advanced MCP tool patterns
python examples/run_omni_agent.py

๐ŸŒ Web Applications

# FastAPI web server with OmniAgent
python examples/web_server.py
# Open http://localhost:8000

# Or FastAPI implementation example
python examples/fast_api_iml.py

🧠 Vector Database Memory

# Advanced vector database examples
python examples/vector_db_examples.py

🔧 LLM Provider Examples

# Different LLM providers
python examples/anthropic.py      # Anthropic Claude
python examples/groq.py           # Groq models
python examples/azure.py          # Azure OpenAI
python examples/ollama.py         # Local Ollama models

✨ Key Features

🤖 Intelligent Agent System

  • ReAct Agent Mode
    • Autonomous task execution with reasoning and action cycles
    • Independent decision-making without human intervention
    • Advanced problem-solving through iterative reasoning
    • Self-guided tool selection and execution
    • Complex task decomposition and handling
  • Orchestrator Agent Mode
    • Strategic multi-step task planning and execution
    • Intelligent coordination across multiple MCP servers
    • Dynamic agent delegation and communication
    • Parallel task execution when possible
    • Sophisticated workflow management with real-time progress monitoring
  • Interactive Chat Mode
    • Human-in-the-loop task execution with approval workflows
    • Step-by-step guidance and explanations
    • Educational mode for understanding AI decision processes

🔌 Universal Connectivity

  • Multi-Protocol Support
    • Native support for stdio transport
    • Server-Sent Events (SSE) for real-time communication
    • Streamable HTTP for efficient data streaming
    • Docker container integration
    • NPX package execution
    • Extensible transport layer for future protocols
  • Authentication Support
    • OAuth 2.0 authentication flow
    • Bearer token authentication
    • Custom header support
    • Secure credential management
  • Agentic Operation Modes
    • Seamless switching between chat, autonomous, and orchestrator modes
    • Context-aware mode selection based on task complexity
    • Persistent state management across mode transitions

🧠 AI-Powered Intelligence

  • Unified LLM Integration with LiteLLM
    • Single unified interface for all AI providers
    • Support for 100+ models across providers including:
      • OpenAI (GPT-4, GPT-3.5, etc.)
      • Anthropic (Claude 3.5 Sonnet, Claude 3 Haiku, etc.)
      • Google (Gemini Pro, Gemini Flash, etc.)
      • Groq (Llama, Mixtral, Gemma, etc.)
      • DeepSeek (DeepSeek-V3, DeepSeek-Coder, etc.)
      • Azure OpenAI
      • OpenRouter (access to 200+ models)
      • Ollama (local models)
    • Simplified configuration and reduced complexity
    • Dynamic system prompts based on available capabilities
    • Intelligent context management
    • Automatic tool selection and chaining
    • Universal model support through custom ReAct Agent
      • Handles models without native function calling
      • Dynamic function execution based on user requests
      • Intelligent tool orchestration
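The "handles models without native function calling" point above works by prompting the model to emit tool calls in a strict structured format that the client then parses and executes. A minimal sketch of that parsing loop follows; the XML tag names and the TOOLS table are illustrative assumptions, not MCPOmni Connect's actual wire format:

```python
import re

# Hypothetical tool table; in practice tools come from MCP servers
# or the local tool registry.
TOOLS = {
    "get_time": lambda: "12:00",
    "echo": lambda text="": text,
}

TOOL_CALL_RE = re.compile(
    r"<tool_call>\s*<name>(?P<name>\w+)</name>\s*"
    r"(?:<args>(?P<args>.*?)</args>)?\s*</tool_call>",
    re.DOTALL,
)

def execute_tool_calls(model_output: str) -> list[str]:
    """Parse structured tool calls out of raw model text and run them."""
    results = []
    for match in TOOL_CALL_RE.finditer(model_output):
        name = match.group("name")
        if name not in TOOLS:
            results.append(f"error: unknown tool {name}")
            continue
        args = (match.group("args") or "").strip()
        kwargs = {"text": args} if args else {}
        results.append(str(TOOLS[name](**kwargs)))
    return results

out = execute_tool_calls("I will check. <tool_call><name>get_time</name></tool_call>")
print(out)  # ['12:00']
```

The results would then be fed back to the model as observations, closing one reasoning-action cycle of the ReAct loop.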

🔒 Security & Privacy

  • Explicit User Control
    • All tool executions require explicit user approval in chat mode
    • Clear explanation of tool actions before execution
    • Transparent disclosure of data access and usage
  • Data Protection
    • Strict data access controls
    • Server-specific data isolation
    • No unauthorized data exposure
  • Privacy-First Approach
    • Minimal data collection
    • User data remains on specified servers
    • No cross-server data sharing without consent
  • Secure Communication
    • Encrypted transport protocols
    • Secure API key management
    • Environment variable protection

💾 Advanced Memory Management (UPDATED!)

  • Multi-Backend Memory Storage
    • In-Memory: Fast development storage
    • Redis: Persistent memory with real-time access
    • Database: PostgreSQL, MySQL, SQLite support
    • File Storage: Save/load conversation history
    • Runtime switching: /memory_store:redis, /memory_store:database:postgresql://user:pass@host/db
  • Multi-Tier Memory Strategy
    • Short-term Memory: Sliding window or token budget strategies
    • Long-term Memory: Vector database storage for semantic retrieval
    • Episodic Memory: Context-aware conversation history
    • Runtime configuration: /memory_mode:sliding_window:5, /memory_mode:token_budget:3000
  • Vector Database Integration (NEW!)
    • Multiple Provider Support: Choose your preferred vector database
      • ChromaDB: Full support for local, remote, and cloud modes
      • Qdrant: Production-grade remote vector search
    • Smart Provider Selection via OMNI_MEMORY_PROVIDER:
      • chroma-local: Local storage (default, automatic fallback)
      • chroma-remote: Remote ChromaDB server
      • chroma-cloud: ChromaDB Cloud service
      • qdrant-remote: Remote Qdrant server
    • Automatic Failover: If remote connections fail, safely falls back to ChromaDB local
    • Semantic Search: Intelligent context retrieval across conversations
    • Enable: Set ENABLE_VECTOR_DB=true for long-term and episodic memory
  • Real-Time Event Streaming (NEW!)
    • In-Memory Events: Fast development event processing
    • Redis Streams: Persistent event storage and streaming
    • Runtime switching: /event_store:redis_stream, /event_store:in_memory
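The two short-term memory strategies above (sliding window and token budget) can be sketched roughly as follows. This is an illustration only, not the library's implementation; a real token budget would count tokens with the model's tokenizer rather than whitespace splitting:

```python
from collections import deque

class SlidingWindowMemory:
    """Keep only the last N messages (mirrors /memory_mode:sliding_window:N)."""

    def __init__(self, window_size: int):
        self.messages = deque(maxlen=window_size)  # old messages drop off automatically

    def add(self, role: str, content: str):
        self.messages.append({"role": role, "content": content})

    def context(self):
        return list(self.messages)

class TokenBudgetMemory:
    """Drop the oldest messages once a rough token budget is exceeded
    (mirrors /memory_mode:token_budget:N)."""

    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.messages = []

    def add(self, role: str, content: str):
        self.messages.append({"role": role, "content": content})
        # Crude whitespace token count stands in for a real tokenizer.
        while self._total_tokens() > self.max_tokens and len(self.messages) > 1:
            self.messages.pop(0)

    def _total_tokens(self):
        return sum(len(m["content"].split()) for m in self.messages)

mem = SlidingWindowMemory(window_size=2)
for i in range(4):
    mem.add("user", f"message {i}")
print([m["content"] for m in mem.context()])  # ['message 2', 'message 3']
```

Long-term and episodic memory sit on top of this: messages that fall out of the short-term window can be embedded and stored in the vector database for later semantic retrieval.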

💬 Prompt Management

  • Advanced Prompt Handling
    • Dynamic prompt discovery across servers
    • Flexible argument parsing (JSON and key-value formats)
    • Cross-server prompt coordination
    • Intelligent prompt validation
    • Context-aware prompt execution
    • Real-time prompt responses
    • Support for complex nested arguments
    • Automatic type conversion and validation
  • Client-Side Sampling Support
    • Dynamic sampling configuration from client
    • Flexible LLM response generation
    • Customizable sampling parameters
    • Real-time sampling adjustments
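The "flexible argument parsing (JSON and key-value formats)" mentioned above can be sketched like this. The helper name parse_prompt_args is hypothetical, not part of the package API; it only illustrates accepting both input styles:

```python
import json

def parse_prompt_args(raw: str) -> dict:
    """Accept either a JSON object or comma-separated key=value pairs."""
    raw = raw.strip()
    if raw.startswith("{"):
        # JSON format: parse directly.
        return json.loads(raw)
    # Key-value format: split on commas, then on the first '='.
    args = {}
    for pair in filter(None, (p.strip() for p in raw.split(","))):
        key, _, value = pair.partition("=")
        args[key.strip()] = value.strip()
    return args

print(parse_prompt_args('{"city": "Lagos", "units": "metric"}'))
print(parse_prompt_args("city=Lagos, units=metric"))
# both produce {'city': 'Lagos', 'units': 'metric'}
```

Either spelling resolves to the same argument dictionary before the prompt is sent to the server, which is what makes the two formats interchangeable at the CLI.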

๐Ÿ› ๏ธ Tool Orchestration

  • Dynamic Tool Discovery & Management
    • Automatic tool capability detection
    • Cross-server tool coordination
    • Intelligent tool selection based on context
    • Real-time tool availability updates

📦 Resource Management

  • Universal Resource Access
    • Cross-server resource discovery
    • Unified resource addressing
    • Automatic resource type detection
    • Smart content summarization

🔄 Server Management

  • Advanced Server Handling
    • Multiple simultaneous server connections
    • Automatic server health monitoring
    • Graceful connection management
    • Dynamic capability updates
    • Flexible authentication methods
    • Runtime server configuration updates

๐Ÿ—๏ธ Architecture

Core Components

MCPOmni Connect Platform
├── 🤖 OmniAgent System (Revolutionary Agent Builder)
│   ├── Local Tools Registry
│   ├── Background Agent Manager
│   ├── Custom Agent Creation
│   └── Agent Orchestration Engine
├── 🔌 Universal MCP Client (World-Class CLI)
│   ├── Transport Layer (stdio, SSE, HTTP, Docker, NPX)
│   ├── Multi-Server Orchestration
│   ├── Authentication & Security
│   └── Connection Lifecycle Management
├── 🧠 Shared Memory System (Both Systems)
│   ├── Multi-Backend Storage (Redis, DB, In-Memory)
│   ├── Vector Database Integration (Qdrant, ChromaDB)
│   ├── Memory Strategies (Sliding Window, Token Budget)
│   └── Session Management
├── 📡 Event System (Both Systems)
│   ├── In-Memory Event Processing
│   ├── Redis Streams for Persistence
│   └── Real-Time Event Monitoring
├── 🛠️ Tool Management (Both Systems)
│   ├── Dynamic Tool Discovery
│   ├── Cross-Server Tool Routing
│   ├── Local Python Tool Registration
│   └── Tool Execution Engine
└── 🤖 AI Integration (Both Systems)
    ├── LiteLLM (100+ Models)
    ├── Context Management
    ├── ReAct Agent Processing
    └── Response Generation

🚀 Getting Started

✅ Minimal Setup (Just Python + API Key)

Required:

  • Python 3.10+
  • LLM API key (OpenAI, Anthropic, Groq, etc.)

Optional (for advanced features):

  • Redis (persistent memory)
  • Vector DB (ChromaDB auto-installed, Qdrant for production)
  • Database (PostgreSQL/MySQL/SQLite)

📦 Installation

# Option 1: UV (recommended - faster)
uv add mcpomni-connect

# Option 2: Pip (standard)
pip install mcpomni-connect

⚡ Quick Configuration

Minimal setup (get started immediately):

# Just set your API key - that's it!
echo "LLM_API_KEY=your_api_key_here" > .env

Advanced setup (optional features):

# Enable vector memory (ChromaDB local - auto-configured)
echo "ENABLE_VECTOR_DB=true" >> .env

# Or connect to Redis for persistent memory
echo "REDIS_URL=redis://localhost:6379/0" >> .env

# Or use database for memory storage
echo "DATABASE_URL=sqlite:///mcpomni_memory.db" >> .env

🎯 Choose Your Path

Path A: Build Custom Agents (OmniAgent)

python examples/omni_agent_example.py

Path B: Advanced MCP Client (CLI)

python run.py

Path C: Web Interface

python examples/web_server.py
# Open http://localhost:8000

โš™๏ธ Configuration Guide

๐ŸŽฏ Quick Setup: Most users only need the .env file with an API key. Advanced features require additional configuration.

Configuration Overview - Two Simple Files

MCPOmni Connect uses two separate configuration files for different purposes:

1. .env File - Environment Variables

Contains sensitive information like API keys and optional settings:

# Required: Your LLM provider API key
LLM_API_KEY=your_api_key_here

# Optional: Memory Storage Configuration  
DATABASE_URL=sqlite:///mcpomni_memory.db
REDIS_URL=redis://localhost:6379/0

2. servers_config.json - Server & Agent Configuration

Contains application settings, LLM configuration, and MCP server connections:

{
  "AgentConfig": {
    "tool_call_timeout": 30,
    "max_steps": 15,
    "request_limit": 1000,
    "total_tokens_limit": 100000
  },
  "LLM": {
    "provider": "openai",
    "model": "gpt-4o-mini",
    "temperature": 0.7,
    "max_tokens": 5000,
    "top_p": 0.7
  },
  "mcpServers": {
    "your-server-name": {
      "transport_type": "stdio",
      "command": "uvx",
      "args": ["mcp-server-package"]
    }
  }
}

🚦 Transport Types & Authentication

MCPOmni Connect supports multiple ways to connect to MCP servers:

1. stdio - Direct Process Communication

Use when: Connecting to local MCP servers that run as separate processes

{
  "server-name": {
    "transport_type": "stdio",
    "command": "uvx",
    "args": ["mcp-server-package"]
  }
}
  • No authentication needed
  • No OAuth server started
  • Most common for local development

2. sse - Server-Sent Events

Use when: Connecting to HTTP-based MCP servers using Server-Sent Events

{
  "server-name": {
    "transport_type": "sse",
    "url": "http://your-server.com:4010/sse",
    "headers": {
      "Authorization": "Bearer your-token"
    },
    "timeout": 60,
    "sse_read_timeout": 120
  }
}
  • Uses Bearer token or custom headers
  • No OAuth server started

3. streamable_http - HTTP with Optional OAuth

Use when: Connecting to HTTP-based MCP servers with or without OAuth

Without OAuth (Bearer Token):

{
  "server-name": {
    "transport_type": "streamable_http",
    "url": "http://your-server.com:4010/mcp",
    "headers": {
      "Authorization": "Bearer your-token"
    },
    "timeout": 60
  }
}
  • Uses Bearer token or custom headers
  • No OAuth server started

With OAuth:

{
  "server-name": {
    "transport_type": "streamable_http",
    "auth": {
      "method": "oauth"
    },
    "url": "http://your-server.com:4010/mcp"
  }
}
  • OAuth callback server automatically starts on http://localhost:3000
  • This is hardcoded and cannot be changed
  • Required for OAuth flow to work properly

๐Ÿ” OAuth Server Behavior

Important: When using OAuth authentication, MCPOmni Connect automatically starts an OAuth callback server.

What You'll See:

🖥️  Started callback server on http://localhost:3000

Key Points:

  • This is normal behavior - not an error
  • The address http://localhost:3000 is hardcoded and cannot be changed
  • The server only starts when you have "auth": {"method": "oauth"} in your config
  • The server stops when the application shuts down
  • Only used for OAuth token handling - no other purpose
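For intuition, the general mechanism of an OAuth callback server can be sketched with the Python standard library. This is an illustration of the concept, not MCPOmni Connect's actual implementation; the port is a parameter here only for demonstration - the real server is fixed to port 3000:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class OAuthCallbackHandler(BaseHTTPRequestHandler):
    """Capture the ?code=... query parameter the provider redirects with."""

    received_code = None

    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        OAuthCallbackHandler.received_code = params.get("code", [None])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Authorization complete. You can close this tab.")

    def log_message(self, *args):
        pass  # keep the console quiet

def wait_for_code(port: int = 3000):
    """Block until exactly one redirect arrives, then return the auth code."""
    server = HTTPServer(("localhost", port), OAuthCallbackHandler)
    server.handle_request()  # serve a single request and return
    return OAuthCallbackHandler.received_code
```

After the code is captured it is exchanged for an access token, which is why the callback server has no purpose outside the OAuth flow and shuts down with the application.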

When OAuth is NOT Used:

  • Remove the entire "auth" section from your server configuration
  • Use "headers" with "Authorization": "Bearer token" instead
  • No OAuth server will start

๐Ÿ› ๏ธ Troubleshooting Common Issues

"Failed to connect to server: Session terminated"

Possible Causes & Solutions:

  1. Wrong Transport Type

    Problem: Your server expects 'stdio' but you configured 'streamable_http'
    Solution: Check your server's documentation for the correct transport type
    
  2. OAuth Configuration Mismatch

    Problem: Your server doesn't support OAuth but you have "auth": {"method": "oauth"}
    Solution: Remove the "auth" section entirely and use headers instead:
    
    "headers": {
        "Authorization": "Bearer your-token"
    }
    
  3. Server Not Running

    Problem: The MCP server at the specified URL is not running
    Solution: Start your MCP server first, then connect with MCPOmni Connect
    
  4. Wrong URL or Port

    Problem: URL in config doesn't match where your server is running
    Solution: Verify the server's actual address and port
    

"Started callback server on http://localhost:3000" - Is This Normal?

Yes, this is completely normal when:

  • You have "auth": {"method": "oauth"} in any server configuration
  • The OAuth server handles authentication tokens automatically
  • You cannot and should not try to change this address

If you don't want the OAuth server:

  • Remove "auth": {"method": "oauth"} from all server configurations
  • Use alternative authentication methods like Bearer tokens

📋 Configuration Examples by Use Case

Local Development (stdio)

{
  "mcpServers": {
    "local-tools": {
      "transport_type": "stdio",
      "command": "uvx",
      "args": ["mcp-server-tools"]
    }
  }
}

Remote Server with Token

{
  "mcpServers": {
    "remote-api": {
      "transport_type": "streamable_http",
      "url": "http://api.example.com:8080/mcp",
      "headers": {
        "Authorization": "Bearer abc123token"
      }
    }
  }
}

Remote Server with OAuth

{
  "mcpServers": {
    "oauth-server": {
      "transport_type": "streamable_http",
      "auth": {
        "method": "oauth"
      },
      "url": "http://oauth-server.com:8080/mcp"
    }
  }
}

Start CLI

Start the CLI - ensure your API key is exported or create a .env file:

mcpomni_connect

🧪 Testing

Running Tests

# Run all tests with verbose output
pytest tests/ -v

# Run specific test file
pytest tests/test_specific_file.py -v

# Run tests with coverage report
pytest tests/ --cov=src --cov-report=term-missing

Test Structure

tests/
├── unit/           # Unit tests for individual components

Development Quick Start

  1. Installation

    # Clone the repository
    git clone https://github.com/Abiorh001/mcp_omni_connect.git
    cd mcp_omni_connect
    
    # Create and activate virtual environment
    uv venv
    source .venv/bin/activate
    
    # Install dependencies
    uv sync
    
  2. Configuration

    # Set up environment variables
    echo "LLM_API_KEY=your_api_key_here" > .env
    
    # Configure your servers in servers_config.json
    
  3. Start Client

    uv run run.py
    

    Or:

    python run.py
    

๐Ÿง‘โ€๐Ÿ’ป Examples

Basic CLI Example

You can run the basic CLI example to interact with MCPOmni Connect directly from the terminal.

Using uv (recommended):

uv run examples/basic.py

Or using Python directly:

python examples/basic.py

🤖 OmniAgent - Create Your Own AI Agents

Build intelligent agents that combine MCP tools with local tools for powerful automation.

Basic OmniAgent Creation

from mcpomni_connect.omni_agent import OmniAgent
from mcpomni_connect.memory_store.memory_router import MemoryRouter
from mcpomni_connect.events.event_router import EventRouter
from mcpomni_connect.agents.tools.local_tools_registry import ToolRegistry

# Create local tools registry
tool_registry = ToolRegistry()

# Register your custom tools directly with the agent
@tool_registry.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
    """Calculate the area of a rectangle."""
    area = length * width
    return f"Area of rectangle ({length} x {width}): {area} square units"

@tool_registry.register_tool("analyze_text")
def analyze_text(text: str) -> str:
    """Analyze text and return word count and character count."""
    words = len(text.split())
    chars = len(text)
    return f"Analysis: {words} words, {chars} characters"

# Initialize memory store
memory_store = MemoryRouter(memory_store_type="redis")  # or "postgresql", "sqlite", "mysql"
event_router = EventRouter(event_store_type="in_memory")

# Create OmniAgent with LOCAL TOOLS + MCP TOOLS
agent = OmniAgent(
    name="my_agent",
    system_instruction="You are a helpful assistant with access to custom tools and file operations.",
    model_config={
        "provider": "openai",
        "model": "gpt-4o",
        "max_context_length": 50000,
    },
    # Your custom local tools
    local_tools=tool_registry,
    # MCP server tools  
    mcp_tools=[
        {
            "name": "filesystem",
            "transport_type": "stdio",
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home"],
        }
    ],
    memory_store=memory_store,
    event_router=event_router
)

# Now the agent can use BOTH your custom tools AND MCP tools!
# (agent.run is async - call it from inside an async function or via asyncio.run)
result = await agent.run("Calculate the area of a 10x5 rectangle, then analyze this text: 'Hello world'")
print(f"Response: {result['response']}")
print(f"Session ID: {result['session_id']}")

๐Ÿš Self-Flying Background Agents (NEW!)

Create autonomous agents that run in the background and execute tasks automatically:

from mcpomni_connect.omni_agent.background_agent.background_agent_manager import BackgroundAgentManager
from mcpomni_connect.memory_store.memory_router import MemoryRouter
from mcpomni_connect.events.event_router import EventRouter

# Initialize components
memory_store = MemoryRouter(memory_store_type="in_memory")
event_router = EventRouter(event_store_type="in_memory")

# Create background agent manager
manager = BackgroundAgentManager(
    memory_store=memory_store,
    event_router=event_router
)

# Create a self-flying background agent
agent_config = {
    "agent_id": "system_monitor",
    "system_instruction": "You are a system monitoring agent that checks system health.",
    "model_config": {
        "provider": "openai",
        "model": "gpt-4o",
        "temperature": 0.7,
    },
    "local_tools": tool_registry,  # Your tool registry
    "agent_config": {
        "max_steps": 10,
        "tool_call_timeout": 30,
    },
    "interval": 60,  # Run every 60 seconds
    "max_retries": 3,
    "retry_delay": 30,
    "task_config": {
        "query": "Check system status and report any critical issues.",
        "description": "System health monitoring task"
    }
}

# Create and start the background agent
result = manager.create_agent(agent_config)
manager.start()  # Start all background agents

# Monitor events in real-time
async for event in manager.get_agent("system_monitor").stream_events(result["session_id"]):
    print(f"Background Agent Event: {event.type} - {event.payload}")

# Runtime task updates
manager.update_task_config("system_monitor", {
    "query": "Perform emergency system check and report critical issues immediately.",
    "description": "Emergency system check task",
    "priority": "high"
})

๐Ÿ“ Session Management

Maintain conversation continuity across multiple interactions:

# Use session ID for conversation continuity
session_id = "user_123_conversation"
result1 = await agent.run("Hello! My name is Alice.", session_id)
result2 = await agent.run("What did I tell you my name was?", session_id)

# Get conversation history
history = await agent.get_session_history(session_id)

# Stream events in real-time
async for event in agent.stream_events(session_id):
    print(f"Event: {event.type} - {event.payload}")

📚 Learn from Examples

Study these comprehensive examples to see OmniAgent in action:

  • examples/omni_agent_example.py - ⭐ COMPLETE DEMO showing all OmniAgent features
  • examples/background_agent_example.py - Self-flying background agents
  • run_omni_agent.py - Advanced EXAMPLE patterns (study only, not for end-user use)
  • examples/basic.py - Simple agent setup patterns
  • examples/web_server.py - FastAPI web interface
  • examples/vector_db_examples.py - Advanced vector memory
  • Provider Examples: anthropic.py, groq.py, azure.py, ollama.py

💡 Pro Tip: Run python examples/omni_agent_example.py to see the full capabilities in action!

🎯 Getting Started - Choose Your Path

When to Use What?

Use Case               | Choose    | Best For
Build custom AI apps   | OmniAgent | Web apps, automation, custom workflows
Connect to MCP servers | MCP CLI   | Daily workflow, server management, debugging
Learn & experiment     | Examples  | Understanding patterns, proof of concepts
Production deployment  | Both      | Full-featured AI applications

Path 1: 🤖 Build Custom AI Agents (OmniAgent)

Perfect for: Custom applications, automation, web apps

# Study the examples to learn patterns:
python examples/basic.py                    # Simple setup
python examples/omni_agent_example.py       # Complete demo
python examples/background_agent_example.py # Self-flying agents
python examples/web_server.py              # Web interface

# Then build your own using the patterns!

Path 2: 🔌 Advanced MCP Client (CLI)

Perfect for: Daily workflow, server management, debugging

# World-class MCP client with advanced features
python run.py
# OR: mcpomni-connect --config servers_config.json

# Features: Connect to MCP servers, agentic modes, advanced memory

Path 3: 🧪 Study Tool Patterns (Learning)

Perfect for: Learning, understanding patterns, experimentation

# Comprehensive testing interface - Study 12+ EXAMPLE tools
python run_omni_agent.py --mode cli

# Study this file to see tool registration patterns and CLI features
# Contains many examples of how to create custom tools

💡 Pro Tip: Most developers use both paths - the MCP CLI for daily workflow and OmniAgent for building custom solutions!


🔥 Local Tools System - Create Custom AI Tools!

One of OmniAgent's most powerful features is the ability to register your own Python functions as AI tools. The agent can then intelligently use these tools to complete tasks.

🎯 Quick Tool Registration Example

from mcpomni_connect.agents.tools.local_tools_registry import ToolRegistry

# Create tool registry
tool_registry = ToolRegistry()

# Register your custom tools with simple decorator
@tool_registry.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
    """Calculate the area of a rectangle."""
    area = length * width
    return f"Area of rectangle ({length} x {width}): {area} square units"

@tool_registry.register_tool("analyze_text")
def analyze_text(text: str) -> str:
    """Analyze text and return word count and character count."""
    words = len(text.split())
    chars = len(text)
    return f"Analysis: {words} words, {chars} characters"

@tool_registry.register_tool("system_status")
def get_system_status() -> str:
    """Get current system status information."""
    import platform
    import time
    return f"System: {platform.system()}, Time: {time.strftime('%Y-%m-%d %H:%M:%S')}"

# Use tools with OmniAgent
agent = OmniAgent(
    name="my_agent",
    local_tools=tool_registry,  # Your custom tools!
    # ... other config
)

# Now the AI can use your tools!
result = await agent.run("Calculate the area of a 10x5 rectangle and tell me the current system time")

📖 Tool Registration Patterns (Create Your Own!)

No built-in tools - You create exactly what you need! Study these EXAMPLE patterns from run_omni_agent.py:

Mathematical Tools Examples:

@tool_registry.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
    area = length * width
    return f"Area: {area} square units"

@tool_registry.register_tool("analyze_numbers") 
def analyze_numbers(numbers: str) -> str:
    num_list = [float(x.strip()) for x in numbers.split(",")]
    return f"Count: {len(num_list)}, Average: {sum(num_list)/len(num_list):.2f}"

System Tools Examples:

@tool_registry.register_tool("system_info")
def get_system_info() -> str:
    import platform
    return f"OS: {platform.system()}, Python: {platform.python_version()}"

File Tools Examples:

@tool_registry.register_tool("list_files")
def list_directory(path: str = ".") -> str:
    import os
    files = os.listdir(path)
    return f"Found {len(files)} items in {path}"

🎨 Tool Registration Patterns

1. Simple Function Tools:

@tool_registry.register_tool("weather_check")
def check_weather(city: str) -> str:
    """Get weather information for a city."""
    # Your weather API logic here
    return f"Weather in {city}: Sunny, 25°C"

2. Complex Analysis Tools:

@tool_registry.register_tool("data_analysis")
def analyze_data(data: str, analysis_type: str = "summary") -> str:
    """Analyze JSON data with different analysis types."""
    import json
    try:
        data_obj = json.loads(data)
    except json.JSONDecodeError:
        return "Invalid data format"
    if analysis_type == "summary":
        return f"Data contains {len(data_obj)} items"
    elif analysis_type == "detailed":
        # Complex analysis logic
        return "Detailed analysis results..."
    return f"Unknown analysis type: {analysis_type}"

3. File Processing Tools:

@tool_registry.register_tool("process_file")
def process_file(file_path: str, operation: str) -> str:
    """Process files with different operations."""
    try:
        if operation == "read":
            with open(file_path, 'r') as f:
                content = f.read()
            return f"File content (first 100 chars): {content[:100]}..."
        elif operation == "count_lines":
            with open(file_path, 'r') as f:
                lines = len(f.readlines())
            return f"File has {lines} lines"
        return f"Unknown operation: {operation}"
    except Exception as e:
        return f"Error processing file: {e}"

โš™๏ธ Configuration Guide (UPDATED!)

Environment Variables

Create a .env file with your configuration:

# ===============================================
# Required: AI Model API Key
# ===============================================
LLM_API_KEY=your_api_key_here

# ===============================================
# Memory Storage Configuration (NEW!)
# ===============================================
# Database backend (PostgreSQL, MySQL, SQLite)
DATABASE_URL=sqlite:///mcpomni_memory.db
# DATABASE_URL=postgresql://user:password@localhost:5432/mcpomni
# DATABASE_URL=mysql://user:password@localhost:3306/mcpomni

# Redis for memory and event storage (single URL)
REDIS_URL=redis://localhost:6379/0
# REDIS_URL=redis://:password@localhost:6379/0  # With password

# ===============================================
# Vector Database Configuration (NEW!)
# ===============================================
# Enable vector databases for long-term & episodic memory
ENABLE_VECTOR_DB=true

# Vector DB Provider (optional - defaults to chroma-local)
# Options: chroma-local (default), chroma-remote, chroma-cloud, qdrant-remote
OMNI_MEMORY_PROVIDER=chroma-local

# ChromaDB Remote Configuration
# Set these only when using OMNI_MEMORY_PROVIDER=chroma-remote
# CHROMA_HOST=localhost
# CHROMA_PORT=8000

# ChromaDB Cloud Configuration  
# Set these only when using OMNI_MEMORY_PROVIDER=chroma-cloud
# CHROMA_TENANT=your_tenant
# CHROMA_DATABASE=your_database
# CHROMA_API_KEY=your_api_key

# Qdrant Remote Configuration
# Set these only when using OMNI_MEMORY_PROVIDER=qdrant-remote
# QDRANT_HOST=localhost
# QDRANT_PORT=6333

🧠 Vector Database Setup (NEW!)

For Long-term & Episodic Memory:

  1. Enable Vector Databases:

    ENABLE_VECTOR_DB=true
    
  2. Option A: Use Qdrant (Recommended for Production):

    # Install and run Qdrant
    docker run -p 6333:6333 qdrant/qdrant
    
    # Set environment variables
    QDRANT_HOST=localhost
    QDRANT_PORT=6333
    OMNI_MEMORY_PROVIDER=qdrant-remote
    
  3. Option B: Use ChromaDB (Automatic Local Fallback):

    # No config needed for local fallback
    # When ENABLE_VECTOR_DB=true and no provider is set → uses local .chroma_db directory
    # Explicitly use remote:
    export OMNI_MEMORY_PROVIDER=chroma-remote
    export CHROMA_HOST=localhost
    export CHROMA_PORT=8000
    # Or cloud:
    export OMNI_MEMORY_PROVIDER=chroma-cloud
    export CHROMA_TENANT=your_tenant
    export CHROMA_DATABASE=your_database
    export CHROMA_API_KEY=your_api_key
    

🧩 Vector DB Provider Selection & Fallback (How it works)

  • Disable: If ENABLE_VECTOR_DB is not true, vector memory features are off.
  • Default: If ENABLE_VECTOR_DB=true and OMNI_MEMORY_PROVIDER is not set, the system uses chroma-local by default.
  • Explicit provider: Set OMNI_MEMORY_PROVIDER to one of:
    • chroma-local: Local persistent storage under .chroma_db/ (default)
    • chroma-remote: Remote ChromaDB server - requires CHROMA_HOST and CHROMA_PORT
    • chroma-cloud: ChromaDB Cloud service - requires CHROMA_TENANT, CHROMA_DATABASE, CHROMA_API_KEY
    • qdrant-remote: Remote Qdrant server - requires QDRANT_HOST and QDRANT_PORT
  • Smart fallback behavior (built-in safety):
    • If qdrant-remote fails to initialize/connect → automatically falls back to chroma-local
    • If chroma-remote fails to initialize/connect → automatically falls back to chroma-local
    • If chroma-cloud is misconfigured/missing credentials → automatically falls back to chroma-local
    • All ChromaDB modes (local, remote, cloud) are supported - fallback only happens on connection failure

Key Point: You can use any ChromaDB client type (local, remote, or cloud). The fallback to chroma-local only occurs when remote connections fail, ensuring uninterrupted operation.
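The selection and fallback rules above can be sketched in Python. This is a simplified illustration with a hypothetical `connect_ok` check; the real logic lives inside the library:

```python
import os

SUPPORTED_PROVIDERS = {"chroma-local", "chroma-remote", "chroma-cloud", "qdrant-remote"}

def select_vector_provider(connect_ok=lambda provider: True):
    """Mirror the documented selection and fallback rules.

    connect_ok stands in for the real connection check the library performs.
    """
    if os.getenv("ENABLE_VECTOR_DB", "false").lower() != "true":
        return None  # vector memory features are off
    provider = os.getenv("OMNI_MEMORY_PROVIDER", "chroma-local")
    if provider not in SUPPORTED_PROVIDERS:
        return "chroma-local"  # unknown value: use the safe default
    if provider != "chroma-local" and not connect_ok(provider):
        return "chroma-local"  # smart fallback on connection failure
    return provider
```

For example, with ENABLE_VECTOR_DB=true and an unreachable qdrant-remote server, this sketch returns "chroma-local", matching the documented behavior.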

๐Ÿ–ฅ๏ธ Updated CLI Commands (NEW!)

Memory Store Management:

# Switch between memory backends
/memory_store:in_memory                    # Fast in-memory storage (default)
/memory_store:redis                        # Redis persistent storage  
/memory_store:database                     # SQLite database storage
/memory_store:database:postgresql://user:pass@host/db  # PostgreSQL
/memory_store:database:mysql://user:pass@host/db       # MySQL

# Memory strategy configuration
/memory_mode:sliding_window:10             # Keep last 10 messages
/memory_mode:token_budget:5000             # Keep under 5000 tokens
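As a rough illustration of what these two strategies do (hypothetical, simplified helpers; the real trimming and token counting are internal to the library):

```python
def sliding_window(messages: list, max_messages: int) -> list:
    """Keep only the most recent max_messages entries."""
    return messages[-max_messages:]

def token_budget(messages: list, max_tokens: int) -> list:
    """Drop oldest messages until the (approximate) token count fits.

    Token counting here is a crude word count, purely for illustration.
    """
    kept = []
    total = 0
    for msg in reversed(messages):
        cost = len(msg["content"].split())
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))
```

Both strategies trade recall for cost: sliding_window bounds message count, token_budget bounds context size directly.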

Event Store Management:

# Switch between event backends
/event_store:in_memory                     # Fast in-memory events (default)
/event_store:redis_stream                  # Redis Streams for persistence

Enhanced Commands:

# Memory operations
/history                                   # Show conversation history
/clear_history                            # Clear conversation history
/save_history <file>                      # Save history to file
/load_history <file>                      # Load history from file

# Server management
/add_servers:<config.json>                # Add servers from config
/remove_server:<server_name>              # Remove specific server
/refresh                                  # Refresh server capabilities

# Debugging and monitoring
/debug                                    # Toggle debug mode
/api_stats                               # Show API usage statistics

🚀 MCPOmni Connect CLI - World-Class MCP Client

The MCPOmni Connect CLI is the most advanced MCP client available, providing professional-grade MCP functionality with enhanced memory, event management, and agentic modes:

# Launch the advanced MCP CLI
python run.py
# OR: mcpomni-connect --config servers_config.json

# Core MCP client commands:
/tools                                    # List all available tools
/prompts                                  # List all available prompts  
/resources                               # List all available resources
/prompt:<name>                           # Execute a specific prompt
/resource:<uri>                          # Read a specific resource
/subscribe:<uri>                         # Subscribe to resource updates
/query <your_question>                   # Ask questions using tools

# Advanced platform features:
/memory_store:redis                      # Switch to Redis memory
/event_store:redis_stream               # Switch to Redis events
/add_servers:<config.json>              # Add MCP servers dynamically
/remove_server:<name>                   # Remove MCP server
/mode:auto                              # Switch to autonomous agentic mode
/mode:orchestrator                      # Switch to multi-server orchestration

๐Ÿ› ๏ธ Developer Integration

MCPOmni Connect is not just a CLI tool; it's also a powerful Python library. OmniAgent consolidates everything - you no longer need to manually manage MCP clients, configurations, and agents separately!

Build Apps with OmniAgent (Recommended)

OmniAgent automatically includes MCP client functionality - just specify your MCP servers and you're ready to go:

from mcpomni_connect.omni_agent import OmniAgent
from mcpomni_connect.memory_store.memory_router import MemoryRouter
from mcpomni_connect.events.event_router import EventRouter
from mcpomni_connect.agents.tools.local_tools_registry import ToolRegistry

# Create tool registry for custom tools
tool_registry = ToolRegistry()

@tool_registry.register_tool("analyze_data")
def analyze_data(data: str) -> str:
    """Analyze data and return insights."""
    return f"Analysis complete: {len(data)} characters processed"

# OmniAgent automatically handles MCP connections + your tools
agent = OmniAgent(
    name="my_app_agent",
    system_instruction="You are a helpful assistant with access to MCP servers and custom tools.",
    model_config={
        "provider": "openai", 
        "model": "gpt-4o",
        "temperature": 0.7
    },
    # Your custom local tools
    local_tools=tool_registry,
    # MCP servers - automatically connected!
    mcp_tools=[
        {
            "name": "filesystem",
            "transport_type": "stdio", 
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home"]
        },
        {
            "name": "github",
            "transport_type": "streamable_http",
            "url": "http://localhost:8080/mcp",
            "headers": {"Authorization": "Bearer your-token"}
        }
    ],
    memory_store=MemoryRouter(memory_store_type="redis"),
    event_router=EventRouter(event_store_type="in_memory")
)

# Use in your app - gets both MCP tools AND your custom tools!
result = await agent.run("List files in the current directory and analyze the filenames")

Legacy Manual Approach (Not Recommended)

If you need the old manual approach for some reason:

FastAPI Integration with OmniAgent

OmniAgent makes building APIs incredibly simple. See examples/web_server.py for a complete FastAPI example:

from fastapi import FastAPI
from mcpomni_connect.omni_agent import OmniAgent

app = FastAPI()
agent = OmniAgent(...)  # Your agent setup from above

@app.post("/chat")
async def chat(message: str, session_id: str = None):
    result = await agent.run(message, session_id)
    return {"response": result['response'], "session_id": result['session_id']}

@app.get("/tools") 
async def get_tools():
    # Returns both MCP tools AND your custom tools automatically
    return agent.get_available_tools()

Key Benefits:

  • One OmniAgent = MCP + Custom Tools + Memory + Events
  • Automatic tool discovery from all connected MCP servers
  • Built-in session management and conversation history
  • Real-time event streaming for monitoring
  • Easy integration with any Python web framework

Server Configuration Examples

Basic OpenAI Configuration

{
  "AgentConfig": {
    "tool_call_timeout": 30,
    "max_steps": 15,
    "request_limit": 1000,
    "total_tokens_limit": 100000
  },
  "LLM": {
    "provider": "openai",
    "model": "gpt-4",
    "temperature": 0.5,
    "max_tokens": 5000,
    "max_context_length": 30000,
    "top_p": 0
  },
  "mcpServers": {
    "ev_assistant": {
      "transport_type": "streamable_http",
      "auth": {
        "method": "oauth"
      },
      "url": "http://localhost:8000/mcp"
    },
    "sse-server": {
      "transport_type": "sse",
      "url": "http://localhost:3000/sse",
      "headers": {
        "Authorization": "Bearer token"
      },
      "timeout": 60,
      "sse_read_timeout": 120
    },
    "streamable_http-server": {
      "transport_type": "streamable_http",
      "url": "http://localhost:3000/mcp",
      "headers": {
        "Authorization": "Bearer token"
      },
      "timeout": 60,
      "sse_read_timeout": 120
    }
  }
}
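Since servers_config.json is plain JSON, inspecting it programmatically is straightforward. A sketch (the JSON below is a trimmed copy of the configuration above, embedded as a string for illustration; normally you would read the file from disk):

```python
import json

# A trimmed copy of the configuration above, embedded for illustration.
raw = """
{
  "AgentConfig": {"tool_call_timeout": 30, "max_steps": 15},
  "mcpServers": {
    "ev_assistant": {"transport_type": "streamable_http", "url": "http://localhost:8000/mcp"},
    "sse-server": {"transport_type": "sse", "url": "http://localhost:3000/sse"}
  }
}
"""
config = json.loads(raw)
server_names = sorted(config["mcpServers"])
transports = {name: cfg["transport_type"] for name, cfg in config["mcpServers"].items()}
```

The CLI loads and validates this file for you; this is only useful for custom tooling or pre-flight checks.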

Anthropic Claude Configuration

{
  "LLM": {
    "provider": "anthropic",
    "model": "claude-3-5-sonnet-20241022",
    "temperature": 0.7,
    "max_tokens": 4000,
    "max_context_length": 200000,
    "top_p": 0.95
  }
}

Groq Configuration

{
  "LLM": {
    "provider": "groq",
    "model": "llama-3.1-8b-instant",
    "temperature": 0.5,
    "max_tokens": 2000,
    "max_context_length": 8000,
    "top_p": 0.9
  }
}

Azure OpenAI Configuration

{
  "LLM": {
    "provider": "azureopenai",
    "model": "gpt-4",
    "temperature": 0.7,
    "max_tokens": 2000,
    "max_context_length": 100000,
    "top_p": 0.95,
    "azure_endpoint": "https://your-resource.openai.azure.com",
    "azure_api_version": "2024-02-01",
    "azure_deployment": "your-deployment-name"
  }
}

Ollama Local Model Configuration

{
  "LLM": {
    "provider": "ollama",
    "model": "llama3.1:8b",
    "temperature": 0.5,
    "max_tokens": 5000,
    "max_context_length": 100000,
    "top_p": 0.7,
    "ollama_host": "http://localhost:11434"
  }
}

OpenRouter Configuration

{
  "LLM": {
    "provider": "openrouter",
    "model": "anthropic/claude-3.5-sonnet",
    "temperature": 0.7,
    "max_tokens": 4000,
    "max_context_length": 200000,
    "top_p": 0.95
  }
}

๐Ÿ” Authentication Methods

MCPOmni Connect supports multiple authentication methods for secure server connections:

OAuth 2.0 Authentication

{
  "server_name": {
    "transport_type": "streamable_http",
    "auth": {
      "method": "oauth"
    },
    "url": "http://your-server/mcp"
  }
}

Bearer Token Authentication

{
  "server_name": {
    "transport_type": "streamable_http",
    "headers": {
      "Authorization": "Bearer your-token-here"
    },
    "url": "http://your-server/mcp"
  }
}

Custom Headers

{
  "server_name": {
    "transport_type": "streamable_http",
    "headers": {
      "X-Custom-Header": "value",
      "Authorization": "Custom-Auth-Scheme token"
    },
    "url": "http://your-server/mcp"
  }
}

🔄 Dynamic Server Configuration

MCPOmni Connect supports dynamic server configuration through commands:

Add New Servers

# Add one or more servers from a configuration file
/add_servers:path/to/config.json

The configuration file can include multiple servers with different authentication methods:

{
  "new-server": {
    "transport_type": "streamable_http",
    "auth": {
      "method": "oauth"
    },
    "url": "http://localhost:8000/mcp"
  },
  "another-server": {
    "transport_type": "sse",
    "headers": {
      "Authorization": "Bearer token"
    },
    "url": "http://localhost:3000/sse"
  }
}

Remove Servers

# Remove a server by its name
/remove_server:server_name

🎯 Usage

Interactive Commands

  • /tools - List all available tools across servers
  • /prompts - View available prompts
  • /prompt:<name>/<args> - Execute a prompt with arguments
  • /resources - List available resources
  • /resource:<uri> - Access and analyze a resource
  • /debug - Toggle debug mode
  • /refresh - Update server capabilities
  • /memory - Toggle Redis memory persistence (on/off)
  • /mode:auto - Switch to autonomous agentic mode
  • /mode:chat - Switch back to interactive chat mode
  • /add_servers:<config.json> - Add one or more servers from a configuration file
  • /remove_server:<server_name> - Remove a server by its name

Memory and Chat History

# Enable Redis memory persistence
/memory

# Check memory status
Memory persistence is now ENABLED using Redis

# Disable memory persistence
/memory

# Check memory status
Memory persistence is now DISABLED

Operation Modes

# Switch to autonomous mode
/mode:auto

# System confirms mode change
Now operating in AUTONOMOUS mode. I will execute tasks independently.

# Switch back to chat mode
/mode:chat

# System confirms mode change
Now operating in CHAT mode. I will ask for approval before executing tasks.

Mode Differences

  • Chat Mode (Default)

    • Requires explicit approval for tool execution
    • Interactive conversation style
    • Step-by-step task execution
    • Detailed explanations of actions
  • Autonomous Mode

    • Independent task execution
    • Self-guided decision making
    • Automatic tool selection and chaining
    • Progress updates and final results
    • Complex task decomposition
    • Error handling and recovery
  • Orchestrator Mode

    • Advanced planning for complex multi-step tasks
    • Strategic delegation across multiple MCP servers
    • Intelligent agent coordination and communication
    • Parallel task execution when possible
    • Dynamic resource allocation
    • Sophisticated workflow management
    • Real-time progress monitoring across agents
    • Adaptive task prioritization

Prompt Management

# List all available prompts
/prompts

# Basic prompt usage
/prompt:weather/location=tokyo

# Multiple arguments (names and types depend on the server's prompt definition)
/prompt:travel-planner/from=london/to=paris/date=2024-03-25

# JSON format for complex arguments
/prompt:analyze-data/{
    "dataset": "sales_2024",
    "metrics": ["revenue", "growth"],
    "filters": {
        "region": "europe",
        "period": "q1"
    }
}

# Nested argument structures
/prompt:market-research/target=smartphones/criteria={
    "price_range": {"min": 500, "max": 1000},
    "features": ["5G", "wireless-charging"],
    "markets": ["US", "EU", "Asia"]
}
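The argument formats above could be parsed roughly like this. This is a hypothetical sketch, not the client's actual parser, which also validates arguments against the prompt's schema:

```python
import json

def parse_prompt_command(command: str):
    """Parse '/prompt:name/k=v/...' or '/prompt:name/{json}' style commands."""
    body = command.removeprefix("/prompt:")
    name, _, rest = body.partition("/")
    rest = rest.strip()
    if rest.startswith("{"):  # JSON form for complex arguments
        return name, json.loads(rest)
    args = {}
    for pair in filter(None, rest.split("/")):
        key, _, value = pair.partition("=")
        args[key] = value
    return name, args
```

Note this sketch handles the pure key=value and pure JSON forms; mixed forms (like the nested criteria example) need the fuller parsing the client provides.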

Advanced Prompt Features

  • Argument Validation: Automatic type checking and validation
  • Default Values: Smart handling of optional arguments
  • Context Awareness: Prompts can access previous conversation context
  • Cross-Server Execution: Seamless execution across multiple MCP servers
  • Error Handling: Graceful handling of invalid arguments with helpful messages
  • Dynamic Help: Detailed usage information for each prompt

AI-Powered Interactions

The client intelligently:

  • Chains multiple tools together
  • Provides context-aware responses
  • Automatically selects appropriate tools
  • Handles errors gracefully
  • Maintains conversation context

Model Support with LiteLLM

  • Unified Model Access
    • Single interface for 100+ models across all major providers
    • Automatic provider detection and routing
    • Consistent API regardless of underlying provider
    • Native function calling for compatible models
    • ReAct Agent fallback for models without function calling
  • Supported Providers
    • OpenAI: GPT-4, GPT-3.5, and all model variants
    • Anthropic: Claude 3.5 Sonnet, Claude 3 Haiku, Claude 3 Opus
    • Google: Gemini Pro, Gemini Flash, PaLM models
    • Groq: Ultra-fast inference for Llama, Mixtral, Gemma
    • DeepSeek: DeepSeek-V3, DeepSeek-Coder, and specialized models
    • Azure OpenAI: Enterprise-grade OpenAI models
    • OpenRouter: Access to 200+ models from various providers
    • Ollama: Local model execution with privacy
  • Advanced Features
    • Automatic model capability detection
    • Dynamic tool execution based on model features
    • Intelligent fallback mechanisms
    • Provider-specific optimizations

Token & Usage Management

MCPOmni Connect now provides advanced controls and visibility over your API usage and resource limits.

View API Usage Stats

Use the /api_stats command to see your current usage:

/api_stats

This will display:

  • Total requests made
  • Total tokens used
  • Total response tokens

Set Usage Limits

You can set limits to automatically stop execution when thresholds are reached:

  • Total Request Limit: Set the maximum number of requests allowed in a session.
  • Total Token Usage Limit: Set the maximum number of tokens that can be used.
  • Tool Call Timeout: Set the maximum time (in seconds) a tool call can take before being terminated.
  • Max Steps: Set the maximum number of steps the agent can take before stopping.

You can configure these in your servers_config.json under the AgentConfig section:

"AgentConfig": {
    "tool_call_timeout": 30,        // Tool call timeout in seconds
    "max_steps": 15,                // Max number of steps before termination
    "request_limit": 1000,          // Max number of requests allowed
    "total_tokens_limit": 100000    // Max number of tokens allowed
}
  • When any of these limits are reached, the agent will automatically stop running and notify you.
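Conceptually, the enforcement works like this (a simplified sketch with hypothetical names; the real agent tracks these counters internally):

```python
class UsageLimiter:
    """Stop the agent when request/token/step budgets are exhausted."""

    def __init__(self, request_limit: int, total_tokens_limit: int, max_steps: int):
        self.request_limit = request_limit
        self.total_tokens_limit = total_tokens_limit
        self.max_steps = max_steps
        self.requests = self.tokens = self.steps = 0

    def record(self, tokens_used: int) -> None:
        """Account for one agent step / LLM request."""
        self.requests += 1
        self.tokens += tokens_used
        self.steps += 1

    def should_stop(self) -> bool:
        return (
            self.requests >= self.request_limit
            or self.tokens >= self.total_tokens_limit
            or self.steps >= self.max_steps
        )
```

Whichever budget is exhausted first wins, which is why a generous request_limit can still be cut short by a tight total_tokens_limit.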

Example Commands

# Check your current API usage and limits
/api_stats

# Set a new request limit (example)
# (This can be done by editing servers_config.json or via future CLI commands)

🔧 Advanced Features

Tool Orchestration

# Example of automatic tool chaining (when the required tools are available on connected servers)
User: "Find charging stations near Silicon Valley and check their current status"

# Client automatically:
1. Uses Google Maps API to locate Silicon Valley
2. Searches for charging stations in the area
3. Checks station status through EV network API
4. Formats and presents results

Resource Analysis

# Automatic resource processing
User: "Analyze the contents of /path/to/document.pdf"

# Client automatically:
1. Identifies resource type
2. Extracts content
3. Processes through LLM
4. Provides intelligent summary

Demo


๐Ÿ” Troubleshooting

📖 For comprehensive configuration help, see the ⚙️ Configuration Guide section above, which covers:

  • Config file differences (.env vs servers_config.json)
  • Transport type selection and authentication
  • OAuth server behavior explanation
  • Common connection issues and solutions

Common Issues and Solutions

  1. Connection Issues

    Error: Could not connect to MCP server
    
    • Check if the server is running
    • Verify server configuration in servers_config.json
    • Ensure network connectivity
    • Check server logs for errors
    • See Transport Types & Authentication for detailed setup
  2. API Key Issues

    Error: Invalid API key
    
    • Verify API key is correctly set in .env
    • Check if API key has required permissions
    • Ensure API key is for correct environment (production/development)
    • See Configuration Files Overview for correct setup
  3. Redis Connection

    Error: Could not connect to Redis
    
    • Verify Redis server is running
    • Check Redis connection settings in .env
    • Ensure Redis password is correct (if configured)
  4. Tool Execution Failures

    Error: Tool execution failed
    
    • Check tool availability on connected servers
    • Verify tool permissions
    • Review tool arguments for correctness

🚨 Quick Fixes (Common Issues)

| Error | Quick Fix |
| --- | --- |
| `Error: Invalid API key` | Check your `.env` file: `LLM_API_KEY=your_actual_key` |
| `ModuleNotFoundError: mcpomni_connect` | Run: `uv add mcpomni-connect` or `pip install mcpomni-connect` |
| `Connection refused` | Ensure the MCP server is running before connecting |
| `ChromaDB not available` | Install: `pip install chromadb` (usually auto-installed) |
| `Redis connection failed` | Install Redis or use in-memory mode (the default) |
| `Tool execution failed` | Check tool permissions and arguments |

Debug Mode

Enable debug mode for detailed logging:

/debug

Getting Help

  1. First: Check the Quick Fixes above
  2. Examples: Study working examples in the examples/ directory
  3. Issues: Search GitHub Issues for similar problems
  4. New Issue: Create a new issue with detailed information

๐Ÿค Contributing

We welcome contributions! See our Contributing Guide for details.

📖 Documentation

Complete documentation is available at: MCPOmni Connect Docs

To build documentation locally:

./docs.sh serve    # Start development server at http://127.0.0.1:8080
./docs.sh build    # Build static documentation

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

📬 Contact & Support


Built with ❤️ by the MCPOmni Connect Team
