
OmniCoreAgent is a powerful Python AI agent framework for building autonomous agents that think, reason, and execute complex tasks: production-ready agents that use tools, manage memory, coordinate workflows, and handle real-world business logic.



🚀 OmniCoreAgent

[!IMPORTANT] OmniAgent has been renamed to OmniCoreAgent. To avoid breaking changes, OmniAgent is still available as a deprecated alias, but please update your imports and class usage to OmniCoreAgent as soon as possible.

Production-Ready AI Agent Framework
Build autonomous AI agents that think, reason, and execute complex tasks.


Quick Start • Features • Examples • Configuration • Documentation


📋 Table of Contents


Getting Started

Core Features

  1. 🤖 OmniCoreAgent: The Heart of the Framework
  2. 🧠 Multi-Tier Memory System
  3. 📡 Event System
  4. 🔌 Built-in MCP Client
  5. 🛠️ Local Tools System
  6. 🧩 Agent Skills System
  7. 💾 Memory Tool Backend
  8. 👥 Sub-Agents System
  9. 🐚 Background Agents
  10. 🔄 Workflow Agents
  11. 🧠 Advanced Tool Use (BM25)
  12. 📊 Production Observability & Metrics
  13. 🛡️ Prompt Injection Guardrails
  14. 🌍 Universal Model Support

Reference


๐ŸŒ The OmniRexFlora AI Ecosystem

OmniCoreAgent is part of a complete "Operating System for AI Agents" built from three tools that work together:

🌍 OmniRexFlora AI Ecosystem
"The Operating System for AI Agents"

🧠 OmniMemory      (The Brain)    Self-evolving memory, dual-agent
                                  synthesis, conflict resolution,
                                  composite scoring
                                  github.com/omnirexflora-labs/omnimemory

🤖 OmniCoreAgent   (The Worker)   Agent building, tool orchestration,
   YOU ARE HERE                   multi-backend flexibility, workflow agents

⚡ OmniDaemon      (The Runtime)  Event-driven execution, production
                                  deployment, framework-agnostic
                                  github.com/omnirexflora-labs/OmniDaemon
| Tool | Role | Description |
|------|------|-------------|
| 🧠 OmniMemory | The Brain | Self-evolving memory with dual-agent synthesis & conflict resolution |
| 🤖 OmniCoreAgent | The Worker | Agent building, tool orchestration, multi-backend flexibility |
| ⚡ OmniDaemon | The Runtime | Event-driven execution, production deployment, framework-agnostic |

💡 Like how Linux runs applications, OmniRexFlora runs AI agents: reliably, at scale, in production.


🎯 What is OmniCoreAgent?

OmniCoreAgent is a production-ready Python framework for building autonomous AI agents that:

| Capability | Description |
|------------|-------------|
| 🤖 Think & Reason | Not just chatbots: agents that plan multi-step workflows |
| 🛠️ Use Tools | Connect to APIs, databases, files, MCP servers, with Advanced Tool Use |
| 🧠 Remember Context | Multi-tier memory: Redis, PostgreSQL, MongoDB, SQLite |
| 🔄 Orchestrate Workflows | Sequential, Parallel, and Router agents |
| 🚀 Run in Production | Monitoring, observability, and error handling built in |
| 🔌 Plug & Play | Switch backends at runtime (Redis ↔ MongoDB ↔ PostgreSQL) |

⚡ Quick Start

1. Install (10 seconds)

# Using uv (recommended)
uv add omnicoreagent

# Or with pip
pip install omnicoreagent

2. Set API Key (10 seconds)

echo "LLM_API_KEY=your_openai_api_key_here" > .env

💡 Get your key from OpenAI, Anthropic, or Groq

3. Create Your First Agent (30 seconds)

import asyncio
from omnicoreagent import OmniCoreAgent

async def main():
    agent = OmniCoreAgent(
        name="my_agent",
        system_instruction="You are a helpful assistant.",
        model_config={"provider": "openai", "model": "gpt-4o"}
    )
    
    result = await agent.run("Hello, world!")
    print(result['response'])
    
    await agent.cleanup()

if __name__ == "__main__":
    asyncio.run(main())

✅ That's it! You just built an AI agent with session management, memory persistence, event streaming, and error handling.

🚨 Common Errors & Fixes

| Error | Fix |
|-------|-----|
| Invalid API key | Check .env file: LLM_API_KEY=sk-... (no quotes) |
| ModuleNotFoundError | Run: pip install omnicoreagent |
| Event loop is closed | Use asyncio.run(main()) |

๐Ÿ—๏ธ Architecture Overview

OmniCoreAgent Framework
├── 🤖 Core Agent System
│   ├── OmniCoreAgent (Main Class)
│   ├── ReactAgent (Reasoning Engine)
│   └── Tool Orchestration
│
├── 🧠 Memory System (5 Backends)
│   ├── InMemoryStore (Fast Dev)
│   ├── RedisMemoryStore (Production)
│   ├── DatabaseMemory (PostgreSQL/MySQL/SQLite)
│   └── MongoDBMemory (Document Storage)
│
├── 📡 Event System
│   ├── InMemoryEventStore (Development)
│   └── RedisStreamEventStore (Production)
│
├── 🛠️ Tool System
│   ├── Local Tools Registry
│   ├── MCP Integration
│   ├── Advanced Tool Use (BM25)
│   └── Memory Tool Backend
│
├── 🐚 Background Agents
│   └── Autonomous Scheduled Tasks
│
├── 🔄 Workflow Agents
│   ├── SequentialAgent
│   ├── ParallelAgent
│   └── RouterAgent
│
├── 🧩 Agent Skills System
│   ├── SkillManager (Discovery)
│   ├── Multi-language Script Dispatcher
│   └── agentskills.io Spec Alignment
│
└── 🔌 Built-in MCP Client
    ├── stdio, SSE, HTTP transports
    └── OAuth & Bearer auth

🎯 Core Features

1. 🤖 OmniCoreAgent: The Heart of the Framework

from omnicoreagent import OmniCoreAgent, ToolRegistry, MemoryRouter, EventRouter

# Basic Agent
agent = OmniCoreAgent(
    name="assistant",
    system_instruction="You are a helpful assistant.",
    model_config={"provider": "openai", "model": "gpt-4o"}
)

# Production Agent with All Features
agent = OmniCoreAgent(
    name="production_agent",
    system_instruction="You are a production agent.",
    model_config={"provider": "openai", "model": "gpt-4o"},
    local_tools=tool_registry,
    mcp_tools=[...],
    memory_router=MemoryRouter("redis"),
    event_router=EventRouter("redis_stream"),
    agent_config={
        "max_steps": 20,
        "enable_advanced_tool_use": True,
        "enable_agent_skills": True,
        "memory_tool_backend": "local",
        "guardrail_config": {"strict_mode": True}  # Enable Safety Guardrails
    }
)


# Key Methods
await agent.run(query)                      # Execute task
await agent.run(query, session_id="user_1") # With session context
await agent.connect_mcp_servers()           # Connect MCP tools
await agent.list_all_available_tools()      # List all tools
await agent.swith_memory_store("mongodb")     # Switch backend at runtime!
await agent.get_session_history(session_id)   # Retrieve conversation history
await agent.clear_session_history(session_id) # Clear history (session_id optional; clears all if None)
await agent.get_events(session_id)            # Get event history
await agent.get_memory_store_type()           # Get current memory router type
await agent.cleanup()                         # Clean up resources and remove the agent and its config
await agent.cleanup_mcp_servers()             # Clean up MCP servers without removing the agent or its config
await agent.get_metrics()                     # Get cumulative usage (tokens, requests, time)

[!TIP] Each agent.run() call now returns a metric field containing fine-grained usage for that specific request.

💡 When to Use: OmniCoreAgent is your go-to for any AI task, from simple Q&A to complex multi-step workflows. Start here for any agent project.

2. 🧠 Multi-Tier Memory System (Plug & Play)

5 backends with runtime switching: start with Redis, switch to MongoDB, then PostgreSQL, all on the fly!

from omnicoreagent import OmniCoreAgent, MemoryRouter

# Start with Redis
agent = OmniCoreAgent(
    name="my_agent",
    memory_router=MemoryRouter("redis"),
    model_config={"provider": "openai", "model": "gpt-4o"}
)

# Switch at runtime โ€” no restart needed!
agent.swith_memory_store("mongodb")     # Switch to MongoDB
agent.swith_memory_store("database")    # Switch to PostgreSQL/MySQL/SQLite
agent.swith_memory_store("in_memory")   # Switch to in-memory
agent.swith_memory_store("redis")       # Back to Redis

| Backend | Use Case | Environment Variable |
|---------|----------|----------------------|
| in_memory | Fast development | (none) |
| redis | Production persistence | REDIS_URL |
| database | PostgreSQL/MySQL/SQLite | DATABASE_URL |
| mongodb | Document storage | MONGODB_URI |

💡 When to Use: Use in_memory for development/testing, redis for production with fast access, database for SQL-based systems, mongodb for document-heavy applications.
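In deployments, the backend choice often follows the environment. A minimal sketch of that pattern (the pick_memory_backend helper below is hypothetical, not part of the framework) that checks the same variables the table lists and falls back to in_memory:

```python
import os

def pick_memory_backend() -> str:
    """Pick a memory backend name from whichever backend URL is configured.

    Hypothetical helper: mirrors the env vars in the table above and
    defaults to in_memory for local development.
    """
    if os.getenv("REDIS_URL"):
        return "redis"
    if os.getenv("DATABASE_URL"):
        return "database"
    if os.getenv("MONGODB_URI"):
        return "mongodb"
    return "in_memory"
```

The returned name can then be passed straight to MemoryRouter(...).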

3. 📡 Event System (Plug & Play)

Real-time event streaming with runtime switching:

from omnicoreagent import EventRouter

# Start with in-memory
agent = OmniCoreAgent(
    event_router=EventRouter("in_memory"),
    ...
)

# Switch to Redis Streams for production
agent.switch_event_store("redis_stream")
agent.get_event_store_type()            # Get current event router type

# Stream events in real-time
async for event in agent.stream_events(session_id):
    print(f"{event.type}: {event.payload}")

Event Types: user_message, agent_message, tool_call_started, tool_call_result, final_answer, agent_thought, sub_agent_started, sub_agent_error, sub_agent_result

💡 When to Use: Enable events when you need real-time monitoring, debugging, or building UIs that show agent progress. Essential for production observability.

4. 🔌 Built-in MCP Client

Connect to any MCP-compatible service with support for multiple transport protocols and authentication methods.

Transport Types

1. stdio: Local MCP servers (process communication)

{
    "name": "filesystem",
    "transport_type": "stdio",
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home"]
}

2. streamable_http: Remote servers with HTTP streaming

# With Bearer Token
{
    "name": "github",
    "transport_type": "streamable_http",
    "url": "http://localhost:8080/mcp",
    "headers": {
        "Authorization": "Bearer your-token" # optional
    },
    "timeout": 60 # optional
}

# With OAuth 2.0 (auto-starts callback server on localhost:3000)
{
    "name": "oauth_server",
    "transport_type": "streamable_http",
    "auth": {
        "method": "oauth"
    },
    "url": "http://localhost:8000/mcp"
}

3. sse: Server-Sent Events

{
    "name": "sse_server",
    "transport_type": "sse",
    "url": "http://localhost:3000/sse",
    "headers": {
        "Authorization": "Bearer token" # optional
    },
    "timeout": 60, # optional
    "sse_read_timeout": 120 # optional
}

Complete Example with All 3 Transport Types

agent = OmniCoreAgent(
    name="multi_mcp_agent",
    system_instruction="You have access to filesystem, GitHub, and live data.",
    model_config={"provider": "openai", "model": "gpt-4o"},
    mcp_tools=[
        # 1. stdio - Local filesystem
        {
            "name": "filesystem",
            "transport_type": "stdio",
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home"]
        },
        # 2. streamable_http - Remote API (supports Bearer token or OAuth)
        {
            "name": "github",
            "transport_type": "streamable_http",
            "url": "http://localhost:8080/mcp",
            "headers": {"Authorization": "Bearer github-token"},
            "timeout": 60
        },
        # 3. sse - Real-time streaming
        {
            "name": "live_data",
            "transport_type": "sse",
            "url": "http://localhost:3000/sse",
            "headers": {"Authorization": "Bearer token"},
            "sse_read_timeout": 120
        }
    ]
)

await agent.connect_mcp_servers()
tools = await agent.list_all_available_tools()  # All MCP + local tools
result = await agent.run("List all Python files and get latest commits")

Transport Comparison

| Transport | Use Case | Auth Methods |
|-----------|----------|--------------|
| stdio | Local MCP servers, CLI tools | None (local process) |
| streamable_http | Remote APIs, cloud services | Bearer token, OAuth 2.0 |
| sse | Real-time data, streaming | Bearer token, custom headers |

💡 When to Use: Use MCP when you need to connect to external tools and services. Choose stdio for local CLI tools, streamable_http for REST APIs, and sse for real-time streaming data.


5. 🛠️ Local Tools System

Register any Python function as an AI tool:

from omnicoreagent import ToolRegistry

tools = ToolRegistry()

@tools.register_tool("get_weather")
def get_weather(city: str) -> str:
    """Get weather for a city."""
    return f"Weather in {city}: Sunny, 25°C"

@tools.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
    """Calculate rectangle area."""
    return f"Area: {length * width} square units"

agent = OmniCoreAgent(
    name="tool_agent",
    local_tools=tools,  # Your custom tools!
    ...
)

💡 When to Use: Use Local Tools when you need custom business logic, internal APIs, or any Python functionality that isn't available via MCP servers.


6. 🧩 Agent Skills System (Packaged Capabilities)

OmniCoreAgent supports the Agent Skills specification: self-contained capability packages that provide specialized knowledge, executable scripts, and documentation.

agent_config = {
    "enable_agent_skills": True  # Enable discovery and tools for skills
}

Key Concepts:

  • Discovery: Agents automatically discover skills installed in .agents/skills/[skill-name].
  • Activation (SKILL.md): Agents are instructed to read the "Activation Document" first to understand how to use the skill's specific capabilities.
  • Polyglot Execution: The run_skill_script tool handles scripts in Python, JavaScript/Node, TypeScript, Ruby, Perl, and Shell (bash/sh).
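Polyglot execution typically comes down to dispatching on the script's file extension. A rough sketch of that idea (the mapping and pick_interpreter helper below are illustrative, not the framework's actual dispatcher):

```python
from pathlib import Path

# Illustrative extension -> interpreter mapping for the languages listed above
INTERPRETERS = {
    ".py": "python3",
    ".js": "node",
    ".ts": "ts-node",
    ".rb": "ruby",
    ".pl": "perl",
    ".sh": "bash",
}

def pick_interpreter(script_name: str) -> str:
    """Return the interpreter command for a skill script, chosen by extension."""
    suffix = Path(script_name).suffix
    if suffix not in INTERPRETERS:
        raise ValueError(f"unsupported script type: {suffix!r}")
    return INTERPRETERS[suffix]
```

A dispatcher like this would then run the script with subprocess under the chosen interpreter.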

Directory Structure:

.agents/skills/my-skill-name/
├── SKILL.md        # The "Activation" document (instructions + metadata)
├── scripts/        # Multi-language executable scripts
├── references/     # Deep-dive documentation
└── assets/         # Templates, examples, and resources

Skill Tools:

  • read_skill_file(skill_name, file_path): Access any file within a skill (start with SKILL.md).
  • run_skill_script(skill_name, script_name, args?): Execute bundled scripts with automatic interpreter detection.

📚 Learn More: To learn how to create your own agent skills, visit agentskills.io.


7. 💾 Memory Tool Backend (File-Based Working Memory)

A file-based persistent storage system that gives your agent a local workspace to save and manage files during long-running tasks. Files are stored in a ./memories/ directory with safe concurrent access and path traversal protection.

agent_config = {
    "memory_tool_backend": "local"  # Enable file-based memory
}

# Agent automatically gets these tools:
# - memory_view: View/list files in memory directory
# - memory_create_update: Create new files or append/overwrite existing ones
# - memory_str_replace: Find and replace text within files
# - memory_insert: Insert text at specific line numbers
# - memory_delete: Delete files from memory
# - memory_rename: Rename or move files
# - memory_clear_all: Clear entire memory directory

How It Works:

  • Files stored in ./memories/ directory (auto-created)
  • Thread-safe with file locking for concurrent access
  • Path traversal protection for security
  • Persists across agent restarts
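Path traversal protection generally means resolving every requested path and refusing anything that escapes the memory root. A minimal sketch of that check (not the framework's actual implementation):

```python
from pathlib import Path

MEMORY_ROOT = Path("./memories").resolve()

def safe_memory_path(relative: str) -> Path:
    """Resolve a path inside MEMORY_ROOT, rejecting escapes like '../secret'."""
    candidate = (MEMORY_ROOT / relative).resolve()
    # The resolved path must be the root itself or live underneath it
    if candidate != MEMORY_ROOT and MEMORY_ROOT not in candidate.parents:
        raise ValueError(f"path escapes memory directory: {relative}")
    return candidate
```

Resolving before checking is the important part: it normalizes ".." segments and absolute paths so the containment test cannot be tricked.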

Use Cases:

| Use Case | Description |
|----------|-------------|
| Long-running workflows | Save progress as the agent works through complex tasks |
| Resumable tasks | Continue where you left off after an interruption |
| Multi-step planning | The agent can save plans, execute, and update them |
| Code generation | Save code incrementally, run tests, iterate |
| Data processing | Store intermediate results between steps |

Example: A code generation agent can save its plan to memory, write code incrementally, run tests, and resume if interrupted.


8. 👥 Sub-Agents System

Delegate tasks to specialized child agents:

weather_agent = OmniCoreAgent(name="weather_agent", ...)
filesystem_agent = OmniCoreAgent(name="filesystem_agent", mcp_tools=MCP_TOOLS, ...)

parent_agent = OmniCoreAgent(
    name="parent_agent",
    sub_agents=[weather_agent, filesystem_agent],
    ...
)

💡 When to Use: Use Sub-Agents when you have specialized agents (e.g., weather, code, data) and want a parent agent to delegate tasks intelligently. Great for building modular, reusable agent architectures.


9. ๐Ÿš Background Agents

Autonomous agents that run on schedule:

from omnicoreagent import BackgroundAgentService, MemoryRouter, EventRouter

bg_service = BackgroundAgentService(
    MemoryRouter("redis"),
    EventRouter("redis_stream")
)
bg_service.start_manager()

agent_config = {
    "agent_id": "system_monitor",
    "system_instruction": "Monitor system resources.",
    "model_config": {"provider": "openai", "model": "gpt-4o-mini"},
    "interval": 300,  # Run every 5 minutes
    "task_config": {
        "query": "Monitor CPU and alert if > 80%",
        "max_retries": 2
    }
}

await bg_service.create(agent_config)
bg_service.start_agent("system_monitor")

Management: start_agent(), pause_agent(), resume_agent(), stop_agent(), get_agent_status()

💡 When to Use: Perfect for scheduled tasks like system monitoring, periodic reports, data syncing, or any automation that runs independently without user interaction.
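At its core, a background agent is a scheduled loop with retries around each run. A toy sketch of that control flow (illustrative only; the real service manages scheduling for you, and the framework lists APScheduler among its dependencies):

```python
import asyncio

async def run_on_interval(task, interval: float, max_retries: int = 2,
                          iterations: int = 3) -> int:
    """Run an async task every `interval` seconds, retrying failed runs.

    Toy model of a background agent loop; `iterations` bounds the demo
    so it terminates instead of running forever.
    """
    completed = 0
    for i in range(iterations):
        for attempt in range(max_retries + 1):
            try:
                await task()
                completed += 1
                break
            except Exception:
                if attempt == max_retries:
                    break  # give up on this tick and wait for the next one
        if i < iterations - 1:
            await asyncio.sleep(interval)
    return completed
```

This mirrors the interval and max_retries fields in the agent_config above: each tick retries up to max_retries times, then waits for the next interval.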


10. 🔄 Workflow Agents

Orchestrate multiple agents for complex tasks:

from omnicoreagent import SequentialAgent, ParallelAgent, RouterAgent

# Sequential: Chain agents step-by-step
seq_agent = SequentialAgent(sub_agents=[agent1, agent2, agent3])
result = await seq_agent.run(initial_task="Analyze and report")

# Parallel: Run agents concurrently
par_agent = ParallelAgent(sub_agents=[agent1, agent2, agent3])
results = await par_agent.run(agent_tasks={
    "analyzer": "Analyze data",
    "processor": "Process results"
})

# Router: Intelligent task routing
router = RouterAgent(
    sub_agents=[code_agent, data_agent, research_agent],
    model_config={"provider": "openai", "model": "gpt-4o"}
)
result = await router.run(task="Find and summarize AI research")

💡 When to Use:

  • SequentialAgent: When tasks depend on each other (output of one → input of next)
  • ParallelAgent: When tasks are independent and can run simultaneously for speed
  • RouterAgent: When you need intelligent task routing to specialized agents
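The difference between the sequential and parallel patterns is easy to see with plain asyncio. A sketch with stand-in agents (simple async callables, not the framework classes):

```python
import asyncio

async def run_sequential(agents, initial_task):
    """SequentialAgent-style: each agent's output becomes the next one's input."""
    result = initial_task
    for agent in agents:
        result = await agent(result)
    return result

async def run_parallel(agents, tasks):
    """ParallelAgent-style: independent named tasks run concurrently."""
    names = list(tasks)
    results = await asyncio.gather(*(agents[n](tasks[n]) for n in names))
    return dict(zip(names, results))
```

A RouterAgent adds one more step in front: an LLM call that decides which sub-agent should receive the task before dispatching it.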

11. 🧠 Advanced Tool Use (BM25 Retrieval)

Automatically discover relevant tools at runtime using BM25 lexical search:

agent_config = {
    "enable_advanced_tool_use": True  # Enable BM25 retrieval
}

How It Works:

  1. All MCP tools loaded into in-memory registry
  2. BM25 index built over tool names, descriptions, parameters
  3. User task used as search query
  4. Top 5 relevant tools dynamically injected

Benefits: Scales to 1000+ tools, zero network I/O, deterministic, container-friendly.
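The retrieval step is standard BM25 scoring over tool metadata. A self-contained sketch of the idea (a toy index with common k1/b defaults, not the framework's internal implementation), ranking tool descriptions against a task query:

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

class BM25ToolIndex:
    """Rank tool descriptions against a task query with BM25 scoring."""

    def __init__(self, docs: list[str], k1: float = 1.5, b: float = 0.75):
        self.k1, self.b = k1, b
        self.docs = [tokenize(d) for d in docs]
        self.n = len(self.docs)
        self.avgdl = sum(len(d) for d in self.docs) / self.n
        # document frequency of each term
        self.df = Counter(t for d in self.docs for t in set(d))

    def _idf(self, term: str) -> float:
        n_t = self.df.get(term, 0)
        return math.log((self.n - n_t + 0.5) / (n_t + 0.5) + 1)

    def _score(self, query_terms: list[str], i: int) -> float:
        freqs = Counter(self.docs[i])
        norm = 1 - self.b + self.b * len(self.docs[i]) / self.avgdl
        return sum(
            self._idf(t) * freqs[t] * (self.k1 + 1) / (freqs[t] + self.k1 * norm)
            for t in query_terms if freqs[t]
        )

    def top_k(self, query: str, k: int = 5) -> list[int]:
        q = tokenize(query)
        ranked = sorted(range(self.n), key=lambda i: self._score(q, i), reverse=True)
        return ranked[:k]
```

Indexing tool names, descriptions, and parameters as documents, then querying with the user's task, yields the "top 5 relevant tools" injection described above.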

💡 When to Use: Enable when you have many MCP tools (10+) and want the agent to automatically discover the right tools for each task without manual selection.


12. 📊 Production Observability & Metrics

📈 Real-time Usage Metrics

OmniCoreAgent tracks every token, request, and millisecond. Each run() returns a metric object, and you can get cumulative stats anytime.

result = await agent.run("Analyze this data")
print(f"Request Tokens: {result['metric'].request_tokens}")
print(f"Time Taken: {result['metric'].total_time:.2f}s")

# Get aggregated metrics for the agent's lifecycle
stats = await agent.get_metrics()
print(f"Avg Response Time: {stats['average_time']:.2f}s")

๐Ÿ” Opik Tracing

Monitor and optimize your agents with deep traces:

# Add to .env
OPIK_API_KEY=your_opik_api_key
OPIK_WORKSPACE=your_workspace

What's Tracked: LLM call performance, tool execution traces, memory operations, agent workflow, bottlenecks.

Agent Execution Trace:
└── agent_execution: 4.6s
    ├── tools_registry_retrieval: 0.02s ✅
    ├── memory_retrieval_step: 0.08s ✅
    ├── llm_call: 4.5s ⚠️ (bottleneck!)
    └── action_execution: 0.03s ✅

💡 When to Use: Essential for production. Use Metrics for cost/performance monitoring, and Opik for identifying bottlenecks and debugging complex agent logic.
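Cumulative metrics like the ones get_metrics() returns reduce to a running aggregate updated once per request. A minimal sketch (the class and field names are illustrative, not the framework's actual metric objects):

```python
from dataclasses import dataclass

@dataclass
class UsageMetrics:
    """Running totals across agent runs; averages derived on demand."""
    requests: int = 0
    tokens: int = 0
    total_time: float = 0.0

    def record(self, request_tokens: int, seconds: float) -> None:
        # Called once per run() with that request's fine-grained usage
        self.requests += 1
        self.tokens += request_tokens
        self.total_time += seconds

    @property
    def average_time(self) -> float:
        return self.total_time / self.requests if self.requests else 0.0
```

Per-request values come from each run's metric field; the aggregate is what a lifecycle-level get_metrics() call would summarize.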


13. 🛡️ Prompt Injection Guardrails

Protect your agents against malicious inputs, jailbreaks, and instruction overrides before they reach the LLM.

agent_config = {
    "guardrail_config": {
        "strict_mode": True,      # Block all suspicious inputs
        "sensitivity": 0.85,      # 0.0 to 1.0 (higher = more sensitive)
        "enable_pattern_matching": True,
        "enable_heuristic_analysis": True
    }
}

agent = OmniCoreAgent(..., agent_config=agent_config)

# If a threat is detected:
# result['response'] -> "I'm sorry, but I cannot process this request due to safety concerns..."
# result['guardrail_result'] -> Full metadata about the detected threat

Key Protections:

  • Instruction Overrides: "Ignore previous instructions..."
  • Jailbreaks: DAN mode, roleplay escapes, etc.
  • Toxicity & Abuse: Built-in pattern recognition.
  • Payload Splitting: Detects fragmented attack attempts.

โš™๏ธ Configuration Options

Parameter Type Default Description
strict_mode bool False When True, any detection (even low confidence) blocks the request.
sensitivity float 1.0 Scaling factor for threat scores (0.0 to 1.0). Higher = more sensitive.
max_input_length int 10000 Maximum allowed query length before blocking.
enable_encoding_detection bool True Detects base64, hex, and other obfuscation attempts.
enable_heuristic_analysis bool True Analyzes prompt structure for typical attack patterns.
enable_sequential_analysis bool True Checks for phased attacks across multiple tokens.
enable_entropy_analysis bool True Detects high-entropy payloads common in injections.
allowlist_patterns list [] List of regex patterns that bypass safety checks.
blocklist_patterns list [] Custom regex patterns to always block.
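The pattern-matching layer, with its allowlist, blocklist, and length limit, can be sketched as a few regex checks (illustrative only; the real guardrail layers heuristic, entropy, encoding, and sequential analysis on top of this):

```python
import re

# Illustrative default patterns in the spirit of the protections listed above
DEFAULT_BLOCKLIST = (
    r"ignore (all )?previous instructions",
    r"\bdan mode\b",
)

def check_prompt(text: str,
                 blocklist=DEFAULT_BLOCKLIST,
                 allowlist=(),
                 max_input_length: int = 10000) -> dict:
    """Return a verdict dict loosely mimicking a guardrail_result payload."""
    if any(re.search(p, text, re.IGNORECASE) for p in allowlist):
        return {"blocked": False, "reason": "allowlisted"}
    if len(text) > max_input_length:
        return {"blocked": True, "reason": "max_input_length exceeded"}
    for pattern in blocklist:
        if re.search(pattern, text, re.IGNORECASE):
            return {"blocked": True, "reason": f"blocklist match: {pattern}"}
    return {"blocked": False, "reason": "clean"}
```

Allowlist patterns are checked first, matching the table above: anything they match bypasses the remaining checks.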

💡 When to Use: Always enable in user-facing applications to prevent prompt injection attacks and ensure agent reliability.


14. ๐ŸŒ Universal Model Support

Model-agnostic through LiteLLM โ€” use any provider:

# OpenAI
model_config = {"provider": "openai", "model": "gpt-4o"}

# Anthropic
model_config = {"provider": "anthropic", "model": "claude-3-5-sonnet-20241022"}

# Groq (Ultra-fast)
model_config = {"provider": "groq", "model": "llama-3.1-8b-instant"}

# Ollama (Local)
model_config = {"provider": "ollama", "model": "llama3.1:8b", "ollama_host": "http://localhost:11434"}

# OpenRouter (200+ models)
model_config = {"provider": "openrouter", "model": "anthropic/claude-3.5-sonnet"}

# Mistral AI
model_config = {"provider": "mistral", "model": "mistral-7b-instruct"}

# DeepSeek
model_config = {"provider": "deepseek", "model": "deepseek-chat"}

# Google Gemini
model_config = {"provider": "google", "model": "gemini-2.0-flash-exp"}

# Azure OpenAI
model_config = {"provider": "azure_openai", "model": "gpt-4o"}

Supported: OpenAI, Anthropic, Google Gemini, Groq, DeepSeek, Mistral, Azure OpenAI, OpenRouter, Ollama

💡 When to Use: Switch providers based on your needs: use cheaper models (Groq, DeepSeek) for simple tasks, powerful models (GPT-4o, Claude) for complex reasoning, and local models (Ollama) for privacy-sensitive applications.

📚 Examples

Basic Examples

python examples/cli/basic.py                    # Simple introduction
python examples/cli/run_omni_agent.py          # All features demo

Custom Agents

python examples/custom_agents/e_commerce_personal_shopper_agent.py
python examples/custom_agents/flightBooking_agent.py
python examples/custom_agents/real_time_customer_support_agent.py

Workflow Agents

python examples/workflow_agents/sequential_agent.py
python examples/workflow_agents/parallel_agent.py
python examples/workflow_agents/router_agent.py

Production Examples

| Example | Description | Location |
|---------|-------------|----------|
| DevOps Copilot | Safe bash execution, rate limiting, Prometheus metrics | examples/devops_copilot_agent/ |
| Deep Code Agent | Sandbox execution, memory backend, code analysis | examples/deep_code_agent/ |

โš™๏ธ Configuration

Environment Variables

# Required
LLM_API_KEY=your_api_key

# Optional: Memory backends
REDIS_URL=redis://localhost:6379/0
DATABASE_URL=postgresql://user:pass@localhost:5432/db
MONGODB_URI=mongodb://localhost:27017/omnicoreagent

# Optional: Observability
OPIK_API_KEY=your_opik_key
OPIK_WORKSPACE=your_workspace

Agent Configuration

agent_config = {
    "max_steps": 15,                    # Max reasoning steps
    "tool_call_timeout": 30,            # Tool timeout (seconds)
    "request_limit": 0,                 # 0 = unlimited
    "total_tokens_limit": 0,            # 0 = unlimited
    "memory_config": {"mode": "sliding_window", "value": 10000},
    "enable_advanced_tool_use": True,   # BM25 tool retrieval
    "enable_agent_skills": True,        # Specialized packaged skills
    "memory_tool_backend": "local"      # Persistent working memory
}
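The sliding_window memory mode presumably keeps only the most recent context that fits within the configured value. A toy sketch of that trimming logic (the token estimate here is a crude word count, purely illustrative):

```python
def sliding_window(messages: list[str], value: int) -> list[str]:
    """Keep the newest messages whose combined size fits within `value`.

    Illustrative only: real token counting depends on the model's tokenizer;
    this sketch uses a whitespace word count as a stand-in.
    """
    kept: list[str] = []
    total = 0
    for message in reversed(messages):  # walk from newest to oldest
        cost = len(message.split())
        if total + cost > value:
            break
        kept.append(message)
        total += cost
    return list(reversed(kept))  # restore chronological order
```

Walking newest-to-oldest guarantees the most recent turns survive when the window overflows.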

Model Configuration

model_config = {
    "provider": "openai",
    "model": "gpt-4o",
    "temperature": 0.7,
    "max_tokens": 2000,
    "top_p": 0.95
}

📋 Additional Model Configurations

# Azure OpenAI
model_config = {
    "provider": "azure_openai",
    "model": "gpt-4",
    "azure_endpoint": "https://your-resource.openai.azure.com",
    "azure_api_version": "2024-02-01"
}

# Ollama (Local)
model_config = {
    "provider": "ollama",
    "model": "llama3.1:8b",
    "ollama_host": "http://localhost:11434"
}

🧪 Testing & Development

# Clone
git clone https://github.com/omnirexflora-labs/omnicoreagent.git
cd omnicoreagent

# Setup
uv venv && source .venv/bin/activate
uv sync --dev

# Test
pytest tests/ -v
pytest tests/ --cov=src --cov-report=term-missing

๐Ÿ” Troubleshooting

Error Fix
Invalid API key Check .env: LLM_API_KEY=your_key
ModuleNotFoundError pip install omnicoreagent
Redis connection failed Start Redis or use MemoryRouter("in_memory")
MCP connection refused Ensure MCP server is running
๐Ÿ“‹ More Troubleshooting

OAuth callback server starts: Normal when using "auth": {"method": "oauth"}. Remove the auth block if you don't need OAuth.

Debug Mode: agent = OmniCoreAgent(..., debug=True)

Help: Check GitHub Issues


๐Ÿค Contributing

# Fork & clone
git clone https://github.com/omnirexflora-labs/omnicoreagent.git

# Setup
uv venv && source .venv/bin/activate
uv sync --dev
pre-commit install

# Submit PR

See CONTRIBUTING.md for guidelines.


📄 License

MIT License (see LICENSE)


👨‍💻 Author & Credits

Created by Abiola Adeshina

🌟 The OmniRexFlora Ecosystem

| Project | Description |
|---------|-------------|
| 🧠 OmniMemory | Self-evolving memory for autonomous agents |
| 🤖 OmniCoreAgent | Production-ready AI agent framework (this project) |
| ⚡ OmniDaemon | Event-driven runtime engine for AI agents |

๐Ÿ™ Acknowledgments

Built on: LiteLLM, FastAPI, Redis, Opik, Pydantic, APScheduler


Building the future of production-ready AI agent frameworks

โญ Star us on GitHub โ€ข ๐Ÿ› Report Bug โ€ข ๐Ÿ’ก Request Feature โ€ข ๐Ÿ“– Documentation
