OmniCoreAgent is a powerful Python AI Agent framework for building autonomous AI agents that think, reason, and execute complex tasks. Production-ready agents that use tools, manage memory, coordinate workflows, and handle real-world business logic.
Project description
๐ OmniCoreAgent
[!IMPORTANT] OmniAgent has been renamed to OmniCoreAgent. To avoid breaking changes,
OmniAgentis still available as a deprecated alias, but please update your imports and class usage toOmniCoreAgentas soon as possible.
Production-Ready AI Agent Framework
Build autonomous AI agents that think, reason, and execute complex tasks.
Quick Start โข Features โข Examples โข Configuration โข Documentation
๐ Table of Contents
Click to expand
Getting Started
- ๐ The OmniRexFlora AI Ecosystem
- ๐ฏ What is OmniCoreAgent?
- โก Quick Start
- ๐๏ธ Architecture Overview
Core Features
- ๐ค OmniCoreAgent โ The Heart of the Framework
- ๐ง Multi-Tier Memory System
- ๐ก Event System
- ๐ Built-in MCP Client
- ๐ ๏ธ Local Tools System
- ๐งฉ Agent Skills System
- ๐พ Memory Tool Backend
- ๐ฅ Sub-Agents System
- ๐ Background Agents
- ๐ Workflow Agents
- ๐ง Advanced Tool Use (BM25)
- ๐ Production Observability & Metrics
- ๐ก๏ธ Prompt Injection Guardrails
- ๐ Universal Model Support
Reference
๐ The OmniRexFlora AI Ecosystem
OmniCoreAgent is part of a complete "Operating System for AI Agents" โ three powerful tools that work together:
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
โ ๐ OmniRexFlora AI Ecosystem โ
โ "The Operating System for AI Agents" โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโค
โ โ
โ ๐ง OmniMemory ๐ค OmniCoreAgent โ
โ โโโโโโโโโโโโโ โโโโโโโโโโโโโโโโโ โ
โ The Brain The Worker โก OmniDaemon โ
โ โโโโโโโโโโโโโ โ
โ โข Self-evolving memory โโโโบ โข Agent building The Runtime โ
โ โข Dual-agent synthesis โข Tool orchestration โ
โ โข Conflict resolution โข Multi-backend โข Event-driven โโโโโค
โ โข Composite scoring โข Workflow agents execution โ
โ โข Production โ
โ github.com/omnirexflora- YOU ARE HERE deployment โ
โ labs/omnimemory โข Framework- โ
โ agnostic โ
โ โ
โ github.com/ โ
โ omnirexflora-labs/ โ
โ OmniDaemon โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
| Tool | Role | Description |
|---|---|---|
| ๐ง OmniMemory | The Brain | Self-evolving memory with dual-agent synthesis & conflict resolution |
| ๐ค OmniCoreAgent | The Worker | Agent building, tool orchestration, multi-backend flexibility |
| โก OmniDaemon | The Runtime | Event-driven execution, production deployment, framework-agnostic |
๐ก Like how Linux runs applications, OmniRexFlora runs AI agents โ reliably, at scale, in production.
๐ฏ What is OmniCoreAgent?
OmniCoreAgent is a production-ready Python framework for building autonomous AI agents that:
| Capability | Description |
|---|---|
| ๐ค Think & Reason | Not just chatbots โ agents that plan multi-step workflows |
| ๐ ๏ธ Use Tools | Connect to APIs, databases, files, MCP servers, with Advanced Tool Use |
| ๐ง Remember Context | Multi-tier memory: Redis, PostgreSQL, MongoDB, SQLite |
| ๐ Orchestrate Workflows | Sequential, Parallel, and Router agents |
| ๐ Run in Production | Monitoring, observability, error handling built-in |
| ๐ Plug & Play | Switch backends at runtime (Redis โ MongoDB โ PostgreSQL) |
โก Quick Start
1. Install (10 seconds)
# Using uv (recommended)
uv add omnicoreagent
# Or with pip
pip install omnicoreagent
2. Set API Key (10 seconds)
echo "LLM_API_KEY=your_openai_api_key_here" > .env
3. Create Your First Agent (30 seconds)
import asyncio
from omnicoreagent import OmniCoreAgent
async def main():
agent = OmniCoreAgent(
name="my_agent",
system_instruction="You are a helpful assistant.",
model_config={"provider": "openai", "model": "gpt-4o"}
)
result = await agent.run("Hello, world!")
print(result['response'])
await agent.cleanup()
if __name__ == "__main__":
asyncio.run(main())
โ That's it! You just built an AI agent with session management, memory persistence, event streaming, and error handling.
๐จ Common Errors & Fixes
| Error | Fix |
|---|---|
Invalid API key |
Check .env file: LLM_API_KEY=sk-... (no quotes) |
ModuleNotFoundError |
Run: pip install omnicoreagent |
Event loop is closed |
Use asyncio.run(main()) |
๐๏ธ Architecture Overview
OmniCoreAgent Framework
โโโ ๐ค Core Agent System
โ โโโ OmniCoreAgent (Main Class)
โ โโโ ReactAgent (Reasoning Engine)
โ โโโ Tool Orchestration
โ
โโโ ๐ง Memory System (5 Backends)
โ โโโ InMemoryStore (Fast Dev)
โ โโโ RedisMemoryStore (Production)
โ โโโ DatabaseMemory (PostgreSQL/MySQL/SQLite)
โ โโโ MongoDBMemory (Document Storage)
โ
โโโ ๐ก Event System
โ โโโ InMemoryEventStore (Development)
โ โโโ RedisStreamEventStore (Production)
โ
โโโ ๐ ๏ธ Tool System
โ โโโ Local Tools Registry
โ โโโ MCP Integration
โ โโโ Advanced Tool Use (BM25)
โ โโโ Memory Tool Backend
โ
โโโ ๐ Background Agents
โ โโโ Autonomous Scheduled Tasks
โ
โโโ ๐ Workflow Agents
โ โโโ SequentialAgent
โ โโโ ParallelAgent
โ โโโ RouterAgent
โ
โโโ ๐งฉ Agent Skills System
โ โโโ SkillManager (Discovery)
โ โโโ Multi-language Script Dispatcher
โ โโโ agentskills.io Spec Alignment
โ
โโโ ๐ Built-in MCP Client
โโโ stdio, SSE, HTTP transports
โโโ OAuth & Bearer auth
๐ฏ Core Features
1. ๐ค OmniCoreAgent โ The Heart of the Framework
from omnicoreagent import OmniCoreAgent, ToolRegistry, MemoryRouter, EventRouter
# Basic Agent
agent = OmniCoreAgent(
name="assistant",
system_instruction="You are a helpful assistant.",
model_config={"provider": "openai", "model": "gpt-4o"}
)
# Production Agent with All Features
agent = OmniCoreAgent(
name="production_agent",
system_instruction="You are a production agent.",
model_config={"provider": "openai", "model": "gpt-4o"},
local_tools=tool_registry,
mcp_tools=[...],
memory_router=MemoryRouter("redis"),
event_router=EventRouter("redis_stream"),
agent_config={
"max_steps": 20,
"enable_advanced_tool_use": True,
"enable_agent_skills": True,
"memory_tool_backend": "local",
"guardrail_config": {"strict_mode": True} # Enable Safety Guardrails
}
)
# Key Methods
await agent.run(query) # Execute task
await agent.run(query, session_id="user_1") # With session context
await agent.connect_mcp_servers() # Connect MCP tools
await agent.list_all_available_tools() # List all tools
await agent.swith_memory_store("mongodb") # Switch backend at runtime!
await agent.get_session_history(session_id) # Retrieve conversation history
await agent.clear_session_history(session_id) # Clear history (session_id optional, clears all if None)
await agent.get_events(session_id) # Get event history
await agent.get_memory_store_type() # Get current memory router type
await agent.cleanup() # Clean up resources and remove the agent and the config
await agent.cleanup_mcp_servers() # Clean up MCP servers without removing the agent and the config
await agent.get_metrics() # Get cumulative usage (tokens, requests, time)
[!TIP] Each
agent.run()call now returns ametricfield containing fine-grained usage for that specific request.
๐ก When to Use: OmniCoreAgent is your go-to for any AI task โ from simple Q&A to complex multi-step workflows. Start here for any agent project.
2. ๐ง Multi-Tier Memory System (Plug & Play)
5 backends with runtime switching โ start with Redis, switch to MongoDB, then PostgreSQL โ all on the fly!
from omnicoreagent import OmniCoreAgent, MemoryRouter
# Start with Redis
agent = OmniCoreAgent(
name="my_agent",
memory_router=MemoryRouter("redis"),
model_config={"provider": "openai", "model": "gpt-4o"}
)
# Switch at runtime โ no restart needed!
agent.swith_memory_store("mongodb") # Switch to MongoDB
agent.swith_memory_store("database") # Switch to PostgreSQL/MySQL/SQLite
agent.swith_memory_store("in_memory") # Switch to in-memory
agent.swith_memory_store("redis") # Back to Redis
| Backend | Use Case | Environment Variable |
|---|---|---|
in_memory |
Fast development | โ |
redis |
Production persistence | REDIS_URL |
database |
PostgreSQL/MySQL/SQLite | DATABASE_URL |
mongodb |
Document storage | MONGODB_URI |
๐ก When to Use: Use
in_memoryfor development/testing,redisfor production with fast access,databasefor SQL-based systems,mongodbfor document-heavy applications.
3. ๐ก Event System (Plug & Play)
Real-time event streaming with runtime switching:
from omnicoreagent import EventRouter
# Start with in-memory
agent = OmniCoreAgent(
event_router=EventRouter("in_memory"),
...
)
# Switch to Redis Streams for production
agent.switch_event_store("redis_stream")
agent.get_event_store_type() # Get current event router type
# Stream events in real-time
async for event in agent.stream_events(session_id):
print(f"{event.type}: {event.payload}")
Event Types: user_message, agent_message, tool_call_started, tool_call_result, final_answer, agent_thought, sub_agent_started, sub_agent_error, sub_agent_result
๐ก When to Use: Enable events when you need real-time monitoring, debugging, or building UIs that show agent progress. Essential for production observability.
4. ๐ Built-in MCP Client
Connect to any MCP-compatible service with support for multiple transport protocols and authentication methods.
Transport Types
1. stdio โ Local MCP servers (process communication)
{
"name": "filesystem",
"transport_type": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/home"]
}
2. streamable_http โ Remote servers with HTTP streaming
# With Bearer Token
{
"name": "github",
"transport_type": "streamable_http",
"url": "http://localhost:8080/mcp",
"headers": {
"Authorization": "Bearer your-token" # optional
},
"timeout": 60 # optional
}
# With OAuth 2.0 (auto-starts callback server on localhost:3000)
{
"name": "oauth_server",
"transport_type": "streamable_http",
"auth": {
"method": "oauth"
},
"url": "http://localhost:8000/mcp"
}
3. sse โ Server-Sent Events
{
"name": "sse_server",
"transport_type": "sse",
"url": "http://localhost:3000/sse",
"headers": {
"Authorization": "Bearer token" # optional
},
"timeout": 60, # optional
"sse_read_timeout": 120 # optional
}
Complete Example with All 3 Transport Types
agent = OmniCoreAgent(
name="multi_mcp_agent",
system_instruction="You have access to filesystem, GitHub, and live data.",
model_config={"provider": "openai", "model": "gpt-4o"},
mcp_tools=[
# 1. stdio - Local filesystem
{
"name": "filesystem",
"transport_type": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/home"]
},
# 2. streamable_http - Remote API (supports Bearer token or OAuth)
{
"name": "github",
"transport_type": "streamable_http",
"url": "http://localhost:8080/mcp",
"headers": {"Authorization": "Bearer github-token"},
"timeout": 60
},
# 3. sse - Real-time streaming
{
"name": "live_data",
"transport_type": "sse",
"url": "http://localhost:3000/sse",
"headers": {"Authorization": "Bearer token"},
"sse_read_timeout": 120
}
]
)
await agent.connect_mcp_servers()
tools = await agent.list_all_available_tools() # All MCP + local tools
result = await agent.run("List all Python files and get latest commits")
Transport Comparison
| Transport | Use Case | Auth Methods |
|---|---|---|
stdio |
Local MCP servers, CLI tools | None (local process) |
streamable_http |
Remote APIs, cloud services | Bearer token, OAuth 2.0 |
sse |
Real-time data, streaming | Bearer token, custom headers |
๐ก When to Use: Use MCP when you need to connect to external tools and services. Choose
stdiofor local CLI tools,streamable_httpfor REST APIs, andssefor real-time streaming data.
5. ๐ ๏ธ Local Tools System
Register any Python function as an AI tool:
from omnicoreagent import ToolRegistry
tools = ToolRegistry()
@tools.register_tool("get_weather")
def get_weather(city: str) -> str:
"""Get weather for a city."""
return f"Weather in {city}: Sunny, 25ยฐC"
@tools.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
"""Calculate rectangle area."""
return f"Area: {length * width} square units"
agent = OmniCoreAgent(
name="tool_agent",
local_tools=tools, # Your custom tools!
...
)
๐ก When to Use: Use Local Tools when you need custom business logic, internal APIs, or any Python functionality that isn't available via MCP servers.
6. ๐งฉ Agent Skills System (Packaged Capabilities)
OmniCoreAgent supports the Agent Skills specification โ self-contained capability packages that provide specialized knowledge, executable scripts, and documentation.
agent_config = {
"enable_agent_skills": True # Enable discovery and tools for skills
}
Key Concepts:
- Discovery: Agents automatically discover skills installed in
.agents/skills/[skill-name]. - Activation (
SKILL.md): Agents are instructed to read the "Activation Document" first to understand how to use the skill's specific capabilities. - Polyglot Execution: The
run_skill_scripttool handles scripts in Python, JavaScript/Node, TypeScript, Ruby, Perl, and Shell (bash/sh).
Directory Structure:
.agents/skills/my-skill-name/
โโโ SKILL.md # The "Activation" document (instructions + metadata)
โโโ scripts/ # Multi-language executable scripts
โโโ references/ # Deep-dive documentation
โโโ assets/ # Templates, examples, and resources
Skill Tools:
read_skill_file(skill_name, file_path): Access any file within a skill (start withSKILL.md).run_skill_script(skill_name, script_name, args?): Execute bundled scripts with automatic interpreter detection.
๐ Learn More: To learn how to create your own agent skills, visit agentskills.io.
7. ๐พ Memory Tool Backend (File-Based Working Memory)
A file-based persistent storage system that gives your agent a local workspace to save and manage files during long-running tasks. Files are stored in a ./memories/ directory with safe concurrent access and path traversal protection.
agent_config = {
"memory_tool_backend": "local" # Enable file-based memory
}
# Agent automatically gets these tools:
# - memory_view: View/list files in memory directory
# - memory_create_update: Create new files or append/overwrite existing ones
# - memory_str_replace: Find and replace text within files
# - memory_insert: Insert text at specific line numbers
# - memory_delete: Delete files from memory
# - memory_rename: Rename or move files
# - memory_clear_all: Clear entire memory directory
How It Works:
- Files stored in
./memories/directory (auto-created) - Thread-safe with file locking for concurrent access
- Path traversal protection for security
- Persists across agent restarts
Use Cases:
| Use Case | Description |
|---|---|
| Long-running workflows | Save progress as agent works through complex tasks |
| Resumable tasks | Continue where you left off after interruption |
| Multi-step planning | Agent can save plans, execute, and update |
| Code generation | Save code incrementally, run tests, iterate |
| Data processing | Store intermediate results between steps |
Example: A code generation agent can save its plan to memory, write code incrementally, run tests, and resume if interrupted.
8. ๐ฅ Sub-Agents System
Delegate tasks to specialized child agents:
weather_agent = OmniCoreAgent(name="weather_agent", ...)
filesystem_agent = OmniCoreAgent(name="filesystem_agent", mcp_tools=MCP_TOOLS, ...)
parent_agent = OmniCoreAgent(
name="parent_agent",
sub_agents=[weather_agent, filesystem_agent],
...
)
๐ก When to Use: Use Sub-Agents when you have specialized agents (e.g., weather, code, data) and want a parent agent to delegate tasks intelligently. Great for building modular, reusable agent architectures.
9. ๐ Background Agents
Autonomous agents that run on schedule:
from omnicoreagent import BackgroundAgentService, MemoryRouter, EventRouter
bg_service = BackgroundAgentService(
MemoryRouter("redis"),
EventRouter("redis_stream")
)
bg_service.start_manager()
agent_config = {
"agent_id": "system_monitor",
"system_instruction": "Monitor system resources.",
"model_config": {"provider": "openai", "model": "gpt-4o-mini"},
"interval": 300, # Run every 5 minutes
"task_config": {
"query": "Monitor CPU and alert if > 80%",
"max_retries": 2
}
}
await bg_service.create(agent_config)
bg_service.start_agent("system_monitor")
Management: start_agent(), pause_agent(), resume_agent(), stop_agent(), get_agent_status()
๐ก When to Use: Perfect for scheduled tasks like system monitoring, periodic reports, data syncing, or any automation that runs independently without user interaction.
10. ๐ Workflow Agents
Orchestrate multiple agents for complex tasks:
from omnicoreagent import SequentialAgent, ParallelAgent, RouterAgent
# Sequential: Chain agents step-by-step
seq_agent = SequentialAgent(sub_agents=[agent1, agent2, agent3])
result = await seq_agent.run(initial_task="Analyze and report")
# Parallel: Run agents concurrently
par_agent = ParallelAgent(sub_agents=[agent1, agent2, agent3])
results = await par_agent.run(agent_tasks={
"analyzer": "Analyze data",
"processor": "Process results"
})
# Router: Intelligent task routing
router = RouterAgent(
sub_agents=[code_agent, data_agent, research_agent],
model_config={"provider": "openai", "model": "gpt-4o"}
)
result = await router.run(task="Find and summarize AI research")
๐ก When to Use:
- SequentialAgent: When tasks depend on each other (output of one โ input of next)
- ParallelAgent: When tasks are independent and can run simultaneously for speed
- RouterAgent: When you need intelligent task routing to specialized agents
11. ๐ง Advanced Tool Use (BM25 Retrieval)
Automatically discover relevant tools at runtime using BM25 lexical search:
agent_config = {
"enable_advanced_tool_use": True # Enable BM25 retrieval
}
How It Works:
- All MCP tools loaded into in-memory registry
- BM25 index built over tool names, descriptions, parameters
- User task used as search query
- Top 5 relevant tools dynamically injected
Benefits: Scales to 1000+ tools, zero network I/O, deterministic, container-friendly.
๐ก When to Use: Enable when you have many MCP tools (10+) and want the agent to automatically discover the right tools for each task without manual selection.
12. ๐ Production Observability & Metrics
๐ Real-time Usage Metrics
OmniCoreAgent tracks every token, request, and millisecond. Each run() returns a metric object, and you can get cumulative stats anytime.
result = await agent.run("Analyze this data")
print(f"Request Tokens: {result['metric'].request_tokens}")
print(f"Time Taken: {result['metric'].total_time:.2f}s")
# Get aggregated metrics for the agent's lifecycle
stats = await agent.get_metrics()
print(f"Avg Response Time: {stats['average_time']:.2f}s")
๐ Opik Tracing
Monitor and optimize your agents with deep traces:
# Add to .env
OPIK_API_KEY=your_opik_api_key
OPIK_WORKSPACE=your_workspace
What's Tracked: LLM call performance, tool execution traces, memory operations, agent workflow, bottlenecks.
Agent Execution Trace:
โโโ agent_execution: 4.6s
โโโ tools_registry_retrieval: 0.02s โ
โโโ memory_retrieval_step: 0.08s โ
โโโ llm_call: 4.5s โ ๏ธ (bottleneck!)
โโโ action_execution: 0.03s โ
๐ก When to Use: Essential for production. Use Metrics for cost/performance monitoring, and Opik for identifying bottlenecks and debugging complex agent logic.
13. ๐ก๏ธ Prompt Injection Guardrails
Protect your agents against malicious inputs, jailbreaks, and instruction overrides before they reach the LLM.
agent_config = {
"guardrail_config": {
"strict_mode": True, # Block all suspicious inputs
"sensitivity": 0.85, # 0.0 to 1.0 (higher = more sensitive)
"enable_pattern_matching": True,
"enable_heuristic_analysis": True
}
}
agent = OmniCoreAgent(..., agent_config=agent_config)
# If a threat is detected:
# result['response'] -> "I'm sorry, but I cannot process this request due to safety concerns..."
# result['guardrail_result'] -> Full metadata about the detected threat
Key Protections:
- Instruction Overrides: "Ignore previous instructions..."
- Jailbreaks: DAN mode, roleplay escapes, etc.
- Toxicity & Abuse: Built-in pattern recognition.
- Payload Splitting: Detects fragmented attack attempts.
โ๏ธ Configuration Options
| Parameter | Type | Default | Description |
|---|---|---|---|
strict_mode |
bool |
False |
When True, any detection (even low confidence) blocks the request. |
sensitivity |
float |
1.0 |
Scaling factor for threat scores (0.0 to 1.0). Higher = more sensitive. |
max_input_length |
int |
10000 |
Maximum allowed query length before blocking. |
enable_encoding_detection |
bool |
True |
Detects base64, hex, and other obfuscation attempts. |
enable_heuristic_analysis |
bool |
True |
Analyzes prompt structure for typical attack patterns. |
enable_sequential_analysis |
bool |
True |
Checks for phased attacks across multiple tokens. |
enable_entropy_analysis |
bool |
True |
Detects high-entropy payloads common in injections. |
allowlist_patterns |
list |
[] |
List of regex patterns that bypass safety checks. |
blocklist_patterns |
list |
[] |
Custom regex patterns to always block. |
๐ก When to Use: Always enable in user-facing applications to prevent prompt injection attacks and ensure agent reliability.
14. ๐ Universal Model Support
Model-agnostic through LiteLLM โ use any provider:
# OpenAI
model_config = {"provider": "openai", "model": "gpt-4o"}
# Anthropic
model_config = {"provider": "anthropic", "model": "claude-3-5-sonnet-20241022"}
# Groq (Ultra-fast)
model_config = {"provider": "groq", "model": "llama-3.1-8b-instant"}
# Ollama (Local)
model_config = {"provider": "ollama", "model": "llama3.1:8b", "ollama_host": "http://localhost:11434"}
# OpenRouter (200+ models)
model_config = {"provider": "openrouter", "model": "anthropic/claude-3.5-sonnet"}
#mistral ai
model_config = {"provider": "mistral", "model": "mistral-7b-instruct"}
#deepseek
model_config = {"provider": "deepseek", "model": "deepseek-chat"}
#google gemini
model_config = {"provider": "google", "model": "gemini-2.0-flash-exp"}
#azure openai
model_config = {"provider": "azure_openai", "model": "gpt-4o"}
Supported: OpenAI, Anthropic, Google Gemini, Groq, DeepSeek, Mistral, Azure OpenAI, OpenRouter, Ollama
๐ก When to Use: Switch providers based on your needs โ use cheaper models (Groq, DeepSeek) for simple tasks, powerful models (GPT-4o, Claude) for complex reasoning, and local models (Ollama) for privacy-sensitive applications.
๐ Examples
Basic Examples
python examples/cli/basic.py # Simple introduction
python examples/cli/run_omni_agent.py # All features demo
Custom Agents
python examples/custom_agents/e_commerce_personal_shopper_agent.py
python examples/custom_agents/flightBooking_agent.py
python examples/custom_agents/real_time_customer_support_agent.py
Workflow Agents
python examples/workflow_agents/sequential_agent.py
python examples/workflow_agents/parallel_agent.py
python examples/workflow_agents/router_agent.py
Production Examples
| Example | Description | Location |
|---|---|---|
| DevOps Copilot | Safe bash execution, rate limiting, Prometheus metrics | examples/devops_copilot_agent/ |
| Deep Code Agent | Sandbox execution, memory backend, code analysis | examples/deep_code_agent/ |
โ๏ธ Configuration
Environment Variables
# Required
LLM_API_KEY=your_api_key
# Optional: Memory backends
REDIS_URL=redis://localhost:6379/0
DATABASE_URL=postgresql://user:pass@localhost:5432/db
MONGODB_URI=mongodb://localhost:27017/omnicoreagent
# Optional: Observability
OPIK_API_KEY=your_opik_key
OPIK_WORKSPACE=your_workspace
Agent Configuration
agent_config = {
"max_steps": 15, # Max reasoning steps
"tool_call_timeout": 30, # Tool timeout (seconds)
"request_limit": 0, # 0 = unlimited
"total_tokens_limit": 0, # 0 = unlimited
"memory_config": {"mode": "sliding_window", "value": 10000},
"enable_advanced_tool_use": True, # BM25 tool retrieval
"enable_agent_skills": True, # Specialized packaged skills
"memory_tool_backend": "local" # Persistent working memory
}
Model Configuration
model_config = {
"provider": "openai",
"model": "gpt-4o",
"temperature": 0.7,
"max_tokens": 2000,
"top_p": 0.95
}
๐ Additional Model Configurations
# Azure OpenAI
model_config = {
"provider": "azureopenai",
"model": "gpt-4",
"azure_endpoint": "https://your-resource.openai.azure.com",
"azure_api_version": "2024-02-01"
}
# Ollama (Local)
model_config = {
"provider": "ollama",
"model": "llama3.1:8b",
"ollama_host": "http://localhost:11434"
}
๐งช Testing & Development
# Clone
git clone https://github.com/omnirexflora-labs/omnicoreagent.git
cd omnicoreagent
# Setup
uv venv && source .venv/bin/activate
uv sync --dev
# Test
pytest tests/ -v
pytest tests/ --cov=src --cov-report=term-missing
๐ Troubleshooting
| Error | Fix |
|---|---|
Invalid API key |
Check .env: LLM_API_KEY=your_key |
ModuleNotFoundError |
pip install omnicoreagent |
Redis connection failed |
Start Redis or use MemoryRouter("in_memory") |
MCP connection refused |
Ensure MCP server is running |
๐ More Troubleshooting
OAuth Server Starts: Normal when using "auth": {"method": "oauth"}. Remove if not needed.
Debug Mode: agent = OmniCoreAgent(..., debug=True)
Help: Check GitHub Issues
๐ค Contributing
# Fork & clone
git clone https://github.com/omnirexflora-labs/omnicoreagent.git
# Setup
uv venv && source .venv/bin/activate
uv sync --dev
pre-commit install
# Submit PR
See CONTRIBUTING.md for guidelines.
๐ License
MIT License โ see LICENSE
๐จโ๐ป Author & Credits
Created by Abiola Adeshina
- GitHub: @Abiorh001
- X (Twitter): @abiorhmangana
- Email: abiolaadedayo1993@gmail.com
๐ The OmniRexFlora Ecosystem
| Project | Description |
|---|---|
| ๐ง OmniMemory | Self-evolving memory for autonomous agents |
| ๐ค OmniCoreAgent | Production-ready AI agent framework (this project) |
| โก OmniDaemon | Event-driven runtime engine for AI agents |
๐ Acknowledgments
Built on: LiteLLM, FastAPI, Redis, Opik, Pydantic, APScheduler
Building the future of production-ready AI agent frameworks
โญ Star us on GitHub โข ๐ Report Bug โข ๐ก Request Feature โข ๐ Documentation
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
Filter files by name, interpreter, ABI, and platform.
If you're not sure about the file name format, learn more about wheel file names.
Copy a direct link to the current filters
File details
Details for the file omnicoreagent-0.3.0.tar.gz.
File metadata
- Download URL: omnicoreagent-0.3.0.tar.gz
- Upload date:
- Size: 122.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
4f8c74b888deaf03f7c28ae5280a3dcf654996c20bdaff6d37e70667c98dc6c4
|
|
| MD5 |
99e466efec67a2b877fb2c3d7ceee721
|
|
| BLAKE2b-256 |
0d040c7ba93a5a476599f2de7466eb2196b497777c2eac14b01b440cd4844f66
|
File details
Details for the file omnicoreagent-0.3.0-py3-none-any.whl.
File metadata
- Download URL: omnicoreagent-0.3.0-py3-none-any.whl
- Upload date:
- Size: 143.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
61e90b4293c87dc2a28ff27d4f505f5625a8b7b00f0e739b4e1b81879f9e133d
|
|
| MD5 |
a742cb9c0b2ce1ee6d74107376a9be50
|
|
| BLAKE2b-256 |
fb9bf81f8c539f3ddeca52e45bff998db750a1024ee31998ab2ceb68764e8cf0
|