Production-Ready AI Agent Runtime - Automate everything with LLM-powered agents
AgentOS - Production AI Agent Runtime
AgentOS is a production-ready runtime for autonomous AI agents with built-in memory management, safe tool sandboxing, and multi-provider LLM support.
Demo
Quick Start
Installation
Clone the repository, then run the installer for your platform:
# Linux
python3 install_linux.py
# Windows
python install_windows.py
Basic Usage
- Create an agent manifest (agent.yaml):
name: my_assistant
model_provider: github
model_version: openai/gpt-4o-mini
isolated: false
- Run your agent:
agentos run agent.yaml --task "create a Python script that prints hello world"
- Monitor running agents:
agentos ps
Features
Production Ready
- Comprehensive logging with structured output and per-agent log files
- Intelligent retry logic with exponential backoff for LLM API calls
- Process management with real-time monitoring and graceful shutdown
- Security controls blocking destructive commands and injection attacks
- Timeout protection preventing runaway processes
- Resource limits for memory, CPU, and execution steps
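The timeout-protection idea above can be sketched with Python's standard subprocess module. This is an illustrative sketch, not AgentOS's actual implementation; the helper name `run_with_timeout` is made up for this example:

```python
import subprocess
import sys

def run_with_timeout(cmd: list[str], timeout_s: int = 300) -> int:
    """Run a command, killing it if it exceeds the timeout.

    Returns the command's exit code, or 124 on timeout (the
    conventional timeout exit code, matching the Exit Codes
    table later in this README).
    """
    try:
        return subprocess.run(cmd, timeout=timeout_s).returncode
    except subprocess.TimeoutExpired:
        return 124

# A sleep longer than the timeout is reported as exit code 124.
code = run_with_timeout(
    [sys.executable, "-c", "import time; time.sleep(10)"], timeout_s=1
)
```

`subprocess.run` raises `TimeoutExpired` after killing the child, so the caller never blocks on a runaway process.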
Interactive Chat Mode
- Real-time conversations with AI using any LLM provider
- Rich terminal UI with markdown rendering and syntax highlighting
- Persistent chat history with SQLite backend and search functionality
- Conversation export to JSON, Markdown, or plain text formats
- Context preservation across sessions with configurable context window
- Customizable prompts and temperature settings
- Offline support with local Ollama models
- API-free options using GitHub or Ollama
Security First
- Command filtering blocks 20+ dangerous operations (rm, sudo, dd, etc.)
- Input validation prevents shell injection with pattern detection
- Path traversal protection blocks ../ and absolute path escapes
- Docker isolation (optional) with memory/CPU limits and network isolation
- Resource limits configurable per-agent (memory, CPU, timeout, steps)
- Security context for audit logging and tracking
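Command filtering of this kind can be sketched in a few lines. This is illustrative only: the blocklist mirrors the DESTRUCTIVE_COMMANDS manifest field shown later in this README, not AgentOS's full rule set, and `is_blocked` is a hypothetical helper name:

```python
import shlex

# Subset of the DESTRUCTIVE_COMMANDS manifest field (illustrative).
BLOCKED = {"rm", "rmdir", "sudo", "dd", "mkfs", "format"}

def is_blocked(command: str) -> bool:
    """Return True if the command's program name is on the blocklist."""
    try:
        tokens = shlex.split(command)
    except ValueError:   # unbalanced quotes etc.
        return True      # treat unparseable input as unsafe
    return bool(tokens) and tokens[0] in BLOCKED
```

Failing closed on unparseable input is the key design choice here: anything the tokenizer cannot make sense of is rejected rather than executed.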
Multi-LLM Support (6+ Providers)
- GitHub Models (default) - Free tier available
- OpenAI GPT-4o, GPT-4, GPT-3.5-turbo
- Anthropic Claude 3.5 Sonnet, Claude 3 Opus
- Google Gemini 2.0 Flash, 1.5 Pro
- Cohere Command R+, Command
- Ollama (local models) - No API key required
Process Management
- Agent registry with SQLite backend
- Real-time process monitoring with CPU/memory tracking
- Status tracking (running, completed, failed, stopped)
- Log aggregation per agent with rotation support
- Graceful shutdown with signal handlers (SIGTERM/SIGINT)
- Agent lifecycle management with context managers
Retry Logic & Resilience
- Exponential backoff with configurable jitter
- Automatic retry for transient API failures
- Customizable retry strategies (aggressive, gentle, default)
- Per-provider retry configuration
- Circuit breaker patterns for failing services
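The exponential-backoff-with-jitter behaviour described above can be sketched as follows; parameter names mirror the retry_config fields shown later in this README, but the function itself is a hypothetical sketch, not the AgentOS API:

```python
import random

def backoff_delay(attempt: int, initial_delay: float = 1.0,
                  exponential_base: float = 2.0, max_delay: float = 30.0,
                  jitter: bool = True) -> float:
    """Delay in seconds before retry number `attempt` (0-indexed)."""
    delay = min(initial_delay * (exponential_base ** attempt), max_delay)
    if jitter:
        # Randomize so many clients don't retry in lockstep
        # (the "thundering herd" problem).
        delay *= random.uniform(0.5, 1.5)
    return delay
```

With the defaults, successive retries wait roughly 1s, 2s, 4s, 8s, and so on, capped at 30s before jitter is applied.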
Commands
Run Agent
agentos run <manifest> --task "<task>" [--timeout 300] [--verbose]
Interactive Chat Mode
Chat with any LLM provider in a conversational interface:
# Start chat with the default provider
agentos chat
# Use different providers
agentos chat --provider claude
agentos chat --provider gemini --temperature 0.3
agentos chat --provider ollama # Local models, no API key needed
# Customize the experience
agentos chat --system-prompt "You are a Python expert"
agentos chat --provider openai --model gpt-4
In-chat commands: exit / quit (end), clear (history), help (commands), status (info)
See Chat Mode Guide for detailed usage.
List Agents
agentos ps
View Logs
agentos logs <agent_name> [--tail 50]
Stop Agent
agentos stop <agent_name>
Clean Up
agentos prune # Remove stopped agents
Agent Manifest
name: research_assistant
model_provider: github
model_version: openai/gpt-4o-mini
isolated: false
DESTRUCTIVE_COMMANDS:
- rm
- rmdir
- sudo
- dd
- mkfs
- format
Required Fields
- name: Agent identifier
- model_provider: LLM provider (github, openai, claude, gemini, cohere, ollama)
- model_version: Specific model to use
Optional Fields
- isolated: Enable Docker sandboxing (default: true)
- DESTRUCTIVE_COMMANDS: Custom list of blocked commands
Configuration
Environment Variables
Create a .env file:
# API Keys (set as needed)
GIT_HUB_TOKEN=your_github_token
OPENAI_API_KEY=your_openai_key
CLAUDE_API_KEY=your_claude_key
GEMINI_API_KEY=your_gemini_key
COHERE_API_KEY=your_cohere_key
Logging
Logs are stored in ~/.agentos/logs/:
- agentos.log - Main system log
- <agent_name>_<id>.log - Per-agent execution logs
Database
Agent registry stored in ~/.agentos/runtime.db (SQLite)
MCP Tooling (Optional)
AgentOS can call MCP (Model Context Protocol) servers in preference to emitting raw shell commands.
- Enable MCP in your manifest:
mcp:
enabled: true
servers:
- name: local_tools
kind: stdio
command: my-mcp-server --stdio
- Install a Python MCP SDK (one of):
pip install mcp
# or install the official Model Context Protocol Python SDK if available
- Chat/Web will now prompt models to output MCP calls in a JSON block. AgentOS parses and executes those calls via the MCP client, with safe fallback to command extraction when no MCP calls are present.
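The parse-and-fall-back behaviour described above could look roughly like this. This is a sketch under stated assumptions: the fenced-JSON format and the "mcp_calls" key are hypothetical shapes for illustration, not AgentOS's documented wire format:

```python
import json
import re

# Built programmatically to avoid a literal triple backtick in this example.
FENCE = "`" * 3

def extract_mcp_calls(model_output: str) -> list:
    """Return MCP calls found in a fenced JSON block, or [].

    An empty list signals the caller to fall back to plain
    command extraction, as described above.
    """
    pattern = FENCE + r"json\s*(.*?)" + FENCE
    match = re.search(pattern, model_output, re.DOTALL)
    if not match:
        return []
    try:
        data = json.loads(match.group(1))
    except json.JSONDecodeError:
        return []
    calls = data.get("mcp_calls", []) if isinstance(data, dict) else []
    return calls if isinstance(calls, list) else []
```

Malformed JSON is deliberately swallowed rather than raised, so a model that ignores the MCP prompt degrades gracefully to command extraction.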
Security Features
Command Filtering
Blocks dangerous commands automatically:
- File deletion: rm, rmdir, shred
- System modification: sudo, su, chown, chmod
- Disk operations: dd, mkfs, fdisk, format
- Process control: kill, killall, pkill
- Network: nc, netcat, wget, curl (to unknown hosts)
Input Validation
Prevents command injection attacks:
- Shell metacharacters: ;, &&, ||, |
- Command substitution: `, $()
- Variable expansion: $VAR, ${VAR}
- Path traversal: ../, absolute paths outside the workspace
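Pattern detection for the categories above can be sketched with a handful of regular expressions. The patterns and the `has_injection` helper are illustrative approximations, not AgentOS's actual validation rules:

```python
import re

# Rough patterns for the injection categories listed above (illustrative).
INJECTION_PATTERNS = [
    r"[;&|]",        # shell metacharacters: ; && || |
    r"`|\$\(",       # command substitution: `...` and $(...)
    r"\$\{?\w+",     # variable expansion: $VAR, ${VAR}
    r"\.\./",        # path traversal: ../
]

def has_injection(command: str) -> bool:
    """Return True if the command matches any injection pattern."""
    return any(re.search(p, command) for p in INJECTION_PATTERNS)
```

A real validator would pair this with an allowlist and workspace-relative path checks; pattern matching alone is a first filter, not a complete defence.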
Resource Limits
Configure per-agent resource constraints:
resource_limits:
max_steps: 50 # Maximum execution steps
timeout: 300 # Timeout in seconds
max_memory_mb: 512 # Memory limit (Docker only)
max_cpu_percent: 50 # CPU limit (Docker only)
Security Context
Track and audit agent actions:
from agentos.core.security import SecurityContext, validate_command
with SecurityContext(agent_id="my_agent") as ctx:
result = validate_command("ls -la")
if result.is_safe:
# Execute command
pass
# All actions logged automatically
Retry Configuration
Configure retry behavior for LLM API calls:
retry_config:
max_retries: 3 # Maximum retry attempts
initial_delay: 1.0 # Initial delay in seconds
max_delay: 30.0 # Maximum delay cap
exponential_base: 2.0 # Exponential backoff multiplier
jitter: true # Add randomness to prevent thundering herd
Retry Strategies
from agentos.core.retry import DEFAULT_LLM_RETRY, AGGRESSIVE_RETRY, GENTLE_RETRY
# Default: 3 retries, 1-30s delay
config = DEFAULT_LLM_RETRY
# Aggressive: 5 retries, 0.5-60s delay (for critical operations)
config = AGGRESSIVE_RETRY
# Gentle: 2 retries, 2-10s delay (for user-facing features)
config = GENTLE_RETRY
Chat History
Persistent chat history with SQLite backend:
from agentos.core.chat_history import ChatHistoryManager
# Initialize manager
history = ChatHistoryManager()
# Create conversation
conv_id = history.create_conversation(
agent_id="assistant",
title="Python Help Session"
)
# Add messages
history.add_message(conv_id, "user", "How do I read a file?")
history.add_message(conv_id, "assistant", "Use open() function...")
# Search history
results = history.search_messages("file", agent_id="assistant")
# Export conversation
history.export_conversation(conv_id, "chat.md", format="markdown")
Docker Sandbox
Enhanced Docker isolation for safe execution:
name: secure_agent
model_provider: github
model_version: openai/gpt-4o-mini
isolated: true
Advanced Docker Configuration
from agentos.core.docker_sandbox import DockerSandbox
sandbox = DockerSandbox(
memory_limit="256m", # Memory constraint
cpu_quota=50000, # CPU microseconds per period
network_mode="none", # No network access
read_only=True, # Read-only filesystem
working_dir="/workspace"
)
result = sandbox.run_in_sandbox("python script.py")
Requires a running Docker daemon.
Process Monitoring
Real-time process monitoring and lifecycle management:
from agentos.core.process_manager import ProcessMonitor, AgentLifecycle
# Get singleton monitor
monitor = ProcessMonitor()
# Use lifecycle context manager
with AgentLifecycle("my_agent", task="Process data") as agent:
# Agent is registered and tracked
# CPU/memory monitored in real-time
pass # Do work
# Automatically cleaned up
# Query running agents
agents = monitor.get_running_agents()
for agent_id, info in agents.items():
print(f"{agent_id}: {info['status']} - CPU: {info['cpu_percent']}%")
Graceful Shutdown
Signal handling for clean termination:
from agentos.core.shutdown import ShutdownManager, ShutdownContext
# Register cleanup callbacks
manager = ShutdownManager()
manager.register_callback(lambda: print("Cleaning up..."))
# Use context manager
with ShutdownContext():
# Protected execution
# SIGTERM/SIGINT handled gracefully
pass
Monitoring
Status Codes
- running: Agent is executing
- completed: Task finished successfully
- failed: Task failed with error
- stopped: Manually terminated
Exit Codes
- 0: Success
- 1: General error
- 124: Timeout
- 130: User interrupt (Ctrl+C)
Architecture
agentos/
├── agent/                 # Agent execution and planning
├── cli/                   # Command-line interface
├── core/                  # Core utilities
│   ├── config.py          # Configuration management
│   ├── retry.py           # Retry logic with backoff
│   ├── security.py        # Security validation
│   ├── chat_history.py    # Persistent chat storage
│   ├── shutdown.py        # Graceful shutdown
│   ├── docker_sandbox.py  # Docker isolation
│   └── process_manager.py # Process monitoring
├── database/              # SQLite backend
├── llm/                   # LLM provider integrations
├── mcp/                   # Model Context Protocol
└── web/                   # Web UI
Development
Local Setup
git clone https://github.com/agents-os/agentos
cd agentos
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
Testing
python -m pytest tests/
Code Quality
black .
flake8 .
License
MIT License - see LICENSE file.
Contributing
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
Support
- Repository: https://github.com/agents-os/agentos
- Issues: GitHub Issues
AgentOS - Making AI agents production-ready, secure, and scalable.
File details
Details for the file agentos_ai-1.1.7.tar.gz.
File metadata
- Download URL: agentos_ai-1.1.7.tar.gz
- Upload date:
- Size: 740.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | b959a849f8617e9e91cddaff37c271fdf5dee60c78da85654483b87057298974 |
| MD5 | b0b972db66bb905af2dbca58f01e1b66 |
| BLAKE2b-256 | f3ce1ad80bdfe68b58a173db25436f289e4d64fc8b53a67277f99c4a242a3feb |
File details
Details for the file agentos_ai-1.1.7-py3-none-any.whl.
File metadata
- Download URL: agentos_ai-1.1.7-py3-none-any.whl
- Upload date:
- Size: 118.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | b8464a9f15d128440804a274698be2bdadf3986f3a40ce0115ec06b5060c6d15 |
| MD5 | e709b58191bc8e84ab37fff66689cc34 |
| BLAKE2b-256 | 7f2514d797f50103b4a9d65626157aa49a9daf0993a8dcb6fda25e2412510133 |