AgenticOptio
Command. Coordinate. Execute.
Production-grade multi-agent framework with minimal dependencies.
AgenticOptio is a disciplined AI agent library built for reliable multi-model orchestration. Named after the Roman military Optio—the trusted second-in-command who coordinated formations, supervised operations, and stepped in as acting commander—this framework embodies the same principles of tactical coordination, operational resilience, and execution discipline.
Why "Optio"? The Optio was the backbone of legionary precision, responsible for coordination, training supervision, and formation integrity: the reliable officer who turned strategic vision into tactical execution. AgenticOptio brings that same operational discipline to AI agent coordination.
Features
Current (v0.1.0)
- OllamaChat: Chat with local Ollama models
- OllamaEmbedding: Generate embeddings using local Ollama models
- Async Support: Full async/await support for all operations
- Streaming: Real-time streaming responses
- Tool Support: Function calling capabilities (model dependent)
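Tool definitions for function calling commonly follow the OpenAI-style schema, which many Ollama models accept. The sketch below shows what such a tool definition might look like; whether AgenticOptio's `bind_tools` expects exactly this shape is an assumption, and `get_weather` is a hypothetical example function.

```python
# Hypothetical tool in the OpenAI-style function-calling schema.
# The exact format AgenticOptio expects is an assumption here.
def get_weather(city: str) -> str:
    """Return a canned weather report for a city."""
    return f"Sunny in {city}"

weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# With a running Ollama server you might then do:
# llm = OllamaChat(model="llama3.2").bind_tools([weather_tool])
```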
Coming Soon
- OpenAI Integration: GPT-4, GPT-3.5-turbo support
- Anthropic Claude: Claude 3.5 Sonnet and other models
- Google Gemini: Gemini Pro and Flash models
- Groq: Fast inference with Llama and Mixtral
- Azure OpenAI: Enterprise-grade OpenAI models
Installation
# Install directly from GitHub
pip install git+https://github.com/yourusername/agenticoptio.git
# Or clone and install locally
git clone https://github.com/yourusername/agenticoptio.git
cd agenticoptio
pip install -e .
# Or install dependencies manually
pip install openai
Prerequisites
For Ollama (Current)
- Install and run Ollama
- Pull a model:
ollama pull llama3.2
For Other Providers (Coming Soon)
- OpenAI: API key from OpenAI Platform
- Anthropic: API key from Anthropic Console
- Google: API key from Google AI Studio
- Groq: API key from Groq Console
Quick Start
Basic Chat
import asyncio
from agenticoptio import OllamaChat

async def main():
    # Create chat model
    llm = OllamaChat(model="llama3.2")

    # Send a message
    messages = [{"role": "user", "content": "Hello!"}]
    response = await llm.ainvoke(messages)
    print(response.content)

asyncio.run(main())
Streaming Responses
import asyncio
from agenticoptio import OllamaChat
async def main():
    llm = OllamaChat(model="llama3.2")
    messages = [{"role": "user", "content": "Tell me a story"}]

    async for chunk in llm.astream(messages):
        print(chunk.content, end="", flush=True)

asyncio.run(main())
Embeddings
import asyncio
from agenticoptio import OllamaEmbedding
async def main():
    embedder = OllamaEmbedding(model="nomic-embed-text")

    # Embed multiple texts
    texts = ["Hello world", "How are you?"]
    embeddings = await embedder.aembed(texts)
    print(f"Generated {len(embeddings)} embeddings")
    print(f"Embedding dimension: {len(embeddings[0])}")

asyncio.run(main())
Configuration
Custom Ollama Host
from agenticoptio import OllamaChat
# Connect to remote Ollama instance
llm = OllamaChat(
    model="llama3.2",
    host="http://192.168.1.100:11434",
)
Future Provider Examples
# Coming soon - OpenAI
from agenticoptio import OpenAIChat
llm = OpenAIChat(model="gpt-4o", api_key="your-key")
# Coming soon - Anthropic
from agenticoptio import AnthropicChat
llm = AnthropicChat(model="claude-3-5-sonnet-20241022", api_key="your-key")
# Coming soon - Unified factory
from agenticoptio import create_chat
llm = create_chat("openai", model="gpt-4o")
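The planned `create_chat` factory is not yet released, so its actual implementation is unknown. As a purely illustrative sketch, one common way such a factory dispatches is via a provider registry; the registry keys and stub classes below are assumptions, not AgenticOptio's code.

```python
# Illustrative provider-registry factory, assuming a create_chat-style
# API. Stub classes stand in for real chat model classes.
class OllamaChatStub:
    def __init__(self, model: str, **kwargs):
        self.model = model

class OpenAIChatStub:
    def __init__(self, model: str, **kwargs):
        self.model = model

_REGISTRY = {
    "ollama": OllamaChatStub,
    "openai": OpenAIChatStub,
}

def create_chat(provider: str, **kwargs):
    # Look up the provider class and instantiate it with remaining kwargs
    try:
        cls = _REGISTRY[provider]
    except KeyError:
        raise ValueError(f"Unknown provider: {provider!r}")
    return cls(**kwargs)
```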
Environment Variables
- OLLAMA_HOST: Default Ollama host URL (default: http://localhost:11434)
- OPENAI_API_KEY: OpenAI API key (coming soon)
- ANTHROPIC_API_KEY: Anthropic API key (coming soon)
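If you prefer to resolve the host yourself rather than rely on the library's defaults, a minimal sketch of reading OLLAMA_HOST with a fallback looks like this (the helper name `resolve_ollama_host` is hypothetical, not part of AgenticOptio):

```python
import os

def resolve_ollama_host(default: str = "http://localhost:11434") -> str:
    # Prefer the OLLAMA_HOST environment variable, else fall back to default
    return os.environ.get("OLLAMA_HOST", default)

# Usage might then be:
# llm = OllamaChat(model="llama3.2", host=resolve_ollama_host())
```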
API Reference
OllamaChat
class OllamaChat:
    def __init__(
        self,
        model: str = "llama3.2",
        host: str = "http://localhost:11434",
        temperature: float = 0.0,
        max_tokens: int | None = None,
        timeout: float = 60.0,
        max_retries: int = 2,
    )

    async def ainvoke(self, messages: list[dict]) -> AIMessage
    def invoke(self, messages: list[dict]) -> AIMessage
    async def astream(self, messages: list[dict]) -> AsyncIterator[AIMessage]
    def bind_tools(self, tools: list) -> "OllamaChat"
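The `max_retries` parameter suggests the client retries failed requests. As a generic sketch of that pattern (not AgenticOptio's actual internals), a retry loop with exponential backoff looks like this:

```python
import time

def with_retries(fn, max_retries: int = 2, base_delay: float = 0.01):
    # Call fn, retrying up to max_retries additional times with
    # exponential backoff. Illustrates the pattern behind a
    # max_retries parameter; not the library's implementation.
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Demo: a function that fails twice, then succeeds on the third call
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = with_retries(flaky, max_retries=2)
```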
OllamaEmbedding
class OllamaEmbedding:
    def __init__(
        self,
        model: str = "nomic-embed-text",
        host: str = "http://localhost:11434",
        timeout: float = 60.0,
        max_retries: int = 2,
        batch_size: int = 100,
    )

    async def aembed(self, texts: list[str]) -> list[list[float]]
    def embed(self, texts: list[str]) -> list[list[float]]
    async def aembed_query(self, text: str) -> list[float]
    def embed_query(self, text: str) -> list[float]
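Embedding vectors from `embed`/`aembed` are typically compared with cosine similarity. AgenticOptio does not document such a helper, so here is a standalone sketch you could pair with its output:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Dot product divided by the product of the vector magnitudes
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# e.g. rank documents by cosine_similarity(query_vec, doc_vec)
```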
Examples
See the examples/ directory for usage examples:
- basic_usage.py - Simple chat conversation with Ollama
- streaming_example.py - Streaming responses with Ollama
More examples will be added as new providers are integrated.
Roadmap
v0.2.0 - Strategic Alliance
- OpenAI GPT command integration (GPT-4o, GPT-4o-mini, GPT-3.5-turbo)
- OpenAI embedding reconnaissance units
- Unified deployment protocols
v0.3.0 - Allied Forces
- Anthropic Claude integration
- Google Gemini coordination
- Groq rapid response units
- Azure OpenAI enterprise formations
v0.4.0 - Advanced Tactics
- Standardized tool/function calling protocols
- Enhanced streaming for real-time operations
- Batch processing for large-scale deployments
- Intelligent rate limiting and resilience patterns
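Since the roadmap promises rate limiting, a minimal token-bucket limiter illustrates the general pattern; this is a sketch of the technique only, not the planned implementation.

```python
import time

class TokenBucket:
    # Minimal token-bucket rate limiter: holds up to `capacity` tokens,
    # refilled continuously at `rate` tokens per second. Each allowed
    # request consumes one token.
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=2)
```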
v1.0.0 - Battle Ready
- Full test coverage and battle-hardened reliability
- Comprehensive field manual documentation
- Performance optimizations for high-stakes operations
- Production stability guarantees
The Optio Advantage
In Roman legions, the Optio was the disciplined officer who transformed strategy into flawless execution. They coordinated formations, supervised training, enforced standards, and stepped in as acting commander when needed. AgenticOptio brings this same operational excellence to AI:
Command Structure
- Unified Command: Single interface governing all model providers
- Chain of Command: Clear hierarchies with fallback mechanisms
- Tactical Flexibility: Adapt to any model or provider seamlessly
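The "chain of command with fallback mechanisms" idea can be sketched as trying providers in priority order and returning the first success. The callables below are stand-ins for chat models; this is not AgenticOptio's actual fallback API.

```python
# Illustrative fallback chain over hypothetical provider callables.
def invoke_with_fallback(providers, prompt: str):
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:
            # Record the failure and fall through to the next provider
            errors.append(exc)
    raise RuntimeError(f"All providers failed: {errors}")

def primary(prompt):
    raise ConnectionError("primary unavailable")

def secondary(prompt):
    return f"echo: {prompt}"

reply = invoke_with_fallback([primary, secondary], "status report")
```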
Operational Discipline
- Reliability First: Battle-tested patterns with comprehensive error handling
- Resource Management: Efficient coordination of compute and memory
- Formation Control: Structured execution flows that maintain order under pressure
Strategic Readiness
- Multi-Theater Operations: Local models (Ollama) and cloud APIs in unified formation
- Rapid Deployment: Minimal dependencies for quick battlefield setup
- Scalable Command: From single agents to complex multi-agent orchestrations
License
MIT License
File details
Details for the file agenticoptio-0.1.0.tar.gz.
File metadata
- Download URL: agenticoptio-0.1.0.tar.gz
- Upload date:
- Size: 10.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.11
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 98429fd22296a823e3ca6f54a2fc05da607ff5546bfe8720d8990ca70eebd855 |
| MD5 | 94f817f6305e532c6167aa7a2d60f4e3 |
| BLAKE2b-256 | c313ec95e90d79d945cac934cd39a6ad3203af76578413026c7fff504d551a3b |
File details
Details for the file agenticoptio-0.1.0-py3-none-any.whl.
File metadata
- Download URL: agenticoptio-0.1.0-py3-none-any.whl
- Upload date:
- Size: 12.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.11
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 8de21b1feea1677c42a97bbb4796d86a155872268ed53ccc8941ecf5ff78a436 |
| MD5 | c4c1e93f4d13a17d91aa700db4a5327b |
| BLAKE2b-256 | c7f4df847ee729486bb1b8cdcd6dd8e1ae3ed35056e3e31c084c5775e7c3ecde |