Antaris Pipeline 2.0
Unified orchestration pipeline for the Antaris Analytics Suite
The central orchestration engine that unifies antaris-memory, antaris-router, antaris-guard, and antaris-context with cross-package intelligence and real-time telemetrics.
Why Antaris Pipeline?
The Problem: AI developers waste weeks cobbling together Pinecone + Portkey + Lakera + custom context management, dealing with inconsistent APIs, complex pricing, and zero cross-optimization.
Our Solution: One unified pipeline that makes all 4 packages smarter together, with 10x faster integration and guaranteed performance SLAs.
Key Advantages Over Competitors
| Feature | Antaris Pipeline | Competitors |
|---|---|---|
| Integration Time | 5 minutes | 2-5 days |
| Cross-Package Intelligence | ✅ Memory→Router→Guard optimization | ❌ Isolated packages |
| Visual Security Config | ✅ GUI-based policy builder | ❌ Code-only configuration |
| Performance SLAs | ✅ Guaranteed cost savings | ❌ No guarantees |
| Agent-Native Patterns | ✅ Conversation-aware flows | ❌ LLM-call focused |
| Real-time Telemetrics | ✅ Built-in observatory | ❌ Third-party required |
| Dry-Run Mode | ✅ Zero-cost demos/debugging | ❌ Not available |
OpenClaw Integration
antaris-pipeline is the orchestration layer for the full antaris-suite within OpenClaw. It wires together memory recall, safety checking, model routing, and context management into a single event-driven lifecycle.
```python
from antaris_pipeline import Pipeline
from antaris_memory import MemorySystem
from antaris_guard import PromptGuard
from antaris_router import Router
from antaris_context import ContextManager

pipeline = Pipeline(
    memory=MemorySystem(workspace="./mem"),
    guard=PromptGuard(),
    router=Router(config_path="router.json"),
    context=ContextManager(total_budget=8000),
)

# Each turn runs: guard → router → context → memory → LLM → memory ingest
result = pipeline.run(user_input)
```
The full suite (antaris-memory, antaris-router, antaris-guard, antaris-context, and antaris-pipeline) forms the Antaris Analytics Agent Infrastructure, built natively for OpenClaw deployments.
Architecture Overview

```text
┌──────────────────┐     ┌────────────────────────────────────────┐
│ Your Application │────▶│    Antaris Pipeline (Orchestrator)     │
└──────────────────┘     │                                        │
                         │  ┌────────┐  ┌────────┐  ┌────────┐    │
                         │  │ Memory │  │ Router │  │ Guard  │    │
                         │  │ v1.1.0 │  │ v2.0.0 │  │ v1.1.0 │    │
                         │  └────────┘  └────────┘  └────────┘    │
                         │      ↕           ↕           ↕         │
                         │  ┌──────────────────────────────────┐  │
                         │  │   Cross-Package Intelligence     │  │
                         │  └──────────────────────────────────┘  │
                         │  ┌────────┐  ┌──────────────────────┐  │
                         │  │Context │  │ Real-time Telemetrics│  │
                         │  │ v1.1.0 │  │ & Performance SLAs   │  │
                         │  └────────┘  └──────────────────────┘  │
                         └────────────────────────────────────────┘
```
Cross-Package Intelligence Flows:
- Memory → Router: Historical performance data informs model selection
- Router → Context: Budget allocation based on model capabilities
- Guard → Memory: Risk assessment affects storage policies
- Context → Guard: Compression feedback for security optimization
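As a concrete illustration of the Memory → Router flow, the sketch below shows how stored per-model statistics could bias model selection. The function, scoring formula, and numbers are hypothetical assumptions for explanation, not the antaris-router API:

```python
# Hypothetical sketch of memory-informed routing: historical per-model
# stats bias selection toward quality for hard tasks and toward cost
# for easy ones. Names and numbers are illustrative, not the real API.

def select_model(history: dict, complexity: float) -> str:
    """Pick a model by blending historical success rate with cheapness.

    complexity in [0, 1]: higher values weight success rate more.
    """
    def score(stats: dict) -> float:
        # Map cost per call into a [0, 1] "cheapness" score.
        cheapness = 1.0 / (1.0 + 100 * stats["cost_per_call"])
        return complexity * stats["success_rate"] + (1 - complexity) * cheapness

    return max(history, key=lambda m: score(history[m]))

history = {
    "claude-opus-4-6":          {"success_rate": 0.99, "cost_per_call": 0.08},
    "claude-sonnet-4-20250514": {"success_rate": 0.90, "cost_per_call": 0.01},
}
print(select_model(history, complexity=0.9))  # hard task: quality wins
print(select_model(history, complexity=0.1))  # easy task: cost wins
```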
Quick Start
Installation
```bash
# Install the unified suite (all 4 packages + pipeline)
pip install antaris-pipeline

# Or install with the telemetrics dashboard
# (quoted so shells like zsh don't expand the brackets)
pip install "antaris-pipeline[telemetrics]"
```
Basic Usage
```python
import asyncio

from antaris_pipeline import Pipeline, create_config

# One-line setup (vs competitors' multi-day configurations)
config = create_config(profile="balanced")
pipeline = Pipeline.from_config(config)

# Process with cross-package intelligence
async def my_model_function(text: str) -> str:
    # Your LLM call here (OpenAI, Anthropic, etc.)
    return "AI response"

async def main():
    result = await pipeline.process("Hello world", my_model_function)
    print(f"Success: {result.success}")
    print(f"Output: {result.output}")
    print(f"Performance: {result.performance}")

asyncio.run(main())
```
Dry-Run Mode (Zero API Costs)
```python
# Perfect for demos, debugging, and development
simulation = pipeline.dry_run("What would happen with this input?")
print(simulation)
# {
#   "guard_input": {"would_allow": True, "scan_time_ms": 15},
#   "memory": {"would_retrieve": 3, "retrieval_time_ms": 45},
#   "router": {"would_select": "claude-sonnet-4-20250514", "confidence": 0.85},
#   "total_estimated_time_ms": 150
# }
```
Profile-Based Configuration
```python
# Security-first configuration
strict_config = create_config(profile="strict_safety")

# Cost-optimized configuration
cost_config = create_config(profile="cost_optimized")

# Performance-optimized configuration
perf_config = create_config(profile="performance")

# Debug mode with full telemetrics
debug_config = create_config(profile="debug")
```
Visual Security Configuration
Unlike competitors that require coding security policies, Antaris provides a GUI-based policy builder:
```python
# Start the telemetrics dashboard with the visual policy editor
from antaris_pipeline import TelemetricsCollector, TelemetricsServer

collector = TelemetricsCollector("my_session")
server = TelemetricsServer(collector, port=8080)
server.start()  # Dashboard at http://localhost:8080
```
Features:
- Drag-and-drop policy creation
- Real-time policy testing
- Team collaboration on security configs
- Compliance templates (SOC2, HIPAA, GDPR)
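The schema of the policies the builder exports is not documented on this page; as a rough, hypothetical sketch of what such a policy and its enforcement might look like (the `policy` shape, field names, and `evaluate` helper are all assumptions):

```python
# Hypothetical shape of a policy the visual builder might export;
# the real schema is not documented here, so treat this as a sketch.
policy = {
    "name": "pii-strict",
    "strictness": 0.8,
    "rules": [
        {"match": "credit_card", "action": "block"},
        {"match": "email",       "action": "redact"},
    ],
}

def evaluate(policy: dict, detected: list) -> str:
    """Return the strongest action triggered by the detected content types."""
    actions = {r["match"]: r["action"] for r in policy["rules"]}
    triggered = [actions[d] for d in detected if d in actions]
    if "block" in triggered:
        return "block"
    if "redact" in triggered:
        return "redact"
    return "allow"

print(evaluate(policy, ["email"]))                 # redact
print(evaluate(policy, ["credit_card", "email"]))  # block
print(evaluate(policy, []))                        # allow
```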
Real-Time Telemetrics & Performance SLAs
Built-in Observatory
Every pipeline operation is automatically tracked:
```python
# Get comprehensive performance statistics
stats = pipeline.get_performance_stats()
print(f"Total requests: {stats['total_requests']}")
print(f"Average latency: {stats['avg_latency_ms']}ms")
print(f"Cost savings: {stats['cost_savings_percent']}%")
```
Performance SLAs
Antaris provides guaranteed performance with automatic credits:
```python
config = create_config(
    profile="balanced",
    max_total_latency_ms=2000,  # 2-second SLA
    enable_performance_slas=True,
)

# If latency exceeds the SLA, credits are applied automatically
# If cost savings don't meet the guarantees, credits are applied
```
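The crediting rule itself isn't spelled out on this page. As a hedged illustration only, a latency SLA credit could be computed like this; the function name, credit rate, and refund cap are assumptions, not the service's actual billing logic:

```python
# Illustrative sketch of how an SLA credit might be computed; the
# actual crediting logic is internal to the service, so the formula
# and rates below are assumptions for explanation only.
def sla_credit(observed_latency_ms: float,
               sla_latency_ms: float,
               request_cost_usd: float,
               credit_rate: float = 0.5) -> float:
    """Credit a fraction of the request cost, scaled by how far the
    observed latency overshot the SLA (capped at a full refund)."""
    if observed_latency_ms <= sla_latency_ms:
        return 0.0
    overshoot = (observed_latency_ms - sla_latency_ms) / sla_latency_ms
    return min(request_cost_usd, request_cost_usd * credit_rate * overshoot)

print(sla_credit(1500, 2000, 0.10))   # within SLA: no credit
print(sla_credit(3000, 2000, 0.10))   # 50% overshoot: partial credit
print(sla_credit(10000, 2000, 0.10))  # huge overshoot: capped at full cost
```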
Telemetrics Export
```python
from pathlib import Path

# Export telemetrics for analysis
collector.export_events(
    output_path=Path("analysis.jsonl"),
    format="jsonl",
    filter_module="router",  # Optional filtering
)
```
Cross-Package Intelligence Examples
Memory-Informed Routing
```python
# The router learns from memory about model performance:
# it automatically routes complex tasks to stronger models
# and frequent patterns to cheaper ones.
result = await pipeline.process("Complex reasoning task", model_caller)
# → Router selects claude-opus-4-6 based on historical performance

result = await pipeline.process("Simple greeting", model_caller)
# → Router selects claude-sonnet-4-20250514 for cost optimization
```
Security-Aware Context Management
```python
# The context manager uses guard risk scores for retention:
# high-risk content gets shorter retention, while safe content
# can be kept longer for efficiency.
config.context.enable_security_aware_retention = True
pipeline = Pipeline.from_config(config)
```
Performance Feedback Loops
All packages learn from each other, automatically and with no configuration needed:
- Memory stores performance data
- Router adjusts based on success rates
- Guard adapts to conversation patterns
- Context optimizes based on model feedback
Advanced Configuration
Custom Profiles
```python
from antaris_pipeline import PipelineConfig, ProfileType

config = PipelineConfig(
    profile=ProfileType.CUSTOM,
    # Memory configuration
    memory={
        "max_memory_mb": 2048,
        "decay_half_life_hours": 72.0,
        "enable_concurrent_access": True,
    },
    # Router configuration
    router={
        "default_model": "claude-sonnet-4-20250514",
        "confidence_threshold": 0.8,
        "enable_cost_optimization": True,
        "max_cost_per_request_usd": 0.10,
    },
    # Guard configuration
    guard={
        "default_policy_strictness": 0.7,
        "enable_behavioral_analysis": True,
        "max_scan_time_ms": 1000,
    },
    # Context configuration
    context={
        "default_max_tokens": 8000,
        "compression_ratio_target": 0.8,
        "enable_adaptive_budgeting": True,
    },
    # Cross-package intelligence
    enable_cross_optimization=True,
    cross_optimization_aggressiveness=0.7,
)
```
YAML Configuration
```yaml
# antaris-config.yaml
profile: balanced
session_id: "production_v1"

memory:
  storage_path: "./memory_store"
  max_memory_mb: 1024
  decay_half_life_hours: 168.0

router:
  default_model: "claude-sonnet-4-20250514"
  fallback_models: ["claude-opus-4-6"]
  confidence_threshold: 0.7

guard:
  enable_input_scanning: true
  enable_output_scanning: true
  default_policy_strictness: 0.7

context:
  default_max_tokens: 8000
  enable_compression: true
  compression_ratio_target: 0.8

telemetrics:
  enable_telemetrics: true
  enable_server: true
  server_port: 8080
```

```python
# Load from YAML
config = PipelineConfig.from_file("antaris-config.yaml")
pipeline = Pipeline.from_config(config)
```
Performance Benchmarks
Integration Speed
| Task | Antaris Pipeline | Competitors |
|---|---|---|
| Initial Setup | 5 minutes | 4-8 hours |
| Memory Integration | Pre-configured | 2-4 hours |
| Security Policies | GUI-based | 4-6 hours |
| Telemetrics Setup | Built-in | 8-12 hours |
| Cross-optimization | Automatic | Not available |
| Total Time | 5 minutes | 2-5 days |
Cost Performance
| Model Routing Strategy | Cost Reduction | Accuracy Maintained |
|---|---|---|
| Static Routing | 0% | 100% |
| Simple Classification | 25-35% | 98% |
| Antaris Intelligence | 40-60% | 99.2% |
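To see how a blended figure in the 40-60% band can arise arithmetically, here is an illustrative calculation; the per-request prices and the traffic split are made-up numbers, not measured data:

```python
# How a blended cost reduction could arise: routing a share of
# traffic to a cheaper model. All prices and traffic shares below
# are illustration values, not benchmark data.
def blended_savings(baseline_cost: float,
                    cheap_cost: float,
                    cheap_share: float) -> float:
    """Percent cost reduction when cheap_share of requests move from
    the baseline model to a cheaper one."""
    blended = (1 - cheap_share) * baseline_cost + cheap_share * cheap_cost
    return 100 * (baseline_cost - blended) / baseline_cost

# e.g. 60% of traffic moved to a model at 1/8 the per-request cost
print(round(blended_savings(0.08, 0.01, 0.6), 1))  # 52.5
```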
Latency Performance
| Operation | Antaris Pipeline | Typical Setup |
|---|---|---|
| Security Scan | 15ms | 50-100ms |
| Memory Retrieval | 45ms | 100-200ms |
| Model Routing | 30ms | Not optimized |
| Context Building | 25ms | 100-300ms |
| Total Pipeline | 115ms | 250-600ms |
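If the stages run sequentially, the "Total Pipeline" row is simply the sum of the stage latencies, which matches the table:

```python
# Sanity check on the table above: with sequential stages, the total
# pipeline latency is the sum of the individual stage latencies (ms).
stages = {
    "security_scan": 15,
    "memory_retrieval": 45,
    "model_routing": 30,
    "context_building": 25,
}
total_ms = sum(stages.values())
print(total_ms)  # 115
```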
Command Line Interface
```bash
# Generate configuration
antaris-pipeline config --profile balanced --output config.yaml

# Validate installation
antaris-pipeline validate

# Dry-run processing
antaris-pipeline process "Hello world" --dry-run

# Analyze telemetrics
antaris-pipeline telemetrics --file logs.jsonl --summary

# Start dashboard server
antaris-pipeline serve --port 8080
```
Troubleshooting
Common Issues
Q: ImportError when importing the pipeline

```bash
# Install all required packages
pip install antaris-memory antaris-router antaris-guard antaris-context
```

Q: Telemetrics dashboard won't start

```bash
# Install the telemetrics dependencies
pip install "antaris-pipeline[telemetrics]"
```

Q: Cross-package optimization not working

```python
# Ensure cross-optimization is enabled
config.enable_cross_optimization = True
config.cross_optimization_aggressiveness = 0.7  # 0.0-1.0
```

Q: Performance SLAs not triggering

```python
# Verify the SLA configuration
config.enable_performance_slas = True
config.max_total_latency_ms = 2000  # Set appropriate limits
```
Debug Mode
```python
# Enable comprehensive debugging
debug_config = create_config(profile="debug")
debug_config.telemetrics.enable_real_time_analytics = True
debug_config.enable_dry_run_mode = True

pipeline = Pipeline.from_config(debug_config)
```
Validation
```python
# Validate the configuration before deployment
validation_results = config.validate_sla_requirements()
for requirement, valid in validation_results.items():
    if not valid:
        print(f"⚠️ {requirement} may not be achievable with the current config")
```
Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
Development Setup
```bash
git clone https://github.com/antaris-analytics/antaris-pipeline.git
cd antaris-pipeline
pip install -e ".[dev,telemetrics]"
pytest
```
Testing
```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=antaris_pipeline --cov-report=html

# Run specific test categories
pytest tests/test_pipeline.py -v
pytest tests/test_cross_intelligence.py -v
```
License
Apache 2.0 - see LICENSE for details.
Related Packages
- antaris-memory - Persistent memory for AI agents
- antaris-router - Adaptive model routing
- antaris-guard - Security and safety
- antaris-context - Context window optimization
Support
- Documentation: docs.antarisanalytics.ai
- Email: dev@antarisanalytics.com
- Website: antarisanalytics.ai
Built with ❤️ by Antaris Analytics
Deterministic infrastructure for AI agents