Complete Antaris AI infrastructure for OpenClaw — memory, routing, safety, and context management
Antaris Core v5.2.3
Agent infrastructure for intelligent, secure, and memory-persistent AI systems.
v5.2.3 — synced with AntarisBot engine: enrichers, MemoryManager, cross-session recall, provider-ready templates
Packages
| Package | Version | Description |
|---|---|---|
| antaris-memory | 4.9.20 | Persistent memory with 11-layer BM25F search, LLM enrichment, WAL, sharding, export/import |
| antaris-router | 4.9.18 | Intelligent model routing with cost tracking, confidence gating, A/B testing |
| antaris-guard | 4.9.18 | Prompt injection detection, PII filtering, rate limiting, behavioral analysis |
| antaris-context | 4.9.18 | Context compression, hard budget enforcement, summarization, relevance scoring |
| antaris-pipeline | 4.9.18 | Agent orchestration pipeline with per-stage telemetry and OpenClaw bridge |
| antaris-contracts | 4.9.18 | Versioned state schemas, failure semantics, and debug CLI |
| antaris-openclaw-plugin | 4.9.20 | OpenClaw plugin — auto-recall, auto-ingest, Discord bridge, compaction recovery, /context sync |
Architecture
```
antaris-openclaw-plugin  (lifecycle hooks — auto-recall + auto-ingest + /context sync)
            │
     antaris-pipeline  (orchestration)
            │
   ┌────────┼───────────────┬────────────────┐
antaris-memory   antaris-router   antaris-guard   antaris-context
(persistence)   (model selection)   (security)    (compression)
```
Design Principles
- Zero external dependencies in all core Python packages — stdlib only
- File-based persistence — no database required
- Multi-process safe — cross-platform FileLock using os.mkdir() atomicity
- Fully portable — no hardcoded paths, works on any machine
- Fully tested — 1,149 tests passing
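The mkdir-based locking mentioned above works because `os.mkdir()` either creates the directory or raises, atomically, on every major platform. A minimal sketch of the idea (the class name, timeout, and polling interval here are illustrative assumptions, not the package's actual FileLock API):

```python
import os
import time

class MkdirLock:
    """Advisory cross-process lock built on os.mkdir() atomicity.

    Illustrative sketch only — not the shipped FileLock implementation.
    """

    def __init__(self, path: str, timeout: float = 10.0, poll: float = 0.05):
        self.path = path
        self.timeout = timeout
        self.poll = poll

    def __enter__(self):
        deadline = time.monotonic() + self.timeout
        while True:
            try:
                os.mkdir(self.path)  # atomic create-or-fail: only one process wins
                return self
            except FileExistsError:
                if time.monotonic() >= deadline:
                    raise TimeoutError(f"could not acquire lock {self.path}")
                time.sleep(self.poll)  # another process holds the lock; retry

    def __exit__(self, *exc):
        os.rmdir(self.path)  # release by removing the lock directory

# Usage:
# with MkdirLock("/tmp/antaris-memory.lock"):
#     ...  # critical section over the shared store
```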
Installation
```bash
# Install the full suite (package name per the published wheel)
pip install antaris-suite

# Or install individual packages
pip install antaris-memory antaris-guard antaris-router antaris-context antaris-pipeline antaris-contracts
```
For OpenClaw plugin installation, see antaris-openclaw-plugin/INSTALL.md.
Commands
/prune — Memory Store Cleanup
Tiered memory cleanup with dual-layer protection (content keywords + enrichment/access score gate).
| Command | Description |
|---|---|
| `/prune small` | Dry-run: pipeline fragments, heartbeats, entries <40 chars |
| `/prune medium` | Dry-run: extends small + zero-access >30 days, near-duplicates |
| `/prune large` | Dry-run: extends medium + zero-access >14 days with low decay |
| `/prune small\|medium\|large confirm` | Apply the prune (auto-backup first) |
| `/prune sessions` | Dry-run stale/aborted sessions |
| `/prune sessions confirm` | Remove stale sessions |
| `/prune undo` | Restore last backup |
| `/prune backups` | List available backups |
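The dual-layer protection described above means an entry is only prune-eligible when both layers agree: it looks like noise (keyword match or too short) *and* its enrichment/access score falls below a gate. A sketch of that shape — thresholds, field names, and the scoring formula are assumptions for illustration, not the shipped logic:

```python
from dataclasses import dataclass

# Illustrative noise markers; the real keyword set is internal to antaris-memory
NOISE_KEYWORDS = ("heartbeat", "pipeline fragment")
MIN_CONTENT_LEN = 40  # "/prune small" targets entries shorter than this

@dataclass
class Memory:
    content: str
    enrichment_score: float = 0.0  # 0..1, from LLM enrichment (assumed field)
    access_count: int = 0

def prune_eligible(m: Memory, score_gate: float = 0.2) -> bool:
    # Layer 1: content looks like noise (keyword match or too short)
    noisy = len(m.content) < MIN_CONTENT_LEN or any(
        k in m.content.lower() for k in NOISE_KEYWORDS
    )
    # Layer 2: enrichment/access score gate — well-enriched or frequently
    # accessed memories are protected even if they look noisy
    score = max(m.enrichment_score, min(m.access_count / 10, 1.0))
    return noisy and score < score_gate
```

So a short heartbeat entry with zero accesses is eligible, while the same entry with a handful of accesses is protected by the score gate.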
context — Cross-Channel Context Sync
Reads recent Discord channel history, summarizes each channel's activity via LLM, and ingests the summaries into the memory store. Any instance that runs this immediately gets caught up on what every other channel has been doing.
Usage: Say context 12 (or 3, 6, 24, 36, 48) in any channel. The number is hours of history to sync.
| Command | Description |
|---|---|
| `context 3` | Sync last 3 hours |
| `context 6` | Sync last 6 hours |
| `context 12` | Sync last 12 hours (default) |
| `context 24` | Sync last 24 hours |
| `context 36` | Sync last 36 hours |
| `context 48` | Sync last 48 hours |
Default channels synced:
- #antaris-analytics-llc
- #antaris-suite
- #antaris-bot
- #wealthhealth-antaris-forge
- #antaris-search
- Personal DM channel
How it works:
- Reads all messages from each channel within the time window (paginated, no caps)
- Includes both human and bot messages (so instances see each other's work)
- Summarizes each active channel via Haiku (cheap/fast)
- Ingests each summary as `source="channel_sync"` episodic memory
- Reports: channels synced, message counts, which channels had activity
Note: Currently runs through the agent (say "context 12" without slash). Plugin command routing (/context) is pending an OpenClaw command registration fix.
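The flow above can be sketched end to end. Every name here (`fetch_messages`, `summarize_with_haiku`, `MemoryStore.ingest`) is an illustrative placeholder, not the plugin's actual API:

```python
from datetime import datetime, timedelta, timezone

def fetch_messages(channel, since):
    """Placeholder: would page through Discord history newer than `since`."""
    return [f"msg in {channel}"]

def summarize_with_haiku(messages):
    """Placeholder: would call a cheap/fast LLM to summarize the window."""
    return f"{len(messages)} messages summarized"

class MemoryStore:
    """Placeholder stand-in for the antaris-memory store."""
    def __init__(self):
        self.entries = []

    def ingest(self, content, source, kind):
        self.entries.append({"content": content, "source": source, "kind": kind})

def sync_channels(channels, memory, hours=12):
    """Summarize each channel's recent activity and ingest the summaries."""
    since = datetime.now(timezone.utc) - timedelta(hours=hours)
    report = {}
    for channel in channels:
        messages = fetch_messages(channel, since)  # humans + bots, paginated
        if not messages:
            continue  # skip channels with no activity in the window
        memory.ingest(
            content=summarize_with_haiku(messages),
            source="channel_sync",  # the episodic tag described above
            kind="episodic",
        )
        report[channel] = len(messages)
    return report  # channel -> message count, for the sync report
```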
Benchmarks
v4.9.20 — Mac Mini M4 (10-core, 32GB) · Python 3.14 · 7,658 memories
Search Quality (doc2query self-recall, 150-sample benchmark)
| Metric | Result |
|---|---|
| R@1 | 61.9% |
| R@3 | 75.1% |
| R@5 | 79.3% |
| MRR | 0.688 |
| p50 | 84ms |
| p95 | 134ms |
| Provenance | 100% |
Hard Corpus (30 vocabulary-gap queries, zero keyword overlap)
| Metric | Raw BM25 | With Enrichment |
|---|---|---|
| R@1 | 10.0% | 46.7% |
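The R@k and MRR figures in these tables follow the standard definitions: for each benchmark query, take the 1-based rank at which the correct memory was retrieved (or a miss), then aggregate. A minimal sketch:

```python
def recall_at_k(ranks, k):
    """Fraction of queries whose correct result appears in the top k."""
    return sum(1 for r in ranks if r is not None and r <= k) / len(ranks)

def mean_reciprocal_rank(ranks):
    """Average of 1/rank over all queries; a miss contributes 0."""
    return sum(1.0 / r for r in ranks if r is not None) / len(ranks)

# ranks: 1-based rank of the correct hit per query, None = not retrieved
ranks = [1, 2, None, 1, 5]
print(recall_at_k(ranks, 1))        # 0.4
print(mean_reciprocal_rank(ranks))  # (1 + 0.5 + 0 + 1 + 0.2) / 5 = 0.54
```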
Search Engine Layers
- BM25+ with δ normalization
- BM25F per-field scoring (content/enriched/keywords/queries independent avg lengths)
- Safelist normalizer (~50 domain morphological mappings)
- LLM enrichment field boosts (enriched_summary 1.25×, search_queries 1.40×)
- Top-K window filter (5,350 → 159 candidates)
- Word expansion (9,007 words from SO + code + Wikipedia corpus)
- Embedding reranker (Layer 10 — MiniLM centroid vectors)
- PRF pseudo-relevance feedback (Layer 11)
- Ingest quality gates (noise regex, length minimum, prefix-aware dedup)
- Tiered storage (hot/warm/cold shards with LRU cache)
- WAL (write-ahead log) for crash safety
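The base layer, BM25+ with δ normalization, can be sketched as follows. The δ=0.5 floor is taken from the v4.9.16 changelog; k1 and b are conventional BM25 defaults assumed here, and the real engine applies this per field with BM25F weights:

```python
import math

def bm25_plus(tf, df, n_docs, doc_len, avg_len, k1=1.2, b=0.75, delta=0.5):
    """BM25+ term score: classic BM25 plus a delta floor.

    The floor guarantees any document containing the term scores at least
    delta * idf, so very long documents are not penalized into irrelevance.
    """
    idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1.0)
    norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_len))
    return idf * (norm + delta)

# A term appearing twice in an average-length doc, present in 30 of 7,658 docs:
score = bm25_plus(tf=2, df=30, n_docs=7658, doc_len=120, avg_len=120)
```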
Changelog
v4.9.20 (2026-03-08) — Current
- BM25F per-field scoring with independent field average lengths
- Query expansion removed (was inflating 3-word queries to 156 tokens)
- Keyword weight doubled (2×)
- Boost stacking cleanup (removed non-discriminative boosts)
- /context cross-channel sync command
- R@1: 61.9%, R@3: 75.1%, R@5: 79.3%
v4.9.18 (2026-03-07)
- ChatGPT release review fixes
- All version strings unified across root/packages/plugin
- word_expansion.json loader handles both tuple-pair and string-list formats
- Session isolation behavior documented (None→wildcard is intentional)
- Root tests updated (220 passed, 0 failures)
- mypy python_version typo fixed
v4.9.17 (2026-03-06)
- 24 bug fixes (3 critical, 6 high, 7 medium, 8 low)
- Critical: content_norms dedup mismatch, live fact double-ingest, session summary synthesis non-functional
- High: shard merge enrichment loss, CrossSession TOCTOU race, shard cache FIFO→LRU, compact enrichment protection, WAL replay IDF, session key collapse
- Universal word expansion: 9,007 words from SO + code-search-net + Wikipedia + C4
- TDZ crash fix in agent_end (was silently killing all post-turn memory storage)
v4.9.16 (2026-03-05)
- BM25+ (δ=0.5 floor), safelist normalizer, word-embedding query vector, PRF Layer 11
- /prune command: small/medium/large tiers, undo, backups
- R@1: 47.1%, MRR: 0.540
v4.9.14 (2026-03-04)
- Word-embedding query vector (Layer 10 primary path)
- PRF Layer 11 pseudo-relevance feedback
- R@1: 45.3%, MRR: 0.531
v4.9.13 (2026-03-04)
- BM25 normalization overhaul + safelist normalizer
- R@1: 39.6%, MRR: 0.473
License
MIT
File details
Details for the file antaris_suite-5.3.0-py3-none-any.whl.
File metadata
- Download URL: antaris_suite-5.3.0-py3-none-any.whl
- Upload date:
- Size: 15.3 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 944339fff38928bae788a48c05a74776fd3fa09c036face479b6a624e869d204 |
| MD5 | e2ab932da83bfbeb277a4dd78c3f3057 |
| BLAKE2b-256 | ae95023845035677215a56b1238274bccf47f135a7c79c17b3d7a8751e1251d6 |