
Complete Antaris AI infrastructure for OpenClaw — memory, routing, safety, and context management

Project description

Antaris Core v5.5.0

Agent infrastructure for intelligent, secure, and memory-persistent AI systems.

v5.5.0 — Cross-channel recency injection, parsica-memory rename, bootstrap guard, enricher key fix, contracts schema audit

What's New in v5.5.0

Cross-Channel Recency Injection ⚡

Agents now automatically receive context from recent conversations in OTHER channels/servers. Before each response, the plugin queries the memory store for recent memories outside the current session and injects them as ambient context. This gives agents awareness of what's happening across all their channels without explicit queries.

  • Configurable: recencyEnabled (default: true), recencyWindow (default: 6 hours), recencyLimit (default: 5 entries)
  • Deduplication: memories already present from semantic recall are excluded
  • Channel labels: each injected memory includes its source channel for attribution
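
The selection logic described above can be sketched in stdlib-only Python. The memory-record shape (`id`, `session`, `channel`, `text`, `ts`) and the function name are illustrative assumptions, not the plugin's actual API:

```python
import time

def inject_recency(memories, current_session, recalled_ids,
                   window_hours=6, limit=5, now=None):
    """Pick recent memories from OTHER sessions for ambient context.

    Defaults mirror the documented config: recencyWindow=6h, recencyLimit=5.
    `recalled_ids` holds ids already surfaced by semantic recall (dedup).
    """
    now = now if now is not None else time.time()
    cutoff = now - window_hours * 3600
    picked = []
    for m in sorted(memories, key=lambda m: m["ts"], reverse=True):
        if m["session"] == current_session:
            continue  # cross-channel only: skip the current session
        if m["id"] in recalled_ids:
            continue  # already present from semantic recall
        if m["ts"] < cutoff:
            break     # sorted newest-first, so everything after is older
        picked.append(f"[{m['channel']}] {m['text']}")  # channel label for attribution
        if len(picked) >= limit:
            break
    return picked
```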

Parsica-Memory Rename

antaris-memory has been renamed to parsica-memory inside the suite. Parsica is the brand for memory + search products. The standalone PyPI package is parsica-memory (v2.1.3+). Internal imports remain antaris_memory for backward compatibility.

Previous (v5.3.1)

  • check_bootstrap_files() — warns when workspace files approach OpenClaw's 35K char injection limit
  • get_health() bootstrap check — bootstrap_files_ok in health reports
  • Enricher ANTARIS_LLM_API_KEY — reads OpenClaw plugin config API key, zero extra configuration
  • Contracts v5.3.0 — schema audit, memory.py updated with 11 new fields
  • ESM-safe imports — fs/os replacing require() calls in plugin
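
The bootstrap check from v5.3.1 amounts to a file-size guard. A minimal sketch, assuming the 35K char limit from the release notes; the 90% warning threshold and return shape are assumptions, not the suite's actual signature:

```python
BOOTSTRAP_CHAR_LIMIT = 35_000   # OpenClaw's injection limit (from the release notes)
WARN_RATIO = 0.9                # warn threshold: an assumption for illustration

def check_bootstrap_files(paths):
    """Return (ok, warnings) for workspace files nearing the injection limit."""
    warnings = []
    for path in paths:
        try:
            with open(path, encoding="utf-8", errors="replace") as f:
                size = len(f.read())
        except OSError:
            continue  # missing files are skipped, not fatal
        if size >= BOOTSTRAP_CHAR_LIMIT * WARN_RATIO:
            warnings.append(f"{path}: {size} chars (limit {BOOTSTRAP_CHAR_LIMIT})")
    return (not warnings, warnings)
```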

Packages

Package                  Version  Description
parsica-memory           2.3.0    Persistent memory with 11-layer BM25F search, LLM enrichment, WAL, sharding, cross-channel recency
antaris-router           5.3.0    Intelligent model routing with cost tracking, confidence gating, A/B testing
antaris-guard            5.3.0    Prompt injection detection, PII filtering, rate limiting, behavioral analysis
antaris-context          5.3.0    Context compression, hard budget enforcement, summarization, relevance scoring
antaris-pipeline         5.3.0    Agent orchestration pipeline with per-stage telemetry and OpenClaw bridge
antaris-openclaw-plugin  5.5.0    OpenClaw plugin — auto-recall, auto-ingest, cross-channel recency, Discord bridge, compaction recovery

Architecture

antaris-openclaw-plugin   (lifecycle hooks — auto-recall + auto-ingest + /context sync)
        │
antaris-pipeline          (orchestration)
   ┌────┴────────────────────┐
antaris-memory         antaris-router      antaris-guard      antaris-context
(persistence)          (model selection)   (security)         (compression)
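
The orchestration layer's job, passing a request through ordered stages while recording per-stage telemetry, can be sketched as follows. The stage names and context-dict shape are illustrative, not antaris-pipeline's actual API:

```python
import time

def run_pipeline(message, stages):
    """Pass a request through ordered stages, collecting per-stage timings.

    `stages` is a list of (name, callable) pairs; each callable takes and
    returns the context dict (e.g. guard -> router -> context -> memory).
    """
    ctx = {"message": message, "telemetry": {}}
    for name, stage in stages:
        start = time.perf_counter()
        ctx = stage(ctx)
        ctx["telemetry"][name] = time.perf_counter() - start  # seconds per stage
    return ctx
```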

Design Principles

  • Zero external dependencies across all core Python packages — stdlib only
  • File-based persistence — no database required
  • Multi-process safe — cross-platform FileLock using os.mkdir() atomicity
  • Fully portable — no hardcoded paths, works on any machine
  • Fully tested — 1,149 tests passing
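
The mkdir-based locking principle rests on one guarantee: os.mkdir() either creates the directory or raises, atomically, on every major platform, so directory creation doubles as test-and-set. A minimal sketch of the idea (not the suite's actual FileLock class):

```python
import os
import time

class FileLock:
    """Cross-process lock built on os.mkdir() atomicity."""

    def __init__(self, path, timeout=10.0, poll=0.05):
        self.path, self.timeout, self.poll = path, timeout, poll

    def __enter__(self):
        deadline = time.monotonic() + self.timeout
        while True:
            try:
                os.mkdir(self.path)   # exactly one process can succeed
                return self
            except FileExistsError:
                if time.monotonic() > deadline:
                    raise TimeoutError(f"lock busy: {self.path}")
                time.sleep(self.poll)  # back off and retry

    def __exit__(self, *exc):
        os.rmdir(self.path)           # release by removing the marker dir
```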

Installation

# Install the full suite
pip install antaris-suite

# Or install individual packages
pip install antaris-memory antaris-guard antaris-router antaris-context antaris-pipeline

For OpenClaw plugin installation, see antaris-openclaw-plugin/INSTALL.md.

Commands

/prune — Memory Store Cleanup

Tiered memory cleanup with dual-layer protection (content keywords + enrichment/access score gate).

Command                            Description
/prune small                       Dry-run: pipeline fragments, heartbeats, entries <40 chars
/prune medium                      Dry-run: extends small + zero-access >30 days, near-duplicates
/prune large                       Dry-run: extends medium + zero-access >14 days with low decay
/prune small|medium|large confirm  Apply the prune (auto-backup first)
/prune sessions                    Dry-run stale/aborted sessions
/prune sessions confirm            Remove stale sessions
/prune undo                        Restore last backup
/prune backups                     List available backups
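
The tier nesting above (each tier extends the one before it) can be sketched as a dry-run classifier. The noise patterns are assumptions, and the near-duplicate and decay checks are omitted for brevity; only the size/age gates are shown:

```python
import re
import time

# Assumed noise patterns; the real keyword list is not documented here.
NOISE = re.compile(r"^(heartbeat|ping|\[pipeline\])", re.IGNORECASE)

def prune_candidates(entries, tier="small", now=None):
    """Dry-run: return the entries a tier WOULD remove (nothing is deleted)."""
    now = now if now is not None else time.time()
    day = 86_400
    out = []
    for e in entries:
        small = len(e["text"]) < 40 or bool(NOISE.match(e["text"]))
        stale_30 = e["access_count"] == 0 and now - e["ts"] > 30 * day
        stale_14 = e["access_count"] == 0 and now - e["ts"] > 14 * day
        if tier == "small" and small:
            out.append(e)
        elif tier == "medium" and (small or stale_30):
            out.append(e)   # medium = small + zero-access > 30 days
        elif tier == "large" and (small or stale_14):
            out.append(e)   # large = medium + zero-access > 14 days
    return out
```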

context — Cross-Channel Context Sync

Reads recent Discord channel history, summarizes each channel's activity via LLM, and ingests the summaries into the memory store. Any instance that runs this immediately gets caught up on what every other channel has been doing.

Usage: Say context 12 (or 3, 6, 24, 36, 48) in any channel. The number is hours of history to sync.

Command     Description
context 3   Sync last 3 hours
context 6   Sync last 6 hours
context 12  Sync last 12 hours (default)
context 24  Sync last 24 hours
context 36  Sync last 36 hours
context 48  Sync last 48 hours

Default channels synced:

  • #antaris-analytics-llc
  • #antaris-suite
  • #antaris-bot
  • #wealthhealth-antaris-forge
  • #antaris-search
  • Personal DM channel

How it works:

  1. Reads all messages from each channel within the time window (paginated, no caps)
  2. Includes both human and bot messages (so instances see each other's work)
  3. Summarizes each active channel via Haiku (cheap/fast)
  4. Ingests each summary as source="channel_sync" episodic memory
  5. Reports: channels synced, message counts, which channels had activity
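
The steps above can be sketched as a flow with the Discord reads, the Haiku summarization call, and the memory-store write injected as callables. Function names and the report shape are illustrative, not the actual command implementation:

```python
def sync_context(channels, fetch_messages, summarize, ingest, hours=12):
    """Cross-channel sync: summarize each active channel, ingest the result.

    fetch_messages(channel, hours) -> list[str]   (paginated Discord reads)
    summarize(text) -> str                        (e.g. a Haiku call)
    ingest(record) -> None                        (memory-store write)
    """
    report = {}
    for channel in channels:
        messages = fetch_messages(channel, hours)   # humans AND bots alike
        if not messages:
            report[channel] = 0                     # inactive: nothing to ingest
            continue
        summary = summarize("\n".join(messages))
        ingest({"source": "channel_sync", "channel": channel,
                "kind": "episodic", "text": summary})
        report[channel] = len(messages)
    return report
```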

Note: Currently runs through the agent (say "context 12" without slash). Plugin command routing (/context) is pending an OpenClaw command registration fix.

Benchmarks

v4.9.20 — Mac Mini M4 (10-core, 32GB) · Python 3.14 · 7,658 memories

Search Quality (doc2query self-recall, 150-sample benchmark)

Metric      Result
R@1         61.9%
R@3         75.1%
R@5         79.3%
MRR         0.688
p50         84ms
p95         134ms
Provenance  100%
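
For reference, R@k and MRR over a self-recall benchmark are computed as below, where each query has one known relevant doc and the engine returns a ranked id list:

```python
def recall_at_k(ranked, relevant, k):
    """Fraction of queries whose relevant doc appears in the top k results."""
    hits = sum(1 for r, rel in zip(ranked, relevant) if rel in r[:k])
    return hits / len(ranked)

def mrr(ranked, relevant):
    """Mean reciprocal rank of the relevant doc (0 when it is absent)."""
    total = 0.0
    for r, rel in zip(ranked, relevant):
        total += 1 / (r.index(rel) + 1) if rel in r else 0.0
    return total / len(ranked)
```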

Hard Corpus (30 vocabulary-gap queries, zero keyword overlap)

Metric  Raw BM25  With Enrichment
R@1     10.0%     46.7%

Search Engine Layers

  1. BM25+ with δ normalization
  2. BM25F per-field scoring (content/enriched/keywords/queries independent avg lengths)
  3. Safelist normalizer (~50 domain morphological mappings)
  4. LLM enrichment field boosts (enriched_summary 1.25×, search_queries 1.40×)
  5. Top-K window filter (5,350 → 159 candidates)
  6. Word expansion (9,007 words from SO + code + Wikipedia corpus)
  7. Embedding reranker (Layer 10 — MiniLM centroid vectors)
  8. PRF pseudo-relevance feedback (Layer 11)
  9. Ingest quality gates (noise regex, length minimum, prefix-aware dedup)
  10. Tiered storage (hot/warm/cold shards with LRU cache)
  11. WAL (write-ahead log) for crash safety
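
Layers 1-2 can be sketched as a single scoring function: BM25+ adds a constant δ floor to the saturation term, and BM25F normalizes each field by that field's own average length before combining with per-field weights (e.g. enriched_summary 1.25×). This is a simplified sketch, not parsica-memory's implementation:

```python
from collections import Counter

def bm25f_score(query_terms, doc_fields, corpus_fields, idf,
                weights, k1=1.2, b=0.75, delta=0.5):
    """BM25+ per field (delta floor), each field normalized against its own
    average length, combined with per-field boost weights."""
    score = 0.0
    for field, text in doc_fields.items():
        tokens = text.split()
        tf = Counter(tokens)
        # independent average length per field (the BM25F part)
        avg_len = (sum(len(d.split()) for d in corpus_fields[field])
                   / len(corpus_fields[field]))
        norm = k1 * (1 - b + b * len(tokens) / avg_len)
        for term in query_terms:
            if tf[term] == 0:
                continue
            # BM25+ saturation with the delta floor
            partial = tf[term] * (k1 + 1) / (tf[term] + norm) + delta
            score += weights.get(field, 1.0) * idf.get(term, 0.0) * partial
    return score
```

Raising a field's weight (say enriched from 1.0× to 1.25×) raises the total score of any document matching in that field, which is how the enrichment boosts in layer 4 plug into the same formula.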

Changelog

v4.9.20 (2026-03-08) — Current

  • BM25F per-field scoring with independent field average lengths
  • Query expansion removed (was inflating 3-word queries to 156 tokens)
  • Keyword weight doubled (2×)
  • Boost stacking cleanup (removed non-discriminative boosts)
  • /context cross-channel sync command
  • R@1: 61.9%, R@3: 75.1%, R@5: 79.3%

v4.9.18 (2026-03-07)

  • ChatGPT release review fixes
  • All version strings unified across root/packages/plugin
  • word_expansion.json loader handles both tuple-pair and string-list formats
  • Session isolation behavior documented (None→wildcard is intentional)
  • Root tests updated (220 passed, 0 failures)
  • mypy python_version typo fixed

v4.9.17 (2026-03-06)

  • 24 bug fixes (3 critical, 6 high, 7 medium, 8 low)
  • Critical: content_norms dedup mismatch, live fact double-ingest, session summary synthesis non-functional
  • High: shard merge enrichment loss, CrossSession TOCTOU race, shard cache FIFO→LRU, compact enrichment protection, WAL replay IDF, session key collapse
  • Universal word expansion: 9,007 words from SO + code-search-net + Wikipedia + C4
  • TDZ crash fix in agent_end (was silently killing all post-turn memory storage)

v4.9.16 (2026-03-05)

  • BM25+ (δ=0.5 floor), safelist normalizer, word-embedding query vector, PRF Layer 11
  • /prune command: small/medium/large tiers, undo, backups
  • R@1: 47.1%, MRR: 0.540

v4.9.14 (2026-03-04)

  • Word-embedding query vector (Layer 10 primary path)
  • PRF Layer 11 pseudo-relevance feedback
  • R@1: 45.3%, MRR: 0.531

v4.9.13 (2026-03-04)

  • BM25 normalization overhaul + safelist normalizer
  • R@1: 39.6%, MRR: 0.473

License

MIT

Project details


Download files

Download the file for your platform.

Source Distribution

antaris_suite-6.0.7.tar.gz (35.5 kB)

Uploaded Source

Built Distribution


antaris_suite-6.0.7-py3-none-any.whl (10.6 kB)

Uploaded Python 3

File details

Details for the file antaris_suite-6.0.7.tar.gz.

File metadata

  • Download URL: antaris_suite-6.0.7.tar.gz
  • Upload date:
  • Size: 35.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

Hashes for antaris_suite-6.0.7.tar.gz
Algorithm Hash digest
SHA256 0bc0ba981add6c0159281d24a30e5b7ef20c74c303f704ddfdbbd8dd50574fda
MD5 d41f1d32c58742bfcbf6d24bb7c2d557
BLAKE2b-256 340fa476c6cc46dc8781fc82b4d2896d92efee0d6419321939775898a1f05f0f


File details

Details for the file antaris_suite-6.0.7-py3-none-any.whl.

File metadata

  • Download URL: antaris_suite-6.0.7-py3-none-any.whl
  • Upload date:
  • Size: 10.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

Hashes for antaris_suite-6.0.7-py3-none-any.whl
Algorithm Hash digest
SHA256 3c6f87fd0d009ba05753e909996bc48c377213cc917d71691f3619232bcf30aa
MD5 0937d652c7caf215e6e6c98d0488f37f
BLAKE2b-256 9cc5e000af2316c4b954d3113af3c345de561d08e658b2173d37b294353fd32d

