# Antaris Memory

File-based persistent memory for AI agents. Zero dependencies.

Store, search, decay, and consolidate agent memories using only the Python standard library. No vector databases, no infrastructure, no API keys.
## What It Does
- Stores structured facts, decisions, and context as JSON on the local filesystem
- Retrieval weighted by recency × importance × access frequency (Ebbinghaus-inspired decay)
- Classifies incoming information by priority (P0–P3) and drops ephemeral content at intake
- Detects contradictions between stored memories using deterministic rule-based comparison
- Runs fully offline — zero network calls, zero tokens, zero API keys
## What It Doesn't Do
- Not a vector database — no embeddings (optional embedding support planned)
- Not a knowledge graph — flat memory store with metadata indexing
- Not semantic — contradiction detection compares normalized statements using explicit conflict rules, not inference. It will not catch contradictions phrased differently.
- Not LLM-dependent — all operations are deterministic. No model calls, no prompt engineering.
## Design Goals
| Goal | Rationale |
|---|---|
| Deterministic | Same input → same output. No model variance. |
| Offline | No network, no API keys, no phoning home. |
| Minimal surface area | One class (MemorySystem), obvious method names. |
| No hidden processes | Consolidation and synthesis run only when called. |
| Transparent storage | Plain JSON files. Inspect with any text editor. |
## Install

```bash
pip install antaris-memory
```
## Quick Start

```python
from antaris_memory import MemorySystem

mem = MemorySystem("./workspace", half_life=7.0)
mem.load()  # Load existing state (no-op if first run)

# Store memories
mem.ingest("Decided to use PostgreSQL for the database.",
           source="meeting-notes", category="strategic")
mem.ingest("The API costs $500/month — too expensive.",
           source="review", category="operational")

# Search (results ranked by relevance × decay score)
for r in mem.search("database decision"):
    print(f"[{r.confidence:.1f}] {r.content}")

# Temporal queries
mem.on_date("2026-02-14")
mem.narrative(topic="database migration")

# Selective deletion
mem.forget(entity="John Doe")  # GDPR-ready, with audit trail
mem.forget(before_date="2025-01-01")

# Background consolidation
report = mem.consolidate()
# → duplicates found, topic clusters, contradictions, archive suggestions

mem.save()
```
## Input Gating (P0–P3)

Classify content at intake. Low-value data never enters storage.

```python
mem.ingest_with_gating("CRITICAL: API key compromised", source="alerts")
# → P0 (critical) → stored in strategic tier

mem.ingest_with_gating("Decided to switch to PostgreSQL", source="meeting")
# → P1 (operational) → stored in operational tier

mem.ingest_with_gating("thanks for the update!", source="chat")
# → P3 (ephemeral) → dropped, not stored
```
| Level | Category | Stored | Examples |
|---|---|---|---|
| P0 | Strategic | ✅ | Security alerts, errors, deadlines, financial commitments |
| P1 | Operational | ✅ | Decisions, assignments, technical choices |
| P2 | Tactical | ✅ | Background info, research, general discussion |
| P3 | — | ❌ | Greetings, acknowledgments, filler |
## Knowledge Synthesis

Identify gaps in stored knowledge and integrate new research.

```python
# What does the agent not know enough about?
suggestions = mem.research_suggestions(limit=5)
# → [{"topic": "token optimization", "reason": "mentioned 3x, no details", "priority": "P1"}, ...]

# Integrate external findings
report = mem.synthesize(research_results={
    "token optimization": "Context window management techniques..."
})
```
## Memory Decay

Memories fade over time unless reinforced by access:

```
score = importance × 2^(-age / half_life) + reinforcement
```
- Fresh memories score high
- Unused memories decay toward zero
- Accessed memories are automatically reinforced
- Below-threshold memories are candidates for compression
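The decay curve above fits in a few lines of plain Python. This is an illustration of the formula, not the library's internal code; the `age_days` parameter name and the additive reinforcement term are assumptions based on the formula as stated:

```python
def decay_score(importance: float, age_days: float,
                half_life: float = 7.0, reinforcement: float = 0.0) -> float:
    """Ebbinghaus-style exponential decay: score halves every `half_life` days."""
    return importance * 2 ** (-age_days / half_life) + reinforcement

# A fresh memory keeps its full importance...
print(decay_score(importance=1.0, age_days=0))   # → 1.0
# ...after one half-life it is worth half...
print(decay_score(importance=1.0, age_days=7))   # → 0.5
# ...and access reinforcement offsets the fade.
print(decay_score(importance=1.0, age_days=14, reinforcement=0.3))  # → 0.55
```

With the default 7-day half-life, an untouched memory drops below 0.1 in roughly a month, which is what makes it a compression candidate.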
## Consolidation

Run periodically to maintain memory health:

```python
report = mem.consolidate()
```
- Finds and merges near-duplicate memories
- Discovers topic clusters
- Flags contradictions (deterministic, rule-based)
- Suggests memories for archival
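Near-duplicate detection can be done deterministically without embeddings. A minimal sketch of the general technique (Jaccard overlap on normalized token sets); this is not Antaris Memory's actual algorithm, just an example of the rule-based style the library describes:

```python
import re

def normalize(text: str) -> set[str]:
    """Lowercase, strip punctuation, and return the set of word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def is_near_duplicate(a: str, b: str, threshold: float = 0.8) -> bool:
    """Jaccard similarity of token sets: deterministic, same input → same output."""
    ta, tb = normalize(a), normalize(b)
    if not ta or not tb:
        return False
    return len(ta & tb) / len(ta | tb) >= threshold

print(is_near_duplicate("Decided to use PostgreSQL for the database.",
                        "decided to use postgresql for the database"))  # → True
print(is_near_duplicate("Decided to use PostgreSQL.",
                        "The API costs $500/month."))                   # → False
```

Because no model is involved, the same pair of memories always produces the same verdict, which is the property the Design Goals table calls "Deterministic".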
## Storage Format

All state is stored in a single `memory_metadata.json` file:

```json
{
  "version": "0.2.0",
  "saved_at": "2026-02-15T14:30:00",
  "count": 10938,
  "memories": [
    {
      "hash": "a1b2c3d4e5f6",
      "content": "Decided to use PostgreSQL",
      "source": "meeting-notes",
      "category": "strategic",
      "created": "2026-02-15T10:00:00",
      "importance": 1.0,
      "confidence": 0.8,
      "sentiment": {"strategic": 0.6},
      "tags": ["postgresql", "deployment"]
    }
  ]
}
```
Deletions are logged to `memory_audit.json` for compliance.

The storage format may evolve between versions; breaking changes will increment the MAJOR version. See the CHANGELOG.
## Architecture

```
MemorySystem
├── InputGate            — P0–P3 classification at intake
├── DecayEngine          — Ebbinghaus forgetting curves
├── SentimentTagger      — Rule-based keyword tone tagging
├── TemporalEngine       — Date queries and narrative building
├── ConfidenceEngine     — Reliability scoring
├── CompressionEngine    — Old file summarization
├── ForgettingEngine     — Selective deletion with audit
├── ConsolidationEngine  — Dedup, clustering, contradiction detection
└── KnowledgeSynthesizer — Gap identification and research integration
```

Data flow: `ingest → classify (P0–P3) → normalize → persist → search → decay-weight → return`
## Zero Dependencies
The core package uses only the Python standard library. Optional integrations (LLMs, embeddings) are deliberately excluded to preserve deterministic behavior and eliminate runtime requirements.
## Comparison

| Feature | Antaris Memory | LangChain Memory | Mem0 | Zep |
|---|---|---|---|---|
| Input gating | ✅ P0-P3 | ❌ | ❌ | ❌ |
| Knowledge synthesis | ✅ | ❌ | ❌ | ❌ |
| No database required | ✅ | ❌ | ❌ | ❌ |
| Memory decay | ✅ Ebbinghaus | ❌ | ❌ | ⚠️ Temporal graphs |
| Tone tagging | ✅ Rule-based keywords | ❌ | ❌ | ✅ NLP |
| Temporal queries | ✅ | ❌ | ❌ | ✅ |
| Contradiction detection | ✅ Rule-based | ❌ | ❌ | ⚠️ Fact evolution |
| Selective forgetting | ✅ With audit | ❌ | ⚠️ Invalidation | ⚠️ Invalidation |
| Infrastructure needed | None | Redis/PG | Vector + KV + Graph | PostgreSQL + Vector |
## License
Licensed under the Apache License 2.0. See LICENSE for details.