🧠 Antaris Memory
Human-like memory for AI agents. Patent pending.
Give your AI agents persistent memory that decays, reinforces, feels, reasons about time, detects its own contradictions, and cleans up after itself. For under $5/year.
The Problem
Every AI agent forgets everything between sessions. GPT, Claude, Gemini: they all start from zero every time. Enterprise managed-memory solutions cost $5,000-$50,000/year, and even free open-source alternatives require complex database infrastructure to deploy.
The Solution
from antaris_memory import MemorySystem
# Initialize
mem = MemorySystem("./my-agent-workspace")
# Ingest conversations, notes, anything
mem.ingest_file("conversation.md", category="tactical")
mem.ingest_directory("./memory", pattern="*.md", category="tactical")
# Search with decay-weighted relevance
results = mem.search("what did we decide about pricing?")
# Ask about time
memories = mem.on_date("2026-02-14")
story = mem.narrative(topic="patent filing")
# Forget things (GDPR-ready)
mem.forget(entity="John Doe")
mem.forget(before_date="2025-01-01")
# Run dream-state consolidation
report = mem.consolidate()
# Save
mem.save()
Features
| Feature | Description |
|---|---|
| Input Gating (P0-P3) | Classify and route information at intake (critical, operational, contextual, or ephemeral) so low-value data never enters storage |
| Autonomous Knowledge Synthesis | Agent independently researches and integrates new knowledge during idle periods |
| Zero Infrastructure | No databases, no vector stores, no cloud services. Just files. |
| Memory Decay | Ebbinghaus-inspired forgetting curves with reinforcement on access |
| Sentiment Tagging | Auto-detect emotional context (positive, negative, urgent, strategic, financial) |
| Temporal Reasoning | Query by date, date ranges, build chronological narratives |
| Confidence Scoring | Track reliability, increase on corroboration |
| Contradiction Detection | Flag when memories conflict with each other |
| Memory Compression | Auto-summarize old files, preserve key points |
| Selective Forgetting | GDPR-ready deletion by topic, entity, or date with audit trail |
| Dream State | Background consolidation: find duplicates, cluster topics, generate insights |
Install
pip install antaris-memory
Or from source:
git clone https://github.com/Antaris-Analytics/antaris-memory.git
cd antaris-memory
pip install -e .
What's New in v0.2
🚪 Input Gating (P0-P3): Smart content triage automatically classifies information at intake:
- P0 (Critical): Security alerts, errors, financial commitments, deadlines → strategic category
- P1 (Operational): Decisions, assignments, technical choices → operational category
- P2 (Contextual): Background info, research, discussion → tactical category
- P3 (Ephemeral): Greetings, "thanks", "OK", "lol" → silently filtered out
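The tier assignments above can be pictured as a keyword-based classifier. The sketch below is a hypothetical illustration (the trigger words and `classify` helper are assumptions, not the library's actual gating logic):

```python
import re

# Hypothetical keyword heuristics for each priority tier; the real
# gating rules in antaris-memory may differ.
TIER_PATTERNS = {
    "P0": r"\b(security|error|deadline|payment|owe|due)\b",
    "P1": r"\b(decided|assigned|chose|will use|migrate)\b",
    "P3": r"^\s*(thanks|ok|okay|lol|hi|hello)[.!]?\s*$",
}

def classify(text: str) -> str:
    """Return a priority tier for a message; default is P2 (contextual)."""
    t = text.lower()
    if re.search(TIER_PATTERNS["P3"], t):
        return "P3"  # ephemeral: filtered out before storage
    if re.search(TIER_PATTERNS["P0"], t):
        return "P0"  # critical -> strategic category
    if re.search(TIER_PATTERNS["P1"], t):
        return "P1"  # operational category
    return "P2"      # contextual -> tactical category
```

Checking the ephemeral pattern first means throwaway messages are rejected before any heavier classification runs.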
🧠 Autonomous Knowledge Synthesis: During idle periods, your agent now:
- Identifies knowledge gaps (unanswered questions, TODOs, unexplained terms)
- Suggests research topics based on memory analysis
- Integrates new research findings with existing knowledge
- Creates compound knowledge entries from cross-referenced information
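The gap-identification step can be sketched as a scan for unanswered questions and TODO markers. This is a hypothetical helper (the `find_knowledge_gaps` name and its heuristics are assumptions; the package's own analysis is not shown here):

```python
def find_knowledge_gaps(memories: list[str]) -> list[str]:
    """Collect lines that look like open questions or TODO items.
    Hypothetical sketch; antaris-memory's real heuristics may differ."""
    gaps = []
    for text in memories:
        for line in text.splitlines():
            line = line.strip()
            # Unanswered questions and TODOs signal missing knowledge.
            if line.endswith("?") or line.upper().startswith("TODO"):
                gaps.append(line)
    return gaps
```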
Integration Examples: Ready-to-use examples for OpenClaw agents and LangChain chains.
# Use intelligent gating
mem.ingest_with_gating(conversation, source="chat", context={"session": "123"})
# Get research suggestions
suggestions = mem.research_suggestions(limit=5)
# Run autonomous synthesis
report = mem.synthesize(research_results={"topic": "new findings..."})
Quick Start
from antaris_memory import MemorySystem
# Create a memory system
mem = MemorySystem("./workspace", half_life=7.0)
# Load existing state (if any)
mem.load()
# Ingest some content
mem.ingest("Today we decided to use PostgreSQL for the database.",
           source="meeting-notes", category="strategic")
mem.ingest("The API costs $500/month which is too expensive.",
           source="review", category="financial")
# Search
results = mem.search("database decision")
for r in results:
    print(f"[{r.confidence:.1f}] {r.content}")
# Check stats
print(mem.stats())
# Save state
mem.save()
How It Works
Memory Decay (Ebbinghaus Curves)
Memories naturally fade over time, just like human memory:
Score = Importance × 2^(-age / half_life) + reinforcement
- Fresh memories score high
- Old unused memories fade toward zero
- Accessed memories get reinforced: the more you recall something, the stronger it stays
- Memories below the archive threshold are candidates for compression
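The decay formula above translates directly into code. A minimal sketch, assuming age is measured in days and reinforcement is an additive bonus, as the equation suggests (the `decay_score` function name is illustrative, not the library's API):

```python
def decay_score(importance: float, age_days: float,
                half_life: float = 7.0, reinforcement: float = 0.0) -> float:
    """Score = importance * 2^(-age / half_life) + reinforcement."""
    return importance * 2 ** (-age_days / half_life) + reinforcement

# A memory with importance 1.0 halves after one half-life:
decay_score(1.0, 0.0)   # 1.0
decay_score(1.0, 7.0)   # 0.5
```

Reinforcement on access simply adds to the score, which is why frequently recalled memories stay above the archive threshold even as the exponential term fades.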
Sentiment Analysis
Every memory is auto-tagged with emotional context:
entry.sentiment = {"positive": 0.8, "financial": 0.5}
Search by emotion: mem.search("budget", sentiment_filter="financial")
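A keyword-lexicon tagger in the spirit of the example above might look like the following sketch. The lexicon contents and the `tag_sentiment` helper are assumptions for illustration; the package's actual lexicon and scoring are not documented here:

```python
# Hypothetical mini-lexicon mapping sentiment tags to trigger words.
LEXICON = {
    "positive": {"great", "excellent", "win", "happy"},
    "negative": {"bad", "broken", "fail", "angry"},
    "urgent": {"asap", "immediately", "deadline", "now"},
    "financial": {"cost", "budget", "price", "invoice"},
}

def tag_sentiment(text: str) -> dict[str, float]:
    """Score each tag by the fraction of its trigger words that appear."""
    t = text.lower()
    scores = {}
    for tag, triggers in LEXICON.items():
        hits = sum(1 for word in triggers if word in t)
        if hits:
            scores[tag] = round(hits / len(triggers), 2)
    return scores
```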
Dream State Consolidation
Run periodically (cron job, background task) to:
- Find and merge near-duplicate memories
- Discover topic clusters
- Detect contradictions
- Suggest memories for archival
report = mem.consolidate()
# Returns: duplicates found, clusters, contradictions, archive suggestions
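The duplicate-finding step can be illustrated with word-level Jaccard similarity, a common near-duplicate measure; whether antaris-memory uses exactly this metric is an assumption, and both function names below are hypothetical:

```python
def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two memory texts, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def find_near_duplicates(memories: list[str], threshold: float = 0.8):
    """Return index pairs of memories whose similarity meets the threshold."""
    pairs = []
    for i in range(len(memories)):
        for j in range(i + 1, len(memories)):
            if jaccard(memories[i], memories[j]) >= threshold:
                pairs.append((i, j))
    return pairs
```

The pairwise scan is O(n²), which is fine for a background "dream state" job that runs while the agent is idle.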
Architecture
┌─────────────────────────────────────────────┐
│                MemorySystem                 │
│                                             │
│  ┌──────────┐ ┌───────────┐ ┌────────────┐  │
│  │  Decay   │ │ Sentiment │ │  Temporal  │  │
│  │  Engine  │ │  Tagger   │ │   Engine   │  │
│  └──────────┘ └───────────┘ └────────────┘  │
│  ┌──────────┐ ┌───────────┐ ┌────────────┐  │
│  │Confidence│ │Compression│ │ Forgetting │  │
│  │  Engine  │ │  Engine   │ │   Engine   │  │
│  └──────────┘ └───────────┘ └────────────┘  │
│  ┌────────────────────────────────────────┐ │
│  │          Consolidation Engine          │ │
│  │        (Dream State Processing)        │ │
│  └────────────────────────────────────────┘ │
│                                             │
│  Storage: JSON file (zero dependencies)     │
└─────────────────────────────────────────────┘
Configuration
mem = MemorySystem(
workspace="./workspace", # Where to store metadata
half_life=7.0, # Memory decay half-life in days
tag_terms=["custom", "terms"], # Additional auto-tag keywords
)
Zero Dependencies
Antaris Memory uses only Python standard library. No numpy, no torch, no API keys required.
Optional: Install openai for embedding-based semantic search (coming in v0.2).
Comparison
| Feature | Antaris Memory | LangChain Memory | Mem0 | Zep |
|---|---|---|---|---|
| Input gating (P0-P3) | ✅ | ❌ | ❌ | ❌ |
| Autonomous knowledge synthesis | ✅ | ❌ | ❌ | ❌ |
| No database required | ✅ | ❌ | ❌ | ❌ |
| Memory decay curves | ✅ | ❌ | ❌ | ⚠️ Partial |
| Emotional tagging | ✅ | ❌ | ❌ | ❌ |
| Temporal reasoning | ✅ | ❌ | ❌ | ❌ |
| Contradiction detection | ✅ | ❌ | ❌ | ⚠️ Partial |
| Selective forgetting | ✅ | ❌ | ⚠️ Partial | ⚠️ Partial |
| No infrastructure needed | ✅ | ❌ | ❌ | ❌ |
| Patent pending | ✅ | ❌ | ❌ | ❌ |
License
Apache 2.0: free for personal and commercial use.
File details
Details for the file antaris_memory-0.2.0.tar.gz.
File metadata
- Download URL: antaris_memory-0.2.0.tar.gz
- Upload date:
- Size: 27.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 26d4e3600e4116ed2b382f3d47b4d0f1a1331fa3ea5e4b34dc600554aebdcb1c |
| MD5 | 8be7b0173edaa175dec8e4d5fc5fad15 |
| BLAKE2b-256 | d2099aed476a6ceee47254c73fec08c3eada2a2d9c8704f7063139190f79bd7e |
File details
Details for the file antaris_memory-0.2.0-py3-none-any.whl.
File metadata
- Download URL: antaris_memory-0.2.0-py3-none-any.whl
- Upload date:
- Size: 24.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a435ddc2d41e1f6806a637f3851f643099de5934dd4cd09ea3028e9884bc7c66 |
| MD5 | 8fbf739979190fe1b5335e2f64313933 |
| BLAKE2b-256 | 987413faf11f2076284651187320fd41f21af8c729431509626ec3f6e8f9c97f |