File-based persistent memory for AI agents. Zero dependencies.
**Note:** this release was yanked (reason: Incorrect Versioning).
# 🧠 Parsica-Memory
Persistent, intelligent memory for AI agents. Zero dependencies. Pure Python. File-backed.
## What Is This?
AI agents are stateless by default. Every spawn is a cold start. parsica-memory gives agents a persistent, searchable, intelligent memory store that:
- Remembers across sessions, spawns, and restarts
- Retrieves the right memories using an 11-layer BM25+ search engine
- Decays old memories gracefully so signal stays high
- Learns from mistakes, facts, and procedures with specialized memory types
- Shares knowledge across multi-agent teams
- Enriches itself via LLM hooks to dramatically improve recall
- Cross-session recall — semantic memories surface across all sessions automatically
No vector database. No API keys required. No external services. Just pip install and go.
## ⚡ Quick Start

```bash
pip install parsica-memory
```

```python
from parsica_memory import MemorySystem

mem = MemorySystem(workspace="./memory", agent_name="my-agent")
mem.load()

# Store a memory
mem.ingest("Deployed v2.3.1 to production. All checks green.",
           source="deploy-log", session_id="session-123")

# Search with cross-session recall
results = mem.search("production deployment",
                     session_id="session-456",
                     cross_session_recall="semantic")
for r in results:
    print(r.content)

mem.save()
```
That's it. No config files needed.
## 📦 Installation

```bash
pip install parsica-memory
```

Version: 1.0.1 · Requirements: Python 3.8+ · Zero external dependencies · stdlib only
## 🔑 Key Features
### 11-Layer Search Engine
Not a simple keyword matcher. Every query runs through:
- BM25+ TF-IDF — baseline relevance with delta floor
- Exact Phrase Bonus — verbatim matches score higher
- Field Boosting — tags, source, category weighted
- Rarity & Proper Noun Boost — rare terms and names surface
- Positional Salience — intro/conclusion bias
- Semantic Expansion — PPMI co-occurrence query widening
- Intent Reranker — temporal, entity, howto detection
- Qualifier & Negation — "failed" ≠ "successful"
- Clustering Boost — coherent result groups score higher
- Embedding Reranker — local MiniLM embeddings (no API)
- Pseudo-Relevance Feedback — top results refine the query
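The individual layers are internal to the library, but the baseline layer is standard enough to sketch. Below is a minimal BM25+ scorer over pre-tokenized documents (the `delta` term plays the role of the "delta floor" named above); every name here is illustrative, not parsica-memory's API.

```python
import math
from collections import Counter

def bm25_plus(query_terms, docs, k1=1.2, b=0.75, delta=1.0):
    """Minimal BM25+: classic BM25 term weighting, plus a constant
    `delta` floor added for every matched term."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                      # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query_terms:
            if tf[t] == 0:
                continue                # unmatched terms contribute nothing
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            norm = tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
            score += idf * (norm + delta)
        scores.append(score)
    return scores
```

With `delta=0` this reduces to plain BM25; the floor keeps a long document that matches a rare term from scoring near zero.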
### Memory Types
| Type | Decay Rate | Importance | Use Case |
|---|---|---|---|
| `episodic` | Normal | 1× | General events |
| `semantic` | Normal | 1× | Facts, decisions (crosses sessions) |
| `fact` | Normal | High recall | Verified knowledge |
| `mistake` | 10× slower | 2× | Never forget failures |
| `preference` | 3× slower | 1× | User/agent preferences |
| `procedure` | 3× slower | 1× | How-to knowledge |
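Concretely, decay is exponential with a configurable half-life (7 days by default, see the Core API below). Here is a sketch of how the table's multipliers could stretch it, assuming the slowdown factor simply scales the half-life; the library's internals may differ.

```python
# Slowdown multipliers from the table above (assumed to stretch the half-life)
DECAY_SLOWDOWN = {
    "episodic": 1.0, "semantic": 1.0, "fact": 1.0,
    "mistake": 10.0, "preference": 3.0, "procedure": 3.0,
}

def decay_weight(age_days: float, memory_type: str, half_life: float = 7.0) -> float:
    """Exponential decay: weight halves every `half_life` days,
    stretched by the memory type's slowdown factor."""
    effective_half_life = half_life * DECAY_SLOWDOWN[memory_type]
    return 0.5 ** (age_days / effective_half_life)

# An episodic memory loses half its weight in 7 days;
# a mistake would take 70 days to decay by the same amount.
```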
### Cross-Session Recall (v1.0.1)

```python
# From session B, find semantic memories stored in session A
results = mem.search(
    "what's the API key format?",
    session_id="session-B",
    cross_session_recall="semantic"  # "all" | "semantic" | "none"
)
```
- `"all"` — no filtering (default, backward compatible)
- `"semantic"` — other sessions' memories only if `memory_type == "semantic"`
- `"none"` — strict session isolation
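Functionally, the three modes amount to a post-filter over candidate results. A sketch over plain dicts (the field names are assumptions, not the library's record schema):

```python
def cross_session_filter(memories, current_session, mode="all"):
    """Keep a memory if it belongs to the current session, or if the
    recall mode lets it cross session boundaries."""
    if mode == "all":
        return list(memories)          # no filtering at all
    kept = []
    for m in memories:
        if m["session_id"] == current_session:
            kept.append(m)             # own-session memories always pass
        elif mode == "semantic" and m["memory_type"] == "semantic":
            kept.append(m)             # foreign memories only if semantic
    return kept
```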
### Auto Memory Type Classification
Memories are automatically classified as semantic (facts, decisions, preferences) or episodic (events, tasks, logs) at ingest time. No manual tagging needed.
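The real classifier is internal; a crude keyword heuristic conveys the shape of the decision (the cue words here are invented purely for illustration):

```python
# Invented cue words suggesting a statement of fact or decision
SEMANTIC_CUES = {"is", "are", "prefers", "decided", "always", "format", "uses"}

def classify(content: str) -> str:
    """Rough stand-in: statements of fact/decision -> semantic,
    everything else (events, tasks, logs) -> episodic."""
    words = set(content.lower().split())
    return "semantic" if words & SEMANTIC_CUES else "episodic"
```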
### LLM Enrichment
Pass an enricher callable to dramatically improve recall:
```python
def my_enricher(content: str) -> dict:
    # Call any LLM — returns tags, summary, keywords, search_queries
    return {"tags": [...], "summary": "...", "keywords": [...], "search_queries": [...]}

mem = MemorySystem(workspace="./memory", agent_name="my-agent", enricher=my_enricher)
```
Enriched fields get boosted weights in search: `search_queries` 3×, `enriched_summary` 2×, `search_keywords` 2×.
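As a sketch of what field boosting means for scoring, here is a toy weighted term-count; the weights come from the sentence above, but the scoring scheme itself is an assumption, not the library's:

```python
# Field boost weights as stated above; unknown fields default to 1x
FIELD_WEIGHTS = {"search_queries": 3.0, "enriched_summary": 2.0,
                 "search_keywords": 2.0, "content": 1.0}

def weighted_hits(query_terms, memory_fields):
    """Sum term matches per field, scaled by the field's boost weight."""
    score = 0.0
    for field, text in memory_fields.items():
        weight = FIELD_WEIGHTS.get(field, 1.0)
        tokens = text.lower().split()
        score += weight * sum(tokens.count(t) for t in query_terms)
    return score
```

A match in an enricher-generated `search_queries` field thus counts three times as much as the same match in the raw content.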
### Context Packets
Cold-spawn solution for sub-agents:
```python
packet = mem.build_context_packet(
    task="Deploy the auth service",
    max_tokens=3000,
    include_mistakes=True
)
# Inject packet.render() into the sub-agent's system prompt
```
### Graph Intelligence
Automatic entity extraction and knowledge graph:
```python
path = mem.entity_path("payment-service", "database", max_hops=3)
triples = mem.graph_search(subject="PostgreSQL", relation="used_by")
```
### Tiered Storage
| Tier | Age | Behavior |
|---|---|---|
| Hot | 0–3 days | Always loaded |
| Warm | 3–14 days | Loaded on-demand |
| Cold | 14+ days | Requires `include_cold=True` |
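The tier assignment reduces to an age-threshold check, roughly:

```python
def tier_for(age_days: float) -> str:
    """Map a memory's age to its storage tier, per the table above."""
    if age_days < 3:
        return "hot"     # always loaded
    if age_days < 14:
        return "warm"    # loaded on-demand
    return "cold"        # only searched with include_cold=True
```

(The function name and exact boundary handling are assumptions; the day thresholds are from the table.)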
### Input Gating
P0–P3 priority classification drops noise before it enters the store:
```python
mem.ingest_with_gating("ok thanks", source="chat")                         # → dropped (P3)
mem.ingest_with_gating("Production outage: auth down", source="incident")  # → stored (P0)
```
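A toy gate in the same spirit, with invented keyword lists and thresholds (parsica-memory's actual classifier is internal):

```python
def gate(content: str) -> str:
    """Assign a rough P0-P3 priority; P3 messages get dropped."""
    text = content.lower()
    if any(k in text for k in ("outage", "down", "data loss")):
        return "P0"   # critical incident: always stored
    if any(k in text for k in ("error", "failed", "deploy")):
        return "P1"   # operationally significant
    if len(text.split()) <= 3:
        return "P3"   # chit-chat / acknowledgements: dropped
    return "P2"
```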
### Shared / Team Memory
```python
from parsica_memory import AgentRole

pool = mem.enable_shared_pool(
    pool_dir="./shared",
    pool_name="project-alpha",
    agent_id="worker-1",
    role=AgentRole.WRITER
)
mem.shared_write("Research complete: competitor uses GraphQL", namespace="research")
```
### MCP Server

```bash
python -m parsica_memory serve --workspace ./memory --agent-name my-agent
```
Works with Claude Desktop and any MCP-compatible client.
## 🖥️ CLI

```bash
# Initialize a workspace
python -m parsica_memory init --workspace ./memory --agent-name my-agent

# Check status
python -m parsica_memory status --workspace ./memory

# Rebuild knowledge graph
python -m parsica_memory rebuild-graph --workspace ./memory

# Start MCP server
python -m parsica_memory serve --workspace ./memory --agent-name my-agent
```
## 🔧 Core API

```python
from parsica_memory import MemorySystem

mem = MemorySystem(
    workspace="./memory",        # Required
    agent_name="my-agent",       # Required — scopes the store
    half_life=7.0,               # Decay half-life in days
    enricher=None,               # LLM enrichment callable
    tiered_storage=True,         # Hot/warm/cold tiers
    graph_intelligence=True,     # Entity extraction + graph
    semantic_expansion=True,     # PPMI query expansion
)

# Lifecycle
mem.load()    # Load from disk
mem.save()    # Save to disk
mem.flush()   # WAL → shards
mem.close()   # Flush + release

# Ingestion
mem.ingest(content, source=..., session_id=..., channel_id=...)
mem.ingest_fact(content, source=...)
mem.ingest_mistake(what_happened=..., correction=..., root_cause=..., severity=...)
mem.ingest_preference(content, source=...)
mem.ingest_procedure(content, source=...)
mem.ingest_file(path, category=...)
mem.ingest_url(url, depth=2)

# Search
mem.search(query, limit=10, cross_session_recall="semantic")
mem.search_with_context(query, cooccurrence_boost=True)
mem.recent(limit=20)

# Graph
mem.graph_search(subject=..., relation=..., obj=...)
mem.entity_path(source, target, max_hops=3)

# Context
mem.build_context_packet(task=..., max_tokens=3000, include_mistakes=True)

# Maintenance
mem.compact()
mem.consolidate()
mem.reindex()
mem.forget(topic=..., before_date=...)
```
## 🏗️ Architecture

```
parsica-memory/
├── Core: MemorySystem, MemoryEntry, WAL
├── Storage: ShardManager, TierManager, GCS backend
├── Search: 11-layer BM25+ pipeline
├── Intelligence: EntityExtractor, MemoryGraph, LLM Enricher
├── Multi-Agent: SharedMemoryPool, AgentRoles
├── Context: ContextPacketBuilder
└── Server: MCP server, CLI
```
### Relationship to Antaris-Memory
parsica-memory and antaris-memory share the same core engine. They are functionally identical — same 11-layer search, same memory types, same API. The difference is branding:
- antaris-memory — part of the antaris-suite ecosystem
- parsica-memory — standalone package, same features, independent identity
Use whichever fits your project. They're interchangeable.
## 📄 License
Apache 2.0
## 🔗 Links
- PyPI: https://pypi.org/project/parsica-memory/
- GitHub: https://github.com/Antaris-Analytics-LLC/Parsica-Memory
- Antaris Analytics: https://antarisanalytics.ai
Built by Antaris Analytics LLC for production AI agent deployments.
### File details: parsica_memory-5.2.1.tar.gz

- Size: 1.7 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3

| Algorithm | Hash digest |
|---|---|
| SHA256 | `69f5ef2c3fbf36136ce899f1dd4a5800782fb1f7107125edd767c0dbf1f2ad17` |
| MD5 | `935ea1733f92a8a1f4770e96dfc5bd68` |
| BLAKE2b-256 | `62123f57f31f228219c4262e0e26fe381e84272d0d843c642315fbb92bc28459` |
### File details: parsica_memory-5.2.1-py3-none-any.whl

- Size: 1.8 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3

| Algorithm | Hash digest |
|---|---|
| SHA256 | `4fa8ce782babcbf8a3a115e40680ae2a909cb6366111e8c3229eb4cd85d2f4b5` |
| MD5 | `bd238cf2b55666f28263963d74376a4e` |
| BLAKE2b-256 | `178a138524ce30d25389de6bb26f44c34f27b4550a334784e12c1336f61bfad4` |