
File-based persistent memory for AI agents. Zero dependencies.


🧠 parsica-memory

Persistent, intelligent memory for AI agents. The flagship package of the Antaris Analytics suite.



What Is This?

AI agents are stateless by default. Every spawn is a cold start. parsica-memory gives agents a persistent, searchable, intelligent memory store that:

  • Remembers across sessions, spawns, and restarts
  • Retrieves the right memories using an 11-layer BM25+ search engine
  • Decays old memories gracefully so signal stays high
  • Learns from mistakes, facts, and procedures with specialized memory types
  • Shares knowledge across multi-agent teams via shared pools
  • Enriches itself via LLM hooks to dramatically improve recall
  • Surfaces semantic memories across all sessions automatically via cross-session recall

No vector database. No API keys required. No external services. Just pip install and go.


⚡ Quick Start

pip install parsica-memory

from parsica_memory import MemorySystem

mem = MemorySystem(workspace="./memory", agent_name="my-agent")
mem.load()

# Store a memory
mem.ingest("Team decided to use React for the frontend.",
           source="deploy-log", session_id="session-123")

# Search with cross-session recall
results = mem.search("production deployment",
                     session_id="session-456",
                     cross_session_recall="semantic")
for r in results:
    print(r.content)

mem.save()

That's it. No config files needed.


📦 Installation

pip install parsica-memory

Version: 2.3.2 · Requirements: Python 3.9+ · Zero external dependencies · stdlib only


What's New in v2.3.2

Release hardening and metadata cleanup

This release aligns package metadata, top-level version references, and public-facing docs so the published 2.3.2 package tells one consistent story.

Parsica naming cleanup

Public-facing references now consistently use parsica-memory / parsica_memory where applicable, while legacy names are only mentioned when documenting compatibility or project history.

Storage backend positioning

The local filesystem backend remains the default and supported path. Google Cloud Storage remains an experimental stub in the codebase and is documented as such rather than presented as a primary public backend.

🔗 Cross-Channel Continuity

Your AI picks up where you left off, on any device and in any app.

parsica-memory tracks source_channel on every stored memory, enabling recall across devices and apps.

# Recall what happened recently across ALL channels
recent = mem.recall_recent(hours=6.0, limit=5)

# Exclude current channel (already covered by semantic recall)
recent = mem.recall_recent(hours=6.0, limit=3, exclude_channel="discord:123")

Configure in your OpenClaw plugin settings:

  • recencyEnabled: toggle cross-channel injection (default: true)
  • recencyWindow: how far back to look, in hours (default: 6)
  • recencyLimit: max recent memories to inject per turn (default: 3)

🔑 Key Features

11-Layer Search Engine

Every query runs through a full pipeline:

  1. BM25+ TF-IDF: baseline relevance with delta floor
  2. Exact Phrase Bonus: verbatim matches score 1.5×
  3. Field Boosting: tags 1.2×, category 1.3×, source 1.1×
  4. Rarity & Proper Noun Boost: rare terms up to 2×, proper nouns 1.5×
  5. Positional Salience: intro/conclusion windows 1.3×
  6. Semantic Expansion: PPMI co-occurrence query widening
  7. Intent Reranker: temporal, entity, howto detection
  8. Qualifier & Negation: "failed" ≠ "successful"
  9. Clustering Boost: coherent result groups score higher
  10. Embedding Reranker: local MiniLM embeddings (no API needed)
  11. Pseudo-Relevance Feedback: top results refine the query
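
To make the layering concrete, here is a minimal sketch of how the first few layers might compose on top of the BM25+ base score. The multipliers are the ones listed above; the function itself is hypothetical, not the package's internal API:

def illustrative_score(base_bm25: float, *, exact_phrase: bool,
                       tag_hit: bool, category_hit: bool) -> float:
    score = base_bm25            # Layer 1: BM25+ baseline
    if exact_phrase:
        score *= 1.5             # Layer 2: verbatim phrase bonus
    if tag_hit:
        score *= 1.2             # Layer 3: field boost (tags)
    if category_hit:
        score *= 1.3             # Layer 3: field boost (category)
    return score

illustrative_score(2.0, exact_phrase=True, tag_hit=True, category_hit=False)  # 3.6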

Memory Types

Type         Decay Rate    Importance    Use Case
episodic     Normal        1×            General events
semantic     Normal        1×            Facts, decisions; crosses sessions
fact         Normal        High recall   Verified knowledge
mistake      10× slower    2×            Never forget failures
preference   3× slower     1×            User/agent preferences
procedure    3× slower     1×            How-to knowledge
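
Each type maps onto an ingest helper documented in the Core API below. A short sketch (the severity value is an assumed example; see ingest_mistake in the API listing):

mem.ingest("Sprint retro moved to Tuesday", source="calendar", memory_type="episodic")
mem.ingest_mistake(
    what_happened="Deployed without running migrations",
    correction="Always run migrations before deploying",
    root_cause="Missing CI gate",
    severity="high",  # assumed example value
)
mem.ingest_preference("User prefers concise answers", source="chat")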

LLM Enrichment

Pass an enricher callable to boost recall quality:

def my_enricher(content: str) -> dict:
    # Call any LLM; return tags, summary, keywords, and search_queries
    return {"tags": [...], "summary": "...", "keywords": [...], "search_queries": [...]}

mem = MemorySystem(workspace="./memory", agent_name="my-agent", enricher=my_enricher)

Enriched fields get boosted weights: search_queries 3×, enriched_summary 2×, search_keywords 2×.
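
For reference, a minimal stand-in enricher that produces the expected shape without calling an LLM; the naive keyword split is a placeholder for a real model call:

def naive_enricher(content: str) -> dict:
    # Crude keyword extraction in place of an LLM call
    words = [w.strip(".,!?").lower() for w in content.split()]
    keywords = sorted({w for w in words if len(w) > 4})
    return {
        "tags": keywords[:3],
        "summary": content[:120],
        "keywords": keywords,
        "search_queries": [" ".join(keywords[:4])] if keywords else [],
    }

mem = MemorySystem(workspace="./memory", agent_name="my-agent", enricher=naive_enricher)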

Context Packets

Cold-spawn solution for sub-agents:

packet = mem.build_context_packet(
    task="Deploy the auth service",
    max_tokens=3000,
    include_mistakes=True
)
markdown = packet.render()  # Inject into sub-agent system prompt

Graph Intelligence

Automatic entity extraction and knowledge graph:

path = mem.entity_path("payment-service", "database", max_hops=3)
triples = mem.graph_search(subject="PostgreSQL", relation="used_by")
entity = mem.get_entity("PostgreSQL")

Tiered Storage

Tier   Age         Behavior
Hot    0–3 days    Always loaded
Warm   3–14 days   Loaded on-demand
Cold   14+ days    Requires include_cold=True
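
Cold entries stay out of results unless you opt in per query with the documented include_cold flag:

# Pull cold-tier (14+ days old) memories into this one search
results = mem.search("incident postmortem", include_cold=True)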

Input Gating

P0–P3 priority classification drops noise before it enters the store:

mem.ingest_with_gating("ok thanks", source="chat")  # → dropped (P3)
mem.ingest_with_gating("Production outage: auth down", source="incident")  # → stored (P0)

Shared / Team Memory

from parsica_memory import AgentRole

pool = mem.enable_shared_pool(
    pool_dir="./shared",
    pool_name="project-alpha",
    agent_id="worker-1",
    role=AgentRole.WRITER
)
mem.shared_write("Research complete: competitor uses GraphQL", namespace="research")
results = mem.shared_search("competitor API", namespace="research")

MCP Server

python -m parsica_memory serve --workspace ./memory --agent-name my-agent

Works with Claude Desktop and any MCP-compatible client.


🖥️ CLI

# Initialize a workspace
python -m parsica_memory init --workspace ./memory --agent-name my-agent

# Check status
python -m parsica_memory status --workspace ./memory

# Rebuild knowledge graph
python -m parsica_memory rebuild-graph --workspace ./memory

# Start MCP server
python -m parsica_memory serve --workspace ./memory --agent-name my-agent

🔧 Core API

from parsica_memory import MemorySystem

mem = MemorySystem(
    workspace="./memory",          # Required
    agent_name="my-agent",         # Required; scopes the store
    half_life=7.0,                 # Decay half-life in days
    enricher=None,                 # LLM enrichment callable
    use_sharding=True,             # On-disk shard routing for scale; lifecycle split/merge tooling is still evolving
    tiered_storage=True,           # Hot/warm/cold tiers
    graph_intelligence=True,       # Entity extraction + graph
    quality_routing=True,          # Follow-up pattern detection
    semantic_expansion=True,       # PPMI query expansion
)

# Lifecycle
mem.load()                         # Load from disk → entry count
mem.save()                         # Save to disk → path
mem.flush()                        # WAL → shards
mem.close()                        # Flush + release

# Ingestion
mem.ingest(content, source=..., session_id=..., channel_id=..., memory_type=...)
mem.ingest_fact(content, source=...)
mem.ingest_mistake(what_happened=..., correction=..., root_cause=..., severity=...)
mem.ingest_preference(content, source=...)
mem.ingest_procedure(content, source=...)
mem.ingest_file(path, category=...)
mem.ingest_directory(dir_path, category=..., pattern="*.md")
mem.ingest_url(url, depth=2, incremental=True)
mem.ingest_data_file(path, format="auto")
mem.ingest_with_gating(content, source=..., context=...)

# Search
mem.search(query, limit=10, session_id=..., cross_session_recall="semantic",
           tags=..., memory_type=..., explain=True, include_cold=False)
mem.search_with_context(query, cooccurrence_boost=True)
mem.recent(limit=20)
mem.on_date("2024-03-15")
mem.between("2024-03-01", "2024-03-31")

# Graph
mem.graph_search(subject=..., relation=..., obj=...)
mem.entity_path(source, target, max_hops=3)
mem.get_entity(canonical)
mem.get_graph_stats()
mem.rebuild_graph()

# Context Packets
mem.build_context_packet(task=..., max_tokens=3000, include_mistakes=True)
mem.build_context_packet_multi(task=..., queries=[...], max_tokens=4000)

# Shared Pool
mem.enable_shared_pool(pool_dir=..., pool_name=..., agent_id=..., role=AgentRole.WRITER)
mem.shared_write(content, namespace=...)
mem.shared_search(query, namespace=...)

# Enrichment
mem.re_enrich(batch_size=50)
mem.set_embedding_fn(fn)

# Maintenance
mem.compact()
mem.consolidate()
mem.compress_old(days=60)
mem.reindex()
mem.forget(topic=..., before_date=...)
mem.delete_source(source_url)
mem.mark_used(memory_ids=[...])
mem.boost_relevance(memory_id, multiplier=1.5)

# Stats & Health
mem.get_stats()  # or mem.stats()
mem.get_health()
mem.get_hot_entries(top_n=10)

# Export / Import
mem.export(output_path, include_metadata=True)
mem.import_from(input_path, merge=True)
mem.validate_data()
mem.migrate_to_v4()
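
A note on decay: half_life is expressed in days. Assuming standard half-life weighting (an interpretation of the parameter, not necessarily the package's exact formula):

def decay_weight(age_days: float, half_life: float = 7.0) -> float:
    # Relevance weight halves every half_life days
    return 0.5 ** (age_days / half_life)

decay_weight(7.0)   # 0.5: a week-old memory carries half weight
decay_weight(14.0)  # 0.25
# A mistake memory decaying 10x slower behaves like half_life=70.0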

🗺️ Feature Matrix

Feature                                                               Status   Since
Core ingestion & search                                               ✅       v1.0
Memory types (episodic/fact/mistake/procedure/preference/semantic)    ✅       v1.0
Temporal decay                                                        ✅       v1.0
Context packets                                                       ✅       v1.1
Export / Import                                                       ✅       v4.2
Local filesystem backend                                              ✅       v1.0
Experimental GCS backend scaffold (stub only, not production-ready)   ⚠️       v4.2
LLM enrichment hooks                                                  ✅       v4.6.5
Tiered storage (hot/warm/cold)                                        ✅       v4.7
Web & data file ingestion                                             ✅       v4.7
Graph intelligence (entity/relationship)                              ✅       v4.8/v4.9
Shared / team memory pools                                            ✅       v4.8
11-layer search architecture                                          ✅       v4.x
Co-occurrence / PPMI semantic tier                                    ✅       v4.x
Input gating (P0–P3 priority)                                         ✅       v4.x
Hybrid BM25 + semantic embedding search                               ✅       v4.x
MCP server                                                            ✅       v4.9
Auto memory type classification                                       ✅       v5.1
Session/channel provenance                                            ✅       v5.1
Cross-session memory recall                                           ✅       v5.2
doc2query (search query generation)                                   ✅       v5.0.2
Recovery system                                                       ✅       v3.3
CLI tooling                                                           ✅       v4.x

๐Ÿ—๏ธ Architecture

parsica-memory/
├── Core: MemorySystem, MemoryEntry, WAL
├── Storage: ShardManager, TierManager, experimental GCS scaffold
├── Search: 11-layer BM25+ pipeline
├── Intelligence: EntityExtractor, MemoryGraph, LLM Enricher
├── Multi-Agent: SharedMemoryPool, AgentRoles
├── Context: ContextPacketBuilder
└── Server: MCP server, CLI

Part of antaris-suite

parsica-memory is the core package of the antaris-suite ecosystem:

  • parsica-memory: persistent memory (this package)
  • antaris-guard: input validation & safety
  • antaris-context: context management
  • antaris-router: intelligent model routing
  • antaris-pipeline: orchestration pipeline
  • antaris-contracts: shared type contracts



📄 License

Apache 2.0


Sharding: Current Capabilities and Limits

Parsica Memory supports on-disk sharding for scale. Here is what sharding currently does and does not do:

What sharding does now

  • Routes memories to date/topic-based shard files on disk
  • Loads and searches across shards transparently
  • Preserves enrichment fields (search_keywords, enriched_summary, search_queries) when rewriting shards
  • Indexes shard metadata for fast shard selection during search
  • Supports time-window weekly JSONL sharding as an alternative storage mode

What sharding does NOT yet do

  • Shard merge: combining small/fragmented shards into larger ones
  • Shard split: breaking oversized shards into smaller parts
  • Automatic compaction: background lifecycle maintenance
  • compact_shards() currently reports health stats only; it does not perform merge or split operations

Roadmap

Production-grade shard lifecycle management (merge, split, compaction) is planned as part of the segmented backend in v3.1+. The segmented backend will introduce hot mutable segments, immutable sealed segments, and background merge/compaction, superseding the current shard lifecycle approach with a fundamentally better architecture.

Shard health

Use compact_shards(dry_run=True) or the status CLI command (python -m parsica_memory status) to inspect current shard health, including counts, sizes, and fragmentation indicators.
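
The same inspection from Python (the exact return shape of compact_shards() is not documented here, so treat the print as a sketch):

report = mem.compact_shards(dry_run=True)  # read-only health report
print(report)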

Built by Antaris Analytics LLC for production AI agent deployments.
