Concinno's progressive-disclosure memory store — three-layer index/summary/archive with ZIQ noise-filter outcome wiring, SQLite FTS5 index, lifecycle hooks, and a local viewer.

Project description

concinno-skills-memory

Progressive-disclosure memory store for LLM agents — three layers (index / summary / archive) so the agent can scan a cheap top layer first and only pay for deeper layers when a keyword hit justifies the cost.

Part of the Concinno ecosystem. Standalone-installable; no runtime dependency on the main concinno package.

Why this exists

Loading the entire conversation history into every prompt is wasteful. The progressive-disclosure pattern keeps the per-turn token cost roughly constant by exposing memories in tiers:

Layer  Content                          Cost per entry  When to read
1      Topic + headline (≤100 chars)    Lowest          Every turn (auto-loaded)
2      Per-topic summary (≤500 chars)   Medium          When a Layer-1 hit looks relevant
3      Full archive (no cap)            Highest         Only when Layer 2 is insufficient

The agent stays at Layer 1 by default and escalates entry-by-entry through expand_memory().
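The tiering itself is simple: all three layers are views over one entry, written together and read selectively. A minimal self-contained sketch of the pattern (the class and field names here are illustrative, not the package's actual internals):

```python
from dataclasses import dataclass
from itertools import count


@dataclass
class Entry:
    entry_id: int
    topic: str
    content: str        # Layer 3: full archive, no cap
    headline: str = ""  # Layer 1: <=100 chars
    summary: str = ""   # Layer 2: <=500 chars

    def __post_init__(self):
        # All three layers are populated from the same content at write time.
        self.headline = self.content[:100]
        self.summary = self.content[:500]


class TinyStore:
    def __init__(self):
        self._entries: list[Entry] = []
        self._ids = count()

    def add_memory(self, topic: str, content: str) -> int:
        eid = next(self._ids)
        self._entries.append(Entry(eid, topic, content))
        return eid

    def query_memory(self, keyword: str) -> list[Entry]:
        # Layer-1 scan: only topic + headline are inspected, never the archive.
        kw = keyword.lower()
        return [e for e in self._entries
                if kw in e.topic.lower() or kw in e.headline.lower()]

    def expand_memory(self, entry_id: int, layer: int) -> str:
        # Escalate a single entry to Layer 2 or Layer 3.
        e = next(x for x in self._entries if x.entry_id == entry_id)
        return e.summary if layer == 2 else e.content
```

The point of the shape is that `query_memory` touches only the cheap fields, so its cost stays flat no matter how large each archive grows.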

Difference from claude-mem

This package is a clean rewrite of the public progressive-disclosure pattern that claude-mem popularised. We:

  • read only the public README / docs to learn the pattern;
  • copied no code (claude-mem is AGPL with personal copyright);
  • shipped under AGPL-3.0-or-later (the same family Concinno uses).

If you want the full claude-mem MCP server experience, install claude-mem directly. If you want a tiny Python library you can embed in any agent loop and wire into Concinno's ZIQ outcome bus, install this.

Install

pip install concinno-skills-memory

Usage

from concinno_skills_memory import ProgressiveMemoryStore

store = ProgressiveMemoryStore()

# Write — all three layers populated atomically.
eid = store.add_memory(
    topic="releases",
    content="v0.1.0 shipped on 2026-04-28; first public progressive "
            "disclosure release for the Concinno ecosystem.",
    tags=["release", "0.1.0"],
)

# Step 1 — cheap scan.
hits = store.query_memory("release", layer=1)
for hit in hits:
    print(hit.render())   # "releases: v0.1.0 shipped on..."

# Step 2 — escalate the one entry that looks relevant.
summary = store.expand_memory(hits[0].entry_id, layer=2)
print(summary.summary)

# Step 3 — only if L2 is still not enough.
archive = store.expand_memory(hits[0].entry_id, layer=3)
print(archive.content)

ZIQ noise-filter outcome wiring

This package exports a typed callback contract that Concinno's ziq_outcome_bus uses to learn which layer to escalate to per query class. We do not import concinno here; the dependency runs one way: Concinno imports our exported names and registers them.

from concinno_skills_memory import (
    NOISE_FILTER_OUTCOME_NAME,
    NoiseFilterCallback,
    NoiseFilterOutcome,
    reference_noise_filter,
)
  • NoiseFilterCallback — runtime-checkable Protocol; signature is (query: str, fetched_layer: int, fetched_relevance: float) -> NoiseFilterOutcome.
  • NoiseFilterOutcome — float subclass clamped to [0.0, 1.0].
  • reference_noise_filter — heuristic implementation that penalises over-fetching (Layer 3 carries a 30 % cost penalty, Layer 2 carries 15 %, Layer 1 carries none).
  • NOISE_FILTER_OUTCOME_NAME — stable string the bus keys on ("memory.noise_filter").

Production callers can swap reference_noise_filter for a learned scorer or a Haiku judge. The contract is just the callable shape.
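To make the callable shape concrete, here is a stand-alone sketch that mirrors the contract as described above (a clamped-float outcome plus the stated 0% / 15% / 30% per-layer penalties). It deliberately does not import the package; the real `NoiseFilterOutcome` and `reference_noise_filter` may differ in detail:

```python
class NoiseFilterOutcome(float):
    """Float clamped to [0.0, 1.0], matching the documented outcome type."""

    def __new__(cls, value: float):
        return super().__new__(cls, min(1.0, max(0.0, value)))


# Per-layer cost penalties from the reference heuristic:
# Layer 1 none, Layer 2 15%, Layer 3 30%.
_LAYER_PENALTY = {1: 0.0, 2: 0.15, 3: 0.30}


def my_noise_filter(query: str, fetched_layer: int,
                    fetched_relevance: float) -> NoiseFilterOutcome:
    # Reward relevance, penalise over-fetching deeper layers.
    penalty = _LAYER_PENALTY.get(fetched_layer, 0.30)
    return NoiseFilterOutcome(fetched_relevance - penalty)
```

Any callable with this `(query, fetched_layer, fetched_relevance) -> NoiseFilterOutcome` signature satisfies the Protocol, which is what makes swapping in a learned scorer trivial.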

Storage backend

The store is in-memory only by design. Persistence (file, SQLite, vector store, S3) is the caller's choice. A 200-line in-memory core is easier to embed inside any agent loop than a one-size-fits-all backend.
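One way a caller can add persistence without touching store internals is to record writes and replay them at startup, using only the documented `add_memory` write path. A hypothetical wrapper along those lines (the class below is an assumption for illustration, not part of this package):

```python
import json
from pathlib import Path


class JsonReplayPersistence:
    """Record add_memory calls to a JSONL file and replay them on startup.

    Works with any store exposing add_memory(topic=..., content=..., tags=...);
    only the documented write path is assumed, so all three layers are
    rebuilt by the store itself during replay.
    """

    def __init__(self, store, path: str):
        self.store = store
        self.path = Path(path)
        if self.path.exists():
            for line in self.path.read_text().splitlines():
                self.store.add_memory(**json.loads(line))

    def add_memory(self, topic: str, content: str, tags=None):
        rec = {"topic": topic, "content": content, "tags": tags or []}
        with self.path.open("a") as f:
            f.write(json.dumps(rec) + "\n")
        return self.store.add_memory(**rec)
```

The same record-and-replay idea transfers directly to SQLite, a vector store, or S3; the store never needs to know it is being persisted.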

License

AGPL-3.0-or-later. See LICENSE.

Status

Alpha (0.1.0). The public API (three layers, three methods, the outcome contract) is stable for the Concinno 4.4.0 wire. Internal implementation may change before 1.0.

Download files

Download the file for your platform.

Source Distribution

concinno_skills_memory-0.2.0.tar.gz (30.5 kB)

Uploaded Source

Built Distribution


concinno_skills_memory-0.2.0-py3-none-any.whl (25.1 kB)

Uploaded Python 3

File details

Details for the file concinno_skills_memory-0.2.0.tar.gz.

File metadata

  • Download URL: concinno_skills_memory-0.2.0.tar.gz
  • Upload date:
  • Size: 30.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.9

File hashes

Hashes for concinno_skills_memory-0.2.0.tar.gz
Algorithm    Hash digest
SHA256       1d9dcf3fc4efa1e76f2d58b4feee57546843fdb2d4ef510e406a28a1a0abc671
MD5          37408c4cd22c6464b0bf648dca6efe77
BLAKE2b-256  987f5aa90166461a1dde395c2fc080c13d6b6fc0cf1fdecd98501b98eeac2f2c


File details

Details for the file concinno_skills_memory-0.2.0-py3-none-any.whl.

File hashes

Hashes for concinno_skills_memory-0.2.0-py3-none-any.whl
Algorithm    Hash digest
SHA256       015a82e2d8a3c5b957a46fbdedced755071bacb40cf69400ed5550e404774b7c
MD5          5ae9c8fe3b5edf97b047573758abf1ae
BLAKE2b-256  0c8cba6c3b4b1180bded76b4228342f463710568fb843a6de572d4b3a8ee5f61

