Concinno's progressive-disclosure memory store — three-layer index/summary/archive with ZIQ noise-filter outcome wiring.
Project description
concinno-skills-memory
Progressive-disclosure memory store for LLM agents — three layers (index / summary / archive) so the agent can scan a cheap top layer first and only pay for deeper layers when a keyword hit justifies the cost.
Part of the Concinno ecosystem. Standalone-installable; no runtime dependency on the main `concinno` package.
Why this exists
Loading the entire conversation history into every prompt is wasteful. The progressive-disclosure pattern keeps the per-turn token cost roughly constant by exposing memories in tiers:
| Layer | Content | Cost per entry | When to read |
|---|---|---|---|
| 1 | Topic + headline (≤100 char) | Lowest | Every turn (auto-loaded) |
| 2 | Per-topic summary (≤500 char) | Medium | When a Layer-1 hit looks relevant |
| 3 | Full archive (no cap) | Highest | Only when L2 is insufficient |
The agent stays at Layer 1 by default and escalates entry-by-entry through `expand_memory()`.
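The tier caps in the table above can be illustrated with a small helper. This is a hypothetical sketch, not the package's internals: `TieredEntry` and `make_entry` are illustrative names, and truncation stands in for whatever summarisation the store actually performs.

```python
from dataclasses import dataclass


@dataclass
class TieredEntry:
    """Illustrative record with one field per disclosure layer."""
    topic: str
    headline: str  # Layer 1: topic + headline, <= 100 chars
    summary: str   # Layer 2: per-topic summary, <= 500 chars
    content: str   # Layer 3: full archive, no cap


def make_entry(topic: str, content: str) -> TieredEntry:
    """Derive the cheap layers from one write. Plain truncation here;
    a real store could use a learned summariser instead."""
    return TieredEntry(
        topic=topic,
        headline=content[:100],
        summary=content[:500],
        content=content,
    )
```

The point of the shape is that one write pays the summarisation cost once, and every subsequent turn reads only the layer it needs.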
Difference from claude-mem
This package is a clean rewrite of the public progressive-disclosure
pattern that claude-mem popularised. We:
- read only the public README / docs to learn the pattern;
- copied no code (`claude-mem` is AGPL with personal copyright);
- shipped under AGPL-3.0-or-later (the same family Concinno uses).
If you want the full claude-mem MCP server experience, install claude-mem directly. If you want a tiny Python library you can embed in any agent loop and wire into Concinno's ZIQ outcome bus, install this.
Install
```
pip install concinno-skills-memory
```
Usage
```python
from concinno_skills_memory import ProgressiveMemoryStore

store = ProgressiveMemoryStore()

# Write — all three layers populated atomically.
eid = store.add_memory(
    topic="releases",
    content="v0.1.0 shipped on 2026-04-28; first public progressive "
            "disclosure release for the Concinno ecosystem.",
    tags=["release", "0.1.0"],
)

# Step 1 — cheap scan.
hits = store.query_memory("release", layer=1)
for hit in hits:
    print(hit.render())  # "releases: v0.1.0 shipped on..."

# Step 2 — escalate the one entry that looks relevant.
summary = store.expand_memory(hits[0].entry_id, layer=2)
print(summary.summary)

# Step 3 — only if L2 is still not enough.
archive = store.expand_memory(hits[0].entry_id, layer=3)
print(archive.content)
```
ZIQ noise-filter outcome wiring
This package exports a typed callback contract that the Concinno
ziq_outcome_bus uses to learn which layer to escalate to per query
class. We do not import concinno here — the wire is one-way:
Concinno reads our exported names and registers them.
```python
from concinno_skills_memory import (
    NOISE_FILTER_OUTCOME_NAME,
    NoiseFilterCallback,
    NoiseFilterOutcome,
    reference_noise_filter,
)
```
- `NoiseFilterCallback` — runtime-checkable Protocol; signature is `(query: str, fetched_layer: int, fetched_relevance: float) -> NoiseFilterOutcome`.
- `NoiseFilterOutcome` — float subclass clamped to `[0.0, 1.0]`.
- `reference_noise_filter` — heuristic implementation that penalises over-fetching (Layer 3 carries a 30% cost penalty, Layer 2 carries 15%, Layer 1 carries none).
- `NOISE_FILTER_OUTCOME_NAME` — stable string the bus keys on (`"memory.noise_filter"`).
Production callers can swap reference_noise_filter for a learned
scorer or a Haiku judge. The contract is just the callable shape.
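A custom scorer only has to match the callable shape. Here is a hedged sketch of one that follows the documented contract: the `0%/15%/30%` layer penalties and the `[0.0, 1.0]` clamp come from the description above, but the shipped `reference_noise_filter` may weigh things differently.

```python
# Assumed layer penalties, taken from the prose above; the real
# reference_noise_filter's internals are not shown here.
LAYER_PENALTY = {1: 0.0, 2: 0.15, 3: 0.30}


def my_noise_filter(query: str, fetched_layer: int,
                    fetched_relevance: float) -> float:
    """Score in [0, 1]: relevance minus an over-fetching penalty.

    Returns a plain float; the package's NoiseFilterOutcome is a
    float subclass, so any float-valued callable satisfies the shape.
    """
    raw = fetched_relevance - LAYER_PENALTY.get(fetched_layer, 0.0)
    return max(0.0, min(1.0, raw))  # clamp, as NoiseFilterOutcome does


# A highly relevant Layer-3 fetch still pays the cost penalty,
# while the same hit served from Layer 1 scores full relevance.
print(my_noise_filter("release", 3, 0.9))
print(my_noise_filter("release", 1, 0.9))
```

Because `NoiseFilterCallback` is a runtime-checkable Protocol, any callable with this signature should pass an `isinstance` check without subclassing anything.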
Storage backend
The store is in-memory only by design. Persistence (file, SQLite, vector store, S3) is the caller's choice. A 200-line in-memory core is easier to embed inside any agent loop than a one-size-fits-all backend.
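One minimal way a caller could add persistence is a write-ahead log: mirror every write to a JSON-lines file and replay it into a fresh store at startup. This is a hypothetical pattern, not package API; `log_write` and `replay` are illustrative names, and `replay` takes the store's `add_memory` method (or any compatible callable) as an argument.

```python
import json
from pathlib import Path


def log_write(path: Path, topic: str, content: str, tags: list[str]) -> None:
    """Append one write as a JSON-lines record."""
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps({"topic": topic, "content": content,
                            "tags": tags}) + "\n")


def replay(path: Path, add_memory) -> int:
    """Feed each logged record back into the store; returns the count."""
    if not path.exists():
        return 0
    count = 0
    for line in path.read_text(encoding="utf-8").splitlines():
        rec = json.loads(line)
        add_memory(topic=rec["topic"], content=rec["content"],
                   tags=rec["tags"])
        count += 1
    return count
```

Swapping the JSON-lines file for SQLite or S3 changes only these two functions; the store itself never needs to know.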
License
AGPL-3.0-or-later. See LICENSE.
Status
Alpha (0.1.0). The public API (three layers, three methods, the outcome contract) is stable for the Concinno 4.4.0 wire. Internal implementation may change before 1.0.
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file concinno_skills_memory-0.1.0.tar.gz.
File metadata
- Download URL: concinno_skills_memory-0.1.0.tar.gz
- Upload date:
- Size: 15.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `a1fd979564567f653c5d795049fd80b4568a7d3c70d0d8812e9c9edbc71d446b` |
| MD5 | `7e3f1c1b1152086cc01ceea497130121` |
| BLAKE2b-256 | `4e208ce9075a42c0810b57bc45b0a7b20c04fb3eff2d6ed0419232bbc556b1de` |
File details
Details for the file concinno_skills_memory-0.1.0-py3-none-any.whl.
File metadata
- Download URL: concinno_skills_memory-0.1.0-py3-none-any.whl
- Upload date:
- Size: 11.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `2131e7edfe4e1ca0bca2fd4257ed162ac5257741ccee9dc9e2b3913971658277` |
| MD5 | `eb6fbbcc9236fa6c427f40ffcd620de8` |
| BLAKE2b-256 | `96158e9a65bb18eae4ed5230129c8517984054b36348c97c53b4a932cd9fef03` |