Neural Ledger
Memory is not just storage. It is judgement about what deserves to become memory.
Neural Ledger is a lightweight memory engine for software and agents. It helps systems remember useful things, recall what matters, and learn from feedback.
from neural_ledger import Memory
mem = Memory()
mem.remember("GitHub API 401 — the access token expired", kind="observation")
mem.remember("Fix: regenerate the expired GitHub personal access token", kind="procedure")
mem.remember("Check rate limit headers before retrying requests", kind="note")
hits = mem.recall("How do I fix a GitHub API 401 error?", with_why=True, limit=3)
for hit in hits:
    print(f"[{hit.kind}] {hit.content}")
    print(f"  score={hit.score:.4f}  why: {hit.why}")
# Tell the engine which memory actually helped
mem.feedback([hits[0]], helped=True)
mem.feedback(hits[1:], helped=False)
# Repeated feedback accumulates. Over time, useful records rise;
# misleading ones score lower. See docs/examples/failure-memory.md
# for a controlled benchmark demonstrating the learning effect.
Three verbs. Everything else stays behind the curtain.
Why Neural Ledger
Most memory systems stop at retrieval. They find candidates by similarity and return them. They do not get better.
Neural Ledger is built around a different idea: feedback is a first-class signal, not a logging call. Every feedback() call updates a per-record usefulness prior and a graph of co-retrieval links. Those signals directly shape future rankings.
The result is a memory engine that improves with use — one that can learn, over repeated interactions, which memories are worth surfacing and which are noise.
Three verbs cover everything:
- remember(...) — store experience
- recall(...) — retrieve the most relevant context
- feedback(...) — teach the system what actually helped
Everything else stays behind the curtain.
Installation
pip install neural-ledger
Quickstart
1. Create a memory
from neural_ledger import Memory
mem = Memory()
By default, Neural Ledger runs fully in memory. No database, API key, or graph backend is required.
2. Store experience
mem.remember("User prefers terse weekly updates")
mem.remember(
    "GitHub API failed because the token expired",
    kind="observation",
    metadata={"tool": "github", "severity": "high"},
)
3. Recall what matters
hits = mem.recall("How should I write the update?", with_why=True)
4. Teach the system what helped
mem.feedback(hits, helped=True)
Over time, Neural Ledger can use feedback to improve ranking and retrieval quality.
API
Memory
Memory(
    persist_path: str | None = None,    # None = in-memory; path = SQLite
    namespace: str = "default",
    agent_id: str | None = None,        # identity for governed shared memory
    config: MemoryConfig | None = None,
)
remember(...)
Store a new memory.
record = mem.remember(
    content: str,
    *,
    kind: str = "note",
    metadata: dict | None = None,
    source: str | None = None,
    timestamp: datetime | None = None,
    visibility: str = "local",      # 'local' or 'shared'
    provenance: str | None = None,  # run ID, tool name, etc.
)
remember_many(...)
Store multiple memories at once.
records = mem.remember_many(
    [
        "User prefers terse weekly updates",
        {"content": "Token expiry caused the API failure", "visibility": "shared"},
    ],
    default_visibility="local",
)
recall(...)
Retrieve the most relevant memories for a query.
hits = mem.recall(
    query: str,
    *,
    limit: int = 5,
    kind: str | list[str] | None = None,
    metadata_filter: dict | None = None,
    min_score: float | None = None,
    with_why: bool = False,
    scope: str = "local",  # 'local', 'shared', or 'merged'
)
feedback(...)
Tell Neural Ledger whether retrieved memories helped.
mem.feedback(
    hits_or_ids,
    *,
    helped: bool | float,
    reason: str | None = None,
    metadata: dict | None = None,
)
helped accepts either:
- True/False for simple usage
- a float in [0, 1] for finer control
Return types
MemoryRecord
@dataclass(slots=True)
class MemoryRecord:
    id: str
    content: str
    kind: str
    metadata: dict
    source: str | None
    timestamp: datetime
    agent_id: str | None = None
    provenance: str | None = None
    visibility: str = "local"
MemoryHit
MemoryHit
@dataclass(slots=True)
class MemoryHit:
    id: str
    content: str
    score: float
    kind: str
    metadata: dict
    source: str | None
    timestamp: datetime
    why: str | None = None
    agent_id: str | None = None
    provenance: str | None = None
Design principles
Easy to start. Deep to grow.
The first successful use should take under five minutes. Advanced machinery can come later.
Learn from usefulness, not just similarity.
Similarity finds candidates. Feedback teaches the system what actually helps.
Retrieve context, not clutter.
Neural Ledger should return the smallest useful set, not a heap of vaguely related notes.
Keep the front door tiny.
The public API should stay simple even if the engine becomes sophisticated.
Architecture
The public API is three verbs. Behind them is a layered retrieval and learning engine.
┌──────────────────────────────────────────────────────┐
│                 Memory (public API)                  │
│          remember() · recall() · feedback()          │
└──────────────────────┬───────────────────────────────┘
                       │
            ┌──────────▼──────────┐
            │       Runtime       │
            │  namespace · policy │
            └────┬──────────┬─────┘
                 │          │
    ┌────────────▼──┐  ┌────▼──────────────────────┐
    │  RecordStore  │  │   Retrieval pipeline      │
    │ (dict/SQLite) │  │   Semantic (optional)     │
    └───────────────┘  │   → Keyword fallback      │
                       │   → Path expansion (BFS)  │
    ┌───────────────┐  │   → Rank by seed · link · │
    │   LinkStore   │  │   freshness · usefulness  │
    │  (nx/SQLite)  │  └───────────────────────────┘
    └───────────────┘
                       ┌────────────────────────────┐
                       │      Learning engine       │
                       │      usefulness prior      │
                       │   link weight + evidence   │
                       │    uncertainty · decay     │
                       └────────────────────────────┘
Key properties:
- Semantic retrieval with automatic keyword fallback when embeddings are unavailable
- Graph path expansion: retrieval follows co-retrieval links, not just nearest neighbours
- Per-record usefulness prior: feedback directly scales future retrieval scores
- Evidence history on links: conflicting signals raise uncertainty rather than overwriting
- Time-based decay: recent interactions are fresher; activation fades without reinforcement
- Full SQLite persistence: records, link weights, usefulness, and metrics survive restarts
- Governed shared memory: multiple agents share a ledger with explicit visibility and provenance
These are engine-room concerns. The public API stays at three verbs.
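To make the "usefulness prior" idea concrete, here is a deliberately simplified, self-contained sketch of how feedback can reshape ranking over time. It is an illustration of the general technique only, not Neural Ledger's actual scoring code; the blend weight and learning rate are invented for the example:

```python
# Illustrative sketch of a feedback-learned usefulness prior.
# NOT Neural Ledger's actual implementation.
from dataclasses import dataclass

@dataclass
class Record:
    content: str
    usefulness: float = 0.5  # prior in [0, 1], starts neutral

def score(similarity: float, record: Record, weight: float = 0.5) -> float:
    """Blend raw similarity with the learned usefulness prior."""
    return similarity * (1.0 - weight) + record.usefulness * weight

def apply_feedback(record: Record, helped: float, rate: float = 0.2) -> None:
    """Nudge the prior toward the feedback signal (helped in [0, 1])."""
    record.usefulness += rate * (helped - record.usefulness)

good = Record("Fix: regenerate the expired token")
noise = Record("Check rate limit headers")

# Identical similarity at first; repeated feedback separates them.
for _ in range(5):
    apply_feedback(good, helped=1.0)
    apply_feedback(noise, helped=0.0)

print(round(score(0.8, good), 3))   # 0.818
print(round(score(0.8, noise), 3))  # 0.482
```

Even in this toy version, the record that keeps helping outranks the one that keeps misleading, despite equal similarity, which is the learning effect the engine is built around.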
What Neural Ledger is not
Neural Ledger is not:
- a thin vector-store wrapper,
- a graph database pitch deck,
- an ontology-first framework,
- an LLM-everywhere abstraction layer.
The point is not to make memory more complicated. The point is to make memory more useful.
Persistence
Memory survives process restarts when you pass a persist_path:
# First run — store something.
with Memory(persist_path="memory.db") as mem:
    mem.remember("GitHub 401 caused by expired token", kind="observation")

# Later run — it is still there.
with Memory(persist_path="memory.db") as mem:
    hits = mem.recall("GitHub 401")
    print(hits[0].content)  # "GitHub 401 caused by expired token"
Records, learned usefulness, link weights, and engine metrics all survive the restart.
Shared memory across agents
Multiple agents can share a governed memory ledger. Records default to local; sharing is always explicit.
# Agent A stores a shared finding.
with Memory(persist_path="team.db", agent_id="agent-a") as agent_a:
    agent_a.remember(
        "GitHub API 401 caused by expired token — refresh resolves it",
        visibility="shared",
        provenance="run-042",
    )

# Agent B recalls it — with full provenance.
with Memory(persist_path="team.db", agent_id="agent-b") as agent_b:
    hits = agent_b.recall("GitHub 401 fix", scope="merged")
    print(hits[0].content)     # agent-a's finding
    print(hits[0].agent_id)    # "agent-a"
    print(hits[0].provenance)  # "run-042"
    agent_b.feedback(hits, helped=True)  # reinforces the shared record
See docs/examples/shared-memory.md and examples/shared_memory_two_agents.py for the full scenario.
Current scope
Neural Ledger is intentionally small.
Included:
- in-memory and SQLite-persistent usage
- records, retrieval, and feedback
- feedback-learned usefulness and link weights
- governed shared memory with agent_id, visibility, and provenance
- lightweight configuration
Not yet included:
- per-agent evidence attribution (Phase 4)
- explicit forgetting API
- heavy backend integrations
- full proof-chain objects
- broad framework adapters
Roadmap
Completed
- Phase 1 — Tiny public API, in-memory backend, feedback-aware retrieval
- Phase 2 — Canonical proof: feedback improves recall over keyword and semantic baselines
- Phase 3 — SQLite persistence: records, usefulness, link weights, and metrics survive restarts
- Phase 3B — Governed shared memory: multiple agents on one ledger with explicit visibility, provenance-preserving recall, and accumulated feedback
Upcoming
- Phase 4 — Evidence and confidence strengthening: per-agent attribution, explainable conflict handling, trust-weighted ranking
- Phase 5 — Public proof pack and release: polished README, benchmark summary, terminal demo
Example: personal preference memory
from neural_ledger import Memory
mem = Memory()
mem.remember("The user prefers concise answers on work topics", kind="preference")
mem.remember("The user likes deep examples when learning maths", kind="preference")
hits = mem.recall("How should I answer this question about a status update?", with_why=True)
for hit in hits:
    print(f"{hit.content} ({hit.score:.2f})")
    print(hit.why)
mem.feedback(hits, helped=True, reason="The preference was relevant")
Contributing
Neural Ledger is being built in the open.
The current focus is:
- a beautiful beginner experience,
- honest internals,
- strong evaluation,
- and a clear theory of memory as judgement.
Issues, ideas, benchmarks, and well-argued criticism are welcome.
License
MIT
One line to remember
Build the memory layer that decides what deserves to become memory.