Persistent, correctable memory for AI. The memory layer that learns from mistakes.
Free. Open source. Zero dependencies. Works with any LLM. Runs on Linux, macOS, and Windows.
They gave us MCP for free. They gave us agents for free. Now here's the missing piece — memory that actually learns — for free too.
Every AI agent forgets everything when the session ends. NeverOnce gives them a brain that persists — and more importantly, a brain that learns from corrections so the same mistake never happens twice.
## Why NeverOnce?
| Without NeverOnce | With NeverOnce |
|---|---|
| AI forgets everything each session | Memories persist forever |
| Same mistakes repeated daily | Corrections prevent repeat errors |
| No learning from feedback | Helpful memories strengthen, bad ones decay |
| Each session starts from zero | Context builds over time |
## Proven in Production

NeverOnce's correction system was battle-tested for four months before being open-sourced:
| Metric | Value |
|---|---|
| Total memories stored | 1,421 |
| Corrections | 87 |
| Running since | November 2025 |
| Most-surfaced correction | 491 times |
| Avg correction surfaced | 78 times each |
| Memory types used | 11 |
The most-surfaced correction appeared 491 times, and the AI never repeated that mistake after it was stored. That's the power of corrections over plain memory.
## Install

```shell
pip install neveronce
```

Zero dependencies. Just Python's built-in SQLite. That's it.
## Quickstart — 5 lines

```python
from neveronce import Memory

mem = Memory("my_app")
mem.store("user prefers dark mode", tags=["preference"])
mem.correct("never use imperial units", context="unit conversion")
results = mem.recall("what units should I use?")
# → Returns the correction first, always
```
## The Killer Feature: Corrections
Most memory systems just store and retrieve. NeverOnce has corrections — a special memory type that:
- Is always stored at maximum importance (10/10)
- Always surfaces first in recall results
- Never decays, even if rarely used
- Represents "I was wrong, here's the fix"
```python
# Store a correction when the AI makes a mistake
mem.correct(
    "use metric units, never imperial",
    context="unit conversion in engineering calculations",
)

# Later, before taking action, check for applicable corrections
warnings = mem.check("converting measurements to feet and inches")
# → Returns: "use metric units, never imperial"
```
This is the difference between an AI that's smart and an AI that gets smarter.
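One way to wire a pre-flight check into an agent loop is a small guard function. This is a sketch, not part of NeverOnce: `guarded_act` and `fake_check` are hypothetical names, and `fake_check` is a stub standing in for any callable with the `.check(planned_action)` shape described above.

```python
def guarded_act(action_description, act, check_fn):
    """Run a pre-flight correction check before executing an action.

    check_fn is any callable matching the .check(planned_action) shape;
    here it is stubbed for illustration.
    """
    warnings = check_fn(action_description)
    if warnings:
        # Surface applicable corrections instead of committing the action.
        print(f"Corrections apply: {warnings}")
        return None
    return act()

# Stub standing in for a real memory store's check() (an assumption):
def fake_check(planned_action):
    if "imperial" in planned_action or "feet" in planned_action:
        return ["use metric units, never imperial"]
    return []

result = guarded_act("convert measurements to feet", lambda: "converted", fake_check)
# The guard blocks the action and surfaces the correction instead.
```

The same pattern works with a real store by passing `mem.check` as `check_fn`.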
## Full API

### `Memory(name, db_dir=None, namespace="default")`

Create a memory store. Each name gets its own SQLite database at `~/.neveronce/<name>.db`.

### `.store(content, *, tags=None, context="", importance=5)`

Store a general memory. Returns the memory ID.

### `.correct(content, *, context="", tags=None)`

Store a correction. Always importance 10. Always surfaces first.

### `.recall(query, *, limit=10, min_importance=1)`

Search memories by relevance (FTS5/BM25). Corrections always float to the top.

### `.check(planned_action)`

Pre-flight check. Returns only the corrections that match the planned action. Call this before doing something to catch mistakes early.

### `.helped(memory_id, did_help)`

Feedback loop. Mark whether a surfaced memory was actually useful. Helpful memories get stronger; unhelpful ones can be decayed.

### `.decay(surfaced_threshold=5, decay_amount=1)`

Lower the importance of memories that surfaced many times but were never marked helpful. Corrections are immune to decay.

### `.forget(memory_id)`

Delete a memory.

### `.stats()`

Returns `{total, corrections, avg_importance, avg_effectiveness}`.
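The `helped()`/`decay()` feedback loop described above can be sketched over a plain SQLite table. The schema here (`importance`, `surfaced_count`, `helped_count`, `is_correction` columns) is an illustrative assumption, not NeverOnce's actual internals:

```python
import sqlite3

# Illustrative schema (an assumption, not NeverOnce's real one): each
# memory tracks how often it surfaced and whether it ever helped.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE memories (
        content TEXT,
        importance INTEGER,
        surfaced_count INTEGER DEFAULT 0,
        helped_count INTEGER DEFAULT 0,
        is_correction INTEGER DEFAULT 0
    )
""")
conn.executemany(
    "INSERT INTO memories VALUES (?, ?, ?, ?, ?)",
    [
        ("user prefers dark mode", 5, 8, 3, 0),      # surfaced and helped: keep
        ("stale note about old API", 5, 9, 0, 0),    # surfaced, never helped: decay
        ("never use imperial units", 10, 40, 0, 1),  # correction: immune
    ],
)

def decay(conn, surfaced_threshold=5, decay_amount=1):
    """Lower importance of memories surfaced often but never marked helpful.
    Corrections are skipped entirely, matching the behavior described above."""
    conn.execute(
        "UPDATE memories SET importance = MAX(importance - ?, 0) "
        "WHERE surfaced_count >= ? AND helped_count = 0 AND is_correction = 0",
        (decay_amount, surfaced_threshold),
    )

decay(conn)
rows = dict(conn.execute("SELECT content, importance FROM memories"))
# Only the never-helpful, non-correction memory loses a point of importance.
```

The key design point this illustrates: decay is driven by the gap between being surfaced and being marked helpful, and corrections are excluded by construction.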
## MCP Server
NeverOnce includes an MCP server so any MCP-compatible AI client can use it:
```shell
# Install with MCP support
pip install neveronce[mcp]

# Run the server
python -m neveronce
```
Add to your MCP config (Claude Code, Cursor, etc.):
```json
{
  "mcpServers": {
    "neveronce": {
      "command": "python",
      "args": ["-m", "neveronce"]
    }
  }
}
```
The server exposes all NeverOnce operations as MCP tools: `store`, `correct`, `recall`, `check`, `helped`, `forget`, `stats`.
## Multi-Agent Support
Namespaces let multiple agents share a memory store without stepping on each other:
```python
from neveronce import Memory

mem = Memory("team")

# Agent 1: research agent
mem.store("found 3 relevant papers on transformer memory", namespace="researcher")
mem.correct("ignore papers before 2024, methodology changed", namespace="researcher")

# Agent 2: coding agent
mem.store("user prefers async/await over callbacks", namespace="coder")
mem.correct("always use Python 3.12+ syntax", namespace="coder")

# Each agent recalls only its own context
research_context = mem.recall("relevant papers", namespace="researcher")
coding_context = mem.recall("coding style", namespace="coder")
```
One database, multiple agents, isolated context. Cross-namespace search is also possible by omitting the namespace parameter.
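The isolation pattern above can be sketched with a plain namespace column in a single SQLite table. This is an illustrative stand-in, not NeverOnce's actual schema; `recall` here is a hypothetical helper, and a `LIKE` filter stands in for real full-text search:

```python
import sqlite3

# One database, a namespace column per row (illustrative, not the real schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (namespace TEXT, content TEXT)")
conn.executemany(
    "INSERT INTO memories VALUES (?, ?)",
    [
        ("researcher", "found 3 relevant papers on transformer memory"),
        ("coder", "user prefers async/await over callbacks"),
    ],
)

def recall(conn, query, namespace=None):
    """Filter by namespace when given; search every namespace when omitted."""
    sql = "SELECT content FROM memories WHERE content LIKE ?"
    params = [f"%{query}%"]
    if namespace is not None:
        sql += " AND namespace = ?"
        params.append(namespace)
    return [row[0] for row in conn.execute(sql, params)]

researcher_hits = recall(conn, "papers", namespace="researcher")  # one agent's view
all_hits = recall(conn, "prefers")  # omitting namespace searches everything
```

Scoping every query by an optional namespace column is what makes "one database, multiple agents, isolated context" work without separate files.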
## Why FTS5 Instead of Embeddings?
Most memory systems use vector embeddings for search. NeverOnce uses SQLite FTS5 (full-text search with BM25 ranking) instead. This is a deliberate choice, not a limitation:
- Corrections are short, high-signal text. "Never use HTTP for internal services" doesn't need semantic similarity — it needs exact keyword matching. BM25 excels at this.
- Zero dependencies. Embeddings require numpy, sentence-transformers, or an API call. FTS5 is built into Python's sqlite3. Nothing to install, nothing to break.
- Speed. FTS5 queries are sub-millisecond. No model loading, no inference, no API latency.
- Deterministic. Same query, same results. No embedding model drift or version mismatches.
- Offline. Works without internet. No API keys, no cloud services.
For most correction and preference storage, keyword matching is actually more reliable than semantic search. When you store "never use tabs, always use spaces," you want the word "tabs" to trigger that correction — not a semantically similar but different concept.
If your use case needs semantic search, NeverOnce's architecture is simple enough to extend. But for the core use case — corrections that prevent mistakes — FTS5 is the right tool.
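The keyword-matching behavior described above can be demonstrated with only Python's built-in `sqlite3` (FTS5 ships with the SQLite bundled in common CPython builds). The table layout is illustrative, not NeverOnce's:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE mem USING fts5(content)")
conn.executemany(
    "INSERT INTO mem VALUES (?)",
    [
        ("never use tabs, always use spaces",),
        ("user prefers dark mode",),
        ("never use HTTP for internal services",),
    ],
)

# BM25 ranking: lower score means a better match. The word "tabs"
# deterministically triggers the tabs correction, with no model,
# no inference, and no API call involved.
rows = conn.execute(
    "SELECT content FROM mem WHERE mem MATCH ? ORDER BY bm25(mem)",
    ("tabs",),
).fetchall()
print(rows[0][0])  # → never use tabs, always use spaces
```

Note that only the memory containing the literal token matches; a semantically related memory like the dark-mode preference is never surfaced by accident.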
## How It Works

- Storage: SQLite with FTS5 full-text search. Zero external dependencies.
- Search: BM25 ranking via FTS5. Fast, proven, built into Python.
- Corrections: Stored at importance 10, tagged as corrections, always surface first in results.
- Feedback loop: `helped()` tracks effectiveness. Memories that help get stronger; memories that don't can be decayed.
- Decay: Unhelpful memories lose importance over time. Corrections never decay.
## Design Philosophy

- Zero dependencies — Just `sqlite3` (built into Python). No numpy, no embeddings, no vector DBs.
- Corrections > memories — The ability to say "I was wrong" is more important than total recall.
- Feedback-driven — Memories that help survive. Memories that don't fade away.
- One file, one store — Each `Memory` instance is a single `.db` file. Copy it, back it up, share it.
- Model-agnostic — Works with any LLM. NeverOnce is the memory, not the brain.
## License
MIT
## File details

Details for the file `neveronce-0.1.0.tar.gz`.

### File metadata

- Download URL: neveronce-0.1.0.tar.gz
- Upload date:
- Size: 15.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.12

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `771db665ec8a58c0ff6e3efef04c6a314d637abe8c312273e988df7071004e62` |
| MD5 | `af9adf6d1b3b45672f82343a714d7233` |
| BLAKE2b-256 | `2e947a1bfb9d1989c0ce2ab367e72018b25f5b9eedce6ee4e3e675eb62e22c9b` |
## File details

Details for the file `neveronce-0.1.0-py3-none-any.whl`.

### File metadata

- Download URL: neveronce-0.1.0-py3-none-any.whl
- Upload date:
- Size: 13.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.12

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `3d2e813b4f9347c52cf181009e2c0c4f19a39e28cc456343b4841d4ea27ce3a3` |
| MD5 | `365a0427bd4896ba390cdfcc835dab12` |
| BLAKE2b-256 | `f33faedddf6d5be4a4db8f887d75c6c6f6249f30b635257b6ca98e83a37d8fba` |