# total-recall
On-chain memory infrastructure for AI agents — built on the Internet Computer.
Your agent wakes up fresh every session. Total Recall gives it permanent, encrypted, on-chain memory — no cloud, no servers, no single point of failure.
Live at: www.totalrecallagent.com
## Install

```bash
pip install total-recall
```

With async support:

```bash
pip install "total-recall[async]"
```

With LangChain integration:

```bash
pip install "total-recall[langchain]"
```
## Quick Start

```python
from total_recall import TotalRecallClient

# 1. Create a client (generate your API key at totalrecallagent.com)
memory = TotalRecallClient(api_key="tr_your_key_here")

# 2. Store memory
memory.store("last_context", {
    "user": "MTR",
    "task": "HVAC layout review",
    "status": "in_progress",
})

# 3. Retrieve it next session
ctx = memory.get("last_context")
print(ctx.value)       # {"user": "MTR", "task": "HVAC layout review", ...}
print(ctx.updated_at)  # datetime object
```

That's it: store, retrieve, resume. Your agent remembers.
## API Reference

### TotalRecallClient(api_key, *, base_url, timeout, max_retries)

| Param | Type | Default | Description |
|---|---|---|---|
| `api_key` | `str` | — | API key from your dashboard |
| `base_url` | `str` | prod | Override API endpoint |
| `timeout` | `float` | `30.0` | Request timeout in seconds |
| `max_retries` | `int` | `3` | Retry attempts on network errors |
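The `max_retries` semantics can be sketched as retry-with-backoff on network errors. This is a hypothetical illustration of the behavior described above, not the library's actual implementation (function name and backoff schedule are assumptions):

```python
import time

def with_retries(fn, max_retries=3, base_delay=0.5):
    """Call fn, retrying on ConnectionError up to max_retries attempts,
    sleeping base_delay * 2**attempt between failed attempts."""
    for attempt in range(max_retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_retries - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * 2 ** attempt)

# A call that fails twice, then succeeds on the third attempt:
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(with_retries(flaky, max_retries=3, base_delay=0.01))  # → ok
```

With `max_retries=3` a request is attempted at most three times before the error propagates to the caller.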
### memory.store(key, value, tags=[])

Store any value — string, dict/list (auto JSON-encoded), or raw bytes.

```python
memory.store("session_state", {"step": 3, "done": False}, tags=["session"])
memory.store("raw_note", "Agent resumed at checkpoint alpha")
```
### memory.get(key)

Retrieve a memory entry. Returns `None` if not found.

```python
entry = memory.get("session_state")
if entry:
    print(entry.value)       # auto-decoded: {"step": 3, "done": False}
    print(entry.tags)        # ["session"]
    print(entry.updated_at)  # datetime
```
### memory.get_all()

Get all stored memory entries at once.

```python
entries = memory.get_all()
for e in entries:
    print(e.key, e.value)
```
### memory.keys()

List all stored keys.

```python
ks = memory.keys()
# ["session_state", "last_context", "project_notes"]
```
### memory.delete(key)

Delete a memory entry. No-op if the key doesn't exist.

```python
memory.delete("old_session")
```
### memory.merge(key, patch, tags=[])

Merge new data into an existing entry. Creates it if it doesn't exist.

```python
memory.merge("agent_state", {"last_seen": "2026-04-26", "status": "idle"})
```
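Conceptually, this behaves like a shallow dict update: keys in the patch overwrite or extend the stored entry. A local sketch of the presumed merge semantics (the actual server-side logic is not shown in this document):

```python
def merge_patch(existing: dict, patch: dict) -> dict:
    """Shallow merge: patch keys overwrite or extend the existing entry."""
    merged = dict(existing)
    merged.update(patch)
    return merged

state = {"status": "busy", "last_task": "layout review"}
patch = {"status": "idle", "last_seen": "2026-04-26"}
print(merge_patch(state, patch))
# → {'status': 'idle', 'last_task': 'layout review', 'last_seen': '2026-04-26'}
```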
### memory.search(tags)

Search entries by tags. Returns entries that have ALL specified tags.

```python
results = memory.search(tags=["session", "hvac"])
for e in results:
    print(e.key, e.tags)
```
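The ALL-tags rule is a subset test: an entry matches only if it carries every queried tag. A minimal sketch of that matching rule (tag values are illustrative):

```python
def matches_all(entry_tags: list, query_tags: list) -> bool:
    """True only if every queried tag appears on the entry."""
    return set(query_tags) <= set(entry_tags)

print(matches_all(["session", "hvac", "q2"], ["session", "hvac"]))  # → True
print(matches_all(["session"], ["session", "hvac"]))                # → False
```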
### memory.get_stats()

Get current usage stats and tier limits.

```python
stats = memory.get_stats()
print(stats["tier"])                     # "Free" | "Pro" | "Agent" | "Enterprise"
print(stats["storage_bytes"])            # bytes used
print(stats["calls_today"])              # calls today
print(stats["limits"]["calls_per_day"])  # 0 = unlimited
```
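Because `calls_per_day` of `0` means unlimited, a quota check has to special-case zero. A hypothetical helper built on the stats shape shown above (helper name and sample values are illustrative, not part of the library):

```python
from typing import Optional

def calls_remaining(stats: dict) -> Optional[int]:
    """Remaining daily calls for this tier; None means unlimited (limit == 0)."""
    limit = stats["limits"]["calls_per_day"]
    if limit == 0:
        return None  # 0 is documented as "unlimited", not "no calls"
    return max(0, limit - stats["calls_today"])

sample = {"tier": "Pro", "calls_today": 40, "limits": {"calls_per_day": 100}}
print(calls_remaining(sample))  # → 60
```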
### memory.ping()

Check if the service is reachable.

```python
status = memory.ping()  # "🧠 Total Recall is alive"
```
## Async Usage

```python
import asyncio

from total_recall import TotalRecallAsyncClient

async def run():
    async with TotalRecallAsyncClient(api_key="tr_...") as memory:
        await memory.store("key", {"hello": "world"})
        entry = await memory.get("key")
        print(entry.value)

asyncio.run(run())
```
## LangChain Integration

Give any LangChain agent persistent on-chain memory:

```python
from total_recall.langchain import TotalRecallMemory
from langchain_openai import ChatOpenAI
from langchain.chains import ConversationChain

memory = TotalRecallMemory(
    api_key="tr_your_key_here",
    session_key="my_agent_session",  # unique per agent/user
)

chain = ConversationChain(
    llm=ChatOpenAI(model="gpt-4o"),
    memory=memory,
    verbose=True,
)

# First session
chain.predict(input="My name is MTR and I work in HVAC.")

# Next session — the agent still remembers
chain.predict(input="What do you know about me?")
# → "Your name is MTR and you work in HVAC."
```
Memory persists across Python processes, machine restarts, and model changes.
## Real-World Example — AutoGen Agent

```python
import os
from datetime import datetime, timezone

import autogen
from total_recall import TotalRecallClient

memory = TotalRecallClient(api_key=os.environ["TOTAL_RECALL_API_KEY"])

def on_agent_start(agent_name: str) -> dict:
    """Restore agent context at session start."""
    entry = memory.get(f"{agent_name}_context")
    if entry:
        ctx = entry.value
        print(f"[{agent_name}] Resuming. Last task: {ctx.get('last_task')}")
        return ctx
    return {}

def on_agent_end(agent_name: str, state: dict) -> None:
    """Persist agent context at session end."""
    memory.merge(f"{agent_name}_context", {
        "last_task": state.get("current_task"),
        "last_seen": datetime.now(timezone.utc).isoformat(),
        "session_count": state.get("session_count", 0) + 1,
    }, tags=["agent", "context"])
```
## Real-World Example — OpenAI Chat Completions

```python
import os
from datetime import datetime, timezone

from openai import OpenAI
from total_recall import TotalRecallClient

client = OpenAI()
memory = TotalRecallClient(api_key=os.environ["TOTAL_RECALL_API_KEY"])

# Load memory into the system prompt
entry = memory.get("openai_agent_ctx")
ctx = entry.value if entry else {}
system_prompt = f"""You are a helpful assistant.
Previous context: {ctx}
Always update your memory by noting key facts learned each session."""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What HVAC projects are we working on?"},
    ],
)

# Save updated context after each session
memory.merge("openai_agent_ctx", {
    "last_response_preview": response.choices[0].message.content[:200],
    "last_seen": datetime.now(timezone.utc).isoformat(),
})
```
## How It Works
- API key is generated on-chain, tied to your Internet Identity
- Memory stored in an ICP canister — no servers, no cloud
- Data persists across upgrades via stable storage
- Agents authenticate with API keys, no Internet Identity needed
- All calls go directly to IC boundary nodes
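The authentication flow above can be sketched as an ordinary HTTPS request carrying the API key. Everything specific in this sketch is a guess: the endpoint URL, path, and header scheme are NOT documented by this package; only the request-building step runs here, nothing is sent:

```python
import json
import urllib.request

def build_store_request(api_key: str, key: str, value: dict) -> urllib.request.Request:
    """Build (but do not send) a hypothetical authenticated store call."""
    body = json.dumps({"key": key, "value": value}).encode("utf-8")
    return urllib.request.Request(
        "https://api.totalrecallagent.com/v1/memory",  # hypothetical endpoint
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # hypothetical header scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_store_request("tr_example_key", "session_state", {"step": 3})
print(req.get_method())  # → POST
```

The point is the shape, not the specifics: the agent never touches Internet Identity; the key alone authenticates each call at the boundary.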
## Canister Info

| | |
|---|---|
| Backend | `fwyts-iiaaa-aaaaj-a6lpq-cai` |
| Network | ICP Mainnet |
| Built with | Motoko, dfx 0.31.0 |
## License

MIT — Cleo 3 LLC
## File details

### totalrecallagent-0.2.0.tar.gz (source distribution)

- Size: 8.7 kB
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0, CPython/3.9.6

| Algorithm | Hash digest |
|---|---|
| SHA256 | `eb3c3595122a2b43f07c0bb168839c97fc11c0a9f3b63d575e8bb7dd58d76f90` |
| MD5 | `eba8f239a668ef984598617fdda6a1ce` |
| BLAKE2b-256 | `bc832e3b19ff0001e635a04420e19c177dd0673e183976a0af7aa8d2718a73f8` |
### totalrecallagent-0.2.0-py3-none-any.whl (built distribution, Python 3)

- Size: 10.4 kB
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0, CPython/3.9.6

| Algorithm | Hash digest |
|---|---|
| SHA256 | `b54fc12d731589aaeb2505ae984f45369526f81bb91ba61bc63fe4c0efcb360f` |
| MD5 | `5067fad95577b4e112aa8f70fa2ae537` |
| BLAKE2b-256 | `586ccaaecaff859b5610127c39ff1daf5877d82f71c685d06e79fcd7852093a4` |