
total-recall

On-chain memory infrastructure for AI agents — built on the Internet Computer.

Your agent wakes up fresh every session. Total Recall gives it permanent, encrypted, on-chain memory — no cloud, no servers, no single point of failure.

Live at: www.totalrecallagent.com


Install

pip install total-recall

With async support:

pip install total-recall[async]

With LangChain integration:

pip install total-recall[langchain]

Quick Start

from total_recall import TotalRecallClient

# 1. Create a client (generate your API key at totalrecallagent.com)
memory = TotalRecallClient(api_key="tr_your_key_here")

# 2. Store memory
memory.store("last_context", {
    "user":   "MTR",
    "task":   "HVAC layout review",
    "status": "in_progress",
})

# 3. Retrieve it next session
ctx = memory.get("last_context")
print(ctx.value)      # {"user": "MTR", "task": "HVAC layout review", ...}
print(ctx.updated_at) # datetime object

That's it: store once, retrieve next session. Your agent remembers.


API Reference

TotalRecallClient(api_key, *, base_url, timeout, max_retries)

Param        Type   Default   Description
api_key      str    required  API key from your dashboard
base_url     str    prod      Override the API endpoint
timeout      float  30.0      Request timeout in seconds
max_retries  int    3         Retry attempts on network errors
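
The table describes max_retries only as "retry attempts on network errors". As a rough illustration of what such behavior could look like, here is a hypothetical retry loop with exponential backoff (`with_retries` and `base_delay` are made-up names, not part of the library):

```python
import time

def with_retries(fn, max_retries=3, base_delay=0.5):
    """Call fn(), retrying on network errors with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_retries:
                raise  # out of retries: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))
```

The actual client may back off differently; this only sketches the general shape of retry-on-network-error.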

memory.store(key, value, tags=[])

Store any value — string, dict/list (auto JSON-encoded), or raw bytes.

memory.store("session_state", {"step": 3, "done": False}, tags=["session"])
memory.store("raw_note", "Agent resumed at checkpoint alpha")
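
The "auto JSON-encoded" rule above can be sketched as a small encoder. This is an assumption about the client's wire format, not its actual code:

```python
import json

def encode_value(value):
    # Sketch of the (assumed) encoding rule: raw bytes pass through,
    # strings become UTF-8, and dicts/lists are JSON-encoded.
    if isinstance(value, bytes):
        return value
    if isinstance(value, str):
        return value.encode("utf-8")
    return json.dumps(value).encode("utf-8")
```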

memory.get(key)

Retrieve a memory entry. Returns None if not found.

entry = memory.get("session_state")
if entry:
    print(entry.value)      # auto-decoded: {"step": 3, "done": False}
    print(entry.tags)       # ["session"]
    print(entry.updated_at) # datetime
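
The entry object's fields, as exercised throughout this README, can be modeled roughly as the following local stand-in (hypothetical; the real class in total_recall may differ):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MemoryEntry:
    # Fields used by the examples in this README: key, value, tags, updated_at.
    key: str
    value: object
    tags: list = field(default_factory=list)
    updated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```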

memory.get_all()

Get all stored memory entries at once.

entries = memory.get_all()
for e in entries:
    print(e.key, e.value)

memory.keys()

List all stored keys.

ks = memory.keys()
# ["session_state", "last_context", "project_notes"]

memory.delete(key)

Delete a memory entry. No-op if key doesn't exist.

memory.delete("old_session")

memory.merge(key, patch, tags=[])

Merge new data into an existing entry. Creates it if it doesn't exist.

memory.merge("agent_state", {"last_seen": "2026-04-26", "status": "idle"})
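
Locally, the merge semantics can be sketched as a shallow dict update (an assumption; the canister may merge differently, e.g. recursively): keys in the patch overwrite, keys absent from the patch are kept.

```python
# Shallow-merge sketch: patch wins on conflicts, other keys survive.
existing = {"status": "busy", "task": "review"}
patch = {"status": "idle", "last_seen": "2026-04-26"}
merged = {**existing, **patch}
print(merged)
```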

memory.search(tags)

Search entries by tags. Returns entries that have ALL specified tags.

results = memory.search(tags=["session", "hvac"])
for e in results:
    print(e.key, e.tags)
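
The documented AND semantics (an entry matches only if it carries every requested tag) can be sketched locally as a subset check:

```python
# AND-match sketch over in-memory entries (plain dicts standing in for entries).
entries = [
    {"key": "session_state", "tags": ["session", "hvac"]},
    {"key": "raw_note",      "tags": ["session"]},
]
wanted = {"session", "hvac"}
matches = [e["key"] for e in entries if wanted <= set(e["tags"])]
print(matches)  # only entries carrying both tags
```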

memory.get_stats()

Get current usage stats and tier limits.

stats = memory.get_stats()
print(stats["tier"])                     # "Free" | "Pro" | "Agent" | "Enterprise"
print(stats["storage_bytes"])            # bytes used
print(stats["calls_today"])              # calls today
print(stats["limits"]["calls_per_day"]) # 0 = unlimited
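
Since calls_per_day == 0 means unlimited, a quota check needs a special case. A small hypothetical helper (`under_daily_limit` is not part of the library) built on the stats dict shown above:

```python
def under_daily_limit(stats: dict) -> bool:
    """True if more calls are allowed today; 0 means unlimited."""
    limit = stats["limits"]["calls_per_day"]
    return limit == 0 or stats["calls_today"] < limit
```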

memory.ping()

Check if the service is reachable.

status = memory.ping()  # "🧠 Total Recall is alive"

Async Usage

import asyncio

from total_recall import TotalRecallAsyncClient

async def run():
    async with TotalRecallAsyncClient(api_key="tr_...") as memory:
        await memory.store("key", {"hello": "world"})
        entry = await memory.get("key")
        print(entry.value)

asyncio.run(run())

LangChain Integration

Give any LangChain agent persistent on-chain memory:

from total_recall.langchain import TotalRecallMemory
from langchain_openai import ChatOpenAI
from langchain.chains import ConversationChain

memory = TotalRecallMemory(
    api_key="tr_your_key_here",
    session_key="my_agent_session",  # unique per agent/user
)

chain = ConversationChain(
    llm=ChatOpenAI(model="gpt-4o"),
    memory=memory,
    verbose=True,
)

# First session
chain.predict(input="My name is MTR and I work in HVAC.")

# Next session — agent still remembers
chain.predict(input="What do you know about me?")
# → "Your name is MTR and you work in HVAC."

Memory persists across Python processes, machine restarts, and model changes.


Real-World Example — AutoGen Agent

import os
from datetime import datetime, timezone

import autogen
from total_recall import TotalRecallClient

memory = TotalRecallClient(api_key=os.environ["TOTAL_RECALL_API_KEY"])

def on_agent_start(agent_name: str):
    """Restore agent context at session start."""
    entry = memory.get(f"{agent_name}_context")
    if entry:
        ctx = entry.value
        print(f"[{agent_name}] Resuming. Last task: {ctx.get('last_task')}")
        return ctx
    return {}

def on_agent_end(agent_name: str, state: dict):
    """Persist agent context at session end."""
    memory.merge(f"{agent_name}_context", {
        "last_task":     state.get("current_task"),
        "last_seen":     datetime.now(timezone.utc).isoformat(),
        "session_count": state.get("session_count", 0) + 1,
    }, tags=["agent", "context"])

Real-World Example — OpenAI Assistants

import os
from datetime import datetime, timezone

from openai import OpenAI
from total_recall import TotalRecallClient

client = OpenAI()
memory = TotalRecallClient(api_key=os.environ["TOTAL_RECALL_API_KEY"])

# Load memory into the system prompt
entry = memory.get("openai_agent_ctx")
ctx = entry.value if entry else {}
system_prompt = f"""You are a helpful assistant.
Previous context: {ctx}
Always update your memory by noting key facts learned each session."""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user",   "content": "What HVAC projects are we working on?"},
    ],
)

# Save updated context after each session
memory.merge("openai_agent_ctx", {
    "last_response_preview": response.choices[0].message.content[:200],
    "last_seen": datetime.now(timezone.utc).isoformat(),
})

How It Works

  • API key is generated on-chain, tied to your Internet Identity
  • Memory stored in an ICP canister — no servers, no cloud
  • Data persists across upgrades via stable storage
  • Agents authenticate with API keys, no Internet Identity needed
  • All calls go directly to IC boundary nodes

Canister Info

Backend:    fwyts-iiaaa-aaaaj-a6lpq-cai
Network:    ICP Mainnet
Built with: Motoko, dfx 0.31.0

License

MIT — Cleo 3 LLC
