# 🧠 MemoryX Python SDK

**Give your AI agents long-term memory in 2 lines of code.**

MemoryX is a cloud-native memory layer for AI agents. It provides semantic search, automatic categorization, and project-level memory sharing across multiple agents.
## ✨ Features
- 🚀 Zero Config - Auto-registration, no API key management needed
- 🔍 Semantic Search - Find memories by meaning, not keywords
- 🤖 Multi-Agent Support - Multiple agents share memories within a project
- 📊 Auto Categorization - Memories are automatically classified (semantic, episodic, etc.)
- 🔐 Secure by Design - Machine fingerprint authentication, no passwords
- ☁️ Cloud Sync - Memories stored securely in the cloud
- 📦 Zero Dependencies - Uses only Python standard library
## 📦 Installation

```bash
pip install t0ken-memoryx
```
## 🚀 Quick Start

```python
from memoryx import connect_memory

# Connect - auto-registers on first use
memory = connect_memory()

# Store memories (async, non-blocking)
memory.send_memories([
    {"content": "User prefers dark mode in VSCode"},
    {"content": "User's project uses TypeScript and React"},
])

# Search memories semantically
results = memory.search("coding preferences")
for m in results["data"]:
    print(f"- {m['content']} (score: {m['score']})")
```
## 🎯 Use Cases

### AI Assistants

```python
# Remember user preferences across sessions
memory.send_memories([
    {"content": "User prefers concise answers without code comments"},
    {"content": "User works in fintech industry"},
])

# Later, in another session...
results = memory.search("communication style")
```
### Multi-Agent Systems

```python
from memoryx import APIClient

# Agent 1: VSCode extension
memory_vscode = APIClient()
memory_vscode.auto_register("vscode", "my-extension")
memory_vscode.send_memories([{"content": "User is debugging auth module"}])

# Agent 2: CLI tool (same machine, shares the project)
memory_cli = APIClient()
memory_cli.auto_register("cli", "my-cli")

# Can search memories stored by the VSCode agent!
results = memory_cli.search("debugging")
```
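To make the sharing model concrete, here is a small conceptual sketch of project-level memory sharing. It is *not* the SDK: `ProjectStore` and `Agent` are hypothetical stand-ins for the cloud project and registered agents, and the keyword match is a placeholder for real semantic search.

```python
# Conceptual model (not the SDK) of project-level memory sharing: two
# agents registered on the same machine write to one shared project store,
# so either agent can search what the other stored.

class ProjectStore:
    """Stand-in for the cloud project shared by agents on one machine."""
    def __init__(self):
        self.memories = []

class Agent:
    def __init__(self, store, agent_id):
        self.store = store
        self.agent_id = agent_id

    def send_memories(self, memories):
        for m in memories:
            self.store.memories.append({**m, "agent_id": self.agent_id})

    def search(self, query):
        # Naive keyword match standing in for semantic search.
        return [m for m in self.store.memories
                if query.lower() in m["content"].lower()]

store = ProjectStore()
vscode, cli = Agent(store, "vscode"), Agent(store, "cli")
vscode.send_memories([{"content": "User is debugging auth module"}])
print(cli.search("debugging"))  # the CLI agent sees the VSCode memory
```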
### Conversation Memory

```python
# Store a conversation for later analysis
memory.send_conversation("conv-123", [
    {"role": "user", "content": "I need help with my React app"},
    {"role": "assistant", "content": "Sure, what's the issue?"},
    {"role": "user", "content": "The useState hook isn't updating"},
])
```
## 📖 API Reference
### `connect_memory(base_url=None, verbose=True)`

Quick connect with auto-registration.

```python
from memoryx import connect_memory

memory = connect_memory()  # First time: auto-registers
# Output: MemoryX activated
#         Agent ID: openclaw-abc12345
```
### `APIClient(config)`

Advanced usage with explicit configuration.

```python
from memoryx import APIClient

client = APIClient({
    "api_key": "mx_ope_xxx",                 # Optional: use existing key
    "api_base_url": "https://t0ken.ai/api",  # Optional: self-hosted
})

# Auto-register a new agent
result = client.auto_register("my-agent", "Agent Name")
print(result["agent_id"])  # my-agent-abc12345
print(result["api_key"])   # mx_myagent_xxx
```
### `send_memories(memories)`

Store memories (batch supported).

```python
# Single memory
result = client.send_memories([
    {"content": "User likes pizza", "metadata": {"category": "preference"}}
])

# Batch memories
result = client.send_memories([
    {"content": "User's timezone is UTC+8"},
    {"content": "User speaks English and Chinese"},
])
print(result["task_id"])  # Async task ID
```
### `search(query, limit=10)`

Semantic search across all project memories.

```python
results = client.search("food preferences", limit=5)
for m in results["data"]:
    print(f"- {m['content']} (category: {m['category']}, score: {m['score']})")
```
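A common pattern on top of search is filtering hits by a minimum relevance score. The helper below is a hypothetical sketch, not part of the SDK; `sample` hard-codes the response shape shown above (`data` entries with `content`, `category`, `score`), and the `0.75` threshold is an arbitrary choice.

```python
# Hypothetical helper: keep only search hits above a minimum score,
# best first. `sample` mimics the documented response shape.

def filter_by_score(results, min_score=0.75):
    """Return search hits whose score meets the threshold, best first."""
    hits = [m for m in results["data"] if m["score"] >= min_score]
    return sorted(hits, key=lambda m: m["score"], reverse=True)

sample = {"data": [
    {"content": "User likes pizza", "category": "preference", "score": 0.91},
    {"content": "User speaks English", "category": "semantic", "score": 0.42},
]}
print(filter_by_score(sample))  # only the pizza memory survives
```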
### `list(limit=50, offset=0)`

List memories with pagination.

```python
memories = client.list(limit=20, offset=0)
print(f"Total: {memories['total']}")
```
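To walk every memory, the documented `limit`/`offset` parameters and `total` field can drive a paging loop. The sketch below is illustrative: `FakeClient` is a stand-in for `APIClient` so the loop runs offline, echoing the response shape shown above (`data` list plus `total`).

```python
# Paging through all memories with limit/offset. FakeClient stands in
# for APIClient; the response shape follows the README.

class FakeClient:
    def __init__(self, items):
        self._items = items

    def list(self, limit=50, offset=0):
        return {"data": self._items[offset:offset + limit],
                "total": len(self._items)}

def iter_all_memories(client, page_size=20):
    """Yield every memory in the project, one page at a time."""
    offset = 0
    while True:
        page = client.list(limit=page_size, offset=offset)
        yield from page["data"]
        offset += page_size
        if offset >= page["total"]:
            break

client = FakeClient([{"content": f"memory {i}"} for i in range(45)])
print(sum(1 for _ in iter_all_memories(client)))  # 45
```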
### `delete(memory_id)`

Delete a memory.

```python
client.delete("memory_abc123")
```
### `get_task_status(task_id)`

Check async task status.

```python
status = client.get_task_status("task_abc123")
print(status["status"])  # PENDING, STARTED, SUCCESS, FAILURE
```
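Since `send_memories` is asynchronous, a typical caller polls `get_task_status` until a terminal state. The loop below is a sketch, not SDK code; `FakeClient` is a hypothetical stand-in that simulates a task succeeding on its third status check, and the states match the four listed above.

```python
import time

# Hypothetical polling loop for the documented task states
# (PENDING, STARTED, SUCCESS, FAILURE).

class FakeClient:
    """Stand-in for APIClient: succeeds on the third status check."""
    def __init__(self):
        self._calls = 0

    def get_task_status(self, task_id):
        self._calls += 1
        states = ["PENDING", "STARTED", "SUCCESS"]
        return {"status": states[min(self._calls, len(states)) - 1]}

def wait_for_task(client, task_id, interval=0.01, timeout=5.0):
    """Poll until the task reaches a terminal state or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = client.get_task_status(task_id)["status"]
        if status in ("SUCCESS", "FAILURE"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} not finished after {timeout}s")

print(wait_for_task(FakeClient(), "task_abc123"))  # SUCCESS
```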
### `get_quota()`

Get account quota.

```python
quota = client.get_quota()
print(f"Plan: {quota['quota']['tier']}")
print(f"Searches remaining: {quota['quota']['cloud_search']['remaining']}")
```
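One use of the quota response is guarding search calls when the plan is nearly exhausted. This is an illustrative helper, not an SDK function; the `quota` dict is a hard-coded stand-in for a `client.get_quota()` response using the nested shape shown above.

```python
# Hypothetical guard on remaining search quota, using the nested
# response shape documented above (quota -> cloud_search -> remaining).

def searches_remaining(quota):
    return quota["quota"]["cloud_search"]["remaining"]

quota = {"quota": {"tier": "free", "cloud_search": {"remaining": 3}}}
if searches_remaining(quota) > 0:
    print("ok to search")  # ok to search
else:
    print("quota exhausted")
```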
## 🔒 Security

- Machine Fingerprint: Uses MAC address + hardware info for authentication
- No Passwords: API keys are auto-generated and tied to agents
- Project Isolation: Each machine gets its own project; memories are isolated
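The README does not document how the fingerprint is computed, so the sketch below is purely illustrative of the idea: derive a stable identifier from the MAC address plus basic hardware info and hash it. The real SDK's scheme may differ.

```python
import hashlib
import platform
import uuid

# Illustrative only: one way to derive a stable machine fingerprint from
# the MAC address plus basic hardware info. The actual scheme used by the
# SDK is not documented here and may differ.

def machine_fingerprint():
    """Hash MAC + architecture + OS into a short stable identifier."""
    raw = f"{uuid.getnode()}-{platform.machine()}-{platform.system()}"
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

print(machine_fingerprint())  # stable on one machine, differs across machines
```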
## 🌐 Self-Hosted

```python
from memoryx import connect_memory

memory = connect_memory(base_url="http://your-server:8000/api")
```
## 📄 License

MIT © MemoryX Team
## File details

Details for the file `t0ken_memoryx-2.1.2.tar.gz`.

**File metadata**

- Download URL: t0ken_memoryx-2.1.2.tar.gz
- Size: 10.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.19

**File hashes**

| Algorithm | Hash digest |
|---|---|
| SHA256 | `de08b56ed6f50c9718337425c265113feea9a3889f2e9c9cd86d3e4013308675` |
| MD5 | `3de5c0f9d1e3642ebbb1e6ecbe999f78` |
| BLAKE2b-256 | `7531f971ea4e809952d5efba8c35fdca4163e6c966351576836e9cf662c51751` |
## File details

Details for the file `t0ken_memoryx-2.1.2-py3-none-any.whl`.

**File metadata**

- Download URL: t0ken_memoryx-2.1.2-py3-none-any.whl
- Size: 8.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.19

**File hashes**

| Algorithm | Hash digest |
|---|---|
| SHA256 | `213b5789ccb2cb5fa33ac7afc1075c4236c698d048344c5a80060db1dda04afb` |
| MD5 | `a21f041e9e073e7b333b2eec96063ba9` |
| BLAKE2b-256 | `14ef5952fb3207fb033fc1d8b4859d6ce99400d8f4620e76839d597a00643b24` |