Project description
Angel Recall
Angel Recall is a Python implementation of the MemOS (Memory Operating System) architecture, designed to give AI agents a long-term, evolvable memory. It uses a Lane-Based Command Queue instead of concurrent async to ensure predictable, race-free execution.
The Lane-Based Advantage
Unlike standard async libraries, Angel Recall processes tasks for each session (user) in a dedicated Lane.
- Predictable Execution: Tasks within a session run sequentially, preventing race conditions.
- Clean Logs: No interleaved garbage in your application logs.
- Simplified Mental Model: You don't need to worry about asyncio locks or complex parallel debugging.
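The lane idea can be sketched in a few lines of plain Python. This is an illustration of the concept only, not Angel Recall's internals: each session key maps to its own FIFO queue, so tasks for the same session can never interleave.

```python
from collections import defaultdict, deque

class LaneQueue:
    """Illustrative sketch: one FIFO lane per session, drained sequentially."""

    def __init__(self):
        self.lanes = defaultdict(deque)

    def submit(self, session_id, task):
        # Tasks for the same session land in the same lane,
        # so they run in submission order, never concurrently.
        self.lanes[session_id].append(task)

    def drain(self, session_id):
        # Execute every queued task for this session, in order.
        results = []
        lane = self.lanes[session_id]
        while lane:
            results.append(lane.popleft()())
        return results

q = LaneQueue()
q.submit("alice", lambda: "store memory")
q.submit("alice", lambda: "retrieve memory")
print(q.drain("alice"))  # ['store memory', 'retrieve memory']
```

Because each lane is drained one task at a time, there is nothing to lock: sequential execution within a session is the invariant, not something you have to enforce.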
Quick Start
Installation
pip install angel-recall
Memory Dashboard
Angel Recall comes with a built-in Flask dashboard to visualize your memory vault and chat with your agent in real-time.
# Launch the dashboard
angel-recall memos
The dashboard allows you to:
- Configure API Keys: Add OPENAI_API_KEY, ANTHROPIC_API_KEY, etc., directly from the UI.
- Switch Models: Test different LLMs (local via Ollama or remote via LiteLLM).
- Inspect Memories: View all MemCubes in your vault, including their semantic types, access scopes, and timestamps.
- Chat: Interact with the memory-augmented agent and see how memories are created and retrieved.
Basic Usage
from angel_recall import MemOS
# Initialize the Memory OS (Synchronous)
memos = MemOS(persist_directory="./my_vault")
# Store a new memory
memos.process("Remember that I prefer technical deep-dives.")
# Retrieve relevant memories
response = memos.process("How should you format the report for me?")
print(response["response"])
Configuration
Angel Recall is designed to be flexible. You can configure the model and persistence directory during initialization:
memos = MemOS(
    persist_directory="./my_vault",
    model="ollama/gemma3n:e4b"  # Supports any LiteLLM-compatible model
)
Local Embedding Optimization
By default, angel-recall relies on external calls for embeddings via litellm. For better performance, privacy, and low-latency retrieval, you can enable local embeddings using sentence-transformers.
Enabling Local Embeddings
You can enable local embeddings during initialization or dynamically:
# During initialization
memos = MemOS(local_embedding=True)
# Or dynamically
memos.enable_local_embedding(True)
Installation
To use this feature, you must manually install the sentence-transformers library:
pip install sentence-transformers
If local_embedding is set to True but the library is not found, angel-recall will print an error message and exit.
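If you want the same fail-fast behavior in your own startup code, a dependency guard is a few lines. This is a sketch of the documented behavior, not the package's actual implementation; `require_module` is a hypothetical helper name.

```python
import importlib.util
import sys

def require_module(name, install_hint):
    """Exit with a clear message if a required optional dependency is missing,
    mirroring the documented fail-fast behavior (error out, don't degrade)."""
    if importlib.util.find_spec(name) is None:
        sys.exit(f"{name} is required but not installed. Try: {install_hint}")

# Before enabling local embeddings you might call:
# require_module("sentence_transformers", "pip install sentence-transformers")
```

Failing at startup keeps the error close to its cause, rather than surfacing a confusing import error mid-retrieval.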
Intelligent Memory Categorization
Angel Recall goes beyond basic storage by automatically categorizing and prioritizing your memories based on their content and intent:
- Corrections: When a user says "Actually, it's X," the system identifies this as a CORRECTION.
- High Priority: Corrections are assigned a higher priority (default 5) to ensure they override older information in retrieval.
- Proactive Archiving: Storing a correction automatically archives older conflicting facts to maintain context accuracy.
- Insights: When the agent or user identifies a complex pattern or deep understanding, it is stored as an INSIGHT (priority 3).
- Facts & Preferences: Standard user data (priority 0).
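The priority scheme above can be summarized as a simple lookup. The enum and table below are illustrative (mirroring the semantic types and default priorities described here), not names imported from the package.

```python
from enum import Enum

class SemanticType(Enum):
    CORRECTION = "correction"
    INSIGHT = "insight"
    FACT = "fact"
    PREFERENCE = "preference"

# Default priorities as described above (illustrative values).
DEFAULT_PRIORITY = {
    SemanticType.CORRECTION: 5,   # overrides older conflicting facts in retrieval
    SemanticType.INSIGHT: 3,
    SemanticType.FACT: 0,
    SemanticType.PREFERENCE: 0,
}

print(DEFAULT_PRIORITY[SemanticType.CORRECTION])  # 5
```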
Memory Policies
Angel Recall implements several policies to ensure your agent's memory stays relevant and secure:
- Weight Decay & Forgetting: To simulate how humans prioritize information, memories have a weight (default 1.0). In every processing cycle, weights decay by a configurable rate.
- Reinforcement: Accessing or "touching" a memory increases its weight.
- Forgetting: If a memory's weight falls below a threshold (e.g., 0.3), it is automatically moved to MemoryState.ARCHIVED.
- Configuration:
memos = MemOS(decay_rate=0.01, min_weight=0.5)
- Lifecycle Management: Memories automatically transition through states (GENERATED -> ACTIVATED -> ARCHIVED). Cold memories are eventually moved to long-term storage to keep the retrieval context clean.
- TTL (Time-to-Live): You can set an expiration for any memory. Once the TTL is reached, the memory is automatically purged during the next process cycle.
- Access Governance: Supports PRIVATE, SHARED, and PUBLIC scopes. The system verifies ownership and permissions before any read or write operation.
- Sensitivity Masking: Built-in support for PII redaction and sensitive tag handling to prevent leakage of private data into LLM prompts.
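The decay, reinforcement, TTL, and forgetting rules above fit in a short maintenance pass. The field and function names below are illustrative, not the package's schema; this is a conceptual sketch of one processing cycle.

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Memory:
    text: str
    weight: float = 1.0
    ttl: Optional[float] = None       # seconds until expiry, if set
    created: float = field(default_factory=time.time)
    archived: bool = False

def process_cycle(memories, decay_rate=0.01, min_weight=0.3, now=None):
    """One maintenance pass: purge expired memories, decay weights,
    and archive anything that falls below the forgetting threshold."""
    now = time.time() if now is None else now
    kept = []
    for m in memories:
        if m.ttl is not None and now - m.created >= m.ttl:
            continue                  # TTL reached: purge entirely
        m.weight -= decay_rate        # weight decay every cycle
        if m.weight < min_weight:
            m.archived = True         # forgetting: move to ARCHIVED
        kept.append(m)
    return kept

def touch(memory, boost=0.1):
    """Reinforcement: accessing a memory increases its weight."""
    memory.weight += boost
```

The key design point is that forgetting is soft (archived, still recoverable) while TTL expiry is hard (purged), matching the two mechanisms described above.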
Access Governance Example
You can control the visibility of memories programmatically or via tools. By default, all memories are private to the owner.
from angel_recall import MemOS, AccessScope
memos = MemOS()
# Programmatically update access
# Only the owner ('alice') can change the scope
success = memos.api.update_access_scope(
    cube_id="some-uuid",
    scope=AccessScope.SHARED,
    user="alice"
)
if success:
    print("Memory is now shared with other users in the system.")
When using get_memory_tools, your agent can also manage its own privacy:
# The agent can call 'set_memory_access' tool
# Example: set_memory_access(memory_id="...", scope="public")
Advanced Usage: Agent Tools
If you are building your own LangGraph agents and want to give them explicit control over memory, you can use get_memory_tools. This is perfect for complex agents that need to decide when to commit something to long-term storage.
from langgraph.prebuilt import ToolNode
from angel_recall import MemOS, get_memory_tools
memos = MemOS()
tools = get_memory_tools(memos, user="alice")
# Create a tool node for your graph
tool_node = ToolNode(tools)
# Bind tools to your model
# model_with_tools = model.bind_tools(tools)
Tool-based LangGraph Example
from langgraph.graph import StateGraph, MessagesState, END
from langgraph.prebuilt import ToolNode
from angel_recall import MemOS, get_memory_tools
memos = MemOS()
tools = get_memory_tools(memos, user="alice")
def call_model(state):
    # Your logic here to call an LLM bound with 'tools'
    pass
workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_model)
workflow.add_node("tools", ToolNode(tools))
workflow.set_entry_point("agent")
# Add conditional edges for tool usage...
Testing
We use pytest for testing. To run the tests using the qwen2.5:0.5b model:
pytest tests/test_core.py
Ensure Ollama is running and you have pulled the model: ollama pull qwen2.5:0.5b.
Core Components
- MemVault: The storage engine combining ChromaDB for semantic search and NetworkX for relationship mapping.
- MemReader: An intelligent parser that understands intent (store, retrieve, delete) and categorizes memories into semantic types like CORRECTION, INSIGHT, FACT, or PREFERENCE.
- MemGovernance: Ensures privacy and access control, managing who can read or modify specific memory cubes.
- MemScheduler: Dynamically selects the best memory fragments based on the current task's context.
Contributing
We welcome contributions that improve the efficiency of the memory scheduler or add support for new vector backends. Please see our contributing guidelines for more details.
License
Apache License 2.0. See LICENSE for details.
Credits
- MemOS paper
- Memelord by Glauber Costa
File details
Details for the file angel_recall-0.4.1.tar.gz.
File metadata
- Download URL: angel_recall-0.4.1.tar.gz
- Upload date:
- Size: 26.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Linux Mint","version":"22.1","id":"xia","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null}
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 3c908e7c5634c618d8da5657cab590d7ae93837b9c208a0320bdf31aaa44970c |
| MD5 | 4b3aa95061fc310618007c087f0f9448 |
| BLAKE2b-256 | 96ca736764e9cfa694de5969aacb95d7013e5f0b9b443f387e2cb86f279397ac |
File details
Details for the file angel_recall-0.4.1-py3-none-any.whl.
File metadata
- Download URL: angel_recall-0.4.1-py3-none-any.whl
- Upload date:
- Size: 21.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Linux Mint","version":"22.1","id":"xia","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null}
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a150b09c5412b5f98586227fcfe283fa0d97988a9803aa65946b950b806634d9 |
| MD5 | 90af7205e9a6b0cae1d7b51b0d4155d5 |
| BLAKE2b-256 | cc369e9b9681fdfd48f89ccbea4d5862e0640347e230940a5a533d066f271025 |