A memory management library for Python
Experimental software
Memorizz is an educational/experimental framework. APIs may change and the project has not undergone security hardening for production workloads.
Memorizz is a Python framework for building memory-augmented AI agents. It provides:
- multiple memory systems (episodic, semantic, procedural, short-term, shared)
- pluggable storage providers (Oracle, MongoDB, filesystem)
- agent builders and application modes (assistant, workflow, deep_research)
- scheduled automations (cron, interval, one-shot) with optional WhatsApp delivery
- optional internet access, sandbox code execution, and local web UI
Key Capabilities
- Persistent memory across sessions and conversations
- Semantic retrieval with embeddings + vector search
- Entity memory tools for profile-style facts (entity_memory_lookup / entity_memory_upsert)
- Tool calling with automatic function registration
- Semantic cache to reduce repeat LLM calls
- Multi-agent orchestration with shared blackboard memory
- Context-window telemetry via get_context_window_stats()
- Scheduled automations via SDK, web UI, or agent conversation (see src/memorizz/automation/README.md)
Installation
Base install:
```shell
pip install memorizz
```
Common extras:
```shell
pip install "memorizz[oracle]"          # Oracle provider
pip install "memorizz[mongodb]"         # MongoDB provider
pip install "memorizz[filesystem]"      # Local filesystem + FAISS
pip install "memorizz[sandbox-e2b]"     # E2B sandbox
pip install "memorizz[sandbox-daytona]" # Daytona sandbox
pip install "memorizz[ui]"              # Local web UI
pip install "memorizz[all]"             # Everything
```
Quick Start (Filesystem Provider)
```python
import os
from pathlib import Path

from memorizz.memagent.builders import MemAgentBuilder
from memorizz.memory_provider import FileSystemConfig, FileSystemProvider

os.environ["OPENAI_API_KEY"] = "your-openai-api-key"

provider = FileSystemProvider(
    FileSystemConfig(
        root_path=Path("~/.memorizz").expanduser(),
        embedding_provider="openai",
        embedding_config={"model": "text-embedding-3-small"},
    )
)

agent = (
    MemAgentBuilder()
    .with_instruction("You are a helpful assistant with persistent memory.")
    .with_memory_provider(provider)
    .with_llm_config(
        {
            "provider": "openai",
            "model": "gpt-4o-mini",
            "api_key": os.environ["OPENAI_API_KEY"],
        }
    )
    .with_semantic_cache(enabled=True, threshold=0.85)
    .build()
)

print(agent.run("Hi, my name is Leah and I work on payments systems."))
print(agent.run("What did I tell you about my work?"))

stats = agent.get_context_window_stats()
print(stats)
```
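get_context_window_stats() reports how much of the model's context window the agent is consuming. The exact fields it returns aren't documented in this README; as a rough illustration of the kind of telemetry involved, here is a toy version that estimates token usage at roughly four characters per token (a common English-text heuristic, not memorizz's method):

```python
def context_window_stats(messages: list[str], max_tokens: int = 128_000) -> dict:
    """Rough telemetry: estimated tokens used and utilization of the window."""
    # Crude heuristic: ~4 characters per token for English text.
    used = sum(len(m) for m in messages) // 4
    return {
        "estimated_tokens": used,
        "max_tokens": max_tokens,
        "utilization": used / max_tokens,
    }

stats = context_window_stats(["Hi, my name is Leah." * 10], max_tokens=8_000)
print(stats)
```

Telemetry like this is what lets an agent decide when to summarize or evict older turns before the window overflows.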
Oracle Setup (Optional)
If you want Oracle AI Database as the backing store:
```shell
./install_oracle.sh
memorizz setup-oracle
```
Then configure ORACLE_USER, ORACLE_PASSWORD, ORACLE_DSN, and your LLM credentials. Full setup details are in SETUP.md.
For multi-client consistency (UI + notebooks), you can set shared embedding defaults:
```shell
export MEMORIZZ_DEFAULT_EMBEDDING_PROVIDER=openai
export MEMORIZZ_DEFAULT_EMBEDDING_MODEL=text-embedding-3-small
export MEMORIZZ_DEFAULT_EMBEDDING_DIMENSIONS=1536
```
Application Modes
ApplicationMode presets automatically enable different memory stacks:
- assistant: conversation, long-term, personas, entity memory, short-term, summaries
- workflow: workflow memory, toolbox, long-term, short-term, summaries
- deep_research: toolbox, shared memory, long-term, short-term, summaries
Example:
```python
import os

from memorizz.enums import ApplicationMode
from memorizz.memagent.builders import MemAgentBuilder

llm_config = {
    "provider": "openai",
    "model": "gpt-4o-mini",
    "api_key": os.environ["OPENAI_API_KEY"],
}

agent = (
    MemAgentBuilder()
    .with_application_mode(ApplicationMode.DEEP_RESEARCH)
    .with_memory_provider(provider)  # memory provider from the Quick Start
    .with_llm_config(llm_config)
    .build()
)
```
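The preset-to-stack mapping above is just data, which can be made concrete as a lookup table. The enum below is an illustrative stand-in for memorizz.enums.ApplicationMode, and the stack names are shorthand for the memory types listed above, not memorizz identifiers:

```python
from enum import Enum

class ApplicationMode(Enum):
    ASSISTANT = "assistant"
    WORKFLOW = "workflow"
    DEEP_RESEARCH = "deep_research"

# Memory stacks enabled by each preset, per the list above.
MODE_STACKS = {
    ApplicationMode.ASSISTANT: [
        "conversation", "long_term", "personas", "entity", "short_term", "summaries",
    ],
    ApplicationMode.WORKFLOW: [
        "workflow", "toolbox", "long_term", "short_term", "summaries",
    ],
    ApplicationMode.DEEP_RESEARCH: [
        "toolbox", "shared", "long_term", "short_term", "summaries",
    ],
}

print(MODE_STACKS[ApplicationMode.DEEP_RESEARCH])
```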
Internet Access (Deep Research)
Deep Research agents can attach internet providers and expose internet_search / open_web_page tools.
```python
import os

from memorizz.internet_access import TavilyProvider
from memorizz.memagent.builders import create_deep_research_agent

llm_config = {
    "provider": "openai",
    "model": "gpt-4o-mini",
    "api_key": os.environ["OPENAI_API_KEY"],
}

internet_provider = TavilyProvider(api_key=os.environ["TAVILY_API_KEY"])

agent = (
    create_deep_research_agent(internet_provider=internet_provider)
    .with_memory_provider(provider)  # memory provider from the Quick Start
    .with_llm_config(llm_config)
    .build()
)

results = agent.search_internet("latest vector database benchmark")
```
Sandbox Code Execution
Attach a sandbox provider to enable execute_code, sandbox_write_file, and sandbox_read_file tools.
```python
import os

from memorizz.memagent import MemAgent

llm_config = {
    "provider": "openai",
    "model": "gpt-4o-mini",
    "api_key": os.environ["OPENAI_API_KEY"],
}

agent = MemAgent(
    llm_config=llm_config,
    memory_provider=provider,  # memory provider from the Quick Start
    sandbox_provider="e2b",  # or "daytona" / "graalpy"
)

print(agent.execute_code("print(2 ** 16)"))
```
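Real sandbox providers such as E2B and Daytona isolate code at the process or microVM level. The toy stand-in below only illustrates the tool's contract (take a code string, return its captured stdout) by stripping Python's builtins table; it is not a security boundary and not how memorizz's providers work:

```python
import contextlib
import io

def execute_untrusted(code: str) -> str:
    """Toy execute_code stand-in: run a snippet with a minimal builtins
    table and return whatever it printed."""
    allowed = {"print": print, "range": range, "len": len}
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        # With a restricted __builtins__, names like open() raise NameError.
        exec(code, {"__builtins__": allowed})
    return buf.getvalue()

print(execute_untrusted("print(2 ** 16)"))  # 65536
```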
Multi-Agent Deep Research Workflow
```python
from memorizz.memagent.orchestrators import DeepResearchWorkflow

workflow = DeepResearchWorkflow.from_config(
    memory_provider=provider,  # memory provider from the Quick Start
    delegate_instructions=[
        "Financial researcher: collect metrics and citations.",
        "Risk analyst: identify key downside scenarios.",
    ],
)

report = workflow.run("Analyze the last 3 years of cloud infrastructure trends.")
print(report)
```
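memorizz's shared blackboard API isn't detailed in this README, but the pattern behind it is simple: delegates publish findings to a topic-keyed store that every agent in the workflow can read. A minimal sketch with illustrative names (Blackboard is not a memorizz class):

```python
from collections import defaultdict

class Blackboard:
    """Minimal shared-memory blackboard: agents post findings under a topic,
    and any agent can read what the others have contributed."""

    def __init__(self) -> None:
        self._topics: dict[str, list[tuple[str, str]]] = defaultdict(list)

    def post(self, topic: str, agent: str, finding: str) -> None:
        self._topics[topic].append((agent, finding))

    def read(self, topic: str) -> list[tuple[str, str]]:
        return list(self._topics[topic])

board = Blackboard()
board.post("cloud-trends", "financial_researcher", "Capex up sharply at hyperscalers.")
board.post("cloud-trends", "risk_analyst", "Key risk: GPU supply concentration.")
for agent, finding in board.read("cloud-trends"):
    print(f"{agent}: {finding}")
```

The orchestrator can then hand the accumulated findings to a synthesizer step that drafts the final report.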
CLI
After installation, the memorizz command exposes:
```shell
memorizz run local       # start local web UI (requires [ui])
memorizz install-oracle  # start Oracle container helper
memorizz setup-oracle    # initialize Oracle schema/user
```
Examples
- examples/single_agent/memagent_local_oracle.ipynb
- examples/single_agent/memagent_remote_oracle.ipynb
- examples/deep_research/deep_research_memagent.ipynb
- examples/sandbox/memagent_e2b_sandbox.ipynb
- examples/sandbox/memagent_daytona_sandbox.ipynb
- examples/sandbox/memagent_graalpy_sandbox.ipynb
- examples/automations/automations_guide.ipynb
- examples/model_providers/openai_provider.ipynb
- examples/model_providers/anthropic_provider.ipynb
- examples/model_providers/ollama_provider.ipynb
- examples/model_providers/compare_providers.ipynb
Documentation
- Docs source: docs/
- Local preview: make docs-serve (or mkdocs serve)
- Architecture notes: src/memorizz/MEMORY_ARCHITECTURE.md
License
PolyForm Noncommercial 1.0.0.
See LICENSE and NOTICE.
File details
Details for the file memorizz-0.0.40.tar.gz.
File metadata
- Download URL: memorizz-0.0.40.tar.gz
- Upload date:
- Size: 620.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 750ad91fac219a8063e26282a769341d2cbcbdd2b57b1eaf3f825406f9035163 |
| MD5 | eb7061447ef64fbfc3f885ea3926b1e0 |
| BLAKE2b-256 | 8143058f1ed7ee175b1837180676ce21d10cc8caa9808711030daf1192720008 |
File details
Details for the file memorizz-0.0.40-py3-none-any.whl.
File metadata
- Download URL: memorizz-0.0.40-py3-none-any.whl
- Upload date:
- Size: 540.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 630e55e68fa32c1cf1139d0d109f12a1074b5661ea1ae8290e905c09533f7ef9 |
| MD5 | 1fa58fc2a73520adc96437fa8eee829b |
| BLAKE2b-256 | 5c9a3c8b2b8b6650416372a9de47d5029f1739bd86627e1a17a58a73674eded0 |