Intelligent Memory System - Persistent memory layer for LLM applications


English | 中文 | 日本語

✨ Highlights

PowerMem LOCOMO Benchmark Metrics
  • 🎯 Accurate: [48.77% Accuracy Improvement] More accurate than full-context on the LOCOMO benchmark (78.70% vs. 52.90%)
  • ⚡ Agile: [91.83% Faster Response] Significantly lower p95 retrieval latency than full-context (1.44s vs. 17.12s)
  • 💰 Affordable: [96.53% Token Reduction] Significantly lower cost than full-context without sacrificing performance (0.9k vs. 26k tokens)

🧠 PowerMem - Intelligent Memory System

In AI application development, enabling large language models to persistently "remember" historical conversations, user preferences, and contextual information is a core challenge. PowerMem combines a hybrid storage architecture of vector retrieval, full-text search, and graph databases, and introduces the Ebbinghaus forgetting curve theory from cognitive science to build a powerful memory infrastructure for AI applications. The system also provides comprehensive multi-agent support capabilities, including agent memory isolation, cross-agent collaboration and sharing, fine-grained permission control, and privacy protection mechanisms, enabling multiple AI agents to achieve efficient collaboration while maintaining independent memory spaces.

🚀 Core Features

👨‍💻 Developer Friendly

  • 🔌 Lightweight Integration: Provides a simple Python SDK, automatically loads configuration from .env files, enabling developers to quickly integrate into existing projects. Also supports MCP Server and HTTP API Server integration methods

🧠 Intelligent Memory Management

  • 🔍 Intelligent Memory Extraction: Automatically extracts key facts from conversations through LLM, intelligently detects duplicates, updates conflicting information, and merges related memories to ensure accuracy and consistency of the memory database
  • 📉 Ebbinghaus Forgetting Curve: Based on the memory forgetting patterns from cognitive science, automatically calculates memory retention rates and implements time-decay weighting, prioritizing recent and relevant memories, allowing AI systems to naturally "forget" outdated information like humans
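
The time-decay idea can be sketched in a few lines of plain Python. Note that the decay constant and the similarity-weighting formula below are illustrative assumptions, not PowerMem's actual internals:

```python
import math

def retention(elapsed_hours: float, stability_hours: float = 24.0) -> float:
    """Ebbinghaus-style exponential forgetting: R = exp(-t / S).

    `stability_hours` controls how quickly a memory fades; the default
    of 24 hours is an illustrative assumption.
    """
    return math.exp(-elapsed_hours / stability_hours)

def decayed_score(similarity: float, elapsed_hours: float) -> float:
    """Weight a retrieval similarity score by memory retention."""
    return similarity * retention(elapsed_hours)

# A fresh memory outranks a slightly more similar but week-old one.
fresh = decayed_score(0.80, elapsed_hours=1)
stale = decayed_score(0.85, elapsed_hours=7 * 24)
assert fresh > stale
```

With this kind of weighting, recency and relevance trade off smoothly instead of memories being hard-deleted at a cutoff age.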

👤 User Profile Support

  • 🎭 User Profile: Automatically builds and updates user profiles based on historical conversations and behavioral data, applicable to scenarios such as personalized recommendations and AI companionship, enabling AI systems to better understand and serve each user

🤖 Multi-Agent Support

  • 🔐 Agent Shared/Isolated Memory: Provides independent memory spaces for each agent, supports cross-agent memory sharing and collaboration, and enables flexible permission management through scope control
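
The scope idea can be illustrated with a minimal visibility check in plain Python. The record fields and scope names here are hypothetical; PowerMem's actual scope model and permission semantics may differ:

```python
from dataclasses import dataclass

@dataclass
class MemoryRecord:
    text: str
    owner_agent: str
    scope: str  # "private" (owner only) or "shared" (all agents) -- illustrative values

def visible_to(record: MemoryRecord, agent_id: str) -> bool:
    """An agent sees its own memories plus anything explicitly shared."""
    return record.owner_agent == agent_id or record.scope == "shared"

store = [
    MemoryRecord("draft escalation plan", "support_agent", "private"),
    MemoryRecord("user prefers email contact", "support_agent", "shared"),
]

# sales_agent only sees the shared record
visible = [r.text for r in store if visible_to(r, "sales_agent")]
```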

🎨 Multimodal Support

  • 🖼️ Text, Image, and Audio Memory: Automatically converts images and audio to text descriptions for storage, supports retrieval of multimodal mixed content (text + image + audio), enabling AI systems to understand richer contextual information

💾 Deeply Optimized Data Storage

  • 📦 Sub Stores Support: Implements data partition management through sub stores, supports automatic query routing, significantly improving query performance and resource utilization for ultra-large-scale data
  • 🔗 Hybrid Retrieval: Combines multi-channel recall capabilities of vector retrieval, full-text search, and graph retrieval, builds knowledge graphs through LLM and supports multi-hop graph traversal for precise retrieval of complex memory relationships
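
One common way to merge ranked results from multiple recall channels is Reciprocal Rank Fusion (RRF); whether PowerMem uses RRF specifically is not stated here, but the sketch below shows how multi-channel fusion works in principle:

```python
from collections import defaultdict

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge ranked ID lists with Reciprocal Rank Fusion.

    Each channel contributes 1 / (k + rank) per document; k=60 is the
    value commonly used in the RRF literature.
    """
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.__getitem__, reverse=True)

vector_hits = ["m1", "m2", "m3"]    # vector-similarity channel
fulltext_hits = ["m1", "m4", "m2"]  # full-text channel
graph_hits = ["m2", "m1"]           # graph-traversal channel

fused = rrf_fuse([vector_hits, fulltext_hits, graph_hits])
```

A memory that appears near the top of several channels (like "m1" above) outranks one that scores highly in only a single channel.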

🚀 Quick Start

📥 Installation

pip install powermem

💡 Basic Usage (SDK)

✨ Simplest Way: Create a memory instance automatically from your .env file! See the Configuration Reference

from powermem import Memory, auto_config

# Load configuration (auto-loads from .env)
config = auto_config()
# Create memory instance
memory = Memory(config=config)

# Add memory
memory.add("User likes coffee", user_id="user123")

# Search memories
results = memory.search("user preferences", user_id="user123")
for result in results.get('results', []):
    print(f"- {result.get('memory')}")

For more detailed examples and usage patterns, see the Getting Started Guide.
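
A common pattern is to inject retrieved memories into the prompt before calling an LLM. The snippet below shows only the prompt assembly; the result dict is shaped like the `Memory.search()` output above, and the LLM call itself is omitted:

```python
def build_prompt(question: str, search_results: dict) -> str:
    """Format retrieved memories as context for an LLM call."""
    memories = [r.get("memory", "") for r in search_results.get("results", [])]
    context = "\n".join(f"- {m}" for m in memories) or "- (no relevant memories)"
    return (
        "Relevant user memories:\n"
        f"{context}\n\n"
        f"User question: {question}"
    )

# Example with a result dict shaped like Memory.search() output
results = {"results": [{"memory": "User likes coffee"}]}
prompt = build_prompt("What drink should I suggest?", results)
```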

🌐 HTTP API Server

PowerMem also provides a production-ready HTTP API server that exposes all core memory management capabilities through RESTful APIs. This enables any application that supports HTTP calls to integrate PowerMem's intelligent memory system, regardless of programming language.

Relationship with SDK: The API server uses the same PowerMem SDK under the hood and shares the same configuration (.env file). It provides an HTTP interface to the same memory management features available in the Python SDK, making PowerMem accessible to non-Python applications.

Starting the API Server:

# Method 1: Using CLI command (after pip install)
powermem-server --host 0.0.0.0 --port 8000

# Method 2: Using Docker
docker run -d \
  --name powermem-server \
  -p 8000:8000 \
  -v $(pwd)/.env:/app/.env:ro \
  --env-file .env \
  oceanbase/powermem-server:latest

# Or use Docker Compose (recommended)
docker-compose -f docker/docker-compose.yml up -d

Once started, the API server provides:

  • RESTful API endpoints for all memory operations
  • Interactive API documentation at http://localhost:8000/docs
  • API Key authentication and rate limiting support
  • Same configuration as SDK (via .env file)

For complete API documentation and usage examples, see the API Server Documentation.
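
Calling the server from Python looks roughly like the sketch below. The `/memories` path, JSON field names, and Bearer-token auth header are assumptions for illustration; check the interactive docs at `/docs` for the real endpoint schema:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"
API_KEY = "your-api-key"  # placeholder

def build_add_request(text: str, user_id: str) -> urllib.request.Request:
    """Build a POST request to store one memory.

    Endpoint path and payload fields are illustrative assumptions;
    consult the server's /docs page for the actual API schema.
    """
    payload = json.dumps({"memory": text, "user_id": user_id}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/memories",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = build_add_request("User likes coffee", "user123")
# with urllib.request.urlopen(req) as resp:   # requires a running server
#     print(json.loads(resp.read()))
```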

🔌 MCP Server

PowerMem also provides a Model Context Protocol (MCP) server that enables integration with MCP-compatible clients such as Claude Desktop. The MCP server exposes PowerMem's memory management capabilities through the MCP protocol, allowing AI assistants to access and manage memories seamlessly.

Relationship with SDK: The MCP server uses the same PowerMem SDK and shares the same configuration (.env file). It provides an MCP interface to the same memory management features, making PowerMem accessible to MCP-compatible AI assistants.

Installation:

# Install PowerMem (required)
pip install powermem

# Install uvx (if not already installed)
# On macOS/Linux:
curl -LsSf https://astral.sh/uv/install.sh | sh

# On Windows:
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"

Starting the MCP Server:

# SSE mode (recommended, default port 8000)
uvx powermem-mcp sse

# SSE mode with custom port
uvx powermem-mcp sse 8001

# Stdio mode
uvx powermem-mcp stdio

# Streamable HTTP mode (default port 8000)
uvx powermem-mcp streamable-http

# Streamable HTTP mode with custom port
uvx powermem-mcp streamable-http 8001

Integration with Claude Desktop:

Add the following configuration to your Claude Desktop config file:

{
  "mcpServers": {
    "powermem": {
      "url": "http://localhost:8000/mcp"
    }
  }
}

The MCP server provides tools for memory management including adding, searching, updating, and deleting memories. For complete MCP documentation and usage examples, see the MCP Server Documentation.

🔗 Integrations & Demos

  • 🔗 LangChain Integration: Build medical support chatbot using LangChain + PowerMem + OceanBase, View Example
  • 🔗 LangGraph Integration: Build customer service chatbot using LangGraph + PowerMem + OceanBase, View Example

📚 Documentation

⭐ Highlights Release Notes

Version 0.5.0 (2026.02.06)
  • Unified configuration governance across the SDK and API Server (based on pydantic-settings)
  • Added OceanBase native hybrid search support
  • Enhanced memory query handling and added sorting support for memory list operations
  • Added user profile support for custom native-language output

Version 0.4.0 (2026.01.20)
  • Sparse vector support for enhanced hybrid retrieval, combining dense vector, full-text, and sparse vector search
  • User memory query rewriting: automatically enhances search queries based on user profiles for improved recall
  • Schema upgrade and data migration tools for existing tables

Version 0.3.0 (2026.01.09)
  • Production-ready HTTP API Server with RESTful endpoints for all memory operations
  • Docker support for easy deployment and containerization

Version 0.2.0 (2025.12.16)
  • Advanced user profile management, supporting personalized experiences in AI applications
  • Expanded multimodal support, including text, image, and audio memory

Version 0.1.0 (2025.11.14)
  • Core memory management functionality with persistent storage of memories
  • Hybrid retrieval across vector, full-text, and graph search
  • Intelligent memory extraction based on LLM fact extraction
  • Full-lifecycle memory management based on the Ebbinghaus forgetting curve
  • Multi-agent memory management support
  • Multiple storage backends (OceanBase, PostgreSQL, SQLite)
  • Knowledge graph retrieval through multi-hop graph search

💬 Support


📄 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

