# Supermemory Pipecat SDK

Memory-enhanced conversational AI pipelines with Supermemory and Pipecat.
## Installation

```bash
pip install supermemory-pipecat
```
## Quick Start

```python
import os

from pipecat.pipeline.pipeline import Pipeline
from pipecat.services.openai import OpenAILLMService, OpenAIUserContextAggregator
from supermemory_pipecat import SupermemoryPipecatService

# Create the memory service
memory = SupermemoryPipecatService(
    api_key=os.getenv("SUPERMEMORY_API_KEY"),
    user_id="user-123",  # Required: used as container_tag
    session_id="conversation-456",  # Optional: groups memories by session
)

# Create a pipeline with memory (transport, stt, user_context, and llm
# are created elsewhere; see the Full Example below)
pipeline = Pipeline([
    transport.input(),
    stt,
    user_context,
    memory,  # Automatically retrieves and injects relevant memories
    llm,
    transport.output(),
])
```
## Configuration

### Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `user_id` | `str` | Yes | User identifier, used as `container_tag` for memory scoping |
| `session_id` | `str` | No | Session/conversation ID for grouping memories |
| `api_key` | `str` | No | Supermemory API key (or set the `SUPERMEMORY_API_KEY` env var) |
| `params` | `InputParams` | No | Advanced configuration |
| `base_url` | `str` | No | Custom API endpoint |
### Advanced Configuration

```python
from supermemory_pipecat import SupermemoryPipecatService

memory = SupermemoryPipecatService(
    user_id="user-123",
    session_id="conv-456",
    params=SupermemoryPipecatService.InputParams(
        search_limit=10,  # Max memories to retrieve
        search_threshold=0.1,  # Similarity threshold
        mode="full",  # "profile", "query", or "full"
        system_prompt="Based on previous conversations, I recall:\n\n",
    ),
)
```
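Conceptually, `search_limit` caps how many memories are retrieved and `search_threshold` discards low-similarity matches. A minimal sketch of that filtering logic, purely illustrative (the real filtering is applied by the SDK and the Supermemory API, and the `score` field here is an assumption):

```python
# Illustrative sketch: how search_limit and search_threshold would
# constrain a set of candidate memories. Not the SDK's implementation.

def filter_memories(candidates, search_limit=10, search_threshold=0.1):
    """Keep memories scoring at or above the threshold, best first,
    capped at search_limit entries."""
    kept = [m for m in candidates if m["score"] >= search_threshold]
    kept.sort(key=lambda m: m["score"], reverse=True)
    return kept[:search_limit]

candidates = [
    {"content": "Likes hiking", "score": 0.82},
    {"content": "Allergic to peanuts", "score": 0.45},
    {"content": "Mentioned rain once", "score": 0.05},  # below threshold
]
print(filter_memories(candidates, search_limit=2))
```

Raising `search_threshold` trades recall for precision; lowering `search_limit` keeps the injected context small.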
## Memory Modes

| Mode | Static Profile | Dynamic Profile | Search Results |
|---|---|---|---|
| `"profile"` | Yes | Yes | No |
| `"query"` | No | No | Yes |
| `"full"` | Yes | Yes | Yes |
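The table above amounts to a simple lookup. A sketch of which sources each mode enables (the dict keys are illustrative, not the SDK's internal representation):

```python
# Which memory sources each mode includes (mirrors the table above;
# key names are illustrative, not the SDK's internals).
MODE_SOURCES = {
    "profile": {"static_profile": True, "dynamic_profile": True, "search_results": False},
    "query": {"static_profile": False, "dynamic_profile": False, "search_results": True},
    "full": {"static_profile": True, "dynamic_profile": True, "search_results": True},
}

def sources_for(mode):
    """Look up which memory sources a given mode enables."""
    return MODE_SOURCES[mode]

print(sources_for("query"))
```

Use `"profile"` for cheap, stable context, `"query"` for per-message retrieval only, and `"full"` for both.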
## How It Works

1. **Intercepts context frames** - Listens for `LLMContextFrame` in the pipeline
2. **Tracks conversation** - Maintains a clean conversation history (no injected memories)
3. **Retrieves memories** - Queries the `/v4/profile` API with the user's message
4. **Injects memories** - Formats memories and adds them to the LLM context as a system message
5. **Stores messages** - Sends the last user message to Supermemory (background, non-blocking)
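The inject step above can be sketched in plain Python. This is a simplified illustration (the function name and message shapes are hypothetical, not the SDK's API): memories are prepended to the LLM context as a system message while the tracked history itself stays clean:

```python
def inject_memories(history, memories,
                    system_prompt="Based on previous conversations, I recall:\n\n"):
    """Return a new message list with a memory system message prepended.
    The original history is never mutated, so the tracked conversation
    contains no injected memories."""
    if not memories:
        return list(history)
    memory_msg = {
        "role": "system",
        "content": system_prompt + "\n".join(f"- {m}" for m in memories),
    }
    return [memory_msg] + list(history)

history = [{"role": "user", "content": "What's the weather like today?"}]
augmented = inject_memories(history, ["User lives in Berlin"])
print(len(history), len(augmented))  # history is left unchanged
```

Keeping injection out of the stored history is what prevents retrieved memories from being re-stored as if the user had said them.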
### What Gets Stored

Only the last user message is sent to Supermemory:

```
User: What's the weather like today?
```

Stored as:

```json
{
  "content": "User: What's the weather like today?",
  "container_tags": ["user-123"],
  "custom_id": "conversation-456",
  "metadata": { "platform": "pipecat" }
}
```
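Assembling that document can be sketched as a small helper. This is a hypothetical illustration of the mapping shown above (the SDK builds the payload internally when it stores the message in the background):

```python
def build_storage_payload(user_message, user_id, session_id=None):
    """Build the document stored in Supermemory for the last user message:
    user_id becomes the container tag, session_id (if set) the custom_id."""
    payload = {
        "content": f"User: {user_message}",
        "container_tags": [user_id],
        "metadata": {"platform": "pipecat"},
    }
    if session_id is not None:
        payload["custom_id"] = session_id
    return payload

print(build_storage_payload("What's the weather like today?",
                            user_id="user-123", session_id="conversation-456"))
```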
## Full Example

```python
import asyncio
import os

from fastapi import FastAPI, WebSocket
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.task import PipelineTask
from pipecat.pipeline.runner import PipelineRunner
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.google.gemini_live.llm import GeminiLiveLLMService
from pipecat.transports.websocket.fastapi import (
    FastAPIWebsocketTransport,
    FastAPIWebsocketParams,
)
from supermemory_pipecat import SupermemoryPipecatService

app = FastAPI()

@app.websocket("/chat")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()

    transport = FastAPIWebsocketTransport(
        websocket=websocket,
        params=FastAPIWebsocketParams(audio_in_enabled=True, audio_out_enabled=True),
    )

    # Gemini Live for speech-to-speech
    llm = GeminiLiveLLMService(
        api_key=os.getenv("GEMINI_API_KEY"),
        model="models/gemini-2.5-flash-native-audio-preview-12-2025",
    )

    context = OpenAILLMContext([{"role": "system", "content": "You are a helpful assistant."}])
    context_aggregator = llm.create_context_aggregator(context)

    # Supermemory memory service
    memory = SupermemoryPipecatService(
        user_id="alice",
        session_id="session-123",
    )

    pipeline = Pipeline([
        transport.input(),
        context_aggregator.user(),
        memory,
        llm,
        transport.output(),
        context_aggregator.assistant(),
    ])

    runner = PipelineRunner()
    task = PipelineTask(pipeline)
    await runner.run(task)
```
## License

MIT