
Supermemory Pipecat SDK

Memory-enhanced conversational AI pipelines with Supermemory and Pipecat.

Installation

pip install supermemory-pipecat

Quick Start

import os
from pipecat.pipeline.pipeline import Pipeline
from pipecat.services.openai import OpenAILLMService, OpenAIUserContextAggregator
from supermemory_pipecat import SupermemoryPipecatService

# Create memory service
memory = SupermemoryPipecatService(
    api_key=os.getenv("SUPERMEMORY_API_KEY"),
    user_id="user-123",  # Required: used as container_tag
    session_id="conversation-456",  # Optional: groups memories by session
)

# Create pipeline with memory. The transport, stt, user_context, and llm
# processors are assumed to be set up elsewhere (see the full example below).
pipeline = Pipeline([
    transport.input(),
    stt,
    user_context,
    memory,  # Automatically retrieves and injects relevant memories
    llm,
    transport.output(),
])

Configuration

Parameters

| Parameter  | Type        | Required | Description                                                   |
| ---------- | ----------- | -------- | ------------------------------------------------------------- |
| user_id    | str         | Yes      | User identifier; used as container_tag for memory scoping     |
| session_id | str         | No       | Session/conversation ID for grouping memories                 |
| api_key    | str         | No       | Supermemory API key (or set the SUPERMEMORY_API_KEY env var)  |
| params     | InputParams | No       | Advanced configuration                                        |
| base_url   | str         | No       | Custom API endpoint                                           |
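The api_key precedence described above (explicit argument first, then the environment variable) can be sketched as a small helper. `resolve_api_key` is a hypothetical name for illustration, not part of the SDK:

```python
import os


def resolve_api_key(explicit=None, env_var="SUPERMEMORY_API_KEY"):
    """Return the explicit key if given, otherwise fall back to the env var.

    Illustrative sketch of the documented precedence, not the SDK's code.
    """
    key = explicit or os.getenv(env_var)
    if not key:
        raise ValueError(f"no API key: pass api_key or set {env_var}")
    return key
```

Passing `api_key=` to `SupermemoryPipecatService` follows the same rule: an explicit value always wins over the environment.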

Advanced Configuration

from supermemory_pipecat import SupermemoryPipecatService

memory = SupermemoryPipecatService(
    user_id="user-123",
    session_id="conv-456",
    params=SupermemoryPipecatService.InputParams(
        search_limit=10,           # Max memories to retrieve
        search_threshold=0.1,      # Similarity threshold
        mode="full",               # "profile", "query", or "full"
        system_prompt="Based on previous conversations, I recall:\n\n",
    ),
)

Memory Modes

| Mode      | Static Profile | Dynamic Profile | Search Results |
| --------- | -------------- | --------------- | -------------- |
| "profile" | Yes            | Yes             | No             |
| "query"   | No             | No              | Yes            |
| "full"    | Yes            | Yes             | Yes            |

How It Works

  1. Intercepts context frames - Listens for LLMContextFrame in the pipeline
  2. Tracks conversation - Maintains a clean conversation history (no injected memories)
  3. Retrieves memories - Queries the /v4/profile API with the user's message
  4. Injects memories - Formats the memories and adds them to the LLM context as a system message
  5. Stores messages - Sends the last user message to Supermemory (background, non-blocking)
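The injection step (4) can be sketched as a plain function: retrieved memories are formatted under the configurable system_prompt and prepended to the clean history as a system message. `inject_memories` is a hypothetical helper for illustration; the SDK wires this logic into a Pipecat frame processor:

```python
def inject_memories(messages, memories,
                    system_prompt="Based on previous conversations, I recall:\n\n"):
    """Prepend retrieved memories to the LLM context as a system message.

    `messages` is the clean conversation history (untouched when no
    memories are found); `memories` is a list of memory strings.
    """
    if not memories:
        return list(messages)
    memory_text = system_prompt + "\n".join(f"- {m}" for m in memories)
    return [{"role": "system", "content": memory_text}] + list(messages)


history = [{"role": "user", "content": "What's the weather like today?"}]
augmented = inject_memories(history, ["User lives in Berlin"])
```

Because the service tracks the clean history separately (step 2), each turn re-injects fresh memories instead of accumulating stale ones in the context.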

What Gets Stored

Only the last user message is sent to Supermemory:

User: What's the weather like today?

Stored as:

{
  "content": "User: What's the weather like today?",
  "container_tags": ["user-123"],
  "custom_id": "conversation-456",
  "metadata": { "platform": "pipecat" }
}
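The stored document above can be built with a small function; field names mirror the JSON shown, but treat this as an illustrative sketch rather than the SDK's exact serialization code:

```python
def build_storage_payload(message, user_id, session_id=None):
    """Shape the document sent to Supermemory for the last user message.

    user_id becomes the container_tag; session_id, when present, becomes
    the custom_id that groups memories by conversation.
    """
    payload = {
        "content": f"User: {message}",
        "container_tags": [user_id],
        "metadata": {"platform": "pipecat"},
    }
    if session_id is not None:
        payload["custom_id"] = session_id
    return payload


payload = build_storage_payload(
    "What's the weather like today?", "user-123", "conversation-456"
)
```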

Full Example

import os
from fastapi import FastAPI, WebSocket
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.task import PipelineTask
from pipecat.pipeline.runner import PipelineRunner
from pipecat.services.openai import (
    OpenAILLMService,
    OpenAIUserContextAggregator,
)
from pipecat.transports.network.fastapi_websocket import (
    FastAPIWebsocketTransport,
    FastAPIWebsocketParams,
)
from supermemory_pipecat import SupermemoryPipecatService

app = FastAPI()

@app.websocket("/chat")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()

    transport = FastAPIWebsocketTransport(
        websocket=websocket,
        params=FastAPIWebsocketParams(audio_out_enabled=True),
    )

    user_context = OpenAIUserContextAggregator()

    # Supermemory memory service
    memory = SupermemoryPipecatService(
        user_id="alice",
        session_id="session-123",
    )

    llm = OpenAILLMService(
        api_key=os.getenv("OPENAI_API_KEY"),
        model="gpt-4",
    )

    pipeline = Pipeline([
        transport.input(),
        user_context,
        memory,
        llm,
        transport.output(),
    ])

    runner = PipelineRunner()
    task = PipelineTask(pipeline)
    await runner.run(task)

License

MIT
