# Supermemory Cartesia SDK

Memory-enhanced voice agents with Supermemory and Cartesia Line.
## Installation

```bash
pip install supermemory-cartesia
```
## Quick Start

```python
import os

from line.llm_agent import LlmAgent, LlmConfig
from line.voice_agent_app import VoiceAgentApp
from supermemory_cartesia import SupermemoryCartesiaAgent


async def get_agent(env, call_request):
    # Extract container_tag from call metadata (typically user ID)
    container_tag = call_request.metadata.get("user_id", "default-user")

    # Create base LLM agent
    base_agent = LlmAgent(
        model="gemini/gemini-2.5-flash-preview-09-2025",
        config=LlmConfig(
            system_prompt="You are a helpful voice assistant with memory.",
            introduction="Hello! Great to talk with you again!",
        ),
    )

    # Wrap with Supermemory
    memory_agent = SupermemoryCartesiaAgent(
        agent=base_agent,
        api_key=os.getenv("SUPERMEMORY_API_KEY"),
        container_tag=container_tag,
        custom_id=call_request.call_id,
    )
    return memory_agent


# Create voice agent app
app = VoiceAgentApp(get_agent=get_agent)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```
## Configuration

### Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `agent` | `LlmAgent` | Yes | The Cartesia Line agent to wrap |
| `container_tag` | `str` | Yes | Primary container tag for memory scoping (e.g., user ID) |
| `custom_id` | `str` | Yes | Custom ID for grouping conversation messages into a single document |
| `add_memory` | `Literal` | No | Memory persistence mode: `"always"` (default) or `"never"` |
| `container_tags` | `List[str]` | No | Additional container tags for organization (e.g., `["org", "prod"]`) |
| `api_key` | `str` | No | Supermemory API key (or set the `SUPERMEMORY_API_KEY` env var) |
| `config` | `MemoryConfig` | No | Advanced configuration |
| `base_url` | `str` | No | Custom API endpoint |
### Advanced Configuration

```python
from supermemory_cartesia import SupermemoryCartesiaAgent

memory_agent = SupermemoryCartesiaAgent(
    agent=base_agent,
    container_tag="user-123",
    custom_id="conversation-456",
    add_memory="always",  # "always" (default) or "never"
    container_tags=["org-acme", "prod"],  # Optional: additional tags
    config=SupermemoryCartesiaAgent.MemoryConfig(
        search_limit=10,  # Max memories to retrieve
        search_threshold=0.1,  # Similarity threshold
        mode="full",  # "profile", "query", or "full"
        system_prompt="Based on previous conversations, I recall:\n\n",
    ),
)

# Read-only mode: retrieve memories but don't save new ones
read_only_agent = SupermemoryCartesiaAgent(
    agent=base_agent,
    container_tag="user-123",
    custom_id="conversation-456",
    add_memory="never",  # Only retrieve, don't save
)
```
## Memory Modes

| Mode | Static Profile | Dynamic Profile | Search Results |
|---|---|---|---|
| `"profile"` | Yes | Yes | No |
| `"query"` | No | No | Yes |
| `"full"` | Yes | Yes | Yes |
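The mode table above can be read as a capability lookup. The `MEMORY_MODES` mapping and `includes_search` helper below are purely illustrative (not part of the SDK) and just restate the table in code:

```python
from typing import NamedTuple


class ModeCapabilities(NamedTuple):
    static_profile: bool
    dynamic_profile: bool
    search_results: bool


# One row per mode, mirroring the table above.
MEMORY_MODES = {
    "profile": ModeCapabilities(True, True, False),
    "query": ModeCapabilities(False, False, True),
    "full": ModeCapabilities(True, True, True),
}


def includes_search(mode: str) -> bool:
    """Return True if the given mode retrieves per-query search results."""
    return MEMORY_MODES[mode].search_results
```

Use `"profile"` when you only want standing facts about the user, `"query"` for per-utterance retrieval only, and `"full"` for both.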
## How It Works

1. **Intercepts events** - Listens for `UserTurnEnded` events from Cartesia Line
2. **Retrieves memories** - Queries the Supermemory `/v4/profile` API with the user's message
3. **Enriches context** - Adds memories to the event history as a system message
4. **Stores messages** - Sends the conversation to Supermemory (background, non-blocking)
5. **Passes to agent** - Forwards the enriched event to the wrapped `LlmAgent`
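The steps above can be sketched as a minimal wrapper. This is an assumption-laden illustration of the flow, not the SDK's actual internals: `search_memories`, `store_conversation`, and the event attributes are hypothetical stand-ins.

```python
import asyncio


class MemoryWrapperSketch:
    """Illustrative wrapper showing the intercept/enrich/store/forward flow."""

    def __init__(self, inner_agent, memory_client):
        self.inner = inner_agent
        self.client = memory_client

    async def process(self, env, event):
        if getattr(event, "type", None) == "UserTurnEnded":
            # Steps 1-2: intercept the event and query with the user message.
            memories = await self.client.search_memories(event.content)
            # Step 3: enrich the history with a leading system message.
            if memories:
                event.history.insert(0, {"role": "system", "content": memories})
            # Step 4: persist the turn in the background, without blocking the reply.
            asyncio.create_task(self.client.store_conversation(event.content))
        # Step 5: forward the (possibly enriched) event to the wrapped agent.
        return await self.inner.process(env, event)
```

The key design point is step 4: storage is fire-and-forget, so memory writes never add latency to the voice response.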
## What Gets Stored

User and assistant messages are sent to Supermemory:

```json
{
  "content": "User: What's the weather?\nAssistant: It's sunny today!",
  "container_tags": ["user-123", "org-acme", "prod"],
  "metadata": { "platform": "cartesia" }
}
```
## Architecture

Cartesia Line uses an event-driven architecture:

```text
User Speaks (Audio)
        ↓
[Ink STT] → Automatic speech recognition
        ↓
UserTurnEnded Event {content: "user message", history: [...]}
        ↓
┌──────────────────────────────────────────────┐
│  SUPERMEMORY CARTESIA AGENT (Wrapper)        │
│                                              │
│  process(env, event):                        │
│    1. Intercept UserTurnEnded                │
│    2. Extract user message                   │
│    3. Query Supermemory API                  │
│    4. Enrich event.history with memories     │
│    5. Pass to wrapped LlmAgent               │
│    6. Store conversation (async background)  │
└──────────────────────────────────────────────┘
        ↓
AgentSendText Event {text: "response"}
        ↓
[Sonic TTS] → Ultra-fast speech synthesis
        ↓
Audio Output
```
## Comparison with Pipecat SDK

| Aspect | Pipecat | Cartesia Line |
|---|---|---|
| Integration pattern | Extends `FrameProcessor` | Wrapper around `LlmAgent` |
| Event handling | `process_frame()` method | `process()` method |
| Events | `LLMContextFrame`, `LLMMessagesFrame` | `UserTurnEnded`, `CallStarted` |
| Context object | `LLMContext.get_messages()` | `event.history` |
| Memory injection | Modify via `context.add_message()` | Modify `event.history` |
## Full Example with Tools

```python
import os

from line.llm_agent import LlmAgent, LlmConfig
from line.tools import LoopbackTool
from line.voice_agent_app import VoiceAgentApp
from supermemory_cartesia import SupermemoryCartesiaAgent


# Define custom tools
async def get_weather(location: str) -> str:
    return f"The weather in {location} is sunny, 72°F"


weather_tool = LoopbackTool(
    name="get_weather",
    description="Get current weather for a location",
    function=get_weather,
)


async def get_agent(env, call_request):
    container_tag = call_request.metadata.get("user_id", "default-user")
    org_id = call_request.metadata.get("org_id")

    # Create LLM agent with tools
    base_agent = LlmAgent(
        model="gemini/gemini-2.5-flash-preview-09-2025",
        tools=[weather_tool],
        config=LlmConfig(
            system_prompt="You are a personal assistant with memory and tools.",
            introduction="Hi! How can I help you today?",
        ),
    )

    # Wrap with Supermemory
    memory_agent = SupermemoryCartesiaAgent(
        agent=base_agent,
        api_key=os.getenv("SUPERMEMORY_API_KEY"),
        container_tag=container_tag,
        custom_id=call_request.call_id,
        container_tags=[org_id] if org_id else None,
        config=SupermemoryCartesiaAgent.MemoryConfig(
            mode="full",
            search_limit=15,
            search_threshold=0.15,
        ),
    )
    return memory_agent


app = VoiceAgentApp(get_agent=get_agent)
```
## Development

```bash
# Clone repository
git clone https://github.com/supermemoryai/supermemory
cd supermemory/packages/cartesia-sdk-python

# Install in development mode
pip install -e ".[dev]"

# Run tests
pytest

# Format code
black .
isort .
```
## License

MIT
## File details

Details for the file `supermemory_cartesia-0.1.1.tar.gz`.

### File metadata

- Download URL: supermemory_cartesia-0.1.1.tar.gz
- Upload date:
- Size: 10.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `acca2243e3bbe6e5763343c631b40f5f11055bc4bd0a905f52d3ea572cda2604` |
| MD5 | `a03c094160aaeca28a89ef7006accef7` |
| BLAKE2b-256 | `7dc00d325d56a9a15444747e066c6247969320188a707df5f71c4323e0fb3bf4` |
### Provenance

The following attestation bundles were made for `supermemory_cartesia-0.1.1.tar.gz`:

- Publisher: publish-cartesia-sdk-python.yml on supermemoryai/supermemory
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: supermemory_cartesia-0.1.1.tar.gz
- Subject digest: acca2243e3bbe6e5763343c631b40f5f11055bc4bd0a905f52d3ea572cda2604
- Sigstore transparency entry: 1324025468
- Sigstore integration time:
- Permalink: supermemoryai/supermemory@9b32d5c95198583a9b806b1d0cca0f3fff837a60
- Branch / Tag: refs/heads/main
- Owner: https://github.com/supermemoryai
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-cartesia-sdk-python.yml@9b32d5c95198583a9b806b1d0cca0f3fff837a60
- Trigger Event: push
## File details

Details for the file `supermemory_cartesia-0.1.1-py3-none-any.whl`.

### File metadata

- Download URL: supermemory_cartesia-0.1.1-py3-none-any.whl
- Upload date:
- Size: 11.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `5672f85cedfff8da59c03d653b1fef79f5a59528774ca72a117d0332c80e85c4` |
| MD5 | `1bbf81bf4c444ee35c4e4da158e82350` |
| BLAKE2b-256 | `8d9cea1e14bfc6579535480513f6a63e0a2ef8dcaf67ccb53d1b57a600e7a815` |
### Provenance

The following attestation bundles were made for `supermemory_cartesia-0.1.1-py3-none-any.whl`:

- Publisher: publish-cartesia-sdk-python.yml on supermemoryai/supermemory
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: supermemory_cartesia-0.1.1-py3-none-any.whl
- Subject digest: 5672f85cedfff8da59c03d653b1fef79f5a59528774ca72a117d0332c80e85c4
- Sigstore transparency entry: 1324025507
- Sigstore integration time:
- Permalink: supermemoryai/supermemory@9b32d5c95198583a9b806b1d0cca0f3fff837a60
- Branch / Tag: refs/heads/main
- Owner: https://github.com/supermemoryai
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-cartesia-sdk-python.yml@9b32d5c95198583a9b806b1d0cca0f3fff837a60
- Trigger Event: push