Persistent memory infrastructure for AI applications
Velixar Python SDK
Persistent memory for AI assistants and agents. Give any LLM-powered application long-term recall across sessions.
Velixar is an open memory layer — it works with any AI assistant, agent framework, or LLM pipeline. Store facts, preferences, and context that persist beyond a single conversation.
Installation
```shell
pip install velixar

# With LangChain integration
pip install "velixar[langchain]"

# With LlamaIndex integration
pip install "velixar[llamaindex]"

# All integrations
pip install "velixar[all]"
```
Quick Start
```python
from velixar import Velixar

v = Velixar(api_key="vlx_your_key")  # Or set VELIXAR_API_KEY env var

# Store a memory
memory_id = v.store(
    content="User prefers dark mode and metric units",
    tier=0,  # 0=pinned, 1=session, 2=semantic (default), 3=org
    user_id="user_123",
    tags=["preferences"],
)

# Search memories semantically
results = v.search("user preferences", limit=5)
for memory in results.memories:
    print(f"[{memory.score:.2f}] {memory.content}")

# Get context for LLM prompts
context = v.get_context("What are the user's preferences?", max_tokens=2000)
```
Async Support
```python
import asyncio

from velixar import AsyncVelixar

async def main():
    async with AsyncVelixar(api_key="vlx_...") as v:
        await v.store("User's favorite color is blue", user_id="user_123")
        results = await v.search("favorite color")

asyncio.run(main())
```
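Because `AsyncVelixar` methods are coroutines, independent writes can be issued concurrently with `asyncio.gather` instead of awaiting them one at a time. A minimal sketch of the pattern, using a stub coroutine in place of a live client:

```python
import asyncio

async def store_all(store, facts):
    """Issue one store() coroutine per fact and await them all concurrently."""
    return await asyncio.gather(*(store(fact) for fact in facts))

# Stub standing in for AsyncVelixar.store; a real client would return memory IDs.
async def fake_store(content):
    await asyncio.sleep(0)  # yield control, as a real network call would
    return f"id::{content}"

ids = asyncio.run(store_all(fake_store, ["fact one", "fact two", "fact three"]))
print(ids)  # → ['id::fact one', 'id::fact two', 'id::fact three']
```

With a real client, `await store_all(v.store, facts)` inside the `async with` block follows the same shape, assuming the API permits concurrent requests from one client.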
Memory Tiers
| Tier | Name | Use Case |
|---|---|---|
| 0 | Pinned | Critical facts and user preferences; never expires |
| 1 | Session | Current conversation context |
| 2 | Semantic | Long-term memories (default) |
| 3 | Organization | Shared team knowledge (Hivemind+) |
```python
from velixar import MemoryTier

v.store("User is allergic to peanuts", tier=MemoryTier.PINNED)
v.store("Currently discussing project X", tier=MemoryTier.SESSION)
```
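When tiers are assigned programmatically, it can help to keep the mapping from the table above in one place rather than scattering magic numbers through the codebase. A small illustrative sketch; the category names here are examples, not part of the SDK:

```python
# Tier numbers from the table above: 0=pinned, 1=session, 2=semantic, 3=org
TIER_FOR_CATEGORY = {
    "allergy": 0,       # critical fact: pin it
    "preference": 0,    # pin user preferences
    "conversation": 1,  # current-session context
    "team": 3,          # shared org knowledge (Hivemind+)
}

def choose_tier(category: str) -> int:
    # Fall back to tier 2 (semantic), the SDK's documented default
    return TIER_FOR_CATEGORY.get(category, 2)

print(choose_tier("allergy"), choose_tier("misc"))  # → 0 2
```

The result can then be passed straight through: `v.store(content, tier=choose_tier(category))`.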
Cognitive Features by Plan
| Feature | Free | Cortex ($29) | Synapse ($75) | Hivemind ($25/seat) |
|---|---|---|---|---|
| Store & search | ✓ | ✓ | ✓ | ✓ |
| Neural ensembles | — | ✓ | ✓ | ✓ |
| Temporal chains | — | ✓ | ✓ | ✓ |
| Consolidation | — | ✓ | ✓ | ✓ |
| Identity modeling | — | — | ✓ | ✓ |
| Org memory (tier 3) | — | — | — | ✓ |
The free tier stores and searches memories. Paid tiers activate cognitive features automatically; no code changes are needed. See the pricing page for details.
Use With Any AI Assistant
Velixar is assistant-agnostic. Plug it into OpenAI, Anthropic, LangChain, LlamaIndex, custom agents, or any LLM pipeline:
```python
from openai import OpenAI

client = OpenAI()

# Inject memories as context before calling your LLM
results = v.search(user_message, limit=5)
context = "\n".join(m.content for m in results.memories)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": f"Relevant memories:\n{context}"},
        {"role": "user", "content": user_message},
    ],
)

# Store important facts after the conversation
v.store("User prefers concise answers", user_id="user_123")
```
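Joining every retrieved memory into the system prompt can blow past a prompt budget. One option is a helper that adds memories in order (search already returns the highest-scoring first) until a character budget is exhausted. This helper is illustrative, not part of the SDK:

```python
def build_memory_context(contents, max_chars=2000):
    """Join memory strings with newlines, stopping before the budget is exceeded."""
    parts, used = [], 0
    for text in contents:
        cost = len(text) + (1 if parts else 0)  # +1 for the joining newline
        if used + cost > max_chars:
            break
        parts.append(text)
        used += cost
    return "\n".join(parts)

context = build_memory_context(["prefers dark mode", "uses metric units"], max_chars=40)
print(context)
```

In the snippet above, `build_memory_context(m.content for m in results.memories)` would replace the plain `"\n".join(...)`.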
LangChain Integration
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from velixar.integrations.langchain import VelixarChatMessageHistory

def get_session_history(session_id: str):
    return VelixarChatMessageHistory(session_id=session_id, api_key="vlx_...")

chain = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}"),
]) | ChatOpenAI()

with_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="input",
    history_messages_key="history",
)

# Memory persists across sessions, restarts, and deployments
config = {"configurable": {"session_id": "user_123"}}
with_history.invoke({"input": "I prefer Python over JavaScript"}, config=config)
with_history.invoke({"input": "What language do I prefer?"}, config=config)
```
LlamaIndex Integration
```python
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI
from velixar.integrations.llamaindex import VelixarMemory

memory = VelixarMemory(api_key="vlx_...", user_id="user_123")
agent = ReActAgent.from_tools(tools=[...], llm=OpenAI(), memory=memory)
```
Batch Operations
```python
result = v.store_many([
    {"content": "Fact 1", "tier": 0},
    {"content": "Fact 2", "tier": 2, "tags": ["important"]},
    {"content": "Fact 3", "user_id": "user_456"},
])
```
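If the batch endpoint has a per-request size limit (the cap of 100 below is a hypothetical example; check the API docs for the real limit), a long list of memories can be split into chunks before calling `store_many`:

```python
def chunked(items, size):
    """Yield successive slices of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

memories = [{"content": f"Fact {i}"} for i in range(250)]

# Hypothetical per-request cap of 100 memories
batches = list(chunked(memories, 100))
print([len(b) for b in batches])  # → [100, 100, 50]
```

Each batch would then be passed to `v.store_many(batch)` in turn.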
Error Handling
```python
from velixar import VelixarError, RateLimitError, AuthenticationError

try:
    v.store("test")
except AuthenticationError:
    print("Invalid API key")
except RateLimitError as e:
    print(f"Rate limited. Retry after {e.retry_after}s")
except VelixarError as e:
    print(f"Error: {e.message}")
```
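Since `RateLimitError` exposes `retry_after`, a retry loop can honor the server's hint and fall back to exponential backoff when no hint is present. A generic sketch; the delay schedule is an assumption, not documented SDK behavior:

```python
import time

def backoff_delays(retries, base=1.0, cap=30.0):
    """Exponential delays: base, 2*base, 4*base, ..., capped at `cap` seconds."""
    return [min(base * (2 ** i), cap) for i in range(retries)]

def with_retries(fn, retryable_exc, retries=3, sleep=time.sleep):
    """Call fn(); on a retryable error, wait (honoring e.retry_after if set) and retry."""
    for delay in backoff_delays(retries):
        try:
            return fn()
        except retryable_exc as e:
            sleep(getattr(e, "retry_after", None) or delay)
    return fn()  # final attempt; let any exception propagate
```

Usage with the SDK would look like `with_retries(lambda: v.store("test"), RateLimitError)`.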
Configuration
```python
v = Velixar(
    api_key="vlx_...",       # Or VELIXAR_API_KEY env var
    base_url="https://...",  # Custom endpoint (optional)
    timeout=30.0,            # Request timeout in seconds
    max_retries=3,           # Retry attempts for failures
)
```
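In deployments, settings are typically read from the environment rather than hard-coded. A small sketch of resolving them with defaults; `VELIXAR_API_KEY` is the documented variable, while the other variable names here are illustrative:

```python
import os

def load_config(env=os.environ):
    return {
        "api_key": env.get("VELIXAR_API_KEY"),                    # documented variable
        "timeout": float(env.get("VELIXAR_TIMEOUT", "30.0")),     # illustrative name
        "max_retries": int(env.get("VELIXAR_MAX_RETRIES", "3")),  # illustrative name
    }

cfg = load_config({"VELIXAR_API_KEY": "vlx_test", "VELIXAR_TIMEOUT": "10"})
print(cfg)  # → {'api_key': 'vlx_test', 'timeout': 10.0, 'max_retries': 3}
```

The resolved values can then be unpacked into the constructor as `Velixar(**cfg)`; note the client also reads `VELIXAR_API_KEY` on its own when no key is passed.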
Get an API Key
Sign up at velixarai.com and generate a key under Settings → API Keys.
Related
- velixar (JavaScript SDK) — TypeScript/JavaScript client
- velixar-mcp-server — MCP server for any MCP-compatible AI assistant
License
MIT
File details
Details for the file velixar-1.0.0.tar.gz.
File metadata
- Download URL: velixar-1.0.0.tar.gz
- Upload date:
- Size: 15.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | b09d8410e57bbdc8429c77415277bf581902dd4da84ef7bc71503bb4573d644a |
| MD5 | f91babf0a792ef21ef9e582d4a381da5 |
| BLAKE2b-256 | 72d45c3471bf8c7399e56fcd1c83e9c85edc2b673625c554bf25334097e111e8 |
File details
Details for the file velixar-1.0.0-py3-none-any.whl.
File metadata
- Download URL: velixar-1.0.0-py3-none-any.whl
- Upload date:
- Size: 14.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 95cbefedfd41a71f0365a9c3d8f2f674a10647cfd72f7b841239d66ff3f533b7 |
| MD5 | 27d745e37be4d33e1121e446da106dec |
| BLAKE2b-256 | bceda1d6c14d203800d47f5777aafb4ab76c0f293ba619eca01c2242e1413d65 |