Velixar Python SDK
Persistent memory infrastructure for AI applications.
Installation
pip install velixar
# With LangChain integration
pip install velixar[langchain]
# With LlamaIndex integration
pip install velixar[llamaindex]
# All integrations
pip install velixar[all]
Quick Start
from velixar import Velixar

# Initialize client
v = Velixar(api_key="vlx_your_key")  # Or set VELIXAR_API_KEY env var

# Store a memory
memory_id = v.store(
    content="User prefers dark mode and metric units",
    tier=0,  # 0=pinned (critical), 2=semantic (default)
    user_id="user_123",
    tags=["preferences", "settings"],
)

# Search memories
results = v.search("user preferences", limit=5)
for memory in results.memories:
    print(f"[{memory.score:.2f}] {memory.content}")

# Get context for LLM prompts
context = v.get_context("What are the user's preferences?", max_tokens=2000)
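The retrieved context is meant to be injected into your model prompt. Below is a minimal sketch of that step, assuming get_context returns a plain string; the model name and prompt wording are illustrative, and the OpenAI client usage is standard.

from openai import OpenAI

openai_client = OpenAI()
context = v.get_context("What are the user's preferences?", max_tokens=2000)

# Prepend the retrieved memories to the system prompt (assumes `context` is a string)
response = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": f"Relevant user memories:\n{context}"},
        {"role": "user", "content": "What are the user's preferences?"},
    ],
)
print(response.choices[0].message.content)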
Async Support
from velixar import AsyncVelixar

async with AsyncVelixar(api_key="vlx_...") as v:
    await v.store("User's favorite color is blue", user_id="user_123")
    results = await v.search("favorite color")
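Because the async client mirrors the synchronous methods, several memories can be written concurrently. A minimal sketch, assuming AsyncVelixar.store accepts the same arguments as the synchronous store shown above:

import asyncio
from velixar import AsyncVelixar

async def store_preferences() -> None:
    async with AsyncVelixar(api_key="vlx_...") as v:
        # Issue several store calls concurrently instead of awaiting them one by one
        await asyncio.gather(
            v.store("User prefers dark mode", user_id="user_123"),
            v.store("User prefers metric units", user_id="user_123"),
            v.store("User's favorite color is blue", user_id="user_123"),
        )

asyncio.run(store_preferences())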
Memory Tiers
| Tier | Name | Use Case |
|---|---|---|
| 0 | Pinned | Critical facts and user preferences that never expire |
| 1 | Session | Current conversation context |
| 2 | Semantic | Long-term memories (default) |
| 3 | Organization | Shared team knowledge |
from velixar import MemoryTier
# Store critical preference
v.store("User is allergic to peanuts", tier=MemoryTier.PINNED)
# Store session context
v.store("Currently discussing project X", tier=MemoryTier.SESSION)
LangChain Integration
from langchain.chains import ConversationChain
from langchain_openai import ChatOpenAI
from velixar.integrations.langchain import VelixarMemory

# Create memory backed by Velixar
memory = VelixarMemory(
    api_key="vlx_...",
    user_id="user_123",
)

# Use with any LangChain chain
chain = ConversationChain(
    llm=ChatOpenAI(),
    memory=memory,
)

response = chain.invoke({"input": "Remember that I prefer Python over JavaScript"})
response = chain.invoke({"input": "What programming language do I prefer?"})
LlamaIndex Integration
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI
from velixar.integrations.llamaindex import VelixarMemory

memory = VelixarMemory(api_key="vlx_...", user_id="user_123")

agent = ReActAgent.from_tools(
    tools=[...],
    llm=OpenAI(),
    memory=memory,
)
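Once constructed, the agent is used like any other LlamaIndex ReActAgent; replies draw on the memories stored for that user. A short usage sketch (the prompt is illustrative):

# Chat with the agent; stored memories are surfaced through the attached VelixarMemory
response = agent.chat("What do you remember about my preferences?")
print(response)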
OpenAI Function Calling
from openai import OpenAI
from velixar.integrations.openai import VelixarAssistant

# Simple wrapper with automatic memory
assistant = VelixarAssistant(
    openai_client=OpenAI(),
    velixar_api_key="vlx_...",
    user_id="user_123",
)

assistant.chat("Remember that my birthday is March 15th")
assistant.chat("When is my birthday?")  # Uses memory automatically
Batch Operations
# Store multiple memories at once
result = v.store_many([
    {"content": "Fact 1", "tier": 0},
    {"content": "Fact 2", "tier": 2, "tags": ["important"]},
    {"content": "Fact 3", "user_id": "user_456"},
])
print(f"Stored {result.stored} memories")
Error Handling
from velixar import Velixar, VelixarError, RateLimitError, AuthenticationError

try:
    v = Velixar(api_key="invalid")
    v.store("test")
except AuthenticationError:
    print("Invalid API key")
except RateLimitError as e:
    print(f"Rate limited. Retry after {e.retry_after}s")
except VelixarError as e:
    print(f"Error: {e.message}")
Configuration
v = Velixar(
    api_key="vlx_...",       # Or VELIXAR_API_KEY env var
    base_url="https://...",  # Custom endpoint (optional)
    timeout=30.0,            # Request timeout in seconds
    max_retries=3,           # Retry attempts for failures
)
Environment Variables
| Variable | Description |
|---|---|
| VELIXAR_API_KEY | Your API key |
| VELIXAR_BASE_URL | Custom API endpoint |
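When these variables are set, the client can be constructed without explicit arguments, per the comments in the examples above. A short sketch that sets the variable in-process purely for illustration; in practice it would come from the shell or deployment environment.

import os

from velixar import Velixar

# Normally set in the shell or deployment environment; set here only for illustration
os.environ["VELIXAR_API_KEY"] = "vlx_..."

v = Velixar()  # Picks up VELIXAR_API_KEY from the environment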
License
MIT License - see LICENSE for details.