langchain-redis
An integration package connecting Redis and LangChain.
This package contains the LangChain integration with Redis, providing powerful tools for vector storage, semantic caching, and chat history management.
Installation
pip install -U langchain-redis
This will install the package along with its dependencies, including redis, redisvl, and ulid.
Configuration
To use this package, you need to have a Redis instance running. You can configure the connection by setting the following environment variable:
export REDIS_URL="redis://username:password@localhost:6379"
Alternatively, you can pass the Redis URL directly when initializing the components or use the RedisConfig class for more detailed configuration.
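For example, here is a minimal sketch of resolving the connection URL from the environment and passing it to RedisConfig (the fallback URL is an assumption for local development):
import os
from langchain_redis import RedisConfig

# Fall back to a local instance when REDIS_URL is not set (assumed default).
redis_url = os.environ.get("REDIS_URL", "redis://localhost:6379")
config = RedisConfig(index_name="my_index", redis_url=redis_url)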
Features
1. Vector Store
The RedisVectorStore class provides a vector database implementation using Redis.
Usage
from langchain_redis import RedisVectorStore, RedisConfig
from langchain_core.embeddings import Embeddings
from redisvl.query.filter import Tag  # needed for the metadata filter example below
embeddings = Embeddings()  # Placeholder: substitute a concrete embedding model of your choice
config = RedisConfig(
index_name="my_vectors",
redis_url="redis://localhost:6379",
distance_metric="COSINE" # Options: COSINE, L2, IP
)
vector_store = RedisVectorStore(embeddings, config=config)
# Adding documents
texts = ["Document 1 content", "Document 2 content"]
metadatas = [{"source": "file1"}, {"source": "file2"}]
vector_store.add_texts(texts, metadatas=metadatas)
# Adding documents with custom keys
custom_keys = ["doc1", "doc2"]
vector_store.add_texts(texts, metadatas=metadatas, keys=custom_keys)
# Similarity search
query = "Sample query"
docs = vector_store.similarity_search(query, k=2)
# Similarity search with score
docs_and_scores = vector_store.similarity_search_with_score(query, k=2)
# Similarity search with filtering
filter_expr = Tag("category") == "science"
filtered_docs = vector_store.similarity_search(query, k=2, filter=filter_expr)
# Maximum marginal relevance search
docs = vector_store.max_marginal_relevance_search(query, k=2, fetch_k=10)
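Beyond direct searches, the store exposes the standard LangChain retriever interface, so it can be dropped into chains; a minimal sketch, assuming the vector_store built above:
# Wrap the store as a retriever (standard VectorStore API).
retriever = vector_store.as_retriever(search_kwargs={"k": 2})
relevant_docs = retriever.invoke("Sample query")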
Features
- Efficient vector storage and retrieval
- Support for metadata filtering
- Multiple distance metrics: Cosine similarity, L2, and Inner Product
- Maximum marginal relevance search
- Custom key support for document indexing
2. Cache
The RedisCache and RedisSemanticCache classes provide caching mechanisms for LLM calls.
Usage
from langchain_redis import RedisCache, RedisSemanticCache
from langchain_core.language_models import LLM
from langchain_core.embeddings import Embeddings
from langchain_core.outputs import Generation  # needed for the async cache example below
# Standard cache
cache = RedisCache(redis_url="redis://localhost:6379", ttl=3600)
# Semantic cache
embeddings = Embeddings() # Your preferred embedding model
semantic_cache = RedisSemanticCache(
redis_url="redis://localhost:6379",
embedding=embeddings,
distance_threshold=0.1
)
# Using the cache with an LLM (substitute a concrete LLM implementation)
llm = LLM(cache=cache)  # or LLM(cache=semantic_cache)
# Async cache operations
await cache.aupdate("prompt", "llm_string", [Generation(text="cached_response")])
cached_result = await cache.alookup("prompt", "llm_string")
Features
- Efficient caching of LLM responses
- TTL support for automatic cache expiration
- Semantic caching for similarity-based retrieval
- Asynchronous cache operations
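Instead of attaching a cache to individual models, you can also register it process-wide with LangChain's set_llm_cache helper; a minimal sketch, assuming a local Redis instance:
from langchain_core.globals import set_llm_cache
from langchain_redis import RedisCache

# Every cache-aware LLM call in this process will now consult Redis first.
set_llm_cache(RedisCache(redis_url="redis://localhost:6379", ttl=3600))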
3. Chat History
The RedisChatMessageHistory class provides Redis-based storage for chat message history.
Usage
from langchain_redis import RedisChatMessageHistory
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
history = RedisChatMessageHistory(
session_id="user_123",
redis_url="redis://localhost:6379",
ttl=3600 # Optional: set TTL for message expiration
)
# Adding messages
history.add_user_message("Hello, AI!")
history.add_ai_message("Hello, human! How can I assist you today?")
history.add_message(SystemMessage(content="This is a system message"))
# Retrieving messages
messages = history.messages
# Searching messages
results = history.search_messages("assist")
# Get the number of messages
message_count = len(history)
# Clear history
history.clear()
Features
- Persistent storage of chat messages
- Support for different message types (Human, AI, System)
- Message searching capabilities
- Automatic expiration with TTL support
- Message count functionality
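RedisChatMessageHistory can also back LangChain's RunnableWithMessageHistory so that each session's messages persist in Redis between calls. A minimal sketch (the RunnableLambda stands in for a real prompt | model chain):
from langchain_core.runnables import RunnableLambda
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_redis import RedisChatMessageHistory

chain = RunnableLambda(lambda x: f"You said: {x['input']}")  # stand-in for a real chain

def get_history(session_id: str) -> RedisChatMessageHistory:
    # One Redis-backed history per conversation, keyed by session_id.
    return RedisChatMessageHistory(session_id=session_id, redis_url="redis://localhost:6379")

chain_with_history = RunnableWithMessageHistory(
    chain,
    get_history,
    input_messages_key="input",
    history_messages_key="history",
)

chain_with_history.invoke(
    {"input": "Hello!"},
    config={"configurable": {"session_id": "user_123"}},
)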
Advanced Configuration
The RedisConfig class allows for detailed configuration of the Redis integration:
from langchain_redis import RedisConfig
config = RedisConfig(
index_name="my_index",
redis_url="redis://localhost:6379",
distance_metric="COSINE",
key_prefix="my_prefix",
vector_datatype="FLOAT32",
storage_type="hash",
metadata_schema=[
{"name": "category", "type": "tag"},
{"name": "price", "type": "numeric"}
]
)
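With a schema like this, the tag and numeric fields can be queried with typed filter expressions; a hedged sketch using redisvl's filter classes against a store built from this config:
from redisvl.query.filter import Num, Tag

# Combine a tag filter and a numeric filter (fields from the metadata_schema above).
filter_expr = (Tag("category") == "science") & (Num("price") < 50)
docs = vector_store.similarity_search("affordable science books", k=5, filter=filter_expr)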
Refer to the inline documentation for detailed information on these configuration options.
Error Handling and Logging
The package uses Python's standard logging module. You can configure logging to get more information about the package's operations:
import logging
logging.basicConfig(level=logging.INFO)
Error handling is done through custom exceptions. Make sure to handle these exceptions in your application code.
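The specific exception classes are not listed here, so as a minimal, hedged sketch you can at least guard against connection-level failures raised by the underlying redis client:
import logging
import redis

try:
    vector_store.add_texts(["Some content"])
except redis.exceptions.ConnectionError as exc:
    # The Redis instance is unreachable; log and retry or surface as appropriate.
    logging.error("Could not reach Redis: %s", exc)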
Performance Considerations
- For large datasets, consider using batched operations when adding documents to the vector store (see the sketch after this list).
- Adjust the k and fetch_k parameters in similarity searches to balance accuracy and performance.
- Use appropriate indexing algorithms (FLAT, HNSW) based on your dataset size and query requirements.
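A minimal sketch of batched indexing (the batch size is an illustrative assumption; tune it for your document sizes and Redis memory):
batch_size = 500  # illustrative value, not a package default

for start in range(0, len(texts), batch_size):
    # Index one slice at a time instead of sending everything in a single call.
    vector_store.add_texts(
        texts[start:start + batch_size],
        metadatas=metadatas[start:start + batch_size],
    )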
Examples
For more detailed examples and use cases, please refer to the docs/ directory in this repository.
Contributing / Development
The library is rooted at libs/redis; for all the commands below, cd into libs/redis:
Unit Tests
To install dependencies for unit tests:
poetry install --with test
To run unit tests:
make test
To run a specific test:
TEST_FILE=tests/unit_tests/test_imports.py make test
Integration Tests
You would need an OpenAI API Key to run the integration tests:
export OPENAI_API_KEY=sk-J3nnYJ3nnYWh0Can1Turnt0Ug1VeMe50mth1n1cAnH0ld0n2
To install dependencies for integration tests:
poetry install --with test,test_integration
To run integration tests:
make integration_tests
Local Development
Install langchain-redis development requirements (for running langchain, running examples, linting, formatting, tests, and coverage):
poetry install --with lint,typing,test,test_integration
Then verify dependency installation:
make lint
License
This project is licensed under the MIT License (see the LICENSE file).