
An integration package connecting Redis and LangChain for AI working memory


langchain-redis

This package contains the LangChain integration with Redis, providing powerful tools for vector storage, semantic caching, and chat history management.

Installation

pip install -U langchain-redis

This will install the package along with its dependencies, including redis, redisvl, and ulid.

Configuration

To use this package, you need to have a Redis instance running. You can configure the connection by setting the following environment variable:

export REDIS_URL="redis://username:password@localhost:6379"

Alternatively, you can pass the Redis URL directly when initializing the components or use the RedisConfig class for more detailed configuration.
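A minimal resolution pattern for the environment-variable approach (the localhost fallback is an assumption for local development, not a package default):

```python
import os

# Read the connection URL from the environment; fall back to a local
# default for development (assumes a local Redis on the default port).
redis_url = os.environ.get("REDIS_URL", "redis://localhost:6379")
```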

Redis Connection Options

This package supports various Redis deployment modes through different connection URL schemes:

Standard Redis Connection

# Standard Redis
redis_url = "redis://localhost:6379"

# Redis with authentication
redis_url = "redis://username:password@localhost:6379"

# Redis SSL/TLS
redis_url = "rediss://localhost:6380"

Redis Sentinel Connection

Redis Sentinel provides high availability for Redis. You can connect to a Sentinel-managed Redis deployment using the redis+sentinel:// URL scheme:

# Single Sentinel node
redis_url = "redis+sentinel://sentinel-host:26379/mymaster"

# Multiple Sentinel nodes (recommended for high availability)
redis_url = "redis+sentinel://sentinel1:26379,sentinel2:26379,sentinel3:26379/mymaster"

# Sentinel with authentication
redis_url = "redis+sentinel://username:password@sentinel1:26379,sentinel2:26379/mymaster"

The Sentinel URL format is: redis+sentinel://[username:password@]host1:port1[,host2:port2,...]/service_name

Where:

  • host:port - One or more Sentinel node addresses
  • service_name - The name of the Redis master service (e.g., "mymaster")
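To make the format concrete, here is a small standalone sketch that splits such a URL into its parts. This is an illustration of the scheme only, not the library's actual parser; the default Sentinel port 26379 is assumed when a port is omitted:

```python
from urllib.parse import urlparse

def parse_sentinel_url(url):
    """Split a redis+sentinel:// URL into (auth, sentinel hosts, service name)."""
    parsed = urlparse(url)
    if parsed.scheme != "redis+sentinel":
        raise ValueError(f"unexpected scheme: {parsed.scheme}")
    netloc = parsed.netloc
    auth = (None, None)
    if "@" in netloc:  # optional username:password prefix
        creds, netloc = netloc.rsplit("@", 1)
        user, _, password = creds.partition(":")
        auth = (user or None, password or None)
    hosts = []
    for hostport in netloc.split(","):  # one or more Sentinel nodes
        host, _, port = hostport.partition(":")
        hosts.append((host, int(port) if port else 26379))
    service_name = parsed.path.lstrip("/")  # e.g. "mymaster"
    return auth, hosts, service_name
```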

Example using Sentinel with RedisVectorStore:

from langchain_redis import RedisVectorStore, RedisConfig
from langchain_openai import OpenAIEmbeddings

config = RedisConfig(
    redis_url="redis+sentinel://sentinel1:26379,sentinel2:26379/mymaster",
    index_name="my_index"
)

vector_store = RedisVectorStore(
    embeddings=OpenAIEmbeddings(),
    config=config
)

Example using Sentinel with RedisCache:

from langchain_redis import RedisCache

cache = RedisCache(
    redis_url="redis+sentinel://sentinel1:26379,sentinel2:26379/mymaster",
    ttl=3600
)

Example using Sentinel with RedisChatMessageHistory:

from langchain_redis import RedisChatMessageHistory

history = RedisChatMessageHistory(
    session_id="user_123",
    redis_url="redis+sentinel://sentinel1:26379,sentinel2:26379/mymaster"
)

Features

1. Vector Store

The RedisVectorStore class provides a vector database implementation using Redis.

Usage

from langchain_redis import RedisVectorStore, RedisConfig
from langchain_openai import OpenAIEmbeddings  # or any other Embeddings implementation

embeddings = OpenAIEmbeddings()  # your preferred embedding model

config = RedisConfig(
    index_name="my_vectors",
    redis_url="redis://localhost:6379",
    distance_metric="COSINE"  # Options: COSINE, L2, IP
)

vector_store = RedisVectorStore(embeddings, config=config)

# Adding documents
texts = ["Document 1 content", "Document 2 content"]
metadatas = [{"source": "file1"}, {"source": "file2"}]
vector_store.add_texts(texts, metadatas=metadatas)

# Adding documents with custom keys
custom_keys = ["doc1", "doc2"]
vector_store.add_texts(texts, metadatas=metadatas, keys=custom_keys)

# Similarity search
query = "Sample query"
docs = vector_store.similarity_search(query, k=2)

# Similarity search with score
docs_and_scores = vector_store.similarity_search_with_score(query, k=2)

# Similarity search with filtering
from redisvl.query.filter import Tag

filter_expr = Tag("category") == "science"
filtered_docs = vector_store.similarity_search(query, k=2, filter=filter_expr)

# Maximum marginal relevance search
docs = vector_store.max_marginal_relevance_search(query, k=2, fetch_k=10)

Features

  • Efficient vector storage and retrieval
  • Support for metadata filtering
  • Multiple distance metrics: Cosine similarity, L2, and Inner Product
  • Maximum marginal relevance search
  • Custom key support for document indexing
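The three distance_metric options (COSINE, L2, IP) correspond to standard vector measures. A quick standalone illustration in plain Python, no Redis required:

```python
import math

def cosine_similarity(a, b):
    """COSINE: angle-based similarity, 1.0 for parallel vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def l2_distance(a, b):
    """L2: Euclidean distance, 0.0 for identical vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def inner_product(a, b):
    """IP: raw dot product, useful when embeddings are pre-normalized."""
    return sum(x * y for x, y in zip(a, b))
```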

2. Cache

The RedisCache, RedisSemanticCache, and LangCacheSemanticCache classes provide caching mechanisms for LLM calls.

Usage

from langchain_redis import RedisCache, RedisSemanticCache, LangCacheSemanticCache
from langchain_openai import OpenAIEmbeddings  # or any other Embeddings implementation

# Standard cache
cache = RedisCache(redis_url="redis://localhost:6379", ttl=3600)

# Semantic cache
embeddings = OpenAIEmbeddings()  # your preferred embedding model
semantic_cache = RedisSemanticCache(
    redis_url="redis://localhost:6379",
    embedding=embeddings,
    distance_threshold=0.1
)

# LangCache semantic cache - the managed service handles embeddings for you
langchain_cache = LangCacheSemanticCache(
    cache_id="your-cache-id",
    api_key="your-api-key",
    distance_threshold=0.1
)

# Registering a cache for LLM calls
from langchain_core.globals import set_llm_cache

set_llm_cache(cache)  # or set_llm_cache(semantic_cache) / set_llm_cache(langchain_cache)

# Async cache operations
from langchain_core.outputs import Generation

await cache.aupdate("prompt", "llm_string", [Generation(text="cached_response")])
cached_result = await cache.alookup("prompt", "llm_string")

Features

  • Efficient caching of LLM responses
  • TTL support for automatic cache expiration
  • Semantic caching for similarity-based retrieval
  • Asynchronous cache operations
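To see how a semantic cache differs from an exact-match cache, here is a toy in-memory sketch of the idea (illustration only; the real RedisSemanticCache stores vectors in Redis and computes embeddings for you): a lookup hits when the query embedding falls within distance_threshold, in cosine distance, of a cached entry.

```python
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

class ToySemanticCache:
    """Toy sketch: stores (embedding, response) pairs and returns the closest
    cached response within distance_threshold, or None on a miss."""

    def __init__(self, distance_threshold=0.1):
        self.distance_threshold = distance_threshold
        self.entries = []

    def update(self, embedding, response):
        self.entries.append((embedding, response))

    def lookup(self, embedding):
        best, best_dist = None, self.distance_threshold
        for cached_embedding, response in self.entries:
            dist = cosine_distance(embedding, cached_embedding)
            if dist <= best_dist:
                best, best_dist = response, dist
        return best
```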

What is Redis LangCache?

  • LangCache is a fully managed, cloud-based service that provides a semantic cache for LLM applications.
  • It manages embeddings and vector search for you, allowing you to focus on your application logic.
  • See our docs to learn more, or try LangCache on Redis Cloud today.

3. Chat History

The RedisChatMessageHistory class provides Redis-based storage for chat message history with efficient search capabilities.

Usage

from langchain_redis import RedisChatMessageHistory
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage

# Initialize with optional TTL (time-to-live) in seconds
history = RedisChatMessageHistory(
    session_id="user_123",
    redis_url="redis://localhost:6379",
    ttl=3600,  # Messages will expire after 1 hour
)

# Adding messages
history.add_message(HumanMessage(content="Hello, AI!"))
history.add_message(AIMessage(content="Hello, human! How can I assist you today?"))
history.add_message(SystemMessage(content="This is a system message"))

# Retrieving all messages in chronological order
messages = history.messages

# Searching messages with full-text search
results = history.search_messages("assist", limit=5)  # Returns matching messages

# Get message count
message_count = len(history)

# Clear history for current session
history.clear()

# Delete all sessions and index (use with caution)
history.delete()

Features

  • Fast storage of chat messages with automatic expiration (TTL)
  • Support for different message types (Human, AI, System)
  • Full-text search capabilities across message content
  • Chronological message retrieval
  • Session-based message organization
  • Customizable key prefixing
  • Thread-safe operations
  • Efficient RedisVL-based indexing and querying
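The session-keyed behavior above can be sketched with a minimal in-memory analogue of the same API shape (illustration only; the real class persists messages to Redis with TTL and RedisVL-backed full-text search):

```python
from collections import defaultdict

class InMemoryChatHistory:
    """Toy analogue of the RedisChatMessageHistory API shape: messages are
    organized per session_id and retrieved in insertion (chronological) order."""

    _store = defaultdict(list)  # session_id -> list of (role, content)

    def __init__(self, session_id):
        self.session_id = session_id

    def add_message(self, role, content):
        self._store[self.session_id].append((role, content))

    @property
    def messages(self):
        return list(self._store[self.session_id])

    def search_messages(self, query, limit=5):
        # Naive substring search standing in for Redis full-text search.
        hits = [m for m in self._store[self.session_id] if query.lower() in m[1].lower()]
        return hits[:limit]

    def clear(self):
        self._store[self.session_id] = []
```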

Advanced Configuration

The RedisConfig class allows for detailed configuration of the Redis integration:

from langchain_redis import RedisConfig

config = RedisConfig(
    index_name="my_index",
    redis_url="redis://localhost:6379",
    distance_metric="COSINE",
    key_prefix="my_prefix",
    vector_datatype="FLOAT32",
    storage_type="hash",
    metadata_schema=[
        {"name": "category", "type": "tag"},
        {"name": "price", "type": "numeric"}
    ]
)

Refer to the inline documentation for detailed information on these configuration options.

Error Handling and Logging

The package uses Python's standard logging module. You can configure logging to get more information about the package's operations:

import logging
logging.basicConfig(level=logging.INFO)

Error handling is done through custom exceptions. Make sure to handle these exceptions in your application code.

Performance Considerations

  • For large datasets, consider using batched operations when adding documents to the vector store.
  • Adjust the k and fetch_k parameters in similarity searches to balance between accuracy and performance.
  • Use appropriate indexing algorithms (FLAT, HNSW) based on your dataset size and query requirements.
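A minimal batching helper for the first point (the batch size of 500 is an arbitrary example; tune it for your payload sizes):

```python
def batched(items, batch_size=500):
    """Yield successive fixed-size slices so documents can be added in batches."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

# Usage sketch, assuming the vector_store and texts from the earlier examples:
# for chunk in batched(texts, batch_size=500):
#     vector_store.add_texts(chunk)
```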

Examples

For more detailed examples and use cases, please refer to the docs/ directory in this repository.

Contributing / Development

The library is rooted at libs/redis; for all of the commands below, first cd into libs/redis:

Unit Tests

To install dependencies for unit tests:

poetry install --with test

To run unit tests:

make test

To run a specific test:

TEST_FILE=tests/unit_tests/test_imports.py make test

Integration Tests

You will need an OpenAI API key to run the integration tests:

export OPENAI_API_KEY=sk-J3nnYJ3nnYWh0Can1Turnt0Ug1VeMe50mth1n1cAnH0ld0n2

To install dependencies for integration tests:

poetry install --with test,test_integration

To run integration tests:

make integration_tests

Local Development

Install the langchain-redis development requirements (for running examples, linting, formatting, tests, and coverage):

poetry install --with lint,typing,test,test_integration

Then verify dependency installation:

make lint

License

This project is licensed under the MIT License (LICENSE).
