An integration package connecting Astra DB and LangChain

langchain-astradb

This package contains the LangChain integrations for using DataStax Astra DB.

DataStax Astra DB is a serverless vector-capable database built on Apache Cassandra® and made conveniently available through an easy-to-use JSON API.

[!IMPORTANT] This package replaces the deprecated Astra DB classes found under langchain_community.*. Migrating away from the community plugins is strongly advised to get the latest features, fixes, and compatibility with modern versions of the AstraPy Data API client.

Architecture sketch

Installation and Setup

Installation of this partner package:

pip install langchain-astradb

Integrations overview

See the LangChain docs page and the API reference for more details.

Vector Store

from langchain_astradb import AstraDBVectorStore

my_store = AstraDBVectorStore(
  embedding=my_embedding,
  collection_name="my_store",
  api_endpoint="https://...",
  token="AstraCS:...",
)

The AstraDBVectorStore class supports server-side embeddings ("vectorize"), hybrid search (vector ANN + BM25 + a reranker), autodetection of arbitrary pre-existing collections, non-Astra DB deployments of the Data API, and more; see the example notebook for a walkthrough.

Chat message history

from langchain_astradb import AstraDBChatMessageHistory

message_history = AstraDBChatMessageHistory(
    session_id="test-session",
    api_endpoint="https://...",
    token="AstraCS:...",
)

LLM Cache

from langchain_astradb import AstraDBCache

cache = AstraDBCache(
    api_endpoint="https://...",
    token="AstraCS:...",
)

Semantic LLM Cache

from langchain_astradb import AstraDBSemanticCache

cache = AstraDBSemanticCache(
    embedding=my_embedding,
    api_endpoint="https://...",
    token="AstraCS:...",
)

Document loader

from langchain_astradb import AstraDBLoader

loader = AstraDBLoader(
    collection_name="my_collection",
    api_endpoint="https://...",
    token="AstraCS:...",
)
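Once constructed, the loader follows LangChain's standard BaseLoader interface, eagerly or lazily. A sketch (the iteration body is illustrative):

```python
# Given `loader` from the snippet above:
# docs = loader.load()            # eager: one Document per item in the collection
# for doc in loader.lazy_load():  # lazy: stream items to bound memory usage
#     print(doc.page_content, doc.metadata)

# Both methods come from LangChain's BaseLoader interface:
loader_methods = ("load", "lazy_load")
```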

Store

from langchain_astradb import AstraDBStore

store = AstraDBStore(
    collection_name="my_kv_store",
    api_endpoint="https://...",
    token="AstraCS:...",
)

Byte Store

from langchain_astradb import AstraDBByteStore

store = AstraDBByteStore(
    collection_name="my_kv_store",
    api_endpoint="https://...",
    token="AstraCS:...",
)
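Both stores implement LangChain's BaseStore interface (mset, mget, mdelete, yield_keys): AstraDBStore holds arbitrary JSON-serializable values, AstraDBByteStore holds raw bytes. A sketch with illustrative keys and values:

```python
# Illustrative key-value pairs for an AstraDBStore (JSON-serializable values):
kv_pairs = [("user:1", {"name": "Ada"}), ("user:2", {"name": "Grace"})]
# store.mset(kv_pairs)              # write a batch
# store.mget(["user:1", "user:3"])  # missing keys come back as None

# An AstraDBByteStore is used the same way, but values are bytes:
byte_pairs = [("blob:1", b"\x00\x01\x02")]
```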

Collection defaults mismatch

The Astra DB plugins provision the database idempotently by default.

This means that, unless requested otherwise, creating an instance of e.g. AstraDBVectorStore triggers the creation of the underlying Astra DB collection in the target database.

For a collection that already exists, if the requested configuration matches what is on the database, this is no problem: the Data API responds successfully and the whole 'creation' is a no-op.

However, if the create command specifies a different configuration than the already-existing collection, an error is returned by the Data API (with an error code of EXISTING_COLLECTION_DIFFERENT_SETTINGS) and reported back to the LangChain user. A possible occurrence of this issue is related to indexing settings (see the dedicated section for guidance).

The case of hybrid search

The introduction of "hybrid search" in the Data API, and the fact that the collection defaults have been changed accordingly, may also lead to one such mismatch error.

Most recent deployments of the Data API configure new collections for hybrid search by default. On such deployments, running an AstraDBVectorStore workload on a pre-existing collection may therefore surface a mismatch (the new create-collection API command effectively tries to create a differently-configured object on the database).

Here are three suggested ways to remediate the problem:

Solution one is to let the AstraDBVectorStore autodetect the configuration and behave accordingly in its data read/write operations. This assumes the collection already exists, and has the advantage that hybrid capabilities are picked up automatically:

vector_store = AstraDBVectorStore(
    collection_name="astra_existing_collection",
    # embedding=...,  # needed unless using 'vectorize'
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    autodetect_collection=True,
)

Solution two is to simply turn off the collection-creation step with the setup_mode constructor parameter. The store behaviour is then dictated entirely by the passed parameters; no attempt is made to create the collection on the database. This works if you are sure the collection exists, and makes the workload fully predictable: in particular, even if the hybrid capabilities could be detected, whether to use them depends only on the passed constructor parameters:

from astrapy.info import VectorServiceOptions  # if 'vectorize'
from langchain_astradb.utils.astradb import SetupMode

vector_store = AstraDBVectorStore(
    collection_name="astra_existing_collection",
    # embedding=...,  # needed unless using 'vectorize'
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    collection_vector_service_options=VectorServiceOptions(...),  # if 'vectorize'
    setup_mode=SetupMode.OFF,
)

Solution three is to specify your hybrid-related settings (reranker and lexical) for the store to exactly match what's on the database (including the case of turning these off):

from astrapy.info import (
    CollectionLexicalOptions,
    CollectionRerankOptions,
    RerankServiceOptions,
    VectorServiceOptions,
)

# hybrid-related capabilities explicitly ON
vector_store = AstraDBVectorStore(
    collection_name="astra_existing_collection",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    collection_vector_service_options=VectorServiceOptions(...),
    collection_lexical=CollectionLexicalOptions(analyzer="standard"),
    collection_rerank=CollectionRerankOptions(
        service=RerankServiceOptions(
            provider="nvidia",
            model_name="nvidia/llama-3.2-nv-rerankqa-1b-v2",
        ),
    ),
    collection_reranking_api_key=...,  # if needed by the model/setup
)

# hybrid-related capabilities explicitly OFF
vector_store = AstraDBVectorStore(
    collection_name="astra_existing_collection",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    collection_vector_service_options=VectorServiceOptions(...),
    collection_lexical=CollectionLexicalOptions(enabled=False),
    collection_rerank=CollectionRerankOptions(enabled=False),
)

(The two examples above, with and without hybrid capabilities, assume a vectorize-enabled collection, i.e. one with server-side embedding computation.)

Warnings about indexing

When creating an Astra DB object in LangChain, such as an AstraDBVectorStore, you may see a warning similar to the following:

Astra DB collection '...' is detected as having indexing turned on for all fields (either created manually or by older versions of this plugin). This implies stricter limitations on the amount of text each string in a document can store. Consider reindexing anew on a fresh collection to be able to store longer texts.

The reason for the warning is that the requested collection already exists on the database and is configured, possibly implicitly by default, to index all of its fields for search. When the LangChain object tries to create it, it instead requests an indexing policy tailored to the intended usage. For example, the LangChain vector store indexes the metadata but leaves the textual content out: this both enables storing very long texts and avoids indexing fields that will never be used to filter a search (indexing those would also carry a slight performance cost for writes).

Typically there are two reasons why you may encounter the warning:

  1. you have created a collection by other means than letting the AstraDBVectorStore do it for you: for example, through the Astra UI, or using AstraPy's create_collection method of class Database directly;
  2. you have created the collection with a version of the Astra DB plugin that is not up-to-date (i.e. prior to the langchain-astradb partner package).

Keep in mind that this is a warning and your application will continue running just fine, as long as you don't store very long texts. Should you need to add to a vector store, for example, a Document whose page_content exceeds ~8K in length, you will receive an indexing error from the database.
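Concretely, with the plugin-managed indexing policy, metadata fields remain usable in search filters while page_content is stored unindexed, which is what permits long texts. A sketch (`my_store`, the filter key, and the sizes are illustrative):

```python
# page_content is stored but not indexed, so long texts are accepted:
long_text = "x" * 20_000  # well beyond the ~8K limit that applies to *indexed* strings

# Given a `my_store` created by this plugin (illustrative calls):
# my_store.add_texts([long_text], metadatas=[{"source": "handbook"}])      # OK
# my_store.similarity_search("query", k=4, filter={"source": "handbook"})  # metadata filter works
```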

Remediation

You have several options:

  • you can ignore the warning because you know your application will never need to store very long textual contents;
  • you can ignore the warning and explicitly instruct the plugin not to create the collection, assuming it already exists (which suppresses the warning): store = AstraDBVectorStore(..., setup_mode=langchain_astradb.utils.astradb.SetupMode.OFF). In this case the collection is used as-is, no (indexing) questions asked;
  • if you can afford populating the collection anew, you can drop it and re-run the LangChain application: the collection will be created with the optimized indexing settings. This is the recommended option, when possible.
