langchain-couchbase
An integration package connecting Couchbase and LangChain.
This package contains the official LangChain integration with Couchbase.
The documentation and API reference can be found on GitHub Pages.
Installation
pip install -U langchain-couchbase
Vector Store
CouchbaseQueryVectorStore
The CouchbaseQueryVectorStore class enables the use of Couchbase for vector search via the Query and Index services. It supports two types of vector indexes:
- Hyperscale Vector Index - Optimized for pure vector searches on large datasets (billions of documents). Best for content discovery, recommendations, and applications requiring high accuracy with a low memory footprint. Hyperscale Vector indexes compare vectors and scalar values simultaneously.
- Composite Vector Index - Combines a Global Secondary Index (GSI) with a vector column. Ideal for searches that combine vector similarity with scalar filters where the scalars filter out large portions of the dataset. Composite Vector indexes apply scalar filters first, then perform vector searches on the filtered results.
For guidance on choosing the right index type, see Choose the Right Vector Index.
Note: CouchbaseQueryVectorStore requires Couchbase Server version 8.0 and above.
To use this in an application:
import getpass
from datetime import timedelta

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

# Constants for the connection
COUCHBASE_CONNECTION_STRING = getpass.getpass(
    "Enter the connection string for the Couchbase cluster: "
)
DB_USERNAME = getpass.getpass("Enter the username for the Couchbase cluster: ")
DB_PASSWORD = getpass.getpass("Enter the password for the Couchbase cluster: ")

# Create the Couchbase connection object
auth = PasswordAuthenticator(DB_USERNAME, DB_PASSWORD)
options = ClusterOptions(auth)
cluster = Cluster(COUCHBASE_CONNECTION_STRING, options)

# Wait until the cluster is ready for use.
cluster.wait_until_ready(timedelta(seconds=5))
from langchain_couchbase import CouchbaseQueryVectorStore
from langchain_couchbase.vectorstores import DistanceStrategy
vector_store = CouchbaseQueryVectorStore(
    cluster=cluster,
    bucket_name=BUCKET_NAME,
    scope_name=SCOPE_NAME,
    collection_name=COLLECTION_NAME,
    embedding=my_embeddings,
    distance_metric=DistanceStrategy.DOT,
)
Note: The Hyperscale and Composite vector indexes must be created after adding documents to the vector store. This enables efficient vector searches.
See a usage example.
CouchbaseSearchVectorStore
The CouchbaseSearchVectorStore class enables the use of Couchbase for vector search using Search Vector Indexes. A Search Vector Index combines a Couchbase Search index with a vector column, allowing hybrid searches that combine vector similarity with Full-Text Search (FTS) and geospatial queries.
Note: CouchbaseSearchVectorStore requires Couchbase Server version 7.6 and above.
To use this in an application:
import getpass
from datetime import timedelta

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

# Constants for the connection
COUCHBASE_CONNECTION_STRING = getpass.getpass(
    "Enter the connection string for the Couchbase cluster: "
)
DB_USERNAME = getpass.getpass("Enter the username for the Couchbase cluster: ")
DB_PASSWORD = getpass.getpass("Enter the password for the Couchbase cluster: ")

# Create the Couchbase connection object
auth = PasswordAuthenticator(DB_USERNAME, DB_PASSWORD)
options = ClusterOptions(auth)
cluster = Cluster(COUCHBASE_CONNECTION_STRING, options)

# Wait until the cluster is ready for use.
cluster.wait_until_ready(timedelta(seconds=5))
from langchain_couchbase import CouchbaseSearchVectorStore
vector_store = CouchbaseSearchVectorStore(
    cluster=cluster,
    bucket_name=BUCKET_NAME,
    scope_name=SCOPE_NAME,
    collection_name=COLLECTION_NAME,
    embedding=my_embeddings,
    index_name=SEARCH_INDEX_NAME,
)
See a usage example.
LLM Caches
CouchbaseCache
Use Couchbase as a cache for prompts and responses.
See a usage example.
To import this cache:
from langchain_couchbase.cache import CouchbaseCache
To use this cache with your LLMs:
from langchain_core.globals import set_llm_cache
cluster = couchbase_cluster_connection_object
set_llm_cache(
    CouchbaseCache(
        cluster=cluster,
        bucket_name=BUCKET_NAME,
        scope_name=SCOPE_NAME,
        collection_name=COLLECTION_NAME,
    )
)
CouchbaseSemanticCache
Semantic caching allows users to retrieve cached prompts based on the semantic similarity between the user input and previously cached inputs. Under the hood, it uses Couchbase as both a cache and a vector store. CouchbaseSemanticCache requires a Search index to work; see the usage example for how to set up the index.
See a usage example.
To import this cache:
from langchain_couchbase.cache import CouchbaseSemanticCache
To use this cache with your LLMs:
from langchain_core.globals import set_llm_cache
# use any embedding provider...
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
cluster = couchbase_cluster_connection_object
set_llm_cache(
    CouchbaseSemanticCache(
        cluster=cluster,
        embedding=embeddings,
        bucket_name=BUCKET_NAME,
        scope_name=SCOPE_NAME,
        collection_name=COLLECTION_NAME,
        index_name=INDEX_NAME,
    )
)
Chat Message History
Use Couchbase as the storage for your chat messages.
See a usage example.
To use the chat message history in your applications:
from langchain_couchbase.chat_message_histories import CouchbaseChatMessageHistory
cluster = couchbase_cluster_connection_object
message_history = CouchbaseChatMessageHistory(
    cluster=cluster,
    bucket_name=BUCKET_NAME,
    scope_name=SCOPE_NAME,
    collection_name=COLLECTION_NAME,
    session_id="test-session",
)
message_history.add_user_message("hi!")
Documentation
Generating Documentation Locally
To generate the documentation locally, follow these steps:
- Ensure you have the project installed in your environment:
pip install -e . # Install in development mode
- Install the required documentation dependencies:
pip install sphinx sphinx-rtd-theme tomli
- Navigate to the docs directory:
cd docs
- Ensure the build directory exists:
mkdir -p source/build
- Build the HTML documentation:
make html
- The generated documentation will be available in the docs/build/html directory. You can open index.html in your browser to view it:
# On macOS
open build/html/index.html
# On Linux
xdg-open build/html/index.html
# On Windows
start build/html/index.html
Additional Documentation Commands
- To clean the build directory before rebuilding:
make clean html
- To check for broken links in the documentation:
make linkcheck
- To generate a PDF version of the documentation (requires LaTeX):
make latexpdf
- For help on available make commands:
make help
Troubleshooting
- If you encounter errors about missing modules, ensure you have installed the project in your environment.
- If Sphinx can't find your package modules, verify your conf.py has the correct path configuration.
- For Sphinx-specific errors, refer to the Sphinx documentation.
- If you see an error about a missing tomli module, make sure you've installed it with pip install tomli.
Contributing
We welcome contributions! Please see our Contributing Guide for details.
📢 Support Policy
We truly appreciate your interest in this project!
This project is community-maintained, which means it's not officially supported by our support team.
If you need help, have found a bug, or want to contribute improvements, the best place to do that is right here — by opening a GitHub issue.
Our support portal is unable to assist with requests related to this project, so we kindly ask that all inquiries stay within GitHub.
Your collaboration helps us all move forward together — thank you!
File details
Details for the file langchain_couchbase-1.0.1.tar.gz.
File metadata
- Download URL: langchain_couchbase-1.0.1.tar.gz
- Upload date:
- Size: 20.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | eda2bfff2a17d9bf579aa8af038aea41644eca6ac01e1da7581aad7a98f58045 |
| MD5 | 7c5b5578aeecd8bbee1476a3f708fbe3 |
| BLAKE2b-256 | be39491b5f8c77407449be5f32d25705bb5823d811e2fc5c5d68c8ffa2263748 |
File details
Details for the file langchain_couchbase-1.0.1-py3-none-any.whl.
File metadata
- Download URL: langchain_couchbase-1.0.1-py3-none-any.whl
- Upload date:
- Size: 25.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 5ff9308c52208dbc75d68ff38649dff20a91a3c11b77f8027f25c01ed804f3aa |
| MD5 | 4f14b6f48ccfcc926815a7bba17c6f52 |
| BLAKE2b-256 | 36c022a41178cdd778806a5bca6364171a243da445f367b678645ec5d42d2596 |