
LlamaIndex Vector_Stores Integration: Lindorm

  • LindormVectorStore supports pure vector search, search with metadata filtering, hybrid search, async operations, and more.
  • Please refer to the notebook for usage of Lindorm as a vector store in LlamaIndex.

Example Usage

pip install llama-index
pip install opensearch-py
pip install llama-index-vector-stores-lindorm
from llama_index.vector_stores.lindorm import (
    LindormVectorStore,
    LindormVectorClient,
)

# How to obtain a Lindorm search instance:
# https://alibabacloud.com/help/en/lindorm/latest/create-an-instance

# How to access your Lindorm search instance:
# https://www.alibabacloud.com/help/en/lindorm/latest/view-endpoints

# Run curl commands to connect to and use the Lindorm search engine:
# https://www.alibabacloud.com/help/en/lindorm/latest/connect-and-use-the-search-engine-with-the-curl-command

# lindorm instance info
host = "ld-bp******jm*******-proxy-search-pub.lindorm.aliyuncs.com"
port = 30070
username = "your_username"
password = "your_password"

# index to demonstrate the VectorStore impl
index_name = "lindorm_test_index"

# Extension parameter of Lindorm search: the number of cluster units to query;
#   an integer between 1 and method.parameters.nlist, passed as a string.
nprobe = "a number(string type)"

# Extension parameter of Lindorm search, usually used to improve recall accuracy
#   at the cost of extra query overhead; an integer between 1 and 200
#   (default: 10), passed as a string.
reorder_factor = "a number(string type)"

# LindormVectorClient encapsulates logic for a single index with vector search enabled
client = LindormVectorClient(
    host=host,
    port=port,
    username=username,
    password=password,
    index=index_name,
    dimension=1536,  # must match the dimension of your embedding model
    nprobe=nprobe,
    reorder_factor=reorder_factor,
    # filter_type="pre_filter/post_filter(default)"
)

# initialize vector store
vector_store = LindormVectorStore(client)
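
With the store initialized, it can be plugged into a LlamaIndex index like any other vector store. The sketch below uses the standard llama_index.core APIs (VectorStoreIndex, StorageContext, Document); the sample document and query are illustrative, an embedding model must be configured in LlamaIndex, and running it requires a reachable Lindorm instance:

```python
# Hedged sketch: indexing and querying through the Lindorm-backed store.
# Assumes `vector_store` was created as shown above and that an embedding
# model (matching dimension=1536) is configured in LlamaIndex Settings.
from llama_index.core import VectorStoreIndex, StorageContext, Document

# Route index storage to the Lindorm vector store.
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Build an index over some documents; embeddings are written to Lindorm.
index = VectorStoreIndex.from_documents(
    [Document(text="Lindorm is a cloud-native multi-model database.")],
    storage_context=storage_context,
)

# Retrieval and synthesis are served by the Lindorm-backed vector store.
query_engine = index.as_query_engine()
response = query_engine.query("What is Lindorm?")
print(response)
```

Because LindormVectorStore also supports async and filtered search, the same index can be queried with `index.as_retriever(filters=...)` using LlamaIndex metadata filters; whether a filter is applied before or after the vector search is controlled by the `filter_type` parameter of the client.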
