
Project description

LlamaIndex Vector_Stores Integration: Lindorm

  • LindormVectorStore supports pure vector search, search with metadata filtering, hybrid search, asynchronous operations, and more.
  • Please refer to the notebook for usage of Lindorm as a vector store in LlamaIndex.

Example Usage

pip install llama-index
pip install opensearch-py
pip install llama-index-vector-stores-lindorm
from llama_index.vector_stores.lindorm import (
    LindormVectorStore,
    LindormVectorClient,
)

# how to obtain a Lindorm search instance:
# https://alibabacloud.com/help/en/lindorm/latest/create-an-instance

# how to access your lindorm search instance:
# https://www.alibabacloud.com/help/en/lindorm/latest/view-endpoints

# run curl commands to connect to and use LindormSearch:
# https://www.alibabacloud.com/help/en/lindorm/latest/connect-and-use-the-search-engine-with-the-curl-command

# lindorm instance info
host = "ld-bp******jm*******-proxy-search-pub.lindorm.aliyuncs.com"
port = 30070
username = "your_username"
password = "your_password"

# index to demonstrate the VectorStore impl
index_name = "lindorm_test_index"

# extension param of Lindorm search: number of cluster units to query;
#   between 1 and method.parameters.nlist.
nprobe = "a number (string type)"

# extension param of Lindorm search, usually used to improve recall accuracy
#   at the cost of extra query overhead; between 1 and 200; default: 10.
reorder_factor = "a number (string type)"
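
Because both extension parameters are passed as strings, an out-of-range value such as nprobe="0" only surfaces as a server-side error. A minimal sanity-check sketch (the validate_extension_params helper is hypothetical, not part of this library):

```python
# Hypothetical helper (not part of llama-index-vector-stores-lindorm):
# sanity-check the string-typed extension params before building the client.
def validate_extension_params(nprobe: str, reorder_factor: str,
                              nlist: int = 10000) -> dict:
    probe = int(nprobe)            # raises ValueError on non-numeric input
    factor = int(reorder_factor)
    if not 1 <= probe <= nlist:
        raise ValueError(f"nprobe must be in [1, {nlist}], got {probe}")
    if not 1 <= factor <= 200:
        raise ValueError(f"reorder_factor must be in [1, 200], got {factor}")
    # return them still as strings, the form Lindorm search expects
    return {"nprobe": nprobe, "reorder_factor": reorder_factor}

print(validate_extension_params("500", "10"))
# {'nprobe': '500', 'reorder_factor': '10'}
```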

# LindormVectorClient encapsulates logic for a single index with vector search enabled
client = LindormVectorClient(
    host=host,
    port=port,
    username=username,
    password=password,
    index=index_name,
    dimension=1536,  # match with your embedding model
    nprobe=nprobe,
    reorder_factor=reorder_factor,
    # filter_type: "pre_filter" or "post_filter" (default)
)

# initialize vector store
vector_store = LindormVectorStore(client)
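
Once constructed, the store can be plugged into a LlamaIndex index via a storage context; a sketch of the standard pattern (assumes an embedding model and LLM are configured and the Lindorm instance above is reachable, so it will not run offline):

```python
from llama_index.core import Document, StorageContext, VectorStoreIndex

# store the index's vectors in Lindorm instead of the default in-memory store
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = [Document(text="Lindorm is a multi-model database on Alibaba Cloud.")]
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# queries are answered via Lindorm's vector search
query_engine = index.as_query_engine()
response = query_engine.query("What is Lindorm?")
print(response)
```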

Download files

Source Distribution

llama_index_vector_stores_lindorm-0.4.0.tar.gz (10.1 kB)

Built Distribution

llama_index_vector_stores_lindorm-0.4.0-py3-none-any.whl

File details

Details for the file llama_index_vector_stores_lindorm-0.4.0.tar.gz.

File hashes

SHA256:      8a9b7eacb17e692c6cab8d1ef89130f4ea9c2286d4f2661a7cf1986d62794acb
MD5:         bfc80316bc1ccc6ea88ceab19de85c6c
BLAKE2b-256: 7f75f3a41f1889880dd994e7d6ed05c1f09a53cbca00fcad8b4c7754d995a134

File details

Details for the file llama_index_vector_stores_lindorm-0.4.0-py3-none-any.whl.

File hashes

SHA256:      742ddcc9f7fc6ba3f46fd4276ffa3de1e5d9509ae30926aa8e5c2a94d41d7ee5
MD5:         55912506513dda872c951c2ffe5bffb8
BLAKE2b-256: bd3b46565719e2cb7db2d0375933ea0dd7c52aae10fb00557c079f3c077c7319
