
LlamaIndex Managed Integration: Vectara

The Vectara Index provides a simple implementation of Vectara's end-to-end RAG pipeline, including data ingestion, document retrieval, result reranking, summary generation, and hallucination evaluation.

Setup

First, make sure you have the latest LlamaIndex version installed.

Next, install the Vectara Index:

pip install -U llama-index-indices-managed-vectara

Finally, set up your Vectara corpus. If you don't have a Vectara account, you can sign up and follow our Quick Start guide to create a corpus and an API key (make sure it has both indexing and query permissions).
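Alternatively, you can set your Vectara credentials in the shell before launching Python, so they don't need to be assigned in code (the values below are placeholders for your own account details):

```shell
# Replace the placeholder values with the credentials from your Vectara account
export VECTARA_API_KEY="<YOUR_VECTARA_API_KEY>"
export VECTARA_CORPUS_ID="<YOUR_VECTARA_CORPUS_ID>"
export VECTARA_CUSTOMER_ID="<YOUR_VECTARA_CUSTOMER_ID>"
```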

Usage

First, let's initialize the index with some sample documents.

import os

os.environ["VECTARA_API_KEY"] = "<YOUR_VECTARA_API_KEY>"
os.environ["VECTARA_CORPUS_ID"] = "<YOUR_VECTARA_CORPUS_ID>"
os.environ["VECTARA_CUSTOMER_ID"] = "<YOUR_VECTARA_CUSTOMER_ID>"

from llama_index.indices.managed.vectara import VectaraIndex
from llama_index.core.schema import Document

docs = [
    Document(
        text="""
        This is test text for Vectara integration with LlamaIndex.
        Users should love their experience with this integration
        """,
    ),
    Document(
        text="""
        The Vectara index integration with LlamaIndex implements Vectara's RAG pipeline.
        It can be used both as a retriever and query engine.
        """,
    ),
]

index = VectaraIndex.from_documents(docs)

You can now use this index to retrieve documents.

# Retrieves the top search result
retriever = index.as_retriever(similarity_top_k=1)

results = retriever.retrieve("How will users feel about this new tool?")
print(results[0])

You can also use it as a query engine to get a generated summary from the retrieved results.

query_engine = index.as_query_engine()

results = query_engine.query(
    "Which company has partnered with Vectara to implement their RAG pipeline as an index?"
)
print(f"Generated summary: {results.response}\n")
print("Top sources:")
for node in results.source_nodes[:2]:
    print(node)

If you want to see the full features and capabilities of VectaraIndex, check out this Jupyter notebook.


