
llama-index packs redis_ingestion_pipeline integration

Project description

Redis Ingestion Pipeline Pack

This LlamaPack creates an ingestion pipeline, with both a cache and vector store backed by Redis.

CLI Usage

You can download LlamaPacks directly using llamaindex-cli, which comes installed with the llama-index Python package:

llamaindex-cli download-llamapack RedisIngestionPipelinePack --download-dir ./redis_ingestion_pack

You can then inspect the files at ./redis_ingestion_pack and use them as a template for your own project!
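The pack is also published on PyPI (see the file details below), so instead of copying the source you can install it as a regular package. A minimal sketch, assuming the standard llama_index.packs import convention for installed packs:

# pip install llama-index-packs-redis-ingestion-pipeline
from llama_index.packs.redis_ingestion_pipeline import RedisIngestionPipelinePack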

Code Usage

You can download the pack to a ./redis_ingestion_pack directory:

from llama_index.core.llama_pack import download_llama_pack

# download and install dependencies
RedisIngestionPipelinePack = download_llama_pack(
    "RedisIngestionPipelinePack", "./redis_ingestion_pack"
)

From here, you can use the pack directly, or inspect and modify its source in ./redis_ingestion_pack.

Then, you can set up the pack like so:

from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.openai import OpenAIEmbedding

transformations = [SentenceSplitter(), OpenAIEmbedding()]

# create the pack
ingest_pack = RedisIngestionPipelinePack(
    transformations,
    hostname="localhost",
    port=6379,
    cache_collection_name="ingest_cache",
    vector_collection_name="vector_store",
)
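The pack assumes a Redis instance is already running and reachable at the given hostname and port (the Redis vector store typically requires the search module that ships with Redis Stack). A quick connectivity check using the redis Python client, which the Redis integrations depend on:

import redis

# confirm Redis is reachable before running the pipeline
client = redis.Redis(host="localhost", port=6379)
assert client.ping()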

The run() function is a light wrapper around pipeline.run().

You can use this to ingest data and then create an index from the vector store.
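For example, the documents to ingest can come from any LlamaIndex reader; a minimal sketch using SimpleDirectoryReader, assuming a local ./data directory containing the files to index:

from llama_index.core import SimpleDirectoryReader

# load local files into Document objects for ingestion
documents = SimpleDirectoryReader("./data").load_data()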

from llama_index.core import VectorStoreIndex

ingest_pack.run(documents)

index = VectorStoreIndex.from_vector_store(ingest_pack.vector_store)
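From there, the index can be queried in the usual way. A short sketch, assuming an OpenAI API key is configured (the default LLM and the OpenAIEmbedding transformation above both call OpenAI), with a placeholder question:

query_engine = index.as_query_engine()
response = query_engine.query("Your question about the ingested documents")
print(response)

Because the ingestion cache also lives in Redis, re-running run() over unchanged documents should reuse the cached transformation results rather than recomputing them.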

You can learn more about the ingestion pipeline at the LlamaIndex documentation.


File details

Details for the file llama_index_packs_redis_ingestion_pipeline-0.2.0.tar.gz.


File hashes

Hashes for llama_index_packs_redis_ingestion_pipeline-0.2.0.tar.gz
SHA256: f4c0fb7835c47ad58baee2aa7650fab926383848ffc443198872752e6b018646
MD5: 699152fa3a40fd374fe049abc138a74e
BLAKE2b-256: 38313c541e3ebb52d017ee32239ed9c22a0bf78c328f1a384ae84afb6cdac353


File details

Details for the file llama_index_packs_redis_ingestion_pipeline-0.2.0-py3-none-any.whl.


File hashes

Hashes for llama_index_packs_redis_ingestion_pipeline-0.2.0-py3-none-any.whl
SHA256: d165ab77fcf2a44d74ad1c14dfd7f80550814a8006b418723d151a34bfff945e
MD5: 97f325ff20c417a9f1b4c5f40b618530
BLAKE2b-256: 0ae887d56c260a519ed6d04cb7136f442ecc629035c56c81167da826fc5cbc70

