
llama-index readers zep integration

Project description

Zep Reader

pip install llama-index-readers-zep

The Zep Reader returns a set of texts from a Zep Document Collection that correspond to a text query or an embedding vector. The Reader is initialized with a Zep API URL and, optionally, an API key; it can then be used to load data from a Zep Document Collection.
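
For example, a minimal sketch of initializing the Reader and querying an existing collection might look like the following (the URL, API key, and collection name are placeholders; a full walkthrough follows under Usage):

from llama_index.readers.zep import ZepReader

# Placeholder connection details -- point these at your own Zep deployment.
reader = ZepReader(api_url="http://localhost:8000", api_key="optional_api_key")

# Return the 3 most relevant chunks for a text query from an existing collection.
documents = reader.load_data(
    collection_name="my_collection", query="What is Zep?", top_k=3
)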

About Zep

Zep is a long-term memory store for LLM applications. Zep makes it simple to add relevant documents, chat history memory and rich user data to your LLM app's prompts.

For more information about Zep and the Zep Quick Start Guide, see the Zep documentation.

Usage

Here's an end-to-end example of using the ZepReader. First, we create a Zep Collection, chunk a document, and add the chunks to the collection.

We then wait for Zep's async embedder to embed the document chunks. Finally, we query the collection and print the results.

import time
from uuid import uuid4

from llama_index.core import Document
from llama_index.core.node_parser import SimpleNodeParser
from llama_index.readers.zep import ZepReader
from zep_python import ZepClient
from zep_python.document import Document as ZepDocument

# Create a Zep collection
zep_api_url = "http://localhost:8000"  # replace with your Zep API URL
collection_name = f"babbage{uuid4().hex}"
file = "babbages_calculating_engine.txt"

print(f"Creating collection {collection_name}")

client = ZepClient(base_url=zep_api_url, api_key="optional_api_key")
collection = client.document.add_collection(
    name=collection_name,  # required
    description="Babbage's Calculating Engine",  # optional
    metadata={"foo": "bar"},  # optional metadata
    embedding_dimensions=1536,  # this must match the model you've configured in Zep
    is_auto_embedded=True,  # use Zep's built-in embedder. Defaults to True
)

node_parser = SimpleNodeParser.from_defaults(chunk_size=250, chunk_overlap=20)

with open(file) as f:
    raw_text = f.read()

print("Splitting text into chunks and adding them to the Zep vector store.")
docs = node_parser.get_nodes_from_documents(
    [Document(text=raw_text)], show_progress=True
)

# Convert nodes to ZepDocument
zep_docs = [ZepDocument(content=d.get_content()) for d in docs]
uuids = collection.add_documents(zep_docs)
print(f"Added {len(uuids)} documents to collection {collection_name}")

print("Waiting for documents to be embedded")
while True:
    c = client.document.get_collection(collection_name)
    print(
        "Embedding status: "
        f"{c.document_embedded_count}/{c.document_count} documents embedded"
    )
    time.sleep(1)
    if c.status == "ready":
        break

query = "Was Babbage awarded a medal?"

# Using the ZepReader to load data from Zep
reader = ZepReader(api_url=zep_api_url, api_key="optional_api_key")
results = reader.load_data(
    collection_name=collection_name, query=query, top_k=3
)

print("\n\n".join([r.text for r in results]))

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llama_index_readers_zep-0.3.0.tar.gz (3.5 kB, Source)

Built Distribution

llama_index_readers_zep-0.3.0-py3-none-any.whl (3.7 kB, Python 3)

File details

Details for the file llama_index_readers_zep-0.3.0.tar.gz.

File metadata

  • Download URL: llama_index_readers_zep-0.3.0.tar.gz
  • Upload date:
  • Size: 3.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.11.10 Darwin/22.3.0

File hashes

Hashes for llama_index_readers_zep-0.3.0.tar.gz

  • SHA256: 4cd5b35b66fffe8c355dc558bf5f8dfc6d5f88a23158bcd9e9e744c188d40137
  • MD5: cf943814574e1adf210657afcc489b2b
  • BLAKE2b-256: e0c68ce9556dee7ec43e932643b40ce0cdfbaf13d21863b7b89ec5589734a8b1

See more details on using hashes here.
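
If you want to check a downloaded file against the SHA256 digest listed above, a short Python sketch is enough (the local file path is an assumption; adjust it to wherever the file was saved):

import hashlib

# Compare the local file's SHA256 against the digest published above.
expected = "4cd5b35b66fffe8c355dc558bf5f8dfc6d5f88a23158bcd9e9e744c188d40137"
with open("llama_index_readers_zep-0.3.0.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print("OK" if digest == expected else "Hash mismatch!")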

File details

Details for the file llama_index_readers_zep-0.3.0-py3-none-any.whl.

File metadata

File hashes

Hashes for llama_index_readers_zep-0.3.0-py3-none-any.whl

  • SHA256: 8de71eb2261583a3bf0e90fd4ff171e449a7f1bbc34b350527f1fe35c799f987
  • MD5: a58847d618c0a36a51cae24f6683c6ab
  • BLAKE2b-256: 779855bd284469f5f70e7c7860211af5a525d6eb82f27bb4faaaa0845bdee8a1

See more details on using hashes here.
