
llama-index-readers-zep integration

Project description

Zep Reader

pip install llama-index-readers-zep

The Zep Reader retrieves the texts from a Zep Document Collection that are most relevant to a text query or an embedding vector. The Reader is initialized with a Zep API URL and, optionally, an API key; it can then be used to load data from a Zep Document Collection.

About Zep

Zep is a long-term memory store for LLM applications. Zep makes it simple to add relevant documents, chat history memory, and rich user data to your LLM app's prompts.

For more information about Zep and the Zep Quick Start Guide, see the Zep documentation.

Usage

Here's an end-to-end example of using the ZepReader. First, we create a Zep Collection, chunk a document, and add the chunks to the collection.

We then wait for Zep's async embedder to embed the document chunks. Finally, we query the collection and print the results.
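As an aside, the chunk_size and chunk_overlap parameters used below can be pictured with a naive character-level chunker. This is only an illustrative sketch: SimpleNodeParser splits on tokens and sentence boundaries, not raw characters, and chunk_text here is a hypothetical helper, not part of any library.

```python
def chunk_text(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    """Naive fixed-size chunking: each window starts chunk_size - chunk_overlap
    characters after the previous one, so consecutive chunks share
    chunk_overlap characters of context."""
    step = chunk_size - chunk_overlap
    return [text[i : i + chunk_size] for i in range(0, len(text), step)]


chunks = chunk_text("abcdefghij", chunk_size=4, chunk_overlap=2)
# → ["abcd", "cdef", "efgh", "ghij", "ij"]
```

The overlap preserves context across chunk boundaries, which tends to improve retrieval quality at the cost of some storage redundancy.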

import time
from uuid import uuid4

from llama_index.core import Document
from llama_index.core.node_parser import SimpleNodeParser
from llama_index.readers.zep import ZepReader
from zep_python import ZepClient
from zep_python.document import Document as ZepDocument

# Create a Zep collection
zep_api_url = "http://localhost:8000"  # replace with your Zep API URL
collection_name = f"babbage{uuid4().hex}"
file = "babbages_calculating_engine.txt"

print(f"Creating collection {collection_name}")

client = ZepClient(base_url=zep_api_url, api_key="optional_api_key")
collection = client.document.add_collection(
    name=collection_name,  # required
    description="Babbage's Calculating Engine",  # optional
    metadata={"foo": "bar"},  # optional metadata
    embedding_dimensions=1536,  # this must match the model you've configured in Zep
    is_auto_embedded=True,  # use Zep's built-in embedder. Defaults to True
)

node_parser = SimpleNodeParser.from_defaults(chunk_size=250, chunk_overlap=20)

with open(file) as f:
    raw_text = f.read()

print("Splitting text into chunks and adding them to the Zep vector store.")
docs = node_parser.get_nodes_from_documents(
    [Document(text=raw_text)], show_progress=True
)

# Convert nodes to ZepDocument
zep_docs = [ZepDocument(content=d.get_content()) for d in docs]
uuids = collection.add_documents(zep_docs)
print(f"Added {len(uuids)} documents to collection {collection_name}")

print("Waiting for documents to be embedded")
while True:
    c = client.document.get_collection(collection_name)
    print(
        "Embedding status: "
        f"{c.document_embedded_count}/{c.document_count} documents embedded"
    )
    if c.status == "ready":
        break
    time.sleep(1)

query = "Was Babbage awarded a medal?"

# Using the ZepReader to load data from Zep
reader = ZepReader(api_url=zep_api_url, api_key="optional_api_key")
results = reader.load_data(
    collection_name=collection_name, query=query, top_k=3
)

print("\n\n".join([r.text for r in results]))
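Note that the embedding wait loop above polls forever if the collection never reaches "ready". A minimal sketch of a bounded variant, using only the standard library; get_status is a hypothetical callable standing in for the Zep status lookup (e.g. a lambda returning client.document.get_collection(collection_name).status):

```python
import time


def wait_until_ready(get_status, timeout: float = 60.0, interval: float = 1.0) -> bool:
    """Poll get_status() until it returns "ready" or the timeout elapses.

    Returns True if the collection became ready within the timeout,
    False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == "ready":
            return True
        time.sleep(interval)
    return False
```

A caller can then fail fast instead of hanging, for example by raising an error when wait_until_ready(...) returns False.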
