llama-index readers lilac integration

Lilac reader

pip install llama-index-readers-lilac

# Only needed for the ArxivReader used in the example below:
pip install llama-index-readers-papers

Lilac is an open-source product that helps you analyze, enrich, and clean unstructured data with AI.

Use it to analyze, clean, structure, and label data for downstream LlamaIndex and LangChain applications.

Lilac projects

This reader assumes you've already run Lilac locally and have a project directory with a dataset. For more details on Lilac projects, see the Lilac Projects documentation.
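If you don't have a project directory yet, you can create one from Python before ingesting data; a minimal sketch using the ll.init call referenced in the example below:

import lilac as ll

# Create a fresh Lilac project directory (skip this if one already exists).
ll.init(project_dir="./data")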

You can use any LlamaIndex loader to load data into Lilac, clean data, and then bring it back into LlamaIndex Documents.

Usage

LlamaIndex => Lilac

See this notebook for getting data into Lilac from LlamaHub.

import lilac as ll

# See: https://llamahub.ai/l/papers-arxiv
from llama_index.readers.papers import ArxivReader

loader = ArxivReader()
documents = loader.load_data(search_query="au:Karpathy")

# Set the project directory for Lilac.
ll.set_project_dir("./data")

# This assumes you already have a lilac project set up.
# If you don't, use ll.init(project_dir='./data')
ll.create_dataset(
    config=ll.DatasetConfig(
        namespace="local",
        name="arxiv-karpathy",
        source=ll.LlamaIndexDocsSource(
            # documents comes from the loader.load_data call above.
            documents=documents
        ),
    )
)

# Start a Lilac server to explore and clean the dataset. Once you've
# cleaned it, you can bring the data back into LlamaIndex.
ll.start_server(project_dir="./data")
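Before opening the UI, you can also sanity-check the ingested rows from Python. A minimal sketch, assuming Lilac's get_dataset and select_rows dataset APIs:

# Fetch the dataset we just created and peek at a few rows.
# (get_dataset/select_rows are assumed from Lilac's Python API.)
ds = ll.get_dataset("local", "arxiv-karpathy")
for row in ds.select_rows(limit=3):
    print(row)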

Lilac => LlamaIndex Documents

from llama_index.core import VectorStoreIndex

from llama_index.readers.lilac import LilacReader

loader = LilacReader()
documents = loader.load_data(
    project_dir="~/my_project",
    # The name of your dataset in the project dir.
    dataset="local/arxiv-karpathy",
)

index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
response = query_engine.query("How are ImageNet labels validated?")
print(response)
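If you want to reuse the index across sessions, you can persist it with LlamaIndex's standard storage context; a minimal sketch, where ./storage is an arbitrary directory:

# Save the index to disk so it can be reloaded without re-embedding.
index.storage_context.persist(persist_dir="./storage")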

This loader is designed to load cleaned Lilac data back into LlamaIndex, and the resulting Documents can also be used in a LangChain agent.
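For the LangChain route, one option is to convert the loaded Documents into LangChain's format; a minimal sketch, assuming your llama-index version exposes Document.to_langchain_format():

# Convert each LlamaIndex Document to a LangChain Document.
# (to_langchain_format() is an assumption about your installed version.)
langchain_docs = [doc.to_langchain_format() for doc in documents]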
