
Apify Loaders

pip install llama-index-readers-apify
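To run the examples below, you also need an Apify API token, which is available in your Apify account settings. The snippets pass the token directly as a string; a minimal alternative sketch, assuming you export it under the illustrative environment variable name APIFY_API_TOKEN, is to read it from the environment instead of hard-coding it:

import os

# Hypothetical variable name; use whatever name matches your own setup.
apify_api_token = os.environ["APIFY_API_TOKEN"]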

Apify Actor Loader

Apify is a cloud platform for web scraping and data extraction that provides an ecosystem of more than a thousand ready-made apps, called Actors, for various scraping, crawling, and extraction use cases.

This loader runs a specific Actor and loads its results.

Usage

In this example, we’ll use the Website Content Crawler Actor, which can deeply crawl websites such as documentation, knowledge bases, help centers, or blogs, and extract text content from their pages. The extracted text can then be fed to a vector index or a language model such as GPT to answer questions about it.

To use this loader, you need to have a (free) Apify account and set your Apify API token in the code.

from llama_index.core import Document
from llama_index.readers.apify import ApifyActor

reader = ApifyActor("<Your Apify API token>")

documents = reader.load_data(
    actor_id="apify/website-content-crawler",
    run_input={
        "startUrls": [{"url": "https://docs.llamaindex.ai/en/latest/"}]
    },
    # Map each item in the Actor's resulting dataset to a LlamaIndex Document.
    dataset_mapping_function=lambda item: Document(
        text=item.get("text"),
        metadata={
            "url": item.get("url"),
        },
    ),
)
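
Once loaded, the documents can be indexed and queried directly in LlamaIndex. A minimal sketch, assuming an embedding model and LLM are configured (for example via an OpenAI API key) and using an illustrative question:

from llama_index.core import VectorStoreIndex

# Build a vector index over the crawled pages and ask a question about their content.
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("What is LlamaIndex?"))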

This loader is designed to load data into LlamaIndex; the resulting index can also be exposed as a Tool in a LangChain agent.

Apify Dataset Loader

This loader loads documents from an existing Apify dataset.

Usage

In this example, we’ll load a dataset generated by the Website Content Crawler Actor described above. The extracted text can then be fed to a vector index or a language model such as GPT to answer questions about it.

To use this loader, you need to have a (free) Apify account and set your Apify API token in the code.

from llama_index.core import Document
from llama_index.readers.apify import ApifyDataset

reader = ApifyDataset("<Your Apify API token>")
documents = reader.load_data(
    dataset_id="<Apify Dataset ID>",
    dataset_mapping_function=lambda item: Document(
        text=item.get("text"),
        metadata={
            "url": item.get("url"),
        },
    ),
)
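
If you don't have a dataset ID yet, one way to obtain it is to run an Actor with the apify-client package and read the ID of its default dataset from the returned run object. A sketch, assuming apify-client is installed and reusing the Website Content Crawler from the previous section:

from apify_client import ApifyClient

client = ApifyClient("<Your Apify API token>")

# Run the Actor and wait for it to finish; its results are stored in the run's default dataset.
run = client.actor("apify/website-content-crawler").call(
    run_input={"startUrls": [{"url": "https://docs.llamaindex.ai/en/latest/"}]}
)
dataset_id = run["defaultDatasetId"]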
