llama-index readers apify integration

Project description

Apify Loaders

Apify Actor Loader

Apify is a cloud platform for web scraping and data extraction, which provides an ecosystem of more than a thousand ready-made apps called Actors for various scraping, crawling, and extraction use cases.

This loader runs a specific Actor and loads its results.

Usage

In this example, we’ll use the Website Content Crawler Actor, which can deeply crawl websites such as documentation, knowledge bases, help centers, or blogs, and extract text content from the web pages. The extracted text can then be fed to a vector index or a language model like GPT to answer questions about it.

To use this loader, you need to have a (free) Apify account and set your Apify API token in the code.

from llama_index import download_loader
from llama_index.readers.schema import Document


# Converts a single record from the Actor's resulting dataset to the LlamaIndex format
def transform_dataset_item(item):
    return Document(
        text=item.get("text"),
        extra_info={
            "url": item.get("url"),
        },
    )


ApifyActor = download_loader("ApifyActor")

reader = ApifyActor("<My Apify API token>")
documents = reader.load_data(
    actor_id="apify/website-content-crawler",
    run_input={
        "startUrls": [{"url": "https://gpt-index.readthedocs.io/en/latest"}]
    },
    dataset_mapping_function=transform_dataset_item,
)

This loader is designed to load data into LlamaIndex and can subsequently be used as a Tool in a LangChain Agent. See here for examples.
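To illustrate what the `dataset_mapping_function` receives and returns, here is a minimal, self-contained sketch. The `Document` stand-in and the sample record are hypothetical, for illustration only; in real use you would import `Document` from llama_index as shown above, and the records would come from the Actor's dataset.

```python
from dataclasses import dataclass, field


# Hypothetical stand-in for llama_index's Document class, so this sketch
# runs without llama_index installed or an Apify token.
@dataclass
class Document:
    text: str
    extra_info: dict = field(default_factory=dict)


def transform_dataset_item(item):
    # Each dataset record is a plain dict; pick out the fields you need.
    return Document(
        text=item.get("text"),
        extra_info={"url": item.get("url")},
    )


# Hypothetical record, shaped like Website Content Crawler output
sample = {
    "url": "https://example.com/docs",
    "text": "Welcome to the documentation.",
}
doc = transform_dataset_item(sample)
print(doc.text)        # Welcome to the documentation.
print(doc.extra_info)  # {'url': 'https://example.com/docs'}
```

The same mapping function works unchanged with the real loader, since it only touches the record dict and the `Document` constructor arguments.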

Apify Dataset Loader

As described above, Apify is a cloud platform for web scraping and data extraction with an ecosystem of more than a thousand ready-made Actors.

This loader loads documents from an existing Apify dataset.

Usage

In this example, we’ll load a dataset generated by the Website Content Crawler Actor, which can deeply crawl websites such as documentation, knowledge bases, help centers, or blogs, and extract text content from the web pages. The extracted text can then be fed to a vector index or a language model like GPT to answer questions about it.

To use this loader, you need to have a (free) Apify account and set your Apify API token in the code.

from llama_index import download_loader
from llama_index.readers.schema import Document


# Converts a single record from the Apify dataset to the LlamaIndex format
def transform_dataset_item(item):
    return Document(
        text=item.get("text"),
        extra_info={
            "url": item.get("url"),
        },
    )


ApifyDataset = download_loader("ApifyDataset")

reader = ApifyDataset("<Your Apify API token>")
documents = reader.load_data(
    dataset_id="<Apify Dataset ID>",
    dataset_mapping_function=transform_dataset_item,
)
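Dataset records sometimes lack usable text (for example, pages that failed to render). A common pattern is to skip such records. Whether the loader itself tolerates `None` results from the mapping function may depend on the version, so this hedged sketch, again using a hypothetical `Document` stand-in and made-up records, filters a plain list instead:

```python
from dataclasses import dataclass, field


# Hypothetical stand-in for llama_index's Document, for illustration only
@dataclass
class Document:
    text: str
    extra_info: dict = field(default_factory=dict)


def transform_dataset_item(item):
    text = item.get("text")
    if not text:  # skip records with no extracted text
        return None
    return Document(text=text, extra_info={"url": item.get("url")})


# Hypothetical dataset records
records = [
    {"url": "https://example.com/a", "text": "Page A content"},
    {"url": "https://example.com/b", "text": ""},  # nothing extracted
]
documents = [
    d for d in (transform_dataset_item(r) for r in records) if d is not None
]
print(len(documents))  # 1
```

Filtering before indexing keeps empty documents out of the vector index, where they would add noise without contributing retrievable content.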

Download files

Download the file for your platform.

Source Distribution

llama_index_readers_apify-0.1.0.tar.gz (3.6 kB)

Uploaded Source

Built Distribution

llama_index_readers_apify-0.1.0-py3-none-any.whl (5.3 kB)

Uploaded Python 3

File details

Details for the file llama_index_readers_apify-0.1.0.tar.gz.

File metadata

  • Download URL: llama_index_readers_apify-0.1.0.tar.gz
  • Upload date:
  • Size: 3.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.7.1 CPython/3.10.13 Darwin/23.0.0

File hashes

Hashes for llama_index_readers_apify-0.1.0.tar.gz:

  • SHA256: 8973ff383c40e2faea725980448ebaf70ff3904c17e1a03698c23260124511c4
  • MD5: 9cd13bea2fcf6c413c674c3dc02f771e
  • BLAKE2b-256: 2f4fe78e0ed35ca1389061ff3bc2cbb05e3017be02a3753422b64121f5685e23

File details

Details for the file llama_index_readers_apify-0.1.0-py3-none-any.whl.

File metadata

File hashes

Hashes for llama_index_readers_apify-0.1.0-py3-none-any.whl:

  • SHA256: d8d609e4c59b4cb2dfae8d3f39d2f442fc183fd29c196f42608ee25f1f7b3e90
  • MD5: d4617db17059a112f54a81586fdb0ecc
  • BLAKE2b-256: da924d197fe6629090614e2e04d8ac6b809dda651f863ee2ff171a18211478c9
