llama-index readers apify integration

Project description

Apify Loaders

Apify Actor Loader

Apify is a cloud platform for web scraping and data extraction, which provides an ecosystem of more than a thousand ready-made apps called Actors for various scraping, crawling, and extraction use cases.

This loader runs a specific Actor and loads its results.

Usage

In this example, we’ll use the Website Content Crawler Actor, which can deeply crawl websites such as documentation, knowledge bases, help centers, or blogs, and extract text content from the web pages. The extracted text can then be fed to a vector index or a language model such as GPT in order to answer questions about it.

To use this loader, you need to have a (free) Apify account and set your Apify API token in the code.
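Rather than hard-coding the token into the source, you can read it from an environment variable. This is a minimal sketch; the variable name APIFY_API_TOKEN is this example's own convention, not something the loader requires:

```python
import os

# Read the Apify API token from the environment instead of hard-coding it.
# APIFY_API_TOKEN is just the variable name chosen for this sketch.
token = os.environ.get("APIFY_API_TOKEN", "")
if not token:
    print("APIFY_API_TOKEN is not set; export it before running the loader")
```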

from llama_index import download_loader
from llama_index.readers.schema import Document


# Converts a single record from the Actor's resulting dataset to the LlamaIndex format
def transform_dataset_item(item):
    return Document(
        text=item.get("text"),
        extra_info={
            "url": item.get("url"),
        },
    )


ApifyActor = download_loader("ApifyActor")

reader = ApifyActor("<My Apify API token>")
documents = reader.load_data(
    actor_id="apify/website-content-crawler",
    run_input={
        "startUrls": [{"url": "https://gpt-index.readthedocs.io/en/latest"}]
    },
    dataset_mapping_function=transform_dataset_item,
)
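The mapping function is called once per record of the Actor's resulting dataset, and each record arrives as a plain dict. The following stdlib-only sketch shows what the mapping does, using a stand-in Document class (the real one comes from llama_index) and a made-up sample record:

```python
from dataclasses import dataclass, field


# Stand-in for llama_index's Document so this sketch runs without
# llama_index installed; it mirrors the two fields used here.
@dataclass
class Document:
    text: str
    extra_info: dict = field(default_factory=dict)


def transform_dataset_item(item):
    return Document(
        text=item.get("text"),
        extra_info={"url": item.get("url")},
    )


# A hypothetical record, shaped like one crawler result
sample = {
    "url": "https://example.com/docs/intro",
    "text": "Welcome to the documentation.",
    "crawl": {"depth": 1},  # extra fields are simply ignored by the mapping
}

doc = transform_dataset_item(sample)
print(doc.text)               # Welcome to the documentation.
print(doc.extra_info["url"])  # https://example.com/docs/intro
```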

This loader is designed to load data into LlamaIndex and/or to be used subsequently as a Tool in a LangChain Agent.

Apify Dataset Loader

Apify is a cloud platform for web scraping and data extraction, which provides an ecosystem of more than a thousand ready-made apps called Actors for various scraping, crawling, and extraction use cases.

This loader loads documents from an existing Apify dataset.

Usage

In this example, we’ll load a dataset generated by the Website Content Crawler Actor, which can deeply crawl websites such as documentation, knowledge bases, help centers, or blogs, and extract text content from the web pages. The extracted text can then be fed to a vector index or a language model such as GPT in order to answer questions about it.

To use this loader, you need to have a (free) Apify account and set your Apify API token in the code.

from llama_index import download_loader
from llama_index.readers.schema import Document


# Converts a single record from the Apify dataset to the LlamaIndex format
def transform_dataset_item(item):
    return Document(
        text=item.get("text"),
        extra_info={
            "url": item.get("url"),
        },
    )


ApifyDataset = download_loader("ApifyDataset")

reader = ApifyDataset("<Your Apify API token>")
documents = reader.load_data(
    dataset_id="<Apify Dataset ID>",
    dataset_mapping_function=transform_dataset_item,
)
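Because item.get("text") returns None for records where no text was extracted, it can be worth filtering out empty documents before indexing. A stdlib-only sketch, again with a stand-in Document class and made-up sample data:

```python
from dataclasses import dataclass, field
from typing import Optional


# Stand-in for llama_index's Document so this sketch runs standalone.
@dataclass
class Document:
    text: Optional[str]
    extra_info: dict = field(default_factory=dict)


documents = [
    Document(text="Some page text", extra_info={"url": "https://example.com/a"}),
    Document(text=None, extra_info={"url": "https://example.com/b"}),
    Document(text="", extra_info={"url": "https://example.com/c"}),
]

# Keep only documents whose pages yielded non-empty text.
non_empty = [d for d in documents if d.text]
print(len(non_empty))  # 1
```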


Download files


Source Distribution

llama_index_readers_apify-0.1.3.tar.gz (3.6 kB)

Uploaded Source

Built Distribution


llama_index_readers_apify-0.1.3-py3-none-any.whl (4.7 kB)

Uploaded Python 3

File details

Details for the file llama_index_readers_apify-0.1.3.tar.gz.

File metadata

  • Download URL: llama_index_readers_apify-0.1.3.tar.gz
  • Size: 3.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.7.1 CPython/3.10.13 Darwin/23.0.0

File hashes

Hashes for llama_index_readers_apify-0.1.3.tar.gz:

  • SHA256: c7f400917346783c7f9ec6b27eebd30fa154a51c8f1f0040a5b52d977efd466b
  • MD5: 592502780cc64a9e31860cb397e019df
  • BLAKE2b-256: be317642742b80b4e57c33bd19cdb15253cb091d1e2fc076e7a69c029699f07d


File details

Details for the file llama_index_readers_apify-0.1.3-py3-none-any.whl.

File hashes

Hashes for llama_index_readers_apify-0.1.3-py3-none-any.whl:

  • SHA256: c677349a9c97b0e5661efa4434351bc6bb9ccdc7f0de33e3e28e17c77b442ec1
  • MD5: 15c3bae35db0a469c15c87f95785b94e
  • BLAKE2b-256: 9682ef90d442a95c3521caff767139ff297511dcb543b9264ea255034a21e037

