
llama-index-readers-apify integration

Project description

Apify Loaders

Apify Actor Loader

Apify is a cloud platform for web scraping and data extraction, which provides an ecosystem of more than a thousand ready-made apps called Actors for various scraping, crawling, and extraction use cases.

This loader runs a specific Actor and loads its results.

Usage

In this example, we’ll use the Website Content Crawler Actor, which can deeply crawl websites such as documentation, knowledge bases, help centers, or blogs, and extract text content from the web pages. The extracted text can then be fed to a vector index or a language model such as GPT to answer questions from it.

To use this loader, you need to have a (free) Apify account and set your Apify API token in the code.

from llama_index import download_loader
from llama_index.readers.schema import Document


# Converts a single record from the Actor's resulting dataset to the LlamaIndex format
def transform_dataset_item(item):
    return Document(
        text=item.get("text"),
        extra_info={
            "url": item.get("url"),
        },
    )


ApifyActor = download_loader("ApifyActor")

reader = ApifyActor("<Your Apify API token>")
documents = reader.load_data(
    actor_id="apify/website-content-crawler",
    run_input={
        "startUrls": [{"url": "https://gpt-index.readthedocs.io/en/latest"}]
    },
    dataset_mapping_function=transform_dataset_item,
)
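Each item in the Actor's resulting dataset is a plain JSON object, and the mapping function simply picks out the fields LlamaIndex needs. The following standalone sketch illustrates that transformation, using a dataclass as a stand-in for llama_index's Document and a made-up sample record (real crawler output will carry additional fields):

```python
from dataclasses import dataclass, field


# Stand-in for llama_index's Document class, for illustration only
@dataclass
class Document:
    text: str
    extra_info: dict = field(default_factory=dict)


def transform_dataset_item(item):
    # Keep the page text as the document body and the URL as metadata
    return Document(
        text=item.get("text"),
        extra_info={"url": item.get("url")},
    )


# A hypothetical record in the shape the crawler's dataset items take
sample_item = {
    "url": "https://gpt-index.readthedocs.io/en/latest",
    "text": "LlamaIndex is a data framework for LLM applications.",
    "crawl": {"depth": 0},  # extra fields are simply ignored by the mapping
}

doc = transform_dataset_item(sample_item)
print(doc.text)
print(doc.extra_info["url"])
```

Any fields not referenced by the mapping function are dropped, so the function is the single place to decide which parts of a record become document text and which become metadata.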

This loader is designed to load data into LlamaIndex; the resulting documents can also be used as a Tool in a LangChain agent.

Apify Dataset Loader


This loader loads documents from an existing Apify dataset.

Usage

In this example, we’ll load a dataset generated by a run of the Website Content Crawler Actor described above.

To use this loader, you need to have a (free) Apify account and set your Apify API token in the code.

from llama_index import download_loader
from llama_index.readers.schema import Document


# Converts a single record from the Apify dataset to the LlamaIndex format
def transform_dataset_item(item):
    return Document(
        text=item.get("text"),
        extra_info={
            "url": item.get("url"),
        },
    )


ApifyDataset = download_loader("ApifyDataset")

reader = ApifyDataset("<Your Apify API token>")
documents = reader.load_data(
    dataset_id="<Apify Dataset ID>",
    dataset_mapping_function=transform_dataset_item,
)
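A crawl dataset can contain records with empty or missing text, for example pages that failed to render. Since the loader applies dataset_mapping_function to every item, one option is to filter the results afterwards. The sketch below shows that idea on plain dicts standing in for hypothetical dataset records (the record contents are illustrative, not real crawler output):

```python
# Hypothetical records, in the shape a crawl run might return them
records = [
    {"url": "https://example.com/a", "text": "Page A content"},
    {"url": "https://example.com/b", "text": ""},  # rendered empty
    {"url": "https://example.com/c"},              # no "text" field at all
]


def has_text(item):
    """Keep only records whose "text" field is a non-empty string."""
    return bool(item.get("text"))


usable = [item for item in records if has_text(item)]
urls = [item["url"] for item in usable]
print(urls)  # → ['https://example.com/a']
```

Filtering before indexing avoids creating empty documents, which would add noise to a vector index without contributing any retrievable content.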

Download files

Download the file for your platform.

Source Distribution

llama_index_readers_apify-0.1.2.tar.gz (3.7 kB, source)

Built Distribution


llama_index_readers_apify-0.1.2-py3-none-any.whl (5.3 kB, Python 3 wheel)

File details

Details for the file llama_index_readers_apify-0.1.2.tar.gz.

File metadata

  • Download URL: llama_index_readers_apify-0.1.2.tar.gz
  • Upload date:
  • Size: 3.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.7.1 CPython/3.10.13 Darwin/23.0.0

File hashes

Hashes for llama_index_readers_apify-0.1.2.tar.gz
Algorithm Hash digest
SHA256 8312f269edf83725108c47207dd7fdb8b0269b4df8263bad3f92bc12ec6ce060
MD5 02b049176a90fc1718ce5a2ec205581e
BLAKE2b-256 3d80d69890942d0ef0dfc47c6bf63f2180791ccad85dd906a78f70be93f789c4


File details

Details for the file llama_index_readers_apify-0.1.2-py3-none-any.whl.

File hashes

Hashes for llama_index_readers_apify-0.1.2-py3-none-any.whl
Algorithm Hash digest
SHA256 5c1477f1e1ee70cf3ed72df16fad17f96739c5ba696f1b7a426c65ed5e88c987
MD5 1c1bf751f6b08db52a2b7d49d56c58cf
BLAKE2b-256 171d0bb4532a376bcbf4b3c2caf62bf9da57a9fea3017e22628f88fc41485db2

