Apify-Haystack integration

The Apify-Haystack integration allows easy interaction between the Apify platform and Haystack.

Apify is a platform for web scraping, data extraction, and web automation tasks. It provides serverless applications called Actors for different tasks, such as crawling websites or scraping Facebook, Instagram, and Google search results.

Haystack offers an ecosystem of tools for building, managing, and deploying search engines and LLM applications.

Installation

Apify-haystack is available as the apify-haystack package on PyPI.

pip install apify-haystack

Examples

Crawl a website using Apify's Website Content Crawler and convert it to Haystack Documents

You need to have an Apify account and API token to run this example. You can start with a free account at Apify and get your API token.

In the example below, provide your Apify API token and run the script:

import os

from dotenv import load_dotenv
from haystack import Document

from apify_haystack import ApifyDatasetFromActorCall

# Set the APIFY_API_TOKEN environment variable here, or load it from a .env file
load_dotenv()
os.environ.setdefault("APIFY_API_TOKEN", "YOUR-APIFY-API-TOKEN")

actor_id = "apify/website-content-crawler"
run_input = {
    "maxCrawlPages": 3,  # limit the number of pages to crawl
    "startUrls": [{"url": "https://haystack.deepset.ai/"}],
}


def dataset_mapping_function(dataset_item: dict) -> Document:
    return Document(content=dataset_item.get("text"), meta={"url": dataset_item.get("url")})


actor = ApifyDatasetFromActorCall(
    actor_id=actor_id, run_input=run_input, dataset_mapping_function=dataset_mapping_function
)
print(f"Calling the Apify Actor {actor_id} ... crawling will take some time ...")
print("You can monitor the progress at: https://console.apify.com/actors/runs")

dataset = actor.run().get("documents")

print(f"Loaded {len(dataset)} documents from the Apify Actor {actor_id}:")
for d in dataset:
    print(d)
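The dataset_mapping_function above receives one dataset item (a plain dict produced by the Actor) and turns it into a Haystack Document. A minimal stand-in sketch of that transformation, using a dataclass in place of haystack.Document so it runs without any dependencies (the item fields shown are the ones the example relies on):

```python
from dataclasses import dataclass, field


# Stand-in for haystack.Document, so this sketch runs without Haystack installed
@dataclass
class Document:
    content: str
    meta: dict = field(default_factory=dict)


def dataset_mapping_function(dataset_item: dict) -> Document:
    # Keep the page text as content and remember where it came from
    return Document(content=dataset_item.get("text"), meta={"url": dataset_item.get("url")})


# A Website Content Crawler item typically carries "text" and "url" fields
item = {"text": "Haystack is an LLM framework.", "url": "https://haystack.deepset.ai/"}
doc = dataset_mapping_function(item)
print(doc.content)  # Haystack is an LLM framework.
```

In the real example, this function is applied to every item in the Actor's output dataset, so the crawl result arrives as a ready-to-use list of Documents.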

More examples

See the examples directory for more examples; here are a few of them:

  • Load a dataset from Apify and convert it to a Haystack Document
  • Call Website Content Crawler and convert the data into Haystack Documents
  • Crawl websites, retrieve text content, and store it in the InMemoryDocumentStore
  • Retrieval-Augmented Generation (RAG): Extracting text from a website & question answering
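The document-store bullet above boils down to writing the mapped documents into a store and querying them later; a dependency-free sketch of that flow, using plain dicts and a toy store in place of Haystack's InMemoryDocumentStore (the names here are illustrative, not the Haystack API):

```python
# Toy stand-in for a document store, to illustrate the crawl -> store flow
# without requiring haystack to be installed. Real code would use
# Haystack's InMemoryDocumentStore instead.
class ToyDocumentStore:
    def __init__(self) -> None:
        self._docs: list[dict] = []

    def write_documents(self, docs: list[dict]) -> int:
        """Append documents and return how many were written."""
        self._docs.extend(docs)
        return len(docs)

    def count_documents(self) -> int:
        return len(self._docs)


# Documents shaped like the output of dataset_mapping_function (fields assumed)
crawled = [
    {"content": "Haystack overview ...", "meta": {"url": "https://haystack.deepset.ai/"}},
    {"content": "Apify Actors ...", "meta": {"url": "https://apify.com/"}},
]

store = ToyDocumentStore()
written = store.write_documents(crawled)
print(written, store.count_documents())  # 2 2
```

Once documents are in a store, the RAG example builds on the same idea: retrieve the stored text and feed it to an LLM for question answering.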

Support

If you find a bug or issue, please submit an issue on GitHub. For questions, ask on Stack Overflow, in GitHub Discussions, or join our Discord server.

Contributing

Your code contributions are welcome. If you have any ideas for improvements, either submit an issue or create a pull request. For contribution guidelines and the code of conduct, see CONTRIBUTING.md.

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
