Apify-Haystack integration

The Apify-Haystack integration allows easy interaction between the Apify platform and Haystack.

Apify is a platform for web scraping, data extraction, and web automation. It provides serverless applications called Actors for tasks such as crawling websites and scraping Facebook, Instagram, or Google search results.

Haystack offers an ecosystem of tools for building, managing, and deploying search engines and LLM applications.

Installation

Apify-haystack is available on PyPI as the apify-haystack package.

pip install apify-haystack
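
To verify the installation, you can try importing the main component used in this README:

from apify_haystack import ApifyDatasetFromActorCall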

Examples

Crawl a website using Apify's Website Content Crawler and convert it to Haystack Documents

You need to have an Apify account and API token to run this example. You can start with a free account at Apify and get your API token.

In the example below, set the APIFY_API_TOKEN environment variable (or store it in a .env file) and run the script:

from dotenv import load_dotenv
from haystack import Document

from apify_haystack import ApifyDatasetFromActorCall

# Load the APIFY_API_TOKEN environment variable from a .env file
# (the Actor call reads the token from the environment)
load_dotenv()

actor_id = "apify/website-content-crawler"
run_input = {
    "maxCrawlPages": 3,  # limit the number of pages to crawl
    "startUrls": [{"url": "https://haystack.deepset.ai/"}],
}


def dataset_mapping_function(dataset_item: dict) -> Document:
    # Map each item from the Apify dataset to a Haystack Document
    # (page text as content, source URL in metadata)
    return Document(content=dataset_item.get("text"), meta={"url": dataset_item.get("url")})


actor = ApifyDatasetFromActorCall(
    actor_id=actor_id, run_input=run_input, dataset_mapping_function=dataset_mapping_function
)
print(f"Calling the Apify actor {actor_id} ... crawling will take some time ...")
print("You can monitor the progress at: https://console.apify.com/actors/runs")

dataset = actor.run().get("documents")

print(f"Loaded {len(dataset)} documents from the Apify Actor {actor_id}:")
for d in dataset:
    print(d)
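
The returned Documents can then be fed into the rest of a Haystack pipeline. As a minimal follow-up sketch (not part of the README's script), here is how the crawled documents could be written to Haystack's InMemoryDocumentStore:

from haystack.document_stores.in_memory import InMemoryDocumentStore

# Store the crawled documents in an in-memory document store
document_store = InMemoryDocumentStore()
document_store.write_documents(dataset)
print(f"Stored {document_store.count_documents()} documents")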

More examples

See the examples directory for more; here are a few of them:

  • Load a dataset from Apify and convert it to a Haystack Document (see the sketch after this list)
  • Call Website Content Crawler and convert the data into the Haystack Documents
  • Crawl websites, retrieve text content, and store it in the InMemoryDocumentStore
  • Retrieval-Augmented Generation (RAG): Extracting text from a website & question answering
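
For the first item, loading an existing Apify dataset (instead of calling an Actor) could look roughly like this. This is a minimal sketch that assumes the package exports an ApifyDatasetLoader component with the same run()/"documents" interface as the Actor call above; the dataset ID is a placeholder:

from haystack import Document

from apify_haystack import ApifyDatasetLoader

# "YOUR-DATASET-ID" is a placeholder; use the ID of a dataset from one of your Actor runs
loader = ApifyDatasetLoader(
    dataset_id="YOUR-DATASET-ID",
    dataset_mapping_function=lambda item: Document(
        content=item.get("text"), meta={"url": item.get("url")}
    ),
)
documents = loader.run().get("documents")
print(f"Loaded {len(documents)} documents")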

Support

If you find a bug or have an issue, please submit an issue on GitHub. For questions, you can ask on Stack Overflow, in GitHub Discussions, or join our Discord server.

Contributing

Your code contributions are welcome. If you have any ideas for improvements, either submit an issue or create a pull request. For contribution guidelines and the code of conduct, see CONTRIBUTING.md.

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
