Apify-Haystack integration

The Apify-Haystack integration allows easy interaction between the Apify platform and Haystack.

Apify is a platform for web scraping, data extraction, and web automation. It provides serverless applications called Actors for tasks such as crawling websites and scraping Facebook, Instagram, or Google search results.

Haystack offers an ecosystem of tools for building, managing, and deploying search engines and LLM applications.

Installation

Apify-haystack is available as the apify-haystack package on PyPI.

pip install apify-haystack
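
The example below also uses python-dotenv to load the API token from a .env file; if you don't have it yet, install it the same way:

pip install python-dotenv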

Examples

Crawl a website using Apify's Website Content Crawler and convert it to Haystack Documents

You need an Apify account and an API token to run this example. You can start with a free account at Apify and get your API token.

In the example below, set apify_api_token (or put the token in a .env file) and run the script:

import os

from dotenv import load_dotenv
from haystack import Document

from apify_haystack import ApifyDatasetFromActorCall

# Set your Apify API token here, or store it as APIFY_API_TOKEN in a .env file
apify_api_token = ""

if apify_api_token:
    os.environ["APIFY_API_TOKEN"] = apify_api_token
else:
    load_dotenv()  # reads APIFY_API_TOKEN from a .env file, if present

actor_id = "apify/website-content-crawler"
run_input = {
    "maxCrawlPages": 3,  # limit the number of pages to crawl
    "startUrls": [{"url": "https://haystack.deepset.ai/"}],
}


def dataset_mapping_function(dataset_item: dict) -> Document:
    """Map a single Apify dataset item to a Haystack Document."""
    return Document(content=dataset_item.get("text"), meta={"url": dataset_item.get("url")})


actor = ApifyDatasetFromActorCall(
    actor_id=actor_id, run_input=run_input, dataset_mapping_function=dataset_mapping_function
)
print(f"Calling the Apify actor {actor_id} ... crawling will take some time ...")
print("You can monitor the progress at: https://console.apify.com/actors/runs")

dataset = actor.run().get("documents")

print(f"Loaded {len(dataset)} documents from the Apify Actor {actor_id}:")
for d in dataset:
    print(d)
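
The run() call returns a dictionary; its "documents" key holds the list of Haystack Document objects produced by dataset_mapping_function, one per item in the Actor's dataset.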

More examples

See the examples directory for the complete list. A few of them:

  • Load a dataset from Apify and convert it to a Haystack Document
  • Call Website Content Crawler and convert the data into the Haystack Documents
  • Crawl websites, retrieve text content, and store it in the InMemoryDocumentStore (a short sketch follows this list)
  • Retrieval-Augmented Generation (RAG): Extracting text from a website & question answering
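
Below is a minimal sketch of the InMemoryDocumentStore step, assuming the dataset list of Documents produced by the crawler example above (the full example in the repository may differ):

from haystack.document_stores.in_memory import InMemoryDocumentStore

# Write the crawled Haystack Documents into Haystack's in-memory store
# (`dataset` is the list of Documents from the example above)
document_store = InMemoryDocumentStore()
document_store.write_documents(dataset)
print(f"Stored {document_store.count_documents()} documents")

Once stored, the documents can be queried with a Haystack retriever, which is the starting point for a RAG pipeline like the last example listed above.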

Support

If you find a bug or an issue, please submit an issue on GitHub. For questions, you can ask on Stack Overflow, in GitHub Discussions, or on our Discord server.

Contributing

Your code contributions are welcome. If you have any ideas for improvements, either submit an issue or create a pull request. For contribution guidelines and the code of conduct, see CONTRIBUTING.md.

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
