Apify-haystack integration

Project description


The Apify-Haystack integration allows easy interaction between the Apify platform and Haystack.

Apify is a platform for web scraping, data extraction, and web automation. It provides serverless programs called Actors for tasks such as crawling websites and scraping Facebook, Instagram, or Google search results.

Haystack offers an ecosystem of tools for building, managing, and deploying search engines and LLM applications.

Installation

Apify-haystack is distributed as the apify-haystack package on PyPI:

pip install apify-haystack

Examples

Crawl a website using Apify's Website Content Crawler and convert it to Haystack Documents

You need to have an Apify account and API token to run this example. You can start with a free account at Apify and get your API token.

In the example below, set your Apify API token (directly in the environment or via a .env file) and run the script:

from dotenv import load_dotenv
from haystack import Document

from apify_haystack import ApifyDatasetFromActorCall

# Set APIFY_API_TOKEN in the environment, or put it in a .env file and let
# load_dotenv() export it; the integration reads the token from there
load_dotenv()

actor_id = "apify/website-content-crawler"
run_input = {
    "maxCrawlPages": 3,  # limit the number of pages to crawl
    "startUrls": [{"url": "https://haystack.deepset.ai/"}],
}


def dataset_mapping_function(dataset_item: dict) -> Document:
    return Document(content=dataset_item.get("text"), meta={"url": dataset_item.get("url")})


actor = ApifyDatasetFromActorCall(
    actor_id=actor_id, run_input=run_input, dataset_mapping_function=dataset_mapping_function
)
print(f"Calling the Apify actor {actor_id} ... crawling will take some time ...")
print("You can monitor the progress at: https://console.apify.com/actors/runs")

dataset = actor.run().get("documents")

print(f"Loaded {len(dataset)} documents from the Apify Actor {actor_id}:")
for d in dataset:
    print(d)
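The dataset_mapping_function above receives one dataset item per crawled page as a plain dict and picks out the fields it needs. A minimal offline check of that mapping logic, using a hypothetical sample item and a plain dict in place of Haystack's Document so it runs without any of the libraries installed or a live crawl:

```python
# Hypothetical sample of a single Website Content Crawler dataset item;
# the "text" and "url" fields match those used in the example above.
sample_item = {
    "url": "https://haystack.deepset.ai/",
    "text": "Haystack is an open-source LLM framework.",
    "crawl": {"depth": 0},  # extra fields are simply ignored by the mapping
}


def map_item(dataset_item: dict) -> dict:
    # Same field selection as dataset_mapping_function above, returning
    # a plain dict instead of a haystack Document for illustration
    return {"content": dataset_item.get("text"), "meta": {"url": dataset_item.get("url")}}


doc = map_item(sample_item)
print(doc["content"])  # Haystack is an open-source LLM framework.
```

Any item field the function does not mention is dropped, so the mapping function also controls which crawler metadata ends up in your Documents.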

More examples

See the examples directory in the repository for more; here is a selection:

  • Load a dataset from Apify and convert it to a Haystack Document
  • Call Website Content Crawler and convert the data into the Haystack Documents
  • Crawl websites, retrieve text content, and store it in the InMemoryDocumentStore
  • Retrieval-Augmented Generation (RAG): extracting text from a website and question answering
  • Analyze your Instagram comments' vibe with Apify and Haystack

Support

If you find a bug or an issue, please submit it on GitHub. For questions, ask on Stack Overflow or in GitHub Discussions, or join our Discord server.

Contributing

Your code contributions are welcome. If you have any ideas for improvements, either submit an issue or create a pull request. For contribution guidelines and the code of conduct, see CONTRIBUTING.md.

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

Download files

Download the file for your platform. If you're not sure which to choose, see the Python Packaging User Guide's tutorial on installing packages.

Source Distribution

apify_haystack-0.1.5.tar.gz (15.0 kB)

Uploaded Source

Built Distribution

apify_haystack-0.1.5-py3-none-any.whl (16.7 kB)

Uploaded Python 3

File details

Details for the file apify_haystack-0.1.5.tar.gz.

File metadata

  • Download URL: apify_haystack-0.1.5.tar.gz
  • Upload date:
  • Size: 15.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.6

File hashes

Hashes for apify_haystack-0.1.5.tar.gz

  • SHA256: 52dc45a7fa11a8e90146a224f4ff4fe3ae3bdd05051a254c6b87c5992c4228d2
  • MD5: 57beffbcde930ddc2f34544feb444046
  • BLAKE2b-256: e6973cab2187fd1ff819028bd2ca2a92581b8f0e8ebfba597cc1a66515e48db9

See the PyPI documentation for more details on using file hashes.
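The published digests can be used to verify a download before installing it. A short sketch of such a check with Python's standard hashlib (the demo hashes a throwaway temporary file; for a real check, point sha256_of at the downloaded apify_haystack-0.1.5.tar.gz and compare the result against the SHA256 digest listed above):

```python
import hashlib
import tempfile


def sha256_of(path: str) -> str:
    """Stream a file in chunks and return its hex SHA256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


# Demo on a throwaway file with known contents
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name

print(sha256_of(path))
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```

pip performs the same comparison automatically when you pin hashes in a requirements file and install with --require-hashes.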

File details

Details for the file apify_haystack-0.1.5-py3-none-any.whl.

File metadata

File hashes

Hashes for apify_haystack-0.1.5-py3-none-any.whl

  • SHA256: 1a961315b83251763829bc2fd5d9be7488b0b05742d77624b6174fee8bd9be05
  • MD5: babb4712fb31eaf8a3820de2d2e09527
  • BLAKE2b-256: 337655b31a9ef689aabf5ef8a8acb91d9ff944e67a547ca72238400b7574f296

See the PyPI documentation for more details on using file hashes.
