Apify SDK for Python

The Apify SDK for Python is the official library to create Apify Actors in Python. It provides useful features like Actor lifecycle management, local storage emulation, and Actor event handling.
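
To give a feel for these features, here is a minimal, illustrative sketch of an Actor that reads its input and writes to the default storages; the greeting input field used below is just an example, not part of any schema.

from apify import Actor


async def main() -> None:
    # Entering the context manager initializes the Actor and its storages
    # (emulated on the local filesystem when run outside the Apify platform);
    # exiting it flushes data and sets the exit status.
    async with Actor:
        # Read the Actor input; 'greeting' is a hypothetical input field.
        actor_input = await Actor.get_input() or {}
        message = actor_input.get('greeting', 'Hello from an Actor!')

        # Save a value to the default key-value store and push a record
        # to the default dataset.
        await Actor.set_value('OUTPUT', {'message': message})
        await Actor.push_data({'message': message})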

If you just need to access the Apify API from your Python applications, check out the Apify Client for Python instead.

Installation

The Apify SDK for Python is available on PyPI as the apify package. To install it with pip, run the following:

pip install apify

For users interested in integrating Apify with Scrapy, we provide a package extra called scrapy. To install Apify with the scrapy extra, use the following command:

pip install apify[scrapy]

Documentation

For usage instructions, check the documentation on Apify Docs.

Examples

Below are a few examples demonstrating how to use the Apify SDK with popular web scraping libraries.

Apify SDK with HTTPX and BeautifulSoup

This example illustrates how to integrate the Apify SDK with HTTPX and BeautifulSoup to scrape data from web pages.

from bs4 import BeautifulSoup
from httpx import AsyncClient

from apify import Actor


async def main() -> None:
    async with Actor:
        # Retrieve the Actor input, and use default values if not provided.
        actor_input = await Actor.get_input() or {}
        start_urls = actor_input.get('start_urls', [{'url': 'https://apify.com'}])

        # Open the default request queue for handling URLs to be processed.
        request_queue = await Actor.open_request_queue()

        # Enqueue the start URLs.
        for start_url in start_urls:
            url = start_url.get('url')
            await request_queue.add_request(url)

        # Process the URLs from the request queue.
        while request := await request_queue.fetch_next_request():
            Actor.log.info(f'Scraping {request.url} ...')

            # Fetch the HTTP response from the specified URL using HTTPX.
            async with AsyncClient() as client:
                response = await client.get(request.url)

            # Parse the HTML content using Beautiful Soup.
            soup = BeautifulSoup(response.content, 'html.parser')

            # Extract the desired data.
            data = {
                'url': request.url,
                'title': soup.title.string,
                'h1s': [h1.text for h1 in soup.find_all('h1')],
                'h2s': [h2.text for h2 in soup.find_all('h2')],
                'h3s': [h3.text for h3 in soup.find_all('h3')],
            }

            # Store the extracted data to the default dataset.
            await Actor.push_data(data)

            # Mark the request as handled so it is not processed again.
            await request_queue.mark_request_as_handled(request)

Apify SDK with PlaywrightCrawler from Crawlee

This example demonstrates how to use the Apify SDK alongside PlaywrightCrawler from Crawlee to perform web scraping.

from crawlee.crawlers import PlaywrightCrawler, PlaywrightCrawlingContext

from apify import Actor


async def main() -> None:
    async with Actor:
        # Retrieve the Actor input, and use default values if not provided.
        actor_input = await Actor.get_input() or {}
        start_urls = [url.get('url') for url in actor_input.get('start_urls', [{'url': 'https://apify.com'}])]

        # Exit if no start URLs are provided.
        if not start_urls:
            Actor.log.info('No start URLs specified in Actor input, exiting...')
            await Actor.exit()

        # Create a crawler.
        crawler = PlaywrightCrawler(
            # Limit the crawl to 50 requests at most. Remove or increase the limit to crawl all links.
            max_requests_per_crawl=50,
            headless=True,
        )

        # Define a request handler, which will be called for every request.
        @crawler.router.default_handler
        async def request_handler(context: PlaywrightCrawlingContext) -> None:
            url = context.request.url
            Actor.log.info(f'Scraping {url}...')

            # Extract the desired data.
            data = {
                'url': context.request.url,
                'title': await context.page.title(),
                'h1s': [await h1.text_content() for h1 in await context.page.locator('h1').all()],
                'h2s': [await h2.text_content() for h2 in await context.page.locator('h2').all()],
                'h3s': [await h3.text_content() for h3 in await context.page.locator('h3').all()],
            }

            # Store the extracted data to the default dataset.
            await context.push_data(data)

            # Enqueue additional links found on the current page.
            await context.enqueue_links()

        # Run the crawler with the starting URLs.
        await crawler.run(start_urls)

What are Actors?

Actors are serverless cloud programs that can do almost anything a human can do in a web browser. They can do anything from small tasks such as filling in forms or unsubscribing from online services, all the way up to scraping and processing vast numbers of web pages.

They can be run either locally or on the Apify platform, where you can run them at scale, monitor them, schedule them, or publish and monetize them.

If you're new to Apify, learn what Apify is in the Apify platform documentation.

Creating Actors

To create and run Actors through Apify Console, see the Console documentation.

To create and run Python Actors locally, check the documentation on running Actors locally.

Guides

To see how you can use the Apify SDK with other popular libraries used for web scraping, check out our guides for using Requests and HTTPX, Beautiful Soup, Playwright, Selenium, or Scrapy.

Usage concepts

To learn more about the features of the Apify SDK and how to use them, check out the Usage Concepts section in the sidebar, particularly the guides for the Actor lifecycle, working with storages, handling Actor events, and using proxies.
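
For a rough, combined sketch of several of these concepts in one place (the event handler, proxy group, and stored values below are illustrative assumptions, not a prescribed setup):

from apify import Actor, Event


async def main() -> None:
    async with Actor:
        # Handle an Actor event: persist progress whenever the platform asks
        # the Actor to snapshot its state.
        async def save_state(event_data: object) -> None:
            await Actor.set_value('STATE', {'progress': 'example'})

        Actor.on(Event.PERSIST_STATE, save_state)

        # Work with storages (emulated locally, backed by the platform in the cloud).
        dataset = await Actor.open_dataset()
        key_value_store = await Actor.open_key_value_store()
        await dataset.push_data({'status': 'running'})
        await key_value_store.set_value('OUTPUT', {'status': 'running'})

        # Use Apify Proxy; 'RESIDENTIAL' is only an example group, and the call
        # may return None when no proxy credentials are configured locally.
        proxy_configuration = await Actor.create_proxy_configuration(groups=['RESIDENTIAL'])
        if proxy_configuration is not None:
            proxy_url = await proxy_configuration.new_url()
            Actor.log.info(f'Using proxy URL: {proxy_url}')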
