Apify SDK for Python

The Apify SDK for Python is the official library to create Apify Actors in Python. It provides useful features like Actor lifecycle management, local storage emulation, and Actor event handling.

If you just need to access the Apify API from your Python applications, check out the Apify Client for Python instead.

Installation

The Apify SDK for Python is available on PyPI as the apify package. For a default installation using pip, run:

pip install apify

For users who want to integrate Apify with Scrapy, we provide a package extra called scrapy. To install Apify with the scrapy extra, run:

pip install apify[scrapy]

(On shells such as zsh that treat square brackets as glob characters, quote the specifier: pip install "apify[scrapy]".)

Documentation

For usage instructions, check the documentation on Apify Docs.

Examples

Below are a few examples demonstrating how to use the Apify SDK with popular web scraping libraries.

Apify SDK with HTTPX and BeautifulSoup

This example illustrates how to integrate the Apify SDK with HTTPX and BeautifulSoup to scrape data from web pages.

from bs4 import BeautifulSoup
from httpx import AsyncClient

from apify import Actor


async def main() -> None:
    async with Actor:
        # Retrieve the Actor input, and use default values if not provided.
        actor_input = await Actor.get_input() or {}
        start_urls = actor_input.get('start_urls', [{'url': 'https://apify.com'}])

        # Open the default request queue for handling URLs to be processed.
        request_queue = await Actor.open_request_queue()

        # Enqueue the start URLs.
        for start_url in start_urls:
            url = start_url.get('url')
            await request_queue.add_request(url)

        # Process the URLs from the request queue.
        while request := await request_queue.fetch_next_request():
            Actor.log.info(f'Scraping {request.url} ...')

            # Fetch the HTTP response from the specified URL using HTTPX.
            async with AsyncClient() as client:
                response = await client.get(request.url)

            # Parse the HTML content using Beautiful Soup.
            soup = BeautifulSoup(response.content, 'html.parser')

            # Extract the desired data.
            data = {
                'url': request.url,
                'title': soup.title.string,
                'h1s': [h1.text for h1 in soup.find_all('h1')],
                'h2s': [h2.text for h2 in soup.find_all('h2')],
                'h3s': [h3.text for h3 in soup.find_all('h3')],
            }

            # Store the extracted data to the default dataset.
            await Actor.push_data(data)

            # Mark the request as handled so it is not fetched again.
            await request_queue.mark_request_as_handled(request)
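The extraction step above relies on BeautifulSoup. As a rough standard-library-only sketch of the same idea (collecting the page title and h1–h3 heading texts), assuming well-formed HTML:

```python
from html.parser import HTMLParser


class HeadingExtractor(HTMLParser):
    """Collects the <title> text and h1-h3 heading texts, mirroring the
    BeautifulSoup extraction in the example above (stdlib only)."""

    def __init__(self):
        super().__init__()
        self._current = None  # tag currently being read, if any
        self.title = ''
        self.headings = {'h1': [], 'h2': [], 'h3': []}

    def handle_starttag(self, tag, attrs):
        if tag in ('title', 'h1', 'h2', 'h3'):
            self._current = tag
            if tag != 'title':
                self.headings[tag].append('')  # start a new heading entry

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

    def handle_data(self, data):
        if self._current == 'title':
            self.title += data
        elif self._current in ('h1', 'h2', 'h3'):
            self.headings[self._current][-1] += data


extractor = HeadingExtractor()
extractor.feed('<html><head><title>Apify</title></head>'
               '<body><h1>Fast</h1><h1>Reliable</h1><h2>Web scraping</h2></body></html>')
print(extractor.title)     # → Apify
print(extractor.headings)  # → {'h1': ['Fast', 'Reliable'], 'h2': ['Web scraping'], 'h3': []}
```

This is only an illustration of the parsing step; BeautifulSoup additionally handles malformed markup, encodings, and CSS-style selection, which is why the example uses it.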

Apify SDK with PlaywrightCrawler from Crawlee

This example demonstrates how to use the Apify SDK alongside PlaywrightCrawler from Crawlee to perform web scraping.

from crawlee.crawlers import PlaywrightCrawler, PlaywrightCrawlingContext

from apify import Actor


async def main() -> None:
    async with Actor:
        # Retrieve the Actor input, and use default values if not provided.
        actor_input = await Actor.get_input() or {}
        start_urls = [url.get('url') for url in actor_input.get('start_urls', [{'url': 'https://apify.com'}])]

        # Exit if no start URLs are provided.
        if not start_urls:
            Actor.log.info('No start URLs specified in Actor input, exiting...')
            await Actor.exit()

        # Create a crawler.
        crawler = PlaywrightCrawler(
            # Limit the crawl to max requests. Remove or increase it for crawling all links.
            max_requests_per_crawl=50,
            headless=True,
        )

        # Define a request handler, which will be called for every request.
        @crawler.router.default_handler
        async def request_handler(context: PlaywrightCrawlingContext) -> None:
            url = context.request.url
            Actor.log.info(f'Scraping {url}...')

            # Extract the desired data.
            data = {
                'url': context.request.url,
                'title': await context.page.title(),
                'h1s': [await h1.text_content() for h1 in await context.page.locator('h1').all()],
                'h2s': [await h2.text_content() for h2 in await context.page.locator('h2').all()],
                'h3s': [await h3.text_content() for h3 in await context.page.locator('h3').all()],
            }

            # Store the extracted data to the default dataset.
            await context.push_data(data)

            # Enqueue additional links found on the current page.
            await context.enqueue_links()

        # Run the crawler with the starting URLs.
        await crawler.run(start_urls)
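Both examples lean on request deduplication: the request queue in the first example, and `enqueue_links()` in the second, skip URLs that have already been enqueued. A minimal in-memory sketch of that behavior (not the SDK's actual implementation, which persists state and derives a unique key from the URL):

```python
from collections import deque


class InMemoryRequestQueue:
    """Toy sketch of request-queue semantics: FIFO order with
    deduplication, using the raw URL as the unique key."""

    def __init__(self):
        self._queue = deque()
        self._seen = set()

    def add_request(self, url):
        if url in self._seen:
            return False  # already enqueued or handled; skip it
        self._seen.add(url)
        self._queue.append(url)
        return True

    def fetch_next_request(self):
        return self._queue.popleft() if self._queue else None


q = InMemoryRequestQueue()
q.add_request('https://apify.com')
q.add_request('https://apify.com')  # duplicate, ignored
q.add_request('https://docs.apify.com')
print([q.fetch_next_request(), q.fetch_next_request(), q.fetch_next_request()])
# → ['https://apify.com', 'https://docs.apify.com', None]
```

The real queue also supports retries, request locking across parallel clients, and marking requests as handled, but the dedup-then-FIFO core is the same idea.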

What are Actors?

Actors are serverless cloud programs that can do almost anything a human can do in a web browser. They can do anything from small tasks such as filling in forms or unsubscribing from online services, all the way up to scraping and processing vast numbers of web pages.

They can be run either locally, or on the Apify platform, where you can run them at scale, monitor them, schedule them, or publish and monetize them.

If you're new to Apify, learn what Apify is in the Apify platform documentation.

Creating Actors

To create and run Actors through Apify Console, see the Console documentation.

To create and run Python Actors locally, follow the corresponding guide in the documentation.

Guides

To see how you can use the Apify SDK with other popular libraries used for web scraping, check out our guides for using Requests and HTTPX, Beautiful Soup, Playwright, Selenium, or Scrapy.

Usage concepts

To learn more about the features of the Apify SDK and how to use them, check out the Usage Concepts section in the sidebar, particularly the guides for the Actor lifecycle, working with storages, handling Actor events, and using proxies.
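The Actor lifecycle is built on Python's async context manager protocol: `async with Actor:` initializes the run on entry and finalizes it (as succeeded or failed) on exit. A toy sketch of that pattern, with hypothetical names and recorded events standing in for the SDK's real setup and teardown:

```python
import asyncio


class ToyActor:
    """Toy illustration of the `async with Actor:` lifecycle pattern.
    The real SDK also wires up configuration, storages, and platform events."""

    events = []  # recorded lifecycle transitions

    async def __aenter__(self):
        ToyActor.events.append('init')  # start of the run
        return self

    async def __aexit__(self, exc_type, exc, tb):
        # Finish as succeeded on a clean exit, as failed when an exception escaped.
        ToyActor.events.append('exit' if exc_type is None else 'fail')
        return False  # don't swallow exceptions


async def main():
    async with ToyActor():
        ToyActor.events.append('work')


asyncio.run(main())
print(ToyActor.events)  # → ['init', 'work', 'exit']
```

This is why the examples above wrap everything in `async with Actor:` — cleanup runs even when the body raises.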

