
Crawlee for Python


A web scraping and browser automation library

Crawlee covers your crawling and scraping end-to-end and helps you build reliable scrapers. Fast.

Your crawlers will appear almost human-like and fly under the radar of modern bot protections even with the default configuration. Crawlee gives you the tools to crawl the web for links, scrape data, and store it to disk or cloud while staying configurable to suit your project's needs.

👉 View full documentation, guides and examples on the Crawlee project website 👈

We also have a TypeScript implementation of Crawlee, which you can explore and use in your projects. For more information, visit our GitHub repository: Crawlee for JS/TS on GitHub.

Installation

We recommend visiting the Introduction tutorial in the Crawlee documentation for more information.

Crawlee is available as the crawlee PyPI package.

pip install crawlee

Optional dependencies that unlock additional features are shipped as package extras.

If you plan to use BeautifulSoupCrawler, install crawlee with the beautifulsoup extra:

pip install 'crawlee[beautifulsoup]'

If you plan to use PlaywrightCrawler, install crawlee with the playwright extra:

pip install 'crawlee[playwright]'

Then, install the Playwright dependencies:

playwright install

You can install multiple extras at once by using a comma as a separator:

pip install 'crawlee[beautifulsoup,playwright]'

Features

Why is Crawlee the preferred choice for web scraping and crawling?

Why use Crawlee instead of just a random HTTP library with an HTML parser?

  • Unified interface for HTTP & headless browser crawling.
  • Automatic parallel crawling based on available system resources.
  • Written in Python with type hints - enhances DX (IDE autocompletion) and reduces bugs (static type checking).
  • Automatic retries on errors or when you’re getting blocked.
  • Integrated proxy rotation and session management.
  • Configurable request routing - direct URLs to the appropriate handlers (see the routing sketch after this list).
  • Persistent queue for URLs to crawl.
  • Pluggable storage of both tabular data and files.
  • Robust error handling.
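
To sketch the request-routing feature mentioned above: besides the default handler, the router can register handlers for labeled requests, and enqueue_links can tag newly discovered links with a label so they are routed accordingly. The handler(label) decorator, the label argument of enqueue_links, and the 'a.product' selector with the 'PRODUCT' label below are illustrative assumptions; check the Crawlee documentation for the exact names in your version.

import asyncio

from crawlee.beautifulsoup_crawler import BeautifulSoupCrawler, BeautifulSoupCrawlingContext


async def main() -> None:
    crawler = BeautifulSoupCrawler(max_requests_per_crawl=50)

    # The default handler receives requests without a label, such as the start URLs.
    @crawler.router.default_handler
    async def default_handler(context: BeautifulSoupCrawlingContext) -> None:
        # Route links matching the (illustrative) 'a.product' selector to the 'PRODUCT' handler.
        await context.enqueue_links(selector='a.product', label='PRODUCT')

    # Requests labeled 'PRODUCT' are handled here instead of the default handler.
    @crawler.router.handler('PRODUCT')
    async def product_handler(context: BeautifulSoupCrawlingContext) -> None:
        await context.push_data({
            'url': context.request.url,
            'title': context.soup.title.string if context.soup.title else None,
        })

    await crawler.run(['https://crawlee.dev'])


if __name__ == '__main__':
    asyncio.run(main())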

Why use Crawlee rather than Scrapy?

  • Crawlee has out-of-the-box support for headless browser crawling (Playwright).
  • Crawlee has a minimalistic & elegant interface - set up your scraper with fewer than 10 lines of code (see the sketch after this list).
  • Complete type hint coverage.
  • Based on standard asyncio.
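
To illustrate the "fewer than 10 lines" point, here is a minimal sketch condensed from the fuller examples below; apart from imports and the asyncio entry point, the scraper itself is set up in a handful of lines.

import asyncio

from crawlee.beautifulsoup_crawler import BeautifulSoupCrawler, BeautifulSoupCrawlingContext


async def main() -> None:
    crawler = BeautifulSoupCrawler()

    @crawler.router.default_handler
    async def handler(context: BeautifulSoupCrawlingContext) -> None:
        # Store the URL of every page visited to the default dataset.
        # Since nothing is enqueued, only the start URL is processed.
        await context.push_data({'url': context.request.url})

    await crawler.run(['https://crawlee.dev'])


if __name__ == '__main__':
    asyncio.run(main())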

Introduction


Your crawlers will appear human-like and fly under the radar of modern bot protections even with the default configuration. Crawlee gives you the tools to crawl the web for links, scrape data and persistently store it in machine-readable formats, without having to worry about the technical details. And thanks to rich configuration options, you can tweak almost any aspect of Crawlee to suit your project's needs if the default settings don't cut it.

BeautifulSoupCrawler

The BeautifulSoupCrawler downloads web pages using an HTTP library and provides HTML-parsed content to the user. It uses HTTPX for HTTP communication and BeautifulSoup for parsing HTML. It is ideal for projects that require efficient extraction of data from HTML content. This crawler has very good performance, since it does not use a browser. However, if you need to execute client-side JavaScript to get your content, this will not be enough and you will need to use PlaywrightCrawler. Also, if you want to use this crawler, make sure you install crawlee with the beautifulsoup extra.

import asyncio

from crawlee.beautifulsoup_crawler import BeautifulSoupCrawler, BeautifulSoupCrawlingContext


async def main() -> None:
    crawler = BeautifulSoupCrawler(
        # Limit the crawl to max requests. Remove or increase it for crawling all links.
        max_requests_per_crawl=10,
    )

    # Define the default request handler, which will be called for every request.
    @crawler.router.default_handler
    async def request_handler(context: BeautifulSoupCrawlingContext) -> None:
        context.log.info(f'Processing {context.request.url} ...')

        # Extract data from the page.
        data = {
            'url': context.request.url,
            'title': context.soup.title.string if context.soup.title else None,
        }

        # Push the extracted data to the default dataset.
        await context.push_data(data)

        # Enqueue all links found on the page.
        await context.enqueue_links()

    # Run the crawler with the initial list of URLs.
    await crawler.run(['https://crawlee.dev'])

if __name__ == '__main__':
    asyncio.run(main())

PlaywrightCrawler

The PlaywrightCrawler uses a headless browser to download web pages and provides an API for data extraction. It is built on Playwright, an automation library designed for managing headless browsers. It excels at retrieving web pages that rely on client-side JavaScript for content generation, or at tasks requiring interaction with JavaScript-driven content. For scenarios where JavaScript execution is unnecessary or higher performance is required, consider using the BeautifulSoupCrawler. Also, if you want to use this crawler, make sure you install crawlee with the playwright extra.

import asyncio

from crawlee.playwright_crawler import PlaywrightCrawler, PlaywrightCrawlingContext


async def main() -> None:
    crawler = PlaywrightCrawler(
        # Limit the crawl to max requests. Remove or increase it for crawling all links.
        max_requests_per_crawl=10,
    )

    # Define the default request handler, which will be called for every request.
    @crawler.router.default_handler
    async def request_handler(context: PlaywrightCrawlingContext) -> None:
        context.log.info(f'Processing {context.request.url} ...')

        # Extract data from the page.
        data = {
            'url': context.request.url,
            'title': await context.page.title(),
        }

        # Push the extracted data to the default dataset.
        await context.push_data(data)

        # Enqueue all links found on the page.
        await context.enqueue_links()

    # Run the crawler with the initial list of requests.
    await crawler.run(['https://crawlee.dev'])


if __name__ == '__main__':
    asyncio.run(main())

More examples

Explore our Examples page in the Crawlee documentation for a wide range of additional use cases and demonstrations.

Running on the Apify platform

Crawlee is open-source and runs anywhere, but since it's developed by Apify, it's easy to set up on the Apify platform and run in the cloud. Visit the Apify SDK website to learn more about deploying Crawlee to the Apify platform.

Support

If you find a bug or issue with Crawlee, please submit an issue on GitHub. For questions, you can ask on Stack Overflow, in GitHub Discussions, or join our Discord server.

Contributing

Your code contributions are welcome, and you'll be praised for eternity! If you have any ideas for improvements, either submit an issue or create a pull request. For contribution guidelines and the code of conduct, see CONTRIBUTING.md.

License

This project is licensed under the Apache License 2.0 - see the LICENSE.md file for details.
