Ethicrawl

A Python library for ethical web crawling that respects robots.txt rules, maintains proper rate limits, and provides powerful tools for web scraping.

Features

  • Respectful by design: Automatic robots.txt compliance and rate limiting (see the sketch after this list)
  • Powerful sitemap support: Parse and filter XML sitemaps
  • Domain boundaries: Control cross-domain access with explicit whitelisting
  • Flexible configuration: Easily configure timeouts, rate limits, and other settings
  • Resource management: Clean unbinding and resource release
  • JavaScript support: Optional JavaScript rendering with Chromium
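
Robots.txt compliance is handled for you, but the underlying check is easy to see with the standard library alone. The sketch below uses urllib.robotparser to do what Ethicrawl automates; it is illustrative, not Ethicrawl's internal implementation:

from urllib.robotparser import RobotFileParser

# Fetch and parse the site's robots.txt
parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()

# Ask whether a given user agent may fetch a path
if parser.can_fetch("MyCustomBot/1.0", "https://example.com/some/path"):
    print("Allowed by robots.txt")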

Installation

To get the latest development version, install directly from GitHub:

pip install git+https://github.com/ethicrawl/ethicrawl.git
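
If you need a reproducible install, pip can also target a specific branch, tag, or commit of the repository; the ref below is illustrative, so check the repository for available tags:

# Pin to a specific ref; "main" is an assumed branch name
pip install git+https://github.com/ethicrawl/ethicrawl.git@main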

For development:

# Clone the repository
git clone https://github.com/ethicrawl/ethicrawl.git

# Navigate to the directory
cd ethicrawl

# Install in development mode
pip install -e .

Quick Start

from ethicrawl import Ethicrawl, Config

# Configure global settings
config = Config()
config.http.rate_limit = 1.0  # 1 request per second

# Create and bind the crawler to a website
crawler = Ethicrawl()
crawler.bind("https://example.com")

# Check if a URL is allowed by robots.txt
if crawler.robots.can_fetch("https://example.com/some/path"):
    # Fetch the page
    response = crawler.get("https://example.com/some/path")
    print(f"Status: {response.status_code}")

# Parse sitemaps
sitemaps = crawler.robots.sitemaps
urls = crawler.sitemaps.parse(sitemaps)

# Filter URLs matching a pattern
article_urls = urls.filter(r"/articles/")
print(f"Found {len(article_urls)} article URLs")

# Clean up when done
crawler.unbind()
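
Since unbind() releases resources, wrapping a session in try/finally keeps cleanup reliable even when a fetch raises. A usage pattern built from the same calls shown above:

from ethicrawl import Ethicrawl

crawler = Ethicrawl()
crawler.bind("https://example.com")
try:
    if crawler.robots.can_fetch("https://example.com/some/path"):
        response = crawler.get("https://example.com/some/path")
        print(f"Status: {response.status_code}")
finally:
    crawler.unbind()  # always release resources, even on errors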

Responsible Web Crawling

Ethicrawl is designed to help you crawl websites responsibly:

  • Respects robots.txt rules - Automatically checks if URLs are allowed
  • Maintains rate limits - Prevents overloading servers with requests (see the sketch after this list)
  • Explicit domain boundaries - Requires whitelisting for cross-domain requests
  • Polite bot identification - Uses a descriptive user agent by default
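
Rate limiting of this kind is straightforward to reason about: a limit of 1.0 requests per second means at least one second must elapse between requests. A minimal standalone sketch of the idea, not Ethicrawl's internal implementation:

import time

class RateLimiter:
    """Enforces a minimum interval between calls, given a requests-per-second limit."""

    def __init__(self, rate_limit: float):
        self.min_interval = 1.0 / rate_limit
        self.last_call = 0.0

    def wait(self) -> None:
        elapsed = time.monotonic() - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.monotonic()

limiter = RateLimiter(rate_limit=1.0)  # at most one request per second
for path in ["/a", "/b", "/c"]:
    limiter.wait()
    print(f"fetching {path}")  # a real crawler would issue the request here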

Advanced Usage

For more advanced examples, see the usage.py file included in the repository.

Features demonstrated in the advanced usage:

  • Custom HTTP clients
  • Sitemap filtering and parsing
  • Domain whitelisting (see the sketch after this list)
  • JavaScript rendering with Chromium
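
The exact calls are demonstrated in usage.py; purely as an illustration of the domain-boundary idea, a session might look like the following. The whitelist method name here is an assumption, not a confirmed API; consult usage.py for the real interface:

from ethicrawl import Ethicrawl

crawler = Ethicrawl()
crawler.bind("https://example.com")

# Hypothetical: cross-domain requests are refused unless the domain is whitelisted
crawler.whitelist("https://cdn.example.com")  # assumed method name
response = crawler.get("https://cdn.example.com/assets/logo.png")

crawler.unbind()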

Configuration

Ethicrawl provides a flexible configuration system:

from ethicrawl import Config

config = Config()

# HTTP settings
config.http.timeout = 30  # request timeout in seconds
config.http.rate_limit = 0.5  # requests per second (here, one request every two seconds)
config.http.user_agent = "MyCustomBot/1.0"  # identify your bot clearly

# Sitemap settings
config.sitemap.max_depth = 3  # how many levels of nested sitemap indexes to follow

# Logging settings
config.logger.level = "INFO"
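
Note that in the Quick Start above the Config instance is never passed to the Ethicrawl constructor, which suggests configuration is shared globally. If that is the case, the natural pattern is to set it once at startup, before binding any crawler:

from ethicrawl import Ethicrawl, Config

config = Config()
config.http.rate_limit = 0.5  # set before any crawler is bound

crawler = Ethicrawl()
crawler.bind("https://example.com")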

License

Apache 2.0 License - See LICENSE file for details.
