
Project description

Multi-Browser Crawler

A focused browser automation package for web scraping and content extraction.

Features

  • Browser Pool Management: Auto-scaling browser pools with session management
  • Proxy Support: Built-in proxy rotation and management
  • Image Download: Automatic image capture and local storage
  • API Discovery: Network request capture and pattern matching
  • Session Persistence: Stateful browsing with cookie/session support

Installation

pip install multi-browser-crawler

Quick Start

import asyncio
from multi_browser_crawler import BrowserPoolManager, BrowserConfig

async def main():
    # Simple configuration
    config = BrowserConfig(headless=True, timeout=30)
    pool = BrowserPoolManager(config.to_dict())

    try:
        await pool.initialize()
        
        # Fetch HTML
        result = await pool.fetch_html(
            url="https://example.com",
            session_id="my_session"
        )

        if result['status']['success']:
            print(f"✅ Success! Title: {result.get('title', 'N/A')}")
            print(f"HTML size: {len(result.get('html', ''))} characters")
        else:
            print(f"❌ Error: {result['status'].get('error')}")

    finally:
        await pool.shutdown()

if __name__ == "__main__":
    asyncio.run(main())

Configuration Options

config = BrowserConfig(
    headless=True,              # Run in headless mode
    timeout=30,                 # Page load timeout (seconds)
    min_browsers=1,             # Minimum browsers in pool
    max_browsers=5,             # Maximum browsers in pool
    proxy_url="http://proxy:8080",  # Optional proxy URL
    download_images_dir="/tmp/images"  # Image download directory
)
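
With min_browsers and max_browsers set, the pool auto-scales with demand. The following is a minimal sketch of concurrent fetching, assuming fetch_html can be awaited concurrently against a single pool (the crawl_many helper and example URLs are illustrative, not part of the package):

import asyncio
from multi_browser_crawler import BrowserPoolManager, BrowserConfig

async def crawl_many(urls):
    config = BrowserConfig(headless=True, min_browsers=1, max_browsers=5)
    pool = BrowserPoolManager(config.to_dict())
    try:
        await pool.initialize()
        # Each coroutine borrows a browser from the pool; under concurrent
        # load the pool is expected to grow toward max_browsers.
        return await asyncio.gather(*(pool.fetch_html(url=u) for u in urls))
    finally:
        await pool.shutdown()

asyncio.run(crawl_many(["https://example.com", "https://example.org"]))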

API Methods

fetch_html()

result = await pool.fetch_html(
    url="https://example.com",
    session_id="optional_session",      # For persistent sessions
    timeout=30,                         # Request timeout
    api_patterns=["*/api/*"],          # Capture API calls
    images_to_capture=["*.jpg", "*.png"] # Download images
)

Response format:

{
    'status': {'success': True, 'url': '...', 'load_time': 1.23},
    'html': '<html>...</html>',
    'title': 'Page Title',
    'api_calls': [...],  # Captured API requests
    'images': [...]      # Downloaded images
}
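
The item schema for api_calls and images is not documented above, so treat the entries as opaque records. A hedged sketch (run inside an async function with an initialized pool) that captures API traffic and reports what came back:

result = await pool.fetch_html(
    url="https://example.com/catalog",
    api_patterns=["*/api/*"],
    images_to_capture=["*.jpg", "*.png"]
)
if result['status']['success']:
    # api_calls and images are lists; their exact item shape is not
    # documented here, so we only count and echo them.
    print(f"Captured {len(result.get('api_calls', []))} API calls")
    print(f"Downloaded {len(result.get('images', []))} images")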

Session Management

# Persistent session - maintains cookies/state
result1 = await pool.fetch_html(url="https://site.com/login", session_id="user1")
result2 = await pool.fetch_html(url="https://site.com/profile", session_id="user1")

# Non-persistent - fresh browser each time  
result3 = await pool.fetch_html(url="https://site.com", session_id=None)
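
Distinct session_id values should keep state isolated, so independent sessions can run in parallel. A sketch, assuming each session_id maps to its own cookie jar:

# Two isolated sessions fetched concurrently; cookies set under
# "user1" are assumed not to leak into "user2".
results = await asyncio.gather(
    pool.fetch_html(url="https://site.com/profile", session_id="user1"),
    pool.fetch_html(url="https://site.com/profile", session_id="user2"),
)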

Proxy Support

# Single proxy
config = BrowserConfig(proxy_url="http://proxy:8080")

# The package integrates with rotating-mitmproxy for advanced proxy rotation
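
From this package's side, a rotating-mitmproxy instance is just another proxy_url. A sketch, assuming rotating-mitmproxy is already running and listening on localhost:8080 (the port is illustrative):

# Point the pool at the local rotating-mitmproxy listener; rotation
# across upstream proxies happens there, transparently to the browsers.
config = BrowserConfig(
    headless=True,
    proxy_url="http://localhost:8080"  # assumed rotating-mitmproxy endpoint
)
pool = BrowserPoolManager(config.to_dict())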

Testing

# Run all tests
python -m pytest tests/ -v

# Run specific test categories
python -m pytest tests/ -m "not slow" -v

License

MIT License - see LICENSE file for details.

Project details


Download files

Download the file for your platform.

Source Distribution

multi_browser_crawler-0.5.1.tar.gz (52.8 kB)

Built Distribution

multi_browser_crawler-0.5.1-py3-none-any.whl (58.2 kB)

File details

Details for the file multi_browser_crawler-0.5.1.tar.gz.

File metadata

  • Download URL: multi_browser_crawler-0.5.1.tar.gz
  • Size: 52.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.5

File hashes

Hashes for multi_browser_crawler-0.5.1.tar.gz

  • SHA256: 8192d435d739728d455db7b71d681186262fd52144cf386ec679d6efb2aff9fe
  • MD5: f4c9a2e65ccfdb50da2af39d338fa0bb
  • BLAKE2b-256: 07d15cdf033e29e30b56f103bc05746a01084455812d5ea6133ec87643cd468b

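To check a downloaded file against the published digests, a short Python snippet is enough (the local filename is assumed to match the sdist above):

import hashlib

# Compare the local sdist against the SHA256 digest published above.
expected = "8192d435d739728d455db7b71d681186262fd52144cf386ec679d6efb2aff9fe"
with open("multi_browser_crawler-0.5.1.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print("OK" if digest == expected else "MISMATCH")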

File details

Details for the file multi_browser_crawler-0.5.1-py3-none-any.whl.

File hashes

Hashes for multi_browser_crawler-0.5.1-py3-none-any.whl

  • SHA256: 420ad58b39cec73c1f3fa58d418e8ea0250760121c86236c3cb7adbc8da34787
  • MD5: 3ccea3622a3223591ec2f43af4fe226b
  • BLAKE2b-256: 10948ba23f233c11b3aedef7a9a04462980c4b2efc45bf0a562c5949de846b33
