
Focused browser automation package for web scraping and content extraction

Project description

Multi-Browser Crawler

A focused browser automation package for web scraping and content extraction.

Features

  • Browser Pool Management: Auto-scaling browser pools with session management
  • Proxy Support: Built-in proxy rotation and management
  • Image Download: Automatic image capture and local storage
  • API Discovery: Network request capture and pattern matching
  • Session Persistence: Stateful browsing with cookie/session support

Installation

pip install multi-browser-crawler

Quick Start

import asyncio
from multi_browser_crawler import BrowserPoolManager, BrowserConfig

async def main():
    # Simple configuration
    config = BrowserConfig(headless=True, timeout=30)
    pool = BrowserPoolManager(config.to_dict())

    try:
        await pool.initialize()
        
        # Fetch HTML
        result = await pool.fetch_html(
            url="https://example.com",
            session_id="my_session"
        )

        if result['status']['success']:
            print(f"✅ Success! Title: {result.get('title', 'N/A')}")
            print(f"HTML size: {len(result.get('html', ''))} characters")
        else:
            print(f"❌ Error: {result['status'].get('error')}")

    finally:
        await pool.shutdown()

if __name__ == "__main__":
    asyncio.run(main())

Configuration Options

config = BrowserConfig(
    headless=True,              # Run in headless mode
    timeout=30,                 # Page load timeout (seconds)
    min_browsers=1,             # Minimum browsers in pool
    max_browsers=5,             # Maximum browsers in pool
    proxy_url="http://proxy:8080",  # Optional proxy URL
    download_images_dir="/tmp/images"  # Image download directory
)
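
The min_browsers/max_browsers settings matter most when several fetches run at once. Below is a minimal sketch (reusing only the calls shown above; the pool's concurrency behaviour is an assumption) that fans requests out with asyncio.gather:

import asyncio
from multi_browser_crawler import BrowserPoolManager, BrowserConfig

async def crawl_many(urls):
    # Let the pool scale between 2 and 5 browsers while requests are in flight
    config = BrowserConfig(headless=True, timeout=30, min_browsers=2, max_browsers=5)
    pool = BrowserPoolManager(config.to_dict())

    try:
        await pool.initialize()
        # Issue all fetches concurrently; assumed safe since the pool manages browsers
        results = await asyncio.gather(
            *(pool.fetch_html(url=url, session_id=None) for url in urls)
        )
        for url, result in zip(urls, results):
            print(url, "->", result['status']['success'])
    finally:
        await pool.shutdown()

asyncio.run(crawl_many(["https://example.com", "https://example.org"]))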

API Methods

fetch_html()

result = await pool.fetch_html(
    url="https://example.com",
    session_id="optional_session",      # For persistent sessions
    timeout=30,                         # Request timeout
    api_patterns=["*/api/*"],          # Capture API calls
    images_to_capture=["*.jpg", "*.png"] # Download images
)

Response format:

{
    'status': {'success': True, 'url': '...', 'load_time': 1.23},
    'html': '<html>...</html>',
    'title': 'Page Title',
    'api_calls': [...],  # Captured API requests
    'images': [...]      # Downloaded images
}
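
The api_calls and images lists are only sketched above, so this example prints each entry whole rather than assuming its internal keys:

result = await pool.fetch_html(
    url="https://example.com",
    api_patterns=["*/api/*"],
    images_to_capture=["*.jpg", "*.png"]
)

if result['status']['success']:
    # Entry structure is not documented here; inspect real results to learn the fields
    for call in result.get('api_calls', []):
        print("Captured API call:", call)
    for image in result.get('images', []):
        print("Downloaded image:", image)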

Session Management

# Persistent session - maintains cookies/state
result1 = await pool.fetch_html(url="https://site.com/login", session_id="user1")
result2 = await pool.fetch_html(url="https://site.com/profile", session_id="user1")

# Non-persistent - fresh browser each time  
result3 = await pool.fetch_html(url="https://site.com", session_id=None)
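
Different session_id values stay isolated from each other, so separate users can be crawled side by side. A small sketch, assuming fetch_html calls on the same pool may run concurrently:

# Two independent sessions, each with its own cookies and state
task1 = pool.fetch_html(url="https://site.com/dashboard", session_id="user1")
task2 = pool.fetch_html(url="https://site.com/dashboard", session_id="user2")
result_user1, result_user2 = await asyncio.gather(task1, task2)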

Proxy Support

# Single proxy
config = BrowserConfig(proxy_url="http://proxy:8080")

# The package integrates with rotating-mitmproxy for advanced proxy rotation
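
For rotating proxies, the README only states that the package integrates with rotating-mitmproxy, so the endpoint below (a local listener on port 8080) is an assumption about your deployment rather than a documented default:

# Point the pool at a locally running rotating-mitmproxy instance
# (host and port are assumptions -- adjust to your proxy setup)
config = BrowserConfig(
    headless=True,
    proxy_url="http://127.0.0.1:8080",
)
pool = BrowserPoolManager(config.to_dict())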

Testing

# Run all tests
python -m pytest tests/ -v

# Run specific test categories
python -m pytest tests/ -m "not slow" -v

License

MIT License - see LICENSE file for details.



Download files

Download the file for your platform.

Source Distribution

multi_browser_crawler-0.5.0.tar.gz (52.4 kB)

Uploaded Source

Built Distribution


multi_browser_crawler-0.5.0-py3-none-any.whl (57.8 kB)

Uploaded Python 3

File details

Details for the file multi_browser_crawler-0.5.0.tar.gz.

File metadata

  • Download URL: multi_browser_crawler-0.5.0.tar.gz
  • Upload date:
  • Size: 52.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.5

File hashes

Hashes for multi_browser_crawler-0.5.0.tar.gz
Algorithm Hash digest
SHA256 64651e8aa4984138790ac3264894493577fcfd39d0131b02ad02e3ab474b3bfb
MD5 63a9833528fded1d2fc824266b0a8f67
BLAKE2b-256 14a5835275134ae9b9d88d1103b9bdf454877ebba48668aa8a3a7eef0c312bc2


File details

Details for the file multi_browser_crawler-0.5.0-py3-none-any.whl.

File metadata

File hashes

Hashes for multi_browser_crawler-0.5.0-py3-none-any.whl
Algorithm Hash digest
SHA256 2cddae5d64714a4b859440aea9725e77fa7409e4d8febac87eac8be86aa0bada
MD5 6d7d7eb49c76728ccd90c5577c827208
BLAKE2b-256 c6abf337fe9f94401fd07230cf15eae21cfaa4ab06572c22901ca3a6df687dd3

