
Focused browser automation package for web scraping and content extraction

Project description

Multi-Browser Crawler

A focused browser automation package for web scraping and content extraction.

Features

  • Browser Pool Management: Auto-scaling browser pools with session management
  • Proxy Support: Built-in proxy rotation and management
  • Image Download: Automatic image capture and local storage
  • API Discovery: Network request capture and pattern matching
  • Session Persistence: Stateful browsing with cookie/session support

Installation

pip install multi-browser-crawler

Quick Start

import asyncio
from multi_browser_crawler import BrowserPoolManager, BrowserConfig

async def main():
    # Simple configuration
    config = BrowserConfig(headless=True, timeout=30)
    pool = BrowserPoolManager(config.to_dict())

    try:
        await pool.initialize()
        
        # Fetch HTML
        result = await pool.fetch_html(
            url="https://example.com",
            session_id="my_session"
        )

        if result['status']['success']:
            print(f"✅ Success! Title: {result.get('title', 'N/A')}")
            print(f"HTML size: {len(result.get('html', ''))} characters")
        else:
            print(f"❌ Error: {result['status'].get('error')}")

    finally:
        await pool.shutdown()

if __name__ == "__main__":
    asyncio.run(main())

Configuration Options

config = BrowserConfig(
    headless=True,              # Run in headless mode
    timeout=30,                 # Page load timeout (seconds)
    min_browsers=1,             # Minimum browsers in pool
    max_browsers=5,             # Maximum browsers in pool
    proxy_url="http://proxy:8080",  # Optional proxy URL
    download_images_dir="/tmp/images"  # Image download directory
)
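
Because the pool scales between min_browsers and max_browsers, several pages can be fetched concurrently through a single pool. A minimal sketch, assuming the same API shown in the Quick Start (the fetch_many helper and example URLs are hypothetical):

import asyncio
from multi_browser_crawler import BrowserPoolManager, BrowserConfig

async def fetch_many(urls):
    # Let the pool scale up to 5 browsers for concurrent fetches
    config = BrowserConfig(headless=True, timeout=30, min_browsers=1, max_browsers=5)
    pool = BrowserPoolManager(config.to_dict())
    try:
        await pool.initialize()
        # Stateless fetches: session_id=None gives each URL a fresh browser state
        return await asyncio.gather(
            *(pool.fetch_html(url=url, session_id=None) for url in urls)
        )
    finally:
        await pool.shutdown()

if __name__ == "__main__":
    pages = asyncio.run(fetch_many(["https://example.com", "https://example.org"]))
    for page in pages:
        print(page['status'].get('url'), page['status'].get('success'))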

API Methods

fetch_html()

result = await pool.fetch_html(
    url="https://example.com",
    session_id="optional_session",      # For persistent sessions
    timeout=30,                         # Request timeout
    api_patterns=["*/api/*"],          # Capture API calls
    images_to_capture=["*.jpg", "*.png"] # Download images
)

Response format:

{
    'status': {'success': True, 'url': '...', 'load_time': 1.23},
    'html': '<html>...</html>',
    'title': 'Page Title',
    'api_calls': [...],  # Captured API requests
    'images': [...]      # Downloaded images
}
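
Building on the response format above, the sketch below combines API capture and image download in a single fetch. The exact fields inside each api_calls and images entry are not documented here, so the example only counts and prints them (the "discovery" session ID is arbitrary):

result = await pool.fetch_html(
    url="https://example.com",
    session_id="discovery",
    api_patterns=["*/api/*"],             # capture matching network requests
    images_to_capture=["*.jpg", "*.png"]  # download matching images
)

if result['status']['success']:
    print(f"Loaded in {result['status']['load_time']:.2f}s")
    print(f"Captured {len(result.get('api_calls', []))} API calls")
    for call in result.get('api_calls', []):
        print(call)  # entry layout depends on the package version
    print(f"Downloaded {len(result.get('images', []))} images")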

Session Management

# Persistent session - maintains cookies/state
result1 = await pool.fetch_html(url="https://site.com/login", session_id="user1")
result2 = await pool.fetch_html(url="https://site.com/profile", session_id="user1")

# Non-persistent - fresh browser each time  
result3 = await pool.fetch_html(url="https://site.com", session_id=None)
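
Since session IDs are arbitrary strings, one pool can keep state for several users isolated at the same time. A minimal sketch (the scrape_profiles helper, site URLs, and naming scheme are hypothetical):

async def scrape_profiles(pool, user_ids):
    results = {}
    for user_id in user_ids:
        session = f"user-{user_id}"  # one persistent session per user
        # Both requests share cookies/state because they use the same session_id
        await pool.fetch_html(url="https://site.com/login", session_id=session)
        results[user_id] = await pool.fetch_html(
            url="https://site.com/profile", session_id=session
        )
    return results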

Proxy Support

# Single proxy
config = BrowserConfig(proxy_url="http://proxy:8080")

# The package integrates with rotating-mitmproxy for advanced proxy rotation
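
One common setup is to run rotating-mitmproxy locally and point proxy_url at its listening address; the host/port below is an assumption and depends on how the proxy is started:

# All browsers in the pool route traffic through the local rotating proxy
config = BrowserConfig(
    headless=True,
    timeout=30,
    proxy_url="http://127.0.0.1:8080",  # assumed rotating-mitmproxy listen address
)
pool = BrowserPoolManager(config.to_dict())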

Testing

# Run all tests
python -m pytest tests/ -v

# Run specific test categories
python -m pytest tests/ -m "not slow" -v

License

MIT License - see LICENSE file for details.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

multi_browser_crawler-0.5.2.tar.gz (52.8 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

multi_browser_crawler-0.5.2-py3-none-any.whl (58.2 kB)

Uploaded Python 3

File details

Details for the file multi_browser_crawler-0.5.2.tar.gz.

File metadata

  • Download URL: multi_browser_crawler-0.5.2.tar.gz
  • Upload date:
  • Size: 52.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.5

File hashes

Hashes for multi_browser_crawler-0.5.2.tar.gz

  • SHA256: 784c882cc37b61c578b4aef7515f7adc30a9ff059a2ae41d1bffbb302590b3e1
  • MD5: 4990ab046ce9025fb5a00b171698738a
  • BLAKE2b-256: bab6c5347158dc28de400a0193487b0dcc081fc95f863e761a55adfb743d7c81


File details

Details for the file multi_browser_crawler-0.5.2-py3-none-any.whl.

File metadata

File hashes

Hashes for multi_browser_crawler-0.5.2-py3-none-any.whl

  • SHA256: 790738acea7db7432bb534fdc19caabbac8594a844cde1bd711c6b745f127f05
  • MD5: f0334a47ed4b54d329bad302c2035602
  • BLAKE2b-256: b5ae71bb69ad97482f09d915c3fe191df6b84ae4c177d6d73bf9c8d48f8c2724

