
Multi-Browser Crawler

A focused browser automation package for web scraping and content extraction.

Features

  • Browser Pool Management: Auto-scaling browser pools with session management
  • Proxy Support: Built-in proxy rotation and management
  • Image Download: Automatic image capture and localization
  • API Discovery: Network request capture and pattern matching
  • Session Persistence: Stateful browsing with cookie/session support

Installation

pip install multi-browser-crawler

Quick Start

import asyncio
from multi_browser_crawler import BrowserPoolManager, BrowserConfig

async def main():
    # Simple configuration
    config = BrowserConfig(headless=True, timeout=30)
    pool = BrowserPoolManager(config.to_dict())

    try:
        await pool.initialize()
        
        # Fetch HTML
        result = await pool.fetch_html(
            url="https://example.com",
            session_id="my_session"
        )

        if result['status']['success']:
            print(f"✅ Success! Title: {result.get('title', 'N/A')}")
            print(f"HTML size: {len(result.get('html', ''))} characters")
        else:
            print(f"❌ Error: {result['status'].get('error')}")

    finally:
        await pool.shutdown()

if __name__ == "__main__":
    asyncio.run(main())

Configuration Options

config = BrowserConfig(
    headless=True,              # Run in headless mode
    timeout=30,                 # Page load timeout (seconds)
    min_browsers=1,             # Minimum browsers in pool
    max_browsers=5,             # Maximum browsers in pool
    proxy_url="http://proxy:8080",  # Optional proxy URL
    download_images_dir="/tmp/images"  # Image download directory
)

API Methods

fetch_html()

result = await pool.fetch_html(
    url="https://example.com",
    session_id="optional_session",        # For persistent sessions
    timeout=30,                           # Request timeout (seconds)
    api_patterns=["*/api/*"],             # Capture matching API calls
    images_to_capture=["*.jpg", "*.png"]  # Download matching images
)

Response format:

{
    'status': {'success': True, 'url': '...', 'load_time': 1.23},
    'html': '<html>...</html>',
    'title': 'Page Title',
    'api_calls': [...],  # Captured API requests
    'images': [...]      # Downloaded images
}
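
The captured API calls and downloaded images come back as lists in the result. A minimal sketch of consuming them, assuming only the top-level keys shown above (the inner structure of each 'api_calls' and 'images' entry is not documented here, so entries are printed as-is):

# Inspect captured network requests and downloaded images.
# Runs inside an async function; 'pool' is the initialized pool
# from Quick Start.
result = await pool.fetch_html(
    url="https://example.com",
    api_patterns=["*/api/*"],
    images_to_capture=["*.jpg", "*.png"]
)

if result['status']['success']:
    for call in result.get('api_calls', []):
        print(call)   # one captured API request
    for image in result.get('images', []):
        print(image)  # one downloaded image record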

Session Management

# Persistent session - maintains cookies/state
result1 = await pool.fetch_html(url="https://site.com/login", session_id="user1")
result2 = await pool.fetch_html(url="https://site.com/profile", session_id="user1")

# Non-persistent session - a fresh browser each time
result3 = await pool.fetch_html(url="https://site.com", session_id=None)
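
Because each session_id maps to its own state, independent sessions can also be fetched concurrently. A minimal sketch, assuming fetch_html on a single pool is safe to call concurrently (the pool's auto-scaling suggests this, but it is not stated above):

# Crawl several URLs under separate sessions at once.
# Runs inside an async function; 'pool' is the initialized pool
# from Quick Start.
urls = ["https://site.com/a", "https://site.com/b", "https://site.com/c"]

results = await asyncio.gather(*(
    pool.fetch_html(url=url, session_id=f"session_{i}")
    for i, url in enumerate(urls)
))

for result in results:
    print(result['status']['success'], result.get('title'))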

Proxy Support

# Single proxy
config = BrowserConfig(proxy_url="http://proxy:8080")

# The package integrates with rotating-mitmproxy for advanced
# proxy rotation - see the sketch below
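
For rotation, proxy_url can point at a rotating proxy endpoint. A minimal sketch, assuming a rotating-mitmproxy instance is already running locally (the address and port below are placeholders, not defaults of this package):

# Route all pool traffic through a local rotating-mitmproxy endpoint,
# which swaps upstream proxies on its side.
config = BrowserConfig(
    headless=True,
    proxy_url="http://127.0.0.1:8080",  # placeholder rotating proxy address
)
pool = BrowserPoolManager(config.to_dict())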

Testing

# Run all tests
python -m pytest tests/ -v

# Run specific test categories
python -m pytest tests/ -m "not slow" -v

License

MIT License - see LICENSE file for details.
