Smart web scraper that abstracts away complexity - from simple sites to highly protected ones.


IntelliScraper

A powerful asynchronous web scraping solution with anti-bot detection evasion, built with Playwright and designed for protected sites such as job hiring platforms, social networks, e-commerce dashboards, and other web applications that require authentication. It features asynchronous session management, proxy integration, and advanced HTML parsing for high-performance, reliable scraping under anti-bot protection systems.



✨ Features

  • 🔐 Session Management — Capture and reuse authentication sessions with cookies, local storage, and browser fingerprints.
  • 🛡️ Anti-Detection — Advanced techniques to prevent bot detection.
  • 🌐 Proxy Support — Integrated Bright Data and custom proxy configurations.
  • 📝 HTML Parsing — Extract text, links, and convert to Markdown (including LLM-optimized output).
  • 🎯 CLI Tool — Generate sessions through an interactive login flow.
  • ⚡ Fully Asynchronous — Built with async/await for maximum concurrency and non-blocking I/O.
  • 🚀 Playwright-Powered — Reliable automation framework for browser-based scraping.

🚀 Quick Start

Installation

# Install the package
pip install intelliscraper-core

# Install Playwright browser (Chromium)
playwright install chromium

[!NOTE] Playwright requires browser binaries to be installed separately. The command above installs Chromium, which is necessary for IntelliScraper to function.

For more details, see https://pypi.org/project/intelliscraper-core/


⚡ Basic Asynchronous Scraping (No Authentication)

import asyncio
from intelliscraper import AsyncScraper, ScrapStatus

async def main():
    async with AsyncScraper() as scraper:
        response = await scraper.scrape("https://example.com")

        if response.status == ScrapStatus.SUCCESS:
            print(response.scrap_html_content)

if __name__ == "__main__":
    asyncio.run(main())

🔐 Creating Session Data

Use the built-in CLI tool to create and store authentication sessions:

intelliscraper-session --url "https://example.com" --site "example" --output "./example_session.json"

How it works:

  1. 🌐 Opens a Chromium browser at the given URL
  2. 🔐 You log in with your credentials
  3. ⏎ Press Enter after a successful login
  4. 💾 Session data (cookies, storage, fingerprints) is saved to a JSON file

[!IMPORTANT] Sessions maintain internal time-series data such as timestamps, request durations, and scrape statuses. These metrics help analyze performance, rate limits, and stability of scraping sessions. Excessive concurrency may cause request failures, so gradual scaling is recommended.
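As the note above suggests, scaling concurrency gradually is safer than firing every request at once. One way to do that is to gate scrapes behind an `asyncio.Semaphore`. The sketch below uses a stand-in coroutine in place of the real request; with IntelliScraper, the call inside the semaphore block would be `await scraper.scrape(url)`:

```python
import asyncio

async def scrape_all(urls, limit=3):
    """Scrape many URLs while keeping at most `limit` requests in flight."""
    sem = asyncio.Semaphore(limit)
    results = {}

    async def scrape_one(url):
        async with sem:  # blocks until one of the `limit` slots frees up
            # With IntelliScraper this would be:
            #   response = await scraper.scrape(url)
            await asyncio.sleep(0)  # stand-in for the real request
            results[url] = "SUCCESS"

    await asyncio.gather(*(scrape_one(u) for u in urls))
    return results
```

Start with a small limit and raise it only while statuses stay clean; the per-session metrics mentioned above can help you find the ceiling.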


🧠 Authenticated Asynchronous Scraping with Session

import asyncio
import json
from intelliscraper import AsyncScraper, Session, ScrapStatus

async def main():
    # Load existing session
    with open("example_session.json") as f:
        session = Session(**json.load(f))

    async with AsyncScraper(session_data=session) as scraper:
        response = await scraper.scrape(
            "https://example.com/jobs/python?experience=entry-level%2Cmid-level"
        )

        if response.status == ScrapStatus.SUCCESS:
            print("Successfully scraped authenticated page!")
            print(response.scrap_html_content)

if __name__ == "__main__":
    asyncio.run(main())

📝 HTML Parsing

import asyncio
from intelliscraper import AsyncScraper, HTMLParser, ScrapStatus

async def main():
    async with AsyncScraper() as scraper:
        response = await scraper.scrape("https://example.com")

        if response.status == ScrapStatus.SUCCESS:
            parser = HTMLParser(
                url=response.scrape_request.url,
                html=response.scrap_html_content
            )
            print(parser.text)
            print(parser.links)
            print(parser.markdown)
            print(parser.markdown_for_llm)

if __name__ == "__main__":
    asyncio.run(main())
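The link list from `HTMLParser` can feed follow-up scrapes. For example, a plain-Python helper to keep only same-domain links (assuming `parser.links` yields absolute URLs, which the API above does not guarantee) might look like:

```python
from urllib.parse import urlparse

def same_domain_links(base_url, links):
    """Keep only links whose host matches the scraped page's host."""
    host = urlparse(base_url).netloc
    return [link for link in links if urlparse(link).netloc == host]
```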

🌐 Proxy Support (Async)

Route requests through a proxy using the same asynchronous interface.

import asyncio
from intelliscraper import AsyncScraper, BrightDataProxy, ScrapStatus

async def main():
    bright_proxy = BrightDataProxy(
        host="brd.superproxy.io",
        port=22225,
        username="your-username",
        password="your-password"
    )

    async with AsyncScraper(proxy=bright_proxy) as scraper:
        response = await scraper.scrape("https://example.com")

        if response.status == ScrapStatus.SUCCESS:
            print("Scraped successfully through Bright Data proxy!")

if __name__ == "__main__":
    asyncio.run(main())
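Hardcoding proxy credentials, as in the example above, is fine for a demo but risky in real projects. A common alternative is to read them from environment variables; the variable names below are illustrative, not part of IntelliScraper:

```python
import os

def bright_data_credentials():
    """Build BrightDataProxy keyword arguments from environment variables.

    The BRIGHTDATA_* variable names here are made up for illustration.
    """
    return {
        "host": os.environ.get("BRIGHTDATA_HOST", "brd.superproxy.io"),
        "port": int(os.environ.get("BRIGHTDATA_PORT", "22225")),
        "username": os.environ["BRIGHTDATA_USERNAME"],
        "password": os.environ["BRIGHTDATA_PASSWORD"],
    }

# Usage: bright_proxy = BrightDataProxy(**bright_data_credentials())
```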

📁 More examples, including Bright Data configurations and session management, are available in the examples/ directory.


📋 Requirements

  • Python 3.12+
  • Playwright
  • Compatible with Windows, macOS, and Linux

🗺️ Roadmap

  • ✅ Async scraping (core feature)
  • ✅ Session management CLI
  • ✅ Proxy integration (Bright Data)
  • ✅ HTML parsing and Markdown generation
  • ✅ Anti-detection mechanisms
  • 🔄 Distributed crawler mode
  • 🔄 AI-based content extraction

📄 License

Licensed under the MIT License.


📧 Support

For help, issues, or contributions, visit the GitHub Issues page.

