
SpiderForce4AI Python Wrapper

A Python wrapper for SpiderForce4AI - a powerful HTML-to-Markdown conversion service. This package provides an easy-to-use interface for crawling websites and converting their content to clean Markdown format.

Features

  • 🔄 Simple synchronous and asynchronous APIs
  • 📁 Automatic Markdown file saving with URL-based filenames
  • 📊 Real-time progress tracking in console
  • 🪝 Webhook support for real-time notifications
  • 📝 Detailed crawl reports in JSON format
  • ⚡ Concurrent crawling with rate limiting
  • 🔍 Support for sitemap.xml crawling
  • 🛡️ Comprehensive error handling

Installation

pip install spiderforce4ai

Quick Start

from spiderforce4ai import SpiderForce4AI, CrawlConfig

# Initialize the client
spider = SpiderForce4AI("http://localhost:3004")

# Use default configuration
config = CrawlConfig()

# Crawl a single URL
result = spider.crawl_url("https://example.com", config)

# Crawl multiple URLs
urls = [
    "https://example.com/page1",
    "https://example.com/page2"
]
results = spider.crawl_urls(urls, config)

# Crawl from sitemap
results = spider.crawl_sitemap("https://example.com/sitemap.xml", config)

Configuration

The CrawlConfig class provides various configuration options. All parameters are optional with sensible defaults:

config = CrawlConfig(
    # Content Selection (all optional)
    target_selector="article",              # Specific element to target
    remove_selectors=[".ads", "#popup"],   # Elements to remove
    remove_selectors_regex=["modal-\\d+"],  # Regex patterns for removal
    
    # Processing Settings
    max_concurrent_requests=1,              # Default: 1
    request_delay=0.5,                     # Delay between requests in seconds
    timeout=30,                            # Request timeout in seconds
    
    # Output Settings
    output_dir="spiderforce_reports",      # Default output directory
    webhook_url="https://your-webhook.com", # Optional webhook endpoint
    webhook_timeout=10,                     # Webhook timeout in seconds
    report_file=None                        # Optional custom report location
)

Default Directory Structure

./
└── spiderforce_reports/
    ├── example-com-page1.md
    ├── example-com-page2.md
    └── crawl_report.json

Webhook Notifications

If webhook_url is configured, the crawler sends POST requests with the following JSON structure:

{
  "url": "https://example.com/page1",
  "status": "success",
  "markdown": "# Page Title\n\nContent...",
  "timestamp": "2025-02-15T10:30:00.123456",
  "config": {
    "target_selector": "article",
    "remove_selectors": [".ads", "#popup"],
    "remove_selectors_regex": ["modal-\\d+"]
  }
}
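On the receiving side, a handler only needs to parse this JSON and branch on the status field. The function below is a minimal hypothetical sketch of such a receiver (not part of the package), assuming the payload shape documented above:

```python
import json

def handle_notification(raw_body: str) -> str:
    """Parse a webhook payload and return a one-line summary.

    The payload shape follows the JSON structure documented above:
    success payloads carry "markdown", failure payloads carry "error".
    """
    payload = json.loads(raw_body)
    if payload.get("status") == "success":
        return f"OK {payload['url']} ({len(payload.get('markdown', ''))} chars)"
    return f"FAILED {payload['url']}: {payload.get('error', 'unknown error')}"

body = json.dumps({
    "url": "https://example.com/page1",
    "status": "success",
    "markdown": "# Page Title\n\nContent...",
})
print(handle_notification(body))
```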

Crawl Report

A comprehensive JSON report is automatically generated in the output directory:

{
  "timestamp": "2025-02-15T10:30:00.123456",
  "config": {
    "target_selector": "article",
    "remove_selectors": [".ads", "#popup"],
    "remove_selectors_regex": ["modal-\\d+"]
  },
  "results": {
    "successful": [
      {
        "url": "https://example.com/page1",
        "status": "success",
        "markdown": "# Page Title\n\nContent...",
        "timestamp": "2025-02-15T10:30:00.123456"
      }
    ],
    "failed": [
      {
        "url": "https://example.com/page2",
        "status": "failed",
        "error": "HTTP 404: Not Found",
        "timestamp": "2025-02-15T10:30:01.123456"
      }
    ]
  },
  "summary": {
    "total": 2,
    "successful": 1,
    "failed": 1
  }
}
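Because the report is plain JSON, it is easy to post-process. For example, to list the URLs that failed (a sketch; in practice you would load crawl_report.json from the output directory instead of the inline sample used here):

```python
import json

def failed_urls(report: dict) -> list:
    """Extract the failed URLs from a crawl report dict."""
    return [entry["url"] for entry in report["results"]["failed"]]

# In practice, load the generated report file, e.g.:
# with open("spiderforce_reports/crawl_report.json") as f:
#     report = json.load(f)
report = json.loads("""{
  "results": {
    "successful": [{"url": "https://example.com/page1", "status": "success"}],
    "failed": [{"url": "https://example.com/page2", "status": "failed",
                "error": "HTTP 404: Not Found"}]
  },
  "summary": {"total": 2, "successful": 1, "failed": 1}
}""")
print(failed_urls(report))  # ['https://example.com/page2']
```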

Async Usage

import asyncio
from spiderforce4ai import SpiderForce4AI, CrawlConfig

async def main():
    config = CrawlConfig()
    spider = SpiderForce4AI("http://localhost:3004")
    
    async with spider:
        results = await spider.crawl_urls_async(
            ["https://example.com/page1", "https://example.com/page2"],
            config
        )
    
    return results

if __name__ == "__main__":
    results = asyncio.run(main())

Error Handling

The crawler is designed to be resilient:

  • Continues processing even if some URLs fail
  • Records all errors in the crawl report
  • Sends error notifications via webhook if configured
  • Provides clear error messages in console output
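In practice this means you can filter results after the crawl rather than wrapping each call in try/except. The sketch below uses a stand-in dataclass and assumes result objects expose the url, status, markdown, and error fields shown in the report schema:

```python
from dataclasses import dataclass

# Stand-in for the wrapper's result objects (hypothetical; the real objects
# are returned by spider.crawl_urls and mirror the report schema's fields).
@dataclass
class CrawlResult:
    url: str
    status: str
    markdown: str = ""
    error: str = ""

def partition(results):
    """Split results into (successful, failed) lists, mirroring the report."""
    ok = [r for r in results if r.status == "success"]
    bad = [r for r in results if r.status != "success"]
    return ok, bad

results = [
    CrawlResult("https://example.com/page1", "success", markdown="# Title"),
    CrawlResult("https://example.com/page2", "failed", error="HTTP 404: Not Found"),
]
ok, bad = partition(results)
for r in bad:
    print(f"{r.url}: {r.error}")
```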

Progress Tracking

The crawler provides real-time progress tracking in the console:

🔄 Crawling URLs... [####################] 100% 
✓ Successful: 95
✗ Failed: 5
📊 Report saved to: ./spiderforce_reports/crawl_report.json

Usage with AI Agents

The package is designed to be easily integrated with AI agents and chat systems:

from spiderforce4ai import SpiderForce4AI, CrawlConfig

def fetch_content_for_ai(urls):
    spider = SpiderForce4AI("http://localhost:3004")
    config = CrawlConfig()
    
    # Crawl content
    results = spider.crawl_urls(urls, config)
    
    # Return successful results
    return {
        result.url: result.markdown 
        for result in results 
        if result.status == "success"
    }

# Use with AI agent
urls = ["https://example.com/article1", "https://example.com/article2"]
content = fetch_content_for_ai(urls)

Requirements

  • Python 3.11 or later
  • Docker (for running SpiderForce4AI service)

License

MIT License

Credits

Created by Peter Tam
