
SpiderForce4AI Python Wrapper

A Python package for web content crawling and HTML-to-Markdown conversion, built for seamless integration with the SpiderForce4AI service.

Quick Start (Minimal Setup)

from spiderforce4ai import SpiderForce4AI, CrawlConfig

# Initialize with your service URL
spider = SpiderForce4AI("http://localhost:3004")

# Create default config
config = CrawlConfig()

# Crawl a single URL
result = spider.crawl_url("https://example.com", config)
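
The returned result object carries the outcome of the crawl. As a minimal sketch, assuming it exposes the same status, markdown and error fields as the crawl report shown later:

# Inspect the result (attribute names assumed from the crawl report format)
if result.status == "success":
    print(result.markdown)   # converted Markdown content
else:
    print(result.error)      # error message for failed crawls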

Installation

pip install spiderforce4ai

Crawling Methods

1. Single URL

# Basic usage
result = spider.crawl_url("https://example.com", config)

# Async version
async def crawl():
    result = await spider.crawl_url_async("https://example.com", config)
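
The async variants are coroutines, so they have to be driven by an event loop; in a standalone script that usually means asyncio.run():

import asyncio

asyncio.run(crawl())  # runs the coroutine defined above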

2. Multiple URLs

urls = [
    "https://example.com/page1",
    "https://example.com/page2"
]

# Client-side parallel (using multiprocessing)
results = spider.crawl_urls_parallel(urls, config)

# Server-side parallel (single request)
results = spider.crawl_urls_server_parallel(urls, config)

# Async version
async def crawl():
    results = await spider.crawl_urls_async(urls, config)

3. Sitemap Crawling

# Server-side parallel (recommended)
results = spider.crawl_sitemap_server_parallel("https://example.com/sitemap.xml", config)

# Client-side parallel
results = spider.crawl_sitemap_parallel("https://example.com/sitemap.xml", config)

# Async version
async def crawl():
    results = await spider.crawl_sitemap_async("https://example.com/sitemap.xml", config)

Configuration Options

All configuration options are optional with sensible defaults:

from pathlib import Path

config = CrawlConfig(
    # Content Selection (all optional)
    target_selector="article",              # Specific element to extract
    remove_selectors=[                      # Elements to remove
        ".ads", 
        "#popup",
        ".navigation",
        ".footer"
    ],
    remove_selectors_regex=["modal-\\d+"],  # Regex patterns for removal
    
    # Processing Settings
    max_concurrent_requests=1,              # For client-side parallel processing
    request_delay=0.5,                     # Delay between requests (seconds)
    timeout=30,                            # Request timeout (seconds)
    
    # Output Settings
    output_dir=Path("spiderforce_reports"),  # Default directory for files
    webhook_url="https://your-webhook.com",  # Real-time notifications
    webhook_timeout=10,                      # Webhook timeout (seconds)
    webhook_headers={                        # Optional custom headers for webhook
        "Authorization": "Bearer your-token",
        "X-Custom-Header": "value"
    },
    # Optional custom webhook payload template
    webhook_payload_template='''{
        "crawled_url": "{url}",
        "content": "{markdown}",
        "crawl_status": "{status}",
        "crawl_error": "{error}",
        "crawl_time": "{timestamp}",
        "custom_field": "your-value"
    }''',
    save_reports=False,                      # Whether to save crawl reports (default: False)
    report_file=Path("crawl_report.json")    # Report location (used only if save_reports=True)
)

Real-World Examples

1. Basic Blog Crawling

from spiderforce4ai import SpiderForce4AI, CrawlConfig
from pathlib import Path

spider = SpiderForce4AI("http://localhost:3004")
config = CrawlConfig(
    target_selector="article.post-content",
    output_dir=Path("blog_content")
)

result = spider.crawl_url("https://example.com/blog-post", config)

2. Parallel Website Crawling

config = CrawlConfig(
    remove_selectors=[
        ".navigation",
        ".footer",
        ".ads",
        "#cookie-notice"
    ],
    max_concurrent_requests=5,
    output_dir=Path("website_content"),
    webhook_url="https://your-webhook.com/endpoint"
)

# Using server-side parallel processing
results = spider.crawl_urls_server_parallel([
    "https://example.com/page1",
    "https://example.com/page2",
    "https://example.com/page3"
], config)
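
The parallel methods return one result per URL. A sketch for separating successes from failures, again assuming the status and error fields shown in the crawl report:

# Split results by outcome (attribute names assumed from the report format)
succeeded = [r for r in results if r.status == "success"]
failed = [r for r in results if r.status != "success"]

print(f"{len(succeeded)} succeeded, {len(failed)} failed")
for r in failed:
    print(f"{r.url}: {r.error}")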

3. Full Sitemap Processing

config = CrawlConfig(
    target_selector="main",
    remove_selectors=[".sidebar", ".comments"],
    output_dir=Path("site_content"),
    report_file=Path("crawl_report.json")
)

results = spider.crawl_sitemap_server_parallel(
    "https://example.com/sitemap.xml",
    config
)

Output Structure

1. Directory Layout

spiderforce_reports/           # Default output directory
├── example-com-page1.md      # Converted markdown files
├── example-com-page2.md
└── crawl_report.json         # Crawl report

2. Markdown Files

Each file is named using a slugified version of the URL:

# Page Title

Content converted to clean markdown...
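
For illustration only, the directory layout above suggests a mapping along these lines; this is a rough sketch, not the package's actual slugification code:

from urllib.parse import urlparse

def slug_for(url: str) -> str:
    # Approximates the file naming shown in the directory layout above
    parsed = urlparse(url)
    slug = f"{parsed.netloc}{parsed.path}".replace(".", "-").replace("/", "-").strip("-")
    return f"{slug}.md"

slug_for("https://example.com/page1")  # -> "example-com-page1.md"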

3. Crawl Report

{
  "timestamp": "2025-02-15T10:30:00.123456",
  "config": {
    "target_selector": "article",
    "remove_selectors": [".ads", "#popup"]
  },
  "results": {
    "successful": [
      {
        "url": "https://example.com/page1",
        "status": "success",
        "markdown": "# Page Title\n\nContent...",
        "timestamp": "2025-02-15T10:30:00.123456"
      }
    ],
    "failed": [
      {
        "url": "https://example.com/page2",
        "status": "failed",
        "error": "HTTP 404: Not Found",
        "timestamp": "2025-02-15T10:30:01.123456"
      }
    ]
  },
  "summary": {
    "total": 2,
    "successful": 1,
    "failed": 1
  }
}
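
Because the report is plain JSON, it is easy to post-process with the standard library. For example, assuming save_reports=True and the default layout shown above:

import json
from pathlib import Path

# Load the saved report and summarize the outcome
report = json.loads(Path("spiderforce_reports/crawl_report.json").read_text())
print(report["summary"])  # e.g. {'total': 2, 'successful': 1, 'failed': 1}
for entry in report["results"]["failed"]:
    print(entry["url"], entry["error"])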

4. Webhook Notifications

If configured, real-time updates are sent for each processed URL:

{
  "url": "https://example.com/page1",
  "status": "success",
  "markdown": "# Page Title\n\nContent...",
  "timestamp": "2025-02-15T10:30:00.123456",
  "config": {
    "target_selector": "article",
    "remove_selectors": [".ads", "#popup"]
  }
}
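
Any HTTP endpoint that accepts POSTed JSON can receive these notifications. A minimal standard-library sketch of a receiver (not part of the package; the port and handling are illustrative):

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the JSON payload sent for each processed URL
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        print(payload.get("url"), payload.get("status"))
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 8000), WebhookHandler).serve_forever()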

Error Handling

The package handles various types of errors gracefully:

  • Network errors
  • Timeout errors
  • Invalid URLs
  • Missing content
  • Service errors

All errors are:

  1. Logged to the console
  2. Included in the JSON report
  3. Sent via webhook (if configured)
  4. Available in the results list

Requirements

  • Python 3.11 or later
  • Running SpiderForce4AI service
  • Internet connection

Performance Considerations

  1. Server-side Parallel Processing

    • Best for most cases
    • Single HTTP request for multiple URLs
    • Less network overhead
    • Use: crawl_urls_server_parallel() or crawl_sitemap_server_parallel()
  2. Client-side Parallel Processing

    • Good for special cases requiring local control
    • Uses Python multiprocessing
    • More network overhead
    • Use: crawl_urls_parallel() or crawl_sitemap_parallel()
  3. Async Processing

    • Best for integration with async applications
    • Good for real-time processing
    • Use: crawl_url_async(), crawl_urls_async(), or crawl_sitemap_async() (see the sketch below)
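
For integration with an existing async application, the async methods can run alongside other coroutines. A minimal sketch, reusing spider, urls and config from the examples above:

import asyncio

async def other_work():
    await asyncio.sleep(1)  # stand-in for unrelated async work in your application

async def main():
    results, _ = await asyncio.gather(
        spider.crawl_urls_async(urls, config),  # crawl concurrently...
        other_work(),                           # ...with the rest of the app
    )
    return results

results = asyncio.run(main())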

License

MIT License

Credits

Created by Peter Tam
