SpiderForce4AI Python Wrapper
A Python package for web content crawling and HTML-to-Markdown conversion, built for seamless integration with the SpiderForce4AI service.
Quick Start (Minimal Setup)
```python
from spiderforce4ai import SpiderForce4AI, CrawlConfig

# Initialize with your service URL
spider = SpiderForce4AI("http://localhost:3004")

# Create default config
config = CrawlConfig()

# Crawl a single URL
result = spider.crawl_url("https://example.com", config)
```
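The returned result can then be inspected directly. A minimal sketch, assuming each result carries the same url, status, markdown, and error fields shown in the crawl report below:

```python
# Assumption: result mirrors the crawl report entries
# (url, status, markdown, error); verify against your installed version.
if result.status == "success":
    print(result.markdown)  # Converted markdown content
else:
    print(f"Crawl failed for {result.url}: {result.error}")
```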
Installation
```bash
pip install spiderforce4ai
```
Crawling Methods
1. Single URL
```python
import asyncio

# Basic usage
result = spider.crawl_url("https://example.com", config)

# Async version
async def crawl():
    return await spider.crawl_url_async("https://example.com", config)

result = asyncio.run(crawl())
```
2. Multiple URLs
```python
urls = [
    "https://example.com/page1",
    "https://example.com/page2"
]

# Client-side parallel (using multiprocessing)
results = spider.crawl_urls_parallel(urls, config)

# Server-side parallel (single request)
results = spider.crawl_urls_server_parallel(urls, config)

# Async version
async def crawl():
    results = await spider.crawl_urls_async(urls, config)
```
3. Sitemap Crawling
```python
# Server-side parallel (recommended)
results = spider.crawl_sitemap_server_parallel("https://example.com/sitemap.xml", config)

# Client-side parallel
results = spider.crawl_sitemap_parallel("https://example.com/sitemap.xml", config)

# Async version
async def crawl():
    results = await spider.crawl_sitemap_async("https://example.com/sitemap.xml", config)
```
Configuration Options
All configuration options are optional with sensible defaults:
```python
from pathlib import Path

config = CrawlConfig(
    # Content Selection (all optional)
    target_selector="article",              # Specific element to extract
    remove_selectors=[                      # Elements to remove
        ".ads",
        "#popup",
        ".navigation",
        ".footer"
    ],
    remove_selectors_regex=["modal-\\d+"],  # Regex patterns for removal

    # Processing Settings
    max_concurrent_requests=1,              # For client-side parallel processing
    request_delay=0.5,                      # Delay between requests (seconds)
    timeout=30,                             # Request timeout (seconds)

    # Output Settings
    output_dir=Path("spiderforce_reports"), # Default directory for files
    webhook_url="https://your-webhook.com", # Real-time notifications
    webhook_timeout=10,                     # Webhook timeout (seconds)
    report_file=Path("crawl_report.json")   # Final report location
)
```
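Since every option is optional, a config can override just the fields you care about:

```python
# Everything not passed keeps its default (e.g. output_dir, timeout)
config = CrawlConfig(
    target_selector="main",
    request_delay=1.0
)
```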
Real-World Examples
1. Basic Blog Crawling
```python
from spiderforce4ai import SpiderForce4AI, CrawlConfig
from pathlib import Path

spider = SpiderForce4AI("http://localhost:3004")

config = CrawlConfig(
    target_selector="article.post-content",
    output_dir=Path("blog_content")
)

result = spider.crawl_url("https://example.com/blog-post", config)
```
2. Parallel Website Crawling
```python
config = CrawlConfig(
    remove_selectors=[
        ".navigation",
        ".footer",
        ".ads",
        "#cookie-notice"
    ],
    max_concurrent_requests=5,
    output_dir=Path("website_content"),
    webhook_url="https://your-webhook.com/endpoint"
)

# Using server-side parallel processing
results = spider.crawl_urls_server_parallel([
    "https://example.com/page1",
    "https://example.com/page2",
    "https://example.com/page3"
], config)
```
3. Full Sitemap Processing
```python
config = CrawlConfig(
    target_selector="main",
    remove_selectors=[".sidebar", ".comments"],
    output_dir=Path("site_content"),
    report_file=Path("crawl_report.json")
)

results = spider.crawl_sitemap_server_parallel(
    "https://example.com/sitemap.xml",
    config
)
```
Output Structure
1. Directory Layout
```
spiderforce_reports/          # Default output directory
├── example-com-page1.md      # Converted markdown files
├── example-com-page2.md
└── crawl_report.json         # Crawl report
```
2. Markdown Files
Each file is named using a slugified version of the URL:
```markdown
# Page Title

Content converted to clean markdown...
```
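The exact slugification logic is internal to the package; a rough sketch of the naming pattern, for illustration only:

```python
import re

def slugify_url(url: str) -> str:
    # Illustration: drop the scheme, then collapse
    # non-alphanumeric runs into hyphens.
    slug = re.sub(r"^https?://", "", url)
    slug = re.sub(r"[^a-zA-Z0-9]+", "-", slug).strip("-")
    return f"{slug}.md"

print(slugify_url("https://example.com/page1"))  # example-com-page1.md
```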
3. Crawl Report
```json
{
  "timestamp": "2025-02-15T10:30:00.123456",
  "config": {
    "target_selector": "article",
    "remove_selectors": [".ads", "#popup"]
  },
  "results": {
    "successful": [
      {
        "url": "https://example.com/page1",
        "status": "success",
        "markdown": "# Page Title\n\nContent...",
        "timestamp": "2025-02-15T10:30:00.123456"
      }
    ],
    "failed": [
      {
        "url": "https://example.com/page2",
        "status": "failed",
        "error": "HTTP 404: Not Found",
        "timestamp": "2025-02-15T10:30:01.123456"
      }
    ]
  },
  "summary": {
    "total": 2,
    "successful": 1,
    "failed": 1
  }
}
```
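The report is plain JSON, so it is easy to post-process. A minimal sketch using only the standard library (path taken from the default directory layout above):

```python
import json
from pathlib import Path

# Default location per the directory layout shown above
report = json.loads(Path("spiderforce_reports/crawl_report.json").read_text())

print(f"{report['summary']['successful']}/{report['summary']['total']} pages succeeded")
for entry in report["results"]["failed"]:
    print(f"FAILED {entry['url']}: {entry['error']}")
```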
4. Webhook Notifications
If configured, real-time updates are sent for each processed URL:
```json
{
  "url": "https://example.com/page1",
  "status": "success",
  "markdown": "# Page Title\n\nContent...",
  "timestamp": "2025-02-15T10:30:00.123456",
  "config": {
    "target_selector": "article",
    "remove_selectors": [".ads", "#popup"]
  }
}
```
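Any endpoint that accepts a JSON POST can receive these notifications. For local testing, a minimal receiver sketch using only the standard library (this is not part of the package; point webhook_url at http://localhost:8000):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and decode the notification payload
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        print(f"{payload['status']}: {payload['url']}")
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 8000), WebhookHandler).serve_forever()
```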
Error Handling
The package handles various types of errors gracefully:
- Network errors
- Timeout errors
- Invalid URLs
- Missing content
- Service errors
All errors are:
- Logged in the console
- Included in the JSON report
- Sent via webhook (if configured)
- Available in the results list
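For example, failed URLs can be pulled straight from the results list. A sketch assuming each result object mirrors the report entries above:

```python
# Assumption: results expose url, status, and error attributes,
# matching the crawl report schema; verify against your version.
failed = [r for r in results if r.status == "failed"]
for r in failed:
    print(f"Retry candidate: {r.url} ({r.error})")
```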
Requirements
- Python 3.11 or later
- A running SpiderForce4AI service
- An internet connection
Performance Considerations
- Server-side parallel processing
  - Best for most cases
  - Single HTTP request for multiple URLs
  - Less network overhead
  - Use `crawl_urls_server_parallel()` or `crawl_sitemap_server_parallel()`
- Client-side parallel processing
  - Good for special cases requiring local control
  - Uses Python multiprocessing
  - More network overhead
  - Use `crawl_urls_parallel()` or `crawl_sitemap_parallel()`
- Async processing
  - Best for integration with async applications
  - Good for real-time processing (see the sketch below)
  - Use `crawl_url_async()`, `crawl_urls_async()`, or `crawl_sitemap_async()`
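A minimal sketch of awaiting a crawl from inside an existing async application:

```python
import asyncio
from spiderforce4ai import SpiderForce4AI, CrawlConfig

async def main():
    spider = SpiderForce4AI("http://localhost:3004")
    config = CrawlConfig()
    # Awaiting keeps the event loop free for other tasks
    return await spider.crawl_urls_async(
        ["https://example.com/page1", "https://example.com/page2"],
        config,
    )

results = asyncio.run(main())
```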
License
MIT License
Credits
Created by Peter Tam