# scrapingpros

Python SDK for the Scraping Pros API — web scraping with browser rendering, proxy rotation, and structured data extraction.
## Installation

```bash
pip install scrapingpros
```

On Windows, if you get "Permission denied", use:

```bash
python -m pip install scrapingpros
```
## Quick Start

No signup needed — use the demo token to start immediately:

```python
from scrapingpros import ScrapingPros

client = ScrapingPros("demo_6x595maoA6GdOdVb")
result = client.scrape("https://example.com")
print(result.html)
```
The demo token includes 5,000 credits/month and 30 requests/min (1 simple request = 1 credit; 1 browser request = 5 credits). Credits are **not** consumed for requests that fail due to infrastructure errors. For higher limits, contact the Scraping Pros team.
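As a quick sketch of the credit arithmetic above (the `estimate_credits` helper is hypothetical, not part of the SDK):

```python
def estimate_credits(simple_requests: int, browser_requests: int) -> int:
    """Estimate monthly credit usage under the documented pricing:
    1 credit per simple request, 5 credits per browser request."""
    SIMPLE_COST = 1
    BROWSER_COST = 5
    return simple_requests * SIMPLE_COST + browser_requests * BROWSER_COST

# 3,000 simple + 400 browser requests exactly fill the demo's 5,000 credits.
print(estimate_credits(3000, 400))  # 5000
```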
## Usage Examples

### Markdown Output (for AI/LLMs)

```python
result = client.scrape("https://example.com", format="markdown")
print(result.markdown)
```
### Browser Rendering

```python
result = client.scrape(
    "https://spa-site.com",
    browser=True,
    use_proxy="any",
)
```
### Structured Data Extraction

```python
result = client.scrape(
    "https://quotes.toscrape.com/",
    extract={
        "quotes": {"selector": "css:.text", "multiple": True},
        "authors": {"selector": "css:.author", "multiple": True},
    },
)
print(result.extracted_data["quotes"])
```
### Async Batch Processing

```python
collection = client.create_collection("my-batch", [
    {"url": "https://example.com/1"},
    {"url": "https://example.com/2"},
])
run = client.run_and_wait(collection.id)
print(f"Done: {run.success_requests}/{run.total_requests}")
```
### Async Client

```python
from scrapingpros import AsyncScrapingPros

async def main():  # run with asyncio.run(main())
    async with AsyncScrapingPros("demo_6x595maoA6GdOdVb") as client:
        result = await client.scrape("https://example.com", format="markdown")
```
## API Methods

| Method | Description |
|---|---|
| `client.scrape(url, ...)` | Scrape a URL (HTML or markdown) |
| `client.download(url, ...)` | Download a file as base64 |
| `client.create_collection(name, requests)` | Create a batch collection |
| `client.run_and_wait(collection_id)` | Run a batch and wait for completion |
| `client.create_viability_test(urls)` | Analyze sites before scraping |
| `client.list_proxy_countries()` | List available proxy countries |
| `client.billing()` | Check usage and billing |
| `client.health()` | API health check |
## Error Handling

```python
from scrapingpros import ScrapingPros, AuthenticationError, RateLimitError, QuotaExceededError

try:
    result = client.scrape("https://example.com")
except AuthenticationError:
    print("Invalid token — use demo_6x595maoA6GdOdVb for testing")
except RateLimitError as e:
    print(f"Rate limited. Retry after {e.retry_after}s")
except QuotaExceededError:
    print("Monthly quota exceeded — upgrade your plan for more requests")
```

All exceptions inherit from `ScrapingProsError`.
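The `retry_after` attribute makes it easy to back off politely after a 429. A minimal sketch, assuming only that the raised exception exposes `retry_after` in seconds as shown above (the `with_rate_limit_retry` helper itself is hypothetical, not part of the SDK):

```python
import time

def with_rate_limit_retry(func, max_attempts=3):
    """Call `func`, sleeping for the server-suggested interval and
    retrying when it raises an exception carrying `retry_after`."""
    for attempt in range(max_attempts):
        try:
            return func()
        except Exception as e:
            retry_after = getattr(e, "retry_after", None)
            if retry_after is None or attempt == max_attempts - 1:
                raise  # not a rate-limit error, or out of attempts
            time.sleep(retry_after)
```

Usage: `with_rate_limit_retry(lambda: client.scrape("https://example.com"))`. Note that the client can already retry 429s itself when constructed with `max_retries`, so this wrapper is only needed for custom backoff policies.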
## Usage & Quota Tracking

```python
# Check remaining quota after any API call
client.scrape("https://example.com")
print(f"Requests remaining: {client.quota_remaining}")
print(f"Rate limit remaining: {client.rate_limit_remaining}")

# Get detailed billing info
billing = client.billing()
```
## Plans
| Plan | Price | Credits/mo | Rate | Concurrent |
|---|---|---|---|---|
| Demo (public) | Free | 5,000 | 30/min | 5 |
| Free | $0 | 1,000 | 30/min | 5 |
| Starter | $29 | 25,000 | 30/min | 10 |
| Growth | $69 | 100,000 | 60/min | 20 |
| Pro | $199 | 500,000 | 120/min | 50 |
| Scale | $499 | 2,500,000 | 200/min | 100 |
| Enterprise | Custom | Unlimited | 2,000/min | Custom |
1 simple request = 1 credit; 1 browser request = 5 credits.

See all features with `client.plans()` or visit scrapingpros.com.
## Configuration

```python
client = ScrapingPros(
    "demo_6x595maoA6GdOdVb",                  # or your dedicated token / SP_TOKEN env var
    base_url="https://api.scrapingpros.com",  # default
    timeout=120.0,                            # request timeout in seconds
    max_retries=3,                            # auto-retry on 429
)
```
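As the first comment above notes, the token can also come from the `SP_TOKEN` environment variable. A minimal sketch of that pattern (falling back to the public demo token is this example's choice, not SDK behavior):

```python
import os

# Read the API token from the environment, falling back to the
# public demo token when SP_TOKEN is not set.
token = os.environ.get("SP_TOKEN", "demo_6x595maoA6GdOdVb")
```

Then pass it to the client as `ScrapingPros(token)`.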
## License

MIT