# ScrapeBadger Python SDK

The official Python SDK for ScrapeBadger - async web scraping APIs for Twitter, Vinted, and more.
## Features

- Async-first - Built with `asyncio` for high-performance concurrent scraping
- Type-safe - Full type hints and Pydantic models for all responses
- Automatic pagination - Iterator methods with smart rate limit handling
- Resilient retries - Exponential backoff on transient errors
- 37+ Twitter endpoints - Tweets, users, lists, communities, trends, geo, real-time streams
- Vinted scraping - Search items, item details, user profiles, brands, colors, markets
- Web scraping - Anti-bot bypass, JS rendering, and AI data extraction
## Installation

```bash
pip install scrapebadger
```

Or with uv:

```bash
uv add scrapebadger
```
Quick Start
import asyncio
from scrapebadger import ScrapeBadger
async def main():
async with ScrapeBadger(api_key="your-api-key") as client:
# Get a user profile
user = await client.twitter.users.get_by_username("elonmusk")
print(f"{user.name} has {user.followers_count:,} followers")
# Scrape a website
result = await client.web.scrape("https://scrapebadger.com", format="markdown")
print(result.content)
# Search tweets
tweets = await client.twitter.tweets.search("python programming")
for tweet in tweets.data:
print(f"@{tweet.username}: {tweet.text[:100]}...")
asyncio.run(main())
## Authentication

Get your API key from scrapebadger.com and pass it to the client:

```python
from scrapebadger import ScrapeBadger

client = ScrapeBadger(api_key="sb_live_xxxxxxxxxxxxx")
```

You can also set the `SCRAPEBADGER_API_KEY` environment variable:

```bash
export SCRAPEBADGER_API_KEY="sb_live_xxxxxxxxxxxxx"
```
## Available APIs

| API | Description | Documentation |
|---|---|---|
| Web Scraping | Scrape any website with JS rendering, anti-bot bypass, and AI extraction | Web Scraping Guide |
| Twitter | 37+ endpoints for tweets, users, lists, communities, trends, and real-time streams | Twitter Guide |
| Vinted | Search items, item details, user profiles, brands, colors, statuses, and markets | Vinted Guide |
## Error Handling

```python
from scrapebadger import (
    ScrapeBadger,
    ScrapeBadgerError,
    AuthenticationError,
    RateLimitError,
    InsufficientCreditsError,
    NotFoundError,
    ValidationError,
    ServerError,
)

async with ScrapeBadger(api_key="your-key") as client:
    try:
        user = await client.twitter.users.get_by_username("elonmusk")
    except AuthenticationError:
        print("Invalid API key")
    except RateLimitError as e:
        print(f"Rate limited. Retry after {e.retry_after} seconds")
        print(f"Limit: {e.limit}, Remaining: {e.remaining}")
    except InsufficientCreditsError:
        print("Out of credits! Purchase more at scrapebadger.com")
    except NotFoundError:
        print("User not found")
    except ValidationError as e:
        print(f"Invalid parameters: {e}")
    except ServerError:
        print("Server error, try again later")
    except ScrapeBadgerError as e:
        print(f"API error: {e}")
```
## Configuration

### Custom Timeout and Retries

```python
from scrapebadger import ScrapeBadger

client = ScrapeBadger(
    api_key="your-key",
    timeout=120.0,   # Request timeout in seconds (default: 300)
    max_retries=5,   # Retry attempts (default: 10)
)
```
### Advanced Configuration

```python
from scrapebadger import ScrapeBadger
from scrapebadger._internal import ClientConfig

config = ClientConfig(
    api_key="your-key",
    base_url="https://scrapebadger.com",
    timeout=300.0,
    connect_timeout=10.0,
    max_retries=10,
    retry_on_status=(502, 503, 504),
    headers={"X-Custom-Header": "value"},
)
client = ScrapeBadger(config=config)
```
## Retry Behavior

The SDK automatically retries requests that fail with 502, 503, or 504 status codes, using exponential backoff (1s, 2s, 4s, 8s, ...). Each retry logs a warning:

```
⚠ 503 Service Unavailable — retrying in 4s (attempt 3/10)
```

To see these warnings, configure Python logging:

```python
import logging

logging.basicConfig(level=logging.WARNING)
```
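The backoff schedule above is plain exponential doubling. As a sketch (the SDK's internal implementation may differ, e.g. by adding jitter or a delay cap):

```python
def backoff_delay(attempt: int) -> float:
    """Seconds to wait before retry `attempt` (1-based): 1, 2, 4, 8, ..."""
    return float(2 ** (attempt - 1))


# The documented schedule for the first four retries.
print([backoff_delay(a) for a in range(1, 5)])  # [1.0, 2.0, 4.0, 8.0]
```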
Rate Limit Aware Pagination
When using *_all pagination methods, the SDK reads X-RateLimit-Remaining and
X-RateLimit-Reset headers from each response. When remaining requests drop below
20% of your tier's limit, pagination automatically slows down to spread requests
across the remaining window — preventing 429 errors. A warning is logged when
throttling activates:
⚠ Rate limit: 25/300 remaining (resets in 42s), throttling pagination to ~0.6 req/s
This works transparently with all tier levels (Free: 60/min, Basic: 300/min, Pro: 1000/min, Enterprise: 5000/min).
## Development

### Setup

```bash
# Clone the repository
git clone https://github.com/scrape-badger/scrapebadger-python.git
cd scrapebadger-python

# Install dependencies with uv
uv sync --dev

# Install pre-commit hooks
uv run pre-commit install
```

### Running Tests

```bash
# Run all tests
uv run pytest

# Run with coverage
uv run pytest --cov=src/scrapebadger --cov-report=html

# Run specific tests
uv run pytest tests/test_client.py -v
```
### Code Quality

```bash
# Lint
uv run ruff check src/ tests/

# Format
uv run ruff format src/ tests/

# Type check
uv run mypy src/

# All checks
uv run ruff check src/ tests/ && uv run ruff format --check src/ tests/ && uv run mypy src/
```
## Contributing

Contributions are welcome! Please read our Contributing Guide for details.

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Run tests and linting (`uv run pytest && uv run ruff check`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Support

- Documentation: docs.scrapebadger.com
- Issues: GitHub Issues
- Email: support@scrapebadger.com
- Discord: Join our community
Made with ❤️ by ScrapeBadger