# UltimaScraperAPI

A modular Python scraping framework for premium content platforms
UltimaScraperAPI is a modular Python scraping framework for interacting with premium content platforms such as OnlyFans, Fansly, and LoyalFans. It provides a unified, async-first API for authentication, user profiles, posts, messages, and media downloads, with comprehensive session management and caching built in.
Platform Status:

- ✅ OnlyFans: Fully supported and stable
- 🚧 Fansly: Work in progress with limited functionality
- 🚧 LoyalFans: Work in progress with limited functionality
## 📚 Documentation

Read the full documentation →

### Getting Started
- Installation Guide - Installation methods and requirements
- Quick Start Tutorial - Get up and running in minutes
- Configuration - Complete configuration reference
### User Guides
- Authentication - How to authenticate with platforms
- Working with APIs - Common operations and patterns
- Proxy Support - Configure proxies and VPNs
- Session Management - Redis integration and caching
### API Reference
- OnlyFans API - Complete OnlyFans API reference
- Fansly API - Fansly API reference (WIP)
- LoyalFans API - LoyalFans API reference (WIP)
- Helpers - Utility functions and helpers
### Development
- Architecture - System design and architecture
- Contributing Guide - How to contribute
- Testing - Running and writing tests
## ✨ Features

- 🌐 Multi-Platform Support: OnlyFans (stable), Fansly (WIP), and LoyalFans (WIP)
- ⚡ Async-First Design: Built with `asyncio` and `aiohttp` for high performance
- 🔑 Flexible Authentication: Cookie-based and guest authentication flows
- 📦 Unified Data Models: Consistent Pydantic models for users, posts, messages, and media
- 🔧 Highly Extensible: Modular architecture makes adding new platforms easy
- 🔗 Advanced Networking: Session management, connection pooling, proxy support (HTTP/HTTPS/SOCKS)
- 🔌 WebSocket Support: Real-time updates and live notifications
- 💾 Redis Integration: Optional caching, session persistence, and rate limiting
- 🔒 Type Safety: Comprehensive type hints and validation with Pydantic v2
- 🔐 DRM Support: Widevine CDM integration for encrypted content
- 🎯 Rate Limiting: Built-in rate limiting and exponential backoff
- 🛡️ Error Handling: Comprehensive error handling with retry mechanisms
- 📝 Comprehensive Logging: Detailed logging for debugging and monitoring
## 📋 Requirements
- Python: 3.10, 3.11, 3.12, 3.13, or 3.14 (but less than 4.0)
- Package Manager: uv (recommended) or pip
- Optional: Redis 6.2+ for caching and session management
## 🚀 Installation

### Using uv (Recommended)

uv is a fast Python package installer and resolver:

```bash
# Install uv if you haven't already
pip install uv

# Install UltimaScraperAPI
uv pip install ultima-scraper-api
```

### Using pip

```bash
pip install ultima-scraper-api
```
### From Source

For development or the latest features:

```bash
# Clone the repository
git clone https://github.com/UltimaHoarder/UltimaScraperAPI.git
cd UltimaScraperAPI

# Install with uv
uv pip install -e .

# Or with pip
pip install -e .
```

### Virtual Environment (Recommended)

Always use a virtual environment to avoid dependency conflicts:

```bash
# Create a virtual environment
python -m venv venv

# Activate it
source venv/bin/activate  # Linux/macOS
venv\Scripts\activate     # Windows

# Install the package
uv pip install ultima-scraper-api
```
## 💡 Quick Start

### Basic Usage

```python
import asyncio

from ultima_scraper_api import OnlyFansAPI, UltimaScraperAPIConfig


async def main():
    # Initialize configuration
    config = UltimaScraperAPIConfig()
    api = OnlyFansAPI(config)

    # Authentication credentials
    # Obtain these from your browser's Network tab (F12)
    # See: https://ultimahoarder.github.io/UltimaScraperAPI/user-guide/authentication/
    auth_json = {
        "cookie": "your_cookie_value",
        "user_agent": "your_user_agent",
        "x-bc": "your_x-bc_token",
    }

    # Use the context manager for automatic cleanup
    async with api.login_context(auth_json) as authed:
        if authed and authed.is_authed():
            # Get authenticated user info
            me = await authed.get_me()
            print(f"Logged in as: {me.username}")

            # Get a user profile
            user = await authed.get_user("username")
            if user:
                print(f"User: {user.username} ({user.name})")

                # Fetch the user's posts
                posts = await user.get_posts(limit=10)
                print(f"Found {len(posts)} posts")

                # Download media from posts
                for post in posts:
                    if post.media:
                        for media in post.media:
                            print(f"Downloading: {media.filename}")
                            content = await media.download()
                            # Save content to file...


if __name__ == "__main__":
    asyncio.run(main())
```
### Credential Extraction

You need three pieces of information from your browser:

- Cookie: Your session cookie
- User-Agent: Your browser's user agent string
- x-bc (OnlyFans only): Dynamic authorization token

Quick Steps:

1. Open your browser and navigate to the platform
2. Open Developer Tools (F12)
3. Go to the Network tab
4. Look for API requests and copy the required headers
For detailed instructions with screenshots, see the Authentication Guide.
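Rather than pasting credentials directly into source code, you can keep them in a local file that is excluded from version control. A minimal sketch, assuming a git-ignored `auth.json` holding the same three keys as above (the filename and layout are illustrative, not part of the library):

```python
import json
from pathlib import Path

# Load credentials from a git-ignored auth.json (hypothetical file)
auth_json = json.loads(Path("auth.json").read_text())

# Fail fast if any of the expected keys are missing
expected = {"cookie", "user_agent", "x-bc"}
missing = expected - auth_json.keys()
if missing:
    raise SystemExit(f"auth.json is missing keys: {', '.join(sorted(missing))}")
```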
### Guest Mode (Limited Access)

Some platforms support guest access for public content:

```python
async with api.login_context(guest=True) as authed:
    # Limited operations available (public profiles, posts, etc.)
    user = await authed.get_user("public_username")
    if user:
        print(f"Public profile: {user.username}")
```
## 🔧 Configuration

### Basic Configuration

```python
from ultima_scraper_api import UltimaScraperAPIConfig

# Load from a JSON file
config = UltimaScraperAPIConfig.from_json_file("config.json")

# Or create programmatically
config = UltimaScraperAPIConfig()
```
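`from_json_file` expects a JSON document. As a rough illustration, the snippet below writes a `config.json` whose layout mirrors the `Network`/`Proxy` and `Redis` options shown in the sections that follow; the exact schema is defined by the library's Pydantic models, so treat these field names as a best guess and consult the Configuration Guide for the authoritative reference:

```python
import json

# Best-guess config.json layout mirroring the Network/Proxy and Redis
# options shown below (not the authoritative schema)
example_config = {
    "network": {
        "proxy": {
            "http": "http://proxy.example.com:8080",
            "https": "https://proxy.example.com:8080",
        }
    },
    "redis": {"host": "localhost", "port": 6379, "db": 0},
}

with open("config.json", "w") as f:
    json.dump(example_config, f, indent=4)
```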
### Environment Variables

```bash
# Set up your credentials
export ONLYFANS_COOKIE="your_cookie_value"
export ONLYFANS_USER_AGENT="Mozilla/5.0 ..."
export ONLYFANS_XBC="your_x-bc_token"
```

Then load them in your code:

```python
import os

auth_json = {
    "cookie": os.getenv("ONLYFANS_COOKIE"),
    "user_agent": os.getenv("ONLYFANS_USER_AGENT"),
    "x-bc": os.getenv("ONLYFANS_XBC"),
}
```
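`os.getenv` returns `None` for unset variables, which would only surface later as a failed login. A quick guard (plain standard library, nothing library-specific) fails fast instead:

```python
# Abort early if any credential is unset or empty
missing = [key for key, value in auth_json.items() if not value]
if missing:
    raise SystemExit(f"Missing credentials: {', '.join(missing)}")
```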
### Proxy Configuration

Configure HTTP, HTTPS, or SOCKS proxies:

```python
from ultima_scraper_api import UltimaScraperAPIConfig
from ultima_scraper_api.config import Network, Proxy

config = UltimaScraperAPIConfig(
    network=Network(
        proxy=Proxy(
            http="http://proxy.example.com:8080",
            https="https://proxy.example.com:8080",
            # Or a SOCKS proxy:
            # http="socks5://proxy.example.com:1080",
        )
    )
)
```
### Redis Configuration

Enable Redis for caching and session management:

```python
from ultima_scraper_api import UltimaScraperAPIConfig
from ultima_scraper_api.config import Redis

config = UltimaScraperAPIConfig(
    redis=Redis(
        host="localhost",
        port=6379,
        db=0,
        password="your_password",  # Optional
    )
)
```
For complete configuration options, see the Configuration Guide.
## 📖 Usage Examples

### Fetching Subscriptions

```python
async with api.login_context(auth_json) as authed:
    # Get all active subscriptions
    subscriptions = await authed.get_subscriptions()

    for sub in subscriptions:
        user = sub.user
        print(f"{user.username} - Subscribed: {sub.subscribed_at}")
        print(f"  Expires: {sub.expires_at}")
        print(f"  Price: ${sub.price}")
```
### Getting Messages

```python
async with api.login_context(auth_json) as authed:
    # Get a specific user
    user = await authed.get_user("username")

    # Fetch the message conversation
    messages = await user.get_messages(limit=50)

    for msg in messages:
        print(f"[{msg.created_at}] {msg.from_user.username}: {msg.text}")

        # Check for media attachments
        if msg.media:
            print(f"  Attachments: {len(msg.media)} media files")
```
### Downloading Stories

```python
from pathlib import Path

import aiofiles

async with api.login_context(auth_json) as authed:
    user = await authed.get_user("username")

    # Get active stories
    stories = await user.get_stories()

    for story in stories:
        if story.media:
            for media in story.media:
                # Download media content
                content = await media.download()

                # Save to file (ensure the directory exists first)
                Path("stories").mkdir(exist_ok=True)
                filename = f"stories/{media.filename}"
                async with aiofiles.open(filename, "wb") as f:
                    await f.write(content)
                print(f"Downloaded: {filename}")
```
### Pagination and Batch Processing

```python
async with api.login_context(auth_json) as authed:
    user = await authed.get_user("username")

    # Fetch all posts with pagination
    all_posts = []
    offset = 0
    limit = 50

    while True:
        posts = await user.get_posts(limit=limit, offset=offset)
        if not posts:
            break
        all_posts.extend(posts)
        offset += limit
        print(f"Fetched {len(all_posts)} posts so far...")

    print(f"Total posts: {len(all_posts)}")
```
### Concurrent Operations

```python
import asyncio

async with api.login_context(auth_json) as authed:
    # Get multiple users concurrently
    usernames = ["user1", "user2", "user3"]
    tasks = [authed.get_user(username) for username in usernames]
    users = await asyncio.gather(*tasks, return_exceptions=True)

    for username, user in zip(usernames, users):
        if isinstance(user, Exception):
            print(f"Error fetching {username}: {user}")
        else:
            print(f"Fetched: {user.username} - {user.posts_count} posts")
```
For more examples and patterns, see the Working with APIs Guide.
## 🛠️ Development

### Setting Up Development Environment

```bash
# Clone the repository
git clone https://github.com/UltimaHoarder/UltimaScraperAPI.git
cd UltimaScraperAPI

# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate  # Linux/macOS
venv\Scripts\activate     # Windows

# Install in development mode with dev dependencies
uv pip install -e ".[dev]"

# Or with pip
pip install -e ".[dev]"
```
### Running Tests

```bash
# Run all tests
pytest

# Run with a coverage report
pytest --cov=ultima_scraper_api --cov-report=html

# Run a specific test file
pytest tests/test_onlyfans.py

# Run with verbose output
pytest -v
```
### Code Quality

```bash
# Format code with Black
black ultima_scraper_api/ tests/

# Check formatting without changing files
black --check ultima_scraper_api/

# Type checking (if using mypy)
mypy ultima_scraper_api/
```
### Building Documentation

```bash
# Serve documentation locally with live reload
uv run mkdocs serve -a localhost:8001
# Open http://localhost:8001 in your browser

# Build the static documentation site
uv run mkdocs build --clean

# Deploy to GitHub Pages
uv run mkdocs gh-deploy
```
### Using Nox for Automation

```bash
# Run all sessions (tests, linting, docs)
nox

# Run a specific session
nox -s tests
nox -s lint
nox -s docs
```
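The session names above imply a `noxfile.py` roughly like the following. This is an illustrative sketch of how such sessions are typically defined with nox, not the project's actual noxfile:

```python
import nox


@nox.session
def tests(session: nox.Session) -> None:
    # Install the package with dev extras, then run the suite
    session.install("-e", ".[dev]")
    session.run("pytest")


@nox.session
def lint(session: nox.Session) -> None:
    session.install("black")
    session.run("black", "--check", "ultima_scraper_api/")


@nox.session
def docs(session: nox.Session) -> None:
    session.install("mkdocs-material")
    session.run("mkdocs", "build", "--clean")
```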
For detailed contribution guidelines, see the Contributing Guide.
## 🤝 Contributing
Contributions are welcome! Please read the Contributing Guide for details on:
- Code of conduct
- Development setup
- Submitting pull requests
- Writing tests
- Documentation standards
## 📦 Project Structure

```
UltimaScraperAPI/
├── ultima_scraper_api/     # Main package
│   ├── apis/               # Platform-specific APIs
│   │   ├── onlyfans/       # OnlyFans implementation
│   │   ├── fansly/         # Fansly implementation (WIP)
│   │   └── loyalfans/      # LoyalFans implementation (WIP)
│   ├── classes/            # Utility classes
│   ├── helpers/            # Helper functions
│   ├── managers/           # Session/scrape managers
│   └── models/             # Data models
├── documentation/          # MkDocs documentation
├── tests/                  # Test files
├── typings/                # Type stubs
└── pyproject.toml          # Project configuration
```
## 📄 License

This project is licensed under the GNU Affero General Public License v3.0 - see the LICENSE file for details.

### What This Means

- ✅ You can use this commercially
- ✅ You can modify the code
- ✅ You can distribute it
- ⚠️ You must disclose source code when distributing
- ⚠️ You must use the same license for derivatives
- ⚠️ Network use requires source code disclosure
## ⚠️ Disclaimer
This software is provided for educational and research purposes. Users are responsible for complying with the terms of service of any platforms they interact with using this software.
## 🙏 Acknowledgments
Built with industry-leading open source libraries:
- aiohttp - Async HTTP client/server framework
- Pydantic - Data validation using Python type hints
- httpx - Modern HTTP client
- Redis - In-memory data structure store for caching
- websockets - WebSocket client and server
- MkDocs Material - Beautiful documentation site generator
- pytest - Testing framework
- Black - Code formatter
Special thanks to all contributors and the open source community!
## 🆘 Support & Community

- 📚 Documentation - Comprehensive guides and API reference
- 🐛 Issue Tracker - Report bugs or request features
- 💬 Discussions - Ask questions and share ideas
- 📦 Releases - Version history and changelogs
### Getting Help

If you encounter issues:

1. Check the documentation first
2. Search existing issues for similar problems
3. Create a new issue with a detailed description and minimal reproduction example
4. Join the discussions for community support
Made with ❤️ by UltimaHoarder
## Download files

### Source Distribution
Details for the file ultima_scraper_api-3.0.0b3.tar.gz.

File metadata:

- Download URL: ultima_scraper_api-3.0.0b3.tar.gz
- Upload date:
- Size: 397.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.9.3

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `e338e61b54ce4b069431477d4123179a066ffb90a3c8e98414a6af81768b429d` |
| MD5 | `770b71b8ed83aaefd14ec0f74b5a45c3` |
| BLAKE2b-256 | `7d0a9806d36d9cb88865dfd58bbc24e0abb68de812fcc165b50ca4ab0ad49aef` |
### Built Distribution

Details for the file ultima_scraper_api-3.0.0b3-py3-none-any.whl.

File metadata:

- Download URL: ultima_scraper_api-3.0.0b3-py3-none-any.whl
- Upload date:
- Size: 140.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.9.3

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `827f8fb0bcbb0e0e2f9d8daefba6ec04ebc412f849c4420603ed79fd5974d310` |
| MD5 | `8b57fd6ebf7024da1edc8ce43e75f402` |
| BLAKE2b-256 | `70038f48b1aaabb94fc4f4ed714a446544972a4f02258d4aefc2abbe9b835fa2` |