# Cruel

Scrape anything from the web.

Cruel is a Python web-request library that allows you to extract detailed information from any webpage or site.
## Features

- Built-in Beautiful Soup instance on every response
- Powered by ScraperAPI, so you don't have to worry about IP blocking
## Installation

You can install cruel using pip:

```shell
pip install cruel
```
## Usage

Below are examples of how to use cruel to extract data from Fiverr gig pages and user profiles.
### Scrape Example

```python
from cruel import session

# Configure your ScraperAPI key before making requests
session.set_scraper_api_key("XYZ-SCRAPER_API_KEY")

# Replace the URL below with the Fiverr page you want to scrape
response = session.get("https://www.fiverr.com/username/your-gig-slug")

# `response.soup` is a Beautiful Soup instance of the fetched page;
# use it to extract further information.
print(response.soup)
```

Get your ScraperAPI key here.
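Since `response.soup` is a Beautiful Soup instance, the usual `bs4` extraction methods apply to it. A minimal sketch, using made-up sample HTML and selectors (a real Fiverr page's structure will differ):

```python
from bs4 import BeautifulSoup

# Sample HTML standing in for a fetched page; with cruel, you would use
# `response.soup` directly instead of building the soup yourself.
html = """
<html>
  <head><title>My Gig | Fiverr</title></head>
  <body>
    <h1 class="gig-title">I will build a web scraper</h1>
    <span class="price">$50</span>
  </body>
</html>
"""
soup = BeautifulSoup(html, "html.parser")

page_title = soup.title.get_text()                          # <title> text
gig_title = soup.find("h1", class_="gig-title").get_text()  # first matching tag
price = soup.select_one("span.price").get_text()            # CSS selector

print(page_title, gig_title, price)
```

The same `find`, `find_all`, and `select_one` calls work on any page the session fetches.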
## Project Structure

The cruel package is organized into several modules to enhance code readability and maintainability:

- `cruel/__init__.py`: Exports `session`
- `cruel/utils/req.py`: Extends `requests` for Fiverr scraping
- `cruel/utils/scrape_utils.py`: Utilities for scraping
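To make the "extends `requests`" idea concrete, here is a hypothetical sketch of how a ScraperAPI-backed session could be built on top of `requests.Session`. The names (`wrap_with_scraperapi`, `ScraperSession`) and the endpoint handling are illustrative assumptions, not cruel's actual internals:

```python
import requests
from bs4 import BeautifulSoup

# Assumed ScraperAPI proxy endpoint for this sketch
SCRAPERAPI_ENDPOINT = "https://api.scraperapi.com/"

def wrap_with_scraperapi(url, params, api_key):
    """Rewrite a request so it is proxied through ScraperAPI.

    The original URL moves into the query string and the request is
    sent to the ScraperAPI endpoint instead.
    """
    proxied = dict(params or {})
    proxied.update({"api_key": api_key, "url": url})
    return SCRAPERAPI_ENDPOINT, proxied

class ScraperSession(requests.Session):
    """A requests.Session that proxies via ScraperAPI and attaches a soup."""

    def __init__(self, api_key=None):
        super().__init__()
        self.api_key = api_key

    def request(self, method, url, **kwargs):
        if self.api_key:
            url, kwargs["params"] = wrap_with_scraperapi(
                url, kwargs.get("params"), self.api_key
            )
        response = super().request(method, url, **kwargs)
        # Attach a parsed tree so callers can use `response.soup` directly.
        response.soup = BeautifulSoup(response.text, "html.parser")
        return response
```

Routing every request through a rotating-proxy service like ScraperAPI is what lets the library sidestep per-IP rate limits and blocks.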
## License

## Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

## Author

Check more of my projects.
## Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

### Source Distribution

cruel-0.0.12.tar.gz (15.3 kB)
## File details

Details for the file cruel-0.0.12.tar.gz.

### File metadata

- Download URL: cruel-0.0.12.tar.gz
- Upload date:
- Size: 15.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.12.2
### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 89776657e795b2620fcc0a8791204e51fe56ef9b1764742ea7ee06b5c067f191 |
| MD5 | 990b9a45634eafc0a7152269f5cdabaa |
| BLAKE2b-256 | d49d0289449c1434f9789f506e791c982dd32f799b6700d29e699ca6d47e1187 |
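After downloading the source distribution, you can check it against the published digests with Python's standard `hashlib`, for example:

```python
import hashlib

def sha256_of_file(path, chunk_size=8192):
    """Stream a file through SHA-256 without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the result against the SHA256 digest in the table above:
#   sha256_of_file("cruel-0.0.12.tar.gz")
```

The chunked read keeps memory use constant regardless of archive size; swap in `hashlib.md5()` or `hashlib.blake2b()` to check the other digests.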