Web Novel Scraper CLI
A Python tool for scraping web novels from various sources and saving them in more readable formats such as EPUB.
🔑 Why Use Web Novel Scraper?
- Read Offline: Download your favorite novels and read them anywhere, even without an internet connection
- Device Friendly: EPUB format optimized for e-readers and mobile devices
- Resource Efficient: Smart caching system prevents unnecessary downloads
- Server Friendly: Helps avoid accidentally overloading the source sites
- Simple Interface: Basic and direct commands for a hassle-free experience
- Automatic Organization: Keep your novels organized and easy to find
🌟 Main Features
- Downloads and converts web novels to EPUB format
- Smart caching: downloads chapters only once
- Simple and straightforward command-line interface
- Support for multiple web novel sites
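The "smart caching" feature means each chapter is fetched from the network at most once. A minimal, hypothetical sketch of that download-once pattern (the function names and cache layout are illustrative, not the tool's actual internals):

```python
from pathlib import Path

def get_chapter(url: str, cache_dir: Path, fetch) -> str:
    """Return chapter HTML, downloading only when it is not already cached."""
    cache_dir.mkdir(parents=True, exist_ok=True)
    # Derive a stable filename from the URL so repeat calls hit the cache.
    cache_file = cache_dir / (url.rsplit("/", 1)[-1] + ".html")
    if cache_file.exists():
        return cache_file.read_text(encoding="utf-8")
    html = fetch(url)  # the network call happens only on a cache miss
    cache_file.write_text(html, encoding="utf-8")
    return html
```

Calling `get_chapter` twice with the same URL serves the second request from disk, so re-running a conversion never re-downloads chapters.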
🚀 Quick Tutorial
1. Installation
pip install web-novel-scraper
2. Download Your First Novel
- Create a new novel:
web-novel-scraper create-novel -t "My First Novel" --toc-main-url "https://novelbin.me/novel/my-novel/toc"
- Convert to EPUB:
web-novel-scraper save-novel-to-epub -t "My First Novel" --sync-toc
- Find your files:
web-novel-scraper show-novel-dir -t "My First Novel"
3. Additional Options
- Add metadata:
web-novel-scraper set-metadata -t "My First Novel" --author "Author" --language "en"
- Add cover image:
web-novel-scraper set-cover-image -t "My First Novel" --cover "path/to/image.jpg"
- View novel information:
web-novel-scraper show-novel-info -t "My First Novel"
📱 Supported Sites
- Novelbin
- Novelhi
- Novellive
- Royalroad
- GenesisStudio
- HostedNovel
- ScribbleHub
- NovelCool
- FreeWebNovel
- Foxaholic
- Fanmtl
- Pandamtl
- MtlNovels
📖 Full Documentation
For a detailed guide, advanced use cases, and complete command reference, visit: https://web-novel-scraper.readthedocs.io/stable/
📝 Responsible Usage Note
Please use this tool responsibly and respect the terms of service and rate limits of the web novel sites.
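Respecting rate limits generally comes down to client-side throttling between requests. A minimal, hypothetical rate limiter (not the tool's actual implementation) could look like:

```python
import time

class RateLimiter:
    """Enforce a minimum delay between successive requests."""

    def __init__(self, min_interval: float):
        self.min_interval = min_interval
        self._last = None  # monotonic timestamp of the previous request

    def wait(self) -> None:
        """Block until at least min_interval has passed since the last call."""
        now = time.monotonic()
        if self._last is not None:
            remaining = self.min_interval - (now - self._last)
            if remaining > 0:
                time.sleep(remaining)
        self._last = time.monotonic()
```

A scraper would call `limiter.wait()` before each HTTP request, spacing downloads out so the source site is never hammered.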
Download files
Download the file for your platform.
Source Distribution: web_novel_scraper-2.1.12.tar.gz
Built Distribution: web_novel_scraper-2.1.12-py3-none-any.whl
File details
Details for the file web_novel_scraper-2.1.12.tar.gz.
File metadata
- Download URL: web_novel_scraper-2.1.12.tar.gz
- Size: 42.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 425b683d28dac7fc1dffbf3b03ce5e5a30bf8069554a98433600855a0573e670 |
| MD5 | cae17ad667be89e56651518f1c9cc971 |
| BLAKE2b-256 | 3f368ecafec881128c444b793dfe95960a74bc6235a2799fd1f5c273a4a03fdf |
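To confirm that a downloaded archive matches a published SHA256 digest, you can hash it locally with Python's standard library; a small sketch (the file path shown is only an example):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 8192) -> str:
    """Compute the SHA256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example: sha256_of_file("web_novel_scraper-2.1.12.tar.gz") should match
# the SHA256 value published for that file.
```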
Provenance
The following attestation bundles were made for web_novel_scraper-2.1.12.tar.gz:
- Publisher: publish.yaml on ImagineBrkr/web-novel-scraper
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: web_novel_scraper-2.1.12.tar.gz
- Subject digest: 425b683d28dac7fc1dffbf3b03ce5e5a30bf8069554a98433600855a0573e670
- Sigstore transparency entry: 294725110
- Permalink: ImagineBrkr/web-novel-scraper@24df7e741172e23d1897ebd700f3b3c0c691945b
- Branch / Tag: refs/heads/main
- Owner: https://github.com/ImagineBrkr
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yaml@24df7e741172e23d1897ebd700f3b3c0c691945b
- Trigger Event: workflow_dispatch
File details
Details for the file web_novel_scraper-2.1.12-py3-none-any.whl.
File metadata
- Download URL: web_novel_scraper-2.1.12-py3-none-any.whl
- Size: 40.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | d1a8c436cd8478ee7ef8c1304fa7f697eca8c00f82a09498aa2894681ddf49f6 |
| MD5 | ff5725c807040b6871dbe87886ba07f1 |
| BLAKE2b-256 | 52f8e62586870b7957ca4c736ac84cc765aa9d937b1ae8c0a990926a118ff4fe |
Provenance
The following attestation bundles were made for web_novel_scraper-2.1.12-py3-none-any.whl:
- Publisher: publish.yaml on ImagineBrkr/web-novel-scraper
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: web_novel_scraper-2.1.12-py3-none-any.whl
- Subject digest: d1a8c436cd8478ee7ef8c1304fa7f697eca8c00f82a09498aa2894681ddf49f6
- Sigstore transparency entry: 294725123
- Permalink: ImagineBrkr/web-novel-scraper@24df7e741172e23d1897ebd700f3b3c0c691945b
- Branch / Tag: refs/heads/main
- Owner: https://github.com/ImagineBrkr
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yaml@24df7e741172e23d1897ebd700f3b3c0c691945b
- Trigger Event: workflow_dispatch