Project description
Image Crawler Utils
A Customizable Multi-station Image Crawler Structure
English | 简体中文
About
Click Here for Documentation
A rather customizable image crawler structure, designed to download images along with their information using multithreading. (A GIF in the original README shows a sample run.)
In addition, several classes and functions are provided to help you build a custom image crawler of your own.
Please follow the rules of robots.txt, and use a low thread count with long delays when crawling images. Frequent requests and heavy download traffic may get your IP address banned or your account suspended.
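As a quick, library-independent way to follow this advice, Python's standard urllib.robotparser can be used to check a site's robots.txt before crawling; the site and path below are only placeholders.

from urllib.robotparser import RobotFileParser

# Placeholder site and path; substitute the website you intend to crawl.
robots = RobotFileParser("https://example.com/robots.txt")
robots.read()

if robots.can_fetch("*", "https://example.com/posts?tags=example"):
    print("robots.txt allows fetching this path.")
else:
    print("robots.txt disallows this path; do not crawl it.")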
Installing
It is recommended to install it with:
pip install image-crawler-utils
- Requires Python >= 3.9.
Attention!
- nodriver is used to parse information from certain websites. It is suggested that you install the latest version of Google Chrome first to ensure the crawler runs correctly.
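Since the crawler drives Chrome through nodriver for these sites, it can be worth verifying that nodriver can launch your Chrome installation before starting a long crawl. The snippet below is a rough sketch based on nodriver's quick-start example; the target URL is arbitrary and the exact calls should be checked against the nodriver documentation for your installed version.

import nodriver as uc

async def main():
    # Launch a Chrome instance controlled by nodriver and open a test page.
    browser = await uc.start()
    await browser.get("https://example.com")
    print("Chrome launched and page opened successfully.")
    browser.stop()

if __name__ == "__main__":
    # nodriver ships its own event loop helper.
    uc.loop().run_until_complete(main())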
Features
- Currently supported websites:
  - Danbooru - features supported:
    - Downloading images searched by tags
  - yande.re / konachan.com / konachan.net - features supported:
    - Downloading images searched by tags
  - Gelbooru - features supported:
    - Downloading images searched by tags
  - Safebooru - features supported:
    - Downloading images searched by tags
  - Pixiv - features supported:
    - Downloading images searched by tags
    - Downloading images uploaded by a certain member
  - Twitter / X - features supported:
    - Downloading images from search results
    - Downloading images uploaded by a certain user
- Logging of the crawling process to the console and, optionally, to a file.
- Using rich progress bars and logging messages to show the progress of the crawler (Jupyter Notebook support is included).
- Saving and loading the settings and configs of a crawler.
- Saving and loading the information of images for future downloading.
- Acquiring and managing cookies of some websites, including saving and loading them.
- Several classes and functions for designing custom image crawlers.
Example
Running this example will download the first 20 images from Danbooru with the keyword / tag kuon_(utawarerumono) and rating:general into the "Danbooru" folder. Information about the images will be stored in image_info_list.json at the same path as your program. Note that the proxies may need to be changed manually.
from image_crawler_utils import CrawlerSettings, Downloader, save_image_infos
from image_crawler_utils.stations.booru import DanbooruKeywordParser
#======================================================================#
# This part prepares the settings for crawling and downloading images. #
#======================================================================#
crawler_settings = CrawlerSettings(
    image_num=20,
    # If you do not use system proxies, remove '#' and set the proxies manually.
    # proxies={"https": "socks5://127.0.0.1:7890"},
)
#==================================================================#
# This part gets the URLs and information of images from Danbooru. #
#==================================================================#
parser = DanbooruKeywordParser(
    crawler_settings=crawler_settings,
    standard_keyword_string="kuon_(utawarerumono) AND rating:general",
)
image_info_list = parser.run()
# The information will be saved at image_info_list.json
save_image_infos(image_info_list, "image_info_list")
#===================================================================#
# This part downloads the images according to the image information #
# just collected in the image_info_list. #
#===================================================================#
downloader = Downloader(
    store_path='Danbooru',
    image_info_list=image_info_list,
    crawler_settings=crawler_settings,
)
downloader.run()
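The saved file can be inspected later without rerunning the parser. The following is a minimal sketch that assumes image_info_list.json is a plain JSON file in the working directory; the library itself may offer a dedicated loading helper, which is not shown here.

import json

# Assumes save_image_infos wrote image_info_list.json into the working directory;
# adjust the path if your program saved it elsewhere.
with open("image_info_list.json", "r", encoding="utf-8") as f:
    saved_infos = json.load(f)

print(f"Loaded information for {len(saved_infos)} images.")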
Download files
File details
Details for the file image_crawler_utils-0.4.1.tar.gz.
File metadata
- Download URL: image_crawler_utils-0.4.1.tar.gz
- Upload date:
- Size: 44.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 8535c82f1746a3c12950b311a32eaf714b6894f64a7d2202d865b1fa6b2fe163 |
| MD5 | fb212ae074cc421e3070f7e2f521ed49 |
| BLAKE2b-256 | 39bc2a41f6fb2254c7d0e0071bd81fad0ac5173ac1d46aaffebb21fda87ed4f8 |
File details
Details for the file image_crawler_utils-0.4.1-py3-none-any.whl.
File metadata
- Download URL: image_crawler_utils-0.4.1-py3-none-any.whl
- Upload date:
- Size: 50.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c2222e6b2197e2a229bd5ce635933dc53f09c0bd4ae933db41b4d568009904d0 |
| MD5 | e665595ca6ba4b520275afd28ee7b1d3 |
| BLAKE2b-256 | fd53699b857b91f999aff225a7cf837761bfd4a2022a795adbc7f68877d201fa |