

Image Crawler Utils

A Customizable Multi-station Image Crawler Structure

English | 简体中文


About

Click Here for Documentation

A rather customizable image crawler structure, designed to download images together with their information using multiple threads. The GIF below depicts a sample run:

[GIF: a sample crawler run]

Besides, several classes and functions are provided to help you build a custom image crawler of your own.

Please follow the rules of robots.txt, and use a low thread count with long delay times when crawling images. Frequent requests and heavy download traffic may result in IP addresses being banned or accounts being suspended.
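As an illustration of this advice, the standard library's urllib.robotparser can check a site's robots.txt rules and crawl delay before any request is made. This is a stdlib-only sketch, not part of image-crawler-utils; the robots.txt content and URLs below are made-up placeholders:

```python
# Illustrative only: checking robots.txt rules with the standard library.
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt; real crawlers should fetch the site's actual file.
robots_txt = """User-agent: *
Disallow: /private/
Crawl-delay: 5
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Paths under /private/ are disallowed for every user agent.
print(parser.can_fetch("*", "https://example.com/private/image.jpg"))  # False
print(parser.can_fetch("*", "https://example.com/posts/1.jpg"))        # True
# The site asks crawlers to wait 5 seconds between requests.
print(parser.crawl_delay("*"))                                         # 5
```

Respecting the reported crawl delay when configuring thread count and delay time keeps a crawler within a site's stated limits.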

Installing

It is recommended to install the package with pip:

pip install image-crawler-utils
  • Requires Python >= 3.9.

Attention!

  • nodriver is used to parse information from certain websites. It is recommended to install the latest version of Google Chrome first to ensure the crawler runs correctly.

Features

  • Currently supported websites:
    • Danbooru - features supported:
      • Downloading images searched by tags
    • yande.re / konachan.com / konachan.net - features supported:
      • Downloading images searched by tags
    • Gelbooru - features supported:
      • Downloading images searched by tags
    • Safebooru - features supported:
      • Downloading images searched by tags
    • Pixiv - features supported:
      • Downloading images searched by tags
      • Downloading images uploaded by a certain member
    • Twitter / X - features supported:
      • Downloading images from searching result
      • Downloading images uploaded by a certain user
  • Logging the crawling process to the console and, optionally, to a file.
  • Displaying crawler progress with rich bars and logging messages (Jupyter Notebook is supported).
  • Saving and loading the settings and configs of a crawler.
  • Saving and loading image information for future downloading.
  • Acquiring and managing cookies for some websites, including saving and loading them.
  • Several classes and functions for designing custom image crawlers.
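For a sense of the multi-threaded, rate-limited download pattern such a crawler is built around, here is a minimal stdlib-only sketch. It is not image-crawler-utils code: download_one and FAKE_URLS are hypothetical placeholders that simulate downloading with a polite delay.

```python
# A minimal sketch of multi-threaded downloading with a per-request delay.
import time
from concurrent.futures import ThreadPoolExecutor

FAKE_URLS = [f"https://example.com/image_{i}.jpg" for i in range(6)]
DELAY_SECONDS = 0.01  # use several seconds against real websites
MAX_THREADS = 2       # keep thread counts low to avoid bans

def download_one(url: str) -> str:
    # A real implementation would issue an HTTP request and write the file;
    # here we only sleep to stand in for the network round trip.
    time.sleep(DELAY_SECONDS)
    return f"saved {url.rsplit('/', 1)[-1]}"

# map() preserves input order, so results line up with FAKE_URLS.
with ThreadPoolExecutor(max_workers=MAX_THREADS) as pool:
    results = list(pool.map(download_one, FAKE_URLS))

print(results)
```

The library's Downloader class wraps this idea with logging, progress bars, and configurable thread counts and delays via CrawlerSettings.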

Example

Running this example downloads the first 20 images from Danbooru with the keyword / tag kuon_(utawarerumono) and rating:general into a "Danbooru" folder. Image information will be stored in image_info_list.json at the same path as your program. Note that you may need to set the proxies manually.

from image_crawler_utils import CrawlerSettings, Downloader, save_image_infos
from image_crawler_utils.stations.booru import DanbooruKeywordParser

#======================================================================#
# This part prepares the settings for crawling and downloading images. #
#======================================================================#

crawler_settings = CrawlerSettings(
    image_num=20,
    # If you do not use system proxies, remove '#' and set the proxies manually.
    # proxies={"https": "socks5://127.0.0.1:7890"},
)

#==================================================================#
# This part gets the URLs and information of images from Danbooru. #
#==================================================================#

parser = DanbooruKeywordParser(
    crawler_settings=crawler_settings,
    standard_keyword_string="kuon_(utawarerumono) AND rating:general",
)
image_info_list = parser.run()
# The information will be saved at image_info_list.json
save_image_infos(image_info_list, "image_info_list")

#===================================================================#
# This part downloads the images according to the image information #
# just collected in the image_info_list.                            #
#===================================================================#

downloader = Downloader(
    store_path='Danbooru',
    image_info_list=image_info_list,
    crawler_settings=crawler_settings,
)
downloader.run()

Download files

Download the file for your platform.

Source Distribution

image_crawler_utils-0.4.5.tar.gz (86.0 kB)

Uploaded Source

Built Distribution


image_crawler_utils-0.4.5-py3-none-any.whl (115.5 kB)

Uploaded Python 3

File details

Details for the file image_crawler_utils-0.4.5.tar.gz.

File metadata

  • Download URL: image_crawler_utils-0.4.5.tar.gz
  • Upload date:
  • Size: 86.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.3

File hashes

Hashes for image_crawler_utils-0.4.5.tar.gz
  • SHA256: a792e1e470d65085f8b7a4e9527a4e71ebb246c5fea8ac4db291447fecd64ce9
  • MD5: 1de2b6026b6831f0b442404416232d97
  • BLAKE2b-256: 945eb33a9472b1163a0804586673aa77d2ae140a525b4bedddbad99ee7f98e7f


File details

Details for the file image_crawler_utils-0.4.5-py3-none-any.whl.

File metadata

File hashes

Hashes for image_crawler_utils-0.4.5-py3-none-any.whl
  • SHA256: d5f691800501a8b0509d64f36155d86cb6b8f592367809089578ec3b238f2d2c
  • MD5: 11c1008a95dda37103debcaf5dae3e7b
  • BLAKE2b-256: 682676d0b7fe6932e7bd2975244b0c7640cacc0a6335d8910420bf1c1100528a

