
Image Crawler Utils

A Customizable Multi-station Image Crawler Structure

English | 简体中文


About

Click Here for Documentation

A highly customizable image crawler structure, designed to download images together with their information using multiple threads. This GIF shows a sample run:

In addition, several classes and functions are provided to help you build a custom image crawler of your own.

Please follow the rules of robots.txt, and use a small number of threads with long delay times when crawling images. Frequent requests and heavy download traffic may get your IP address banned or your account suspended.
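
The advice above can be sketched in plain Python. The helper below is not part of image_crawler_utils; it only illustrates checking robots.txt and spacing out requests with the standard library (the robots.txt content and URLs are made up):

```python
import time
from urllib.robotparser import RobotFileParser

# Hypothetical helper, not an image_crawler_utils API: combines a robots.txt
# check with a minimum delay between consecutive requests.
def make_polite_fetch_check(robots_txt: str, min_delay: float = 1.0):
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    last_request = [0.0]  # mutable cell so the closure can update it

    def allowed(url: str, user_agent: str = "*") -> bool:
        # Sleep so consecutive calls are at least min_delay seconds apart.
        wait = last_request[0] + min_delay - time.monotonic()
        if wait > 0:
            time.sleep(wait)
        last_request[0] = time.monotonic()
        return parser.can_fetch(user_agent, url)

    return allowed

allowed = make_polite_fetch_check("User-agent: *\nDisallow: /private/\n")
allowed("https://example.com/images/1.jpg")   # True
allowed("https://example.com/private/2.jpg")  # False
```

Raising `min_delay` and lowering the thread count are the two knobs that keep a crawler under a site's rate limits.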

Installing

Install it with pip:

pip install image-crawler-utils
  • Requires Python >= 3.9.

Attention

  • nodriver is used to parse information from certain websites. Install the latest version of Google Chrome first to ensure the crawler runs correctly.

Features

  • Currently supported websites:
    • Danbooru - features supported:
      • Downloading images searched by tags
    • yande.re / konachan.com / konachan.net - features supported:
      • Downloading images searched by tags
    • Gelbooru - features supported:
      • Downloading images searched by tags
    • Safebooru - features supported:
      • Downloading images searched by tags
    • Pixiv - features supported:
      • Downloading images searched by tags
      • Downloading images uploaded by a certain member
    • Twitter / X - features supported:
      • Downloading images from search results
      • Downloading images uploaded by a certain user
  • Logging the crawling process to the console and, optionally, to a file.
  • Rich progress bars and logging messages that indicate crawler progress (Jupyter Notebook is supported).
  • Saving and loading crawler settings and configs.
  • Saving and loading image information for later downloading.
  • Acquiring and managing cookies for some websites, including saving and loading them.
  • Several classes and functions for designing custom image crawlers.
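
The multi-threaded download pattern that these features build on can be sketched with the standard library. `fetch_image` and `download_all` below are hypothetical stand-ins, not image_crawler_utils APIs:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Placeholder for an HTTP request; a real crawler would fetch url here.
def fetch_image(url: str) -> bytes:
    return f"bytes of {url}".encode()

# Generic sketch of downloading many URLs on a small worker pool.
def download_all(urls, max_workers: int = 4) -> dict:
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(fetch_image, u): u for u in urls}
        for fut in as_completed(futures):
            url = futures[fut]
            try:
                results[url] = fut.result()
            except Exception as exc:  # one failure should not stop the rest
                results[url] = exc
    return results

downloaded = download_all(["https://example.com/a.jpg",
                           "https://example.com/b.jpg"])
```

Keeping `max_workers` small is the practical counterpart of the robots.txt advice above.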

Example

Running this example downloads the first 20 images from Danbooru matching the keyword / tag kuon_(utawarerumono) with rating:general into a "Danbooru" folder. The image information will be stored in image_info_list.json at the same path as your program. Note that the proxies may need to be set manually.

from image_crawler_utils import CrawlerSettings, Downloader, save_image_infos
from image_crawler_utils.stations.booru import DanbooruKeywordParser

#======================================================================#
# This part prepares the settings for crawling and downloading images. #
#======================================================================#

crawler_settings = CrawlerSettings(
    image_num=20,
    # If you do not use system proxies, remove '#' and set the proxies manually.
    # proxies={"https": "socks5://127.0.0.1:7890"},
)

#==================================================================#
# This part gets the URLs and information of images from Danbooru. #
#==================================================================#

parser = DanbooruKeywordParser(
    crawler_settings=crawler_settings,
    standard_keyword_string="kuon_(utawarerumono) AND rating:general",
)
image_info_list = parser.run()
# The information will be saved at image_info_list.json
save_image_infos(image_info_list, "image_info_list")

#===================================================================#
# This part downloads the images according to the image information #
# just collected in the image_info_list.                            #
#===================================================================#

downloader = Downloader(
    store_path='Danbooru',
    image_info_list=image_info_list,
    crawler_settings=crawler_settings,
)
downloader.run()
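
For the "collect now, download later" workflow, save_image_infos writes the collected information to image_info_list.json. The sketch below mimics that round trip with plain json and made-up fields; the library's actual on-disk format may differ:

```python
import json
from pathlib import Path

# Made-up image-info records purely for illustration; the real fields
# produced by image_crawler_utils are not specified here.
image_infos = [
    {"url": "https://example.com/1.jpg", "name": "1.jpg",
     "tags": ["kuon_(utawarerumono)"]},
    {"url": "https://example.com/2.jpg", "name": "2.jpg",
     "tags": ["rating:general"]},
]

# Save the collected information for a later session...
path = Path("image_info_list.json")
path.write_text(json.dumps(image_infos, indent=2), encoding="utf-8")

# ...then reload it and hand it to a downloader in that later session.
reloaded = json.loads(path.read_text(encoding="utf-8"))
```

This separation lets you review or filter the collected information before spending any download traffic.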

