Project description

cheesechaser


Swiftly get tons of images from indexed tars on Hugging Face

Installation

pip install cheesechaser

How this library works

This library is built on top of mirror datasets hosted on Hugging Face.

In a mirror dataset repository such as deepghs/gelbooru_full (a full Gelbooru mirror), each data packet consists of a tar archive and a corresponding JSON index file. The index records detailed information about every file in the archive, including its size, offset, and fingerprint.

The files in such a repository are organized in a fixed pattern derived from their IDs. For example, the ID 114514 taken modulo 10000 is 4514, so the corresponding file is stored in images/4/0514.tar.
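The layout rule is easy to reproduce by hand. Below is a minimal sketch of it, inferred from the images/4/0514.tar example above; the helper name is hypothetical and not part of cheesechaser's API.

# Sketch of the ID-to-archive layout described above (inferred from the
# images/4/0514.tar example; verify against the actual repository layout).
def tar_path_for_id(file_id: int) -> str:
    bucket = file_id % 10000   # 114514 -> 4514
    group = bucket // 1000     # leading digit -> directory "4"
    name = bucket % 1000       # remaining digits -> 514
    return f"images/{group}/{name:04d}.tar"

print(tar_path_for_id(114514))  # -> images/4/0514.tar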

Utilizing the quick download feature from hfutils.index, users can instantly access individual files. Because downloads are served through Hugging Face's LFS service rather than the original website or an image CDN, there is no risk of IP or account blocking; the only limits on download speed are your network bandwidth and disk read/write speed.
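For illustration, fetching a single file straight out of an indexed tar could look like the sketch below. It uses hf_tar_file_download from hfutils.index; the repository, archive path, and inner filename here are assumptions made for the example, so check the hfutils documentation for exact usage.

# Hedged sketch: download one file directly from an indexed tar on Hugging Face.
# The archive path and the filename inside the tar are assumed values.
from hfutils.index import hf_tar_file_download

hf_tar_file_download(
    repo_id='deepghs/gelbooru_full',      # mirror dataset repository
    repo_type='dataset',
    archive_in_repo='images/4/0514.tar',  # tar derived from the ID as above
    file_in_archive='114514.jpg',         # hypothetical filename inside the tar
    local_file='114514.jpg',              # local destination
)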

This efficient system ensures seamless and reliable access to the dataset without any restrictions.

Batch Download Images

  • Danbooru
from cheesechaser.datapool import DanbooruNewestDataPool

pool = DanbooruNewestDataPool()

# download danbooru #2010000-2010300, to directory /data/exp2
pool.batch_download_to_directory(
    resource_ids=range(2010000, 2010300),
    dst_dir='/data/exp2',
    max_workers=12,
)
  • Danbooru With Tags Query
from cheesechaser.datapool import DanbooruNewestDataPool
from cheesechaser.query import DanbooruIdQuery

pool = DanbooruNewestDataPool()
my_waifu_ids = DanbooruIdQuery(['surtr_(arknights)', 'solo'])

# download danbooru images with surtr+solo, to directory /data/exp2_surtr
pool.batch_download_to_directory(
    resource_ids=my_waifu_ids,
    dst_dir='/data/exp2_surtr',
    max_workers=12,
)
  • Konachan (gated dataset; you must be granted access first and set the HF_TOKEN environment variable; see the token sketch after the supported-pools list below)
from cheesechaser.datapool import KonachanDataPool

pool = KonachanDataPool()

# download konachan #210000-210300, to directory /data/exp2
pool.batch_download_to_directory(
    resource_ids=range(210000, 210300),
    dst_dir='/data/exp2',
    max_workers=12,
)
  • Civitai (this mirror repository on Hugging Face is currently private; you need an HF token from an authorized account)
from cheesechaser.datapool import CivitaiDataPool

pool = CivitaiDataPool()

# download civitai #7810000-7810300, to directory /data/exp2
# should contain one image and one json metadata file
pool.batch_download_to_directory(
    resource_ids=range(7810000, 7810300),
    dst_dir='/data/exp2',
    max_workers=12,
)

More supported data pools:

  • RealbooruDataPool (Gated Dataset)
  • ThreedbooruDataPool (Gated Dataset)
  • FancapsDataPool (Gated Dataset)
  • BangumiBaseDataPool (Gated Dataset)
  • AnimePicturesDataPool (Gated Dataset)
  • KonachanDataPool (Gated Dataset)
  • YandeDataPool (Gated Dataset)
  • ZerochanDataPool (Gated Dataset)
  • GelbooruDataPool and GelbooruWebpDataPool (Gated Dataset)
  • DanbooruNewestDataPool and DanbooruNewestWebpDataPool
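
All pools marked as gated require an approved access request on the corresponding dataset page plus a Hugging Face token. A minimal sketch of one way to supply the token (the value below is a placeholder; never commit a real token):

# Expose an authorized token via HF_TOKEN before constructing a gated pool.
import os

os.environ['HF_TOKEN'] = 'hf_xxxxxxxx'  # placeholder; use your own token

from cheesechaser.datapool import KonachanDataPool

pool = KonachanDataPool()  # can now read the gated mirror

Exporting HF_TOKEN in your shell before launching Python works just as well.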

Batch Retrieving Images

from itertools import islice

from cheesechaser.datapool import DanbooruNewestDataPool
from cheesechaser.pipe import SimpleImagePipe, PipeItem

pool = DanbooruNewestDataPool()
pipe = SimpleImagePipe(pool)

# select from danbooru 7349990-7359990
ids = range(7349990, 7359990)
with pipe.batch_retrieve(ids) as session:
    # only need 20 images
    for i, item in enumerate(islice(session, 20)):
        item: PipeItem
        print(i, item)
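
As a follow-up, retrieved items can be written to disk. The sketch below assumes each PipeItem exposes the resource id as item.id and a PIL image as item.data; that is an assumption about cheesechaser's pipe objects rather than a documented contract, so verify it locally first.

# Hedged sketch: persist the first 20 retrieved images.
# Assumes item.id is the resource id and item.data is a PIL image.
import os
from itertools import islice

from cheesechaser.datapool import DanbooruNewestDataPool
from cheesechaser.pipe import SimpleImagePipe

pool = DanbooruNewestDataPool()
pipe = SimpleImagePipe(pool)

os.makedirs('/data/exp_retrieve', exist_ok=True)
with pipe.batch_retrieve(range(7349990, 7359990)) as session:
    for item in islice(session, 20):
        item.data.save(os.path.join('/data/exp_retrieve', f'{item.id}.png'))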

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

cheesechaser-0.1.6.tar.gz (42.1 kB)

Uploaded Source

Built Distribution

cheesechaser-0.1.6-py3-none-any.whl (56.5 kB)

Uploaded Python 3

File details

Details for the file cheesechaser-0.1.6.tar.gz.

File metadata

  • Download URL: cheesechaser-0.1.6.tar.gz
  • Upload date:
  • Size: 42.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.10

File hashes

Hashes for cheesechaser-0.1.6.tar.gz

  • SHA256: 491f4c5468d3131e84bae720d391143a7689ce14c8c21658e94c55f1b8b94dff
  • MD5: f360db6b98ac04f2c597e28cc157f058
  • BLAKE2b-256: 9bc3885815a612b0e939299f998af7e22ab6e0f55009534fbd451925e0f6e853

See more details on using hashes here.
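To check a digest yourself, a standard-library sketch (assuming the sdist is in the current directory):

# Verify the downloaded sdist against the SHA256 digest listed above.
import hashlib

EXPECTED = '491f4c5468d3131e84bae720d391143a7689ce14c8c21658e94c55f1b8b94dff'

with open('cheesechaser-0.1.6.tar.gz', 'rb') as f:
    actual = hashlib.sha256(f.read()).hexdigest()

print('OK' if actual == EXPECTED else f'MISMATCH: {actual}')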

File details

Details for the file cheesechaser-0.1.6-py3-none-any.whl.

File metadata

  • Download URL: cheesechaser-0.1.6-py3-none-any.whl
  • Upload date:
  • Size: 56.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.10

File hashes

Hashes for cheesechaser-0.1.6-py3-none-any.whl

  • SHA256: aa9891a824a6c4d23cfe65673ab37316f901ee10f8f78ccc05270547d204ed48
  • MD5: 0a2e413e2c2d94c15b14fd222b040675
  • BLAKE2b-256: 036c474cb52dc6fd0f3054ed275cc4565b0c09e4d5f2aca9b7aa0c10127dbac8

See more details on using hashes here.
