cheesechaser
Swiftly get tons of images from indexed tars on Huggingface
Installation
pip install cheesechaser
How this library works
This library is built on top of mirror datasets hosted on Hugging Face.
In a Gelbooru mirror dataset repository such as deepghs/gelbooru_full, each data packet consists of a tar archive and a corresponding JSON index file. The JSON index records detailed information about every file inside the tar archive, including its size, offset, and fingerprint.
The files in these dataset repositories are organized according to a fixed pattern based on their IDs. For example, a file with the ID 114514 has a modulus of 4514 when divided by 10000, so it is stored in images/4/0514.tar.
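Based on that example, the sharding rule appears to be: the leading digit of the modulus selects the subdirectory, and the remaining digits, zero-padded to four places, name the tar archive. A minimal sketch of that rule (inferred from the example above; individual repositories may shard differently):

def archive_path(file_id: int) -> str:
    # the last four digits of the id determine the shard
    mod = file_id % 10000
    # first digit -> subdirectory; remaining three digits -> tar name,
    # zero-padded back to four places
    return f'images/{mod // 1000}/{mod % 1000:04d}.tar'

assert archive_path(114514) == 'images/4/0514.tar'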
Utilizing the quick download feature from hfutils.index, users can instantly access individual files. Since downloads are served through Hugging Face's LFS service rather than the original website or an image CDN, there is no risk of IP or account blocking; the only limits on download speed are your network bandwidth and disk read/write speeds. This design provides seamless, reliable access to the datasets.
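For instance, a single file can be pulled straight out of a tar archive with hfutils' index-based downloader. A minimal sketch, assuming hfutils' hf_tar_file_download API (the file names below are illustrative; check the hfutils documentation for the exact signature):

from hfutils.index import hf_tar_file_download

# fetch one file out of a tar archive, seeking directly to the offset
# recorded in the JSON index (names below are illustrative)
hf_tar_file_download(
    repo_id='deepghs/gelbooru_full',
    archive_in_repo='images/4/0514.tar',
    file_in_archive='114514.jpg',  # hypothetical file name inside the tar
    local_file='/data/114514.jpg',
)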
Batch Download Images
- Danbooru
from cheesechaser.datapool import DanbooruNewestDataPool

pool = DanbooruNewestDataPool()

# download danbooru #2010000-2010300, to directory /data/exp2
pool.batch_download_to_directory(
    resource_ids=range(2010000, 2010300),
    dst_dir='/data/exp2',
    max_workers=12,
)
- Danbooru With Tags Query
from cheesechaser.datapool import DanbooruNewestDataPool
from cheesechaser.query import DanbooruIdQuery

pool = DanbooruNewestDataPool()
my_waifu_ids = DanbooruIdQuery(['surtr_(arknights)', 'solo'])

# download danbooru images with surtr+solo, to directory /data/exp2_surtr
pool.batch_download_to_directory(
    resource_ids=my_waifu_ids,
    dst_dir='/data/exp2_surtr',
    max_workers=12,
)
- Konachan (gated dataset: you must be granted access first and set the HF_TOKEN environment variable, as shown in the note after this example)
from cheesechaser.datapool import KonachanDataPool

pool = KonachanDataPool()

# download konachan #210000-210300, to directory /data/exp2
pool.batch_download_to_directory(
    resource_ids=range(210000, 210300),
    dst_dir='/data/exp2',
    max_workers=12,
)
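For gated mirrors, the token can be supplied through the environment before any download starts. A minimal sketch (the token value is a placeholder, and the account behind it must already be granted access to the dataset):

import os

# set before creating the data pool; 'hf_xxxx' is a placeholder
os.environ['HF_TOKEN'] = 'hf_xxxx'

Alternatively, export HF_TOKEN in your shell before launching Python.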
- Civitai (this mirror repository on Hugging Face is private for now, so you have to use the HF token of an authorized account)
from cheesechaser.datapool import CivitaiDataPool

pool = CivitaiDataPool()

# download civitai #7810000-7810300, to directory /data/exp2
# should contain one image and one json metadata file
pool.batch_download_to_directory(
    resource_ids=range(7810000, 7810300),
    dst_dir='/data/exp2',
    max_workers=12,
)
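After the download, each JSON metadata file sits next to its image and can be inspected with the standard library. A quick sketch (the path pattern matches the example above):

import glob
import json

# peek at the first few downloaded metadata files
for meta_path in sorted(glob.glob('/data/exp2/*.json'))[:3]:
    with open(meta_path, 'r', encoding='utf-8') as f:
        meta = json.load(f)
    print(meta_path, list(meta)[:5])  # print the first few metadata keys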
More supported:
- RealbooruDataPool (Gated Dataset)
- ThreedbooruDataPool (Gated Dataset)
- FancapsDataPool (Gated Dataset)
- BangumiBaseDataPool (Gated Dataset)
- AnimePicturesDataPool (Gated Dataset)
- KonachanDataPool (Gated Dataset)
- YandeDataPool (Gated Dataset)
- ZerochanDataPool (Gated Dataset)
- GelbooruDataPool and GelbooruWebpDataPool (Gated Dataset)
- DanbooruNewestDataPool and DanbooruNewestWebpDataPool
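All of these pools expose the same batch_download_to_directory interface shown above. For example (the id range and target directory here are illustrative):

from cheesechaser.datapool import RealbooruDataPool

pool = RealbooruDataPool()
pool.batch_download_to_directory(
    resource_ids=range(10000, 10100),  # illustrative id range
    dst_dir='/data/realbooru_sample',
    max_workers=12,
)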
Batch Retrieving Images
from itertools import islice

from cheesechaser.datapool import DanbooruNewestDataPool
from cheesechaser.pipe import SimpleImagePipe, PipeItem

pool = DanbooruNewestDataPool()
pipe = SimpleImagePipe(pool)

# select from danbooru 7349990-7359990
ids = range(7349990, 7359990)
with pipe.batch_retrieve(ids) as session:
    # only need 20 images
    for i, item in enumerate(islice(session, 20)):
        item: PipeItem
        print(i, item)
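Each yielded item pairs a resource id with the loaded image. A sketch of saving the retrieved images, assuming PipeItem exposes id and data attributes with data being a PIL image for SimpleImagePipe (the target directory is illustrative):

import os

os.makedirs('/data/exp3', exist_ok=True)
with pipe.batch_retrieve(ids) as session:
    for item in islice(session, 20):
        # item.data is assumed to be a PIL.Image.Image here
        item.data.save(os.path.join('/data/exp3', f'{item.id}.png'))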