
A high-performance dataloader for Python, written in Rust


datago


A data loader written in Rust, usable as a Python module. It handles several data sources, from local files to webdataset or a soon-to-be open-sourced VectorDB-focused HTTP stack. It is focused on image data at the moment, but could easily be made more generic.

Datago handles, outside of the Python GIL:

  • per sample IO
  • deserialization (jpg and png decompression)
  • some optional vision processing (aligning different image payloads)
  • optional serialization

Samples are exposed in the Python scope as native Python objects, using PIL and NumPy base types. Speed will be network-dependent, but GB/s is typical. Depending on the front end, datago can be rank- and world-size-aware, in which case samples are dispatched according to their hash.
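
As a mental model of that dispatch (a sketch, not datago's actual hashing code), each rank keeps the samples whose stable hash lands on it:

from zlib import crc32

# Illustrative only: a stable hash decides which rank serves which sample
def belongs_to_rank(sample_key: str, rank: int, world_size: int) -> bool:
    return crc32(sample_key.encode()) % world_size == rank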

[Diagram: Datago organization]

Use it

You can simply install datago with [uv] pip install datago

Use the package from Python

Please note that in all of the following cases, you can directly get an IterableDataset (torch-compatible) with the following code snippet:

from dataset import DatagoIterDataset
client_config = {} # See below for examples
datago_dataset = DatagoIterDataset(client_config, return_python_types=True)

return_python_types enforces, for instance, that images will be returned as PIL.Image objects; datago being an external binary module should be transparent.
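
For instance, a minimal sketch of feeding this into torch could look like the following (the file-source config is borrowed from the section below; how you unpack each sample depends on the source, so the loop body is just a placeholder):

from torch.utils.data import DataLoader
from dataset import DatagoIterDataset

client_config = {
    "source_type": "file",
    "source_config": {"root_path": "myPath", "random_sampling": True},
    "limit": 200,
    "samples_buffer_size": 32,
}
datago_dataset = DatagoIterDataset(client_config, return_python_types=True)

# batch_size=None passes samples through one by one; batch further downstream if needed
for sample in DataLoader(datago_dataset, batch_size=None):
    ...  # your training step goes here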

Dataroom
from datago import DatagoClient, initialize_logging
import os
import json

# Respects RUST_LOG=INFO env var for setting log level
# If omitted the logger will be initialized when the client starts.
initialize_logging()

config = {
    "source_config": {
        "sources": os.environ.get("DATAROOM_TEST_SOURCE", ""),
        "page_size": 500,
        "rank": 0,
        "world_size": 1,
    },
    "limit": 200,
    "samples_buffer_size": 32,
}

client = DatagoClient(json.dumps(config))

for _ in range(10):
    sample = client.get_sample()

Please note that the image buffers are passed around as raw pointers; see below (we provide Python utilities to convert them to PIL types).

Local files

To test datago while serving local files (jpg, png, ...), the code would look like the following. Note that since datago serves files with many concurrent threads, there will be some randomness in the sample ordering even if random_sampling is not set.

from datago import DatagoClient, initialize_logging
import os
import json

# Can also set the log level directly instead of using RUST_LOG env var
initialize_logging(log_level="warn")

config = {
    "source_type": "file",
    "source_config": {
        "root_path": "myPath",
        "random_sampling": False, # True if used directly for training
        "rank": 0, # Optional, distributed workloads are possible
        "world_size": 1,
    },
    "limit": 200,
    "samples_buffer_size": 32,
}

client = DatagoClient(json.dumps(config))

for _ in range(10):
    sample = client.get_sample()

[experimental] Webdataset

Please note that this implementation is very new and probably still has significant limitations; it has not yet been tested at scale. Also note that you can find a fuller example in /python/benchmark_webdataset.py, which shows how to convert everything to more pythonic types (PIL images).

from datago import DatagoClient, initialize_logging
import os
import json

# Can also set the log level directly instead of using RUST_LOG env var
initialize_logging(log_level="warn")

# URL of the test bucket
bucket = "https://storage.googleapis.com/webdataset/fake-imagenet"
dataset = "/imagenet-train-{000000..001281}.tar"
url = bucket + dataset

client_config = {
    "source_type": "webdataset",
    "source_config": {
        "url": url,
        "random_sampling": False,
        "concurrent_downloads": 8, # The number of TarballSamples which should be handled concurrently
        "rank": 0,
        "world_size": 1,
    },
    "prefetch_buffer_size": 128,
    "samples_buffer_size": 64,
    "limit": 1_000_000, # Dummy example, max number of samples you would like to serve
}

client = DatagoClient(json.dumps(client_config))

for _ in range(10):
    sample = client.get_sample()

Process images on the fly

Datago can also process images on the fly, for instance to align different image payloads. This is done by adding an image_config section to the configuration, as shown in the example below.

Processing can be very CPU-heavy, but it will be distributed over all CPU cores without requiring multiple Python processes, i.e., you can keep a single Python process calling get_sample() on the client and still saturate all CPU cores.

There are three main processing topics that you can choose from:

  • crop the images to within an aspect-ratio bucket (very handy for all Transformer / patch-based architectures)
  • resize the images (the setting relates to the square aspect-ratio bucket; other buckets will differ, of course)
  • pre-encode the images to a specific format (jpg, png, ...)

config = {
    "source_type": "file",
    "source_config": {
        "root_path": "myPath",
        "random_sampling": False, # True if used directly for training
    },
    # Optional pre-processing of the images, placing them in an aspect ratio bucket to preserve as much as possible of the original content
    "image_config": {
        "crop_and_resize": True, # False to turn it off, or just omit this part of the config
        "default_image_size": 1024,
        "downsampling_ratio": 32,
        "min_aspect_ratio": 0.5,
        "max_aspect_ratio": 2.0,
        "pre_encode_images": False,
    },
    "limit": 200,
    "samples_buffer_size": 32,
}
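
To make the bucketing concrete, here is a small self-contained sketch of how such aspect-ratio buckets could be derived from the settings above (this mirrors the config semantics as we read them, not datago's internal implementation):

# Illustrative only: enumerate plausible aspect-ratio buckets for the config above.
# Bucket sides stay multiples of downsampling_ratio (handy for patch-based models).
def aspect_ratio_buckets(default_image_size=1024, downsampling_ratio=32,
                         min_aspect_ratio=0.5, max_aspect_ratio=2.0):
    target_area = default_image_size * default_image_size
    buckets = []
    width = downsampling_ratio
    while True:
        # Pick the height keeping the bucket area close to the square bucket's area
        height = round(target_area / width / downsampling_ratio) * downsampling_ratio
        if height < downsampling_ratio:
            break
        ratio = width / height
        if min_aspect_ratio <= ratio <= max_aspect_ratio:
            buckets.append((width, height))
        if ratio > max_aspect_ratio:
            break
        width += downsampling_ratio
    return buckets

print(aspect_ratio_buckets())  # [(736, 1440), ..., (1024, 1024), ..., (1440, 736)]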

Match the raw exported buffers with typical python types

See the helper functions provided in raw_types.py; they should be self-explanatory. Check the Python benchmarks for examples. As mentioned above, we also provide a wrapper so that you get a dataset directly.
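
If you would rather do the conversion by hand, the general shape of such a helper is roughly the following. This is a hypothetical sketch: the actual helper and field names live in raw_types.py and may differ.

from PIL import Image

# Hypothetical sketch: rebuild a PIL image from a raw buffer and its metadata.
# The field names (data, width, height, channels) are assumptions; see raw_types.py.
def raw_image_to_pil(raw_image):
    mode = {1: "L", 3: "RGB", 4: "RGBA"}[raw_image.channels]
    return Image.frombytes(mode, (raw_image.width, raw_image.height), raw_image.data)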

Logging

We are using the log crate with env_logger. You can set the log level using the RUST_LOG environment variable. E.g. RUST_LOG=INFO.

When using the library from Python, env_logger will be initialized automatically when creating a DatagoClient. There is also an initialize_logging function in the datago module which, if called before using a client, lets you customize the log level. This only works if RUST_LOG is not set.

Env variables

There are a couple of env variables which will change the behavior of the library, for settings which felt too low level to be exposed in the config.

  • DATAGO_MAX_TASKS: refers to the number of threads which will be used to load the samples. Defaults to a multiple of the CPU cores.
  • RUST_LOG: see above; changes the logging level for the whole library, which can be useful for debugging or when reporting an issue here.
  • DATAGO_MAX_RETRIES: number of retries for a failed sample load, defaults to 3.
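
For example, these can be set from Python before the first client is created (plain os.environ, nothing datago-specific):

import os

# Must be set before the first DatagoClient is created to take effect
os.environ["RUST_LOG"] = "debug"          # verbose logging for the whole library
os.environ["DATAGO_MAX_TASKS"] = "16"     # cap the number of sample-loading threads
os.environ["DATAGO_MAX_RETRIES"] = "5"    # retry failed sample loads a bit harder
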
Build it

Preamble

Just install the Rust toolchain via rustup.

[Apple Silicon MacOS only]

If you are using an Apple Silicon Mac OS machine, create a .cargo/config file and paste the following:

[target.x86_64-apple-darwin]
rustflags = [
  "-C", "link-arg=-undefined",
  "-C", "link-arg=dynamic_lookup",
]

[target.aarch64-apple-darwin]
rustflags = [
  "-C", "link-arg=-undefined",
  "-C", "link-arg=dynamic_lookup",
]

Build a benchmark CLI

cargo run --release -- -h will give you all the information; it should be fairly straightforward.

Run the rust test suite

From the datago folder

cargo test

Generate the python package binaries manually

Build a wheel usable locally

maturin build -i python3.11 --release --target "x86_64-unknown-linux-gnu"

Build a wheel which can be uploaded to pypi or related

  • either use a manylinux docker image

  • or cross-compile using zig

maturin build -i python3.11 --release --target "x86_64-unknown-linux-gnu" --manylinux 2014 --zig

Then you can pip install from target/wheels.

Update the pypi release (maintainers)

Create a new tag and a new release in this repo, a new package will be pushed automatically.

Benchmarks

As usual, benchmarks are a tricky game, and you shouldn't read too much into the following plots but do your own tests. Some Python benchmark examples are provided in the python/ folder.

In general, Datago will be impactful if you want to load a lot of images very fast; if you consume them as you go at a more leisurely pace, it is not really needed. The more CPU work there is on the images, and the higher quality they are, the more Datago will shine.

From disk: ImageNet

The following benchmarks use ImageNet 1k, which is very low resolution and thus something of a worst-case scenario. Data is served from cache (i.e. the OS cache) and the images are not pre-processed. In this case the receiving Python process is typically the bottleneck, and caps out at around 3000 images per second.

[Plot: IN1k from disk, no processing - AMD Zen3 laptop & M2 SSD]

[Plot: IN1k from disk, no processing - AMD EPYC 9454]

Webdataset: FakeIN

This benchmark uses low-resolution images, accessed through the webdataset front end; datago is compared with the popular Python webdataset library. Note that datago starts streaming the images faster here (almost instantly!), which emphasizes throughput differences depending on how long you test for.

Also of note: this can be bottlenecked by your external bandwidth to the remote storage where the webdataset is hosted, in which case both solutions would yield comparable numbers.

[Plot: FakeIN over webdataset, no processing - AMD Zen3 laptop]

[Plot: FakeIN over webdataset, no processing - AMD EPYC 9454]

Webdataset: PD12M

This benchmark uses high-resolution images, again accessed through the webdataset front end and compared with the popular Python webdataset library. The same caveats as above apply: datago starts streaming almost instantly, and external bandwidth to the remote storage can bottleneck both solutions.

[Plot: PD12M over webdataset, no processing - AMD Zen3 laptop]

[Plot: PD12M over webdataset, no processing - AMD EPYC 9454]

Webdataset: with processing

Adding image processing (crop and resize to Transformer-compatible size buckets) to the equation changes the picture, as spreading the work over the CPU cores becomes more important. If you're training a diffusion model or an image encoder on a diverse set of images, this is likely the most realistic micro-benchmark.

[Plot: webdataset with processing - AMD Zen3 laptop]

License

MIT License

Copyright (c) 2025 Photoroom

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
