Python module and CLI for hashing of file system directories.
A lightweight Python module and CLI for computing the hash of any directory based on its files' structure and content.

- Supports all hashing algorithms of Python's built-in `hashlib` module.
- Glob/wildcard (".gitignore style") path matching for expressive filtering of files to include/exclude.
- Multiprocessing for up to 6x speed-up.
The hash is computed according to the Dirhash Standard, which is designed to allow for consistent and collision-resistant generation/verification of directory hashes across implementations.
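To give a feel for what goes into such a hash, here is a minimal sketch of the general idea: hash each file's relative path together with its content, then combine the per-file digests in a deterministic order. This is an illustration only, not the Dirhash Standard itself (which precisely specifies encoding, ordering, and options); `naive_dirhash` below is a made-up helper.

```python
import hashlib
import os


def naive_dirhash(dirpath, algorithm="md5"):
    """Hash a directory from its file paths and contents (illustration only)."""
    file_digests = []
    for root, dirs, files in os.walk(dirpath):
        dirs.sort()  # fix traversal order so the result is deterministic
        for name in sorted(files):
            filepath = os.path.join(root, name)
            relpath = os.path.relpath(filepath, dirpath).replace(os.sep, "/")
            h = hashlib.new(algorithm)
            h.update(relpath.encode())  # the file's place in the structure
            with open(filepath, "rb") as f:
                h.update(f.read())  # the file's content
            file_digests.append(h.hexdigest())
    combined = hashlib.new(algorithm)
    combined.update("".join(file_digests).encode())
    return combined.hexdigest()
```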
Install from PyPI:

```shell
pip install dirhash
```
Or directly from source:

```shell
git clone git@github.com:andhus/dirhash-python.git
pip install dirhash-python/
```
Usage as a Python module:

```python
from dirhash import dirhash

dirpath = "path/to/directory"

# hash the full directory (file names/structure and content)
dir_md5 = dirhash(dirpath, "md5")

# only hash the .py files
pyfiles_md5 = dirhash(dirpath, "md5", match=["*.py"])

# exclude hidden files and directories
no_hidden_sha1 = dirhash(dirpath, "sha1", ignore=[".*", ".*/"])
```
Or from the command line:

```shell
dirhash path/to/directory -a md5
dirhash path/to/directory -a md5 --match "*.py"
dirhash path/to/directory -a sha1 --ignore ".*" ".*/"
```
If you (or your application) need to verify the integrity of a set of files as well as their name and location, you might find this useful. Use-cases range from verification of your image classification dataset (before spending GPU-$$$ on training your fancy Deep Learning model) to validation of generated files in regression-testing.
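For example, verifying a dataset against a previously recorded digest could look like the following sketch (the path and expected digest are placeholders):

```python
from dirhash import dirhash

DATASET_DIR = "path/to/image_dataset"  # placeholder path
EXPECTED_MD5 = "0123456789abcdef0123456789abcdef"  # digest recorded earlier

actual_md5 = dirhash(DATASET_DIR, "md5")
if actual_md5 != EXPECTED_MD5:
    raise RuntimeError(
        f"Dataset has changed: expected {EXPECTED_MD5}, got {actual_md5}"
    )
```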
There isn't really a standard way of doing this. There are plenty of recipes out there (see e.g. these SO-questions for linux and python), but I couldn't find one that is properly tested (there are some gotchas to cover!) and documented with a compelling user interface. `dirhash` was created with this as the goal.
checksumdir is another python module/tool with similar intent (and an inspiration for this project), but it lacks much of the functionality offered here (most notably, including file names/structure in the hash) and lacks tests.
The `hashlib` implementations of common hashing algorithms are highly optimized. `dirhash` mainly parses the file tree, pipes data to `hashlib`, and combines the output. Reasonable measures have been taken to minimize the overhead, and for common use-cases the majority of time is spent reading data from disk and executing `hashlib` code.
The main effort to boost performance is support for multiprocessing, where reading and hashing are parallelized over individual files.
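For example, assuming the `jobs` argument of `dirhash` (exposed as `-j`/`--jobs` on the CLI) controls the number of worker processes:

```python
from dirhash import dirhash

# The number of workers only affects speed, never the resulting hash.
serial_md5 = dirhash("path/to/directory", "md5", jobs=1)
parallel_md5 = dirhash("path/to/directory", "md5", jobs=8)
assert serial_md5 == parallel_md5
```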
As a reference, let's compare the performance of the `dirhash` CLI with the shell command:

```shell
find path/to/folder -type f -print0 | sort -z | xargs -0 md5 | md5
```
which is the top answer for the SO-question Linux: compute a single hash for a given folder & contents?. Results for two test cases are shown below. Both contain 1 GiB of random data: in "flat_1k_1MB" it is split into 1k files (1 MiB each) in a flat structure; in "nested_32k_32kB" it is split into 32k files (32 KiB each) spread over the 256 leaf directories of a binary tree of depth 8.
| Implementation  | Test Case       | Time (s) | Speed-up |
| --------------- | --------------- | -------- | -------- |
| shell reference | flat_1k_1MB     | 2.29     | 1.0      |
| shell reference | nested_32k_32kB | 6.82     | 1.0      |
The benchmark was run on a MacBook Pro (2018); further details and source code can be found here.
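To run a comparable measurement on your own data, here is a minimal timing sketch (the directory path is a placeholder and the `jobs` argument is assumed as above):

```python
import time

from dirhash import dirhash


def time_dirhash(dirpath, jobs):
    # wall-clock time for a single dirhash run
    start = time.perf_counter()
    dirhash(dirpath, "md5", jobs=jobs)
    return time.perf_counter() - start


# point this at a directory like the test cases above
for jobs in (1, 2, 4, 8):
    print(f"jobs={jobs}: {time_dirhash('path/to/benchmark_dir', jobs):.2f} s")
```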