Python randomhash package
A simple, time-tested family of random hash functions in Python, based on CRC32 and xxHash, affine transformations, and the Mersenne Twister.
This is a companion library to the identical Java version.
Installation and usage
The library is available on PyPI, and can be installed through normal means:
$ pip install randomhash
Once installed, the library can be used either by instantiating a family of random hash functions, or by calling the default instance:
import randomhash
# Create a family of random hash functions, with 10 hash functions
rfh = randomhash.RandomHashFamily(count=10)
print(rfh.hashes("hello")) # will compute the ten hashes for "hello"
# Use the default instantiated functions
print(randomhash.hashes("hello", count=10))
Features
This library introduces a family of hash functions that can be used to implement probabilistic algorithms such as HyperLogLog. It is based on affine transformations of either the CRC32 hash function, which has been empirically shown to provide good performance (and is used for consistency with other versions of this library, such as the Java version), or the more complex xxHash hash functions, made available through the xxhash Python bindings.
The pseudorandom numbers are drawn according to the standard Python implementation of the Mersenne Twister.
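As a rough illustration of the idea (this is a hedged sketch, not the library's actual implementation), an affine family over CRC32 might draw per-function coefficients from Python's Mersenne Twister-based `random` module and apply h_i(x) = (a_i * crc32(x) + b_i) mod p:

```python
import random
import zlib

# Illustrative sketch only: an affine family h_i(x) = (a_i * crc32(x) + b_i) mod p.
# The modulus, coefficient ranges, and function names here are assumptions for
# the example, not the randomhash package's API.
P = (1 << 31) - 1  # a Mersenne prime, a common choice of modulus

def make_affine_crc32_family(count, seed=None):
    # Coefficients are drawn once, from Python's Mersenne Twister PRNG.
    rng = random.Random(seed)
    coeffs = [(rng.randrange(1, P), rng.randrange(0, P)) for _ in range(count)]

    def hashes(s):
        base = zlib.crc32(s.encode("utf-8"))  # single underlying CRC32 value
        return [(a * base + b) % P for a, b in coeffs]

    return hashes

family = make_affine_crc32_family(count=10, seed=42)
print(family("hello"))  # ten pseudorandom values, stable for a fixed seed
```

Because the coefficients are fixed once drawn, each h_i is a deterministic function: the same input always produces the same ten values.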
Some history
In 1983, G. N. N. Martin and Philippe Flajolet introduced the algorithm known as Probabilistic Counting, designed to provide an extremely accurate and efficient estimate of the number of unique words from a document that may contain repetitions. This was an incredibly important algorithm, which introduced a revolutionary idea at the time:
The only assumption made is that records can be hashed in a suitably pseudo-uniform manner. This does not however appear to be a severe limitation since empirical studies on large industrial files [5] reveal that careful implementations of standard hashing techniques do achieve practical uniformity of hashed values.
The idea is that hash functions can "transform" data into pseudorandom variables. Then a text can be treated as a sequence of random variables drawn from a uniform distribution, where a given word will always occur as the same random value. For instance, the sequence

a b c a a b c

could be hashed as

.00889 .31423 .70893 .00889 .00889 .31423 .70893

with every occurrence of a hashing to the same value. While this sounds strange, empirical evidence suggests it is true enough in practice, and eventually some theoretical basis has come to support the practice.
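The mapping above can be sketched with a standard-library checksum (CRC32 here, purely as an illustrative stand-in for the hash functions discussed; the actual values will differ from the ones shown):

```python
import zlib

def uniform_hash(word):
    # Map a word to a pseudo-uniform value in [0, 1) by normalizing its
    # CRC32 checksum (an unsigned 32-bit integer) by 2**32.
    return zlib.crc32(word.encode("utf-8")) / 2**32

text = "a b c a a b c".split()
values = [uniform_hash(w) for w in text]

# Every occurrence of "a" maps to the same value, as in the example above.
assert values[0] == values[3] == values[4]
```

The key property is determinism: repeated words collapse to repeated values, so the distinct values behave like a sample of uniform random variables, one per distinct word.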
The original Probabilistic Counting algorithm (1983) gave way to LogLog (2004), and then eventually to HyperLogLog (2007), now one of the most famous algorithms in the world. These algorithms and others all use the same idea of hashing inputs in order to treat them as random variables, and have proven remarkably efficient and accurate.
But as highlighted in the above passage, it is important to be careful.
Hash functions in practice
In practice, it is easy to use poor-quality hash functions, or to use cryptographic functions that will significantly slow down the speed (and relevance) of the probabilistic estimates. However, on most data, simple checksums (such as Adler32 or CRC32) provide good results, as do efficient, general-purpose non-cryptographic hash functions such as xxHash.
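Both checksums mentioned above are available in Python's standard library, so they can be tried with no extra dependencies (a minimal sketch; for xxHash you would instead need the third-party `xxhash` package):

```python
import zlib

# Normalize two stdlib checksums to [0, 1), as a cheap source of
# pseudo-uniform values. Both return unsigned 32-bit integers.
data = b"hello"
crc = zlib.crc32(data) / 2**32
adler = zlib.adler32(data) / 2**32

print(f"crc32:   {crc:.5f}")
print(f"adler32: {adler:.5f}")
```

Cryptographic hashes (e.g. `hashlib.sha256`) would also work, but they do more work per byte than these estimates need.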
Hashes for randomhash-0.6.0-py3-none-any.whl

Algorithm    Hash digest
SHA256       7dbc2506774d2e814411e2c7400c5463119124a47eef64cd53a516a6874f7937
MD5          dc3a28f2be47b724be793f37b7f05cbb
BLAKE2b-256  8e1915aa9178557103636c35f79c762ffc9a48527a0546c3fcde4b272fe5edd3