
python-bloomfilter

Scalable bloom filter using different backends, written in Python. The current version works only with Python 3.

Installation

pip install BloomFilterPy

Backends

Currently, BloomFilterPy has the following backends available: numpy, bitarray and redis. The first two are recommended when the expected number of elements in the filter fits in memory. The redis backend is preferred when:

  • You expect a huge amount of data in the filter that doesn't fit in memory.
  • You want a distributed filter available (i.e. shared by more than one machine). Thanks to Lua scripts, it is now possible to take advantage of Redis atomic operations on the server side and share the same filter across multiple machines.

Usage & API

BloomFilterPy implements a common API regardless of the backend used. Every backend extends the BaseBackend class, which implements the common API. In turn, this base class extends Python's built-in set class, but only the add and len operations are properly handled.
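The subclassing pattern can be pictured with a minimal sketch (illustrative only; the real BaseBackend also handles hashing and capacity tracking):

```python
class ToyBackend(set):
    """Illustrative set subclass where only add and len are meaningful."""

    def __init__(self):
        super().__init__()
        self._count = 0  # track insertions independently of set semantics

    def add(self, element):
        self._count += 1
        super().add(element)

    def __len__(self):
        # report the number of add() calls, not the number of distinct items
        return self._count


b = ToyBackend()
b.add('a')
b.add('a')  # counted twice here, even though the underlying set stores it once
```

Note that operations other than add and len fall back to plain set behaviour, which is why the documentation warns that only those two are properly handled.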

BloomFilter class

  • max_number_of_element_expected: size of the filter, i.e. the number of elements it will contain.
  • error_rate: the error rate you're willing to accept. Default is 0.0005.
  • backend: numpy, bitarray or redis. Default is numpy.
  • Only applies with the redis backend:
    • redis_connection: URL for the Redis connection, as accepted by redis-py.
    • connection_retries: maximum number of connection retries if the connection to Redis is lost. Default is 3.
    • wait: maximum waiting time before retrying a request against Redis.
    • prefix_key: key used in Redis to store the bloom filter data. Default is bloom_filter.
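For reference, the Redis-specific parameters might be combined like this (a sketch: the values are placeholders, it assumes a Redis server on localhost:6379, and the units of wait are an assumption):

```python
from pybloom import BloomFilter

f = BloomFilter(
    1_000_000,                     # max_number_of_element_expected
    error_rate=0.0005,
    backend='redis',
    redis_connection='redis://localhost:6379/0',
    connection_retries=3,          # give up after 3 failed reconnects
    wait=5,                        # max wait between retries (units assumed)
    prefix_key='my_bloom_filter',  # namespace for this filter's data in Redis
)
```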

API

  • add(element): add a new element to the filter.
  • full: property that indicates whether the filter is full.
  • false_positive_probability: property that returns the current, updated error rate of the filter. This value should match the error_rate chosen when BloomFilterPy was instantiated, but it will change as new items are added.
  • reset(): purge every element from the filter. With the bitarray or numpy backends, it is possible to keep using the filter after calling reset(). However, with the redis backend, once reset() is called, you must reinstantiate the filter.
  • len: get the length of the filter (i.e. the number of elements).
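Behind add and the in operator sit the classic bloom-filter mechanics: each element is hashed to k bit positions, add sets those bits, and a membership test checks that they are all set. A rough, self-contained sketch of the idea (not BloomFilterPy's actual implementation, which uses optimized backends):

```python
import hashlib


class ToyBloom:
    """Minimal bloom filter: k hash positions over a fixed bit array."""

    def __init__(self, size=64, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = [0] * size

    def _positions(self, element):
        # derive k positions from salted digests, a common simplification
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{element}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, element):
        for pos in self._positions(element):
            self.bits[pos] = 1

    def __contains__(self, element):
        # a single unset bit proves absence; all bits set means "probably present"
        return all(self.bits[pos] for pos in self._positions(element))


f = ToyBloom()
f.add("hello")
```

Because bits are never cleared by add, an added element is always reported as present; the false positives come from unrelated elements whose positions happen to collide with already-set bits.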

Local example

from pybloom import BloomFilter

if __name__ == '__main__':
    f = BloomFilter(10, error_rate=0.0000003, backend='bitarray')  # or backend='numpy'

    for i in range(10):
        f.add(i)  # or f += i
        assert i in f

    print(f.false_positive_probability, 11 in f) # 6.431432780588261e-07 False

In the example above, we created a bloom filter using the bitarray backend, with 10 expected elements and a maximum assumed false positive probability of 0.0000003.
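From those two inputs the backend sizes its bit array. Assuming the standard bloom-filter formulas m = -n·ln(p)/ln²2 bits and k = (m/n)·ln 2 hash functions (the library's exact rounding may differ), the example's parameters work out to:

```python
import math


def optimal_parameters(n: int, p: float):
    """Standard bloom filter sizing: bit-array size m and hash count k."""
    m = math.ceil(-n * math.log(p) / math.log(2) ** 2)  # bits needed
    k = round(m / n * math.log(2))                       # hash functions needed
    return m, k


m, k = optimal_parameters(10, 0.0000003)  # m=313 bits, k=22 hashes
```

This is why such a tiny error rate is affordable here: even k = 22 hash probes over a 313-bit array is trivial for 10 elements.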

Redis example

In order to build a bloom filter in Redis, BloomFilterPy with the RedisBackend will do all the work for you. The first process that wins the distributed lock is responsible for initializing the filter.

from pybloom import BloomFilter

if __name__ == '__main__':
    f = BloomFilter(10, error_rate=0.0000003, backend='redis', redis_connection='redis://localhost:6379/0')

    for i in range(10):
        f.add(i)  # or f += i
        assert i in f

    print(f.false_positive_probability, 11 in f) # 6.431432780588261e-07 False

Once the filter is initialized, if you don't change the prefix_key in the BloomFilter object and the current prefix_key already exists, BloomFilterPy will reuse it in a distributed fashion. In this case, max_number_of_element_expected and error_rate are ignored, but for compatibility with the rest of the backends, it is still mandatory to provide them.

How can I extend it?

If you install this library from source and are interested in building a new backend, such as a MongoBackend or a FileSystemBackend, it is very simple. You just need to extend your new backend from:

  • ThreadBackend: if you want to develop a local thread-safe backend, like a FileSystemBackend. This backend exposes a lock property to be used when a new item is added or the filter is reset.
  • SharedBackend: if you want to develop a backend shared across several machines, like a DatabaseBackend.

and implement the following methods:

  • _add(*args, **kwargs): this method specifies how new elements are added to the filter using the backend.
  • reset(): this method deletes or purges every element from the filter.
  • __contains__(item): this method checks whether an item is present in the filter.

In addition, the __init__ method must take three parameters defining the array size, the optimal number of hash functions and the filter size. By convention, array_size: int, optimal_hash: int and filter_size: int are used.

For example, in a hypothetical MongoBackend, the skeleton would be something similar to:

class MongoBackend(SharedBackend):
    def __init__(self, array_size: int, optimal_hash: int, filter_size: int, **kwargs):
        # In kwargs you can put MongoDB connection details, like host, port and so on.
        self._mongo_connection = MongoClient(**kwargs)
        super(MongoBackend, self).__init__(array_size, optimal_hash)

    def _add(self, other):
        # perform the hash functions on other and save the result in MongoDB
        # using self._mongo_connection
        ...

    def reset(self):
        # purge the bloom filter using self._mongo_connection
        ...

    def __contains__(self, item):
        # check whether item is present in the filter
        ...

Once your new backend is ready, you must add it to the BloomFilter factory class:

class BloomFilter(object):
    def __new__(cls, max_number_of_element_expected: int, error_rate=.0005, backend='numpy', **kwargs):
        ...

        elif backend == 'MongoBackend':
            return MongoBackend(filter_metadata.optimal_size, filter_metadata.optimal_hash, max_number_of_element_expected, **kwargs)

        ...
