
A data processing pipeline and iterator with minimal dependencies for machine learning.


Lunas


Lunas is a Python 3-based library that provides a set of simple interfaces for data processing pipelines and an iterator for looping through data.

Basically, Lunas draws its data-handling style from TensorFlow and PyTorch, and borrows some implementation details from AllenNLP.

Features

Reader A reader defines a dataset and its corresponding preprocessing and filtering rules. The following features are currently supported:

  1. Buffered reading.
  2. Buffered shuffling.
  3. Chained processing and filtering interface.
  4. Parallel preprocessing and filtering of the data buffer.
  5. Handling multiple input sources.
  6. Persistable.
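
Buffered shuffling (feature 2) is a standard technique for shuffling a stream that is too large to fit in memory. The sketch below illustrates the general idea only and is not Lunas's actual implementation; the function name and parameters are hypothetical:

```python
import random
from typing import Iterable, Iterator, TypeVar

T = TypeVar('T')

def buffered_shuffle(source: Iterable[T], buffer_size: int, seed: int = 0) -> Iterator[T]:
    """Shuffle a (possibly very large) stream using a fixed-size buffer."""
    rng = random.Random(seed)
    buffer = []
    for item in source:
        buffer.append(item)
        if len(buffer) >= buffer_size:
            rng.shuffle(buffer)  # shuffle and flush a full buffer
            yield from buffer
            buffer.clear()
    rng.shuffle(buffer)  # flush the remainder
    yield from buffer

out = list(buffered_shuffle(range(10), buffer_size=4, seed=1))
```

Every element is emitted exactly once while memory stays bounded by `buffer_size`; the trade-off is that shuffling is only local to each buffer.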

DataIterator An iterator performs multi-pass iterations over the dataset and maintains the iteration state:

  1. Dynamic batch size at runtime.
  2. Custom stopping criteria.
  3. Sorting samples within a batch, which is useful for learning text representations with RNNs in PyTorch.
  4. Persistable.

Persistable provides classes with a PyTorch-like interface to dump and load instance state, which is useful when the training process is accidentally aborted.
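
The contract can be pictured as a small mixin in the spirit of PyTorch's state_dict/load_state_dict. This is only an illustrative sketch, not Lunas's implementation, and the Counter class is a hypothetical example:

```python
class Persistable:
    """PyTorch-style interface for dumping and restoring instance state."""

    def state_dict(self) -> dict:
        # Collect everything needed to resume later, e.g. epoch and step counters.
        return dict(self.__dict__)

    def load_state_dict(self, state: dict) -> None:
        # Restore a previously saved state in place.
        self.__dict__.update(state)


class Counter(Persistable):  # hypothetical class using the mixin
    def __init__(self):
        self.step = 0

c = Counter()
c.step = 7
restored = Counter()
restored.load_state_dict(c.state_dict())
```

Because state_dict() returns a plain dict, the saved state can be pickled to disk and reloaded after a crash.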

Requirements

  • Numpy
  • overrides
  • typings
  • Python 3.x

Lunas hardly relies on any third-party libraries; the required ones exist only to take advantage of the type-hint feature provided by Python 3.

Installation

You can simply install Lunas by running pip:

pip install lunas

Example

Lunas exposes minimal interfaces to the user to keep it as simple as possible. We try to avoid adding unnecessary features to keep it lightweight.

However, you can still extend the library to suit your needs and handle arbitrary data types such as text, images, and audio.

  1. Create a dataset reader and iterate through it.

    from lunas.readers import Range
    
    ds = Range(10)
    for sample in ds:
        print(sample)
    for sample in ds:
        print(sample)
    
    • We create a dataset similar to range(10) and iterate through it for one epoch. As the second loop shows, the dataset can be iterated through multiple times.
  2. Build a data processing pipeline.

    ds = Range(10).select(lambda x: x + 1).select(lambda x: x * 2).where(lambda x: x % 2 == 0)
    
    • We call Reader.select(fn) to define a processing step for the dataset.
    • select() returns the dataset itself to enable chained invocations. The transformation can return a sample of any type, such as a Dict, a List, or a custom Sample.
    • where() accepts a predicate that returns a bool to filter input samples: if it returns True, the sample is preserved; otherwise it is discarded.
    • Note that the processing is not executed immediately; it is performed lazily, when iterating through ds.
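
The lazy chaining described above can be emulated in plain Python with a generator. The sketch below mirrors the select/where style for illustration only; it is not Lunas's implementation:

```python
class LazyPipeline:
    """Minimal lazy select/where pipeline: nothing runs until iteration."""

    def __init__(self, source):
        self._source = source
        self._ops = []  # ('select' | 'where', fn) pairs, applied in order

    def select(self, fn):
        self._ops.append(('select', fn))
        return self  # return self to enable chaining

    def where(self, predicate):
        self._ops.append(('where', predicate))
        return self

    def __iter__(self):
        for sample in self._source:
            keep = True
            for kind, fn in self._ops:
                if kind == 'select':
                    sample = fn(sample)
                elif not fn(sample):  # a 'where' predicate rejected the sample
                    keep = False
                    break
            if keep:
                yield sample

result = list(
    LazyPipeline(range(10))
    .select(lambda x: x + 1)
    .select(lambda x: x * 2)
    .where(lambda x: x % 2 == 0)
)  # → [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
```

No transformation runs when the pipeline is built; everything happens per sample during iteration, which keeps memory use constant regardless of dataset size.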
  3. Deal with multiple input sources.

    from lunas.readers import Range, Zip, Shuffle
    
    ds1 = Range(10)
    ds2 = Range(10)
    ds = Zip(ds1, ds2).select(lambda x: x[0] + x[1])
    ds = Shuffle(ds)
    
    • In the above code, we create two datasets and zip them with a Zip reader, which returns a tuple of samples drawn from its internal readers.
    • Shuffle performs randomized shuffling on the dataset.
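
The behavior of combining parallel sources can be sketched in plain Python; this mimics what the Zip reader plus select does above, and is not the library's code:

```python
def zip_readers(*sources):
    """Yield a tuple with one sample from each source, Zip-reader style.

    This sketch stops at the shortest source; a real implementation may
    instead require all sources to have the same length.
    """
    yield from zip(*sources)

# Pair up two sources and reduce each tuple to a single value,
# mirroring Zip(ds1, ds2).select(lambda x: x[0] + x[1]):
pairs = zip_readers(range(10), range(10))
summed = [a + b for a, b in pairs]
```

Each downstream transformation then receives the whole tuple, so `x[0]` and `x[1]` address the first and second source respectively.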
  4. Practical use case in Machine Translation scenario.

    from lunas.readers import TextLine, Zip, Shuffle
    from lunas.iterator import Iterator
    
    # Tokenize the input into a list of tokens.
    tokenize = lambda line: line.split()
    # Keep only pairs in which neither side exceeds 50 tokens.
    limit = lambda src_tgt: max(map(len, src_tgt)) <= 50
    # Map words to ids.
    word2id = lambda src_tgt: ...
    
    source = TextLine('train.fr').select(tokenize)
    target = TextLine('train.en').select(tokenize)
    ds = Zip(source, target).where(limit)
    ds = Shuffle(ds).select(word2id)
    
    # Take the maximum length of the sentence pair as sample_size.
    sample_size = lambda x: max(map(len, x))
    # Convert a list of samples to model inputs.
    collate_fn = lambda x: ...
    # Sort samples in a batch by source text length.
    sort_key = lambda x: len(x[0])
    
    it = Iterator(ds, batch_size=4096, cache_size=40960,
                  sample_size_fn=sample_size, collate_fn=collate_fn,
                  sort_desc_by=sort_key)
    
    # Iterate for at most 100 epochs and 1,000,000 steps.
    for batch in it.while_true(lambda: it.epoch < 100 and it.step < 1e6):
        print(it.epoch, it.step, it.step_in_epoch, batch)
    
    • This code should be simple enough to understand, even if you are not familiar with machine translation.
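
Batching by total sample size (here, token count) rather than by a fixed number of samples is what batch_size=4096 combined with a sample-size function achieves. A minimal sketch of the idea, independent of Lunas's internals; for simplicity the samples below are just their own sizes:

```python
def batch_by_size(samples, max_size, sample_size_fn):
    """Group samples so each batch's total size stays within max_size.

    With text, sample_size_fn is typically the token count, so long
    sentences yield small batches and short sentences yield large ones,
    keeping the memory cost per batch roughly constant.
    """
    batch, total = [], 0
    for sample in samples:
        size = sample_size_fn(sample)
        if batch and total + size > max_size:
            yield batch  # current batch is full; start a new one
            batch, total = [], 0
        batch.append(sample)
        total += size
    if batch:
        yield batch  # flush the final partial batch

batches = list(batch_by_size([3, 4, 2, 5, 1], 6, lambda s: s))  # → [[3], [4, 2], [5, 1]]
```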
  5. Save and reload iteration state.

    import pickle
    pickle.dump(it.state_dict(), open('state.pkl', 'wb'))
    # ...
    state = pickle.load(open('state.pkl', 'rb'))
    it.load_state_dict(state)
    
    • state_dict() returns a picklable dictionary, which can be loaded by it.load_state_dict() to resume the iteration process later.
  6. Extend the reader.

    • You can refer to the implementation of the Text reader to customize your own data reader.

Conclusions

Please feel free to contact me if you have any questions or find any bugs in Lunas.
