Easy, efficient and Pythonic data loading of Parquet files for PyTorch-based libraries
PyTorch Parquet Data Loader
This library holds a number of classes that make it easy to read data from Parquet files into the PyTorch ecosystem. Although it is intended for natural language processing projects and NLP libraries, it is extremely flexible. Feel free to use, modify, or fork this library in any way!
Supported Libraries
Library | Requirements | Usage Notes
---|---|---
PyTorch-Lightning | PyArrow and (optionally) Petastorm | The more basic PyArrow implementation is far easier to understand, but not battle-tested.
Transformers | Either PyTorch-Lightning implementation | Petastorm casts data types from one format to another several times along the way, which can impair performance.
AllenNLP | PyArrow | Not implemented yet.
Please look here for further information on using Petastorm with Hugging Face Transformers.
Difference from Petastorm
Petastorm is a great (albeit complex) library for using Parquet files in a large variety of situations. Although they have basic PyTorch support, their solution is tough to understand. This can make it difficult to debug and modify for personal use.
PyParquetLoaders, by contrast, focuses on providing Python classes that are easy to use, understand, and modify. This means anyone can get started with their PyTorch models reasonably quickly, even when doing something slightly different or unique.
Currently PyParquetLoaders supports PyTorch-Lightning and Hugging Face Transformers, with AllenNLP support coming soon!
Usage Guide
To train a PyTorch model on a Parquet file, simply choose and import the right dataset/loader for your library of choice. Then use it just as you would any other (simple) text/image dataset (see your library's relevant docs). Some examples are included in the Tests.py script.
Developer/Contributor Guide
To help develop/extend this library please use the following workflow:
- Fork and clone the repository
- Make your modifications
- Test your modifications (run tests with `python -m pytest` and install in editable mode with `pip install -e .`)
- Commit (`git commit -m "description of changes"`) and push (`git push`) your changes
- Create a pull request
Feel free to add any feature you see fit, or to fix/report any bug you find (via GitHub Issues). When creating a pull request, please carefully describe what you're changing, why, and give a brief overview of the changes. Commits should be small, changing only one feature at a time.