A library for reading and writing hierarchical data files

Project description

richfile

A more natural approach to saving hierarchical data structures.

richfile saves any Python object using directory structures on disk, and loads them back again into the same Python objects.

richfile can save any atomic Python object, including custom classes, so long as you can write functions to save and load it. It is intended as a replacement for formats like pickle, JSON, YAML, HDF5, Parquet, netCDF, zarr, and numpy when you want to save a complex data structure in a human-readable and editable format. We find the richfile format ideal when building a data processing pipeline in which you want to store intermediate results in a format that allows for custom data types, is insensitive to version changes (no pickling issues), allows for easy debugging, and is human readable.

It is easy to use, the code is simple and pure Python, and the operations follow ACID principles.

Installation

pip install richfile

Examples

Try out the examples in the demo_notebook.ipynb file.

Usage

Saving and loading data is simple:

## Given some complex data structure
data = {
    "name": "John Doe",
    "age": 25,
    "address": {
        "street": "1234 Elm St",
        "zip": None
    },
    "siblings": [
        "Jane",
        "Jim"
    ],
    "data": [1,2,3],
    (1,2,3): "complex key",
}

## Save it
import richfile as rf
r = rf.RichFile("path/to/data.richfile").save(data)

## Load it back
data = rf.RichFile("path/to/data.richfile").load()

You can also load just a part of the data:

r = rf.RichFile("path/to/data.richfile")
first_sibling = r["siblings"][0]  ## Lazily load a single item using pythonic indexing
print(f"First sibling: {first_sibling}")

>>> First sibling: Jane
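
Nested items can presumably be chained the same way; a minimal sketch, reusing the r handle and the example data above:

street = r["address"]["street"]  ## lazily load a single nested entry
print(f"Street: {street}")

>>> Street: 1234 Elm St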

View the contents of a richfile directory without loading it:

r.view_directory_structure()

Output:

Directory structure:
Viewing tree structure of richfile at path: path/to/data.richfile (dict)
├── name.dict_item (dict_item)
|   ├── key.json (str)
|   ├── value.json (str)
|   
├── age.dict_item (dict_item)
|   ├── key.json (str)
|   ├── value.json (int)
|   
├── address.dict_item (dict_item)
|   ├── key.json (str)
|   ├── value.dict (dict)
|   |   ├── street.dict_item (dict_item)
|   |   |   ├── key.json (str)
|   |   |   ├── value.json (str)
|   |   |   
|   |   ├── zip.dict_item (dict_item)
|   |   |   ├── key.json (str)
|   |   |   ├── value.json (None)
|   |   |   
|   |   
|   
├── siblings.dict_item (dict_item)
|   ├── key.json (str)
|   ├── value.list (list)
|   |   ├── 0.json (str)
|   |   ├── 1.json (str)
|   |   
|   
├── data.dict_item (dict_item)
|   ├── key.json (str)
|   ├── value.list (list)
|   |   ├── 0.json (int)
|   |   ├── 1.json (int)
|   |   ├── 2.json (int)
|   |   
|   
├── 5.dict_item (dict_item)
|   ├── key.tuple (tuple)
|   |   ├── 0.json (int)
|   |   ├── 1.json (int)
|   |   ├── 2.json (int)
|   |   
|   ├── value.json (str)
|   
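
Because every node is an ordinary file or directory, leaves can be inspected or edited with standard tools, without richfile at all. A minimal sketch, assuming the layout shown above:

import json
from pathlib import Path

path = Path("path/to/data.richfile")
## Read one leaf value directly from disk
with open(path / "age.dict_item" / "value.json") as f:
    print(json.load(f))  ## 25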

You can also add new data types easily:

## Add a type to a single RichFile object
import numpy as np

r = rf.RichFile("path/to/data.richfile")
r.register_type(
    type_name='numpy_array',
    function_load=lambda path: np.load(path),
    function_save=lambda path, obj: np.save(path, obj),
    object_class=np.ndarray,
    library='numpy',
    suffix='npy',
)

## OR
## Add type to environment so that all new RichFile objects can use it
rf.functions.register_type(
    type_name='numpy_array',
    function_load=lambda path: np.load(path),
    function_save=lambda path, obj: np.save(path, obj),
    object_class=np.ndarray,
    library='numpy',
    suffix='npy',
)
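
Once a type is registered, objects of that class can appear anywhere in a saved structure. A minimal round-trip sketch, assuming the numpy_array registration above (the final assert doubles as the invertibility check discussed under Considerations below):

import numpy as np

data = {"weights": np.arange(10), "label": "model_v1"}
r = rf.RichFile("path/to/arrays.richfile")
r.save(data)
loaded = r.load()
assert np.array_equal(loaded["weights"], data["weights"])  ## exact round trip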

Installation from source

git clone https://github.com/RichieHakim/richfile
cd richfile
pip install -e .

Considerations and Limitations

  • Invertibility: When creating custom data types, it is important to consider whether the saving and loading operations are exactly reversible; for example, a float serialized as text may not load back bit-for-bit.
  • ACID principles are reasonably followed via the use of temporary files, file locks, and atomic operations. However, the library is not a database, and therefore cannot guarantee the same level of ACID compliance as a database. In addition, atomic replacements of existing non-empty directories require two operations, which reduces atomicity.
  • Performance: Data structures with many branches will require many files and operations, which may become slow. Consider packaging highly branched data structures into a single file that supports hierarchical data (such as JSON, HDF5, Parquet, netCDF, zarr, or numpy) and registering a custom data type for it, as sketched below.
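
For instance, one way to pack a deep branch into a single file is to define a marker class and register a type for it. A hedged sketch (PackedDict and the JSON round trip are illustrative assumptions; the register_type signature is the one shown above):

import json
import richfile as rf

class PackedDict(dict):
    """Hypothetical marker: a dict to store as one JSON file rather than a directory tree."""

def save_packed(path, obj):
    ## Serialize the whole nested dict into a single JSON file
    with open(path, 'w') as f:
        json.dump(obj, f)

def load_packed(path):
    with open(path, 'r') as f:
        return PackedDict(json.load(f))

rf.functions.register_type(
    type_name='packed_dict',
    function_load=load_packed,
    function_save=save_packed,
    object_class=PackedDict,
    library='json',
    suffix='json',
)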

TODO:

  • Tests
  • Documentation
  • Examples
  • Readme
  • License
  • PyPi
  • Hashing
  • Item assignment (safely)
  • Custom saving/loading functions
  • Put the library imports in the function calls
  • Add handling for data without a known type
  • Change name of library to something more descriptive
  • Test out memmap stuff
  • Make it a .zip type

Download files

Download the file for your platform.

Source Distribution

richfile-0.4.5.tar.gz (26.7 kB)

Uploaded Source

Built Distribution

richfile-0.4.5-py3-none-any.whl (22.5 kB)

Uploaded Python 3

File details

Details for the file richfile-0.4.5.tar.gz.

File metadata

  • Download URL: richfile-0.4.5.tar.gz
  • Size: 26.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for richfile-0.4.5.tar.gz

  • SHA256: 05e899ebc4ed6315b1ac8ba9b621e33845e54fbe123fadf6882624fe304e10d9
  • MD5: cb7d89cbcb51493bc4ab09aeb411ae03
  • BLAKE2b-256: 7aebbe45e035e4bcdcfad3c4cd43d51a39962a5bfcf68ceb470ed0f17abfc17a

File details

Details for the file richfile-0.4.5-py3-none-any.whl.

File metadata

  • Download URL: richfile-0.4.5-py3-none-any.whl
  • Size: 22.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for richfile-0.4.5-py3-none-any.whl

  • SHA256: 0d4d4c4dbc7c9acc649e6e9acaf742212bb8c9b1d365897e6c21236e4d6839cb
  • MD5: 7618de61e2946a59bf2483a48e0a5e39
  • BLAKE2b-256: 958ca8c1c15852724409da57023c842bb579c2ed3a5db190f9b8c9c80e7bb04d
