Ice (save) your data and high level objects for use later.

Project description

dataicer 0.2.1

Do you have complex classes or objects that you want to save to disk and reinstate later? Do you want to use a data structure's natural save methods? Do you want it to be easy and manageable, capturing key information so you can come back and load your data later if you need to?

dataicer can help you with all of this. Built on top of jsonpickle, dataicer allows you to create a central handler (currently for a directory only) where Python objects can be saved in JSON format. However, while JSON may be fine for small objects or simple types, it is not well suited to complex structures such as numpy.ndarray, pandas.DataFrame or xarray.Dataset. These structures come with their own ways of saving information, and dataicer leverages this on top of jsonpickle to create portable and recreatable saved Python state.
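To see the gap that jsonpickle (and, by extension, dataicer) fills, here is a minimal stdlib-only sketch, not using dataicer itself: plain json can serialize simple types but has no idea how to encode an arbitrary Python object (the Well class below is just an illustrative example):

```python
import json


class Well:
    """A toy domain object with state we would like to save."""

    def __init__(self, name, depth):
        self.name = name
        self.depth = depth


w = Well("well-1", 3200.0)

# plain json cannot encode an arbitrary object...
try:
    json.dumps(w)
except TypeError as err:
    print(f"json.dumps failed: {err}")

# ...it only handles simple types out of the box
print(json.dumps({"name": w.name, "depth": w.depth}))
```

jsonpickle extends this by recording the class and its attributes so the object can be rebuilt, and dataicer adds per-type handlers for structures that JSON alone cannot represent efficiently.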

Installation

Install using pip from the source directory:

pip install .

or install from PyPI

pip install dataicer

Usage

First, create a new DirectoryHandler class. This points at the archive folder you want to use.

Classes with complex state need a dedicated handler to be pickled. dataicer includes handlers for numpy.ndarray, xarray.DataArray, xarray.Dataset and pandas.DataFrame. Handlers are unique to the DirectoryHandler instance.

from dataicer import DirectoryHandler, get_numpy_handlers, get_pandas_handlers, get_xarray_handlers

handlers = get_pandas_handlers()
handlers.update(get_xarray_handlers())

dh = DirectoryHandler("my_archive", handlers, mode="w")

Numpy arrays can be saved as single-column "txt" text, "npy" binary, or "npz" compressed files. Xarray structures can only be saved as "nc" netCDF. Pandas DataFrames can be saved as "h5" HDF5 or "csv" text files.
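These are the structures' own native save formats. As an illustration of what dataicer delegates to under the hood, the same data can be written in each form using numpy and pandas directly (this sketch does not use dataicer's API):

```python
import tempfile
from pathlib import Path

import numpy as np
import pandas as pd

arr = np.arange(5, dtype=float)
df = pd.DataFrame({"a": [1, 2, 3]})

out = Path(tempfile.mkdtemp())

# the three numpy formats: text, binary, and compressed
np.savetxt(out / "arr.txt", arr)               # single-column text
np.save(out / "arr.npy", arr)                  # binary
np.savez_compressed(out / "arr.npz", arr=arr)  # compressed archive

# the pandas text format; "h5" would additionally require a tables install
df.to_csv(out / "df.csv", index=False)

# each format round-trips the data
print(np.load(out / "arr.npy"))
```

The trade-off is the usual one: "txt"/"csv" are human-readable but lossy for dtype details, "npy"/"h5" are exact, and "npz" trades a little CPU time for smaller files.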

Objects are then passed to the ice function of the DirectoryHandler as keyword arguments.

import numpy as np
import xarray as xr
import pandas as pd

dh.ice(
    nparr=np.zeros(10),
    df=pd.DataFrame(data={"a": [1, 2, 3]}),
    xarrds=xr.tutorial.scatter_example_dataset(),
)

dataicer will create the directory my_archive and place in it a file, identified by a uuid, for each of the three objects. There is also a JSON file, named after each keyword, containing all the meta information for the saved object, and a meta.json file recording the system state at the time the archive was created.

The deice method can be used to reload all of the arguments into a dictionary.

state = dh.deice()
state["nparr"]

    array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])

Project details


Download files

Source Distributions

No source distribution files are available for this release.

Built Distribution

dataicer-0.2.1-py3-none-any.whl (12.4 kB)

Uploaded: Python 3

File details

Details for the file dataicer-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: dataicer-0.2.1-py3-none-any.whl
  • Upload date:
  • Size: 12.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.8.0 pkginfo/1.8.2 readme-renderer/32.0 requests/2.27.1 requests-toolbelt/0.9.1 urllib3/1.26.8 tqdm/4.63.0 importlib-metadata/4.11.2 keyring/23.5.0 rfc3986/2.0.0 colorama/0.4.4 CPython/3.9.10

File hashes

Hashes for dataicer-0.2.1-py3-none-any.whl:

  • SHA256: bae887ddec59780a6195fe46624141151b0663b53719a6489085f5f0d49b7ce0
  • MD5: c4f477e12413b8c800a71e6789eb79e0
  • BLAKE2b-256: f8c0e9bca3010b720e02bd7250715d64d3504c92924d4d8e04729bada3abfea1
