LINDI - Linked Data Interface


LINDI is a cloud-friendly file format and Python library designed for managing scientific data, especially Neurodata Without Borders (NWB) datasets. It offers an alternative to HDF5 and Zarr, maintaining compatibility with both, while providing features tailored for linking to remote datasets stored in the cloud, such as those on the DANDI Archive. LINDI's unique structure and capabilities make it particularly well-suited for efficient data access and management in cloud environments.

What is a LINDI file?

A LINDI file is a cloud-friendly format for storing scientific data, designed to be compatible with HDF5 and Zarr while offering unique advantages. It comes in three formats, each a representation of the same underlying data: a JSON/text format (.lindi.json), a binary format (.lindi.tar), and a directory format (.lindi.d).

In the JSON format, the hierarchical group structure, attributes, and small datasets are stored in a JSON structure, with references to larger data chunks stored in external files (inspired by kerchunk). This format is human-readable and easily inspected and edited.

The binary format is a .tar file that contains the JSON file (lindi.json) along with optional internal data chunks referenced by that JSON, in addition to any external chunks. This format can be used to create a new NWB file that builds on an existing NWB file, adding new data objects without duplicating the original (see below).

The directory format is similar to the binary format, but it stores lindi.json and the binary chunks in a directory rather than in a .tar file.

What are the main use cases?

LINDI files are particularly useful in the following scenarios:

Efficient NWB File Representation on DANDI: A LINDI JSON file can represent an NWB file stored on the DANDI Archive (or other remote system). By downloading a condensed JSON file, the entire group structure can be retrieved in a single request, facilitating efficient loading of NWB files. For instance, Neurosift utilizes pre-generated LINDI JSON files to streamline the loading process of NWB files from DANDI (here is an example).
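As a rough sketch of this first use case, a pre-generated LINDI JSON can be opened directly. This assumes that from_lindi_file also accepts a remote URL, and the URL below is a hypothetical placeholder:

import pynwb
import lindi

# Hypothetical URL of a pre-generated LINDI JSON file for a DANDI asset
lindi_json_url = "https://example.org/dandi/asset.nwb.lindi.json"

# A single request retrieves the condensed JSON with the entire group structure;
# larger data chunks are only downloaded when the corresponding datasets are sliced
f = lindi.LindiH5pyFile.from_lindi_file(lindi_json_url)
with pynwb.NWBHDF5IO(file=f, mode="r") as io:
    nwbfile = io.read()
    print(nwbfile)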

Creating Amended NWB Files: LINDI allows for the creation of amended NWB files that add new data objects to existing NWB files without duplicating the entire file. This is achieved by generating a binary or directory LINDI file that references the original NWB file and includes additional data objects stored as internal data chunks. This approach saves storage space by reducing redundancy and establishing dependencies between NWB files.

Why not use Zarr?

When comparing LINDI to Zarr, note that LINDI files are in fact valid Zarr archives that can be accessed via the Zarr API. Indeed, a LINDI file is a special type of Zarr store that allows external links to chunks (see kerchunk) and uses special conventions to represent HDF5 features required by NWB that are not natively supported in Zarr.

Traditional Zarr directory stores have some limitations. First, Zarr archives often consist of tens of thousands of individual files, making them cumbersome to manage. In contrast, LINDI adopts a single-file approach similar to HDF5, enhancing manageability while retaining cloud-friendliness. Another limitation, as mentioned above, is that Zarr lacks LINDI's mechanism for referencing data chunks in external datasets. Finally, Zarr does not natively support certain features used by NWB, such as compound data types and references; these are supported by both HDF5 and LINDI.

Why not use HDF5?

HDF5 is not well-suited for cloud environments because accessing a remote HDF5 file often requires a large number of small requests to retrieve metadata before larger data chunks can be downloaded. LINDI addresses this by storing the entire group structure in a single JSON file, which can be downloaded in one request. Additionally, HDF5 lacks a built-in mechanism for referencing data chunks in external datasets. Furthermore, HDF5 does not support custom Python codecs, a feature available in both Zarr and LINDI.

Is tar format really cloud-friendly?

With LINDI, yes. See docs/tar.md for details.

Installation

pip install --upgrade lindi

Or from source

cd lindi
pip install -e .

Usage

Creating and reading a LINDI file

The simplest way to start is to use it like HDF5.

import lindi

# Create a new lindi.json file
with lindi.LindiH5pyFile.from_lindi_file('example.lindi.json', mode='w') as f:
    f.attrs['attr1'] = 'value1'
    f.attrs['attr2'] = 7
    ds = f.create_dataset('dataset1', shape=(10,), dtype='f')
    ds[...] = 12

# Later read the file
with lindi.LindiH5pyFile.from_lindi_file('example.lindi.json', mode='r') as f:
    print(f.attrs['attr1'])
    print(f.attrs['attr2'])
    print(f['dataset1'][...])

You can inspect the example.lindi.json file to get an idea of how data are stored. If you are familiar with the internal Zarr format, you will recognize the .zgroup and .zarray entries and the layout of the chunks. Here is an example of a LINDI JSON file that represents an NWB file stored on DANDI.
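As a quick way to peek inside, you can load the JSON directly. This is a minimal sketch; the exact top-level keys depend on the LINDI version:

import json

# Load the LINDI JSON created above and list its top-level keys
with open('example.lindi.json') as fp:
    rfs = json.load(fp)
print(list(rfs.keys()))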

Because the above dataset is very small, it can all fit reasonably inside the JSON file. For storing larger arrays (the usual case) it is better to use the binary or directory format.

import numpy as np
import lindi

# Create a new lindi binary file
with lindi.LindiH5pyFile.from_lindi_file('example.lindi.tar', mode='w') as f:
    f.attrs['attr1'] = 'value1'
    f.attrs['attr2'] = 7
    ds = f.create_dataset('dataset1', shape=(1000, 1000), dtype='f')
    ds[...] = np.random.rand(1000, 1000)

# Later read the file
with lindi.LindiH5pyFile.from_lindi_file('example.lindi.tar', mode='r') as f:
    print(f.attrs['attr1'])
    print(f.attrs['attr2'])
    print(f['dataset1'][...])
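The directory format works the same way. Here is a minimal sketch, assuming (as with the other formats) that the store type is selected from the file extension, in this case .lindi.d:

import numpy as np
import lindi

# Create a new lindi directory store: lindi.json plus chunk files inside a directory
with lindi.LindiH5pyFile.from_lindi_file('example.lindi.d', mode='w') as f:
    ds = f.create_dataset('dataset1', shape=(1000, 1000), dtype='f')
    ds[...] = np.random.rand(1000, 1000)

# Later read the store back
with lindi.LindiH5pyFile.from_lindi_file('example.lindi.d', mode='r') as f:
    print(f['dataset1'][:5, :5])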

Loading a remote NWB file from DANDI

With LINDI, it is easy to load an NWB file stored on DANDI. The following example demonstrates how to load an NWB file from DANDI, view it using the pynwb library, and save it as a relatively small .lindi.json file. The LINDI JSON file can then be read directly to access the NWB file.

import pynwb
import lindi

# Define the URL for a remote NWB file
h5_url = "https://api.dandiarchive.org/api/assets/11f512ba-5bcf-4230-a8cb-dc8d36db38cb/download/"

# Load as LINDI and view using pynwb
f = lindi.LindiH5pyFile.from_hdf5_file(h5_url)
with pynwb.NWBHDF5IO(file=f, mode="r") as io:
    nwbfile = io.read()
    print('NWB via LINDI')
    print(nwbfile)

    print('Electrode group at shank0:')
    print(nwbfile.electrode_groups["shank0"])  # type: ignore

    print('Electrode group at index 0:')
    print(nwbfile.electrodes.group[0])  # type: ignore

# Save as LINDI JSON
f.write_lindi_file('example.nwb.lindi.json')
f.close()

# Later, read directly from the LINDI JSON file
g = lindi.LindiH5pyFile.from_lindi_file('example.nwb.lindi.json')
with pynwb.NWBHDF5IO(file=g, mode="r") as io:
    nwbfile = io.read()
    print('')
    print('NWB from LINDI JSON:')
    print(nwbfile)

    print('Electrode group at shank0:')
    print(nwbfile.electrode_groups["shank0"])  # type: ignore

    print('Electrode group at index 0:')
    print(nwbfile.electrodes.group[0])  # type: ignore

Amending an NWB file

One of the main use cases of LINDI is to create amended NWB files that add new data objects to existing NWB files without duplicating the entire file. This is achieved by generating a binary or directory LINDI file that references the original NWB file and includes additional data objects stored as internal data chunks.

import numpy as np
import pynwb
from pynwb.file import TimeSeries
import lindi

# Load the remote NWB file from DANDI
h5_url = "https://api.dandiarchive.org/api/assets/11f512ba-5bcf-4230-a8cb-dc8d36db38cb/download/"
f = lindi.LindiH5pyFile.from_hdf5_file(h5_url)

# Write to a local .lindi.tar file
f.write_lindi_file('example.nwb.lindi.tar')
f.close()

# Open with pynwb and add new data
g = lindi.LindiH5pyFile.from_lindi_file('example.nwb.lindi.tar', mode='r+')
with pynwb.NWBHDF5IO(file=g, mode="a") as io:
    nwbfile = io.read()
    timeseries_test = TimeSeries(
        name="test",
        data=np.array([1, 2, 3, 4, 5, 4, 3, 2, 1]),
        rate=1.,
        unit='s'
    )
    ts = nwbfile.processing['behavior'].add(timeseries_test)  # type: ignore
    io.write(nwbfile)  # type: ignore

# Later on, you can read the file again
h = lindi.LindiH5pyFile.from_lindi_file('example.nwb.lindi.tar')
with pynwb.NWBHDF5IO(file=h, mode="r") as io:
    nwbfile = io.read()
    test_timeseries = nwbfile.processing['behavior']['test']  # type: ignore
    print(test_timeseries)

Using the Local Cache

LINDI includes a local caching feature that significantly improves performance when accessing remote files by storing frequently accessed data chunks locally. The cache uses SQLite as its storage backend and is particularly beneficial when repeatedly accessing the same remote datasets.

Basic cache usage

import lindi

# Create a local cache (defaults to ~/.lindi/cache)
local_cache = lindi.LocalCache()

# Or specify a custom cache directory
local_cache = lindi.LocalCache(cache_dir="/path/to/custom/cache")

# Use the cache when loading remote files
h5_url = "https://api.dandiarchive.org/api/assets/11f512ba-5bcf-4230-a8cb-dc8d36db38cb/download/"
f = lindi.LindiH5pyFile.from_hdf5_file(h5_url, local_cache=local_cache)

# Subsequent accesses will be much faster due to caching
data = f['some_dataset'][:]  # First access: downloads and caches
data = f['some_dataset'][:]  # Second access: retrieved from cache

Cache with LINDI files

The cache can also be used when working with LINDI JSON files that reference remote data:

import lindi

# Create a local cache
local_cache = lindi.LocalCache()

# Load a LINDI file with caching enabled
f = lindi.LindiH5pyFile.from_lindi_file('example.nwb.lindi.json', local_cache=local_cache)

# Access data - first time will cache, subsequent times will be faster
data = f['processing/ecephys/LFP/LFP/data'][:1000]

How the cache works

  • The cache stores data chunks from remote URLs based on URL, byte offset, and chunk size
  • By default, the cache directory is located at ~/.lindi/cache
  • Individual chunks are limited to 900 MB due to SQLite constraints
  • The cache persists across Python sessions, so subsequent runs will benefit from previously cached data
  • Cache files are automatically created and managed by LINDI

Cache benefits

  • Dramatically improves performance for repeated access to the same remote datasets
  • Reduces network bandwidth usage
  • Enables faster iteration when developing and testing code with remote data
  • Particularly effective for accessing NWB files from DANDI Archive multiple times

Notes

This project was inspired by kerchunk and hdmf-zarr.

For developers

Special Zarr annotations used by LINDI

License

See LICENSE.
