A library for working with Flywheel datasets
Project description
fw-dataset
This repository contains classes and functions for creating, managing, and serving Flywheel Datasets. Flywheel Datasets are a way to organize, share, and query data from the Flywheel Data Model.
Work In Progress
This is a work in progress. Not all functionality is implemented yet.
Getting started
Installation
The fw-dataset package has been built for use with Python 3.10 and above. It can be
installed with pip:
pip install fw-dataset
or poetry:
poetry add fw-dataset
Usage
Accessing Datasets
The fw-dataset package provides a FWDatasetClient class that can be used to access
existing Flywheel datasets defined on a Flywheel instance. A dataset may be backed by
cloud storage or a local filesystem.
from fw_dataset import FWDatasetClient
# Create a client with a Flywheel API-Key
api_key = "your-api-key"
dataset_client = FWDatasetClient(api_key=api_key)
# If you are in a Flywheel Jupyter Workspace with the environment variables
# FW_HOSTNAME and FW_WS_API_KEY set, the following will connect to the implicit
# Flywheel instance:
# dataset_client = FWDatasetClient()
# list existing datasets defined on the instance (see below for Flywheel Project Requirements)
datasets = dataset_client.datasets()
# link to a specific project-associated dataset
# by project id
project_id = "your-project-id"
dataset = dataset_client.dataset(project_id=project_id)
# or by project path
group = "your-group"
project_label = "your-project-label"
dataset = dataset_client.dataset(project_path=f"fw://{group}/{project_label}")
# connect the dataset to all underlying data on the cloud storage or local filesystem
conn = dataset.connect()
# query the dataset
SQL = "SELECT * FROM acquisitions"
# get the results
results = conn.execute(SQL)
result_df = results.df()
result_df.head()
Rendering Datasets
The fw-dataset package provides a DatasetBuilder class that converts the Flywheel Data
Model of a Flywheel project into a dataset. The DatasetBuilder leverages the Flywheel
Snapshot API to create an SQLite snapshot of the Flywheel project, then converts the
project snapshot to a dataset structure on a local or cloud filesystem.
from fw_dataset.admin.dataset_builder import DatasetBuilder
# Create a client with a Flywheel API-Key
api_key = "your-api-key"
project_id = "your-project-id"
storage_id = "your-storage-id"
# Initialize the dataset builder with an api-key, project-id, and Flywheel storage-id
dataset_builder = DatasetBuilder(api_key=api_key, project_id=project_id, storage_id=storage_id)
# Local Alternative
# dataset_builder = DatasetBuilder(api_key=api_key, project_id=project_id, root_path="path/to/dataset")
# Render the dataset structure and metadata
dataset = dataset_builder.render_dataset()
# Connect to the dataset
conn = dataset.connect()
# Query the dataset
SQL = "SELECT * FROM subjects LIMIT 10"
conn.execute(SQL).df()
The Dataset Structure will be rendered in the storage bucket or local storage under the path specified by:
{bucket}/datasets/{instance}/{group}/{project_id}/latest/
If the latest directory already exists and is the version you are trying to render,
the existing Dataset object is returned. If the latest directory does not exist, it is
created and the Dataset object is returned. If you are creating a new dataset from a
project with an existing snapshot, use the force_new parameter:
dataset = dataset_builder.render_dataset(force_new=True)
Additionally, if you want to render a project's tabular data files and custom information into dataset tables and schemas, you must use the following flags:
dataset = dataset_builder.render_dataset(parse_tabular_data=True, parse_custom_info=True)
Default Tables
The default tables that are rendered from the Flywheel Data Model are:
- subjects
- sessions
- acquisitions
- files
- analyses
If custom information exists in any of the above containers, it is extracted and stored
in a hidden table named custom_info. The reason for this is that the parquet format
cannot store JSON with variable value types. This table is not accessible by default.
The data in the custom_info table can be accessed by registering the table with the
dataset and querying it as a normal table. The results will contain the full parent
information of the Flywheel Data Model (e.g. parents.project, parents.subject,
parents.session,...) as well as a binary string of the custom information.
from fw_dataset.admin.admin_helpers import validate_dataset_table
# with an existing and fully connected dataset object
# validate that the custom_info table is accessible and non-empty
is_valid = validate_dataset_table(dataset.conn, dataset._fs, dataset, "custom_info")
if is_valid:
    # query the custom_info table
    SQL = "SELECT * FROM custom_info LIMIT 10;"
    dataset.conn.execute(SQL).df()
Controlling the Container Columns
The default columns for the container tables may be more than you need. You can control the columns that are included in the container tables by providing a list of columns to the DatasetBuilder class:
control_columns = {
"subjects": ["sex", "race", "ethnicity"]
}
# Initialize the dataset builder with an api-key, project-id, and a local dataset root path
dataset_builder = DatasetBuilder(
api_key=api_key,
project_id=project_id,
root_path="path/to/dataset_root",
control_columns=control_columns
)
# Render the dataset structure and metadata
dataset = dataset_builder.render_dataset()
Rendering Tabular Data Files
The parse_tabular_data flag will render the tabular data files from all containers in
the Flywheel project into "schema-matched" tables in the dataset. The schema of each
tabular data file is inferred from the names and types of the columns. Each new inferred
schema is matched to an existing schema from the tabular-derived tables in the dataset and
the data is appended to that table. If no matching schema is found, a new schema is
created and the data is inserted into a new table. The name of the table is derived from
the name of the tabular data file without the file extension (e.g. conditions.csv ->
conditions).
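As an illustration, a minimal sketch of rendering and querying such a table, assuming the project contains tabular data files named conditions.csv (the table name is therefore an assumption):
dataset = dataset_builder.render_dataset(parse_tabular_data=True)
conn = dataset.connect()
# query the table derived from conditions.csv
conn.execute("SELECT * FROM conditions LIMIT 10").df()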
Rendering Custom Information
The parse_custom_info flag will render the custom information from all containers in
the Flywheel project into "schema-matched" tables in the dataset. The current schema is
derived from the first two levels of the custom information keys (e.g.
file.info.header.dicom becomes header_dicom). All keys and values are stored as
column names and row values respectively. If the schema already exists, the data is
appended to the existing table. If there are new keys in the custom information, new
columns are added to the existing schema and the data is appended to the existing table.
Existing partitions are updated with Null columns for the new keys.
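For example, a sketch assuming the project's files carry DICOM header custom information (file.info.header.dicom), which would be rendered into a header_dicom table:
dataset = dataset_builder.render_dataset(parse_custom_info=True)
conn = dataset.connect()
# query the table derived from file.info.header.dicom
conn.execute("SELECT * FROM header_dicom LIMIT 10").df()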
Custom Rendering (TODO)
Custom transformation of the rendered Flywheel Data Model into a modified dataset will
be possible with custom functions. The proposed custom_render function would take a
custom function as a parameter and apply it to the rendered dataset. The custom function
should take the rendered dataset as input and return a modified dataset.
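A hypothetical sketch of what the proposed interface might look like (not yet implemented; the name and signature may change):
# hypothetical custom function: takes the rendered dataset and returns a modified one
def keep_only_complete_sessions(dataset):
    # ... apply any transformation to the dataset's tables ...
    return dataset

# proposed interface, not yet implemented
dataset = dataset_builder.custom_render(keep_only_complete_sessions)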
Unassociated Datasets
If you have a valid dataset that is not associated with a Flywheel project, you can still
use the FWDatasetClient to access it. You will need to provide the type, bucket,
prefix, and credentials of the cloud or local filesystem to instantiate and query the
dataset.
from fw_dataset import FWDatasetClient
# There is no need to provide an API-Key or instantiate the dataset client
fs_type = "s3" # or "gcs", "azure", "fs", "local"
bucket = "your-bucket"
prefix = "your-prefix"
credentials = {"url": "{bucket-specific-credential-string}"}
dataset = FWDatasetClient.get_dataset_from_filesystem(fs_type, bucket, prefix, credentials)
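The returned dataset can then be connected and queried in the same way as a project-associated dataset:
conn = dataset.connect()
conn.execute("SELECT * FROM subjects LIMIT 10").df()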
Merging Related Datasets
If you have multiple datasets that have related tables you want to query together, you can merge the datasets into a single dataset.
NOTE: Federated Querying is not yet enabled across datasets. This is a work in progress.
Requirements
- The source dataset must have a valid tables directory structure.
- The source dataset must have a valid schemas directory structure.
- Every table in the tables directory must have a valid corresponding schema file in the schemas directory.
- The schema file must be named {table_name}.schema.json where {table_name} is the name of the table that the schema describes.
- The schema file must be a valid JSON file with the minimum structure:

  {
    "schema": "http://json-schema.org/draft-07/schema#",
    "id": "{table_name}",
    "description": "",
    "properties": {},
    "required": [],
    "type": "object"
  }

- The destination dataset must have the same requirements as the source dataset.
- Tables and schemas selected from the source MUST NOT have the same names as existing ones in the destination dataset.
Once the above requirements have been met, you may merge the datasets by copying or
moving the selected tables and schemas from the source dataset to the destination
dataset.
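For datasets on a local filesystem, a minimal sketch of such a copy (the paths and table names below are placeholders; for cloud storage, use the equivalent storage tooling):
import shutil
from pathlib import Path

source = Path("path/to/source_dataset/latest")
destination = Path("path/to/destination_dataset/latest")

# tables selected from the source; they must not already exist in the destination
for table_name in ["conditions"]:
    shutil.copytree(source / "tables" / table_name, destination / "tables" / table_name)
    shutil.copy(
        source / "schemas" / f"{table_name}.schema.json",
        destination / "schemas" / f"{table_name}.schema.json",
    )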
Flywheel Project Requirements
For the Flywheel Dataset Client and the Dataset objects to function, the following requirements must be met:
Flywheel Project Structure
The Flywheel Project must have the following valid custom information metadata:
{
"dataset": {
"type": "s3",
"bucket": "bucket-name",
"prefix": "path/to/dataset",
"storage_id": "storage-id-of-fw-storage-object"
}
}
type
The type field must be one of the following:
- s3: The dataset is stored in an S3 bucket.
- gcs: The dataset is stored in a Google Cloud Storage bucket.
- azure: The dataset is stored in an Azure Blob Storage container.
- fs, local: The dataset is stored on a local filesystem.
bucket
The bucket field is the name of the bucket or container where the dataset is stored.
prefix
The prefix field is the path to the dataset within the bucket or container.
The directory structure beneath the prefix should be as described in the
Dataset Structure section.
storage_id
The storage_id field is the Flywheel ID of the cloud storage record that describes the
filesystem or cloud storage bucket that the dataset is stored in. This should be a valid
storage object in the Flywheel database.
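One way to set this metadata on a project is through the Flywheel SDK; the following is a sketch assuming the flywheel-sdk package is installed and your user is allowed to modify the project:
import flywheel

fw = flywheel.Client(api_key)
project = fw.lookup(f"{group}/{project_label}")
# write the dataset custom information block described above
project.update_info({
    "dataset": {
        "type": "s3",
        "bucket": "bucket-name",
        "prefix": "path/to/dataset",
        "storage_id": "storage-id-of-fw-storage-object",
    }
})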
Dataset Structure
The dataset should be stored in the bucket or container with the following structure:
{bucket}/{prefix}/
├── latest/
│   ├── provenance/
│   │   └── dataset_description.json
│   ├── tables/
│   │   └── {table_name}/ (a directory structure of partitioned parquet files)
│   │       └── {partitions}/{hash}.parquet
│   └── schemas/
│       └── {table_name}.schema.json
└── versions/
    ├── latest_version.json (a copy of provenance/dataset_description.json for the latest version)
    └── {version}/
        ├── provenance/
        │   └── dataset_description.json
        ├── tables/
        │   └── {table_name}/ (a directory structure of partitioned parquet files)
        │       └── {partitions}/{hash}.parquet
        └── schemas/
            └── {table_name}.schema.json
The latest_version.json file is a copy of provenance/dataset_description.json; both are
minimal descriptions of a dataset version. The latest directory represents the latest
version of the dataset. Previous versions of the dataset are kept in the versions
directory for archival purposes and can be deleted once they are no longer needed.
The above structure is more completely described in the
Dataset Definition Document in the
docs directory.
Schema Files
The schema files are JSON files that describe the schema of the tables in the dataset.
The schema files are stored in the schemas directory. The schema files are named
{table_name}.schema.json where {table_name} is the name of the table that the schema
describes.
Ideally, the schema files should be fully descriptive. However, if a minimal schema is desired merely to allow the dataset to be queried, the schema file can be as simple as:
{
"schema": "http://json-schema.org/draft-07/schema#",
"id": "{table_name}",
"description": "Table derived from Tabular Data File: conditions.csv",
"properties": {},
"required": [],
"type": "object"
}
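If you need to generate such a minimal schema file yourself, a small sketch (the table name and schemas path below are placeholders):
import json
from pathlib import Path

table_name = "conditions"
minimal_schema = {
    "schema": "http://json-schema.org/draft-07/schema#",
    "id": table_name,
    "description": "Table derived from Tabular Data File: conditions.csv",
    "properties": {},
    "required": [],
    "type": "object",
}

# write the schema next to the dataset's other schema files
schemas_dir = Path("path/to/dataset/latest/schemas")
(schemas_dir / f"{table_name}.schema.json").write_text(json.dumps(minimal_schema, indent=2))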
Appendix
Flywheel Data Model
The Flywheel Data Model is a hierarchical structure that organizes data in a Flywheel Project. The Flywheel Data Model is organized as follows:
- Project (has files and analyses)
  - Subject (has files and analyses)
    - Session (has files and analyses)
      - Acquisition (has files and analyses)
        - File
        - Analysis
The SQLite snapshot of the Flywheel Data Model has each of the above entities as tables.
The tables consist of an id column and a data column. The data column is a binary
string containing the JSON representation of each entity.
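For example, a sketch of reading such a snapshot directly with the Python standard library, assuming you have a local copy of the snapshot file (the path is a placeholder):
import json
import sqlite3

conn = sqlite3.connect("path/to/project_snapshot.sqlite")
for row_id, data in conn.execute("SELECT id, data FROM subjects LIMIT 5"):
    subject = json.loads(data)  # the data column holds the JSON representation of the entity
    print(row_id, subject.get("label"))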
Future Development
Future development will include:
- Dataset creation and management from the library
- Create a new dataset from a Flywheel project
- Dataset will be structured on local or cloud storage
- Dataset essentials will be stored in the Flywheel project metadata
- Dataset versions can be deleted from the storage structure
- Dataset versions can be archived
- Dataset can be removed from a Flywheel project
File details
Details for the file fw_dataset-0.1.0rc13-py3-none-any.whl.
File metadata
- Download URL: fw_dataset-0.1.0rc13-py3-none-any.whl
- Upload date:
- Size: 48.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.0.1 CPython/3.12.8 Linux/5.15.154+
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 25aef7ba2d1068c8ca18bf8c1d87a83940f1461654ed1e9b70356f078aaffa45 |
| MD5 | 763fb1ff31e5ec2027fd118ac93bc771 |
| BLAKE2b-256 | 20a87500a2033835f83fa519368c3b3bc02157576efc4933e5a7682e4322ed39 |