
SpectralDatamaker

A Python CLI tool designed to facilitate the creation of hyperspectral image datasets for machine learning.

The dataset structure is organized as follows:

dataset_root/
├── images
│   ├── DATASET-01_image-name_0
│   ├── DATASET-01_image-name_1
│   ├── DATASET-01_image-name_2
│   └── DATASET-01_image-name_3
├── masks
│   ├── RoiMASK_image-name.csv
│   ├── PxMASK_image-name.npy
│   ├── DATASET-01_image-name_0
│   ├── DATASET-01_image-name_1
│   ├── DATASET-01_image-name_2
│   └── DATASET-01_image-name_3
├── source
│   ├── image-name.hdr
│   └── image-name.raw
└── metadata.json

This tool provides functionality for processing the source images: generating region of interest (ROI) masks, pixel masks, and labels, and cropping the images based on the generated masks.

CLI Usage

After installing the package, you can use the console command:

spectral-datamaker --help

You can also invoke the package module directly:

python -m spectral_datamaker --help

The CLI provides four main commands:

Create a complete dataset:

spectral-datamaker create <config.yaml> <output_directory>

Options:

  • --dry-run: Validate configuration without executing
  • --skip-validation: Skip final dataset validation
  • --no-interactive: Skip interactive mask adjustment (not yet implemented)
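
For example, to check a configuration without running the pipeline (the file and directory names here are placeholders):

spectral-datamaker create dataset.yaml ./my_dataset --dry-run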

Validate an existing dataset:

spectral-datamaker validate <dataset_directory>

Options:

  • --config <file>: Validate against a specific configuration file
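
For example, to validate an existing dataset against the configuration it was created from (paths are placeholders):

spectral-datamaker validate ./my_dataset --config dataset.yaml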

Inspect dataset metadata:

spectral-datamaker inspect <dataset_directory>

Options:

  • --format [json|yaml|table]: Output format (default: table)
  • --show-images: List all processed images
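
For example, to print the metadata as JSON together with the list of processed images (path is a placeholder):

spectral-datamaker inspect ./my_dataset --format json --show-images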

Execute individual pipeline steps:

spectral-datamaker step <step_name> <config.yaml> <dataset_directory>

Available steps: structure, roi-mask, pixel-mask, crop, metadata
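
For example, to run only the ROI mask generation step for an existing dataset (paths are placeholders):

spectral-datamaker step roi-mask dataset.yaml ./my_dataset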

Library usage (Python API)

Besides the CLI, SpectralDatamaker can be used as a Python library. The most useful classes for inspection and validation are:

  • DatasetStructure: infers canonical dataset locations (images/, masks/, source/, metadata.json) from a root directory.
  • Filenames: derives expected filenames and absolute paths for masks, labels, cropped outputs, and metadata.
  • DatasetValidator: validates an existing dataset either from a config file or from metadata.json.

from spectral_datamaker.config import DatasetStructure, Filenames
from spectral_datamaker.processors import DatasetValidator

dataset_root = "/path/to/dataset_root"

# 1) Infer dataset structure from root directory
structure = DatasetStructure(dataset_root)
print(structure.images_dir)
print(structure.masks_dir)
print(structure.source_dir)
print(structure.metadata_file)

# 2) Derive expected file paths and names
names = Filenames(structure)
print(names.get_roi_mask("image_1.hdr", abs=True))
print(names.get_px_mask("image_1.hdr", abs=True))
print(names.get_dataset_metadata(abs=True))

# 3) Validate dataset contents
validator = DatasetValidator(structure)
validator.validate_dataset_from_config("/path/to/dataset.yaml")
# Or, if metadata already exists:
# validator.validate_dataset_from_metadata()

Dataset config file

The dataset configuration file (e.g., dataset.yaml) contains the necessary information for creating a dataset from ENVI images. The YAML file should have the following structure:

dataset:
  name: dataset-example
  description: An example dataset created with SpectralDatamaker.

  source-images:
    - path: /path/to/source/image_1.hdr
      masking:
        shape: circle
        size: 35
        num: 6

    - path: /path/to/source/image_2.hdr
      masking:
        shape: square
        size: 20
        num: 4

    - path: /path/to/source/image_n.hdr
      masking:
        shape: triangle
        size: 50
        num: 2

  segmentation:
    enabled: true
    classes:
      - type_A
      - type_B

  classification:
    enabled: false
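
Since the configuration is standard YAML, it can also be inspected programmatically before running the pipeline. A minimal sketch using PyYAML (an assumption; SpectralDatamaker may use a different loader internally):

import yaml  # PyYAML is assumed here; the package may use its own loader

with open("dataset.yaml") as f:
    config = yaml.safe_load(f)

# Print the masking parameters declared for each source image
for image in config["dataset"]["source-images"]:
    masking = image["masking"]
    print(image["path"], masking["shape"], masking["size"], masking["num"])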

Segmentation mode

When segmentation mode is enabled, SpectralDatamaker will generate a dataset with segmentation masks for each source image. The steps are as follows:

  1. Creates ROI masks based on the specified shape, size, and number of regions in the configuration file. A napari viewer is launched to allow the user to adjust the generated masks if necessary. Masks are saved when the user closes the viewer.
  2. Generates pixel masks from the ROI masks, asking the user to label each region of interest (ROI) with the corresponding class from the configuration file.
  3. Crops the source images based on the generated masks and saves the cropped images and masks in the appropriate directories (see the loading sketch after this list).
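
The pixel masks are saved as NumPy arrays (.npy, as shown in the metadata example below), so downstream training code can load them directly. A minimal sketch, assuming the filename conventions from the dataset layout above:

import numpy as np

# Pixel mask produced by the pixel-mask step; the filename follows the
# PxMASK_<image-name>.npy convention shown in the dataset layout.
mask = np.load("dataset_root/masks/PxMASK_image-name.npy")

# Label values correspond to classes via the label_map in metadata.json
print(mask.shape, np.unique(mask))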

Classification mode

[!NOTE] The classification mode is currently in development and is not yet available for use. The following description is based on the intended functionality.

When classification mode is enabled, SpectralDatamaker will generate a dataset with class labels for each source image. The steps are as follows:

  1. Creates ROI masks based on the specified shape, size, and number of regions in the configuration file. A napari viewer is launched to allow the user to adjust the generated masks if necessary. Masks are saved when the user closes the viewer.
  2. Asks the user to label each ROI with the corresponding class from the configuration file. Saves the class labels in a CSV file.
  3. Crops the source images based on the generated masks and saves the cropped images in the appropriate directories.

Dataset metadata

SpectralDatamaker generates a metadata.json file containing information about the dataset, including the dataset name, description, source images, and the processing steps applied to each image. This metadata file is recognized by SpectralDatamaker and can be used to validate the dataset structure and contents. An example of the metadata.json structure is as follows:

{
    "name": "dataset-03",
    "description": "Dataset created with one hyperespectral image.",
    "last_update": "2026-04-08 13:52:01",
    "source_images": ["/path/to/image_1.hdr"],
    "types": ["segmentation"],
    "segmentation_masking": {
        "image_1": {
            "label_map": {"0": "background", "1": "type_A", "2": "type_B"},
            "num_classes": 3,
            "classes": ["type_A", "type_B"],
            "assignments": {
                "type_A": [0,2,3],
                "type_B": [1,5,4]
            },
            "source_image": "image_1.hdr",
            "rois_file": "RoiMASK_image_1.csv",
            "mask_file": "PxMASK_image_1.npy",
            "created": "2026-04-08T13:51:33.931524",
            "format": "npy"
        }
    }
}
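
Because metadata.json is plain JSON, it can also be read directly, for example to recover the label map for an image. A minimal sketch based on the structure shown above:

import json

with open("dataset_root/metadata.json") as f:
    meta = json.load(f)

print(meta["name"], meta["types"])

# Label-map keys are strings in the JSON; convert them to ints so they
# can be used to look up values in the pixel mask arrays.
info = meta["segmentation_masking"]["image_1"]
label_map = {int(k): v for k, v in info["label_map"].items()}
print(label_map)  # {0: 'background', 1: 'type_A', 2: 'type_B'}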

Validations

SpectralDatamaker includes validation checks that let users verify the structure and contents of a generated dataset, as well as validate existing datasets. Validation checks for the presence of required directories and expected files.
