Project description

mojito-processor

Postprocessing tools for LISA Mojito L1 data for use with L2D noise analysis.

Goal of package

The goal of this package is to provide a simple, modular, and well-documented set of tools for processing LISA Mojito L1 data. The package applies a signal-processing pipeline (filtering, downsampling, trimming, segmentation, windowing) to data loaded via the mojito package. The design emphasizes ease of use and flexibility, allowing users to customize the processing steps as needed for their specific analysis tasks.

Dependencies

This package depends on mojito, the official LISA L1 file reader. All dependencies are installed automatically via pip or uv.

Installation

pip install mojito-processor
# or
uv pip install mojito-processor

Development Setup

# Clone the repository
git clone https://github.com/OllieBurke/MojitoProcessor.git
cd MojitoProcessor

# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install the package and all dependency groups
uv sync --all-groups

# Install pre-commit hooks
uv run pre-commit install

# Run pre-commit on all files (optional)
uv run pre-commit run --all-files

Quick Start

from MojitoProcessor import load_file, load_processed, process_pipeline, write

# ── Load Mojito L1 data ───────────────────────────────────────────────────────
data = load_file("mojito_data.h5")

# ── Pipeline parameters ───────────────────────────────────────────────────────

downsample_kwargs = {
    "target_fs": 0.2,      # Hz — target sampling rate (None = no downsampling)
    "kaiser_window": 31.0, # Kaiser window beta (higher = more aggressive anti-aliasing)
}

filter_kwargs = {
    "highpass_cutoff": 5e-6,                                # Hz — high-pass cutoff (always applied)
    "lowpass_cutoff": 0.8 * downsample_kwargs["target_fs"], # Hz — low-pass cutoff (None for high-pass only)
    "order": 2,                                             # Butterworth filter order
}

trim_kwargs = {
    "fraction": 0.02,  # Total fraction of data trimmed symmetrically from both ends
}

truncate_kwargs = {
    "days": 7.0,  # Segment length in days (splits dataset into non-overlapping chunks)
}

window_kwargs = {
    "window": "tukey",  # Window type: 'tukey', 'hann', 'hamming', 'blackman', 'blackmanharris', 'planck'
    "alpha": 0.0125,    # Taper fraction for Tukey/Planck windows
}

# ─────────────────────────────────────────────────────────────────────────────

processed_segments = process_pipeline(
    data,
    downsample_kwargs=downsample_kwargs,
    filter_kwargs=filter_kwargs,
    trim_kwargs=trim_kwargs,
    truncate_kwargs=truncate_kwargs,
    window_kwargs=window_kwargs,
)

# Access processed data
sp = processed_segments["segment0"]
print(f"Sampling rate: {sp.fs} Hz")
print(f"Duration:      {sp.T / 86400:.2f} days")
print(f"TCB start:     {sp.t0:.6g} s")

# ── Write to HDF5 ─────────────────────────────────────────────────────────────
write(
    "processed.h5",
    processed_segments,
    raw_data=data,
    filter_kwargs=filter_kwargs,
    downsample_kwargs=downsample_kwargs,
    trim_kwargs=trim_kwargs,
    truncate_kwargs=truncate_kwargs,
    window_kwargs=window_kwargs,
)

Alternatively, load, process, and write in a single call using the high-level pipeline:

from MojitoProcessor.pipelines import read_and_process

segments = read_and_process(
    "mojito_data.h5",
    filter_kwargs=filter_kwargs,
    downsample_kwargs=downsample_kwargs,
    trim_kwargs=trim_kwargs,
    truncate_kwargs=truncate_kwargs,
    window_kwargs=window_kwargs,
    output_path="processed.h5",
)
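
A file written by write can be reloaded later with load_processed, so analysis can resume without rerunning the pipeline. A minimal sketch, assuming the dict[str, SignalProcessor] return described under Features below:

from MojitoProcessor import load_processed

# Reload segments previously saved with write()
segments = load_processed("processed.h5")
sp = segments["segment0"]
print(f"Reloaded: {sp.T / 86400:.2f} days at {sp.fs} Hz")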

Features

  • Load — load_file reads LISA Mojito L1 HDF5 files via the mojito package
  • Process — process_pipeline applies filtering, downsampling, trimming, segmentation, and windowing in a single call
  • Write — write saves processed segments and raw auxiliary data (LTTs, orbits, noise estimates) to HDF5
  • Reload — load_processed reads a file written by write back into a dict[str, SignalProcessor], enabling deferred analysis without reprocessing
  • Pipeline — read_and_process combines load, process, and write into one function, with an optional CLI interface
  • TCB time tracking — t0 is propagated through every processing step, including segmentation
  • TDI channel support — XYZ and AET (via SignalProcessor.to_aet(); see the sketch below)
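
For example, converting a processed segment from the XYZ basis to AET is a single call. A minimal sketch, assuming to_aet() returns a new SignalProcessor (check the API documentation for the exact signature):

# Convert TDI XYZ channels to the quasi-orthogonal AET combinations
sp_xyz = processed_segments["segment0"]
sp_aet = sp_xyz.to_aet()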

Building the Documentation

First, install Pandoc (required by nbsphinx to render the example notebooks):

# macOS
brew install pandoc

# Linux (Debian/Ubuntu)
sudo apt-get install pandoc

Then install the docs dependencies and build:

uv sync --group docs
uv run sphinx-build -b html docs docs/_build/html

Open docs/_build/html/index.html in your browser to view the result.

License

MIT License — see LICENSE file for details.

Download files

Download the file for your platform.

Source Distribution

mojito_processor-0.5.0.tar.gz (184.9 kB)

Built Distribution

mojito_processor-0.5.0-py3-none-any.whl (21.7 kB)

File details

Details for the file mojito_processor-0.5.0.tar.gz.

File metadata

  • Download URL: mojito_processor-0.5.0.tar.gz
  • Size: 184.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.8.15

File hashes

Hashes for mojito_processor-0.5.0.tar.gz:

  • SHA256: d9fc26f69d7de519d99b0348fdfe96363b897d261af938be34ce57afcad03881
  • MD5: 78c4e230353a0a2f64811b059b89a435
  • BLAKE2b-256: 2bd8900865b685f11a4616daaf516c59d29349c884fcaea0f7bacc0fc8394dd7


File details

Details for the file mojito_processor-0.5.0-py3-none-any.whl.

File hashes

Hashes for mojito_processor-0.5.0-py3-none-any.whl:

  • SHA256: 369e9ac51839bee20c2496d155ef599702f0c5c20a660c5e5f06f35ccff406ca
  • MD5: 83685e8e26277f738cf9e8a81ebbf9bb
  • BLAKE2b-256: ba6d45305866cd01f6c3f586359ae1785e42ea88fe2a07325a7aac608eced531

