Project description

mojito-processor


Postprocessing tools for LISA Mojito L1 data for use with L2D noise analysis.

Goal of package

The goal of this package is to provide a simple, modular, and well-documented set of tools for processing LISA Mojito L1 data. The package applies a signal processing pipeline (filtering, downsampling, trimming, windowing) to data loaded via the mojito package. The design emphasizes ease of use and flexibility, allowing users to customize the processing steps as needed for their specific analysis tasks.
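
Each of these stages is a standard signal-processing operation. For orientation, the sketch below illustrates roughly what such a pipeline does to a single time series using only numpy and scipy; it is a conceptual illustration with arbitrary example values, not the package's internal implementation.

# Conceptual sketch of the pipeline stages (illustration only, not the
# package implementation); parameter values are arbitrary examples.
import numpy as np
from scipy import signal

fs = 4.0                                    # original sampling rate [Hz]
rng = np.random.default_rng(0)
x = rng.normal(size=int(86400 * fs))        # one day of stand-in samples

# 1. Filtering: high-pass to remove slow drifts, low-pass below the target Nyquist
sos = signal.butter(2, [5e-6, 0.16], btype="bandpass", fs=fs, output="sos")
x = signal.sosfiltfilt(sos, x)

# 2. Downsampling with an anti-aliasing FIR filter
target_fs = 0.2
x = signal.decimate(x, int(fs / target_fs), ftype="fir")

# 3. Trimming: drop a small fraction from both ends to discard filter transients
n_trim = int(0.01 * x.size)
x = x[n_trim:-n_trim]

# 4. Windowing: taper the segment edges before spectral analysis
x = x * signal.windows.tukey(x.size, alpha=0.0125)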

Dependencies

This package depends on mojito, the official LISA L1 file reader. All dependencies are installed automatically via pip or uv.

Installation

pip install mojito-processor
# or
uv pip install mojito-processor

Development Setup

# Clone the repository
git clone https://github.com/OllieBurke/MojitoProcessor.git
cd MojitoProcessor

# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install the package and all dependency groups
uv sync --all-groups

# Install pre-commit hooks
uv run pre-commit install

# Run pre-commit on all files (optional)
uv run pre-commit run --all-files

Quick Start

from MojitoProcessor import load_file, load_processed, process_pipeline, write, read_and_process

# ── Load Mojito L1 data ───────────────────────────────────────────────────────
data = load_file("mojito_data.h5")

# ── Pipeline parameters ───────────────────────────────────────────────────────

downsample_kwargs = {
    "target_fs": 0.2,      # Hz — target sampling rate (None = no downsampling)
    "kaiser_window": 31.0, # Kaiser window beta (higher = more aggressive anti-aliasing)
}

filter_kwargs = {
    "highpass_cutoff": 5e-6,                                # Hz — high-pass cutoff (always applied)
    "lowpass_cutoff": 0.8 * downsample_kwargs["target_fs"], # Hz — low-pass cutoff (None for high-pass only)
    "order": 2,                                             # Butterworth filter order
}

trim_kwargs = {
    "fraction": 0.02,  # Total fraction of data trimmed symmetrically from both ends
}

truncate_kwargs = {
    "days": 7.0,  # Segment length in days (splits dataset into non-overlapping chunks)
}

window_kwargs = {
    "window": "tukey",  # Window type: 'tukey', 'hann', 'hamming', 'blackman', 'blackmanharris', 'planck'
    "alpha": 0.0125,    # Taper fraction for Tukey/Planck windows
}

# ─────────────────────────────────────────────────────────────────────────────

processed_segments = process_pipeline(
    data,
    downsample_kwargs=downsample_kwargs,
    filter_kwargs=filter_kwargs,
    trim_kwargs=trim_kwargs,
    truncate_kwargs=truncate_kwargs,
    window_kwargs=window_kwargs,
)

# Access processed data
sp = processed_segments["segment0"]
print(f"Sampling rate: {sp.fs} Hz")
print(f"Duration:      {sp.T / 86400:.2f} days")
print(f"TCB start:     {sp.t0:.6g} s")

# ── Write to HDF5 ─────────────────────────────────────────────────────────────
# Use segment_ids to write only a subset, e.g. segment_ids=[0, 1, 2]
write(
    "processed.h5",
    processed_segments,
    raw_data=data,
    filter_kwargs=filter_kwargs,
    downsample_kwargs=downsample_kwargs,
    trim_kwargs=trim_kwargs,
    truncate_kwargs=truncate_kwargs,
    window_kwargs=window_kwargs,
)

# ── Reload from HDF5 (deferred analysis, select specific segments) ────────────
segments, raw_data = load_processed(
    "processed.h5",
    segment_ids=[0, 1, 2],   # omit to load all segments
)
# raw_data["orbits"]["sc_position_1"]  → full-span spacecraft positions [m]
# raw_data["noise_estimates"]["xyz"]   → frequency-domain noise covariance cubes
# raw_data["metadata"]                 → laser frequency and pipeline names

Alternatively, load, process, and write in a single call using the high-level pipeline:

from MojitoProcessor import read_and_process

segments = read_and_process(
    "mojito_data.h5",
    filter_kwargs=filter_kwargs,
    downsample_kwargs=downsample_kwargs,
    trim_kwargs=trim_kwargs,
    truncate_kwargs=truncate_kwargs,
    window_kwargs=window_kwargs,
    output_path="processed.h5",
)
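
Once segments are processed they are ready for spectral noise estimation. Taking the processed_segments dict from above, the sketch below estimates a power spectral density with scipy.signal.welch; note that the attribute used here to access a channel time series (sp.X) is hypothetical, since only fs, T, and t0 are documented above, so check the package API for the actual accessor.

import numpy as np
from scipy.signal import welch

sp = processed_segments["segment0"]

# NOTE: `sp.X` is a hypothetical attribute standing in for the TDI X time
# series of this segment; consult the MojitoProcessor API for the real accessor.
x = np.asarray(sp.X)

# Welch PSD estimate at the segment's sampling rate
freqs, psd = welch(x, fs=sp.fs, nperseg=len(x) // 8)
print(f"PSD estimated on {freqs.size} frequency bins up to {freqs[-1]:.3g} Hz")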

Features

  • Load — load_file reads LISA Mojito L1 HDF5 files via the mojito package
  • Process — process_pipeline applies filtering, downsampling, trimming, segmentation, and windowing in a single call
  • Write — write saves processed segments and raw auxiliary data (orbits, noise estimates, LTTs) to HDF5; use segment_ids to write only a subset of segments
  • Reload — load_processed reads a written file back into a dict[str, SignalProcessor] plus a raw_data dict (orbits, noise estimates, metadata); use segment_ids to load only the segments you need
  • Pipeline — read_and_process combines load, process, and write into one function, with segment_ids support and an optional CLI interface
  • TCB time tracking — t0 is propagated through every processing step, including segmentation
  • TDI channel support — XYZ and AET via SignalProcessor.to_aet() (see the sketch below)
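
Since SignalProcessor exposes to_aet(), switching a processed segment from the XYZ basis to the quasi-orthogonal AET basis is a one-line step. A minimal usage sketch is below, assuming to_aet() returns a new SignalProcessor-like object (the return type is not spelled out above).

# Convert a processed segment from the XYZ to the AET TDI basis.
# Assumption: to_aet() returns a new SignalProcessor-like object carrying
# the same sampling metadata (fs, T, t0).
sp_xyz = processed_segments["segment0"]
sp_aet = sp_xyz.to_aet()
print(f"AET segment sampling rate: {sp_aet.fs} Hz")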

Building the Documentation

First, install Pandoc (required by nbsphinx to render the example notebooks):

# macOS
brew install pandoc

# Linux (Debian/Ubuntu)
sudo apt-get install pandoc

Then install the docs dependencies and build:

uv sync --group docs
uv run sphinx-build -b html docs docs/_build/html

Open docs/_build/html/index.html in your browser to view the result.

License

MIT License — see LICENSE file for details.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

mojito_processor-0.6.1.tar.gz (185.4 kB)


Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

mojito_processor-0.6.1-py3-none-any.whl (22.9 kB)


File details

Details for the file mojito_processor-0.6.1.tar.gz.

File metadata

  • Download URL: mojito_processor-0.6.1.tar.gz
  • Upload date:
  • Size: 185.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for mojito_processor-0.6.1.tar.gz
Algorithm Hash digest
SHA256 4fee383f40c1a4a38eee844cd53fdc169fb910270f24e66f19d8422d0788e284
MD5 9a378d9b6566878a24ac1e111ee8d39d
BLAKE2b-256 29919069f20d1f76a4d8c5ae5a4ea3d4528aec0cd793ea1008394c9d3cb1720b

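To confirm that a downloaded archive matches the published digest, the hash can be recomputed locally. A minimal sketch using Python's standard hashlib (the path is wherever the sdist was downloaded to):

import hashlib

expected = "4fee383f40c1a4a38eee844cd53fdc169fb910270f24e66f19d8422d0788e284"

# Compute the SHA-256 digest of the downloaded source distribution
with open("mojito_processor-0.6.1.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == expected else "hash mismatch")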

Provenance

The following attestation bundles were made for mojito_processor-0.6.1.tar.gz:

Publisher: publish.yml on OllieBurke/MojitoProcessor

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file mojito_processor-0.6.1-py3-none-any.whl.

File metadata

File hashes

Hashes for mojito_processor-0.6.1-py3-none-any.whl
Algorithm Hash digest
SHA256 9c7831834be0862f2912c11b1af501081404bd3fc25c9d11086ee17510300076
MD5 d8b7e2873f6d114f6c2dfedba7cd6677
BLAKE2b-256 5213a391000660423ef7f629ec975f6168863223cc8020bd6d334f3712f43294


Provenance

The following attestation bundles were made for mojito_processor-0.6.1-py3-none-any.whl:

Publisher: publish.yml on OllieBurke/MojitoProcessor

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
