# mojito-processor

Postprocessing tools for LISA Mojito L1 data for use with L2D noise analysis.
## Goal of package
The goal of this package is to provide a simple, modular, and well-documented set of tools for processing LISA Mojito L1 data. The package applies a signal processing pipeline (filtering, downsampling, trimming, windowing) to data loaded via the mojito package. The design emphasizes ease of use and flexibility, allowing users to customize the processing steps as needed for their specific analysis tasks.
## Dependencies
This package depends on mojito, the official LISA L1 file reader. All dependencies are installed automatically via pip or uv.
## Installation

```shell
pip install mojito-processor
# or
uv pip install mojito-processor
```
## Development Setup

```shell
# Clone the repository
git clone https://github.com/OllieBurke/MojitoProcessor.git
cd MojitoProcessor

# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install the package and all dependency groups
uv sync --all-groups

# Install pre-commit hooks
uv run pre-commit install

# Run pre-commit on all files (optional)
uv run pre-commit run --all-files
```
## Quick Start
```python
from MojitoProcessor import load_file, load_processed, process_pipeline, write

# ── Load Mojito L1 data ───────────────────────────────────────────────────────
data = load_file("mojito_data.h5")

# ── Pipeline parameters ───────────────────────────────────────────────────────
downsample_kwargs = {
    "target_fs": 0.2,       # Hz — target sampling rate (None = no downsampling)
    "kaiser_window": 31.0,  # Kaiser window beta (higher = more aggressive anti-aliasing)
}

filter_kwargs = {
    "highpass_cutoff": 5e-6,  # Hz — high-pass cutoff (always applied)
    "lowpass_cutoff": 0.8 * downsample_kwargs["target_fs"],  # Hz — low-pass cutoff (None for high-pass only)
    "order": 2,               # Butterworth filter order
}

trim_kwargs = {
    "fraction": 0.02,  # Total fraction of data trimmed symmetrically from both ends
}

truncate_kwargs = {
    "days": 7.0,  # Segment length in days (splits dataset into non-overlapping chunks)
}

window_kwargs = {
    "window": "tukey",  # Window type: 'tukey', 'hann', 'hamming', 'blackman', 'blackmanharris', 'planck'
    "alpha": 0.0125,    # Taper fraction for Tukey/Planck windows
}

# ─────────────────────────────────────────────────────────────────────────────
processed_segments = process_pipeline(
    data,
    downsample_kwargs=downsample_kwargs,
    filter_kwargs=filter_kwargs,
    trim_kwargs=trim_kwargs,
    truncate_kwargs=truncate_kwargs,
    window_kwargs=window_kwargs,
)

# Access processed data
sp = processed_segments["segment0"]
print(f"Sampling rate: {sp.fs} Hz")
print(f"Duration: {sp.T / 86400:.2f} days")
print(f"TCB start: {sp.t0:.6g} s")

# ── Write to HDF5 ─────────────────────────────────────────────────────────────
# Use segment_ids to write only a subset, e.g. segment_ids=[0, 1, 2]
write(
    "processed.h5",
    processed_segments,
    raw_data=data,
    filter_kwargs=filter_kwargs,
    downsample_kwargs=downsample_kwargs,
    trim_kwargs=trim_kwargs,
    truncate_kwargs=truncate_kwargs,
    window_kwargs=window_kwargs,
)

# ── Reload from HDF5 (deferred analysis, select specific segments) ────────────
segments, raw_data = load_processed(
    "processed.h5",
    segment_ids=[0, 1, 2],  # omit to load all segments
)
# raw_data["orbits"]["sc_position_1"]  → full-span spacecraft positions [m]
# raw_data["noise_estimates"]["xyz"]   → frequency-domain noise covariance cubes
# raw_data["metadata"]                 → laser frequency and pipeline names
```
Alternatively, load, process, and write in a single call using the high-level pipeline:
```python
from MojitoProcessor import read_and_process

segments = read_and_process(
    "mojito_data.h5",
    filter_kwargs=filter_kwargs,
    downsample_kwargs=downsample_kwargs,
    trim_kwargs=trim_kwargs,
    truncate_kwargs=truncate_kwargs,
    window_kwargs=window_kwargs,
    output_path="processed.h5",
)
```
## Features

- Load — `load_file` reads LISA Mojito L1 HDF5 files via the `mojito` package
- Process — `process_pipeline` applies filtering, downsampling, trimming, segmentation, and windowing in a single call
- Write — `write` saves processed segments and raw auxiliary data (orbits, noise estimates, LTTs) to HDF5; use `segment_ids` to write only a subset of segments
- Reload — `load_processed` reads a written file back into a `dict[str, SignalProcessor]` plus a `raw_data` dict (orbits, noise estimates, metadata); use `segment_ids` to load only the segments you need
- Pipeline — `read_and_process` combines load, process, and write into one function, with `segment_ids` support and an optional CLI interface
- TCB time tracking — `t0` is propagated through every processing step, including segmentation
- TDI channel support — XYZ and AET (via `SignalProcessor.to_aet()`)
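The exact convention used by `SignalProcessor.to_aet()` is not documented here, but one common XYZ -> AET combination in the LISA literature (signs and normalisation vary between references) is A = (Z - X)/sqrt(2), E = (X - 2Y + Z)/sqrt(6), T = (X + Y + Z)/sqrt(3). A minimal sketch under that assumed convention:

```python
import numpy as np

def xyz_to_aet(X, Y, Z):
    """One common XYZ -> AET convention (signs/normalisation vary in the literature)."""
    A = (Z - X) / np.sqrt(2.0)
    E = (X - 2.0 * Y + Z) / np.sqrt(6.0)
    T = (X + Y + Z) / np.sqrt(3.0)
    return A, E, T

# A component common to all three channels appears only in T:
X = Y = Z = np.ones(4)
A, E, T = xyz_to_aet(X, Y, Z)
```

Under this convention a fully correlated common mode cancels in A and E, which is why T is often treated as an approximate null channel in noise analyses.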
## Building the Documentation

First, install Pandoc (required by nbsphinx to render the example notebooks):

```shell
# macOS
brew install pandoc

# Linux (Debian/Ubuntu)
sudo apt-get install pandoc
```

Then install the docs dependencies and build:

```shell
uv sync --group docs
uv run sphinx-build -b html docs docs/_build/html
```

Open `docs/_build/html/index.html` in your browser to view the result.
## License
MIT License — see LICENSE file for details.