pfb-imaging

Radio interferometric imaging suite based on the preconditioned forward-backward algorithm. The project follows the hip-cargo package format: lightweight CLI installation with auto-generated stimela cab definitions and containerised execution.

Installation

Lightweight (CLI + cabs only):

pip install pfb-imaging

This installs the CLI and stimela cab definitions without the full scientific stack. The cabs can be included in stimela recipes using:

_include:
  - (pfb_imaging.cabs)init.yml

Full scientific stack:

pip install "pfb-imaging[full]"

Or for development:

git clone https://github.com/ratt-ru/pfb-imaging.git
cd pfb-imaging
uv sync --extra full --extra dev
uv run pre-commit install

For maximum performance, install ducc0 from source (no-binary mode):

pip install ducc0 --no-binary ducc0

Quick start

The easiest way to use pfb-imaging is via the stimela recipes in the recipes folder. Once the package is installed, a recipe can be queried for its input and output parameters with the stimela doc command. For example, to see the inputs and outputs of the sara recipe, run:

stimela doc 'pfb_imaging.recipes::sara.yaml'

The recipe can then be run with the stimela run command:

stimela run 'pfb_imaging.recipes::sara.yaml' sara \
  ms=path/to/data.ms \
  base-dir=path/to/base/output/directory \
  image-name=saraout

The recipe should contain sensible defaults for MeerKAT data at L-band.

CLI documentation

The CLI is built with Typer and provides rich, auto-generated documentation. To list all available commands:

pfb --help

To get detailed documentation for a specific command including all parameters, types, and defaults:

pfb init --help

This is often more useful than stimela doc as it shows the full parameter documentation with types and defaults directly in the terminal.

CLI commands

The processing pipeline follows a modular pattern where each step is a separate command:

  1. pfb init -- Parse measurement sets into xarray datasets
  2. pfb grid -- Create dirty images, PSFs, and weights
  3. pfb kclean -- Classical deconvolution (Hogbom/Clark)
  4. pfb sara -- Advanced deconvolution with sparsity constraints
  5. pfb restore -- Restore clean components to final image
  6. pfb degrid -- Subtract model from visibilities

Additional commands:

  • pfb deconv -- General deconvolution (replaces individual algorithm apps)
  • pfb hci -- High cadence imaging
  • pfb fluxtractor -- Flux extraction
  • pfb model2comps -- Convert model to components
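The modular pipeline above can be sketched as a sequence of separate processes. This is an illustrative sketch, not the project's own orchestration: only the flags shown elsewhere in this README (--ms, --output-filename) are used, and a real run will need additional parameters.

```python
import subprocess

def pipeline_commands(ms: str, name: str) -> list[list[str]]:
    """Build argument lists for a basic init -> grid -> kclean -> restore run.

    Flags beyond --ms and --output-filename are deliberately omitted;
    consult `pfb <command> --help` for the full parameter set.
    """
    commands = []
    for step in ("init", "grid", "kclean", "restore"):
        args = ["pfb", step, "--output-filename", name]
        if step == "init":
            args += ["--ms", ms]  # only init reads the measurement set
        commands.append(args)
    return commands

def run_pipeline(ms: str, name: str) -> None:
    # Each step is a separate process, mirroring the modular CLI design.
    for args in pipeline_commands(ms, name):
        subprocess.run(args, check=True)
```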

Execution backends

Every command supports a --backend option that controls how the command is executed. This is provided by hip-cargo and enables container fallback execution: when the full scientific stack is not installed locally, commands automatically run inside a container.

Available backends:

  • auto (default) -- Try native execution first; if the core module import fails (lightweight install), fall back to the best available container runtime.
  • native -- Run natively using the locally installed Python environment. Fails with ImportError if dependencies are missing.
  • docker -- Run inside a Docker container.
  • podman -- Run inside a Podman container (daemonless, rootless).
  • apptainer -- Run inside an Apptainer container (HPC-friendly, formerly Singularity).
  • singularity -- Run inside a Singularity container.

An additional --always-pull-images flag forces re-pulling the container image before execution, useful for ensuring you have the latest version.
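The auto resolution described above can be sketched as follows. This is a minimal illustration of the fallback order, not hip-cargo's actual implementation; the probed core module name is an assumption based on the package layout.

```python
import importlib
import shutil

def pick_backend(requested: str = "auto") -> str:
    """Sketch of 'auto' backend resolution: try native first, then fall
    back to the first available container runtime, in preference order."""
    if requested != "auto":
        return requested
    try:
        # Hypothetical probe: a lightweight install lacks the core module.
        importlib.import_module("pfb_imaging.core")
        return "native"
    except ImportError:
        pass
    for runtime in ("docker", "podman", "apptainer", "singularity"):
        if shutil.which(runtime):  # is the runtime binary on PATH?
            return runtime
    raise RuntimeError("no execution backend available")
```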

Example usage:

# Run natively (requires full install)
pfb init --ms data.ms --output-filename out --backend native

# Run in a Docker container (lightweight install only)
pfb init --ms data.ms --output-filename out --backend docker

# Auto-detect: native if available, otherwise container
pfb init --ms data.ms --output-filename out

Volume mounts are resolved automatically from the command's type hints: input paths are mounted read-only, output paths read-write. Docker and Podman run as the current user to avoid root-owned output files.
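One way to derive mount modes from type hints is sketched below. The role markers and the annotation scheme here are hypothetical stand-ins; the real mechanism lives in hip-cargo.

```python
from pathlib import Path
from typing import Annotated, get_args, get_origin, get_type_hints

IN, OUT = "input", "output"  # hypothetical role markers

def mount_modes(func, **paths: Path) -> dict:
    """Map each path argument to 'ro' (input) or 'rw' (output)
    based on the command's annotations."""
    hints = get_type_hints(func, include_extras=True)
    modes = {}
    for name, value in paths.items():
        hint = hints[name]
        role = get_args(hint)[1] if get_origin(hint) is Annotated else IN
        modes[value] = "ro" if role == IN else "rw"
    return modes

def init(ms: Annotated[Path, IN], output: Annotated[Path, OUT]) -> None:
    """Stand-in command used only to demonstrate the annotation scheme."""
```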

Default naming conventions

Output files follow consistent naming patterns using --output-filename, --product, and --suffix:

  • XDS datasets: {output_filename}_{product}.xds
  • DDS datasets: {output_filename}_{product}_{suffix}.dds
  • Models: {output_filename}_{product}_{suffix}_model.mds
  • FITS files: same convention with appropriate extensions

The --suffix parameter (default: main) allows imaging multiple fields from a single set of corrected Stokes visibilities. For example, the sun can be imaged by setting --target sun --suffix sun. The --target parameter accepts any object name recognised by astropy, or explicit coordinates in HH:MM:SS,DD:MM:SS format.
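The naming patterns above can be captured in a small helper. This is an illustrative reimplementation of the stated conventions, not code from the package; the product value "I" used in the tests is just an example.

```python
def output_name(output_filename: str, product: str,
                suffix: str = "main", kind: str = "dds") -> str:
    """Build an output name following the documented conventions:
    XDS datasets omit the suffix; DDS datasets and models include it."""
    if kind == "xds":
        return f"{output_filename}_{product}.xds"
    if kind == "dds":
        return f"{output_filename}_{product}_{suffix}.dds"
    if kind == "mds":
        return f"{output_filename}_{product}_{suffix}_model.mds"
    raise ValueError(f"unknown kind: {kind}")
```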

Parallelism settings

Two settings control parallelism:

  • --nworkers controls how many chunks (usually imaging bands) are processed in parallel.
  • --nthreads specifies threads available to each worker (gridding, FFTs, wavelet transforms).

By default a single worker is used for the smallest memory footprint and easy debugging. Set --nworkers larger than one to use multiple Dask workers for parallel chunk processing. The product of --nworkers and --nthreads should not exceed available resources.
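An illustrative heuristic for splitting cores between workers and threads, under the constraint above, might look like this. It is not the package's own scheduling logic, only a sketch of the trade-off.

```python
import os

def plan_parallelism(nbands: int, ncpu: int = 0) -> tuple:
    """One worker per imaging band (capped at the CPU count), with the
    remaining cores divided as threads per worker, so that
    nworkers * nthreads never exceeds the available cores."""
    ncpu = ncpu or os.cpu_count() or 1
    nworkers = max(1, min(nbands, ncpu))
    nthreads = max(1, ncpu // nworkers)
    return nworkers, nthreads
```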

Package structure

The project follows the hip-cargo src layout:

pfb-imaging/
├── src/pfb_imaging/
│   ├── cli/          # Lightweight CLI wrappers (Typer)
│   ├── core/         # Core implementations (lazy-loaded)
│   ├── cabs/         # Generated Stimela cab definitions (YAML)
│   ├── deconv/       # Deconvolution algorithms
│   ├── operators/    # Mathematical operators (gridding, PSF, Psi)
│   ├── opt/          # Optimization algorithms (PCG, FISTA, primal-dual)
│   ├── prox/         # Proximal operators
│   ├── utils/        # Utility functions
│   └── wavelets/     # Wavelet transform implementations
├── scripts/          # Profiling and automation scripts
├── tests/
├── Dockerfile
└── pyproject.toml

Key separation: CLI modules (cli/) are lightweight with lazy imports so that pfb --help and cab generation don't pull in the full scientific stack. Core implementations live in core/ and are imported only when a command is executed.
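The lazy-import pattern can be demonstrated with a stdlib stand-in, where `statistics` plays the role of the scientific stack. This is a sketch of the technique, not the package's actual CLI code.

```python
def heavy_mean(values):
    """The import cost is paid only when the command body runs,
    not when the CLI module is loaded (i.e. not during `pfb --help`)."""
    import statistics  # deferred import, mirroring how core/ is loaded
    return statistics.mean(values)
```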

Container images

Container images are published to GitHub Container Registry at ghcr.io/ratt-ru/pfb-imaging. The full image URL (including tag) is the single source of truth and lives in src/pfb_imaging/_container_image.py as the CONTAINER_IMAGE variable, loaded via importlib (no CWD dependency, no uv sync needed).

CONTAINER_IMAGE = "ghcr.io/ratt-ru/pfb-imaging:<tag>"

The <tag> is managed by three mechanisms:

  • Feature branches: the developer manually updates the tag in _container_image.py to match the branch name.
  • Merge to main: the update-cabs.yml GitHub Action rewrites the tag to latest, regenerates cab definitions, and commits the changes.
  • Releases: tbump rewrites the tag to the semantic version (e.g. 0.0.9) via before_commit hooks in tbump.toml.

Cab definitions are auto-generated with the correct image tag via pre-commit hooks and the update-cabs.yml GitHub Action -- the image URL is read from _container_image.py at generation time, so the --image flag is not needed.
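The tag rewrite performed on _container_image.py could be sketched as a simple text substitution. In reality this is done by tbump hooks and the update-cabs.yml Action; the regex below is only an illustration of the operation.

```python
import re

def retag(module_source: str, new_tag: str) -> str:
    """Swap whatever follows the final ':' in the CONTAINER_IMAGE URL
    for the new tag, leaving the rest of the module source untouched."""
    return re.sub(
        r'(CONTAINER_IMAGE\s*=\s*"ghcr\.io/ratt-ru/pfb-imaging):[^"]*"',
        rf'\1:{new_tag}"',
        module_source,
    )
```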

Development

# Install with full and dev dependencies
uv sync --extra full --extra dev

# Install pre-commit hooks
uv run pre-commit install

# Run tests
uv run pytest -v tests/

# Format and lint
uv run ruff format .
uv run ruff check . --fix

Acknowledgement

If you find any of this useful, please cite the pfb-imaging paper.
