
FlopPITy

normalizing Flow exoplanet Parameter Inference Toolkyt

FlopPITy is a small Python package for atmospheric retrievals with simulation-based inference. You provide observed spectra, define parameters, and point FlopPITy at a simulator that returns model spectra; FlopPITy then trains a posterior with the sbi package.

This README is the shortest path to running a retrieval. For all options, ARCiS details, binary/multi-component retrievals, output files, resuming, plotting, PCA, and post-processing, see docs/detailed_guide.md.

Install

FlopPITy supports Python >=3.10, <3.13.

conda create -n floppity_env python=3.12.9
conda activate floppity_env
pip install floppity

Observation Files

Each observation file is a plain text table with at least three columns:

# wavelength    observed_value    uncertainty
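A file in this format can be read back with plain NumPy. This is a generic sketch of the format, not FlopPITy's own loader (get_obs may parse files differently); the values are illustrative.

```python
import numpy as np

# Write a tiny observation file in the three-column format above
# (wavelength, observed value, uncertainty).
rows = ("# wavelength observed_value uncertainty\n"
        "1.0 0.010 0.001\n"
        "1.1 0.012 0.001\n"
        "1.2 0.011 0.001\n")
with open("observation.txt", "w") as f:
    f.write(rows)

obs = np.loadtxt("observation.txt")   # lines starting with '#' are skipped
wavelength, value, sigma = obs.T      # unpack the three columns
print(obs.shape)                      # (3, 3): 3 points, 3 columns
```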

Minimal Retrieval

import numpy as np
from floppity import Retrieval

def simulator(obs, parameters, thread=0, **kwargs):
    """Return model spectra keyed like obs.

    parameters has shape (n_samples, n_parameters).
    Each returned spectrum has shape (n_samples, n_wavelengths).
    """
    spectra = {}

    for key, obs_array in obs.items():
        wavelength = obs_array[:, 0]
        # Your forward model goes here; it must return an array of
        # shape (n_samples, n_wavelengths) for each observation key.
        spectra[key] = model(wavelength, parameters)

    return spectra

R = Retrieval(simulator)

R.get_obs(["path/to/observation.txt"])

R.add_parameter("parameter_name", prior_min, prior_max)

R.run()

That is the core FlopPITy workflow:

  1. Create a Retrieval.
  2. Load observations with get_obs.
  3. Add parameters with add_parameter.
  4. Run with run.
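Before wiring in a real forward model, the simulator contract can be exercised with a toy stand-in. Everything below is illustrative (not FlopPITy internals); it only checks that the returned dict has the same keys as obs and arrays of shape (n_samples, n_wavelengths).

```python
import numpy as np

def toy_simulator(obs, parameters, thread=0, **kwargs):
    # Flat spectrum scaled by the first parameter; stands in for a real model.
    spectra = {}
    for key, obs_array in obs.items():
        wavelength = obs_array[:, 0]
        spectra[key] = parameters[:, :1] * np.ones((1, wavelength.size))
    return spectra

# Fake observation: 5 wavelength points, 3 columns (wavelength, value, sigma)
obs = {"spec": np.column_stack([np.linspace(1, 2, 5),
                                np.zeros(5), np.ones(5)])}
theta = np.random.default_rng(0).uniform(0, 1, size=(4, 2))  # 4 samples, 2 params

out = toy_simulator(obs, theta)
assert out.keys() == obs.keys()
assert out["spec"].shape == (4, 5)   # (n_samples, n_wavelengths)
```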

Multiple Observations

R.get_obs({
    "prism": "path/to/prism.txt",
    "miri": "path/to/miri.txt",
})

Your simulator should return spectra with the same keys:

return {
    "prism": prism_model,
    "miri": miri_model,
}
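Each instrument has its own wavelength grid, so the per-key array shapes may differ. A minimal sketch with made-up grids and a placeholder model:

```python
import numpy as np

def simulator(obs, parameters, thread=0, **kwargs):
    # One model spectrum per observation key, on that key's wavelength grid.
    spectra = {}
    for key, obs_array in obs.items():
        wl = obs_array[:, 0]
        # Placeholder model: a Gaussian bump scaled by parameters[:, 0].
        spectra[key] = parameters[:, :1] * np.exp(-(wl - wl.mean())**2)[None, :]
    return spectra

obs = {
    "prism": np.column_stack([np.linspace(0.6, 5.3, 100),
                              np.zeros(100), np.ones(100)]),
    "miri":  np.column_stack([np.linspace(5.0, 12.0, 60),
                              np.zeros(60), np.ones(60)]),
}
theta = np.full((8, 1), 0.5)                   # 8 samples of one parameter
out = simulator(obs, theta)
print(out["prism"].shape, out["miri"].shape)   # (8, 100) (8, 60)
```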

Retrieval with ARCiS

FlopPITy ships with an ARCiS wrapper that can import the observations and parameters from an ARCiS input file. The workflow is then:

from floppity import Retrieval
from floppity.simulators import read_ARCiS_input, ARCiS

R = Retrieval(ARCiS)

ARCiS_kwargs = dict(
    input_file = 'path/to/ARCiS/input.txt',
    output_dir = 'path/to/ARCiS/output',
)

parameters, observations = read_ARCiS_input(ARCiS_kwargs['input_file'])

R.get_obs(observations)
R.parameters = parameters

R.run(simulator_kwargs=ARCiS_kwargs)

Inspecting Results

After run(), the trained posterior proposals are stored on the retrieval:

posterior = R.proposals[-1]
samples = posterior.sample((1000,))
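The samples returned by sbi are typically a torch tensor; after converting with np.asarray they can be summarized with medians and 16/84 percentiles. The array below is a stand-in for real posterior samples:

```python
import numpy as np

# Stand-in for np.asarray(posterior.sample((1000,))): 1000 draws, 3 parameters
samples = np.random.default_rng(1).normal(loc=[1.0, -2.0, 0.5],
                                          scale=0.1, size=(1000, 3))

median = np.median(samples, axis=0)
lo, hi = np.percentile(samples, [16, 84], axis=0)
for i, (m, l, h) in enumerate(zip(median, lo, hi)):
    print(f"param {i}: {m:.2f} (+{h - m:.2f}/-{m - l:.2f})")
```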

You can also save and load the retrieval checkpoint:

R.save("retrieval.pkl")
R = Retrieval.load("retrieval.pkl")

If save_posterior_samples=True, posterior_samples_round_X.txt stores 1000 posterior samples from each round in natural parameter units. If save_data=True, rounds/round_XXX/training_data.npz stores the sampled parameters, simulated spectra, and per-sample metadata used for that training round. It also writes rounds/round_XXX/sbi_data.npz with the exact normalized theta, x, and default_x arrays passed to SBI.
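The .npz outputs can be inspected with np.load. The exact array names stored by FlopPITy are not listed here, so check the .files attribute first; the file below is a hypothetical stand-in with made-up names:

```python
import numpy as np

# Stand-in for rounds/round_XXX/training_data.npz (array names are made up).
np.savez("training_data.npz",
         parameters=np.zeros((10, 2)),
         spectra=np.zeros((10, 50)))

data = np.load("training_data.npz")
print(data.files)                 # list the arrays actually stored
for name in data.files:
    print(name, data[name].shape)
```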

For stochastic checks, R.run_ensemble(...) repeats the same retrieval, reuses member 1's prior simulations, and writes aggregated samples/data under an aggregated/ folder. Use resume=True, add_members=True to append more members, or resume=True, extend_rounds=True to continue every existing member for more rounds.

Basic troubleshooting

  • Sampling from the posterior takes a very long time, or you see "Only xxx% of samples are accepted, consider changing to MCMC": this usually means the simulated spectra are too dissimilar to the observation. Usual suspects: an incorrectly set up model, incorrect priors, or mismatched wavelength axes.
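A quick check for the wavelength-axis suspect: compare the observed grid with the grid your model produces and, if they differ, resample the model onto the observed grid with np.interp. This is a generic sketch, not FlopPITy machinery; the grids and spectrum are illustrative.

```python
import numpy as np

obs_wl = np.linspace(1.0, 2.0, 50)       # grid of the observation file
model_wl = np.linspace(0.9, 2.1, 200)    # grid your forward model uses
model_flux = np.sin(model_wl)            # placeholder model spectrum

if not (obs_wl.size == model_wl.size and np.allclose(obs_wl, model_wl)):
    # Resample the model onto the observed grid before returning it.
    model_flux = np.interp(obs_wl, model_wl, model_flux)

print(model_flux.shape)   # (50,): now matches the observation
```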

Next Steps

For everything beyond this quickstart (all options, ARCiS details, binary/multi-component retrievals, output files, resuming, plotting, PCA, and post-processing), see docs/detailed_guide.md.
