
Project description

FlopPITy

normalizing Flow exoplanet Parameter Inference Toolkyt

FlopPITy allows the user to easily perform atmospheric retrievals using SNPE-C (citation) and neural spline flows (citation).

Installation guide

Note: FlopPITy does not currently work with Python 3.13.

$ conda create -n floppity_env python==3.12.9
$ conda activate floppity_env
$ pip install floppity

Basic usage:

  • First, import FlopPITy:
from floppity import Retrieval
  • Now you can initialize the retrieval class with a simulator. A simulator is a function that takes in parameters and returns spectra; see the "Writing a simulator" section below for exactly how it needs to be written. Functionality for ARCiS and PICASO comes built-in (you need to install them separately); see the examples further down.
R = Retrieval(your_simulator_function)
  • Read in observations and define parameters to retrieve:
R.get_obs({'obs_0': 'path/to/obs_0', 'obs_1': 'path/to/obs_1', ..., 'obs_n': np.array(shape=[n_wvl, >3])})
    
R.add_parameter('par_0', min_0, max_0)
R.add_parameter('par_1', min_1, max_1)
...
R.add_parameter('par_m', min_m, max_m)
  • You can now run the retrieval, indicating the number of rounds and samples per round:
R.run(n_rounds=10, n_samples=1000, simulator_kwargs=simulator_kwargs)
  • Great! You can now inspect your posterior:
fig = R.plot_corner()
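For reference, an observation like the ones passed to get_obs above can be built as a plain NumPy array with one row per wavelength point. The wavelength/flux/uncertainty column order shown here is an assumption based on the shape hint [n_wvl, >3]; check your instrument pipeline's output for the exact convention:

```python
import numpy as np

# A minimal synthetic observation in the assumed format:
# one row per wavelength point, with wavelength, flux and
# uncertainty columns.
wvl = np.linspace(1.0, 5.0, 50)            # wavelength grid (microns)
flux = 1.0 + 0.05 * np.sin(wvl)            # arbitrary synthetic flux
err = np.full_like(wvl, 0.01)              # constant uncertainty
obs_0 = np.column_stack([wvl, flux, err])  # shape (n_wvl, 3)
```

An array built this way can be passed directly as a value in the dictionary given to R.get_obs, alongside file paths.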

ARCiS example:

  • Firstly, initialize your retrieval object:
from floppity import Retrieval
from floppity.simulators import read_ARCiS_input, ARCiS

R = Retrieval(ARCiS)
  • For ARCiS, the observations and parameters can be read from the ARCiS input file:
pars, obs_list = read_ARCiS_input('path/to/ARCiS/input')
R.get_obs(obs_list)
R.parameters = pars
  • The input file and output directory need to be passed in a dictionary:
ARCiS_kwargs = dict(
    ARCiS_dir='/path/to/ARCiS/executable',  # only needed if ARCiS is not on the default path
    input_file='path/to/ARCiS/input',
    output_dir='path/to/output',
)
  • You can now run the retrieval as usual:
R.run(n_rounds=10, n_samples=1000, simulator_kwargs=ARCiS_kwargs)

PICASO example:

  • Running a retrieval with PICASO is very similar (this only works with the gridtree branch):
from floppity import Retrieval
from floppity.simulators import read_PICASO_config, PICASO

R = Retrieval(PICASO)

pars, obs_list = read_PICASO_config('path/to/config.toml')
R.get_obs(obs_list)
R.parameters = pars
  • The configuration file needs to be passed as a kwarg:
PICASO_kwargs = dict(
    config_file='path/to/config.toml',
)
  • You can now run the retrieval as usual:
R.run(n_rounds=10, n_samples=1000, simulator_kwargs=PICASO_kwargs)

Writing a simulator

Writing a simulator for FlopPITy is relatively straightforward. All that's needed is a function that takes in observations and parameters and returns spectra. The spectra need to be returned in a dictionary with one key per simulated observation (e.g. simulated['prism'] contains PRISM spectra and simulated['lrs'] contains MIRI/LRS spectra):

def simulator(obs, parameters, **kwargs):
    wvl_prism = obs['prism'][:,0]
    wvl_lrs = obs['lrs'][:,0]
    ...
    wvl_n = obs[n][:,0]

    spectra={}
    spectra['prism'] = # array of shape (ndims, len(wvl_prism))
    spectra['lrs'] = # array of shape (ndims, len(wvl_lrs))
    ...
    spectra[n] = # array of shape (ndims, len(wvl_n))

    return spectra
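As a concrete, self-contained illustration of this contract, here is a toy simulator that produces a Gaussian absorption line on a flat continuum. The batch convention assumed here (one row of parameters per sample, so each output array has one spectrum per row) is an interpretation of the template above; check the built-in ARCiS/PICASO simulators for the exact layout FlopPITy expects:

```python
import numpy as np

def toy_simulator(obs, parameters, **kwargs):
    """Toy simulator: a Gaussian absorption line on a flat continuum.

    obs        : dict mapping observation keys to arrays whose first
                 column is wavelength.
    parameters : array with one parameter set per row
                 (here: line depth, line center).
    """
    wvl = obs['prism'][:, 0]
    depth = parameters[:, 0][:, None]   # shape (n_samples, 1)
    center = parameters[:, 1][:, None]  # shape (n_samples, 1)

    spectra = {}
    # One spectrum per parameter set: shape (n_samples, len(wvl)).
    spectra['prism'] = 1.0 - depth * np.exp(
        -0.5 * ((wvl[None, :] - center) / 0.1) ** 2
    )
    return spectra
```

Calling toy_simulator with a batch of two parameter sets returns a 'prism' array of shape (2, n_wvl), one simulated spectrum per row.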

Advanced options:

  • Additional post processing parameters (currently RV, vrot, offset and scaling) can be added, for example:
R.add_parameter('RV', -100, 100, post_process=True) # km/s
  • For offsets and scalings between different observations, the parameters should be named 'offset_{observation_key}' or 'scaling_{observation_key}'. For example, to fit a scaling factor between 0.95 and 1.05 for the observation with key 'obs2':
R.add_parameter('scaling_obs2', 0.95, 1.05, post_process=True)
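To make concrete what a scaling and offset parameter do to a spectrum, here is a minimal sketch. This is only an illustration of the concept; FlopPITy applies its post-processing parameters internally when they are added with post_process=True, and the function name here is hypothetical:

```python
import numpy as np

def apply_post_processing(spectrum, scaling=1.0, offset=0.0):
    # Multiplicative scaling and additive offset, as applied to a
    # simulated spectrum before comparing it to an observation.
    # Illustrative only; not FlopPITy's internal implementation.
    return scaling * spectrum + offset
```

For example, a flat unit spectrum scaled by 1.05 with an offset of 0.01 becomes a flat spectrum at 1.06.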

Project details


Download files

Download the file for your platform.

Source Distribution

floppity-0.2.2.tar.gz (1.6 MB)

Uploaded Source

Built Distribution


floppity-0.2.2-py3-none-any.whl (74.0 kB)

Uploaded Python 3

File details

Details for the file floppity-0.2.2.tar.gz.

File metadata

  • Download URL: floppity-0.2.2.tar.gz
  • Upload date:
  • Size: 1.6 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.9.7

File hashes

Hashes for floppity-0.2.2.tar.gz:

  • SHA256: a5adc36e70a7093b156393fac6e689c9815ca656e7128374989cedf5b5741c55
  • MD5: 2357b1255e8b41b948e57b14f917800c
  • BLAKE2b-256: 9a7ed9b15bc694a687444e37d758f5a7b0490e73cbd1fc2b69786bdcd06be42b


File details

Details for the file floppity-0.2.2-py3-none-any.whl.

File metadata

  • Download URL: floppity-0.2.2-py3-none-any.whl
  • Upload date:
  • Size: 74.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.9.7

File hashes

Hashes for floppity-0.2.2-py3-none-any.whl:

  • SHA256: 9c8885c7900d4a3037b0ea818e0a0d6d0a3ca2a2f64ba8455d56cfb546cf8280
  • MD5: 0fb4ee2d2227466103b3767133b954cb
  • BLAKE2b-256: 51e8ac6d2716e5f7d21429c1e1c132ccf216158ed0578c29f44dfc69a63c6571

