

Merlin

Nebular Line Emission Diagnostics from Cosmological Simulations of Early Universe Galaxies.

Author: Braden Nowicki. Advisor: Dr. Massimo Ricotti.

Interface with Cloudy Photoionization Software data and RAMSES-RT Simulation data to create galaxy images and spectra in nebular emission lines.

File Structure:

MERLIN/

- merlin/ : primary package for analysis and visualizations
- CloudyFiles/ : files related to Cloudy grid runs
- Reference/ : previously-developed analysis code

Creating a Line List with Cloudy

The Cloudy photoionization code is used to generate line emission (flux) throughout the desired parameter space. We consider three gas parameters: ionization parameter, hydrogen number density, and temperature. A simple grid run simulates emission, from a 1 cm thick gas cell, for the lines listed in CloudyFiles/gridrun-09-02-2026/LineList_NebularO.dat and CloudyFiles/gridrun-09-02-2026/LineList_NebularCN.dat (the lists can be adjusted). The exact conditions (the stellar SED, for instance) can also be adjusted for a given run; a simulation is performed for every combination of the varying parameters. Line-list data are output in 'LineList_NebularCN.dat' and 'LineList_NebularO.dat'. Add a header to both files giving the limits of U (ionization parameter), N (H number density), and T (temperature) — minimum, maximum, and step size for each — such as "-9.0 2.0 0.5 -4.0 7.0 0.5 1.0 8.0 0.2", based on the input file. Combine the two tables into a single 'linelist.dat' file via 'combine_tables.py'.
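The header format can be illustrated with a short sketch. This is a hypothetical helper, not code from merlin or 'combine_tables.py'; it assumes the header is three (min, max, step) triples, one per parameter, as in the example above.

```python
import numpy as np

# Hypothetical sketch (not merlin's actual code) of reading the linelist
# header described above: three (min, max, step) triples, one each for
# U, N, and T, e.g. "-9.0 2.0 0.5 -4.0 7.0 0.5 1.0 8.0 0.2".
def parse_header(header_line):
    vals = [float(v) for v in header_line.split()]
    triples = [vals[i:i + 3] for i in range(0, 9, 3)]
    # Build the grid axis for each parameter, inclusive of the upper limit.
    return [np.arange(lo, hi + 0.5 * step, step) for lo, hi, step in triples]

u_axis, n_axis, t_axis = parse_header("-9.0 2.0 0.5 -4.0 7.0 0.5 1.0 8.0 0.2")
```

With the example header this yields 23 grid points in U (from -9.0 to 2.0 in steps of 0.5), 23 in N, and 36 in T; the Cloudy grid run produces one simulation per combination.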

See CloudyFiles/gridrun-09-02-2026; interp6.in is the input file. The run file (with executable permission) can be used with your installation of Cloudy. For instance, in the terminal, './run interp6' runs the desired simulations. (Note that grid runs require the -r option, which 'run' supplies, so the input file is given as interp6, omitting the .in extension.)

Running Merlin on RAMSES-RT Output Data

Either use the source files directly or install with 'python3 -m pip install merlin-spectra'. With the latter, 'main.py' can be run after a simple 'import merlin_spectra'. I recommend running in a conda environment with the necessary packages installed.
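A minimal environment setup might look like the following (the environment name and Python version are illustrative choices, not requirements stated by the package):

```shell
# Illustrative setup: environment name and Python version are assumptions.
conda create -n merlin python=3.11
conda activate merlin
python3 -m pip install merlin-spectra
```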

'main.py' outlines the process:

- create derived field functions;
- load the simulation given by the filepath (passed as a command-line argument) pointing to the info_*.txt file within an output folder;
- instantiate an EmissionLineInterpolator object for the derived flux and luminosity fields using the desired 'linelist.dat' table;
- instantiate a VisualizationManager object;
- run the desired routines/analysis for the time slice.
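The idea behind the interpolation step can be shown with a self-contained sketch. This is not the actual EmissionLineInterpolator API; it is a minimal illustration of interpolating a tabulated line flux over the (U, N, T) grid, using a synthetic flux table in place of real Cloudy output.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Minimal sketch of the idea behind EmissionLineInterpolator (not its
# actual API): trilinear interpolation of a tabulated line flux over the
# (U, N, T) parameter grid. The flux table here is synthetic.
log_u = np.arange(-9.0, 2.25, 0.5)
log_n = np.arange(-4.0, 7.25, 0.5)
log_t = np.arange(1.0, 8.1, 0.2)

# Fake log-flux table with the grid's shape (real tables come from Cloudy).
flux_table = (log_u[:, None, None] + log_n[None, :, None]
              + log_t[None, None, :])

interp = RegularGridInterpolator((log_u, log_n, log_t), flux_table)

# Evaluate at arbitrary gas-cell conditions inside the grid.
cell = np.array([[-2.3, 1.7, 4.05]])
log_flux = interp(cell)[0]
```

Because the synthetic table is linear in all three parameters, trilinear interpolation recovers it exactly; with real Cloudy tables the interpolated flux is an approximation between grid points.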

NOTICE: Store the 'linelist-all.dat' file locally and update the path in main.py. This will be updated soon to use data files from within the imported package source.

The class structures allow for easy testing of new features and updates. One must call save_sim_info() before object attributes like current_redshift are created; afterwards, any function can be called in a modular fashion. There are some exceptions: phase_with_profiles(), for instance, uses an object generated by phase_plot().

NOTE: The proper updated hydro_file_descriptor.txt is formatted as, e.g.:

# version: 1
# ivar, variable_name, variable_type
1, density, d
2, velocity_x, d
3, velocity_y, d
4, velocity_z, d
5, pressure, d
6, metallicity, d
7, xHI, d
8, xHII, d
9, xHeII, d
10, xHeIII, d
11, refinement-param, i
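One quick way to sanity-check a descriptor file is to parse it. The helper below is hypothetical (not part of merlin or RAMSES); it reads the format shown above, skipping '#' comment lines:

```python
import csv
import io

# Hypothetical helper (not part of merlin) that parses a RAMSES
# hydro_file_descriptor.txt into (ivar, name, type) tuples,
# skipping '#' comment lines.
def read_descriptor(f):
    fields = []
    for row in csv.reader(f, skipinitialspace=True):
        if not row or row[0].startswith("#"):
            continue
        ivar, name, vtype = row
        fields.append((int(ivar), name, vtype))
    return fields

sample = io.StringIO(
    "# version: 1\n"
    "# ivar, variable_name, variable_type\n"
    "1, density, d\n"
    "2, velocity_x, d\n"
    "11, refinement-param, i\n"
)
fields = read_descriptor(sample)
```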

Running in the Cluster on Multiple Time Slices

With the desired routines and field setup chosen in a driver script like 'main.py', analysis can be performed on multiple time slices in parallel. For RAMSES-RT output data stored on a cluster, use scp to copy merlin, a driver script like 'main.py', and a shell script like 'analysis.sh' to the cluster scratch space. You can verify the setup by running 'main.py' on one time slice in the scratch space; first perform the module loads and pip installs listed at the beginning of 'main.py' for the dependencies. The shell script 'analysis-1.sh' shows one way to run the script on multiple time slices in parallel within a single job. Preferably, since the tasks are completely independent, submit them as a job array so each task runs when resources become available (as in 'analysis.sh'). A Python install path can be specified to avoid the version Zaratan jobs default to.
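The job-array pattern described above might be sketched as follows, assuming a SLURM scheduler; the resource limits, array range, paths, and output-folder numbering are all illustrative assumptions, not the contents of 'analysis.sh':

```shell
#!/bin/bash
# Illustrative SLURM job-array sketch (not the actual analysis.sh);
# array range, time limit, and paths are assumptions to adapt.
#SBATCH --job-name=merlin-analysis
#SBATCH --array=1-20            # one independent task per time slice
#SBATCH --ntasks=1
#SBATCH --time=02:00:00

# Hypothetical output layout: output_00001, output_00002, ...
SNAP=$(printf "output_%05d" "$SLURM_ARRAY_TASK_ID")
INFO=$(printf "info_%05d.txt" "$SLURM_ARRAY_TASK_ID")

# Pass the info_*.txt path to the driver script, as main.py expects.
python3 main.py "$SCRATCH/sim/$SNAP/$INFO"
```

Each array task is an independent job, so the scheduler starts slices as resources free up rather than holding one large allocation.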

Bulk Analysis of Multiple Time Slices

A Note on the Naming of this Code

The Merlin is a small species of falcon found across the Northern Hemisphere. The naming of this package is inspired by the Merlin's exceptionally sharp eyesight; we generate observational diagnostics from simulated distant, high-redshift galaxies. Birds are a bellwether of environmental decline; North American bird populations are down by nearly 3 billion individuals since 1970. Please consider supporting local efforts to safeguard birds, their migration, and their habitats.
