Project description
Flamedisx
Fast likelihood analysis in more dimensions for xenon TPCs.
Flamedisx aims to increase the practical number of dimensions and parameters in likelihoods for liquid-xenon (LXe) detectors, which are leading the field of direct dark matter detection.
Traditionally, particle physicists compute signal and background models by filling histogram 'templates' with high-statistics Monte Carlo (MC) simulations. However, the LXe model can also be computed with a series of (large) matrix multiplications, equivalent to the integral approximated by the MC simulation. Using TensorFlow makes this computation differentiable and GPU-scalable, so it can be used practically for fitting and statistical inference.
The result is better sensitivity, since the likelihood can use all observables, and more robust fits, since fitting many correlated nuisance parameters simultaneously no longer requires challenging interpolation and template morphing.
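To make the mechanism concrete, below is a minimal, hypothetical sketch of a differentiable unbinned likelihood in TensorFlow: a toy Gaussian per-event model whose gradient with respect to a fit parameter comes from automatic differentiation. It only illustrates the idea of a GPU-friendly, differentiable likelihood; it is not the Flamedisx model itself.

```python
import math
import tensorflow as tf

# Toy "observed" quantities for a handful of events (illustration only).
observed = tf.constant([120., 95., 130., 110.])

# One free model parameter, e.g. a yield-like scale factor.
scale = tf.Variable(1.0)

with tf.GradientTape() as tape:
    # Toy per-event model: Gaussian centered at scale * 100 with width 15.
    mu = scale * 100.
    sigma = 15.
    log_p = (-0.5 * ((observed - mu) / sigma) ** 2
             - math.log(sigma * math.sqrt(2. * math.pi)))
    neg_log_likelihood = -tf.reduce_sum(log_p)

# Exact gradient of the (negative) log-likelihood, ready for any
# gradient-based optimizer.
grad = tape.gradient(neg_log_likelihood, scale)
print(float(neg_log_likelihood), float(grad))
```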
Getting started
To get started, launch our tutorial on Colaboratory, or view it statically on GitHub or ReadTheDocs.
Our paper gives a detailed description of Flamedisx, and compares Flamedisx quantitatively to traditional template-based methods.
If you want all the details, see the Flamedisx Documentation and our Notebooks repository.
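For a quick feel of the workflow before opening the tutorial, a minimal sketch might look like the following. The class and method names (fd.ERSource, fd.LogLikelihood, simulate, bestfit) and their arguments are assumptions here and can differ between versions, so treat this as an illustration and defer to the tutorial and documentation.

```python
import flamedisx as fd

# Simulate a toy dataset from an electronic-recoil source.
# (Source name and arguments are assumptions; see the tutorial.)
source = fd.ERSource()
data = source.simulate(1000)

# Build an unbinned likelihood over that data and fit the free parameters.
ll = fd.LogLikelihood(sources=dict(er=fd.ERSource), data=data)
bestfit = ll.bestfit()
print(bestfit)
```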
FlameNEST
Since version 2.0.0, Flamedisx includes an implementation of the electronic and nuclear recoil models from the Noble Element Simulation Technique (NEST). To use these models, use sources from the fd.nest subpackage, e.g. fd.nest.ERSource. See the FlameNEST paper for a detailed description and validation.
As of April 2024, we implement NEST version 2.3.0, which was released November 2021 (see #249).
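As a hedged illustration, swapping in a NEST-based source follows the same pattern as the sketch in "Getting started". The source class name comes from the description above; the constructor and likelihood arguments are assumptions, so check the FlameNEST paper and documentation for exact usage.

```python
import flamedisx as fd

# Use the NEST electronic-recoil model in place of the default ER source.
# (Arguments and defaults are assumptions; see the FlameNEST paper and docs.)
nest_er = fd.nest.ERSource()        # NEST-based ER model
toy_data = nest_er.simulate(500)    # simulate a small toy dataset

ll = fd.LogLikelihood(sources=dict(er=fd.nest.ERSource), data=toy_data)
```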
2.1.0 / 2024-04-03
- Mu estimation options, including grid interpolation (#222, #242, #285)
- Template sources and morphing (#241, #317, #318)
- NEST sources: update to NEST 2.3.0 (#249), add beta/gamma models (#252)
- Default sources: Reconstruction bias/smearing (#273)
- Non-integer dimensions (#258)
- Non-asymptotic inference (#269, #345)
- Reservoir source that caches rates (#247)
- Fixes and cleanups (#231, #270, #284, #327, #329)
- Tested on tensorflow 2.6.1 / numpy 1.26.4
2.0.0 / 2022-05-20
- FlameNEST models fully implemented (https://arxiv.org/abs/2204.13621)
- NEST models for pre-quanta processes (#205)
- Bayesian bounds estimation (#174)
- NEST source fixes (#152)
- Fix covariance used in LogLikelihood.summary (#176)
- Avoid calculating produced_quanta = 0 probability (#181)
- electron_loss model function (#193)
- Add exposure parameter to WIMPEnergySpectrum (#223)
- Always reset data index (#225)
- XENON sources:
  - Wall events model (#143)
  - double_pe_fraction model function (#208)
  - Updates to config defaults (#209)
  - Spatially dependent drift field map (#221)
  - Configurable drift field, S2 AFT (#213, #218)
1.5.0 / 2021-06-29
- Variable stepping, support for high-energy models (#127)
- NEST models for post-quanta processes (#136)
- Configuration system (#140, #147)
- XENON1T: Fix S2 acceptance (#138) and unused imports (#128)
- Update block system documentation (#139)
1.4.1 / 2021-04-20
- Stabilize default optimizer with better parameter scaling (#114)
- XENONnT: Support reading data from private repository (#115)
- XENON1T: Variable elife (#118)
- XENON1T: Npz resource reading (#123)
1.4.0 / 2021-03-05
- Fix 'sticky defaults' bug (#110)
- Enable GitHub Actions and Dependabot (#109)
- Documentation updates (#92, notebooks#3)
- Likelihood defaults support, simulate argument fixes (#103)
- SpatialRateEnergySpectrum: Simplify API (#100) and fix draw_positions (#105)
- WIMPEnergySpectrum: Accept event times slightly out of range (#99)
- Do not round photons_detected_mle (#91)
- XENON1T: fix S2 acceptance (#97) and name reconstruction efficiency pivots (#102)
1.3.0 / 2020-08-25
- Block system (#81)
- Documentation (#81)
- Bugfixes (#83, #87, #89)
1.2.0 / 2020-07-21
- Access BBF data and XENON-utilities (#80)
- Double photoelectron emission modeling (#78)
- Optimization improvements (#76)
- Bugfix (#79)
1.1.0 / 2020-07-09
- Nonlinear constraint limit setting (experimental) (#70)
- Dimension scaling inside optimizers (#72)
- Auto-guess rate multipliers (#74)
- Python 3.8 builds (#73)
- Add sanity checks on input and guess (#69)
1.0.0 / 2020-03-26
- Fiducial volume specification (#64)
- Added default cS1 cut (#63)
- Cleanup and optimizations (#63, #64, #65)
0.5.0 / 2020-01-31
- Autographed Hessian; use Hessian in the optimizer (#62)
- Check for optimizer failures (#61)
- Trace single-batch likelihood, but use numpy thereafter (#61)
- Fix simulation/data discrepancy in recombination fluctuation
- Adjust optimizer defaults
- Option to use time-averaged WIMP spectra
0.4.0 / 2020-01-15
- Many changes to objectives and inference (#59, #60)
- Add tilt to objective for interval/limit searches
- one_parameter_interval -> limit and interval methods
- Optimizers use bounds
- Tolerance option homogenization (first pass)
- Auto-guess limits
0.3.1 / 2019-11-26
- Performance improvements and cleanup (#58)
- Improve one_parameter_interval arguments (#56)
- Add Tutorial output to flamedisx-notebooks (#56)
- Bugfixes (#57)
0.3.0 / 2019-11-19
- Split off notebook folder to flamedisx-notebooks
- Pass source specific parameters correctly (#51)
- Flexible event padding (#54)
- SciPy optimizer and optimizer settings (#54)
- one_parameter_interval (#54)
- Bugfixes (#46, #55, #51)
- Unify optimizers (#54)
0.2.2 / 2019-10-30
- Minuit optimizer (#40)
- Likelihood simulator (#43, #44)
- Updates to NRSource (#40)
0.2.1 / 2019-10-24
- Workaround for numerical errors (#38, #39)
0.2.0 / 2019-10-11
- Spatially dependent rates (#27)
- Time dependent energy spectra (#24)
- XENON1T SR1-like model / fixes (#22, #32)
- Switch optimizer to BFGS + Hessian (#19)
- Multiple source support (#14)
- Optimization (#13)
- Bugfixes / refactor (#18, #20, #21, #28, #30, #31, #35)
0.1.2 / 2019-07-24
- Speedup ER computation, add tutorial (#11)
- Optimize lookup-axis1 (#10)
0.1.1 / 2019-07-21
- 5x speedup for Hessian (#9)
- Fix pip install
0.1.0 / 2019-07-16
- Batching (#7)
- Inference (#6)
- Ported to tensorflow / GPU support (#1, #2, #3, #5)
0.0.1 / 2019-03-17
- Initial numpy-based version
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution: flamedisx-2.1.0.tar.gz (123.0 kB)
Built Distribution: flamedisx-2.1.0-py3-none-any.whl (141.9 kB)
File details
Details for the file flamedisx-2.1.0.tar.gz.
File metadata
- Download URL: flamedisx-2.1.0.tar.gz
- Upload date:
- Size: 123.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.12.2
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ba1776347915c4fd89709a6e1d842bfa114bd02fc98c2da699ea168db9c30cb7 |
| MD5 | c764c16fccaf18f2eee26ab2114ff8b1 |
| BLAKE2b-256 | 28f5c30a39db7051815599d4ff9748d0ee758687d21572601736fbd30e5b677c |
File details
Details for the file flamedisx-2.1.0-py3-none-any.whl.
File metadata
- Download URL: flamedisx-2.1.0-py3-none-any.whl
- Upload date:
- Size: 141.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.12.2
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 49980a74f1b40c917c971cd35c0f55711223cc37d5cb5141d946989e0749e5b7 |
| MD5 | 4a88f5321603f5c18d1ad7a86c031ecf |
| BLAKE2b-256 | 0d572d2f5b67db2420e23af8460ead9b7b320267d74a39ed60b4e0aa73bf4af9 |