Neoclassical transport solver with CPU/GPU and differentiable JAX workflows
sfincs_jax
sfincs_jax is a standalone neoclassical transport code for radially local
drift-kinetic calculations in stellarator and tokamak geometry. It combines
high-fidelity kinetic models, CPU/GPU execution, modern matrix-free numerics,
parallel workflows, and optional differentiable solve paths in one codebase.
On the current main branch, the full audited example suite runs cleanly on CPU and GPU.
The default CLI path is tuned for robust production solves and practical throughput,
while the Python API can opt into differentiable solve paths when gradients matter.
It is designed for:
- high-performance runs on CPU/GPU,
- research and production transport workflows,
- memory-efficient large solves,
- end-to-end differentiable workflows.
A representative transport benchmark figure accompanies the repository README. The release-facing validation and benchmark artifacts are documented in the docs and in the audit table below.
Installation
Install from PyPI:
pip install sfincs_jax
Install from source:
git clone https://github.com/uwplasma/sfincs_jax.git
cd sfincs_jax
pip install .
After installing from a source checkout, you can run the CLI immediately on the
bundled tiny example. The suffix of --out selects the output format:
.h5/.hdf5 for the Fortran-compatible HDF5 file, .nc/.netcdf for NetCDF4,
and .npz for a fast NumPy archive.
cd sfincs_jax
sfincs_jax write-output \
--input examples/getting_started/input.namelist \
--out sfincsOutput.h5 \
--geometry-only
sfincs_jax --plot sfincsOutput.h5
This is the fast installation smoke test. It writes sfincsOutput.h5 and then
writes a multi-page PDF diagnostics panel next to it as
sfincsOutput_summary.pdf. The same command works for NetCDF and NPZ:
sfincs_jax write-output --input examples/getting_started/input.namelist --out sfincsOutput.nc --geometry-only
sfincs_jax write-output --input examples/getting_started/input.namelist --out sfincsOutput.npz --geometry-only
sfincs_jax --plot sfincsOutput.nc
Physics in One Page
sfincs_jax solves the radially local, steady, linearized drift-kinetic
equation for the non-adiabatic distribution-function perturbation
f_s1 on a flux surface. In normalized form the solved kinetic balance is
(parallel streaming + mirror force + E x B drift + magnetic drift
+ energy/pitch-angle drifts - linearized collisions) f_s1 = thermodynamic drives.
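For orientation only, the same balance written schematically in LaTeX (a sketch, not the exact normalized operators, trajectory models, or term switches, which are defined in docs/system_equations.rst):

\left( v_\parallel \nabla_\parallel
     + \mathbf{v}_{E\times B}\cdot\nabla
     + \mathbf{v}_{\mathrm m}\cdot\nabla
     + \dot{x}\,\partial_x + \dot{\xi}\,\partial_\xi
     - C^{\mathrm{lin}}_s \right) f_{s1}
  = \text{(thermodynamic drives from } f_{s0}\text{)}

Here the \dot{x}\,\partial_x and \dot{\xi}\,\partial_\xi terms carry the mirror-force and energy/pitch-angle dynamics.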
The unknown distribution can be coupled to the flux-surface electrostatic
potential variation Phi1(theta,zeta) through quasineutrality when requested.
The output fluxes, flows, transport matrices, and diagnostics are moments of
this solved f_s1. The full equations, normalizations, switches, and source-code
mapping are documented in docs/system_equations.rst, docs/physics_models.rst,
and docs/method.rst.
Quick Start (CLI)
You can run sfincs_jax from anywhere in your terminal. You do not need to be
inside the repository folder.
Run an input file:
sfincs_jax /path/to/input.namelist
Write output explicitly:
sfincs_jax write-output --input /path/to/input.namelist --out /path/to/sfincsOutput.h5
Plot an existing output file:
sfincs_jax --plot /path/to/sfincsOutput.h5
By default this writes /path/to/sfincsOutput_summary.pdf, a multi-page panel
with geometry, radial profiles, particle/heat/momentum fluxes, NTV, moments, and
transport-matrix diagnostics when those datasets are present. Use
sfincs_jax plot-output --input-h5 ... --out custom.pdf to choose a filename.
Override the equilibrium file at the CLI without changing input.namelist:
sfincs_jax write-output \
--input /path/to/input.namelist \
--out /path/to/sfincsOutput.h5 \
--wout-path /path/to/wout.nc
The bare sfincs_jax /path/to/input.namelist form accepts the same
--equilibrium-file and --wout-path overrides.
Quick Start (Python)
Read a namelist, run sfincs_jax, write an output file, and inspect results directly in memory:
from pathlib import Path
from sfincs_jax.io import write_sfincs_jax_output_h5
input_namelist = Path("input.namelist")
out_path, results = write_sfincs_jax_output_h5(
input_namelist=input_namelist,
output_path=Path("sfincsOutput.h5"),
return_results=True,
)
print("Wrote:", out_path)
print("Available datasets:", len(results))
print("Example key:", "particleFlux_vm_psiHat" in results)
Set output_path=Path("sfincsOutput.nc") for NetCDF4 or
output_path=Path("sfincsOutput.npz") for a fast NumPy archive. The calculation
is identical; only the writer changes.
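As a quick sketch of inspecting the NPZ archive afterward, standard NumPy is enough; the dataset name below is the one shown in the example above and is assumed to also be present in the archive for your run:

import numpy as np

# Load the NumPy archive written by write_sfincs_jax_output_h5(...).
data = np.load("sfincsOutput.npz")

# List a few of the stored dataset names.
print(sorted(data.files)[:10])

# Read one flux diagnostic if it was written for this run.
if "particleFlux_vm_psiHat" in data.files:
    print("particleFlux_vm_psiHat:", data["particleFlux_vm_psiHat"])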
If you need to override the equilibrium file without editing the namelist, pass
equilibrium_file=... or the VMEC-friendly alias wout_path=...:
write_sfincs_jax_output_h5(
input_namelist=input_namelist,
output_path=Path("sfincsOutput.h5"),
wout_path=Path("/path/to/wout.nc"),
)
sfincs_jax write-output and the scan utilities use the explicit
performance-oriented solve path by default. When calling
write_sfincs_jax_output_h5(...) directly, pass differentiable=False for the
same fast path or request the implicit/differentiable linear-solve path only when
you need gradients:
write_sfincs_jax_output_h5(
input_namelist=input_namelist,
output_path=Path("sfincsOutput.h5"),
differentiable=False,
)
write_sfincs_jax_output_h5(
input_namelist=input_namelist,
output_path=Path("sfincsOutput.h5"),
differentiable=True,
)
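When the differentiable path is enabled, gradients can be taken through quantities computed from the solve with standard JAX transforms. The sketch below is illustrative only: solve_scalar_objective is a hypothetical wrapper, not a real sfincs_jax function, standing in for whatever differentiable entry point your workflow exposes; see examples/autodiff/ for the actual examples.

import jax
import jax.numpy as jnp

# Hypothetical wrapper: maps a scalar input parameter (e.g. a collisionality
# knob) to a scalar diagnostic computed from a differentiable solve.
# In a real workflow this would call into the sfincs_jax Python API with
# differentiable=True; here it is a stand-in so the autodiff pattern is clear.
def solve_scalar_objective(nu_hat: jnp.ndarray) -> jnp.ndarray:
    # ... build inputs, run the differentiable solve, extract a flux ...
    return nu_hat ** 2  # placeholder objective

grad_fn = jax.grad(solve_scalar_objective)
print(grad_fn(jnp.asarray(0.1)))  # d(objective)/d(nu_hat)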
Repository examples that map directly onto common first tasks:
- run the bundled tiny CLI example:
  sfincs_jax examples/getting_started/input.namelist
- write a tiny tokamak output:
  python examples/getting_started/write_sfincs_output_tokamak.py
- write a tiny VMEC output with wout_path:
  python examples/getting_started/write_sfincs_output_vmec.py
- plot an output file:
  python examples/getting_started/plot_sfincs_output.py
- write HDF5/NetCDF/NPZ and plot a PDF panel:
  python examples/getting_started/write_and_plot_multiple_formats.py
- run autodiff examples:
  python examples/autodiff/autodiff_gradient_nu_n_residual.py
- run the optional VMEC/Boozer differentiable geometry handoff:
  python examples/autodiff/vmec_jax_to_boozer_sfincs_pipeline.py --wout /path/to/wout.nc
- benchmark CPU/GPU parallel solves:
  python examples/performance/benchmark_sharded_solve_scaling.py --backend cpu --devices 1 2 --inner-warmup-solves 1 --sample-timeout-s 300 ...
Parallel CLI controls are now first-class:
# Multi-core CPU host sharding on one node
sfincs_jax --cores 8 --shard-axis auto /path/to/input.namelist
# Parallel transport-matrix RHS solves
sfincs_jax transport-matrix-v3 \
--input /path/to/input.namelist \
--transport-workers 4
# High-nu LHD/W7-X campaign pilot on a dual-GPU node
CUDA_VISIBLE_DEVICES=0,1 \
python examples/publication_figures/generate_sfincs_paper_figs.py \
--case lhd \
--collision-operators 0 \
--nuprime-min 17.78279101649707 \
--nuprime-max 17.78279101649707 \
--n-points 1 \
--transport-workers 2 \
--transport-parallel-backend gpu \
--transport-sparse-direct-max 30000 \
--require-residuals \
--max-transport-residual 1e-6 \
--max-transport-relative-residual 1e-6 \
--scan-only
# The current office dual-GPU LHD pilot for that point is residual-clean in
# ~262 s, compared with ~345 s on one GPU and ~569 s on the older implicit path.
# For the first W7-X FP high-nu point, use the bounded one-worker sparse-LU lane
# below: it closes all three RHS residual gates in ~9.7 min on one office GPU
# with sparse-helper factor reuse, compared with ~33.8 min before reuse.
# W7-X FP high-nu residual-clean pilot, intentionally one worker to limit sparse
# LU memory pressure:
CUDA_VISIBLE_DEVICES=0 \
SFINCS_JAX_TRANSPORT_SPARSE_FACTOR_DTYPE=float32 \
python examples/publication_figures/generate_sfincs_paper_figs.py \
--case w7x \
--collision-operators 0 \
--nuprime-min 17.78332923601508 \
--nuprime-max 17.78332923601508 \
--n-points 1 \
--transport-workers 1 \
--transport-parallel-backend gpu \
--transport-sparse-direct-max 40000 \
--transport-maxiter 800 \
--require-residuals \
--max-transport-residual 1e-6 \
--max-transport-relative-residual 1e-6 \
--scan-only
# To compare candidate preconditioners before widening W7-X high-nu scans,
# isolate single-RHS behavior:
CUDA_VISIBLE_DEVICES=0 \
python examples/performance/benchmark_w7x_high_nu_preconditioners.py \
--preconditioners auto,fp_tzfft,xmg \
--which-rhs 2 \
--sparse-direct-max 40000 \
--sparse-factor-dtype float32 \
--maxiter 800 \
--timeout-s 900
The W7-X high-nu figure is generated by
python examples/publication_figures/generate_w7x_high_nu_performance.py.
The checked run preserves the previous residual-clean transport matrix exactly,
reduces the full one-point wall time from about 2028 s to 582 s, and lowers
measured peak RSS from about 19.9 GB to 15.3 GB.
# One-node multi-GPU sharded solve (experimental for very large single-RHS cases)
CUDA_VISIBLE_DEVICES=0,1 \
sfincs_jax write-output \
--input /path/to/input.namelist \
--shard-axis theta \
--distributed-gmres auto
# Multi-host JAX distributed bootstrap
sfincs_jax write-output \
--input /path/to/input.namelist \
--distributed \
--process-count 8 \
--process-id ${RANK} \
--coordinator-address node0 \
--coordinator-port 1234
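For reference, these multi-host flags correspond to JAX's standard multi-process bootstrap. A minimal sketch of the underlying call, assuming sfincs_jax forwards the values to jax.distributed.initialize (the usual JAX mechanism; the exact forwarding is an assumption here):

import jax

# One process per host (or per GPU), all pointing at the same coordinator.
jax.distributed.initialize(
    coordinator_address="node0:1234",  # --coordinator-address / --coordinator-port
    num_processes=8,                   # --process-count
    process_id=0,                      # --process-id (unique per process)
)
print(jax.process_index(), "of", jax.process_count(), "sees", jax.local_devices())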
Use -v to have the executable print the active parallel runtime summary
(cores, shard axis, transport workers, distributed Krylov mode, and multi-host
bootstrap fields) before the solve starts.
Current recommendation:
- CPU host sharding is supported and deterministic, but the measured speedup is still case-dependent.
- The current sharded RHSMode=1 CPU path uses a wider Schwarz patch rule plus a bounded multilevel residual correction to avoid the worst 4/8-device fragmentation failures seen in earlier releases.
- Use one GPU per case or scan point for production throughput today.
- Multi-GPU single-case sharding is available for benchmarking and very large runs, but it remains experimental and is not yet the default recommendation.
- The sharded-solve benchmark helper supports both --backend cpu and --backend gpu; the GPU path uses CUDA_VISIBLE_DEVICES and disables JAX preallocation in the subprocess, with cuda_malloc_async enabled for the benchmark subprocess allocator, so one-node GPU scaling experiments are more reproducible.
- For practical multi-GPU usage today, the strongest measured path is transport-worker parallelism with one worker per GPU on RHSMode=2/3 runs. On the fresh office 2-GPU rerun of examples/performance/transport_parallel_2min.input.namelist, this path measured 351.1 s -> 237.7 s going from 1 to 2 GPU workers, i.e. a 1.48x speedup on a 3-RHS case, essentially at the finite-task ideal of 1.5x (see the arithmetic sketch after this list).
- Multi-GPU single-case sharding remains experimental. Use it for research and benchmarking, not as the default production scaling path.
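The "finite-task ideal" quoted above is plain task-level load balancing: with 3 equal RHS solves on 2 workers, the critical path is 2 solves instead of 3. A short check (the 351.1 s and 237.7 s figures are the measured numbers from the run above):

import math

n_rhs, n_workers = 3, 2
# With equal-cost tasks, the parallel wall time is set by the worker that
# gets ceil(n_rhs / n_workers) tasks, so the ideal speedup is:
ideal_speedup = n_rhs / math.ceil(n_rhs / n_workers)   # 3 / 2 = 1.5x

measured_speedup = 351.1 / 237.7                       # ~1.48x
print(f"ideal {ideal_speedup:.2f}x vs measured {measured_speedup:.2f}x")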
You can reproduce the recommended multi-GPU transport-worker benchmark with:
python examples/performance/benchmark_transport_parallel_scaling.py \
--input examples/performance/transport_parallel_2min.input.namelist \
--backend gpu \
--workers 1 2
Compare two outputs:
sfincs_jax compare-h5 --a sfincsOutput_jax.h5 --b sfincsOutput_fortran.h5
Advanced CLI, plotting, and solver options are documented in docs/usage.rst,
docs/outputs.rst, and docs/performance_techniques.rst.
Models, Numerics, and Validation
sfincs_jax solves the same class of neoclassical drift-kinetic problems as mature
SFINCS workflows, but it is documented and maintained as its own code. In particular:
- the public executable favors bounded, performance-oriented solve strategies,
- the Python API can switch to differentiable solve paths when end-to-end sensitivities are needed,
- CPU runs lean on JIT-cached kernels and selected host sparse factorizations for hard linear branches (a generic matrix-free sketch follows this list),
- repeated RHSMode=1 output-writing runs reuse prebuilt grids, geometry, and operator state to cut setup cost on large HSX/geometry11 cases,
- GPU runs keep operator applications on device, then fall back to accelerator-safe or host rescue paths only when conditioning or memory demands it,
- and the documentation maps the governing equations directly onto the source tree.
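As a generic flavor of the matrix-free, JIT-compiled operator style these bullets describe (a toy sketch only, not sfincs_jax's actual operator, preconditioner, or solver configuration):

import jax
import jax.numpy as jnp
from jax.scipy.sparse.linalg import gmres

# Toy matrix-free operator: the "matrix" is only ever applied to vectors,
# so it can stay JIT-compiled on CPU or GPU without being materialized.
n = 512
diag = 2.0 + jnp.arange(n) / n

@jax.jit
def apply_operator(v):
    # Diagonal part plus a cheap coupling term, standing in for the
    # drift-kinetic operator application.
    return diag * v + 0.1 * jnp.roll(v, 1)

b = jnp.ones(n)
x, _ = gmres(apply_operator, b, tol=1e-8, maxiter=200)
print("residual norm:", float(jnp.linalg.norm(apply_operator(x) - b)))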
The main documentation entry points are:
- physics and equations: docs/physics_models.rst, docs/system_equations.rst, docs/physics_reference.rst
- geometry and numerics: docs/geometry.rst, docs/method.rst, docs/numerics.rst
- inputs and outputs: docs/inputs.rst, docs/outputs.rst
- parallel and performance workflows: docs/parallelism.rst, docs/performance.rst
- examples, applications, and testing: docs/examples.rst, docs/applications.rst, docs/testing.rst
- external trust-building comparisons: docs/fortran_comparison.rst
Current Example-Suite Audit
Regenerate this block from the current main working tree with:
python scripts/run_scaled_example_suite.py \
--examples-root examples/sfincs_examples \
--resolution-reference-root /Users/rogeriojorge/local/tests/sfincs_original/fortran/version3/examples \
--reference-results-root tests/scaled_example_suite_recheck_cpu_frozen_2026-04-23_postkeyfix \
--out-root tests/scaled_example_suite_release_cpu_frozen_2026-04-25_v106 \
--scale-factor 1.0 \
--runtime-target-basis fortran \
--fortran-min-runtime-s 0.0 \
--runtime-adjustment-iters 0 \
--runtime-baseline-report tests/scaled_example_suite_recheck_cpu_frozen_2026-04-23_postkeyfix/suite_report.json \
--jax-profile-marks on
python scripts/generate_readme_fast_branch_audit.py \
--out-root tests/scaled_example_suite_release_cpu_frozen_2026-04-25_v106 \
--gpu-out-root tests/scaled_example_suite_release_gpu_2026-04-25_v106
The benchmark policy on main is:
- start from the original Fortran v3 example resolution,
- only downscale when a case is too expensive for a practical suite run,
- benchmark JAX CPU and GPU against a frozen CPU-generated Fortran reference root,
- and never intentionally push a reduced case below about 1 s of Fortran wall time unless the original example is already that small.
That avoids the misleading sub-second Fortran rows that came from blind global downscaling, keeps the GPU lane tied to a deterministic reference, and makes the additional example part of the same artifact set as the standard suite.
Current main CPU audit comes from tests/scaled_example_suite_release_cpu_frozen_2026-04-25_v106.
Matching frozen-reference GPU audit comes from tests/scaled_example_suite_release_gpu_2026-04-25_v106.
- Recorded cases: 39/39
- Practical status counts: parity_ok=39
- Strict status counts: parity_ok=39
- GPU practical status counts: parity_ok=39
- GPU strict status counts: parity_ok=39
- CPU output-key coverage: missing_total=0, extra_total=70, audited_cases=39, skipped_cases=0
- GPU output-key coverage: missing_total=0, extra_total=70, audited_cases=39, skipped_cases=0
- CPU runtime drift watchlist vs tests/scaled_example_suite_recheck_cpu_frozen_2026-04-23_postkeyfix/suite_report.json: none
- GPU runtime drift watchlist vs tests/scaled_example_suite_fast_gpu_full_v11_refresh/suite_report.json: none
- Resolution policy: reference_first_runtime_window, scale_factor=1.0, runtime_basis=fortran, fortran_min=0.0, fortran_max=None, adjust_iters=0
- Remaining cases: none
- Additional example: parity_ok on CPU and parity_ok on GPU
Current mismatches:
- CPU practical mismatches: none
- CPU strict mismatches: none
- GPU practical/strict mismatches: none
Full per-case runtime / memory table:
| Case | Fortran CPU(s) | JAX CPU(s) | CPU x | JAX GPU(s) | GPU x | Fortran MB | JAX CPU MB | CPU MB x | JAX GPU MB | GPU MB x | CPU mismatch | GPU mismatch | CPU print | GPU print | CPU status | GPU status |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| HSX_FPCollisions_DKESTrajectories | 29.664 | 3.060 | 0.10x | 5.152 | 0.17x | 103.0 | 496.9 | 4.82x | 919.3 | 8.92x | 0/193 (strict 0/193) | 0/193 (strict 0/193) | 9/9 | 9/9 | parity_ok | parity_ok |
| HSX_FPCollisions_fullTrajectories | 88.504 | 3.054 | 0.03x | 5.152 | 0.06x | 100.8 | 506.8 | 5.03x | 924.7 | 9.17x | 0/193 (strict 0/193) | 0/193 (strict 0/193) | 9/9 | 9/9 | parity_ok | parity_ok |
| HSX_PASCollisions_DKESTrajectories | 0.994 | 3.951 | 3.97x | 6.508 | 6.55x | 112.0 | 1146.3 | 10.23x | 1183.5 | 10.56x | 0/123 (strict 0/123) | 0/123 (strict 0/123) | 7/7 | 7/7 | parity_ok | parity_ok |
| HSX_PASCollisions_fullTrajectories | 2.510 | 3.724 | 1.48x | 9.684 | 3.86x | 179.2 | 1441.6 | 8.04x | 2058.6 | 11.49x | 0/193 (strict 0/193) | 0/193 (strict 0/193) | 9/9 | 9/9 | parity_ok | parity_ok |
| additional_examples | 120.074 | 1.733 | 0.01x | 2.682 | 0.02x | 102.1 | 429.2 | 4.20x | 884.3 | 8.66x | 0/193 (strict 0/193) | 0/193 (strict 0/193) | 9/9 | 9/9 | parity_ok | parity_ok |
| filteredW7XNetCDF_2species_magneticDrifts_noEr | 89.052 | 2.069 | 0.02x | 3.035 | 0.03x | 103.2 | 481.3 | 4.66x | 898.4 | 8.70x | 0/193 (strict 0/193) | 0/193 (strict 0/193) | 9/9 | 9/9 | parity_ok | parity_ok |
| filteredW7XNetCDF_2species_magneticDrifts_withEr | 95.440 | 2.011 | 0.02x | 3.188 | 0.03x | 96.2 | 512.0 | 5.32x | 904.6 | 9.40x | 0/193 (strict 0/193) | 0/193 (strict 0/193) | 9/9 | 9/9 | parity_ok | parity_ok |
| filteredW7XNetCDF_2species_noEr | 128.508 | 1.550 | 0.01x | 2.785 | 0.02x | 100.3 | 457.0 | 4.56x | 892.7 | 8.90x | 0/193 (strict 0/193) | 0/193 (strict 0/193) | 9/9 | 9/9 | parity_ok | parity_ok |
| geometryScheme4_1species_PAS_withEr_DKESTrajectories | 1.365 | 2.209 | 1.62x | 4.851 | 3.55x | 127.3 | 1080.1 | 8.49x | 1255.6 | 9.86x | 0/207 (strict 0/207) | 0/207 (strict 0/207) | 9/9 | 9/9 | parity_ok | parity_ok |
| geometryScheme4_2species_PAS_noEr | 0.953 | 2.622 | 2.75x | 5.807 | 6.09x | 162.7 | 1808.8 | 11.12x | 1816.8 | 11.17x | 0/207 (strict 0/207) | 0/207 (strict 0/207) | 9/9 | 9/9 | parity_ok | parity_ok |
| geometryScheme4_2species_noEr | 139.240 | 1.733 | 0.01x | 2.885 | 0.02x | 92.2 | 468.0 | 5.07x | 913.3 | 9.90x | 0/207 (strict 0/207) | 0/207 (strict 0/207) | 9/9 | 9/9 | parity_ok | parity_ok |
| geometryScheme4_2species_noEr_withPhi1InDKE | 293.275 | 1.936 | 0.01x | 3.491 | 0.01x | 100.6 | 480.9 | 4.78x | 942.1 | 9.36x | 0/265 (strict 0/265) | 0/265 (strict 0/265) | 9/9 | 9/9 | parity_ok | parity_ok |
| geometryScheme4_2species_noEr_withQN | 146.734 | 1.769 | 0.01x | 3.136 | 0.02x | 95.1 | 468.1 | 4.92x | 930.2 | 9.79x | 0/265 (strict 0/265) | 0/265 (strict 0/265) | 9/9 | 9/9 | parity_ok | parity_ok |
| geometryScheme4_2species_withEr_fullTrajectories | 58.053 | 1.710 | 0.03x | 2.832 | 0.05x | 113.4 | 475.5 | 4.19x | 907.7 | 8.01x | 0/193 (strict 0/193) | 0/193 (strict 0/193) | 9/9 | 9/9 | parity_ok | parity_ok |
| geometryScheme4_2species_withEr_fullTrajectories_withQN | 211.358 | 1.889 | 0.01x | 3.287 | 0.02x | 98.8 | 486.4 | 4.92x | 932.5 | 9.44x | 0/251 (strict 0/251) | 0/251 (strict 0/251) | 9/9 | 9/9 | parity_ok | parity_ok |
| geometryScheme5_3species_loRes | 98.976 | 1.615 | 0.02x | 3.942 | 0.04x | 129.6 | 540.3 | 4.17x | 911.1 | 7.03x | 0/193 (strict 0/193) | 0/193 (strict 0/193) | 9/9 | 9/9 | parity_ok | parity_ok |
| inductiveE_noEr | 166.614 | 1.644 | 0.01x | 3.037 | 0.02x | 99.2 | 468.4 | 4.72x | 912.5 | 9.20x | 0/207 (strict 0/207) | 0/207 (strict 0/207) | 9/9 | 9/9 | parity_ok | parity_ok |
| monoenergetic_geometryScheme1 | 0.795 | 1.924 | 2.42x | 12.959 | 16.30x | 110.2 | 704.0 | 6.39x | 995.1 | 9.03x | 0/203 (strict 0/203) | 0/203 (strict 0/203) | 9/9 | 9/9 | parity_ok | parity_ok |
| monoenergetic_geometryScheme11 | 0.861 | 2.853 | 3.31x | 5.705 | 6.63x | 118.7 | 1205.2 | 10.16x | 1003.1 | 8.45x | 0/210 (strict 0/210) | 0/210 (strict 0/210) | 9/9 | 9/9 | parity_ok | parity_ok |
| monoenergetic_geometryScheme5_ASCII | 1.052 | 1.678 | 1.59x | 4.347 | 4.13x | 142.1 | 497.0 | 3.50x | 989.4 | 6.96x | 0/207 (strict 0/207) | 0/207 (strict 0/207) | 9/9 | 9/9 | parity_ok | parity_ok |
| monoenergetic_geometryScheme5_netCDF | 1.029 | 1.884 | 1.83x | 4.446 | 4.32x | 131.4 | 533.7 | 4.06x | 988.5 | 7.52x | 0/207 (strict 0/207) | 0/207 (strict 0/207) | 9/9 | 9/9 | parity_ok | parity_ok |
| quick_2species_FPCollisions_noEr | 166.945 | 1.531 | 0.01x | 2.836 | 0.02x | 97.1 | 459.3 | 4.73x | 913.2 | 9.40x | 0/207 (strict 0/207) | 0/207 (strict 0/207) | 9/9 | 9/9 | parity_ok | parity_ok |
| sfincsPaperFigure3_geometryScheme11_FPCollisions_2Species_DKESTrajectories | 76.666 | 1.653 | 0.02x | 3.081 | 0.04x | 106.7 | 479.6 | 4.49x | 915.8 | 8.58x | 0/207 (strict 0/207) | 0/207 (strict 0/207) | 9/9 | 9/9 | parity_ok | parity_ok |
| sfincsPaperFigure3_geometryScheme11_FPCollisions_2Species_fullTrajectories | 93.439 | 1.767 | 0.02x | 3.286 | 0.04x | 94.0 | 491.9 | 5.24x | 920.3 | 9.80x | 0/207 (strict 0/207) | 0/207 (strict 0/207) | 9/9 | 9/9 | parity_ok | parity_ok |
| sfincsPaperFigure3_geometryScheme11_PASCollisions_2Species_DKESTrajectories | 1.104 | 2.658 | 2.41x | 6.606 | 5.98x | 130.7 | 1458.0 | 11.16x | 1587.7 | 12.15x | 0/207 (strict 0/207) | 0/207 (strict 0/207) | 9/9 | 9/9 | parity_ok | parity_ok |
| sfincsPaperFigure3_geometryScheme11_PASCollisions_2Species_fullTrajectories | 1.706 | 2.768 | 1.62x | 7.363 | 4.32x | 144.6 | 1482.2 | 10.25x | 2097.7 | 14.51x | 0/207 (strict 0/207) | 0/207 (strict 0/207) | 9/9 | 9/9 | parity_ok | parity_ok |
| tokamak_1species_FPCollisions_noEr | 160.856 | 1.395 | 0.01x | 2.530 | 0.02x | 93.2 | 372.9 | 4.00x | 860.4 | 9.23x | 0/188 (strict 0/188) | 0/188 (strict 0/188) | 9/9 | 9/9 | parity_ok | parity_ok |
| tokamak_1species_FPCollisions_noEr_withPhi1InDKE | 259.575 | 1.783 | 0.01x | 3.442 | 0.01x | 89.6 | 450.1 | 5.02x | 932.9 | 10.41x | 0/274 (strict 0/274) | 0/274 (strict 0/274) | 9/9 | 9/9 | parity_ok | parity_ok |
| tokamak_1species_FPCollisions_noEr_withQN | 237.879 | 1.508 | 0.01x | 2.886 | 0.01x | 102.6 | 421.5 | 4.11x | 918.4 | 8.95x | 0/274 (strict 0/274) | 0/274 (strict 0/274) | 9/9 | 9/9 | parity_ok | parity_ok |
| tokamak_1species_FPCollisions_withEr_DKESTrajectories | 155.955 | 1.510 | 0.01x | 2.937 | 0.02x | 103.1 | 431.4 | 4.18x | 905.7 | 8.79x | 0/214 (strict 0/214) | 0/214 (strict 0/214) | 9/9 | 9/9 | parity_ok | parity_ok |
| tokamak_1species_FPCollisions_withEr_fullTrajectories | 154.953 | 1.623 | 0.01x | 3.036 | 0.02x | 101.1 | 437.6 | 4.33x | 911.1 | 9.01x | 0/214 (strict 0/214) | 0/214 (strict 0/214) | 9/9 | 9/9 | parity_ok | parity_ok |
| tokamak_1species_PASCollisions_noEr | 0.309 | 2.098 | 6.79x | 4.846 | 15.68x | 114.2 | 579.0 | 5.07x | 984.9 | 8.62x | 0/212 (strict 0/212) | 0/212 (strict 0/212) | 9/9 | 9/9 | parity_ok | parity_ok |
| tokamak_1species_PASCollisions_noEr_Nx1 | 0.017 | 1.843 | 108.41x | 3.484 | 204.96x | 100.9 | 507.6 | 5.03x | 929.4 | 9.21x | 0/212 (strict 0/212) | 0/212 (strict 0/212) | 9/9 | 9/9 | parity_ok | parity_ok |
| tokamak_1species_PASCollisions_noEr_withQN | 0.888 | 1.977 | 2.23x | 3.388 | 3.82x | 120.9 | 533.7 | 4.42x | 987.8 | 8.17x | 0/274 (strict 0/274) | 0/274 (strict 0/274) | 9/9 | 9/9 | parity_ok | parity_ok |
| tokamak_1species_PASCollisions_withEr_fullTrajectories | 0.017 | 2.770 | 162.92x | 3.792 | 223.03x | 102.0 | 609.1 | 5.97x | 923.3 | 9.05x | 0/212 (strict 0/212) | 0/212 (strict 0/212) | 9/9 | 9/9 | parity_ok | parity_ok |
| tokamak_2species_PASCollisions_noEr | 0.331 | 2.436 | 7.36x | 4.698 | 14.19x | 123.6 | 863.5 | 6.99x | 1148.7 | 9.29x | 0/212 (strict 0/212) | 0/212 (strict 0/212) | 9/9 | 9/9 | parity_ok | parity_ok |
| tokamak_2species_PASCollisions_withEr_fullTrajectories | 1.330 | 3.247 | 2.44x | 7.315 | 5.50x | 121.8 | 1607.5 | 13.19x | 1246.0 | 10.23x | 0/212 (strict 0/212) | 0/212 (strict 0/212) | 9/9 | 9/9 | parity_ok | parity_ok |
| transportMatrix_geometryScheme11 | 0.025 | 1.582 | 63.28x | 3.191 | 127.62x | 102.6 | 438.8 | 4.28x | 925.9 | 9.02x | 0/194 (strict 0/194) | 0/194 (strict 0/194) | 9/9 | 9/9 | parity_ok | parity_ok |
| transportMatrix_geometryScheme2 | 0.031 | 1.576 | 50.85x | 3.188 | 102.83x | 100.5 | 436.4 | 4.34x | 924.7 | 9.20x | 0/194 (strict 0/194) | 0/194 (strict 0/194) | 9/9 | 9/9 | parity_ok | parity_ok |
Largest CPU runtime improvements vs tests/scaled_example_suite_recheck_cpu_frozen_2026-04-23_postkeyfix/suite_report.json:
- monoenergetic_geometryScheme5_ASCII: 3.5s -> 1.7s (delta=1.8s)
- tokamak_2species_PASCollisions_noEr: 4.0s -> 2.4s (delta=1.6s)
- HSX_PASCollisions_fullTrajectories: 5.3s -> 3.7s (delta=1.6s)
- HSX_PASCollisions_DKESTrajectories: 5.5s -> 4.0s (delta=1.5s)
- sfincsPaperFigure3_geometryScheme11_PASCollisions_2Species_fullTrajectories: 3.8s -> 2.8s (delta=1.0s)
Largest CPU memory improvements vs tests/scaled_example_suite_recheck_cpu_frozen_2026-04-23_postkeyfix/suite_report.json:
- monoenergetic_geometryScheme5_ASCII: 3066.4 MB -> 497.0 MB (delta=2569.4 MB)
- tokamak_2species_PASCollisions_noEr: 2088.6 MB -> 863.5 MB (delta=1225.1 MB)
- HSX_PASCollisions_DKESTrajectories: 2053.6 MB -> 1146.3 MB (delta=907.3 MB)
- sfincsPaperFigure3_geometryScheme11_PASCollisions_2Species_fullTrajectories: 2298.6 MB -> 1482.2 MB (delta=816.4 MB)
- monoenergetic_geometryScheme5_netCDF: 1162.7 MB -> 533.7 MB (delta=629.0 MB)
Documentation
Build docs locally:
sphinx-build -b html -W docs docs/_build/html
Entry points:
- docs/index.rst
- docs/system_equations.rst
- docs/method.rst
- docs/normalizations.rst
- docs/performance.rst
- docs/parallelism.rst
Testing
pytest -q
License
See LICENSE.