ionBench
A benchmarking tool for comparing different parameter optimization algorithms for ion channel models.
Project Structure
The tree structure of this project is outlined below.
├───docs
├───ionbench
│   ├───modification
│   ├───benchmarker
│   ├───data
│   │   ├───loewe2016
│   │   ├───moreno2016
│   │   └───staircase
│   ├───optimisers
│   │   ├───external_optimisers
│   │   ├───pints_optimisers
│   │   ├───scipy_optimisers
│   │   └───spsa_spsa.py
│   ├───problems
│   ├───uncertainty
│   └───multistart.py
└───test
The docs directory contains information and guides on how to use the benchmarker, the test problems and the optimisation algorithms.
The ionbench directory contains the majority of the code, including the benchmarker and problems classes and the different optimisation algorithms. This is what is installed using pip.
- The modification subdirectory contains the modification classes, which provide generalised settings for handling parameter transformations and bounds.
- The benchmarker subdirectory contains the main Benchmarker class that the test problems all inherit from, and defines the core features of the benchmarkers. It also contains the Tracker class, which records performance metrics over time (a short usage sketch follows this list).
- The data subdirectory is split up by test problem. Each subdirectory contains the Myokit .mmt model files, the voltage clamp protocols stored as .csv files where relevant, and the output data used to train the models, also stored as a .csv.
- The optimisers subdirectory contains all of the optimisation algorithms that are currently implemented. These are further subdivided into three directories, containing the optimisers from pints, the optimisers from scipy, and other optimisation algorithms used in fitting ion channel models that have been implemented specifically for ionBench.
- The problems subdirectory contains the classes for the available benchmarking problems. This features the problems from Loewe et al. 2016 and Moreno et al. 2016. In addition to these previously defined problems, we have introduced two further test problems: a Hodgkin-Huxley IKr model from Beattie et al. 2017 and a Markov IKr model from Fink et al. 2008.
- The final subdirectory, uncertainty, contains functions for assessing uncertainty and unidentifiability in the problems, such as calculating profile likelihood plots and the Fisher Information Matrix.
- multistart.py provides a tool for rerunning an optimiser to derive average performance metrics.
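As a quick sketch of how a benchmarker object is used directly, the snippet below constructs the staircase Hodgkin-Huxley problem and evaluates a cost. The sample() and cost() method names are assumptions for illustration only; see the docs directory for the exact interface.
import ionbench
# Construct the staircase Hodgkin-Huxley benchmark problem
bm = ionbench.problems.staircase.HH_Benchmarker()
# The method names below are assumptions for illustration; check docs/ for the real API
p = bm.sample()    # draw a parameter vector to start from
print(bm.cost(p))  # cost of those parameters against the stored training data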
The test directory contains unit tests to ensure that changes do not break existing functionality.
Installation
ionBench can be installed using pip.
pip install ionbench
Note that ionBench uses myokit to run its simulations, which relies on CVODES (from Sundials). Linux and macOS users need a working installation of CVODES; for Windows users, CVODES should be installed automatically with myokit.
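To check the installation, the first command below simply confirms that ionBench imports, and the second runs myokit's own diagnostic, which should report whether CVODES was found (assuming a standard Python setup where both packages were installed by pip).
python -c "import ionbench"
python -m myokit system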
Getting Started
If you want to use ionBench, check out the introduction and tutorial in the docs directory.
Workflow
The intended workflow for using the benchmarker is to generate a benchmarker object, set up the optimiser's modification and apply it to the benchmarker, and then pass the benchmarker into the optimisation algorithm to evaluate. All optimisers accept a single benchmarker as input, with all other inputs being optional.
import ionbench
# Construct a benchmarker for the staircase Hodgkin-Huxley problem
bm = ionbench.problems.staircase.HH_Benchmarker()
# Get the modification (transformations and bounds) associated with CMA-ES and apply it
modification = ionbench.optimisers.pints_optimisers.cmaes_pints.get_modification()
modification.apply(bm)
# Run the matching optimiser on the benchmarker
optimisedParameters = ionbench.optimisers.pints_optimisers.cmaes_pints.run(bm)
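Because all optimisers share this interface, switching algorithms only means changing the module. For example, the same problem can be fitted with scipy's Nelder-Mead; this assumes nelderMead_scipy exposes the same get_modification()/run() pair as the pints optimisers.
import ionbench
bm = ionbench.problems.staircase.HH_Benchmarker()
# Assumes nelderMead_scipy follows the same get_modification()/run() interface
modification = ionbench.optimisers.scipy_optimisers.nelderMead_scipy.get_modification()
modification.apply(bm)
optimisedParameters = ionbench.optimisers.scipy_optimisers.nelderMead_scipy.run(bm)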
Future Features
- Bounds - Currently, only bounds on the parameters can be included in the benchmarker, but it would be nice to have bounds on the rates. Additionally, it would be good to include barrier function style bounds to allow them to work nicely with gradient-based methods (a generic sketch of this idea follows this list).
- Additional optimisation algorithms - There are still lots of different algorithms from various papers for fitting ion channel models to include (current plans include a further 29 external optimisers).
- Parallelisation - It is not yet clear how well the benchmarker would handle being run in parallel (specifically for the tracker), but it is something worth looking into.
- Real data - Both Moreno et al. 2016 and Loewe et al. 2016 include real data in their papers. It would be nice to see how the algorithms handle fitting to real data, but it is not clear how best to implement the performance metrics, two of which rely on knowing the true parameters.
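For the barrier function idea mentioned under Bounds above, here is a minimal, ionBench-independent sketch (none of the names below come from ionBench): the unpenalised cost is wrapped with a log barrier that grows without bound as parameters approach the box bounds, keeping gradient-based optimisers strictly inside the feasible region.
import numpy as np

def barrier_cost(cost, p, lower, upper, mu=1e-3):
    # Generic log-barrier wrapper, not ionBench code: returns cost(p) plus a
    # penalty that tends to infinity as p approaches the bounds; mu sets strength
    p = np.asarray(p, dtype=float)
    if np.any(p <= lower) or np.any(p >= upper):
        return np.inf  # infeasible (or on the boundary): reject outright
    penalty = -np.sum(np.log(p - lower)) - np.sum(np.log(upper - p))
    return cost(p) + mu * penalty

# Example: a quadratic cost with each parameter bounded in (0, 1)
print(barrier_cost(lambda p: float(np.sum(p ** 2)), [0.5, 0.25], 0.0, 1.0))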