
ABC random forests for model choice and parameter estimation, python wrapper


Random forest methodologies for:

  • ABC model choice [@pudlo2015reliable]
  • ABC Bayesian parameter inference [@raynal2016abc]

Libraries we use:

  • Ranger [@wright2015ranger] : we use our own fork and have tuned the forests to do “online”[1] computations (growing trees AND making predictions in the same pass, which removes the need for in-memory storage of the whole forest)[2].
  • Eigen3 [@eigenweb]

Of note, we use our own implementations of LDA and PLS from [@friedman2001elements{81, 114}].

There is one set of binaries, containing a macOS/Linux/Windows (x64 only) binary for each platform. They are available within the “Releases” tab, under the “Assets” section (unfold it to see the list).

These are pure command-line binaries, and there are no prerequisites or library dependencies required to run them. Just download them and launch them from your terminal software of choice. The usual caveats with command-line executables apply here: if you’re not proficient with the command-line interface of your platform, please learn some basics or ask someone who can help you with it.

The standalone binary is part of DIYABC-RF, a specialized graphical interface for population genetics, with a submission (currently under review) to MER (Molecular Ecology Resources) [@Collin_2020].

Python

Installation

pip install pyabcranger
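
A quick sanity check of the installation (assuming the importable module name matches the package name):

python -c "import pyabcranger"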

Notebook examples

Usage

 - ABC Random Forest - Model choice or parameter estimation command line options
Usage:
  ../build/abcranger [OPTION...]

  -h, --header arg        Header file (default: headerRF.txt)
  -r, --reftable arg      Reftable file (default: reftableRF.bin)
  -b, --statobs arg       Statobs file (default: statobsRF.txt)
  -o, --output arg        Prefix output (modelchoice_out or estimparam_out by
                          default)
  -n, --nref arg          Number of samples, 0 means all (default: 0)
  -m, --minnodesize arg   Minimal node size. 0 means 1 for classification or
                          5 for regression (default: 0)
  -t, --ntree arg         Number of trees (default: 500)
  -j, --threads arg       Number of threads, 0 means all (default: 0)
  -s, --seed arg          Seed, generated by default (default: 0)
  -c, --noisecolumns arg  Number of noise columns (default: 5)
      --nolinear          Disable LDA for model choice or PLS for parameter
                          estimation
      --plsmaxvar arg     Percentage of maximum explained Y-variance for
                          retaining pls axis (default: 0.9)
      --chosenscen arg    Chosen scenario (mandatory for parameter
                          estimation)
      --noob arg          number of oob testing samples (mandatory for
                          parameter estimation)
      --parameter arg     name of the parameter of interest (mandatory for
                          parameter estimation)
  -g, --groups arg        Groups of models
      --help              Print help
  • If you provide --chosenscen, --parameter and --noob, parameter estimation mode is selected.
  • Otherwise, model choice mode is selected by default.
  • The linear additions are LDA for model choice and PLS for parameter estimation; the --nolinear option disables them in both cases.

Model Choice

Terminal model choice

Example


abcranger -t 10000 -j 8

Header, reftable and statobs files should be in the current directory.

Groups

With the option -g (or --groups), you may “group” your models into several groups. For example, if you have six models labeled from 1 to 6, -g "1,2,3;4,5,6" defines two groups, as in the example below.
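
For instance, reusing the flags from the model choice example above:

abcranger -t 10000 -j 8 -g "1,2,3;4,5,6"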

Generated files

Four files are created:

  • modelchoice_out.ooberror : OOB error rate vs number of trees (the line number is the number of trees; see the plotting sketch after this list)
  • modelchoice_out.importance : variable importances (sorted)
  • modelchoice_out.predictions : votes, prediction and posterior error rate
  • modelchoice_out.confusion : OOB confusion matrix of the classifier
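
For instance, a minimal Python sketch to plot the OOB error trajectory, assuming one error value per line as described above:

import numpy as np
import matplotlib.pyplot as plt

# one OOB error value per line; the line number is the number of trees
oob = np.loadtxt("modelchoice_out.ooberror")
plt.plot(np.arange(1, len(oob) + 1), oob)
plt.xlabel("number of trees")
plt.ylabel("OOB error rate")
plt.show()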

Parameter Estimation

Terminal parameter estimation

Composite parameters

When specifying the parameter (option --parameter), one may specify simple composite parameters as the division, addition or multiplication of two existing parameters, like t/N or T1+T2.
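
For instance, assuming the reference table defines parameters t and N (the other flags are from the parameter estimation example below):

abcranger -t 1000 -j 8 --parameter t/N --chosenscen 1 --noob 50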

A note about PLS heuristic

The --plsmaxvar option (defaulting to 0.90) fixes the number of selected PLS axes so that we get at least the specified percentage of the maximum explained variance of the output. The explained variance of the output for the first m axes is defined by the R-squared of the output:

Yvar^m = \frac{\sum_{i=1}^{N}{(\hat{y}^{m}_{i}-\bar{y})^2}}{\sum_{i=1}^{N}{(y_{i}-\bar{y})^2}}

where \hat{y}^{m} is the output Y scored by the PLS with the first m components. So, only the first n_{comp} axes are kept, with M being the total number of possible axes:

n_{comp} = \min \left\{ m : Yvar^{m} \geq 0.90 \, Yvar^{M} \right\}

Note that if you specify 0 as --plsmaxvar, an “elbow” heuristic is activated, where the following condition is tested for every computed axis:

\frac{Yvar^{k+1}+Yvar^{k}}{2} \geq 0.99(N-k)\left(Yvar^{k+1}-Yvar^{k}\right)

If this condition holds over a window of previous axes, sized at 10% of the total number of possible axes, then the PLS axis computation stops.

In practice, we find this n_{heur} close enough to the n_{comp} obtained with a 99% threshold, but this is not guaranteed.
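
For illustration, here is a minimal Python sketch (not the abcranger implementation) of the two axis-selection heuristics above. yvar is assumed to hold the cumulative explained Y-variance per component (as written, e.g., to estimparam_out.plsvar), and N in the elbow condition is taken to be the total number of possible axes:

import numpy as np

def select_ncomp(yvar, plsmaxvar=0.9):
    # yvar[m-1] = cumulative explained Y-variance with the first m axes
    yvar = np.asarray(yvar)
    M = len(yvar)  # total number of possible axes (assumed value of N)
    if plsmaxvar > 0:
        # smallest m such that Yvar^m >= plsmaxvar * Yvar^M
        return int(np.argmax(yvar >= plsmaxvar * yvar[-1])) + 1
    # "elbow" heuristic: stop once the condition holds over a window of
    # consecutive axes sized at 10% of the total possible axes
    window, hits = max(1, M // 10), 0
    for k in range(M - 1):
        lhs = (yvar[k + 1] + yvar[k]) / 2
        rhs = 0.99 * (M - k) * (yvar[k + 1] - yvar[k])
        hits = hits + 1 if lhs >= rhs else 0
        if hits >= window:
            return k + 1
    return M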

The meaning of the noob parameter

The median global/local statistics and confidence interval (global) measures for parameter estimation need a number of OOB samples (--noob) to be reliable (typically 30% of the size of the dataset is sufficient). Be aware that computing the weight predictions [@raynal2016abc] on the whole set (i.e. setting --noob to the same value as --nref) could be very costly, memory- and cpu-wise, if your dataset has a large number of samples, so it is advisable to compute them on only a subset of size noob.
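
For instance, with a reference table of 10,000 samples, a noob of around 30% of it would typically suffice:

abcranger -t 1000 -j 8 --parameter ra --chosenscen 1 --nref 10000 --noob 3000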

Example (parameter estimation)

Example (working with the dataset in test/data):

abcranger -t 1000 -j 8 --parameter ra --chosenscen 1 --noob 50

Header, reftable and statobs files should be in the current directory.

Generated files (parameter estimation)

Five files (or seven if PLS is activated) are created:

  • estimparam_out.ooberror : OOB MSE vs number of trees (the line number is the number of trees)
  • estimparam_out.importance : variable importances (sorted)
  • estimparam_out.predictions : expectation, variance and 0.05, 0.5, 0.95 quantiles of the prediction
  • estimparam_out.predweights : csv of the value/weight pairs of the prediction, for density plots (see the sketch after this list)
  • estimparam_out.oobstats : various statistics on OOB (MSE, NMSE, NMAE, etc.)

If PLS is enabled:

  • estimparam_out.plsvar : variance explained by number of components
  • estimparam_out.plsweights : variable weight in the first component (sorted by absolute value)
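
A minimal sketch for such a density plot from the predweights file, assuming a headerless two-column csv of value/weight pairs (the exact layout is an assumption):

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

# value/weight pairs of the prediction (headerless csv layout assumed)
values, weights = np.loadtxt("estimparam_out.predweights", delimiter=",", unpack=True)
kde = gaussian_kde(values, weights=weights)  # weighted kernel density estimate
xs = np.linspace(values.min(), values.max(), 500)
plt.plot(xs, kde(xs))
plt.xlabel("parameter value")
plt.ylabel("density")
plt.show()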

TODO

Input/Output

  • Integrate hdf5 (or exdir? msgpack?) routines to save/load reftables/observed stats with associated metadata
  • Provide R code to save/load the data
  • Provide Python code to save/load the data

C++ standalone

  • Merge the two methodologies into a single executable with (almost) the same options
  • (Optional) Possibly move to another options parser (CLI?)

External interfaces

  • R package
  • Python package

Documentation

  • Code documentation
  • Document the build

Continuous integration

  • Fix travis build. Currently the vcpkg download of eigen3 head is broken.
  • macOS travis build
  • Appveyor win32 build

Long/Mid term TODO

  • auto-tuning of methodology parameters
    • auto-discover the optimal number of trees by monitoring the OOB error
    • auto-limit the number of threads by available memory
  • Streamline the two methodologies (model choice and then parameter estimation)
  • Write our own tree/rf implementation with better storage efficiency than ranger
  • Make functional tests for the two methodologies
  • Possibly use Mondrian forests for online batches? See [@lakshminarayanan2014mondrian]

References

This work has been the subject of a proceedings paper at JOBIM 2020, with PDF and video (in French) [@collin:hal-02910067].

[1] The term “online” here and in the code does not have the usual meaning it has in “online machine learning”: we still need the entire training data set at once. Our implementation is “online” not in the sequential order of the input data, but in the sequential order of computation of the trees in the random forests, which are computed one by one and then discarded.

[2] We only use the C++ Core of ranger, which is under MIT License, same as ours.
