The Cross-Entropy Method for rare-event sampling or optimization.
The Cross-Entropy Method
The Cross-Entropy Method (CE or CEM) is an approach to optimization or rare-event sampling within a given family of distributions {D_p}, guided by a score function R(x).
- In its sampling version, it is given a reference parameter p0 and aims to sample from the tail of the distribution, x ~ (D_p0 | R(x) < q), where the threshold q is specified either directly as a number or as a quantile alpha of R (i.e., q = q_alpha(R)).
- In its optimization version, it aims to find argmin_x R(x).
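As a generic illustration of the optimization version (a minimal sketch for a 1-D Gaussian family, not the package's own code), each iteration samples from the current D_p, keeps the lowest-scoring "elite" fraction, and refits the distribution parameters to the elites:

```python
import numpy as np

def cem_minimize(score, mu=0.0, sigma=5.0, n_samples=100,
                 elite_frac=0.1, n_iters=50):
    """Sketch of CEM minimization with D_p = N(mu, sigma)."""
    rng = np.random.default_rng(0)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iters):
        x = rng.normal(mu, sigma, n_samples)        # sample from current D_p
        elites = x[np.argsort(score(x))[:n_elite]]  # lowest scores are best
        mu, sigma = elites.mean(), elites.std() + 1e-6  # refit parameters
    return mu
```

For example, `cem_minimize(lambda x: (x - 3) ** 2)` converges to a value near the minimizer x = 3.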
The exact implementation of the CEM depends on the family of distributions {D_p} defined by the problem. This repo provides a general implementation as an abstract class; a concrete use case only requires writing a small inherited class.
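The abstract-class pattern can be sketched as follows. The class and method names here are hypothetical illustrations of the idea, not the package's actual interface (see tutorial.ipynb for that): the base class owns the CEM loop, and the subclass only specifies how to sample from D_p and how to refit p to the elite samples.

```python
from abc import ABC, abstractmethod
import numpy as np

class CEMBase(ABC):
    """Hypothetical base class: concrete families implement two methods."""

    @abstractmethod
    def do_sample(self, p, n):
        """Draw n samples from D_p."""

    @abstractmethod
    def update_sample_distribution(self, elites):
        """Refit the parameters p to the elite samples."""

class GaussianCEM(CEMBase):
    """A concrete family D_p = N(mu, sigma): the kind of small subclass required."""

    def do_sample(self, p, n):
        mu, sigma = p
        return np.random.default_rng(0).normal(mu, sigma, n)

    def update_sample_distribution(self, elites):
        elites = np.asarray(elites)
        return elites.mean(), elites.std() + 1e-6
```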
The attached tutorial.ipynb provides a more detailed background on the CEM and on this package, along with usage examples.
Installation: pip install cross-entropy-method
CEM for sampling (left): the mean of the sample distribution (green) shifts from the mean of the original distribution (blue) toward its 10%-tail (orange). CEM for optimization (right): the mean of the sample distribution is driven toward the minimum. (Images from tutorial.ipynb.)
Supporting non-stationary score functions
On top of the standard CEM, we also support a non-stationary score function R. A changing R changes the reference distribution of the scores over time, and hence the quantile threshold q (when specified as a quantile). Therefore, q has to be repeatedly re-estimated, using importance-sampling corrections to compensate for the distributional shift induced by the CEM.
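The importance-sampling correction can be sketched as a weighted empirical quantile (an illustrative implementation with hypothetical names, not the package's code): each sample drawn from the shifted distribution D_p_t receives the weight w_i = p0(x_i) / p_t(x_i), so that the weighted quantile estimates the quantile under the reference p0.

```python
import numpy as np

def weighted_quantile(scores, weights, alpha):
    """Estimate the alpha-quantile of the scores under the reference
    distribution p0, from samples drawn from a shifted distribution p_t,
    using importance weights w_i = p0(x_i) / p_t(x_i)."""
    order = np.argsort(scores)
    s = np.asarray(scores, float)[order]
    w = np.asarray(weights, float)[order]
    cdf = np.cumsum(w) / w.sum()           # self-normalized weighted CDF
    return s[np.searchsorted(cdf, alpha)]  # first score with CDF >= alpha

# Example: samples from the shifted N(1,1), reference N(0,1); the corrected
# median estimate should be near 0 (the reference median) rather than near 1.
rng = np.random.default_rng(1)
x = rng.normal(1.0, 1.0, 50_000)
log_w = -0.5 * x**2 + 0.5 * (x - 1.0)**2   # log p0(x) - log p_t(x)
q = weighted_quantile(x, np.exp(log_w), alpha=0.5)
```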
In our separate work, we demonstrate the use of the CEM for the more realistic problem of sampling high-risk environment conditions in risk-averse reinforcement learning. There, D_p determines the distribution of the environment conditions, p0 corresponds to the original (test) distribution, and R(x; agent) is the return function of the agent under the conditions x. Since the agent evolves during training, the score function is indeed non-stationary.
Cite this repo
@misc{cross_entropy_method,
  title={Cross Entropy Method with Non-stationary Score Function},
  author={Ido Greenberg},
  howpublished={\url{https://pypi.org/project/cross-entropy-method/}},
  year={2022}
}