The Cross-Entropy Method for either rare-event sampling or optimization.

The Cross-Entropy Method

The Cross-Entropy Method (CE or CEM) is an approach to optimization or rare-event sampling, given a parametric family of distributions {D_p} and a score function R(x).

  • In its sampling version, it is given a reference parameter p0 and aims to sample from the tail of the distribution, x ~ (D_p0 | R(x)<q), where the threshold q is specified either directly as a numeric value or via a quantile level alpha (in which case q = q_alpha(R)).
  • In its optimization version, it aims to find argmin_x{R(x)}.

The exact implementation of the CEM depends on the family of distributions {D_p} defined by the problem. This repo provides a general implementation as an abstract class; using it on a concrete problem only requires writing a small subclass. The attached tutorial.ipynb provides more detailed background on the CEM and on this package, along with usage examples.
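To illustrate the abstract-class pattern described above, here is a minimal, self-contained sketch of a cross-entropy optimization loop. The class and method names (CEMBase, GaussianCEM, sample, update, optimize) are illustrative assumptions, not this package's actual API; see tutorial.ipynb for the real interface.

```python
import random
import statistics
from abc import ABC, abstractmethod

class CEMBase(ABC):
    """Generic cross-entropy loop: sample, score, refit to the elite tail.
    (Illustrative sketch, not the package's actual API.)"""

    def __init__(self, n_samples=100, elite_frac=0.1):
        self.n_samples = n_samples
        self.n_elite = max(2, int(n_samples * elite_frac))

    @abstractmethod
    def sample(self):
        """Draw one x from the current distribution D_p."""

    @abstractmethod
    def update(self, elite):
        """Refit the distribution parameters p to the elite samples."""

    def optimize(self, score, n_iters=50):
        """Minimize score(x): keep the lowest-scoring samples as the elite."""
        for _ in range(n_iters):
            xs = [self.sample() for _ in range(self.n_samples)]
            xs.sort(key=score)                  # argmin: lowest scores first
            self.update(xs[:self.n_elite])
        return self.sample()

class GaussianCEM(CEMBase):
    """Concrete family: D_p = N(mu, sigma^2) over the reals."""

    def __init__(self, mu=0.0, sigma=5.0, **kw):
        super().__init__(**kw)
        self.mu, self.sigma = mu, sigma

    def sample(self):
        return random.gauss(self.mu, self.sigma)

    def update(self, elite):
        self.mu = statistics.mean(elite)
        self.sigma = max(statistics.stdev(elite), 1e-3)  # keep sigma positive

random.seed(0)
cem = GaussianCEM()
x = cem.optimize(score=lambda x: (x - 3.0) ** 2)   # minimum at x = 3
```

Only the subclass (here, GaussianCEM) knows the distribution family; the generic loop handles sampling, elite selection, and iteration, which is the division of labor the abstract class enables.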

Installation: pip install cross-entropy-method.

CEM for sampling (left): the mean of the sample distribution (blue) aims to coincide with the mean of the tail of the original distribution (black). CEM for optimization (right): the mean of the sample distribution aims to be minimized. (images from tutorial.ipynb)

Supporting non-stationary score functions

In addition to the standard CEM, this package supports a non-stationary score function R. A changing R shifts the reference distribution of scores and thus the quantile threshold q (when q is specified as a quantile). Hence q has to be re-estimated repeatedly, using an importance-sampling correction to compensate for the distributional shift induced by the CEM.
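The re-estimation step can be sketched as a weighted quantile: samples drawn from the shifted CEM distribution p are reweighted by w = p0(x)/p(x) so that the quantile is estimated under the original distribution p0. The helper names below (weighted_quantile, estimate_q, normal_pdf) are illustrative, not the package's API.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2); used here only for the toy example."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def weighted_quantile(scores, weights, alpha):
    """alpha-quantile of `scores` under the distribution implied by `weights`."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    total = sum(weights)
    acc = 0.0
    for i in order:
        acc += weights[i]
        if acc >= alpha * total:
            return scores[i]
    return scores[order[-1]]

def estimate_q(xs, R, p0_pdf, p_pdf, alpha):
    """Estimate q_alpha(R) under p0, from samples xs drawn from the shifted p."""
    scores = [R(x) for x in xs]
    w = [p0_pdf(x) / p_pdf(x) for x in xs]   # importance-sampling correction
    return weighted_quantile(scores, w, alpha)

# Toy check: sample from a widened p = N(0, 2), recover a quantile of p0 = N(0, 1).
random.seed(0)
xs = [random.gauss(0.0, 2.0) for _ in range(20000)]
q = estimate_q(
    xs,
    R=lambda x: x,
    p0_pdf=lambda x: normal_pdf(x, 0.0, 1.0),
    p_pdf=lambda x: normal_pdf(x, 0.0, 2.0),
    alpha=0.1,
)
# q approximates the 10% quantile of N(0, 1), which is about -1.28
```

Without the weights, the quantile would be computed under the shifted distribution p rather than the reference p0, which is exactly the bias the importance-sampling correction removes.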

In our separate work, we demonstrate the use of the CEM for the more realistic problem of sampling high-risk environment conditions in risk-averse reinforcement learning. There, D_p determines the distribution of the environment conditions, p0 corresponds to the original (test) distribution, and R(x; agent) is the return function of the agent under the conditions x. Since the agent evolves during training, the score function is indeed non-stationary.

Cite this repo

@misc{cross_entropy_method,
  title={Cross Entropy Method with Non-stationary Score Function},
  author={Ido Greenberg},
  howpublished={\url{https://pypi.org/project/cross-entropy-method/}},
  year={2022}
}

Download files

Source distribution: cross-entropy-method-0.0.6.tar.gz (11.6 kB)

Built distribution: cross_entropy_method-0.0.6-py3-none-any.whl (17.8 kB, Python 3)
