Confidence intervals for binomial distributions.
binomial_cis
This package computes confidence intervals for the probability of success parameter, $p$, of a binomial distribution. The confidence intervals computed by this package cover $p$ with exactly the user-specified probability and have minimal excess length.
Installation
Install the package with pip:
```
pip install git+https://github.com/TRI-ML/binomial_cis.git
```
What is a binomial confidence interval?
The binomial distribution represents the likelihood of observing $k$ successes in $n$ trials where the probability of success for each trial is $p$. One often does not know the true value of $p$ and wishes to estimate this value. After observing $k$ successes in $n$ trials with unknown probability of success $p$, a confidence interval (CI) is constructed in such a way that it contains the true value of $p$ with some high probability.
In constructing confidence intervals one has to trade off between three quantities:
- Confidence: The probability that the CI contains the true parameter. Often written as $1-\alpha$ where $\alpha$ is small.
- Volume: The length of the CI. If the CI is constructed as a lower bound on $p$ (i.e. $[\underline{p}, 1]$), then we care about the length of the CI which is below $p$. This is known as the shortage.
- Number of samples: In general, with more samples one can construct CIs with higher confidence and smaller volume.
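To make these quantities concrete, the sketch below simulates many experiments and measures the empirical coverage and shortage of a one-sided lower bound. It uses a hand-rolled Clopper-Pearson lower bound built from SciPy's beta quantile; the helper `clopper_pearson_lb` is purely illustrative and is not the interval this package produces.

```python
# Monte Carlo illustration of coverage and shortage for a lower confidence bound.
# clopper_pearson_lb is a hypothetical helper written only for this illustration;
# it is not part of binomial_cis or any package listed below.
import numpy as np
from scipy.stats import beta

def clopper_pearson_lb(k, n, alpha):
    """One-sided Clopper-Pearson lower bound with confidence 1 - alpha."""
    return 0.0 if k == 0 else beta.ppf(alpha, k, n - k + 1)

p_true, n, alpha = 0.7, 50, 0.05
lb_for_k = np.array([clopper_pearson_lb(k, n, alpha) for k in range(n + 1)])

rng = np.random.default_rng(0)
ks = rng.binomial(n, p_true, size=100_000)    # simulated experiments
lbs = lb_for_k[ks]                            # lower bound for each experiment

print(np.mean(lbs <= p_true))                 # empirical coverage: at least 0.95 up to MC error
print(np.mean(np.maximum(p_true - lbs, 0)))   # empirical expected shortage at p_true
```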
Why does this package exist?
Existing implementations of binomial CIs fail to optimally control the tradeoffs between coverage, volume, and number of samples. What this means in practice is that if a user specifies $k$ and $n$, existing implementations return CIs with more/less coverage than desired and/or CIs with higher volume than necessary.
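For example, the exact coverage of any interval rule at a given $p$ can be computed by summing the binomial pmf over the outcomes $k$ whose interval contains $p$. Doing this for two statsmodels methods (a sketch, assuming statsmodels and SciPy are installed) shows the Wald interval falling well below the nominal level while Clopper-Pearson exceeds it:

```python
# Exact coverage of a two-sided CI rule at a fixed p: sum the binomial pmf over
# every k whose interval contains p. Uses statsmodels' proportion_confint.
from scipy.stats import binom
from statsmodels.stats.proportion import proportion_confint

def exact_coverage(p, n, alpha, method):
    cov = 0.0
    for k in range(n + 1):
        lo, hi = proportion_confint(k, n, alpha=alpha, method=method)
        if lo <= p <= hi:
            cov += binom.pmf(k, n, p)
    return cov

n, p, alpha = 10, 0.1, 0.05
print(exact_coverage(p, n, alpha, "normal"))  # Wald: well below the nominal 0.95 here
print(exact_coverage(p, n, alpha, "beta"))    # Clopper-Pearson: at least 0.95
```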
Existing software implementations for binomial CIs include:
- statsmodels.stats.proportion.proportion_confint (Python)
- scipy.stats._result_classes.BinomTestResult.proportion_ci (Python)
- astropy.stats.binom_conf_interval (Python)
- scipy.stats.binom (Python)
- EBCIC (Python)
- binom.test (R)
- binom.confint (R)
- BinomCI (R)
- HypothesisTests.jl (Julia)
- RobustStats.jl (Julia)
- ClinicalTrialUtilities.jl (Julia)
- binofit (Matlab)
The methods these packages implement include:
- Wald/Normal [1]
- Agresti-Coull [1]
- Clopper-Pearson (*) [1]
- Wilson [1]
- Modified Wilson [1]
- Wilson with continuity correction (*) [1]
- Jeffreys [1]
- Bayesian uniform prior
- Inverting the binomial test (*) [3]
- Arcsine [1]
- Logit [1]
- Probit [8]
- Complementary log [8]
- Likelihood (Profile) [1]
- Witting (*) [4]
- Pratt [5]
- Mid-p [7]
- Blaker (*) [6]
- Second-order corrected [2]
The reference for each method points to a survey paper (where possible) rather than the original derivation of the method.
Only the methods marked with (*) are guaranteed to provide at least as much coverage as desired. However, none of these methods provides exactly the desired coverage (Witting might, but the listed reference is not freely available online and is published only in German).
It is also worth noting the ump R package, which implements UMP and UMPU hypothesis tests for the binomial distribution. Such tests could be leveraged to construct UMA and UMAU confidence intervals, but this does not appear to be implemented based on the documented functions. In addition, the authors note in the documentation that their implementation has issues with numerical stability.
Existing software for computing binomial CIs with exact coverage is thus either nonexistent or unsatisfactory. Unsurprisingly, there is also no open-source implementation for computing the expected shortage (or expected excess or expected width) of UMA and UMAU CIs, nor for computing the worst-case values of these quantities. This package exists to fill this gap.
What exactly does this package do?
This package constructs optimal confidence intervals for the probability of success parameter of a binomial distribution.
Lower Bounds
Given a user-specified miscoverage rate ($\alpha$) and maximum expected shortage ($\text{MES}$), return a lower bound on $p$ that satisfies the following requirements:
1. achieves exact desired coverage: $\mathbb{P}[\underline{p} \le p] = 1-\alpha$
2. $[\underline{p}, 1]$ is uniformly most accurate
3. achieves exact desired maximum expected shortage: $\max_p \ \mathbb{E}_p[\max (p - \underline{p}, 0)] = \text{MES}$
4. uses the minimum number of samples $n$ to achieve requirements 1, 2, and 3.
Upper Bounds
Given a user-specified miscoverage rate ($\alpha$) and maximum expected excess ($\text{MEE}$), return an upper bound on $p$ that satisfies the following requirements:
1. achieves exact desired coverage: $\mathbb{P}[p \le \overline{p}] = 1-\alpha$
2. $[0, \overline{p}]$ is uniformly most accurate
3. achieves exact desired maximum expected excess: $\max_p \ \mathbb{E}_p[\max (\overline{p} - p, 0)] = \text{MEE}$
4. uses the minimum number of samples $n$ to achieve requirements 1, 2, and 3.
Simultaneous Lower and Upper Bounds
Given a user-specified miscoverage rate ($\alpha$) and maximum expected width ($\text{MEW}$), return simultaneous lower and upper bounds on $p$ that satisfy the following requirements:
1. achieves exact desired coverage: $\mathbb{P}[\underline{p} \le p \le \overline{p}] = 1-\alpha$
2. $[\underline{p}, \overline{p}]$ is uniformly most accurate unbiased
3. achieves exact desired maximum expected width: $\max_p \ \mathbb{E}_p[\overline{p} - \underline{p}] = \text{MEW}$
4. uses the minimum number of samples $n$ to achieve requirements 1, 2, and 3.
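As a sanity check on these definitions, expected shortage at a single $p$ can be estimated by Monte Carlo: draw $k \sim \text{Binomial}(n, p)$, form the lower bound, and average $\max(p - \underline{p}, 0)$. The helper below is illustrative only and is not part of the package API; `max_expected_shortage`, shown later, handles the maximization over $p$.

```python
# Monte Carlo estimate of expected shortage at one fixed p.
# estimate_expected_shortage is a hypothetical helper for illustration only.
import numpy as np
from binomial_cis import binom_ci

def estimate_expected_shortage(p, n, alpha, n_mc=2_000, seed=0):
    rng = np.random.default_rng(seed)
    ks = rng.binomial(n, p, size=n_mc)
    # The lower bound is randomized, so it is recomputed for every draw.
    lbs = np.array([binom_ci(k, n, alpha, 'lb') for k in ks])
    return np.mean(np.maximum(p - lbs, 0.0))

print(estimate_expected_shortage(p=0.8, n=25, alpha=0.05))
```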
How do I use this package?
Lower Bounds
Find a lower bound on $p$:
```python
from binomial_cis import binom_ci

k = 5         # number of successes
n = 10        # number of trials
alpha = 0.05  # miscoverage probability

lb = binom_ci(k, n, alpha, 'lb')
```
Find maximum expected shortage given miscoverage rate and number of samples:
```python
from binomial_cis import max_expected_shortage  # assumes a top-level export

mes_ub, mes_lb, p_lb, num_iters = max_expected_shortage(alpha, n, tol=1e-3)
```
Upper Bounds
Find an upper bound on $p$:
```python
from binomial_cis import binom_ci

k = 5         # number of successes
n = 10        # number of trials
alpha = 0.05  # miscoverage probability

ub = binom_ci(k, n, alpha, 'ub')
```
Find maximum expected excess given miscoverage rate and number of samples:
```python
from binomial_cis import max_expected_excess  # assumes a top-level export

mee_ub, mee_lb, p_lb, num_iters = max_expected_excess(alpha, n, tol=1e-3)
```
2-Sided Bounds
Find simultaneous lower and upper bounds on $p$:
```python
from binomial_cis import binom_ci

k = 5         # number of successes
n = 10        # number of trials
alpha = 0.05  # miscoverage probability

lb, ub = binom_ci(k, n, alpha, 'lb,ub')
```
Find maximum expected width given miscoverage rate and number of samples:
```python
from binomial_cis import max_expected_width  # assumes a top-level export

mew_ub, mew_lb, p_lb, num_iters = max_expected_width(alpha, n, tol=1e-3)
```
Notebooks
The notebooks/ directory has the following notebooks:
- tradeoff_table.ipynb: Computes MES vs miscoverage rate $\alpha$ and number of samples $n$. Precomputed values are stored in MES_table.csv, which is visualized in a plot from the last cell of the notebook.
- visualizations.ipynb: Visualizes the mixed-monotonic forms of expected shortage and expected width. Also visualizes how these functions vary with $p$ and their maxima.

The tests/ directory has the following notebooks:
- binom_helper_validation.ipynb: Tests our implementation of the binomial coefficient, binomial pmf, and binomial cdf against their SciPy counterparts.
- conf_set_validation.ipynb: Tests the theoretical guarantees of the CIs (coverage, shortage, excess, width, unbiasedness) using Monte Carlo simulation.
Automated Tests
Not implemented yet, but future automated tests may include
- validating binomial_helper.py
- comparing the bounds with those in the appendix of Table of Neyman-shortest unbiased confidence intervals for the binomial parameter by Blyth and Hutchinson
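A first automated test could port the binom_helper_validation.ipynb checks to pytest. The sketch below assumes hypothetical names (binom_coef, binom_pmf, binom_cdf) and module path for binomial_helper.py; the actual names and signatures may differ.

```python
# Hypothetical pytest sketch mirroring binom_helper_validation.ipynb.
# The import path and the names binom_coef, binom_pmf, binom_cdf are assumptions
# about binomial_helper.py, used here only to illustrate the test structure.
import numpy as np
from scipy import special, stats
from binomial_cis.binomial_helper import binom_coef, binom_pmf, binom_cdf

def test_against_scipy():
    for n in [1, 5, 20]:
        for k in range(n + 1):
            assert np.isclose(binom_coef(n, k), special.comb(n, k))
            for p in [0.0, 0.1, 0.5, 0.9, 1.0]:
                assert np.isclose(binom_pmf(k, n, p), stats.binom.pmf(k, n, p))
                assert np.isclose(binom_cdf(k, n, p), stats.binom.cdf(k, n, p))
```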
Caution!
Randomized CIs
The methods used in this package to construct CIs are based on the inversion of randomized hypothesis tests. This means that calling binom_ci() with the same k, n, alpha can return a different CI each time. For the guarantees of the CI to hold, it is critical that the user construct only one CI for the experiment they have. Constructing multiple CIs and choosing the best one invalidates the guarantees of the CI.
For the 1-sided bounds there is the option to get less efficient but non-randomized CIs:
```python
lb = binom_ci(k, n, alpha, 'lb', randomized=False)
ub = binom_ci(k, n, alpha, 'ub', randomized=False)
```
These non-randomized 1-sided bounds are equivalent to 1-sided Clopper-Pearson bounds. We currently don't have an implementation of non-randomized 2-sided bounds.
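As a quick illustration (assuming the default behavior draws fresh randomization on each call), repeated calls with identical inputs behave differently in the two modes:

```python
# Randomized lower bounds typically differ across calls with identical inputs;
# the non-randomized (Clopper-Pearson) lower bound is deterministic.
from binomial_cis import binom_ci

k, n, alpha = 5, 10, 0.05

print(binom_ci(k, n, alpha, 'lb'), binom_ci(k, n, alpha, 'lb'))
# two (usually different) randomized lower bounds

print(binom_ci(k, n, alpha, 'lb', randomized=False),
      binom_ci(k, n, alpha, 'lb', randomized=False))
# identical deterministic lower bounds
```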
Randomization allows the CIs to be UMA. Although randomization has been a point of debate amongst statisticians, we take the view (first given by Mark Eudey) that insofar as construction of confidence intervals can be treated as a (von Neumann) game, randomization merely allows the statistician to employ a mixed strategy.
Multiple Tests
As with all CIs one must take special care when interpreting the results of multiple CIs constructed from independent tests. If one constructs $m$ CIs where the probability of each CI containing the true parameter is $1-\alpha$, then the probability that all $m$ CIs contain their respective parameters is less than $1-\alpha$. For more explanation, see the Wikipedia article on the multiple comparisons problem.
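For example, with $m$ independent CIs each at level $1-\alpha$, the joint coverage is $(1-\alpha)^m$; a simple conservative fix is to construct each CI at level $1-\alpha/m$ (Bonferroni correction):

```python
# Joint coverage of m independent CIs, and the Bonferroni correction.
m, alpha = 10, 0.05
print((1 - alpha) ** m)      # ~0.60: chance that all 10 independent 95% CIs cover
print((1 - alpha / m) ** m)  # ~0.95: per-CI level 1 - alpha/m restores joint coverage
```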
Building the Package
Activate the virtual environment and run

```
python -m build
```

You will need the build package for this (installable with pip install build).
Relevant Literature
Below are some of the papers that we found most useful for understanding binomial confidence intervals.
- Testing Statistical Hypotheses by Lehmann and Romano
- On the treatment of discontinuous random variables by Eudey
- Table of Neyman-shortest unbiased confidence intervals for the binomial parameter by Blyth and Hutchinson
- Length of Confidence Intervals by Pratt
- More on length of confidence intervals by Madansky
- Binomial confidence intervals by Blyth and Still
- Smallest confidence intervals for one binomial proportion by Wang
- Fuzzy and randomized confidence intervals and p-values by Geyer and Meeden
- Nonoptimality of Randomized Confidence Sets by Casella and Robert
References
1. Interval Estimation for a Binomial Proportion by Brown, Cai, and DasGupta
2. One-sided confidence intervals in discrete distributions by Cai
3. Some Remarks on Confidence or Fiducial Limits by Sterne
4. Mathematische Statistik I. by Witting
5. Binomial Confidence Intervals by Blyth and Still
6. Confidence Curves and Improved Exact Confidence Intervals for Discrete Distributions by Blaker
7. Comment: Randomized Confidence Intervals and the Mid-P Approach by Agresti and Gottard
8. binom by Sundar Dorai-Raj
9. Fuzzy and randomized confidence intervals and p-values by Geyer and Meeden