Statistical package to evaluate A/B tests in an experimentation platform.

Project description



Statistical package for the experimentation platform.

It provides a general Python package and a REST API that can be used to evaluate any metric in an A/B test experiment.


  • Robust two-tailed t-test implementation with multiple p-value corrections and the delta method applied.
  • Sequential evaluations allow experiments to be stopped early.
  • Connect it to any data source to get either pre-aggregated or per-randomization-unit data.
  • Simple expression language to define arbitrary metrics.
  • REST API to integrate it as a service in an experimentation portal with scorecards.


We have lovely documentation.

Base Example

ep-stats allows for quick experiment evaluation. Below, we use sample testing data to evaluate the metric Click-through Rate in the experiment test-conversion.

from epstats.toolkit import Experiment, Metric, SrmCheck

experiment = Experiment(
    'test-conversion',
    'a',
    [Metric(
        1,
        'Click-through Rate',
        'count(test_unit_type.unit.click)',
        'count(test_unit_type.global.exposure)'),
    ],
    [SrmCheck(1, 'SRM', 'count(test_unit_type.global.exposure)')],
    unit_type='test_unit_type')

# This gets testing data, use other Dao or get aggregated goals in some other way.
from epstats.toolkit.testing import TestData
goals = TestData.load_goals_agg(experiment.id)

# evaluate experiment
ev = experiment.evaluate_agg(goals)

ev contains evaluations of exposures, metrics, and checks, producing the following output.


| exp_id | exp_variant_id | exposures |
| --- | --- | --- |
| test-conversion | a | 21 |
| test-conversion | b | 26 |


| exp_id | metric_id | metric_name | exp_variant_id | count | mean | std | sum_value | confidence_level | diff | test_stat | p_value | confidence_interval | standard_error | degrees_of_freedom |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| test-conversion | 1 | Click-through Rate | a | 21 | 0.238095 | 0.436436 | 5 | 0.95 | 0 | 0 | 1 | 1.14329 | 0.565685 | 40 |
| test-conversion | 1 | Click-through Rate | b | 26 | 0.269231 | 0.452344 | 7 | 0.95 | 0.130769 | 0.223152 | 0.82446 | 1.18137 | 0.586008 | 43.5401 |
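For intuition, the numbers above come from a two-sample t-test on the aggregated statistics. A minimal sketch using scipy (not ep-stats internals) reproduces the Welch degrees of freedom reported for variant b; note that ep-stats reports the relative difference via the delta method, so its diff and test_stat differ from a plain absolute-difference t-test.

```python
# Sketch (not ep-stats code): Welch's two-sample t-test computed
# directly from the aggregated count/mean/std in the metrics table.
from scipy import stats

# variant a and variant b aggregates from the table above
stat, p = stats.ttest_ind_from_stats(
    mean1=0.238095, std1=0.436436, nobs1=21,
    mean2=0.269231, std2=0.452344, nobs2=26,
    equal_var=False,  # Welch's t-test (unequal variances)
)

# Welch-Satterthwaite degrees of freedom
v1, v2 = 0.436436**2 / 21, 0.452344**2 / 26
dof = (v1 + v2) ** 2 / (v1**2 / (21 - 1) + v2**2 / (26 - 1))
print(stat, p, dof)  # dof matches degrees_of_freedom for variant b
```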


| exp_id | check_id | check_name | variable_id | value |
| --- | --- | --- | --- | --- |
| test-conversion | 1 | SRM | p_value | 0.465803 |
| test-conversion | 1 | SRM | test_stat | 0.531915 |
| test-conversion | 1 | SRM | confidence_level | 0.999000 |
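The SRM (sample ratio mismatch) check values above can be reproduced with a chi-square goodness-of-fit test of the observed exposures against an expected 50/50 split between variants; a sketch using scipy, not ep-stats itself:

```python
# Sketch of the SRM check: chi-square goodness-of-fit test of the
# observed exposure counts [21, 26] against an even 50/50 split.
from scipy.stats import chisquare

stat, p = chisquare([21, 26])  # default expected frequencies: uniform
print(stat, p)  # matches the table's test_stat and p_value
```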


You can install this package via pip.

pip install ep-stats


You can run a testing version of ep-stats via

python -m epstats

Then, see Swagger on http://localhost:8080/docs for API documentation.


To get started locally, you can clone the repo and quickly get started using the Makefile.

git clone
cd ep-stats
make install-dev

It creates a new virtual environment in ./venv using venv, installs all development dependencies, and sets up pre-commit git hooks to keep the code neatly formatted with flake8 and brunette.

To run tests, you can use Makefile as well.

source venv/bin/activate  # activate python environment
make check

To run a development version of ep-stats do

source venv/bin/activate
cd src
python -m epstats


To update the documentation, run

mkdocs gh-deploy

It updates the documentation on GitHub Pages, stored in the gh-pages branch.


Software engineering practices of this package have been heavily inspired by the marvelous site managed by Vincent D. Warmerdam.

Project details
Source Distribution

ep-stats-1.3.1.tar.gz (36.2 kB)

Built Distribution

ep_stats-1.3.1-py3-none-any.whl (47.1 kB)
