A divergence estimator of two sets of samples.
universal-divergence
universal-divergence is a Python module for estimating the divergence between two sets of samples drawn from two underlying distributions. The theory behind the estimator comes from a paper by Q. Wang et al. [1].
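For intuition about what the package computes, here is a minimal NumPy sketch of a k-nearest-neighbor KL-divergence estimator in the spirit of [1]. This is an illustration only, not the package's actual implementation; the function name `knn_divergence` and the brute-force pairwise distance computation are assumptions made for clarity.

```python
import numpy as np

def knn_divergence(x, y, k=1):
    """Rough k-NN estimate of D(p||q) from samples x ~ p and y ~ q.

    Illustrative sketch of the Wang et al. style estimator, not the
    universal-divergence package's own code.
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n, d = x.shape
    m = y.shape[0]

    # Distance from each x[i] to its k-th nearest neighbor within x,
    # excluding itself (index 0 of the sorted row is the zero self-distance).
    dxx = np.sqrt(((x[:, None, :] - x[None, :, :]) ** 2).sum(axis=-1))
    rho = np.sort(dxx, axis=1)[:, k]

    # Distance from each x[i] to its k-th nearest neighbor within y.
    dxy = np.sqrt(((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1))
    nu = np.sort(dxy, axis=1)[:, k - 1]

    # Estimator: (d/n) * sum_i log(nu_i / rho_i) + log(m / (n - 1))
    return (d / n) * np.sum(np.log(nu / rho)) + np.log(m / (n - 1.0))
```

The estimate should be near zero when both sample sets come from the same distribution and grow as the distributions separate, matching the behavior shown in the Example section below.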
Install
pip install universal-divergence
Example
from __future__ import print_function

import numpy as np
from universal_divergence import estimate

mean = [0, 0]
cov = [[1, 0], [0, 10]]
x = np.random.multivariate_normal(mean, cov, 100)
y = np.random.multivariate_normal(mean, cov, 100)
print(estimate(x, y))  # will be close to 0.0

mean2 = [10, 0]
cov2 = [[5, 0], [0, 5]]
z = np.random.multivariate_normal(mean2, cov2, 100)
print(estimate(x, z))  # will be bigger than 0.0
References
[1] Q. Wang, S. R. Kulkarni, and S. Verdú, "Divergence estimation for multidimensional densities via k-nearest-neighbor distances," IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2392–2405, 2009.
Source Distribution
Hashes for universal-divergence-0.1.0.tar.gz
| Algorithm | Hash digest |
|---|---|
| SHA256 | 461839cc794f9e668ad373f48aed147cd2767bd42e6a5167715d86d336cb5447 |
| MD5 | 18263ea4eeffdf14e5bc7cf80fd33f33 |
| BLAKE2b-256 | fb4401e2fb3b3aa6534bbe3b8d3c18d9b3a0284739b5b74d03c105d823a5df53 |