A divergence estimator of two sets of samples.
Project description
universal-divergence
universal-divergence is a Python module for estimating the divergence between two underlying distributions from two sets of samples. The estimator is based on the method of Q. Wang et al. [1].
Install
```
pip install universal-divergence
```
Example
```python
from __future__ import print_function

import numpy as np
from universal_divergence import estimate

mean = [0, 0]
cov = [[1, 0], [0, 10]]
x = np.random.multivariate_normal(mean, cov, 100)
y = np.random.multivariate_normal(mean, cov, 100)
print(estimate(x, y))  # will be close to 0.0

mean2 = [10, 0]
cov2 = [[5, 0], [0, 5]]
z = np.random.multivariate_normal(mean2, cov2, 100)
print(estimate(x, z))  # will be bigger than 0.0
```
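For intuition, the k-nearest-neighbor divergence estimator of Wang et al. can be sketched in plain NumPy. This is a hypothetical illustration, not the library's actual implementation; the function name `knn_divergence` and the brute-force distance computation are assumptions made for clarity.

```python
import numpy as np

def knn_divergence(x, y, k=1):
    """Sketch of a k-NN KL divergence estimator (illustrative only;
    not the universal-divergence implementation)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n, d = x.shape
    m = y.shape[0]
    # rho: distance from each x_i to its k-th nearest neighbor in x (excluding itself)
    dx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dx, np.inf)
    rho = np.sort(dx, axis=1)[:, k - 1]
    # nu: distance from each x_i to its k-th nearest neighbor in y
    dy = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    nu = np.sort(dy, axis=1)[:, k - 1]
    # Wang et al. estimator: (d/n) * sum(log(nu/rho)) + log(m/(n-1))
    return (d / n) * np.sum(np.log(nu / rho)) + np.log(m / (n - 1))
```

Samples from the same distribution give an estimate near zero, while samples from well-separated distributions give a large positive value, matching the behavior of `estimate` above.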
References
[1] Q. Wang, S. R. Kulkarni, and S. Verdú, "Divergence estimation for multidimensional densities via k-nearest-neighbor distances," IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2392–2405, 2009.
Project details
Release history
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
File details
Details for the file universal-divergence-0.2.0.tar.gz.
File metadata
- Download URL: universal-divergence-0.2.0.tar.gz
- Upload date:
- Size: 2.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
File hashes
Algorithm | Hash digest
---|---
SHA256 | 812fb206a02ca21a00038403d412138578ee048848b32526bd2b1f95daaaebce
MD5 | 7c324fb82b10832c3e924441ad5bcc69
BLAKE2b-256 | 58575f5e30e6ff1c6d63fb925517fcc467325bfa82f9a2e81a39bc69197e76d1