
Python package for log-likelihood-ratio evaluation and calibration

Project description

LLR-Evaluation (llreval)

This is an authorized fork of [PYLLR](https://github.com/bsxfan/PYLLR).

Python toolkit for likelihood-ratio calibration of binary classifiers.

The emphasis is on binary classifiers (for example speaker verification), where the output of the classifier is in the form of a well-calibrated log-likelihood-ratio (LLR). The tools include:

  • PAV and ROCCH score analysis
  • DET curves and EER
  • DCF and minDCF
  • Bayes error-rate plots
  • Cllr
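To make the last metric concrete, here is a minimal sketch of the Cllr computation itself (the equal-weight logarithmic scoring rule used in the BOSARIS literature). This is an illustration of the metric, not llreval's API; the function name `cllr` is our own.

```python
import numpy as np

def cllr(tar_llrs, non_llrs):
    # Cllr: average logarithmic-scoring cost of LLR scores, with equal
    # weight on target and non-target trials. A perfectly calibrated,
    # perfectly discriminating system approaches 0; an uninformative
    # system that always outputs LLR = 0 scores exactly 1 bit.
    tar = np.asarray(tar_llrs, dtype=float)
    non = np.asarray(non_llrs, dtype=float)
    # logaddexp(0, x) = log(1 + exp(x)), numerically stable; divide by
    # ln(2) to express the cost in bits (base-2 logarithm).
    c_tar = np.mean(np.logaddexp(0.0, -tar)) / np.log(2.0)  # miss-side cost
    c_non = np.mean(np.logaddexp(0.0, non)) / np.log(2.0)   # false-alarm-side cost
    return 0.5 * (c_tar + c_non)
```

For example, a system that outputs LLR = 0 on every trial gets Cllr = 1, while well-separated, well-calibrated scores drive Cllr toward 0.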

Most of the algorithms in LLR-Evaluation are Python translations of the older MATLAB BOSARIS Toolkit. Descriptions of the algorithms are available in:

Niko Brümmer and Edward de Villiers, The BOSARIS Toolkit: Theory, Algorithms and Code for Surviving the New DCF, 2013.
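The PAV (pool-adjacent-violators) algorithm mentioned above underlies the score analysis: it computes the least-squares isotonic (non-decreasing) fit to a score sequence, which yields the ROC convex hull and optimally calibrated scores. A minimal, unweighted sketch of the algorithm (for illustration only; llreval's own implementation follows the BOSARIS code):

```python
def pav(y):
    # Pool Adjacent Violators: least-squares non-decreasing fit to y.
    # Maintain a stack of blocks, each storing (sum, count); whenever the
    # last block's mean falls below the previous block's mean, pool them.
    blocks = []
    for v in y:
        blocks.append([float(v), 1])
        while len(blocks) > 1 and \
                blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    # Expand each pooled block back to its constituent positions.
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return out
```

For instance, `pav([3, 2, 1])` pools all three values into their mean, while an already-monotone input is returned unchanged.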

Install

Install using pip

pip install llreval

Usage

import llreval

Out of a hundred trials, how many errors does your speaker verifier make?

The examples directory includes code that reproduces the plots in our paper:

Niko Brümmer, Luciana Ferrer and Albert Swart, "Out of a hundred trials, how many errors does your speaker verifier make?", 2021, https://arxiv.org/abs/2104.00732.

For instructions, go to the readme.


Download files

Download the file for your platform.

Source distribution: llreval-0.0.2.tar.gz (13.6 kB)

Built distribution: llreval-0.0.2-py3-none-any.whl (15.0 kB, Python 3)
