Evidence Theory Tools — belief functions, combination rules and contextual correction mechanisms for the Dempster-Shafer / Transferable Belief Model

Project description

evtools

Evidence Theory Tools — a Python library for working with belief functions in the Dempster-Shafer theory / Transferable Belief Model. Version 0.7.0.

Modules

Module Description
evtools.dsvector DSVector — unified container for any belief function representation
evtools.conversions Low-level conversions via the Fast Möbius Transform
evtools.combinations Combination rules: CRC, Dempster, DRC, Cautious, Bold, and decombinations
evtools.corrections Correction mechanisms: discounting, reinforcement, negating
evtools.display Display formats: ANSI terminal, plain text, HTML, LaTeX
evtools.constants Numerical tolerance constants

evtools.dsvector

DSVector is the central object of evtools. It represents any belief function as a vector on 2^Ω, in both sparse (dict) and dense (numpy array) forms. The sparse representation is the master; the dense array is computed on demand and cached.

Kind enum

Kind Symbol Name
Kind.M m Basic Belief Assignment (mass function)
Kind.BEL bel Belief function
Kind.PL pl Plausibility function
Kind.B b Implicability function
Kind.Q q Commonality function
Kind.V v Disjunctive weight function
Kind.W w Conjunctive weight function
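
The representations in the table are all equivalent encodings of the same belief function. As a quick illustration of two of the definitions, here is a brute-force sketch over a plain dict (for exposition only; the library's DSVector is the real API):

```python
# Brute-force check of bel and pl on a toy frame Ω = {a, b, c},
# using a plain dict of frozensets rather than a DSVector.
m = {
    frozenset({"a"}): 0.3,
    frozenset({"b", "c"}): 0.5,
    frozenset({"a", "b", "c"}): 0.2,
}

A = frozenset({"a", "b"})
bel = sum(v for B, v in m.items() if B and B <= A)  # mass of non-empty subsets of A
pl  = sum(v for B, v in m.items() if B & A)         # mass of sets intersecting A
```

Here bel(A) = 0.3 (only {a} is a non-empty subset of A) while pl(A) = 1.0 (every focal set intersects A).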

Constructors

import numpy as np

from evtools.dsvector import DSVector, Kind

# Human-friendly: name focal elements as strings
# Missing mass is automatically assigned to Ω
m = DSVector.from_focal(["a", "b", "c"], {"a": 0.3, "b,c": 0.5})

# From a dense numpy array (binary index ordering, Smets 2002)
m = DSVector.from_dense(["a", "b", "c"], np.array([0, 0.3, 0, 0, 0.5, 0, 0, 0.2]))

# From a sparse dict of frozensets
m = DSVector.from_sparse(["a", "b", "c"], {
    frozenset({"a"}):           0.3,
    frozenset({"b", "c"}):      0.5,
    frozenset({"a", "b", "c"}): 0.2,
})

Simple MF constructors

Simple MFs are the elementary building blocks of correction mechanisms.

# Simple MF A^β — focal sets Ω (mass β) and A (mass 1−β)
# Used in Contextual Reinforcement (CR), CdR, CN
s = DSVector.simple(["a", "b", "c"], frozenset({"a"}), beta=0.6)

# Negative simple MF A_β — focal sets ∅ (mass β) and A (mass 1−β)
# Used in Contextual Discounting (CD), CdD
ns = DSVector.negative_simple(["a", "b", "c"], frozenset({"a"}), beta=0.4)
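
As a sanity check, the two shapes can be written out by hand (plain dicts, for exposition; the constructor calls above are the real API):

```python
# The two simple-MF shapes on Ω = {a, b, c}, written as plain dicts.
omega = frozenset({"a", "b", "c"})
A = frozenset({"a"})

simple = {omega: 0.6, A: 0.4}            # A^β with β = 0.6: mass β on Ω
neg_simple = {frozenset(): 0.4, A: 0.6}  # A_β with β = 0.4: mass β on ∅

# Both are valid BBAs: non-negative masses summing to 1
assert abs(sum(simple.values()) - 1.0) < 1e-12
assert abs(sum(neg_simple.values()) - 1.0) < 1e-12
```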

Conversions

pl  = m.to(Kind.PL)   # returns a new DSVector with kind=Kind.PL
bel = m.to_bel()      # shortcut
b   = m.to_b()        # implicability
q   = m.to_q()        # commonality
v   = m.to_v()        # disjunctive weights (requires subnormal BBA, m(∅) > 0)
w   = m.to_w()        # conjunctive weights (requires non-dogmatic BBA, m(Ω) > 0)

Accessing values

m.sparse                     # dict[frozenset, float]
m.dense                      # np.ndarray of length 2^n
m.is_valid                   # True if all masses ≥ 0 and sum = 1 (Kind.M only)
m[frozenset({"a"})]          # value for a given subset (0.0 if absent)
for subset, value in m: ...  # iterate over non-zero focal elements

Display

m.display("ansi")    # colored terminal (default __repr__)
m.display("plain")   # plain text, no colors
m.display("html")    # HTML table (Jupyter renders this automatically)
m.display("latex")   # LaTeX tabular for papers

evtools.combinations

Combination rules for aggregating beliefs from multiple sources.

from evtools.combinations import crc, dempster, drc, cautious, bold
from evtools.combinations import decombine_crc, decombine_drc

m12 = crc(m1, m2)        # m1 & m2  — Conjunctive Rule (TBM), distinct reliable sources
m12 = dempster(m1, m2)   # m1 @ m2  — Dempster's normalized rule
m12 = drc(m1, m2)        # m1 | m2  — Disjunctive Rule, at least one reliable
m12 = cautious(m1, m2)   # Cautious rule, nondistinct reliable sources (idempotent)
m12 = bold(m1, m2)       # Bold disjunctive rule, nondistinct possibly unreliable (idempotent)

# Decombination — inverse operations (result may not be valid, check .is_valid)
m1 = decombine_crc(m12, m2)  # removes m2 from a conjunctive combination
m1 = decombine_drc(m12, m2)  # removes m2 from a disjunctive combination

Choice of rule:

                     All sources reliable   At least one reliable
Distinct sources     crc / dempster         drc
Nondistinct sources  cautious               bold

Both crc and drc support method="sparse" (default) or method="dense".
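
For intuition, the conjunctive rule can be sketched in a few lines over sparse mass dicts (a plain-Python illustration of the math, not the library's implementation):

```python
from itertools import product

def crc_sketch(m1, m2):
    """Conjunctive rule: intersect every pair of focal sets and
    accumulate the product of their masses."""
    out = {}
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        out[A & B] = out.get(A & B, 0.0) + a * b
    return out

m1 = {frozenset({"a"}): 0.3, frozenset({"a", "b"}): 0.7}
m2 = {frozenset({"b"}): 0.4, frozenset({"a", "b"}): 0.6}
m12 = crc_sketch(m1, m2)
# The mass assigned to ∅ (0.12 here) measures the conflict between the
# sources; Dempster's rule redistributes it by normalizing the rest.
```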


evtools.corrections

Correction mechanisms for adjusting a BBA based on knowledge about the quality of a source (reliability, truthfulness).

Notation:

  • A^β — simple MF: focal sets Ω (mass β) and A (mass 1−β)
  • A_β — negative simple MF: focal sets ∅ (mass β) and A (mass 1−β)

from evtools.corrections import (
    discount,
    contextual_discount,
    theta_contextual_discount,
    contextual_reinforce,
    contextual_dediscount,
    contextual_dereinforce,
    contextual_negate,
)

# Classical discounting — source reliable with degree β ∈ [0,1]
# β=1: unchanged; β=0: vacuous BBA
m_disc = discount(m, beta=0.6)

# Contextual discounting (CD) — reliability per singleton context
# Uses negative simple MFs A_β and the DRC
betas = {frozenset({"a"}): 0.6, frozenset({"h"}): 1.0, frozenset({"r"}): 1.0}
m_cd = contextual_discount(m, betas)

# Θ-contextual discounting — reliability per coarsening partition
betas_theta = {frozenset({"a"}): 0.4, frozenset({"h","r"}): 0.9}
m_theta = theta_contextual_discount(m, betas_theta)

# Contextual Reinforcement (CR) — dual of CD, uses simple MFs A^β and the CRC
m_cr = contextual_reinforce(m, betas)

# Inverse operations (result may not be valid — check .is_valid)
m_cdd = contextual_dediscount(m_cd, betas)    # reverses CD
m_cdr = contextual_dereinforce(m_cr, betas)   # reverses CR

# Contextual Negating (CN) — source non-truthful with probability 1−β
m_cn = contextual_negate(m, {frozenset({"a"}): 0.7})
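
Classical discounting is simple enough to write out by hand; the sketch below (`discount_sketch` is illustrative, not the library function) shows the formula on plain dicts:

```python
def discount_sketch(m, omega, beta):
    """Classical discounting: scale every mass by β and move the
    remaining 1 − β onto Ω (total ignorance)."""
    out = {A: beta * v for A, v in m.items()}
    out[omega] = out.get(omega, 0.0) + (1.0 - beta)
    return out

omega = frozenset({"a", "h", "r"})
m = {frozenset({"a"}): 0.3, frozenset({"h", "r"}): 0.5, omega: 0.2}
md = discount_sketch(m, omega, 0.6)
# β = 1 would leave m unchanged; β = 0 would put all mass on Ω
```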

Hierarchy of discounting:

discount(m, β)
  └── theta_contextual_discount(m, {Ω: β})

contextual_discount(m, β)
  └── theta_contextual_discount(m, β)   [Θ = singletons]

theta_contextual_discount(m, β)         [general Θ partition]

evtools.display

Four output formats, all adapting the column header to the kind (m, bel, pl, ...). In Jupyter notebooks, DSVector._repr_html_() is called automatically.

from evtools.display import repr_plain, repr_html, repr_latex

print(repr_plain(m))   # plain text, no colors
print(repr_latex(m))   # LaTeX tabular for papers
m.display("ansi")      # colored terminal (default)
m.display("html")      # HTML table

evtools.conversions

Low-level conversion functions operating on plain numpy arrays (length 2^n), using the Fast Möbius Transform (Smets 2002). Every conversion is available as <source>to<target>, e.g. mtob, pltom, qtow, beltov, etc.

from evtools.conversions import mtob, mtopl, mtobel, mtoq

m = np.array([0.0, 0.5, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0])
print(mtoq(m))    # commonality function
print(mtopl(m))   # plausibility function

Array indices follow the binary ordering of Smets (2002): index i corresponds to the subset whose members are the frame atoms at the bit positions set in i.
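
Both the ordering and the transform can be sketched in a few lines of numpy (illustrative names, assuming the subset-sum formulation of the FMT; the library's `mtobel` etc. are the real API):

```python
import numpy as np

def index_to_subset(i, atoms):
    """Binary ordering: bit k of index i selects atoms[k]."""
    return frozenset(a for k, a in enumerate(atoms) if i >> k & 1)

def mtobel_sketch(m):
    """Fast Möbius Transform: one in-place pass per atom turns m into
    the subset sum over B ⊆ A; subtracting m(∅) then gives bel."""
    out = m.astype(float).copy()
    n = out.size.bit_length() - 1
    for k in range(n):
        for i in range(out.size):
            if i >> k & 1:
                out[i] += out[i ^ (1 << k)]
    return out - m[0]

m = np.array([0.0, 0.5, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0])
bel = mtobel_sketch(m)
```

The pass-per-atom loop costs O(n·2^n) rather than the O(4^n) of summing over all subset pairs directly, which is the point of the FMT.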


Installation

pip install evtools-dst

Or from source:

git clone https://github.com/daviddavkanmercier/evtools.git
cd evtools
pip install -e .

Running tests

pip install -e ".[dev]"
pytest tests/

References

  • P. Smets. The application of the matrix calculus to belief functions, International Journal of Approximate Reasoning, 31(1–2):1–30, 2002.
  • T. Denœux. Conjunctive and disjunctive combination of belief functions induced by nondistinct bodies of evidence, Artificial Intelligence, 172(2–3):234–264, 2008.
  • D. Mercier, B. Quost, T. Denœux. Refined modeling of sensor reliability in the belief function framework using contextual discounting, Information Fusion, 9(2):246–258, 2008.
  • F. Pichon, D. Mercier, É. Lefèvre, F. Delmotte. Proposition and learning of some belief function contextual correction mechanisms, International Journal of Approximate Reasoning, 72:4–42, 2016.

License

MIT
