
The generalized, nodalized HGF for predictive coding.


PyHGF: A Neural Network Library for Predictive Coding

PyHGF is a Python library for creating and manipulating dynamic hierarchical probabilistic networks for predictive coding. These networks approximate Bayesian inference and optimize beliefs through the diffusion of predictions and precision-weighted prediction errors, and their structure remains flexible during both observation and inference. They can serve as biologically plausible cognitive models for computational psychiatry and reinforcement learning, or as a generalisation of Bayesian filtering to dynamic graphical structures of arbitrary size for signal processing or decision-making agents. The default implementation supports the generalisation and nodalisation of the Hierarchical Gaussian Filter for predictive coding (gHGF; Weber et al., 2023), but the framework is flexible enough to support other algorithms.

The library is written on top of JAX: the core functions are differentiable and JIT-compilable whenever feasible, and the free parameters of a network can be sampled under given observations. It is designed for modularity and ease of manipulation, so the user can focus on modelling while interfacing smoothly with other libraries in the ecosystem for Bayesian inference or optimization. A binding to a Rust implementation, which will provide full structural flexibility during inference, is also under active development.
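
As an illustration of what the JAX backend makes possible, any pure function built on top of the network computations can be compiled and differentiated. The sketch below is generic JAX, not PyHGF-specific API; the toy loss is a hypothetical stand-in for a network's summed surprise as a function of one free parameter:

import jax

# A toy loss standing in for a network's summed surprise as a function
# of one free parameter (hypothetical stand-in, not PyHGF API)
def loss(omega):
    return (omega - 1.5) ** 2

fast_loss = jax.jit(loss)   # compiled with XLA on first call
grad_loss = jax.grad(loss)  # exact gradient via automatic differentiation

print(fast_loss(0.0), grad_loss(0.0))  # 2.25, -3.0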

Getting started

Installation

The latest official release can be installed from PyPI:

pip install pyhgf

The current development version can be installed from the master branch of the GitHub repository:

pip install "git+https://github.com/ilabcode/pyhgf.git"

How does it work?

A dynamic hierarchical probabilistic network can be defined as a tuple containing the following variables (a minimal sketch follows this list):

  • The attributes (dictionary) that store each node's states and parameters (e.g. value, precision, learning rates, volatility coupling, ...).
  • The edges (tuple) that list, for each node, the indexes of its parents and children.
  • A set of update functions. An update function receives a network tuple and returns an updated network tuple.
  • An update sequence (tuple) that defines the order and targets of the update functions.
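
As an illustration, such a tuple could be sketched with plain Python containers. This is a minimal sketch of the general idea, not PyHGF's internal types; the node parameters and the identity_update placeholder are hypothetical:

from typing import Dict, Tuple

# Attributes: one dictionary of states/parameters per node (illustrative values)
attributes: Dict[int, dict] = {
    0: {"mean": 0.0, "precision": 1.0},                               # input-level node
    1: {"mean": 0.0, "precision": 1.0, "volatility_coupling": 1.0},   # parent node
}

# Edges: for each node, the indexes of its parents and children
edges: Tuple[dict, ...] = (
    {"value_parents": (1,), "value_children": ()},
    {"value_parents": (), "value_children": (0,)},
)

# An update function receives the network components and returns updated attributes
def identity_update(attributes, edges, node_idx):
    # A no-op placeholder standing in for a real prediction-error update
    return attributes

# Update sequence: (node index, update function) pairs, applied in order
update_sequence = ((0, identity_update), (1, identity_update))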


A deeper introduction to creating and manipulating networks can be found in the package's documentation.

The Generalized Hierarchical Gaussian Filter

Generalized Hierarchical Gaussian Filters (gHGF) are specific instances of dynamic probabilistic networks in which each node encodes a Gaussian distribution that inherits its value (mean) and volatility (variance) from its parents. The presentation of a new observation at the lowest level of the hierarchy (i.e., the input node) triggers a recursive update of the nodes' beliefs (i.e., posterior distributions) through top-down predictions and bottom-up precision-weighted prediction errors. The resulting probabilistic network operates as a Bayesian filter, and a decision function can parametrize actions/decisions given the current beliefs. By comparing these behaviours with actual outcomes, a surprise function can be optimized over a set of free parameters. The Hierarchical Gaussian Filter for binary and continuous inputs was first described in Mathys et al. (2011, 2014) and later implemented in the Matlab HGF Toolbox, part of TAPAS (Frässle et al., 2021).
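
To make the precision-weighting concrete, the simplified, textbook form of a single Gaussian belief update can be written in a few lines. This is a sketch of the general principle only; the full gHGF update equations (Mathys et al., 2011; Weber et al., 2023) add volatility coupling and precision prediction terms omitted here:

def gaussian_node_update(mu_prior, pi_prior, pi_hat_child, delta_child):
    """Simplified precision-weighted Gaussian update for one node.

    mu_prior, pi_prior: the node's predicted mean and precision
    pi_hat_child: the child's expected precision
    delta_child: the child's prediction error
    """
    pi_post = pi_prior + pi_hat_child                             # precisions add
    mu_post = mu_prior + (pi_hat_child / pi_post) * delta_child   # PE weighted by precision ratio
    return mu_post, pi_post

# Example: a precise child (pi_hat_child=10) pulls the belief strongly toward its PE
print(gaussian_node_update(mu_prior=0.0, pi_prior=1.0, pi_hat_child=10.0, delta_child=1.0))
# (0.9090909090909091, 11.0)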

A deeper introduction to how the HGF works can be found in the package's documentation.

Model fitting

Here we demonstrate how to fit forward a two-level binary Hierarchical Gaussian Filter. The input time series are binary observations from an associative learning task (Iglesias et al., 2021).

from pyhgf.model import Network
from pyhgf import load_data

# Load time series example data (observations, decisions)
u, y = load_data("binary")

# Create a two-level binary HGF from scratch
hgf = (
    Network()
    .add_nodes(kind="binary-input")                         # node 0: binary input
    .add_nodes(kind="binary-state", value_children=0)       # node 1: binary state
    .add_nodes(kind="continuous-state", value_children=1)   # node 2: continuous state (second level)
)

# Add new observations
hgf.input_data(input_data=u)

# Visualize the belief trajectories
hgf.plot_trajectories();


from pyhgf.response import binary_softmax_inverse_temperature

# compute the model's surprise (-log(p)) 
# using the binary softmax with inverse temperature as the response model
surprise = hgf.surprise(
    response_function=binary_softmax_inverse_temperature,
    response_function_inputs=y,
    response_function_parameters=4.0
)
print(f"Sum of surprises = {surprise.sum()}")

Sum of surprises = 138.8992462158203
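
Because the summed surprise is a quantity to minimize, the free parameters of the response model can be tuned against it. The snippet below is one illustrative recipe using SciPy's bounded scalar minimizer on the inverse temperature, not a procedure prescribed by the library; the same objective could instead be handed to a Bayesian inference backend:

from scipy.optimize import minimize_scalar

def summed_surprise(inverse_temperature):
    # Total surprise of the observed decisions y under the current beliefs
    return float(
        hgf.surprise(
            response_function=binary_softmax_inverse_temperature,
            response_function_inputs=y,
            response_function_parameters=inverse_temperature,
        ).sum()
    )

# Search the inverse temperature on a bounded interval (bounds are arbitrary here)
result = minimize_scalar(summed_surprise, bounds=(0.1, 20.0), method="bounded")
print(f"Best-fitting inverse temperature: {result.x:.2f}")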

Acknowledgments

This implementation of the Hierarchical Gaussian Filter was inspired by the original Matlab HGF Toolbox. A Julia implementation with similar aims, HierarchicalGaussianFiltering.jl, is also available.

References

  1. Mathys, C., Daunizeau, J., Friston, K. J., & Stephan, K. E. (2011). A Bayesian foundation for individual learning under uncertainty. Frontiers in Human Neuroscience, 5. https://doi.org/10.3389/fnhum.2011.00039
  2. Mathys, C. D., Lomakina, E. I., Daunizeau, J., Iglesias, S., Brodersen, K. H., Friston, K. J., & Stephan, K. E. (2014). Uncertainty in perception and the hierarchical Gaussian filter. Frontiers in Human Neuroscience, 8. https://doi.org/10.3389/fnhum.2014.00825
  3. Weber, L. A., Waade, P. T., Legrand, N., Møller, A. H., Stephan, K. E., & Mathys, C. (2023). The generalized Hierarchical Gaussian Filter (Version 2). arXiv. https://doi.org/10.48550/ARXIV.2305.10937
  4. Frässle, S., Aponte, E. A., Bollmann, S., Brodersen, K. H., Do, C. T., Harrison, O. K., Harrison, S. J., Heinzle, J., Iglesias, S., Kasper, L., Lomakina, E. I., Mathys, C., Müller-Schrader, M., Pereira, I., Petzschner, F. H., Raman, S., Schöbi, D., Toussaint, B., Weber, L. A., … Stephan, K. E. (2021). TAPAS: An open-source software package for translational neuromodeling and computational psychiatry. Frontiers in Psychiatry, 12. https://doi.org/10.3389/fpsyt.2021.680811
  5. Iglesias, S., Kasper, L., Harrison, S. J., Manka, R., Mathys, C., & Stephan, K. E. (2021). Cholinergic and dopaminergic effects on prediction error and uncertainty responses during sensory associative learning. NeuroImage, 226, 117590. https://doi.org/10.1016/j.neuroimage.2020.117590
