Reverse engineering of Metacognition toolbox

Project description

ReMeta Toolbox

The ReMeta toolbox allows researchers to estimate latent type 1 and type 2 parameters based on data from cognitive or perceptual decision-making tasks with two response categories.

Minimal example

Three types of data are required to fit a model:

  • stimuli: list/array of signed stimulus intensity values, where the sign codes the stimulus category and the absolute value codes the intensity. The stimuli should be normalized to [-1; 1], although there is a setting (normalize_stimuli_by_max) to auto-normalize stimuli.
  • choices: list/array of choices coded as 0 (or alternatively -1) for the negative stimulus category and 1 for the positive stimulus category.
  • confidence: list/array of confidence ratings. Confidence ratings must be normalized to [0; 1]. Discrete confidence ratings must be normalized accordingly (e.g., if confidence ratings are 1-4, subtract 1 and divide by 3); see the preprocessing sketch below.
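
To bring raw data into these formats, a minimal preprocessing sketch is shown below; the raw arrays and their codings are hypothetical, and only the target ranges are prescribed by the toolbox:

# Hypothetical preprocessing sketch (raw codings are illustrative assumptions)
import numpy as np

raw_stimuli = np.array([-12.0, 3.5, 8.0, -6.2])    # signed intensities on an arbitrary scale
raw_choices = np.array([1, 2, 2, 1])                # assumed coding: 1 = negative, 2 = positive category
raw_confidence = np.array([1, 3, 4, 2])             # assumed discrete ratings 1-4

stimuli = raw_stimuli / np.abs(raw_stimuli).max()   # normalize to [-1; 1]
choices = raw_choices - 1                           # recode to 0/1
confidence = (raw_confidence - 1) / 3               # map 1-4 onto [0; 1]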

A minimal example would be the following:

# Minimal example
import remeta
stimuli, choices, confidence = remeta.load_dataset('simple')  # load example dataset
rem = remeta.ReMeta()
rem.fit(stimuli, choices, confidence)

Output:

Loading dataset 'simple' which was generated as follows:
..Generative model:
    Metacognitive noise type: noisy_report
    Metacognitive noise distribution: truncated_norm
    Link function: probability_correct
..Generative parameters:
    noise_sens: 0.7
    bias_sens: 0.2
    noise_meta: 0.1
    evidence_bias_mult_meta: 1.2
..Characteristics:
    No. subjects: 1
    No. samples: 1000
    Type 1 performance: 78.5%
    Avg. confidence: 0.668
    M-Ratio: 0.921
    
+++ Sensory level +++
Initial guess (neg. LL: 1902.65)
    [guess] noise_sens: 0.1
    [guess] bias_sens: 0
Performing local optimization
    [final] noise_sens: 0.745 (true: 0.7)
    [final] bias_sens: 0.24 (true: 0.2)
Final neg. LL: 461.45
Neg. LL using true params: 462.64
Total fitting time: 0.15 secs

+++ Metacognitive level +++
Initial guess (neg. LL: 1938.81)
    [guess] noise_meta: 0.2
    [guess] evidence_bias_mult_meta: 1
Grid search activated (grid size = 60)
    [grid] noise_meta: 0.15
    [grid] evidence_bias_mult_meta: 1.4
Grid neg. LL: 1879.3
Grid runtime: 2.43 secs
Performing local optimization
    [final] noise_meta: 0.102 (true: 0.1)
    [final] evidence_bias_mult_meta: 1.21 (true: 1.2)
Final neg. LL: 1872.24
Neg. LL using true params: 1872.27
Total fitting time: 3.4 secs

Since the dataset is based on simulation, we know the true parameters (in brackets above) of the underlying generative model, which are indeed quite close to the fitted parameters.

We can access the fitted parameters by invoking the summary() method on the ReMeta instance:

# Access fitted parameters
result = rem.summary()
for k, v in result.model.params.items():
    print(f'{k}: {v:.3f}')

Output:

noise_sens: 0.745
bias_sens: 0.240
noise_meta: 0.102
evidence_bias_mult_meta: 1.213

By default, the model fits parameters for type 1 noise (noise_sens) and a type 1 bias (bias_sens), as well as metacognitive 'type 2' noise (noise_meta) and a metacognitive bias (evidence_bias_mult_meta). Moreover, by default the model assumes that metacognitive noise occurs at the stage of the confidence report (setting meta_noise_type='noisy_report'), that observers aim to report the probability of being correct with their confidence ratings (setting meta_link_function='probability_correct'), and that metacognitive noise can be described by a truncated normal distribution (setting meta_noise_dist='truncated_norm').

All settings can be changed via the Configuration object, which is optionally passed to the ReMeta instance. For example:

cfg = remeta.Configuration()
cfg.meta_noise_type = 'noisy_readout'
rem = remeta.ReMeta(cfg)
...
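
Building on the minimal example above, a complete sketch might look as follows (the setting values are the ones documented in this README; this is an illustration rather than an exhaustive list of options):

# Sketch: fit with a customized configuration
cfg = remeta.Configuration()
cfg.meta_noise_type = 'noisy_readout'           # metacognitive noise at the readout rather than the report stage
cfg.meta_link_function = 'probability_correct'  # default link function
cfg.meta_noise_dist = 'truncated_norm'          # default metacognitive noise distribution
rem = remeta.ReMeta(cfg)
rem.fit(stimuli, choices, confidence)
result = rem.summary()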

Supported parameters

Type 1 parameters:

  • noise_sens: type 1 noise
  • bias_sens: type 1 bias towards one of the two stimulus categories
  • thresh_sens: a (sensory) threshold, building on the assumption that a certain minimal stimulus intensity is required to elicit behavior; use only if the stimulus set contains intensities close to this threshold
  • noise_transform_sens: parameter to specify stimulus-dependent type 1 noise (e.g. multiplicative noise)
  • warping: a nonlinear transducer parameter, allowing for nonlinear transformations of stimulus intensities.

Type 2 (metacognitive) parameters:

  • noise_meta: metacognitive noise
  • evidence_bias_mult_meta: multiplicative metacognitive bias applying at the level of evidence
  • evidence_bias_add_meta: additive metacognitive bias applying at the level of evidence
  • confidence_bias_mult_meta: multiplicative metacognitive bias applying at the level of confidence
  • confidence_bias_add_meta: additive metacognitive bias applying at the level of confidence
  • noise_transform_meta: (experimental) parameter to specify decision-value-dependent type 2 noise (e.g. multiplicative noise)
  • criterion{i}_meta: i-th confidence criterion (in case of a criterion-based link function)
  • level{i}_meta: i-th confidence level (in case of a criterion-based link function, confidence levels correspond to the confidence at the respective criteria)

In addition, each parameter can be fitted in "duplex mode", such that separate values are fitted depending on the stimulus category (for type 1 parameters) or depending on the sign of the type 1 decision values (for type 2 parameters).
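As a rough illustration, enabling additional parameters or duplex mode might look as follows; the enable_* attribute names and the duplex value used here are assumptions, so consult the Basic Usage notebook for the authoritative Configuration attributes:

# Hypothetical sketch: the enable_* attribute names and the duplex value (2) are assumptions
cfg = remeta.Configuration()
cfg.enable_thresh_sens = 1   # assumed switch: additionally fit a sensory threshold
cfg.enable_noise_sens = 2    # assumed: 2 = duplex mode (separate values per stimulus category)
rem = remeta.ReMeta(cfg)
rem.fit(stimuli, choices, confidence)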

A more detailed guide to using the toolbox is provided in the following Jupyter notebook: Basic Usage

Download files


Source Distribution

remeta-0.1.5.tar.gz (209.0 kB)

File details

Details for the file remeta-0.1.5.tar.gz.

File metadata

  • Download URL: remeta-0.1.5.tar.gz
  • Upload date:
  • Size: 209.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.11

File hashes

Hashes for remeta-0.1.5.tar.gz

  • SHA256: 8e9bc695cece07f0faa442fd428745d411f13a665616dfded240370abc5c70b0
  • MD5: 92757cad901220e7961125a4176faf9a
  • BLAKE2b-256: 37e0d2be820ed31edaf0d3c40dab41e5f128ac6e740c8c47f7aaf395af874fcf
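
For instance, the published SHA256 digest can be checked against a downloaded archive with Python's standard library (the local file path is an assumption):

# Verify the downloaded archive against the published SHA256 digest
import hashlib

expected = '8e9bc695cece07f0faa442fd428745d411f13a665616dfded240370abc5c70b0'
with open('remeta-0.1.5.tar.gz', 'rb') as f:  # assumed local path
    digest = hashlib.sha256(f.read()).hexdigest()
print('OK' if digest == expected else 'Hash mismatch!')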

