
Predict brain activity from text, audio, and video using Meta's TRIBE v2 model

Project description

TRIBE v2

A Foundation Model of Vision, Audition, and Language for In-Silico Neuroscience

PyPI Python 3.11+ License: CC BY-NC 4.0 Open In Colab

📄 Paper | 🤗 Weights

TRIBE v2 is a deep multimodal brain encoding model from Meta AI that predicts fMRI brain responses to naturalistic stimuli. It maps text, audio, and video through a Fusion Transformer onto the fsaverage5 cortical surface (20,484 vertices).

Quick Start

pip install cognitive-scoring

Two-line API

from tribev2 import BrainAPI

api = BrainAPI.load()
result = api.analyze("She opened the letter and tears of joy streamed down her face.")

print(result.valence)         # +0.031  (positive = happy)
print(result.learning)        # 0.577   (deeper cognitive processing)
print(result.attention)       # 0.600   (focused attention)
print(result.scores)          # {"prefrontal": 0.62, "temporal": 0.63, ...}
print(result.classification)  # [("happy", 0.85), ("calm", 0.32), ...]
print(result.summary())       # Full formatted output with bar charts

HTTP API

pip install cognitive-scoring[server]
python -c "from tribev2.server import main; main()"
curl -X POST http://localhost:8000/analyze \
  -H "Content-Type: application/json" \
  -d '{"text": "She opened the letter and tears of joy streamed down her face."}'

Interactive docs at http://localhost:8000/docs.
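The curl call above can also be made from Python. A minimal sketch using only the standard library, assuming the server from the snippet above is running on localhost:8000 (the `build_request`/`analyze` helper names are illustrative, not part of the package):

```python
import json
import urllib.request


def build_request(text: str, url: str = "http://localhost:8000/analyze") -> urllib.request.Request:
    """Build the same POST request as the curl example above."""
    payload = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def analyze(text: str) -> dict:
    """Send the request and return the parsed JSON response."""
    with urllib.request.urlopen(build_request(text)) as resp:
        return json.load(resp)
```

Using the standard library keeps the client dependency-free; swap in `requests` or `httpx` if you already use them.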

Lower-level API

from tribev2 import TribeModel

model = TribeModel.from_pretrained("facebook/tribev2", cache_folder="./cache")

df = model.get_events_dataframe(text_path="story.txt")
preds, segments = model.predict(events=df)
print(preds.shape)  # (n_timesteps, 20484)
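The 20,484 output columns match the fsaverage5 surface: 10,242 vertices per hemisphere. A minimal numpy sketch for splitting predictions per hemisphere, assuming the common left-then-right concatenation order (verify this against your copy of the model before relying on it):

```python
import numpy as np

N_VERTICES_PER_HEMI = 10_242  # fsaverage5 vertex count per hemisphere


def split_hemispheres(preds: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split (n_timesteps, 20484) predictions into (left, right) arrays.

    Assumes left-hemisphere vertices come first; this ordering is a
    convention assumed here, not stated by the package.
    """
    assert preds.shape[1] == 2 * N_VERTICES_PER_HEMI
    return preds[:, :N_VERTICES_PER_HEMI], preds[:, N_VERTICES_PER_HEMI:]


# Example with dummy data standing in for model output:
preds = np.zeros((30, 20_484))
left, right = split_hemispheres(preds)
print(left.shape, right.shape)
```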

Installation

# Core (text + audio inference)
pip install cognitive-scoring

# With video support (adds torchvision + moviepy)
pip install cognitive-scoring[video]

# With HTTP API server
pip install cognitive-scoring[server]

# With brain visualization (3D surface plots)
pip install cognitive-scoring[plotting]

# Everything except training
pip install cognitive-scoring[all]

# For development
pip install -e ".[all,test]"

Optional extras

  • video: adds torchvision, moviepy. Video file input (.mp4, .avi, etc.)
  • server: adds FastAPI, uvicorn. HTTP API server.
  • plotting: adds nilearn, pyvista, matplotlib. Brain surface heatmaps.
  • training: adds lightning, wandb, torchmetrics. Model training from scratch.
  • optimized: adds torchao. INT8 quantization.
  • menubar: adds rumps, pyobjc. macOS menu bar app for server lifecycle.
  • all: video + plotting + server + optimized + menubar. Everything.

What You Get

Region Scores (0–1)

Each text is scored on 10 functional brain region groups. A score of 0.5 is baseline; scores above 0.5 indicate more predicted activation than average, and scores below 0.5 indicate less.

  • prefrontal: Executive function, planning, decision-making
  • reward_vmPFC: Reward processing, positive affect
  • anterior_cingulate: Conflict monitoring, curiosity
  • default_mode: Self-referential thought, mind-wandering
  • insula: Emotional awareness, negative affect
  • temporal: Language comprehension, social cognition
  • visual: Visual processing
  • attention_parietal: Focused attention
  • motor: Sensorimotor processing
  • fusiform_parahip: Memory encoding, face/object recognition

Composite Scores

  • Valence: reward − (insula + ACC) / 2. Positive = happy, negative = sad.
  • Learning: (prefrontal + ACC + temporal) / 3. Higher = deeper processing.
  • Attention: (parietal + prefrontal) / 2. Higher = more focused.
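The formulas above can be computed directly from a region-score dictionary. A minimal sketch, assuming "reward" maps to the `reward_vmPFC` key, ACC to `anterior_cingulate`, and parietal to `attention_parietal` (per the region table); the sample score values are made up for illustration:

```python
def composite_scores(scores: dict[str, float]) -> dict[str, float]:
    """Compute the three composite scores from per-region scores (0-1)."""
    valence = scores["reward_vmPFC"] - (scores["insula"] + scores["anterior_cingulate"]) / 2
    learning = (scores["prefrontal"] + scores["anterior_cingulate"] + scores["temporal"]) / 3
    attention = (scores["attention_parietal"] + scores["prefrontal"]) / 2
    return {"valence": valence, "learning": learning, "attention": attention}


# Example with hypothetical region scores:
scores = {
    "prefrontal": 0.62, "temporal": 0.63, "anterior_cingulate": 0.55,
    "reward_vmPFC": 0.58, "insula": 0.50, "attention_parietal": 0.58,
}
print(composite_scores(scores))
```

Because each region score centers on 0.5, valence lands near 0 for neutral text, while learning and attention stay on the 0–1 scale.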

Performance

  • Model loading: ~15 s (first run and cached)
  • Text analysis: ~3–5 min first run, ~5–10 s cached
  • Compare two texts: ~6–10 min first run, ~10–20 s cached

Feature extraction (V-JEPA, Wav2Vec, LLaMA 3.2) is the bottleneck. Results are cached by content hash, so repeating an analysis of the same text skips extraction and returns in seconds.
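Content-hash caching of this kind can be sketched as follows. This is illustrative only: the cache path, file layout, and function names are assumptions, not the package's actual implementation.

```python
import hashlib
import json
from pathlib import Path

CACHE_DIR = Path("./cache/analyses")  # hypothetical cache location


def content_key(text: str) -> str:
    """Deterministic cache key: hash of the content itself, not a filename."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def cached_analyze(text: str, analyze_fn) -> dict:
    """Return a cached result when the exact same text was analyzed before."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    path = CACHE_DIR / f"{content_key(text)}.json"
    if path.exists():
        return json.loads(path.read_text())  # cache hit: skip feature extraction
    result = analyze_fn(text)  # expensive: runs feature extraction + model
    path.write_text(json.dumps(result))
    return result
```

Keying on a hash of the content, rather than a filename, means renamed or re-pasted text still hits the cache.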

Training

export DATAPATH="/path/to/studies"
export SAVEPATH="/path/to/output"

# Local test run
python -m tribev2.grids.test_run

# Grid search on Slurm
python -m tribev2.grids.run_cortical
python -m tribev2.grids.run_subcortical

Project Structure

tribev2/
├── api.py               # BrainAPI: simple two-line interface
├── server.py            # FastAPI HTTP server
├── demo_utils.py        # TribeModel: model loading + inference
├── brain_states.py      # BrainAtlas, scoring, classification
├── model.py             # FmriEncoder: Fusion Transformer architecture
├── main.py              # Data + TribeExperiment pipeline
├── _mps_compat.py       # Apple Silicon MPS patches
├── eventstransforms.py  # Text/audio/video → events
├── plotting/            # Brain visualization backends
└── studies/             # Dataset definitions

Caveats

  • Predictions are cortical surface only — subcortical structures (amygdala, hippocampus, basal ganglia) are NOT represented.
  • All cognitive/emotional labels are approximations based on cortical correlates.
  • NOT suitable for clinical diagnosis or treatment decisions.

Citation

@article{dAscoli2026TribeV2,
  title={A foundation model of vision, audition, and language for in-silico neuroscience},
  author={d'Ascoli, St{\'e}phane and Rapin, J{\'e}r{\'e}my and Benchetrit, Yohann and Brookes, Teon and Begany, Katelyn and Raugel, Jos{\'e}phine and Banville, Hubert and King, Jean-R{\'e}mi},
  year={2026}
}

License

Copyright © Meta Platforms, Inc. and affiliates. All rights reserved.

This work is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).

You may use, share, and adapt this material for non-commercial purposes only, provided you give appropriate credit, indicate if changes were made, and do not impose additional restrictions. See LICENSE for the full legal text.

Disclaimer of Warranties (§5): This software is provided "AS-IS" and "AS-AVAILABLE" without any warranties of any kind, express or implied, including but not limited to warranties of merchantability, fitness for a particular purpose, or non-infringement. In no event shall the licensor be liable for any damages arising from use of this software.

Modifications: This repository contains modifications to the original TRIBE v2 codebase by Meta Platforms, Inc., including (among other things) a high-level Python API, an HTTP server, ROI-based brain-state scoring, and Apple Silicon compatibility patches. For a complete record of all changes, see the git history. These modifications are also licensed under CC BY-NC 4.0.

Contributing

See CONTRIBUTING.md for how to get involved.



Download files

Download the file for your platform.

Source Distribution

cognitive_scoring-1.0.1.tar.gz (121.1 kB)

Uploaded Source

Built Distribution


cognitive_scoring-1.0.1-py3-none-any.whl (136.6 kB)

Uploaded Python 3

File details

Details for the file cognitive_scoring-1.0.1.tar.gz.

File metadata

  • Download URL: cognitive_scoring-1.0.1.tar.gz
  • Size: 121.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for cognitive_scoring-1.0.1.tar.gz
Algorithm Hash digest
SHA256 ef5fb607de16061a49f0cc7d59702d47e1d20c321ff12d03cb0919289d881d93
MD5 44b7b9de3c3973d522be95cad90d3d95
BLAKE2b-256 e018a2057da53ef78ef426058aab3bf496417c9a22a1ca687523024ce61b1537


Provenance

The following attestation bundles were made for cognitive_scoring-1.0.1.tar.gz:

Publisher: release.yml on suncloudsmoon/cognitive-scoring

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file cognitive_scoring-1.0.1-py3-none-any.whl.

File metadata

File hashes

Hashes for cognitive_scoring-1.0.1-py3-none-any.whl
Algorithm Hash digest
SHA256 d16c157f2a9a0db4e37186d3845ae6d549d0514e2f9fafc75a7ba40779520f9b
MD5 2603dcbee44e9fde7521c9335ac5b264
BLAKE2b-256 1a786a418c8507455761b8225dc5f2d1d981847b6feea35175fdc9c1d9eaef3e


Provenance

The following attestation bundles were made for cognitive_scoring-1.0.1-py3-none-any.whl:

Publisher: release.yml on suncloudsmoon/cognitive-scoring

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
