Predict brain activity from text, audio, and video using Meta's TRIBE v2 model
TRIBE v2
A Foundation Model of Vision, Audition, and Language for In-Silico Neuroscience
TRIBE v2 is a deep multimodal brain encoding model from Meta AI that predicts fMRI brain responses to naturalistic stimuli. It maps text, audio, and video through a Fusion Transformer onto the fsaverage5 cortical surface (~20k vertices).
Quick Start
```bash
pip install tribev2
```
Two-line API
```python
from tribev2 import BrainAPI

api = BrainAPI.load()
result = api.analyze("She opened the letter and tears of joy streamed down her face.")

print(result.valence)         # +0.031 (positive = happy)
print(result.learning)        # 0.577 (deeper cognitive processing)
print(result.attention)       # 0.600 (focused attention)
print(result.scores)          # {"prefrontal": 0.62, "temporal": 0.63, ...}
print(result.classification)  # [("happy", 0.85), ("calm", 0.32), ...]
print(result.summary())       # Full formatted output with bar charts
```
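For example, the classification list can be reduced to a single top label. A minimal sketch, assuming `result.classification` is a list of `(label, score)` pairs as shown above:

```python
# Pick the highest-scoring affective label from result.classification.
top_label, top_score = max(result.classification, key=lambda pair: pair[1])
print(f"Top predicted state: {top_label} ({top_score:.2f})")
```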
HTTP API
```bash
pip install tribev2[server]
python -c "from tribev2.server import main; main()"
```

```bash
# In a second terminal, query the running server:
curl -X POST http://localhost:8000/analyze \
  -H "Content-Type: application/json" \
  -d '{"text": "She opened the letter and tears of joy streamed down her face."}'
```
Interactive docs at http://localhost:8000/docs.
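The same endpoint can also be called from Python. A minimal sketch using the `requests` library, assuming the JSON response mirrors the fields returned by the two-line API:

```python
import requests

# POST a text sample to the local tribev2 server (assumes it is running on port 8000).
resp = requests.post(
    "http://localhost:8000/analyze",
    json={"text": "She opened the letter and tears of joy streamed down her face."},
    timeout=600,  # the first run may take minutes while features are extracted
)
resp.raise_for_status()
print(resp.json())  # assumed to mirror result.scores / valence / learning / attention
```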
Lower-level API
```python
from tribev2 import TribeModel

model = TribeModel.from_pretrained("facebook/tribev2", cache_folder="./cache")
df = model.get_events_dataframe(text_path="story.txt")
preds, segments = model.predict(events=df)
print(preds.shape)  # (n_timesteps, 20484)
```
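The returned predictions can then be post-processed with NumPy. A minimal sketch, assuming `preds` is a NumPy array (or convertible with `np.asarray`) of shape `(n_timesteps, 20484)` as printed above:

```python
import numpy as np

preds = np.asarray(preds)               # (n_timesteps, 20484) predicted fsaverage5 responses
mean_per_timestep = preds.mean(axis=1)  # average predicted activation across all vertices
peak_t = int(mean_per_timestep.argmax())
print(f"Strongest whole-cortex response at timestep {peak_t}")
np.save("story_predictions.npy", preds)  # persist for later analysis
```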
Installation
```bash
# Core (text + audio inference)
pip install tribev2

# With video support (adds torchvision + moviepy)
pip install tribev2[video]

# With HTTP API server
pip install tribev2[server]

# With brain visualization (3D surface plots)
pip install tribev2[plotting]

# Everything except training
pip install tribev2[all]

# For development
pip install -e ".[all,test]"
```
Optional extras
| Extra | What it adds | Use case |
|---|---|---|
| video | torchvision, moviepy | Video file input (.mp4, .avi, etc.) |
| server | FastAPI, uvicorn | HTTP API server |
| plotting | nilearn, pyvista, matplotlib | Brain surface heatmaps |
| training | lightning, wandb, torchmetrics | Model training from scratch |
| optimized | torchao | INT8 quantization |
| all | video + plotting + server + optimized | Everything except training |
What You Get
Region Scores (0–1)
Each text is scored on 10 functional brain-region groups. A score of 0.5 is baseline; a score above 0.5 indicates more predicted activation than average (see the sketch after the table).
| Region | What it measures |
|---|---|
| prefrontal | Executive function, planning, decision-making |
| reward_vmPFC | Reward processing, positive affect |
| anterior_cingulate | Conflict monitoring, curiosity |
| default_mode | Self-referential thought, mind-wandering |
| insula | Emotional awareness, negative affect |
| temporal | Language comprehension, social cognition |
| visual | Visual processing |
| attention_parietal | Focused attention |
| motor | Sensorimotor processing |
| fusiform_parahip | Memory encoding, face/object recognition |
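A minimal sketch of working with these scores, assuming `result.scores` maps the region names above to floats in [0, 1] (as in the Quick Start): list only the regions predicted above the 0.5 baseline, strongest first.

```python
# Regions scoring above the 0.5 baseline, sorted by predicted activation.
above_baseline = {region: score for region, score in result.scores.items() if score > 0.5}
for region, score in sorted(above_baseline.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{region:20s} {score:.2f}")
```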
Composite Scores
| Score | Formula | Meaning |
|---|---|---|
| Valence | reward − (insula + ACC) / 2 | Positive = happy, negative = sad |
| Learning | (prefrontal + ACC + temporal) / 3 | Higher = deeper processing |
| Attention | (parietal + prefrontal) / 2 | Higher = more focused |
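The composites can be recomputed from the region scores. A minimal sketch, under the assumption that `reward`, `ACC`, and `parietal` in the formulas correspond to the `reward_vmPFC`, `anterior_cingulate`, and `attention_parietal` keys of `result.scores`:

```python
s = result.scores  # region name -> score, as in the region table above

valence   = s["reward_vmPFC"] - (s["insula"] + s["anterior_cingulate"]) / 2
learning  = (s["prefrontal"] + s["anterior_cingulate"] + s["temporal"]) / 3
attention = (s["attention_parietal"] + s["prefrontal"]) / 2

print(f"valence={valence:+.3f} learning={learning:.3f} attention={attention:.3f}")
```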
Performance
| Operation | First run | Cached |
|---|---|---|
| Model loading | ~15s | ~15s |
| Text analysis | ~3–5 min | ~5–10s |
| Compare two texts | ~6–10 min | ~10–20s |
Feature extraction (V-JEPA, Wav2Vec, LLaMA 3.2) is the bottleneck. Results are cached by content hash; repeated analysis of the same text is instant.
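Cache keys are derived from the stimulus content. A minimal sketch of how such a content hash could be computed (the actual cache layout inside tribev2 may differ):

```python
import hashlib

def content_key(text: str) -> str:
    # Hash the raw text so that identical inputs map to the same cache entry.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

print(content_key("She opened the letter and tears of joy streamed down her face."))
```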
Training
```bash
export DATAPATH="/path/to/studies"
export SAVEPATH="/path/to/output"

# Local test run
python -m tribev2.grids.test_run

# Grid search on Slurm
python -m tribev2.grids.run_cortical
python -m tribev2.grids.run_subcortical
```
Project Structure
```text
tribev2/
├── api.py               # BrainAPI: simple two-line interface
├── server.py            # FastAPI HTTP server
├── demo_utils.py        # TribeModel: model loading + inference
├── brain_states.py      # BrainAtlas, scoring, classification
├── model.py             # FmriEncoder: Fusion Transformer architecture
├── main.py              # Data + TribeExperiment pipeline
├── _mps_compat.py       # Apple Silicon MPS patches
├── eventstransforms.py  # Text/audio/video → events
├── plotting/            # Brain visualization backends
└── studies/             # Dataset definitions
```
Caveats
- Predictions are cortical surface only — subcortical structures (amygdala, hippocampus, basal ganglia) are NOT represented.
- All cognitive/emotional labels are approximations based on cortical correlates.
- NOT suitable for clinical diagnosis or treatment decisions.
Citation
```bibtex
@article{dAscoli2026TribeV2,
  title={A foundation model of vision, audition, and language for in-silico neuroscience},
  author={d'Ascoli, St{\'e}phane and Rapin, J{\'e}r{\'e}my and Benchetrit, Yohann and Brookes, Teon and Begany, Katelyn and Raugel, Jos{\'e}phine and Banville, Hubert and King, Jean-R{\'e}mi},
  year={2026}
}
```
License
Copyright © Meta Platforms, Inc. and affiliates. All rights reserved.
This work is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
You may use, share, and adapt this material for non-commercial purposes only, provided you give appropriate credit, indicate if changes were made, and do not impose additional restrictions. See LICENSE for the full legal text.
Disclaimer of Warranties (§5): This software is provided "AS-IS" and "AS-AVAILABLE" without any warranties of any kind, express or implied, including but not limited to warranties of merchantability, fitness for a particular purpose, or non-infringement. In no event shall the licensor be liable for any damages arising from use of this software.
Modifications: This repository contains modifications to the original TRIBE v2 codebase by Meta Platforms, Inc., including (among other things) a high-level Python API, an HTTP server, ROI-based brain-state scoring, and Apple Silicon compatibility patches. For a complete record of all changes, see the git history. These modifications are also licensed under CC BY-NC 4.0.
Contributing
See CONTRIBUTING.md for how to get involved.