
Modified from the official PyTorch codebase for the video joint-embedding predictive architecture, V-JEPA, a method for self-supervised learning of visual representations from video.

Project description

V-JEPA: Video Joint Embedding Predictive Architecture

Official PyTorch codebase for the video joint-embedding predictive architecture, V-JEPA, a method for self-supervised learning of visual representations from video.

Meta AI Research, FAIR

Adrien Bardes, Quentin Garrido, Jean Ponce, Xinlei Chen, Michael Rabbat, Yann LeCun, Mahmoud Assran*, Nicolas Ballas*

[Blog] [Paper] [Yannic Kilcher's Video]

V-JEPA models are trained by passively watching video pixels from the VideoMix2M dataset, and produce versatile visual representations that perform well on downstream video and image tasks without adaptation of the model's parameters; e.g., using a frozen backbone and only a lightweight task-specific attentive probe.
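As a rough illustration of this frozen-evaluation setup, the sketch below shows the core of attentive pooling in plain Python. The real probes are multi-head attention modules in PyTorch; the function names and the single-query, single-head simplification here are ours, not the codebase's.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of floats.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attentive_pool(tokens, query):
    """Pool frozen-backbone tokens with one learned query (single-head sketch).

    tokens: list of D-dim feature vectors produced by the frozen encoder
    query:  one learned D-dim query vector belonging to the probe
    """
    scores = [sum(q * t for q, t in zip(query, tok)) for tok in tokens]
    weights = softmax(scores)
    dim = len(query)
    pooled = [sum(w * tok[d] for w, tok in zip(weights, tokens))
              for d in range(dim)]
    return pooled, weights

def linear_head(features, weight, bias):
    # Task-specific linear classifier on top of the pooled features.
    return [sum(w * f for w, f in zip(row, features)) + b
            for row, b in zip(weight, bias)]
```

Only the probe's parameters (the query, attention projections, and linear head) would be trained; the backbone stays frozen.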

Method

V-JEPA pretraining is based solely on an unsupervised feature prediction objective, and does not utilize pretrained image encoders, text, negative examples, human annotations, or pixel-level reconstruction.

Visualizations

As opposed to generative methods that have a pixel decoder, V-JEPA has a predictor that makes predictions in latent space. We train a conditional diffusion model to decode the V-JEPA feature-space predictions to interpretable pixels; the pretrained V-JEPA encoder and predictor networks are kept frozen in this process. The decoder is only fed the representations predicted for the missing regions of the video, and does not have access to the unmasked regions of the video.

The V-JEPA feature predictions are indeed grounded, and exhibit spatio-temporal consistency with the unmasked regions of the video.



Model Zoo

Pretrained models

| model | patch size | resolution | iterations | batch size | data | download |
|-------|------------|------------|------------|------------|------|----------|
| ViT-L | 2x16x16 | 224x224 | 90K | 3072 | VideoMix2M | checkpoint / configs |
| ViT-H | 2x16x16 | 224x224 | 90K | 3072 | VideoMix2M | checkpoint / configs |
| ViT-H | 2x16x16 | 384x384 | 90K | 2400 | VideoMix2M | checkpoint / configs |

K400 Attentive probes

| model | resolution | accuracy (16x8x3) | download |
|-------|------------|-------------------|----------|
| ViT-L/16 | 224x224 | 80.8 | attentive probe checkpoint / configs |
| ViT-H/16 | 224x224 | 82.0 | attentive probe checkpoint / configs |
| ViT-H/16 | 384x384 | 81.9 | attentive probe checkpoint / configs |

SSv2 Attentive probes

| model | resolution | accuracy (16x2x3) | download |
|-------|------------|-------------------|----------|
| ViT-L/16 | 224x224 | 69.5 | attentive probe checkpoint / configs |
| ViT-H/16 | 224x224 | 71.4 | attentive probe checkpoint / configs |
| ViT-H/16 | 384x384 | 72.2 | attentive probe checkpoint / configs |

ImageNet1K Attentive probes

| model | resolution | accuracy | download |
|-------|------------|----------|----------|
| ViT-L/16 | 224x224 | 74.8 | attentive probe checkpoint / configs |
| ViT-H/16 | 224x224 | 75.9 | attentive probe checkpoint / configs |
| ViT-H/16 | 384x384 | 77.4 | attentive probe checkpoint / configs |

Places205 Attentive probes

| model | resolution | accuracy | download |
|-------|------------|----------|----------|
| ViT-L/16 | 224x224 | 60.3 | attentive probe checkpoint / configs |
| ViT-H/16 | 224x224 | 61.7 | attentive probe checkpoint / configs |
| ViT-H/16 | 384x384 | 62.8 | attentive probe checkpoint / configs |

iNat21 Attentive probes

| model | resolution | accuracy | download |
|-------|------------|----------|----------|
| ViT-L/16 | 224x224 | 67.8 | attentive probe checkpoint / configs |
| ViT-H/16 | 224x224 | 67.9 | attentive probe checkpoint / configs |
| ViT-H/16 | 384x384 | 72.6 | attentive probe checkpoint / configs |

Code Structure

Config files: All experiment parameters are specified in config files (as opposed to command-line arguments). See the configs/ directory for example config files. Note, before launching an experiment, you must update the paths in the config file to point to your own directories, indicating where to save the logs and checkpoints and where to find the training data.
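For illustration only, the path-related fields of a config might look like the fragment below; the field names here are hypothetical, so consult the actual files under configs/ for the real schema.

```yaml
# Hypothetical sketch -- check configs/pretrain/*.yaml for the exact field names.
logging:
  folder: /your/experiment/output/dir   # where logs and checkpoints are written
data:
  dataset_paths:
    - /your/data/index.csv              # CSV index of training videos
```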

.
├── app                       # the only place where training loops are allowed
│   ├── vjepa                 #   Video JEPA pre-training
│   ├── main_distributed.py   #   entrypoint for launching app on slurm cluster
│   └── main.py               #   entrypoint for launching app locally on your machine for debugging
├── evals                     # the only place where evaluations of 'apps' are allowed
│   ├── image_classification  #   training an attentive probe for image classification with frozen backbone
│   ├── video_classification  #   training an attentive probe for video classification with frozen backbone
│   ├── main_distributed.py   #   entrypoint for launching distributed evaluations on slurm cluster
│   └── main.py               #   entrypoint for launching evaluations locally on your machine for debugging
├── src                       # the package
│   ├── datasets              #   datasets, data loaders, ...
│   ├── models                #   model definitions
│   ├── masks                 #   mask collators, masking utilities, ...
│   └── utils                 #   shared utilities
└── configs                   # the only place where config files are allowed (specify experiment params for app/eval runs)
    ├── evals                 #   configs for launching vjepa frozen evaluations
    └── pretrain              #   configs for launching vjepa pretraining

Data preparation

Video Datasets

V-JEPA pretraining and evaluations work with many standard video formats. To make a video dataset compatible with the V-JEPA codebase, you simply need to create a .csv file with the following format and then specify the path to this CSV file in your config.

/absolute_file_path.[mp4, webvid, etc.] $integer_class_label
/absolute_file_path.[mp4, webvid, etc.] $integer_class_label
/absolute_file_path.[mp4, webvid, etc.] $integer_class_label
...

Since V-JEPA is entirely unsupervised, the pretraining code will disregard the $integer_class_label in the CSV file. Thus, feel free to put a random value in this column. However, if you wish to run a supervised video classification evaluation on your video dataset, you must replace $integer_class_label with the ground truth label for each video.
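Such an index file can be generated with a short script. The helper below is our own illustration, not part of the codebase; adjust the extension set to match your data.

```python
from pathlib import Path

def write_vjepa_index(video_dir, csv_path, label=0):
    """Write a space-separated index file: absolute video path + integer label.

    For unsupervised pretraining the label column is ignored, so any
    integer works; for supervised evals, replace it with the true label.
    """
    video_dir = Path(video_dir)
    # Hypothetical extension set -- extend to whatever formats you store.
    exts = {".mp4", ".webm", ".avi"}
    paths = sorted(p for p in video_dir.rglob("*") if p.suffix.lower() in exts)
    with open(csv_path, "w") as f:
        for p in paths:
            f.write(f"{p.resolve()} {label}\n")
    return len(paths)
```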

Image Datasets

We use the standard PyTorch ImageFolder class in our image classification evals. Thus, to set up an image dataset for the image classification evaluation, first create a directory to store your image datasets $your_directory_containing_image_datasets. Next, download your image datasets into this directory in a format compatible with PyTorch ImageFolder.

For example, suppose we have a directory called my_image_datasets. We would then download our image datasets into this directory so that we end up with the following file tree:

.
└── /my_image_datasets/                # where we store image datasets
    ├── places205/121517/pytorch/      #   Places205
    │   └── [...]
    ├── iNaturalist-2021/110421/       #   iNaturalist21
    │   └── [...]
    ├── [...]                          #   Other Image Datasets
    │   └── [...]
    └── imagenet_full_size/061417/     #   ImageNet1k
        └── train
        │   ├── $class_1
        │   │    ├── xxx.[png, jpeg, etc.]
        │   │    ├── [...]
        │   │    └── xxz.[png, jpeg, etc.]
        │   ├── [...]
        │   └── $class_n
        │       ├── abc.[png, jpeg, etc.]
        │       ├── [...]
        │       └── abz.[png, jpeg, etc.]
        └── val
            ├── $class_1
            │    ├── xxx.[png, jpeg, etc.]
            │    ├── [...]
            │    └── xxz.[png, jpeg, etc.]
            ├── [...]
            └── $class_n
                ├── abc.[png, jpeg, etc.]
                ├── [...]
                └── abz.[png, jpeg, etc.]
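To sanity-check a split before launching an eval, a small stdlib walk over the ImageFolder-style layout (one subdirectory per class, image files inside) can be useful. This helper is a hypothetical illustration, not part of the codebase.

```python
from pathlib import Path

def summarize_imagefolder(root):
    """Return {class_name: image_count} for one ImageFolder-style split.

    ImageFolder expects root/<class_name>/<image files>.
    """
    root = Path(root)
    exts = {".png", ".jpg", ".jpeg"}
    return {
        d.name: sum(1 for p in d.iterdir() if p.suffix.lower() in exts)
        for d in sorted(root.iterdir()) if d.is_dir()
    }
```

Running this on, e.g., imagenet_full_size/061417/train should report one entry per class; an empty or missing class directory shows up immediately as a zero count.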

Launching V-JEPA pretraining

Local training

If you wish to debug your code or setup before launching a distributed training run, you can run the pretraining script locally on a multi-GPU (or single-GPU) machine. However, reproducing our results requires launching distributed training.

The single-machine implementation starts from app/main.py, which parses the experiment config file and runs the pretraining locally on a multi-GPU (or single-GPU) machine. For example, to run V-JEPA pretraining on GPUs "0", "1", and "2" on a local machine using the config configs/pretrain/vitl16.yaml, type the command:

python -m app.main \
  --fname configs/pretrain/vitl16.yaml \
  --devices cuda:0 cuda:1 cuda:2

Distributed training

To launch a distributed training run, the implementation starts from app/main_distributed.py, which, in addition to parsing the config file, also allows for specifying details about distributed training. For distributed training, we use the popular open-source submitit tool and provide examples for a SLURM cluster.

For example, to launch a distributed pre-training experiment using the config configs/pretrain/vitl16.yaml, type the command:

python -m app.main_distributed \
  --fname configs/pretrain/vitl16.yaml \
  --folder $path_to_save_stderr_and_stdout \
  --partition $slurm_partition

Launching Evaluations

Local training

If you wish to debug your eval code or setup before launching a distributed training run, you can run the evaluation script locally on a multi-GPU (or single-GPU) machine. However, reproducing the full eval requires launching distributed training. The single-machine implementation starts from evals/main.py, which parses the experiment config file and runs the eval locally on a multi-GPU (or single-GPU) machine.

For example, to run ImageNet image classification on GPUs "0", "1", and "2" on a local machine using the config configs/eval/vitl16_in1k.yaml, type the command:

python -m evals.main \
  --fname configs/eval/vitl16_in1k.yaml \
  --devices cuda:0 cuda:1 cuda:2

Distributed training

To launch a distributed evaluation run, the implementation starts from evals/main_distributed.py, which, in addition to parsing the config file, also allows for specifying details about distributed training. For distributed evaluations, we use the popular open-source submitit tool and provide examples for a SLURM cluster.

For example, to launch a distributed ImageNet image classification experiment using the config configs/eval/vitl16_in1k.yaml, type the command:

python -m evals.main_distributed \
  --fname configs/eval/vitl16_in1k.yaml \
  --folder $path_to_save_stderr_and_stdout \
  --partition $slurm_partition

Similarly, to launch a distributed K400 video classification experiment using the config configs/eval/vitl16_k400.yaml, type the command:

python -m evals.main_distributed \
  --fname configs/eval/vitl16_k400.yaml \
  --folder $path_to_save_stderr_and_stdout \
  --partition $slurm_partition

Setup

Run:

conda create -n jepa python=3.9 pip
conda activate jepa
python setup.py install

License

See the LICENSE file for details about the license under which this code is made available.

Citation

If you find this repository useful in your research, please consider giving a star :star: and a citation:

@article{bardes2024revisiting,
  title={Revisiting Feature Prediction for Learning Visual Representations from Video},
  author={Bardes, Adrien and Garrido, Quentin and Ponce, Jean and Rabbat, Michael and LeCun, Yann and Assran, Mahmoud and Ballas, Nicolas},
  journal={arXiv:2404.08471},
  year={2024}
}

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

vjepa-0.1.1.tar.gz (48.8 kB)

Uploaded Source

Built Distribution

vjepa-0.1.1-py3-none-any.whl (58.9 kB)

Uploaded Python 3

File details

Details for the file vjepa-0.1.1.tar.gz.

File metadata

  • Download URL: vjepa-0.1.1.tar.gz
  • Upload date:
  • Size: 48.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.9.19 Linux/5.15.0-101-generic

File hashes

Hashes for vjepa-0.1.1.tar.gz
| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | f5cb87368fdb99545eaa81620e08b497a6c3b29f55941cfbb2d35707489dde49 |
| MD5 | 9125af3ced39da0dc8277b6ad99022af |
| BLAKE2b-256 | 446b0577b305e3dc3e1df3b226a0fa3734642b74006c5d38ed9bc28ddf42d2c2 |

See more details on using hashes here.

File details

Details for the file vjepa-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: vjepa-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 58.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.9.19 Linux/5.15.0-101-generic

File hashes

Hashes for vjepa-0.1.1-py3-none-any.whl
| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | 7cf288302fa94effd050fb1f6ceafb0c15c7d03b60d4d85c48dc5acd1375db21 |
| MD5 | 3dc73fb6392d533d0365041b6a815bbb |
| BLAKE2b-256 | 052bbd837b90f3077399a6b2c435fee71260d217a3c98bf579f1e70cbf5723d0 |

See more details on using hashes here.
