A collection of multimodal datasets for research.
Project description
multimodal
A collection of multimodal (vision and language) datasets and visual features for deep learning research.
Currently it supports the following datasets:
- VQA v1
- VQA v2
- VQA-CP v1
- VQA-CP v2
And the following precomputed visual features (a usage sketch follows the list):
- Bottom-Up Top-Down features (10-100 regions per image)
- Bottom-Up Top-Down features (36 regions per image)
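To show how these pieces fit together, here is a minimal usage sketch. The class names (`VQA2`, `COCOBottomUpFeatures`), their parameters, and the returned keys are illustrative assumptions rather than the package's confirmed API; consult the project documentation for the exact interface.

```python
# Hypothetical sketch: class names, parameters, and keys below are
# assumptions, not the confirmed multimodal API.
from multimodal.datasets import VQA2                  # assumed dataset class
from multimodal.features import COCOBottomUpFeatures  # assumed features class

# Load the VQA v2 training annotations into a local data directory.
vqa2 = VQA2(dir_data="data", split="train")

# Load the fixed 36-region Bottom-Up Top-Down features (assumed identifier).
features = COCOBottomUpFeatures(features="coco-bottomup-36", dir_data="data")

item = vqa2[0]                       # one question/answer annotation
feats = features[item["image_id"]]   # region features for that image
print(item["question"], feats["features"].shape)  # e.g. (36, 2048)
```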
Download files
Download the file for your platform. If you're not sure which to choose, see the Python Packaging User Guide on installing packages.
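Since this release ships a single pure-Python wheel (`py3-none-any`), running `pip install multimodal` will select it automatically on any platform; downloading the file by hand is only needed if you want to pin or inspect it.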
Source Distributions
No source distribution files are available for this release.
Built Distribution
Hashes for multimodal-0.0.1-py3-none-any.whl
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 160927f467df695374dd9ce35c816edbd2af230be9041d66df85c2f58ed3710f |
| MD5 | d4831e3acef6c78c9cae60b3b1a6dab4 |
| BLAKE2b-256 | b1c8a5d123315f8bfe22ade641fc4abe7f222d18a2d45f0225ff56d9c8e14133 |
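To check a downloaded wheel against the digests above, recompute the hash locally and compare. Below is a minimal sketch in Python; the file name and expected digest are copied from the table.

```python
import hashlib

# Recompute the SHA256 of the downloaded wheel and compare it to the
# digest published in the table above.
wheel_path = "multimodal-0.0.1-py3-none-any.whl"
expected_sha256 = "160927f467df695374dd9ce35c816edbd2af230be9041d66df85c2f58ed3710f"

with open(wheel_path, "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()

if actual != expected_sha256:
    raise ValueError(f"SHA256 mismatch: got {actual}")
print("SHA256 verified")
```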