
ClipCap

ClipCap uses pretrained encoder and language models to generate captions from multimedia inputs. By building on the rich textual knowledge already learned by pretrained LMs, it enables high-fidelity text generation for tasks such as image captioning, VQA, audio captioning, and more.
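For intuition, here is a minimal, self-contained sketch of the general ClipCap recipe (the approach from the original paper, not this package's actual API): a media embedding from a frozen encoder is projected by a small mapping network into a sequence of "prefix" embeddings, which a pretrained GPT-2 then decodes into a caption. The `mapper`, `prefix_length`, and 512-dimensional embedding below are illustrative assumptions.

```python
# Sketch of the general ClipCap idea (not this package's exact API):
# an encoder embedding is mapped to prefix token embeddings that a
# frozen pretrained language model conditions on.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

prefix_length = 10      # number of prefix tokens fed to the LM (assumption)
encoder_dim = 512       # e.g. CLIP ViT-B/32 embedding size (assumption)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")
lm_dim = lm.config.n_embd

# Hypothetical mapping network: projects one encoder embedding into
# `prefix_length` LM-sized embeddings.
mapper = nn.Sequential(
    nn.Linear(encoder_dim, lm_dim * prefix_length),
    nn.Tanh(),
)

media_embedding = torch.randn(1, encoder_dim)   # stand-in for a CLIP/CLAP embedding
prefix = mapper(media_embedding).view(1, prefix_length, lm_dim)

# Greedy decoding conditioned on the prefix (simplified; no beam search).
generated = []
inputs_embeds = prefix
for _ in range(20):
    logits = lm(inputs_embeds=inputs_embeds).logits
    next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
    generated.append(next_token.item())
    inputs_embeds = torch.cat([inputs_embeds, lm.transformer.wte(next_token)], dim=1)

print(tokenizer.decode(generated))
```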

More details and results to come soon.

Installation

By default, the encoders are not installed with the base package. See the data preprocessing documentation for instructions on installing them.

pip install git+https://github.com/TheoCoombes/ClipCap.git

Supported Encoders

  • CLIP for tasks such as Image Captioning, VQA etc.
  • CLAP for tasks such as Audio Captioning, Audio Question Answering, etc.
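
As an illustration of what these encoders provide, the snippet below extracts a single image embedding with OpenAI's standalone `clip` package; ClipCap's own encoder wrappers may expose this differently.

```python
# Illustrative only: extracting an image embedding with OpenAI's CLIP package
# (pip install git+https://github.com/openai/CLIP.git). Not ClipCap's wrapper API.
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
with torch.no_grad():
    image_features = model.encode_image(image)   # shape: (1, 512)

print(image_features.shape)
```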

Data Preprocessing

You can run the data preprocessing script using the command below. (More info)

python3 -m clipcap.preprocess --help
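
Conceptually, preprocessing runs the frozen encoder over the dataset once and stores the resulting embeddings, so training never has to touch raw media or the encoder. The sketch below illustrates that idea with CLIP and a plain `.npy` file; the actual output format of `clipcap.preprocess` may differ (see `--help`), and the `images/*.jpg` layout and output filename are hypothetical.

```python
# Conceptual sketch of a preprocessing pass: compute encoder embeddings up
# front and save them to disk. The real clipcap.preprocess output may differ.
import glob
import clip
import numpy as np
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

embeddings = []
for path in sorted(glob.glob("images/*.jpg")):   # hypothetical dataset layout
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        embeddings.append(model.encode_image(image).cpu().numpy())

np.save("image_embeddings.npy", np.concatenate(embeddings, axis=0))
```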

Training

You can run the training script on preprocessed data using the command below. (More info)

python3 -m clipcap.train --help
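
For orientation, here is a minimal sketch of the kind of training loop this implies: only a small mapping network is trained on precomputed embeddings while the language model stays frozen, with the loss masked so that only caption tokens are supervised. The model choices, mapper, and toy data below are assumptions; `clipcap.train`'s actual options may differ.

```python
# Sketch of prefix-style training on precomputed embeddings
# (the general ClipCap recipe; not clipcap.train's actual implementation).
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

prefix_length, encoder_dim = 10, 512
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")
lm_dim = lm.config.n_embd
for p in lm.parameters():            # keep the language model frozen
    p.requires_grad = False

mapper = nn.Linear(encoder_dim, lm_dim * prefix_length)   # hypothetical mapping network
optimizer = torch.optim.AdamW(mapper.parameters(), lr=2e-5)

# One toy (embedding, caption) pair standing in for a preprocessed dataset.
embedding = torch.randn(1, encoder_dim)
tokens = tokenizer("a photo of a dog", return_tensors="pt").input_ids

prefix = mapper(embedding).view(1, prefix_length, lm_dim)
inputs_embeds = torch.cat([prefix, lm.transformer.wte(tokens)], dim=1)

# Ignore the loss on prefix positions; only caption tokens are supervised.
labels = torch.cat([torch.full((1, prefix_length), -100, dtype=torch.long), tokens], dim=1)
loss = lm(inputs_embeds=inputs_embeds, labels=labels).loss
loss.backward()
optimizer.step()
```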

Acknowledgments

This repository is heavily based on @rmokady's original implementation of ClipCap and also contains modified versions of @rom1504's clip-inference and embedding-reader libraries. Many thanks to both for their amazing work :)

TODO

Improved documentation, along with evaluation and inference scripts, coming soon.

