The Vision-Language Toolkit (VLTK)

# Installation

To install (add `-e` for an editable install, useful for personal customization):

```shell
git clone https://github.com/eltoto1219/vltk.git && cd vltk && pip install -e .
```

Alternatively:

```shell
pip install vltk
```

# Documentation

The documentation is up at [vltk documentation](http://avmendoza.info/vltk/).

It is pretty bare bones for now; first on the agenda to be added are:

1. Usage of adapters to rapidly create datasets.
2. An overview of all the config options for automatically instantiating PyTorch dataloaders from one or many different datasets at once.
3. An overview of how dataset metadata is automatically and deterministically collected from multiple datasets.
4. Usage of modality processors for language, vision, and language X vision, which make it possible to universally load any vision, language, or vision-language dataset.
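The adapter idea in item 1 above can be sketched in plain Python. The class and field names below are illustrative only, not vltk's actual API: each adapter maps one dataset's raw records into a shared schema so heterogeneous datasets can be merged and loaded together.

```python
# Hypothetical sketch of the adapter pattern: convert heterogeneous
# dataset records into one common schema. Not vltk's real API.

class Adapter:
    """Converts one dataset's raw records into a shared schema."""

    def convert(self, record):
        raise NotImplementedError


class VQARecordAdapter(Adapter):
    """Example: a VQA-style record with a question and an answer."""

    def convert(self, record):
        return {"text": record["question"], "label": record["answer"]}


class CaptionRecordAdapter(Adapter):
    """Example: a captioning record with a caption and no label."""

    def convert(self, record):
        return {"text": record["caption"], "label": None}


def build_dataset(sources):
    """Merge (adapter, records) pairs into one uniform record list."""
    merged = []
    for adapter, records in sources:
        merged.extend(adapter.convert(r) for r in records)
    return merged


if __name__ == "__main__":
    data = build_dataset([
        (VQARecordAdapter(), [{"question": "What color?", "answer": "red"}]),
        (CaptionRecordAdapter(), [{"caption": "A dog on grass."}]),
    ])
    print(len(data))  # 2
```

Once every source is normalized this way, the merged list can be handed directly to a standard PyTorch `DataLoader`, which is presumably what the config-driven instantiation in item 2 automates.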

# Collaboration

There are many exciting directions and improvements I have in mind for vltk. While this is the "official" beginning of the project, please email me with any suggestions or collaboration ideas: antonio36764@gmail.com


Download files


Source Distribution: vltk-1.0.4.tar.gz (104.0 kB)

Built Distribution: vltk-1.0.4-py3-none-any.whl (140.4 kB, Python 3)
