VL-Datasets
Open, Clean, Curated Datasets for Computer Vision
🔥 We use fastdup, a free tool, to clean all the datasets shared in this repo.
Explore the docs » · Report Issues · Read Blog · Get In Touch · About Us
Description
vl-datasets is a Python package that provides access to clean computer vision datasets with only two lines of code.
For example, to get the clean version of the Food-101 dataset, simply run:
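from vl_datasets import VLFood101
train_dataset = VLFood101('./', split='train')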
We support some of the most widely used computer vision datasets. Let us know if you'd like us to support an additional dataset.
All the datasets are analyzed for issues such as:
- Duplicates.
- Near Duplicates.
- Broken images.
- Outliers.
- Dark/Bright/Blurry images.
- Mislabels.
- Data Leakage.
Why?
Computer vision is an exciting and rapidly advancing field, with new techniques and models emerging all the time. However, to develop and evaluate these models, it's essential to have reliable and standardized datasets to work with.
Even with the recent success of generative models, data quality remains an issue that is largely overlooked. Training models with erroneous data hurts model accuracy and incurs costs in time, storage, and computational resources.
We believe that access to clean, high-quality computer vision datasets leads to accurate, non-biased, and efficient models.
By providing public access to vl-datasets, we hope to help advance the field of computer vision.
Datasets & Access
vl-datasets provides a convenient way to access the cleaned version of the datasets in Python.
Alternatively, for each dataset in this repo, we provide a .csv file that lists the problematic images in the dataset. You can use the images listed in the .csv to improve your model by re-labeling them, or simply remove them from the dataset.
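For example, a minimal sketch of inspecting such a file with pandas; the file name and the filename column used here are assumptions for illustration, so check the header of your actual .csv for the real schema:

import pandas as pd

# Load the list of problematic images (path and column name are assumptions).
issues = pd.read_csv('food_101_issues.csv')
print(issues.head())

# Collect the flagged filenames so they can be re-labeled or dropped.
flagged = set(issues['filename'])
print(f'{len(flagged)} images flagged for review')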
We're a startup and we'd like to offer free access to the datasets as much as we can afford to. But in doing so, we'd also need your support.
We're offering select .csv files completely free, with no strings attached.
For access to our complete dataset and exclusive beta features, all we ask is that you sign up to be a beta tester – it's completely free and your feedback will help shape the future of our platform.
Here is a table of widely used computer vision datasets, the issues we found, and a link to access the .csv file.
Dataset | Issues | CSV | Import Statement |
---|---|---|---|
Food-101 | | Download here. | from vl_datasets import VLFood101 |
Oxford-IIIT Pet | | Download here. | from vl_datasets import VLOxfordIIITPet |
LAION-1B | | Request access here. | WIP |
ImageNet-21K | | Request access here. | WIP |
ImageNet-1K | | Request access here. | WIP |
KITTI | | Request access here. | WIP |
DeepFashion | | Request access here. | WIP |
CelebA-HQ | | Request access here. | WIP |
COCO | | Request access here. | WIP |
Learn more about how we clean the datasets using our profiling tool here.
Installation
Option 1 - Install the vl_datasets package from PyPI:
pip install vl-datasets
Option 2 - Install the bleeding-edge version from GitHub:
pip install git+https://github.com/visual-layer/vl-datasets.git@main --upgrade
Usage
To start using vl-datasets, import the clean version of the dataset with:
from vl_datasets import VLFood101
This should import the clean version of the Food101 dataset.
Next, you can load the dataset as a PyTorch Dataset.
train_dataset = VLFood101('./', split='train')
valid_dataset = VLFood101('./', split='test')
If you have a custom .csv file, you can optionally pass it in:
train_dataset = VLFood101('./', split='train', exclude_csv='my-file.csv')
The filenames listed in the .csv will be excluded from the dataset.
Next, you can load the train and validation datasets in a PyTorch training loop.
See the Learn from Examples section to learn more.
NOTE: Sign up here for free to be our beta tester and get full access to all the .csv files for the datasets listed in this repo.
With the datasets loaded, you can train a model using a standard PyTorch training loop.
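A minimal training-loop sketch is shown below; the transform argument, model choice, and hyperparameters are assumptions for illustration and are not part of vl-datasets:

import torch
from torch.utils.data import DataLoader
from torchvision import models, transforms
from vl_datasets import VLFood101

# Basic preprocessing (assumption: VLFood101 accepts a torchvision-style
# transform argument, mirroring torchvision's Food101).
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_dataset = VLFood101('./', split='train', transform=transform)
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)

# Food-101 has 101 classes; the model and optimizer below are illustrative.
model = models.resnet18(num_classes=101)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()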
Learn from Examples
License
vl-datasets is licensed under the Apache 2.0 License. See LICENSE.
However, you are bound by the usage license of the original dataset. It is your responsibility to determine whether you have permission to use the dataset under its license. We provide no warranty or guarantee of accuracy or completeness.
Usage Tracking
This repository incorporates usage tracking using Sentry.io to monitor and collect valuable information about the usage of the application.
Usage tracking allows us to gain insights into how the application is being used in real-world scenarios. It provides us with valuable information that helps in understanding user behavior, identifying potential issues, and making informed decisions to improve the application.
We DO NOT collect folder names, user names, image names, image content, or other personally identifiable information.
What data is tracked?
- Errors and Exceptions: Sentry captures errors and exceptions that occur in the application, providing detailed stack traces and relevant information to help diagnose and fix issues.
- Performance Metrics: Sentry collects performance metrics, such as response times, latency, and resource usage, enabling us to monitor and optimize the application's performance.
To opt out, define an environment variable named SENTRY_OPT_OUT.
On Linux, run the following:
export SENTRY_OPT_OUT=True
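Alternatively, a minimal sketch of opting out from within Python, assuming the flag only needs to be present in the environment before vl_datasets is imported:

import os

# Set the opt-out flag before importing vl_datasets (assumption: the flag
# is read when the package is imported).
os.environ['SENTRY_OPT_OUT'] = 'True'

import vl_datasets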
Read more on Sentry's official webpage.
Getting Help
Get help from the Visual Layer team or community members through our support channels.
About Visual-Layer
Visual Layer was founded by the authors of XGBoost, Apache TVM, and Turi Create: Danny Bickson, Carlos Guestrin, and Amir Alush.
Learn more about Visual Layer here.
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distributions
Built Distributions
File details
Details for the file vl_datasets-0.0.10-py3.10-none-any.whl.
File metadata
- Download URL: vl_datasets-0.0.10-py3.10-none-any.whl
- Upload date:
- Size: 18.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.11.3
File hashes
Algorithm | Hash digest |
---|---|
SHA256 | 03e356e337450cc1b039db699ddc601d50c84053f7b5200716d3c6b5293f6d6c |
MD5 | 39d324e66a1602e313e8d46cca1e5316 |
BLAKE2b-256 | 781b16531030e585ec083456874c88095dd68cd188ba945ecb80254c2b064802 |
File details
Details for the file vl_datasets-0.0.10-py3.9-none-any.whl.
File metadata
- Download URL: vl_datasets-0.0.10-py3.9-none-any.whl
- Upload date:
- Size: 18.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.11.3
File hashes
Algorithm | Hash digest |
---|---|
SHA256 | 69214f206f37dbd4663dfd76bf2fd007d2ee3f7bda97ff10797f0efc9e8a02b7 |
MD5 | aaafe91fab7a5c22b6b349fc6ca7b3ea |
BLAKE2b-256 | 9b63604d9fe2a9a200c5751afe042762b0561851cc02603db3207b273533bbec |