Harlequin: Color-driven Generation of Synthetic Data for Referring Expression Comprehension
Luca Parolari, Elena Izzo, Lamberto Ballan
About
Referring Expression Comprehension (REC) aims to identify a particular object in a scene given a natural language expression, and is an important topic in vision-and-language understanding.
State-of-the-art methods for this task are based on deep learning and generally require expensive, manually labeled annotations. Some works tackle the problem with limited-supervision learning or by relying on Large Vision and Language Models; however, the development of techniques to synthesize labeled data remains overlooked.
In this paper, we propose a novel framework that generates artificial data for the REC task, taking into account both the textual and visual modalities. First, our pipeline processes existing data to create variations in the annotations.
Then, it generates an image using the altered annotations as guidance. The result of this pipeline is a new dataset, called Harlequin, comprising more than 1M queries.
This approach eliminates manual data collection and annotation, enabling scalability and facilitating arbitrary complexity.
We pre-train two REC models on Harlequin, then fine-tune and evaluate them on human-annotated datasets. Our experiments show that pre-training on artificial data is beneficial for performance.
Our pipeline
Installation
pip install harlequin-dataset
Usage
from harlequin import Harlequin

harlequin = Harlequin(
    "data/harlequin/images",
    "data/harlequin/annotations/instances_test.json",
)

print(len(harlequin))  # 13434
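The same constructor can also load the train and validation splits by pointing it at the corresponding annotation files (see the Data section below). A minimal sketch, assuming all three splits have been downloaded into data/harlequin:

from harlequin import Harlequin

# Load every split by swapping the annotation file; the file names follow
# the layout described in the Data section.
splits = {
    name: Harlequin(
        "data/harlequin/images",
        f"data/harlequin/annotations/instances_{name}.json",
    )
    for name in ("train", "val", "test")
}

for name, dataset in splits.items():
    print(name, len(dataset))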
Data
We release Harlequin annotations and images at this link: [Google Drive].
Harlequin is exported in COCO format and provides three annotation files in the annotations folder, while images are in the images folder.
data
`-- harlequin
    |-- annotations
    |   |-- instances_train.json
    |   |-- instances_val.json
    |   `-- instances_test.json
    `-- images
You can download it into the data folder.
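Since the annotations follow the COCO layout, a split can also be inspected directly with the standard json module before using the Harlequin class. A minimal sketch, assuming the download path above and the usual COCO top-level keys (images, annotations, categories):

import json

# Path follows the layout shown above; adjust it if you downloaded the data elsewhere.
with open("data/harlequin/annotations/instances_test.json") as f:
    coco = json.load(f)

# Standard COCO-style top-level keys (assumed here, not guaranteed by the package).
print("images:", len(coco.get("images", [])))
print("annotations:", len(coco.get("annotations", [])))
print("categories:", len(coco.get("categories", [])))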
Setup
NOTE: if you want to contribute, please see the Development section below. The following instructions are for a production environment (e.g., a cluster).
Requirements
- Python 3.10
- Anaconda (we suggest Miniconda)
pip install -r requirements.txt
Our code uses PyTorch 2 and PyTorch Lightning 2.
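A quick sanity check that the installed versions match these requirements (this assumes the standard torch and pytorch_lightning packages, which both expose __version__):

import sys

import torch
import pytorch_lightning as pl

print("Python:", sys.version.split()[0])  # expected 3.10.x
print("PyTorch:", torch.__version__)      # expected 2.x
print("Lightning:", pl.__version__)       # expected 2.x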
Development
Please read the CONTRIBUTING.md file to set up a development environment and submit your contribution.
This repository is structured as follows:
- data contains datasets (images, annotations, etc.)
- docs contains documentation about the project
- notebooks contains *.ipynb files
- harlequin is the main package
- tests contains possible unit tests
- tools contains useful scripts and commands for the project
Utils
Our Makefile provides some utilities for testing and formatting the code:
❯ make
Usage: make <target>
Targets:
help: ## Show the help.
fmt: ## Format code using black & isort.
test: ## Run tests.
test-cov: ## Run tests and generate coverage report.
virtualenv: ## Create a virtual environment.
install: ## Install dependencies.
precommit-install: ## Install pre-commit hooks.
precommit-uninstall: ## Uninstall pre-commit hooks.
release: ## Create a new tag for release.
Specifically, test runs pytest and executes all the unit tests listed in the tests folder, while fmt formats the code using black and organizes imports through isort.
Manual commands
If you want to manually run those utilities, use:
- pytest -v --cov-config .coveragerc --cov=harlequin -l --tb=short --maxfail=1 tests/ for testing
- coverage html for the coverage report
- isort *.py harlequin/ to organize imports
- black *.py harlequin/ for the code style
Citation
TODO
File details
Details for the file harlequin_dataset-0.1.5.tar.gz.
File metadata
- Download URL: harlequin_dataset-0.1.5.tar.gz
- Upload date:
- Size: 3.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.10.14
File hashes
Algorithm | Hash digest
---|---
SHA256 | d47d3e129e5a53d8885574c5037f89bc7bb3fbb21879dd4e120d84225a30f0eb
MD5 | 1c624a4bb504a103d065f42be91efa18
BLAKE2b-256 | 51b392851f7bd734e54c9207e782b51943fcf256b2f27fef8102ab2ca6b38096
File details
Details for the file harlequin_dataset-0.1.5-py3-none-any.whl.
File metadata
- Download URL: harlequin_dataset-0.1.5-py3-none-any.whl
- Upload date:
- Size: 4.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.10.14
File hashes
Algorithm | Hash digest
---|---
SHA256 | 0c60266d1167376913cfd88ce6b7739e1be3525818c64894d6ecec4b4bb572bf
MD5 | d2e542dc8638eed5bbe91788f24bc854
BLAKE2b-256 | dab7e70aea5fd8223b19d7c3e9c928e3829e36ac034b572189bee5efc1ff0e44