
Harlequin: Color-driven Generation of Synthetic Data for Referring Expression Comprehension


Luca Parolari, Elena Izzo, Lamberto Ballan

[Paper]

teaser.jpg

About

Referring Expression Comprehension (REC) aims to identify a particular object in a scene by a natural language expression, and is an important topic in visual language understanding.

State-of-the-art methods for this task are based on deep learning, which generally requires expensive, manually labeled annotations. Some works tackle the problem with limited-supervision learning or by relying on Large Vision and Language Models. However, the development of techniques to synthesize labeled data remains overlooked.

In this paper, we propose a novel framework that generates artificial data for the REC task, taking into account both the textual and visual modalities. First, our pipeline processes existing data to create variations in the annotations.

Then, it generates an image using the altered annotations as guidance. The result of this pipeline is a new dataset, called Harlequin, comprising more than 1M queries.

This approach eliminates manual data collection and annotation, enabling scalability and facilitating arbitrary complexity.

We pre-train two REC models on Harlequin, then fine-tune and evaluate them on human-annotated datasets. Our experiments show that pre-training on artificial data is beneficial for performance.

Our pipeline

docs/pipeline.jpg

Usage

TBD: installation?

from harlequin import Harlequin

harlequin = Harlequin(
    "data/harlequin/images",
    "data/harlequin/annotations/instances_test.json"
)

print(len(harlequin))  # 13434
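Assuming Harlequin follows the usual COCO-style dataset interface, each sample pairs an image with a referring expression and a target box. The class, field names, and values below are a hypothetical stand-in for illustration only, not the actual Harlequin API:

```python
# Hypothetical stand-in sketching a COCO-style referring-expression
# dataset interface (field names are assumptions, not Harlequin's API).
class CocoRefDataset:
    def __init__(self, annotations):
        # annotations: dict with "images" and "annotations" lists (COCO layout)
        self.images = {img["id"]: img for img in annotations["images"]}
        self.annotations = annotations["annotations"]

    def __len__(self):
        # one sample per annotated query
        return len(self.annotations)

    def __getitem__(self, idx):
        # resolve the annotation back to its image record
        ann = self.annotations[idx]
        img = self.images[ann["image_id"]]
        return img["file_name"], ann["query"], ann["bbox"]

# Tiny in-memory example with made-up values.
data = {
    "images": [{"id": 1, "file_name": "000001.jpg"}],
    "annotations": [
        {"image_id": 1, "query": "the red mug on the left", "bbox": [10, 20, 50, 60]},
    ],
}
ds = CocoRefDataset(data)
print(len(ds))   # 1
print(ds[0][1])  # the red mug on the left
```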

Data

We release Harlequin annotations and images at this link: [Google Drive].

Harlequin is exported in COCO format and provides three annotation files in the annotations folder, while images are in the images folder.

data
`-- harlequin
    |-- annotations
    |   |-- instances_train.json
    |   |-- instances_val.json
    |   `-- instances_test.json
    `-- images

You can download it into the data folder.
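Since the files are standard COCO-format JSON, a quick sanity check after downloading only needs the standard library. The sketch below builds a minimal stand-in file rather than assuming the real one is present (any fields beyond the standard COCO keys are assumptions):

```python
import json
import os
import tempfile

# Minimal COCO-style file standing in for instances_test.json (made-up values).
sample = {
    "images": [{"id": 1, "file_name": "000001.jpg", "width": 640, "height": 480}],
    "annotations": [{"id": 7, "image_id": 1, "bbox": [10, 20, 50, 60]}],
    "categories": [{"id": 1, "name": "object"}],
}

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "instances_test.json")
    with open(path, "w") as f:
        json.dump(sample, f)

    # Reload and count entries per top-level COCO key.
    with open(path) as f:
        coco = json.load(f)
    counts = {key: len(value) for key, value in coco.items()}
    print(counts)
```

The same loop over `counts` works on the real instances_train.json, instances_val.json, and instances_test.json once they are in place.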

Setup

NOTE: if you want to contribute, please see Sec. Development. The following instructions are for a production environment (e.g. a cluster).

Requirements

  • Python 3.10
  • Anaconda (we suggest Miniconda)

Install the dependencies with:

pip install -r requirements.txt

Our code uses PyTorch 2 and PyTorch Lightning 2.

Development

Please read the CONTRIBUTING.md file to setup a development environment and submit your contribution.

This repository is structured as follows:

  • data contains datasets (images, annotations, etc)
  • docs contains documentation about the project
  • notebooks contains *.ipynb files
  • harlequin is the main package
  • tests contains possible unit tests
  • tools contains useful scripts and commands for the project

Utils

Our Makefile provides some utilities for testing and formatting the code:

 make
Usage: make <target>

Targets:
help:                  ## Show the help.
fmt:                   ## Format code using black & isort.
test:                  ## Run tests.
test-cov:              ## Run tests and generate coverage report.
virtualenv:            ## Create a virtual environment.
install:               ## Install dependencies.
precommit-install:     ## Install pre-commit hooks.
precommit-uninstall:   ## Uninstall pre-commit hooks.

Specifically,

  • test runs pytest and executes all the unit tests in the tests folder
  • fmt formats the code using black and organizes imports through isort

Manual commands

If you want to manually run those utilities, use:

  • pytest -v --cov-config .coveragerc --cov=harlequin -l --tb=short --maxfail=1 tests/ for testing
  • coverage html for the coverage report
  • isort *.py harlequin/ to organize imports
  • black *.py harlequin/ for the code style

Citation

TODO
