
cc2dataset


Easily convert Common Crawl into a dataset of captions and documents using pyspark: image/text, audio/text, video/text, ...

Common Crawl contains about 5M WAT files, which list the links of the web. This simple tool lets you process one WAT file in about 50s and extract document links along with their alt text.

It also deduplicates on url+text in order to save output space and speed up the process.

This makes it possible to do the first step of building a dataset like laion5B in about 70k CPU core hours (5×10^6 WAT files × 50s / 3600 ≈ 70k). That's about $2.8k using AWS EC2 ($0.04 per core hour).
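
For reference, the same back-of-the-envelope estimate written out, using the approximate figures quoted above:

# Rough stage-1 cost estimate based on the figures above.
wat_count = 5_000_000        # approximate number of WAT files in Common Crawl
seconds_per_wat = 50         # approximate processing time per WAT on one core
price_per_core_hour = 0.04   # approximate EC2 price in dollars

core_hours = wat_count * seconds_per_wat / 3600
cost = core_hours * price_per_core_hour
print(f"{core_hours:.0f} core hours, ${cost:.0f}")  # ~69444 core hours, ~$2778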

Intended usage

This tool produces a collection of link + caption pairs. It is meant as stage 1 of creating a dataset. It performs deduplication and as little filtering as possible (does the link look like a URL / is the caption non-empty).

This produces a large quantity of raw data that can then be further filtered with appropriate techniques. An example of stage 2 is to estimate the similarity between the linked document and its text with a model such as CLIP. Depending on the chosen threshold, this may reduce the quantity of data by a factor of up to 100x.
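
As an illustration of such a stage 2 (not part of cc2dataset), here is a minimal sketch that scores an already-downloaded (image, caption) pair with CLIP via the Hugging Face transformers library; the model name, file path and threshold are placeholders:

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_similarity(image_path, caption):
    # Embed the image and the caption, then return their cosine similarity.
    inputs = processor(text=[caption], images=Image.open(image_path),
                       return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    image_emb = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    text_emb = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (image_emb @ text_emb.T).item()

# Keep the pair only if it clears the chosen threshold (placeholder values).
keep = clip_similarity("example.jpg", "a photo of a cat") > 0.28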

What hardware to pick?

CC is big and hosted in S3 us-east-1, so network-wise it makes a lot of sense to use machines located in the same region.

cpu128-dy-c6i-32xlarge instances are advised. Spark stores the non-deduplicated first stage on local disk, which should be an NVMe drive for speed during deduplication. At this first stage, one WAT file produces about 20MB, so the total disk space (over all workers) must exceed 20MB times the WAT count. For the whole CC, that means about 100TB, which fits for example in 150 instances with a 1TB NVMe drive each. 150 instances of 128 cores is 19,200 cores, so the whole processing takes about 2h. Fewer instances with bigger drives work too. If temporary disk space is an issue, the processing can also be done in multiple pieces by specifying --multipart.
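
The disk sizing above in numbers, as a sketch using the approximate per-WAT size quoted above:

# Temporary disk space needed across all workers for a full CC run.
wat_count = 5_000_000      # approximate number of WAT files
per_wat_mb = 20            # first-stage output per WAT file
total_tb = wat_count * per_wat_mb / 1_000_000
print(total_tb)            # -> 100.0 TB, e.g. 150 workers with a 1TB NVMe drive each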

Document type

This tool supports extracting several document types from CC:

  • image/text: about 300B after dedup
  • image/text even with empty text: estimated 1T
  • audio/text: about 2B after dedup
  • text doc: about 10B after dedup
  • video/text: about 2B after dedup

They can be selected with e.g. --document_type audio. You may experiment with more document kinds by running the single_warc_example.py example and exploring the resulting output.parquet.
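
For instance, a quick way to inspect that output with pandas (assuming the example wrote its result to output.parquet in the current directory):

import pandas as pd

# Load the parquet produced by the single-warc example and look at a few rows.
df = pd.read_parquet("output.parquet")
print(df.columns.tolist())  # available fields
print(df.head(10))          # a sample of extracted (link, text) records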

Install

pip install cc2dataset

Python examples

Check out the examples provided in the repository.

If you have a slurm cluster, refer to https://gist.github.com/rom1504/67ada3dedbecc113ae2dbdfd9c642d83 to start a spark cluster there.

API

This module exposes a single function, cc2dataset, which takes the same arguments as the command line tool (see the usage sketch after this list):

  • output_path the output path; should probably start with s3://. The output will be written to this path suffixed by the date (required)
  • wat_index_count the number of wat index files to read, can be None for all. (default 1)
  • wat_count the number of wat files to read, can be None for all, will randomly subsample if present. (default 100)
  • master the spark master url. (default local)
  • num_cores the number of cores of each spark executor. (default 128)
  • mem_gb the memory of each spark executor. (default 256)
  • multipart runs the processing in the specified number of parts and merges them at the end (default None)
  • shuffle randomly shuffles the output right before saving (default True)
  • resume the specific path of the output to resume (default None)
  • spark_builder a function that creates a spark session; None defaults to the built-in method (default None)
  • document_type the kind of document to extract (default image)
  • source_cc_protocol get common crawl from http or s3 (default s3)
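
A minimal usage sketch of that function, assuming cc2dataset is importable from the package top level; the bucket name and cluster settings are placeholders to adapt to your environment:

from cc2dataset import cc2dataset

cc2dataset(
    output_path="s3://my-bucket/cc-image-text",  # placeholder output location
    wat_index_count=1,        # read one WAT index file
    wat_count=100,            # randomly subsample 100 WAT files
    master="local",           # or the url of your spark master
    num_cores=16,
    mem_gb=32,
    document_type="image",    # see the document types listed above
    source_cc_protocol="s3",  # "http" if S3 access is not available
)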

For development

Either locally, or in gitpod (do export PIP_USER=false there)

Setup a virtualenv:

python3 -m venv .env
source .env/bin/activate
pip install -e .

to run tests:

pip install -r requirements-test.txt

then

make lint
make test

You can use make black to reformat the code

Use python -m pytest -x -s -v tests -k "dummy" to run a specific test.

Thanks

  • Vaishaal for providing the initial CC parsing code with efficient libraries
  • rvencu for optimizing the cc parsing code for laion5B, on which the idea of this package is based
