
MTG image dataset with automatic image scraping and conversion.


🧙‍♂️ Magic The Gathering ✨
🧚‍♀️ Dataset 🧝‍♀️

With automated multithreaded image downloading,
caching and optional dataset conversion.


Example reconstructions of dataset elements
using a simple Beta-VAE


⚡️  Quickstart

  1. Install mtgdata with pip install mtgdata

  2. Prepare or convert the MTG data using the command line with python -m mtgdata --help


📋  Features

MTG Card Face Dataset

  • Automatically scrape and download card images from Scryfall
  • Multithreaded download through a randomized proxy list
  • Only return valid images that are not placeholders
  • Return all the faces of a card
  • Normalise data: some images are incorrectly sized
  • Cached

Convert to HDF5

  • Convert the data to an hdf5 dataset
  • Much faster than raw jpg or png image accesses
  • Metadata json file allows linking back to original scryfall information.

Picklable HDF5 Dataset Class

  • Load the converted HDF5 dataset from disk, from multiple threads or processes
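The pattern that makes multi-process loading possible can be sketched as follows. This is an illustrative stand-in using a plain binary file, not mtgdata's actual implementation: the underlying file handle is opened lazily on first access and dropped from the pickled state, so each worker process reopens its own handle.

```python
class LazyFileDataset:
    """Illustrative sketch of the lazy-open pattern that lets a file-backed
    dataset be pickled to worker processes: the handle is opened on first
    access, never in __init__, and is excluded from the pickled state."""

    def __init__(self, path):
        self._path = path
        self._fp = None  # opened lazily, never pickled

    def _file(self):
        if self._fp is None:
            self._fp = open(self._path, 'rb')
        return self._fp

    def __getstate__(self):
        state = self.__dict__.copy()
        state['_fp'] = None  # file handles cannot be pickled
        return state

    def read(self):
        fp = self._file()
        fp.seek(0)
        return fp.read()
```

Because the handle never travels through pickle, a copy of the dataset object can be sent to each worker, and each worker opens the file independently.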

⬇️  Download Images

Command Line

You can prepare (download) all the normal-quality images from the default Scryfall bulk data by running mtgdata/__main__.py:

python3 mtgdata prepare --help

Otherwise, you can instead convert the downloaded images into an hdf5 dataset by running:

python3 mtgdata convert --help

Programmatically

Alternatively, you can download the images from within Python by instantiating the mtgdata.ScryfallDataset object. It accepts arguments similar to those of the command-line approach.

from mtgdata import ScryfallDataset, ScryfallImageType, ScryfallBulkType

data = ScryfallDataset(
    img_type=ScryfallImageType.small,
    bulk_type=ScryfallBulkType.default_cards,
    transform=None,
)

# you can access the dataset elements like usual
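Element access then works like any Python sequence. The toy stand-in below (ToySequenceDataset is hypothetical, not part of mtgdata, since the real ScryfallDataset requires a download) shows how a transform callable composes with indexing, the same way the transform argument above is applied to each returned image.

```python
class ToySequenceDataset:
    """Hypothetical stand-in for an image dataset: indexable, with an
    optional per-element transform, mirroring the `transform=` argument
    shown above."""

    def __init__(self, items, transform=None):
        self._items = items
        self._transform = transform

    def __len__(self):
        return len(self._items)

    def __getitem__(self, index):
        item = self._items[index]
        return self._transform(item) if self._transform else item


# indexing applies the transform to each element
data = ToySequenceDataset([1, 2, 3], transform=lambda x: x * 10)
```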
Proxy Issues?

The scrape logic used to obtain the proxy list for mtgdata.utils.proxy.ProxyDownloader will probably go out of date. You can override the default scrape logic used by the Dataset download logic by registering a new scrape function.

from doorway.x import proxies_register_scraper
from typing import List, Dict

@proxies_register_scraper(name='my_proxy_source', is_default=True)
def custom_proxy_scraper(proxy_type: str) -> List[Dict[str, str]]:
    # you should respect this setting
    assert proxy_type in ('all', 'http', 'https')
    # proxies is a list of dictionaries, where each dictionary only has one entry:
    # - the key is the protocol
    # - the value is the matching full url
    return [
        {'HTTP': 'http://<my-http-proxy>.com'},
        {'HTTPS': 'https://<my-https-proxy>.com'},
    ]
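For intuition, the randomized rotation through a proxy list mentioned above can be sketched like this. This is an illustrative sketch, not the actual ProxyDownloader logic: shuffle the scraped list once, then hand proxies out round-robin so concurrent download threads spread their requests across the pool.

```python
import random
from itertools import cycle


def make_proxy_rotator(proxies, seed=None):
    """Illustrative sketch (not the actual ProxyDownloader implementation):
    shuffle the scraped proxy list once, then yield proxies round-robin so
    concurrent download threads spread requests across the pool."""
    pool = list(proxies)
    random.Random(seed).shuffle(pool)
    rotation = cycle(pool)
    return lambda: next(rotation)


# hypothetical proxy entries in the same one-entry-dict format as above
next_proxy = make_proxy_rotator([
    {'HTTP': 'http://proxy-a.example'},
    {'HTTP': 'http://proxy-b.example'},
])
```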

🔄  Convert Images to an HDF5 Dataset

Command Line

The images can be converted to hdf5 format by running the mtgdata.scryfall_convert module. Various arguments can be specified; please see the argparse arguments at the bottom of the file for more information.

python3 mtgdata/scryfall_convert.py

The resulting data file will contain a data key holding the image data.

Programmatically

Alternatively, you can convert and generate the hdf5 dataset from within Python by calling the mtgdata.scryfall_convert.generate_converted_dataset function. It accepts arguments similar to those of the command-line approach.

from mtgdata import generate_converted_dataset, ScryfallImageType, ScryfallBulkType

generate_converted_dataset(
    out_img_type=ScryfallImageType.small,
    out_bulk_type=ScryfallBulkType.default_cards,
    save_root='./data/converted/',
    out_obs_size_wh=(224, 160),
    convert_speed_test=True,
)

Loading The Data

We provide a helper dataset class for loading this generated file.

from torch.utils.data import DataLoader
from mtgdata import Hdf5Dataset


# this h5py dataset supports pickling, and can be wrapped with a pytorch dataset.
data = Hdf5Dataset(
    h5_path='data/converted/mtg-default_cards-normal-60459x224x160x3.h5',  # name will differ
    h5_dataset_name='data',
    transform=None,
)

# you can wrap the dataset with a pytorch dataloader like usual, and specify more than one worker
dataloader = DataLoader(data, shuffle=True, num_workers=2, batch_size=64)

# to load the data into memory as a numpy array, you can call `data = data.numpy()`
# this will take a long time depending on your disk speed and use a lot of memory.
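The shape of the converted dataset is encoded in the generated file name (e.g. mtg-default_cards-normal-60459x224x160x3.h5 above, i.e. N×H×W×C). A small hypothetical helper, assuming that naming convention holds, can recover the shape without opening the file:

```python
import re


def parse_h5_shape(filename):
    """Hypothetical helper (not part of mtgdata): recover the NxHxWxC shape
    suffix from a converted file name such as
    'mtg-default_cards-normal-60459x224x160x3.h5' -> (60459, 224, 160, 3)."""
    match = re.search(r'-(\d+(?:x\d+)+)\.h5$', filename)
    if match is None:
        raise ValueError(f'no shape suffix found in {filename!r}')
    return tuple(int(dim) for dim in match.group(1).split('x'))
```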
