shopify-scrape

Python module to scrape product and collection data from Shopify store URLs.

Installation

pip install shopify_scrape

Usage

python -m shopify_scrape.extract url -h

usage: extract.py url [-h] [-d DEST_PATH] [-o OUTPUT_TYPE]
                      [-p PAGE_RANGE [PAGE_RANGE ...]] [-c] [-f FILE_PATH]
                      url

positional arguments:
  url                   URL to extract.

optional arguments:
  -h, --help            show this help message and exit
  -d DEST_PATH, --dest_path DEST_PATH
                        Destination folder. Defaults to current directory
                        ('./')
  -o OUTPUT_TYPE, --output_type OUTPUT_TYPE
                        Output file type ('json' or 'csv'). Defaults to 'json'
  -p PAGE_RANGE [PAGE_RANGE ...], --page_range PAGE_RANGE [PAGE_RANGE ...]
                        Page range as tuple to extract. There are 30 items per
                        page.
  -c, --collections     If true, extracts '/collections.json' instead.
  -f FILE_PATH, --file_path FILE_PATH
                        File path to write. Defaults to
                        '[dest_path]/[url].products' or
                        '[dest_path]/[url].collections'

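For example, to pull the first three pages of products from a store and write them as CSV (a sketch: 'examplestore.myshopify.com' is a placeholder domain, and the page range is assumed to be a start/end pair of integers, as the help text above suggests):

# placeholder store domain; -p is assumed to take a start page and an end page
python -m shopify_scrape.extract url examplestore.myshopify.com -p 1 3 -o csv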
python -m shopify_scrape.extract batch -h

usage: extract.py batch [-h] [-d DEST_PATH] [-o OUTPUT_TYPE]
                        [-p PAGE_RANGE [PAGE_RANGE ...]] [-c]
                        [-r ROW_RANGE [ROW_RANGE ...]] [-l]
                        urls_file_path url_column

positional arguments:
  urls_file_path        File path of csv containing URLs to extract.
  url_column            Name of unique column with URLs.

optional arguments:
  -h, --help            show this help message and exit
  -d DEST_PATH, --dest_path DEST_PATH
                        Destination folder. Defaults to current directory
                        ('./')
  -o OUTPUT_TYPE, --output_type OUTPUT_TYPE
                        Output file type ('json' or 'csv'). Defaults to 'json'
  -p PAGE_RANGE [PAGE_RANGE ...], --page_range PAGE_RANGE [PAGE_RANGE ...]
                        Page range as tuple to extract. There are 30 items per
                        page.
  -c, --collections     If true, extracts '/collections.json' instead.
  -r ROW_RANGE [ROW_RANGE ...], --row_range ROW_RANGE [ROW_RANGE ...]
                        Row range specified as two integers.
  -l, --log             If true, logs the success of each URL attempt.
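
For example, to process rows 0 through 100 of a CSV of store URLs and log the outcome of each attempt (a sketch: 'stores.csv' and its 'url' column are placeholders for your own input file):

# placeholder input file and column name; -r takes a start row and an end row
python -m shopify_scrape.extract batch stores.csv url -r 0 100 -l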
