A simple image scraper to download all images from a given URL
Project description
A cool command-line tool that downloads all images on a given webpage.
Download
pip install (recommended)
Install ImageScraper using pip:
$ pip install ImageScraper
Dependencies
Note that ImageScraper depends on lxml and requests. If pip fails while compiling lxml, install the libxml2-dev and libxslt-dev packages on your system and try again.
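ImageScraper's internals are not shown on this page, but as a rough idea of the extraction step such a tool performs (the real package uses lxml and requests; this dependency-free sketch uses only the standard library, and the class name and sample HTML are illustrative, not taken from ImageScraper's code):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class ImgCollector(HTMLParser):
    """Collect the src of every <img> tag, resolved against a base URL."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.img_urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                # Resolve relative paths like /a.png against the page URL.
                self.img_urls.append(urljoin(self.base_url, src))

html = '<html><body><img src="/a.png"><img src="http://cdn.example.com/b.jpg"></body></html>'
collector = ImgCollector("http://example.com/page")
collector.feed(html)
print(collector.img_urls)
# → ['http://example.com/a.png', 'http://cdn.example.com/b.jpg']
```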
Usage
$ image-scraper [OPTIONS] URL
Options
-h, --help Print this help message
-m, --max-images <number> Maximum number of images to scrape
-s, --save-dir <path> Name of the folder to save the images in (default: ./images_<domain>)
--max-filesize <size> Maximum image file size in bytes (default: 100000000)
--dump-urls Print the URLs of the images
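How the CLI enforces these limits internally is not documented here; as a hypothetical illustration only (the function name, parameters, and sample data below are invented for this sketch, not part of ImageScraper), capping results by `--max-images` and `--max-filesize` could look like:

```python
def apply_limits(candidates, max_images=None, max_filesize=100_000_000):
    """Keep at most max_images URLs whose size is within max_filesize bytes.

    candidates: list of (url, size_in_bytes) tuples.
    """
    kept = []
    for url, size in candidates:
        if size > max_filesize:
            continue  # skip images over the size limit
        kept.append(url)
        if max_images is not None and len(kept) >= max_images:
            break  # stop once enough images have been collected
    return kept

images = [("http://example.com/a.png", 5_000),
          ("http://example.com/big.tiff", 200_000_000),
          ("http://example.com/b.jpg", 7_000),
          ("http://example.com/c.gif", 3_000)]
print(apply_limits(images, max_images=2))
# → ['http://example.com/a.png', 'http://example.com/b.jpg']
```

Note how the oversized file is skipped before it counts toward the image cap, mirroring the documented defaults above.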
If you downloaded the tarball:
Extract the contents of the tar file, then install and run:
$ cd ImageScraper/
$ python setup.py install
$ image-scraper --max-images 10 [url to scrape]
If you installed using pip:
Run image-scraper directly from the terminal:
$ image-scraper --max-images 10 [url to scrape]
NOTE:
A new folder called "images_<domain>" will be created in the current directory, containing all the downloaded images.
Upgrading
Check if a newer version is available and upgrade using:
$ sudo pip install ImageScraper --upgrade
Source distribution: ImageScraper-2.0.5.tar.gz (6.6 kB)