RedditMiner: Subreddit Image Scraper

A Python tool for scraping images, galleries, and comments from Reddit using browser cookies.

Quick Start

  1. Install RedditMiner from PyPI:

    pip install redditminer
    
  2. Export your Reddit cookies (see below) and save as cookies.txt in your working directory.

  3. Run example commands:

    • Scrape 200 top posts and save as JSON:

      redditminer --subreddit funny --limit 200 --sort top
      
    • Scrape only image URLs (TXT file):

      redditminer --subreddit funny --output-mode image_url
      
    • Scrape posts with top-level comments included (JSON):

      redditminer --subreddit funny --output-mode post --with-comment
      
    • Scrape and immediately download all images:

      redditminer --subreddit funny --output-mode image_url --download-images
      
    • Customize the download directory and parallelism:

      redditminer --subreddit funny --output-mode image_url --download-images --output-dir my_images --max-workers 16
      

Note: Make sure you have cookies.txt in your current directory for authentication. If you encounter rate limiting, RedditMiner will automatically slow down and retry.


RedditMiner is a lightweight, open-source Python tool for scraping image and gallery URLs from any public or private subreddit using your browser session cookies. No Reddit API credentials are required, and it works even for NSFW and restricted subreddits.

Features

  • Cookie Authentication: Uses your browser session cookies for seamless access.
  • Image & Gallery Support: Extracts direct image links and all images from Reddit galleries.
  • Deep Pagination: Efficiently fetches large numbers of posts using Reddit's pagination.
  • Command-Line Interface: Specify the subreddit and options directly via command line.
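
The deep-pagination feature works by following the `after` cursor that Reddit's public JSON listings return with each page of results. Below is a minimal sketch of that loop; `fetch_posts` and the injected `fetch_page` callable are illustrative names, not RedditMiner's actual API (in practice `fetch_page` would wrap a `requests.get` against `https://www.reddit.com/r/<subreddit>/new.json`):

```python
def fetch_posts(subreddit, limit, fetch_page):
    """Collect up to `limit` posts by following Reddit's `after` cursor.

    `fetch_page(subreddit, params)` should return the decoded JSON of a
    listing page; it is injected here so the pagination logic itself can
    be exercised without network access.
    """
    posts, after = [], None
    while len(posts) < limit:
        # Reddit caps a single listing page at 100 items.
        params = {"limit": min(100, limit - len(posts))}
        if after:
            params["after"] = after
        data = fetch_page(subreddit, params)["data"]
        children = data.get("children", [])
        if not children:          # empty page: stop rather than loop forever
            break
        posts.extend(child["data"] for child in children)
        after = data.get("after")
        if after is None:         # Reddit signals the end with a null cursor
            break
    return posts[:limit]
```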

Installation

  1. Install RedditMiner from PyPI:

    pip install redditminer
    
  Or, for development, clone the repository and install it in editable mode:

    git clone https://github.com/MisbahKhan0009/RedditMiner.git
    cd RedditMiner
    pip install -e .
    
  2. Export your Reddit cookies

    • Log into Reddit in your browser.
    • Use a browser extension like "EditThisCookie" or "Get cookies.txt" to export your cookies for reddit.com.
    • Save the exported file as cookies.txt in your working directory (the project root, if running from source).
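
The cookies.txt file produced by these extensions uses the Netscape cookie format, which Python's standard library can parse directly. A minimal sketch of loading it into a requests session (the `session_from_cookies` helper is illustrative, not part of RedditMiner's API):

```python
from http.cookiejar import MozillaCookieJar

import requests

def session_from_cookies(path="cookies.txt"):
    """Build a requests session authenticated with an exported cookies.txt."""
    jar = MozillaCookieJar(path)
    # ignore_discard/ignore_expires keep session cookies that browsers
    # mark as discardable but that Reddit still needs.
    jar.load(ignore_discard=True, ignore_expires=True)
    session = requests.Session()
    session.cookies.update(jar)  # copy the file's cookies into the session
    session.headers["User-Agent"] = "redditminer-example"  # Reddit rejects blank UAs
    return session
```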

Usage

After installation, run RedditMiner from the command line with your desired subreddit:

redditminer --subreddit EarthPorn

Or, if running from source:

python main.py --subreddit EarthPorn

Command-line options

  • --subreddit : Subreddit name to scrape (required)
  • --limit : Number of posts to scrape (default: 100)
  • --sort : Sort order (new, hot, top, etc.; default: new)
  • --output-mode : Output format. Options:
    • post (default): Full post data (JSON)
    • image_url: Only image URLs (from both image_url and gallery_images fields, TXT file)
    • post_with_comments: Full post data with comments (JSON, same as post if --with-comment is not set)
  • --with-comment : Include top-level comments for each post (JSON output modes only). Comments from "AutoModerator" are automatically skipped.
  • --download-images : Download all found images
  • --output-dir : Directory to save images (default: images)
  • --max-workers : Number of parallel downloads (default: 8)

Rate Limiting: If Reddit returns a 429 (Too Many Requests) error, RedditMiner will automatically slow down and retry after 60 seconds. For best results, avoid running multiple scrapes in parallel and consider using a fresh set of cookies if you encounter repeated rate limiting.
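
The retry behaviour described above can be sketched as follows. The `get_with_retry` helper and its parameters are illustrative (RedditMiner's internals may differ); the request is injected as a callable so the backoff logic is testable without network access:

```python
import time

def get_with_retry(do_request, retries=3, wait=60, sleep=time.sleep):
    """Call `do_request()` until it returns a non-429 response.

    On HTTP 429 (Too Many Requests), wait `wait` seconds and retry,
    up to `retries` extra attempts.
    """
    for attempt in range(retries + 1):
        response = do_request()
        if response.status_code != 429:
            return response
        if attempt < retries:
            sleep(wait)   # back off before retrying
    return response       # still rate-limited after all retries
```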

Examples:

Scrape 200 top posts and save as JSON:

redditminer --subreddit funny --limit 200 --sort top

Scrape only image URLs (TXT file):

redditminer --subreddit funny --output-mode image_url

Scrape posts with top-level comments included (JSON):

redditminer --subreddit funny --output-mode post --with-comment

Each post in the output JSON will have a comments field containing a list of top-level comments (author, body, score, created_utc). Comments from "AutoModerator" are excluded.
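
As a concrete illustration of that shape, a post record in the output might look like the dictionary below. The field values are made up; only the field names follow the README, and the `keep_comment` filter is an illustrative stand-in for the AutoModerator exclusion:

```python
def keep_comment(comment):
    """Comments authored by AutoModerator are excluded from the output."""
    return comment.get("author") != "AutoModerator"

post = {
    "title": "example post",
    "comments": [
        {"author": "alice", "body": "nice shot", "score": 12,
         "created_utc": 1700000000},
        {"author": "AutoModerator", "body": "sub rules reminder", "score": 1,
         "created_utc": 1700000001},
    ],
}
visible = [c for c in post["comments"] if keep_comment(c)]
```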

Scrape and immediately download all images:

redditminer --subreddit funny --output-mode image_url --download-images

You can customize the download directory and parallelism:

redditminer --subreddit funny --output-mode image_url --download-images --output-dir my_images --max-workers 16

Downloaded images are automatically organized by subreddit:

  • For example, images from r/EarthPorn will be saved in images/EarthPorn/ by default.
  • If you specify a custom output directory, images will be saved in <output-dir>/<subreddit>/.

Results are saved as:

  • JSON: output/images_[subreddit]_[timestamp].json
  • TXT (image URLs): output/images_[subreddit]_[timestamp].txt
  • Downloaded images: in images/<subreddit>/ (or <output-dir>/<subreddit>/ if specified)
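
The download step above (--download-images, --output-dir, --max-workers) can be sketched with a thread pool that writes each image into <output-dir>/<subreddit>/. `download_images` and the injected `fetch` callable are illustrative names, not RedditMiner's actual API; in practice `fetch` would wrap a `requests.get` returning the image bytes:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def download_images(urls, subreddit, fetch, output_dir="images", max_workers=8):
    """Download `urls` in parallel into <output_dir>/<subreddit>/."""
    target = os.path.join(output_dir, subreddit)
    os.makedirs(target, exist_ok=True)

    def save(url):
        # Name each file after the last path segment of its URL.
        path = os.path.join(target, os.path.basename(url))
        with open(path, "wb") as f:
            f.write(fetch(url))   # fetch returns the raw image bytes
        return path

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(save, urls))
```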

Project Structure

RedditMiner/
│
├── redditminer/
│   ├── __init__.py
│   └── scraper.py         # Core scraping logic and RedditImageScraper class
│
├── main.py                # Command-line entry point
├── cookies.txt            # Your exported Reddit cookies
├── README.md
└── ...

Contributing

Contributions are welcome! Please open issues or submit pull requests for new features, bug fixes, or improvements.

License

This project is licensed under the MIT License. See the LICENSE file for details.

Disclaimer

This tool is intended for personal and educational use. Please respect Reddit's Terms of Service and do not use this tool for spamming or violating site rules.

Download files

Source Distribution

redditminer-1.0.1.tar.gz (10.2 kB)

Uploaded Source

Built Distribution


redditminer-1.0.1-py3-none-any.whl (7.1 kB)

Uploaded Python 3

File details

Details for the file redditminer-1.0.1.tar.gz.

File metadata

  • Download URL: redditminer-1.0.1.tar.gz
  • Upload date:
  • Size: 10.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for redditminer-1.0.1.tar.gz:

  • SHA256: 260c4e0bf059061092dbe91d12f1a6acb2f5f019428d9150bef06767991f92da
  • MD5: 751a392b6505c7dee8f9d457c0df0872
  • BLAKE2b-256: 0918d10bedf17ee672f9c5971e37544b2fba13b5901648f50e77e1c8619170f3


File details

Details for the file redditminer-1.0.1-py3-none-any.whl.

File metadata

  • Download URL: redditminer-1.0.1-py3-none-any.whl
  • Upload date:
  • Size: 7.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for redditminer-1.0.1-py3-none-any.whl:

  • SHA256: efc2b703121e02fd82ca80ecaca7bd5467e0438421e594c07e711b5e0e4f2f54
  • MD5: e6e7a0a0345b00eeb3662a2e7354d8ce
  • BLAKE2b-256: 2f2eb82e5b21b38105e35f218ee41cf3eb12cc0b3aea4b0e90aee9848fc9db54

