A Python tool for scraping images, galleries, and comments from Reddit using browser cookies.

Project description

RedditMiner: Subreddit Image Scraper

Quick Start

  1. Install RedditMiner from PyPI:

    pip install redditminer
    
  2. Export your Reddit cookies (see below) and save as cookies.txt in your working directory.

  3. Run example commands:

    • Scrape 200 top posts and save as JSON:

      redditminer --subreddit funny --limit 200 --sort top
      
    • Scrape only image URLs (TXT file):

      redditminer --subreddit funny --output-mode image_url
      
    • Scrape posts with top-level comments included (JSON):

      redditminer --subreddit funny --output-mode post --with-comment
      
    • Scrape and immediately download all images:

      redditminer --subreddit funny --output-mode image_url --download-images
      
    • Customize the download directory and parallelism:

      redditminer --subreddit funny --output-mode image_url --download-images --output-dir my_images --max-workers 16
      

Note: Make sure you have cookies.txt in your current directory for authentication. If you encounter rate limiting, RedditMiner will automatically slow down and retry.

RedditMiner is a lightweight, open-source Python tool for scraping image and gallery URLs from any public or private subreddit using your browser session cookies. No Reddit API credentials are required, and it works even for NSFW and restricted subreddits.

Features

  • Cookie Authentication: Uses your browser session cookies for seamless access.
  • Image & Gallery Support: Extracts direct image links and all images from Reddit galleries.
  • Deep Pagination: Efficiently fetches large numbers of posts using Reddit's pagination.
  • Command-Line Interface: Specify the subreddit and options directly via command line.
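The deep-pagination feature works by feeding Reddit's `after` cursor back into each subsequent listing request. A minimal sketch of that loop, using only the standard library and Reddit's public `.json` listing endpoint (the function names here are illustrative, not RedditMiner's actual API):

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def next_params(after, remaining):
    """Build query parameters for the next listing page."""
    params = {"limit": min(100, remaining)}  # Reddit caps a page at 100 posts
    if after:
        params["after"] = after  # cursor naming the last post already seen
    return params

def iter_posts(subreddit, sort="new", limit=100):
    """Yield post dicts, following the `after` cursor until `limit`
    posts have been yielded or the listing runs out."""
    after, seen = None, 0
    while seen < limit:
        url = (f"https://www.reddit.com/r/{subreddit}/{sort}.json?"
               + urlencode(next_params(after, limit - seen)))
        req = Request(url, headers={"User-Agent": "redditminer-example/0.1"})
        with urlopen(req, timeout=30) as resp:
            listing = json.load(resp)["data"]
        for child in listing["children"]:
            yield child["data"]
            seen += 1
        after = listing.get("after")
        if not after:  # last page reached
            break
```

Each response carries an `after` token naming the last post returned; passing it back retrieves the next page.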

Installation

  1. Install RedditMiner

    Install directly from PyPI:

    pip install redditminer
    
    Or, for development, clone the repository and install it in editable mode:

    git clone https://github.com/MisbahKhan0009/RedditMiner.git
    cd RedditMiner
    pip install -e .
    
  2. Export your Reddit cookies

    • Log into Reddit in your browser.
    • Use a browser extension like "EditThisCookie" or "Get cookies.txt" to export your cookies for reddit.com.
    • Save the exported file as cookies.txt in your working directory (the project root, if running from source).
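The exported cookies.txt uses the standard Netscape format, which Python's `http.cookiejar` can read directly. A sketch of loading it with the standard library (illustrative only; RedditMiner's internal loading may differ):

```python
import urllib.request
from http.cookiejar import MozillaCookieJar

def load_cookies(path="cookies.txt"):
    """Load a Netscape-format cookies.txt into a cookie jar."""
    jar = MozillaCookieJar(path)
    # ignore_discard keeps session cookies that carry no expiry time
    jar.load(ignore_discard=True, ignore_expires=True)
    return jar

def opener_with_cookies(path="cookies.txt"):
    """Build a urllib opener that sends those cookies with every request."""
    return urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(load_cookies(path)))
```

If the load fails with "does not look like a Netscape format cookies file", make sure the exporter wrote the `# Netscape HTTP Cookie File` header line.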

Usage

After installation, you can use RedditMiner from the command line:

redditminer --subreddit EarthPorn

Or, if running from source:

python main.py --subreddit EarthPorn

Command-line options

  • --subreddit : Subreddit name to scrape (required)
  • --limit : Number of posts to scrape (default: 100)
  • --sort : Sort order (new, hot, top, etc.; default: new)
  • --output-mode : Output format. Options:
    • post (default): Full post data (JSON)
    • image_url: Only image URLs (from both image_url and gallery_images fields, TXT file)
    • post_with_comments: Full post data with comments (JSON, same as post if --with-comment is not set)
  • --with-comment : Include top-level comments for each post (JSON output modes only). Comments from "AutoModerator" are automatically skipped.
  • --download-images : Download all found images
  • --output-dir : Directory to save images (default: images)
  • --max-workers : Number of parallel downloads (default: 8)

Rate Limiting: If Reddit returns a 429 (Too Many Requests) error, RedditMiner will automatically slow down and retry after 60 seconds. For best results, avoid running multiple scrapes in parallel and consider using a fresh set of cookies if you encounter repeated rate limiting.
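That backoff behaviour can be sketched as a small wrapper around the request call. The 60-second pause on 429 comes from the description above; the retry count is an illustrative choice, and `session` is any requests-style object with a `get` method:

```python
import time

def get_with_backoff(session, url, retries=3, wait=60, **kwargs):
    """GET a URL through a requests-style session, sleeping and
    retrying whenever the server answers 429 (Too Many Requests)."""
    for attempt in range(retries + 1):
        resp = session.get(url, **kwargs)
        if resp.status_code != 429 or attempt == retries:
            return resp
        time.sleep(wait)  # back off before trying again
```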

Examples:

Scrape 200 top posts and save as JSON:

python main.py --subreddit funny --limit 200 --sort top

Scrape only image URLs (TXT file):

python main.py --subreddit funny --output-mode image_url

Scrape posts with top-level comments included (JSON):

python main.py --subreddit funny --output-mode post --with-comment

Each post in the output JSON will have a comments field containing a list of top-level comments (author, body, score, created_utc). Comments from "AutoModerator" are excluded.
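Reading those comments back out is plain JSON handling. A sketch, assuming the output file holds a list of post objects with the fields named above (the helper name is illustrative):

```python
import json

def top_comment_authors(path):
    """Read a RedditMiner JSON output file and collect
    (author, score) pairs from each post's `comments` field."""
    with open(path, encoding="utf-8") as f:
        posts = json.load(f)
    return [(c["author"], c["score"])
            for post in posts
            for c in post.get("comments", [])]
```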

Scrape and immediately download all images:

python main.py --subreddit funny --output-mode image_url --download-images

You can customize the download directory and parallelism:

python main.py --subreddit funny --output-mode image_url --download-images --output-dir my_images --max-workers 16
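The `--download-images`/`--max-workers` combination is a classic thread-pool pattern. A standard-library sketch (helper names are illustrative, and unlike RedditMiner this version does not sort images into per-subreddit subfolders):

```python
import os
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urlparse
from urllib.request import urlretrieve

def local_path(url, output_dir="images"):
    """Map an image URL to a local file path inside output_dir."""
    name = os.path.basename(urlparse(url).path) or "image"
    return os.path.join(output_dir, name)

def download_all(urls, output_dir="images", max_workers=8):
    """Fetch every URL, running at most max_workers downloads at a time."""
    os.makedirs(output_dir, exist_ok=True)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(urlretrieve, url, local_path(url, output_dir))
                   for url in urls]
        for future in futures:
            future.result()  # re-raise any download error
```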

Downloaded images are automatically organized by subreddit:

  • For example, images from r/EarthPorn will be saved in images/EarthPorn/ by default.
  • If you specify a custom output directory, images will be saved in <output-dir>/<subreddit>/.

Results are saved as:

  • JSON: output/images_[subreddit]_[timestamp].json
  • TXT (image URLs): output/images_[subreddit]_[timestamp].txt
  • Downloaded images: in images/<subreddit>/ (or <output-dir>/<subreddit>/ if specified)

Project Structure

RedditMiner/
│
├── redditminer/
│   ├── __init__.py
│   └── scraper.py         # Core scraping logic and RedditImageScraper class
│
├── main.py                # Command-line entry point
├── cookies.txt            # Your exported Reddit cookies
├── README.md
└── ...

Contributing

Contributions are welcome! Please open issues or submit pull requests for new features, bug fixes, or improvements.

License

This project is licensed under the MIT License. See the LICENSE file for details.

Disclaimer

This tool is intended for personal and educational use. Please respect Reddit's Terms of Service and do not use this tool for spamming or violating site rules.

Download files

Download the file for your platform.

Source Distribution

redditminer-1.0.2.tar.gz (10.2 kB)

Uploaded Source

Built Distribution

redditminer-1.0.2-py3-none-any.whl (7.1 kB)

Uploaded Python 3

File details

Details for the file redditminer-1.0.2.tar.gz.

File metadata

  • Download URL: redditminer-1.0.2.tar.gz
  • Upload date:
  • Size: 10.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for redditminer-1.0.2.tar.gz:

  • SHA256: ed84ccdee618cc3981710d10d24c0a01a4a48db01943df8335919d646fc371a1
  • MD5: fc86d254757bbf689d96caf0c04687d3
  • BLAKE2b-256: eec03ad2ebf9ff5f8ee8d1e848e163b087aea9a4be30d4eefe1c4e87afac7d80

File details

Details for the file redditminer-1.0.2-py3-none-any.whl.

File metadata

  • Download URL: redditminer-1.0.2-py3-none-any.whl
  • Upload date:
  • Size: 7.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for redditminer-1.0.2-py3-none-any.whl:

  • SHA256: 3d1476250fd6e58f346ab964b98cf4d3d6b7cd17091c08e682e13f10579c762f
  • MD5: 3e55439b9e0bc0279275dde87337f5a7
  • BLAKE2b-256: 52de1389fd2757520cb87915c9fe12a2a64b7392d3671d807c36c477a6da3ec0
