Extract Reddit threads and comment trees for research and analysis

Reddit Comment Harvester

Small Python utility for pulling Reddit threads (posts + comment trees) into structured Python objects or flat CSV for analysis.

Built for research workflows where you already have thread URLs and want repeatable exports of post metadata (title, subreddit, score) and comment data (authors, bodies, scores).

Quick disclaimer: You're responsible for complying with Reddit's Terms of Service and rate limits. This tool adds optional randomized delays to reduce request bursts.

Why This Exists

Many tools rely on Reddit's API (like PRAW), which requires authentication and limits access. This tool fetches public thread pages and parses the rendered HTML directly, so you can:

  • Extract threads without API registration
  • Get full comment trees with metadata (authors, scores, timestamps)
  • Export to CSV for analysis
  • Add optional randomized delays to reduce request bursts

What it doesn't do: vote, post, access private/restricted communities, or authenticate with Reddit.

How it works: Uses BeautifulSoup to parse Reddit's public HTML pages (no official API required).

About

Reddit Comment Harvester is a lightweight Python package for research workflows involving Reddit discussions. It extracts thread and comment data by parsing Reddit's public HTML pages, without requiring API authentication.

Data captured:

  • Thread: title, author, score, subreddit, post date, comment count
  • Comments: author, body text, score, depth in tree, comment date

Limitations: Comments with deleted/removed bodies appear with empty text fields. Comment nesting depth is preserved but trees are flattened in CSV export.
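Because the export flattens the tree but keeps each comment's depth, nesting can be rebuilt after the fact. A minimal standalone sketch (plain dicts stand in for the library's Comment objects; it assumes comments appear in document order, as in the CSV export):

```python
# Rebuild a nested comment tree from a flat, depth-annotated list.
def build_tree(comments):
    """comments: list of dicts with a 'depth' key (0 = top level).
    Returns the top-level comments; children are nested under 'replies'."""
    roots, stack = [], []          # stack[i] = last comment seen at depth i
    for c in comments:
        node = dict(c, replies=[])
        depth = node["depth"]
        del stack[depth:]          # discard entries at this depth or deeper
        if depth == 0:
            roots.append(node)
        else:
            stack[depth - 1]["replies"].append(node)
        stack.append(node)
    return roots
```

Each comment attaches to the most recent comment one level shallower, which is exactly how depth-annotated flat exports encode a tree.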

Getting Started

Installation

pip install reddit-comment-harvester

Or from source:

git clone https://github.com/wlyastn/reddit-comment-harvester.git
cd reddit-comment-harvester
pip install -e .

Quick Start

from reddit_comment_harvester import RedditScraper

scraper = RedditScraper()
thread = scraper.scrape("https://reddit.com/r/python/comments/abc123/")

print(f"Title: {thread.title}")
print(f"Subreddit: {thread.subreddit}")
print(f"Score: {thread.score}")
print(f"Comments: {len(thread.comments)}")

Usage

Extract a Single Thread

from reddit_comment_harvester import RedditScraper

scraper = RedditScraper()
thread = scraper.scrape("https://reddit.com/r/python/comments/abc123/")

Batch Process Multiple URLs

from reddit_comment_harvester import RedditScraper

scraper = RedditScraper()

urls = [
    "https://reddit.com/r/python/comments/abc123/",
    "https://reddit.com/r/python/comments/def456/",
    "https://reddit.com/r/python/comments/ghi789/",
]

threads = scraper.scrape_batch(urls)
print(f"Scraped {len(threads)} threads")

Process URLs from CSV

from reddit_comment_harvester import RedditScraper

scraper = RedditScraper()

results = scraper.scrape_csv(
    input_file="urls.csv",
    output_file="results.csv",
    url_column="URL"
)

print(f"Saved {len(results)} results to results.csv")

Example Output

Thread Object

After scraping a thread, you get a Thread object:

thread.title
# "Why Python is the best language for beginners"

thread.author
# "john_coder"

thread.subreddit
# "python"

thread.score
# 2847

thread.num_comments
# 156

thread.comments[0]
# Comment(
#   author='jane_dev',
#   body='Great explanation! Especially liked the...',
#   score=245,
#   depth=0
# )

CSV Export

When exported to CSV, each row represents one comment, with the post's metadata repeated on every row:

url,title,subreddit,post_id,author,score,comment_author,comment_body,comment_score,comment_depth
https://reddit.com/r/python/comments/abc123/,Why Python is best...,python,abc123,john_coder,2847,jane_dev,"Great explanation! Especially liked...",245,0
https://reddit.com/r/python/comments/abc123/,Why Python is best...,python,abc123,john_coder,2847,mike_learn,"I disagree with point 2 because...",89,1
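Because the export is a flat CSV, the standard library is enough for quick analysis. A sketch that groups comments by thread and averages their scores (the column names match the export above; an in-memory string stands in for the exported file):

```python
import csv
import io
from collections import defaultdict

# Sample rows in the export format shown above (normally you'd open the file).
data = """url,title,subreddit,post_id,author,score,comment_author,comment_body,comment_score,comment_depth
https://reddit.com/r/python/comments/abc123/,Why Python is best...,python,abc123,john_coder,2847,jane_dev,Great explanation!,245,0
https://reddit.com/r/python/comments/abc123/,Why Python is best...,python,abc123,john_coder,2847,mike_learn,I disagree...,89,1
"""

def mean_comment_score(csv_file):
    """Return {post_id: average comment_score} from a harvester export."""
    scores = defaultdict(list)
    for row in csv.DictReader(csv_file):
        scores[row["post_id"]].append(int(row["comment_score"]))
    return {pid: sum(s) / len(s) for pid, s in scores.items()}

print(mean_comment_score(io.StringIO(data)))  # {'abc123': 167.0}
```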

Configuration

Optional parameters for scraper behavior:

scraper = RedditScraper(
    timeout=60.0,           # Request timeout in seconds (default: 60.0)
    delay=True,             # Add random delays between requests (default: True)
    proxies=None            # Optional proxy config (default: None)
)

timeout: How long to wait for a response (seconds). Increase if you get timeouts on large threads.

delay: Adds 2–6 second random waits between requests. Recommended to keep enabled.
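For intuition, the randomized delay presumably amounts to something like the following (a sketch, not the library's actual code; `pick_delay` is a hypothetical name):

```python
import random

def pick_delay(low=2.0, high=6.0):
    """Pick a uniformly random wait in [low, high] seconds."""
    return random.uniform(low, high)

# Between requests the scraper would then do something like:
# time.sleep(pick_delay())
```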

proxies: Use if you need to route requests through a proxy. Format: {"https": "http://proxy:8080"}

Update configuration on an existing scraper:

scraper.set_timeout(45.0)
scraper.set_delay(True)
scraper.set_proxy({"https": "http://proxy.example.com:8080"})

API Reference

RedditScraper Class

scrape(url: str) -> Thread

Scrape a single Reddit thread or comment.

thread = scraper.scrape("https://reddit.com/r/python/comments/abc123/")

scrape_batch(urls: List[str], skip_errors: bool = True) -> List[Thread]

Scrape multiple URLs and return results.

threads = scraper.scrape_batch(urls, skip_errors=True)

scrape_csv(input_file: str, output_file: Optional[str] = None, url_column: str = "URL", skip_errors: bool = True) -> List[dict]

Scrape URLs from a CSV file and optionally save results.

results = scraper.scrape_csv("urls.csv", output_file="results.csv")

Data Models

Thread

Represents a Reddit thread with the following attributes:

thread.title          # str - Thread title
thread.subreddit      # str - Subreddit name
thread.author         # str - Post author username
thread.url            # str - Full Reddit URL
thread.post_id        # str - Reddit post ID
thread.score          # int - Post upvotes/score
thread.num_comments   # int - Reddit's reported comment count
thread.comments       # List[Comment] - List of comments

Comment

Represents a comment with the following attributes:

comment.author        # str - Comment author username
comment.body          # str - Comment text/content
comment.score         # int - Comment upvotes/score
comment.depth         # int - Nesting depth in the tree (0 = top-level)
comment.timestamp     # str - Comment timestamp

CSV Format

Input Format

Pass a CSV file with a URL column:

URL
https://reddit.com/r/python/comments/abc123/
https://reddit.com/r/python/comments/def456/
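An input file like this can be generated with the standard library. A quick sketch (writes the one-column "URL" header that scrape_csv expects by default):

```python
import csv

urls = [
    "https://reddit.com/r/python/comments/abc123/",
    "https://reddit.com/r/python/comments/def456/",
]

# One URL per row under a "URL" header column.
with open("urls.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["URL"])
    writer.writerows([u] for u in urls)
```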

Output Format

Post metadata repeats on each row (one comment per row):

url,title,subreddit,post_id,author,score,comment_author,comment_body,comment_score,comment_depth
https://reddit.com/r/python/comments/abc123/,Why Python is best...,python,abc123,john_coder,2847,jane_dev,"Great explanation! Especially liked...",245,0
https://reddit.com/r/python/comments/abc123/,Why Python is best...,python,abc123,john_coder,2847,mike_learn,"I disagree with point 2 because...",89,1

Rate Limiting & Responsible Use

Important: You must comply with Reddit's Terms of Service and rate limits.

Best practices:

  • Keep delay=True (default). It adds 2–6 second waits to reduce request bursts.
  • Don't scrape the same content repeatedly. Cache results.
  • Stop immediately if you see 429 (Too Many Requests) errors.
  • Don't use this for spam, manipulation, or violating Reddit's policies.
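The library doesn't cache for you. One simple way to avoid re-fetching the same thread (a sketch; `scrape_cached` is a hypothetical helper, and it assumes you pass a callable that returns a JSON-serializable dict rather than a Thread object):

```python
import json
from pathlib import Path

CACHE = Path("thread_cache.json")

def scrape_cached(url, scrape):
    """Call scrape(url) only if url isn't already in the local JSON cache.
    `scrape` is any callable returning a JSON-serializable dict."""
    cache = json.loads(CACHE.read_text()) if CACHE.exists() else {}
    if url not in cache:
        cache[url] = scrape(url)          # only hits the network on a miss
        CACHE.write_text(json.dumps(cache))
    return cache[url]
```

Repeated runs over the same URL list then make at most one request per thread.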

If you get rate-limited:

scraper.set_timeout(90.0)  # Increase timeout
scraper.set_delay(True)     # Ensure delays are on
# Then try again after 10+ minutes

Alternatives

When to use PRAW instead:

  • You need to access private/restricted subreddits
  • You want to interact with Reddit (voting, posting, composing)
  • You prefer the official Python wrapper

When to use this:

  • You have public URLs and want quick, one-off extraction
  • You don't want to manage API credentials
  • CSV export is your primary output

Contributing

Contributions are welcome; open an issue or submit a pull request on GitHub.

License

MIT License; see LICENSE for details.

Disclaimer & Responsibility

This tool is provided as-is for research and analysis. You are responsible for:

  • Complying with Reddit's Terms of Service and any legal restrictions in your jurisdiction
  • Using appropriate rate limits and delays
  • Respecting Reddit's infrastructure and user privacy
  • Obtaining consent if needed for your intended use

The maintainers assume no liability for misuse or violations. Use responsibly.
