# mcp-reddit

MCP server for scraping Reddit - no API keys required.

Scrapes posts, comments, and media from subreddits and user profiles using old.reddit.com and Libreddit mirrors.
## Features

- **No API keys** - Scrapes directly, no Reddit API credentials needed
- **Media downloads** - Images, and videos with audio (requires ffmpeg)
- **Local persistence** - Query scraped data offline
- **Rich filtering** - By post type, score, keywords
- **Comments included** - Full thread scraping
## Installation

```sh
pip install mcp-reddit
```

Or with uvx:

```sh
uvx mcp-reddit
```
## Usage Modes

### Local (stdio) - Default

For local MCP clients like Claude Desktop and Claude Code:

```sh
uvx mcp-reddit
```

### Remote (HTTP/SSE)

For remote MCP clients that connect via URL:

```sh
uvx mcp-reddit --http --port 8000
```

Options:

- `--http` - Run in HTTP/SSE mode instead of stdio
- `--host` - Host to bind to (default: `0.0.0.0`)
- `--port` - Port to listen on (default: `8000`, or the `PORT` environment variable)

The server exposes:

- `GET /sse` - SSE endpoint for the MCP connection
- `POST /messages/` - Message endpoint
- `GET /health` - Health check
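To confirm a remote instance is reachable, you can probe the health check from any HTTP client. A minimal sketch; the helper name and the assumption that `/health` answers with status 200 are illustrative, not part of the package:

```python
import urllib.error
import urllib.request


def server_healthy(base_url: str = "http://localhost:8000") -> bool:
    """Return True if the mcp-reddit /health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Server not running, wrong host/port, or a network failure
        return False
```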
## Configuration

Add the server to your Claude Desktop or Claude Code settings.

### Claude Desktop (`~/Library/Application Support/Claude/claude_desktop_config.json`)

Claude Desktop doesn't inherit your shell `PATH`, so you need the full path to `uvx`:

```sh
# Find your uvx path
which uvx
```

Then use the full path in your config:
```json
{
  "mcpServers": {
    "reddit": {
      "command": "/Users/YOUR_USERNAME/.local/bin/uvx",
      "args": ["mcp-reddit"]
    }
  }
}
```
Replace `/Users/YOUR_USERNAME/.local/bin/uvx` with the output of `which uvx`.
### Claude Code

```sh
claude mcp add reddit -- uvx mcp-reddit
```

Or manually in `~/.claude.json`:
```json
{
  "mcpServers": {
    "reddit": {
      "command": "uvx",
      "args": ["mcp-reddit"]
    }
  }
}
```
## Available Tools

| Tool | Description |
|---|---|
| `scrape_subreddit` | Scrape posts from a subreddit |
| `scrape_user` | Scrape posts from a user's profile |
| `scrape_post` | Fetch a specific post by URL (supports media download) |
| `get_posts` | Query stored posts with filters |
| `get_comments` | Query stored comments |
| `search_reddit` | Search across all scraped data |
| `get_top_posts` | Get the highest-scoring posts |
| `list_scraped_sources` | List all scraped subreddits/users |
## Example Usage

- "Scrape the top 50 posts from r/LocalLLaMA"
- "Fetch this post and download the image: https://reddit.com/r/ClaudeAI/comments/abc123/title"
- "Search my scraped data for posts about 'fine-tuning'"
- "Get the top 10 posts from r/ClaudeAI by score"
## Data Storage

Data is stored in `~/.mcp-reddit/data/` by default. Set the `MCP_REDDIT_DATA_DIR` environment variable to customize the location:
```json
{
  "mcpServers": {
    "reddit": {
      "command": "/Users/YOUR_USERNAME/.local/bin/uvx",
      "args": ["mcp-reddit"],
      "env": {
        "MCP_REDDIT_DATA_DIR": "/path/to/your/data"
      }
    }
  }
}
```
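The resolution order described above - the environment override wins, otherwise the home-directory default applies - can be sketched as follows. `data_dir` is a hypothetical helper for illustration, not an API exported by mcp-reddit:

```python
import os
from pathlib import Path


def data_dir() -> Path:
    """Resolve the storage directory: MCP_REDDIT_DATA_DIR overrides the default."""
    override = os.environ.get("MCP_REDDIT_DATA_DIR")
    if override:
        return Path(override)
    # Default location when no override is set
    return Path.home() / ".mcp-reddit" / "data"
```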
## Optional: Video with Audio

To download Reddit videos with audio, install ffmpeg:

```sh
# macOS
brew install ffmpeg

# Ubuntu/Debian
sudo apt install ffmpeg

# Windows
choco install ffmpeg
```
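Before scraping video posts, it can be worth verifying that ffmpeg is actually reachable. A small sketch; the helper is illustrative, not part of mcp-reddit:

```python
import shutil


def ffmpeg_available() -> bool:
    """Return True if an ffmpeg executable can be found on PATH."""
    return shutil.which("ffmpeg") is not None
```

Reddit serves video and audio as separate streams, which is why merging them into a single file requires ffmpeg; without it, images still download normally.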
## Credits

Built on top of reddit-universal-scraper by @ksanjeev284 - a full-featured Reddit scraper with an analytics dashboard, REST API, and plugin system.
## License

MIT
## File details

Details for the file `mcp_reddit-0.3.0.tar.gz`.

### File metadata

- Download URL: mcp_reddit-0.3.0.tar.gz
- Upload date:
- Size: 15.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.14

### File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `e43b7adb23ff91aed1286ab895be6fcab24eb2f8daf2a1f34cbd14c71555c48b` |
| MD5 | `72416b0f3f180b48d889eb44b84ddc81` |
| BLAKE2b-256 | `a9f06f0688ed6b7e78c30ff197fbacde636659e43a549e9774353c67dc45a375` |
## File details

Details for the file `mcp_reddit-0.3.0-py3-none-any.whl`.

### File metadata

- Download URL: mcp_reddit-0.3.0-py3-none-any.whl
- Upload date:
- Size: 14.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.14

### File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `4c6d98bd7a179541beca0733eb5f6bc7bbbdeead5556782e5e749aa642c56193` |
| MD5 | `412b38fa1ac246069e4acf8d020968be` |
| BLAKE2b-256 | `23a22dbf1fabb2de3938031fe0f6ffe8886082930e7f2d4f7ac0d6c05584b453` |