
A scraper that extracts multimedia data from Reddit.

Project description

Reddit Multimodal Crawler

This is a wrapper around the PRAW package that scrapes multimedia content from Reddit and saves it as CSV, JSON, TSV, or SQL files.

This repository helps you scrape various subreddits and returns their multimedia attributes.

You can pip install this package to integrate it with another application, or use it as a command-line application.

pip install reddit-multimodal-crawler

How to use the repository?

Before running the code, register with the Reddit API and create a sample app to obtain a client_id and client_secret, and choose a user_agent. Then pass these in as arguments.

The easiest way to get set up, though, is to pip install reddit-multimodal-crawler.

Functionalities

Like PRAW, this package lets you scrape multiple subreddits, but it also returns and saves datasets of the scraped content. It scrapes both posts and comments.
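
If you are integrating the package into another application rather than running it from the command line, a minimal sketch of library use could look like this (it assumes the Crawler constructor and get_posts signature used in the sample script below; the credentials and subreddit names are placeholders):

import nltk
from reddit_multimodal_crawler.crawler import Crawler

# Lexicon download as in the sample script below.
nltk.download("vader_lexicon")

# Placeholder credentials from your registered Reddit app.
r = Crawler(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="myapp 1.0 by /u/username",
)

# Scrape the top 50 posts of each listed subreddit and save the dataset.
r.get_posts(subreddit_names=["pics", "videos"], sort_by="top", limit=50)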

Sample Code

import argparse

import nltk

from reddit_multimodal_crawler.crawler import Crawler

# Download the VADER sentiment lexicon required by the crawler.
nltk.download("vader_lexicon")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--subreddit_file_path",
        type=str,
        help="Path to a file listing the subreddits to scrape from.",
    )
    parser.add_argument(
        "--limit", type=int, help="Maximum number of articles to scrape."
    )
    parser.add_argument(
        "--client_id", type=str, help="The client ID provided by Reddit."
    )
    parser.add_argument(
        "--client_secret", type=str, help="The client secret provided by Reddit."
    )
    parser.add_argument(
        "--user_agent",
        type=str,
        help="The user agent, in the form <APP_NAME> <VERSION> by /u/<REDDIT_USERNAME>",
    )
    parser.add_argument(
        "--posts", action="store_true", help="Scrape the posts of each subreddit."
    )
    parser.add_argument(
        "--comments",
        action="store_true",
        help="Scrape the comments of the top posts of each subreddit.",
    )

    args = parser.parse_args()

    r = Crawler(
        client_id=args.client_id,
        client_secret=args.client_secret,
        user_agent=args.user_agent,
    )

    # The subreddit file lists subreddit names separated by whitespace.
    with open(args.subreddit_file_path, "r") as f:
        subreddit_list = f.read().split()

    print(subreddit_list)

    if args.posts:
        r.get_posts(subreddit_names=subreddit_list, sort_by="top", limit=args.limit)

    if args.comments:
        r.get_comments(subreddit_names=subreddit_list, sort_by="top", limit=args.limit)
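
Assuming the script above is saved as scrape.py and that subreddits.txt lists subreddit names separated by whitespace, an example invocation could look like this (all credentials are placeholders):

python scrape.py \
    --subreddit_file_path subreddits.txt \
    --limit 100 \
    --client_id YOUR_CLIENT_ID \
    --client_secret YOUR_CLIENT_SECRET \
    --user_agent "myapp 1.0 by /u/username" \
    --posts \
    --comments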



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

reddit_multimodal_crawler-1.3.2.tar.gz (4.9 kB)


Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

reddit_multimodal_crawler-1.3.2-py3-none-any.whl (5.4 kB)


File details

Details for the file reddit_multimodal_crawler-1.3.2.tar.gz.


File hashes

Hashes for reddit_multimodal_crawler-1.3.2.tar.gz
SHA256: 96485a1a0aa7c111fbbe8165e97eeef80e7f8715c4eaa639c419d9311ea22882
MD5: 48f3b2a959fd22ec25c66a9b488dfec2
BLAKE2b-256: 541b552761ce29265fc0f53fed692d5414d9138c5ba5f9dcd9f9d9001cca9649
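
To verify a downloaded file against the published digest, a short Python snippet along these lines works (the filename assumes the source distribution above):

import hashlib

# Compare the local file's SHA256 with the digest listed above.
with open("reddit_multimodal_crawler-1.3.2.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print(digest == "96485a1a0aa7c111fbbe8165e97eeef80e7f8715c4eaa639c419d9311ea22882")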


File details

Details for the file reddit_multimodal_crawler-1.3.2-py3-none-any.whl.


File hashes

Hashes for reddit_multimodal_crawler-1.3.2-py3-none-any.whl
SHA256: e627454c015000e79b10b20d3057bd4156e8e7af88dea11f4814487b50a7ce10
MD5: 94bd45fdee3610ced26e9b182adcb0cb
BLAKE2b-256: de8f650efcb4ce8a2f2b0779e311af9a798c056dac0b73daf4f530c73fb1d4af

