
nytimes-scraper

Scrape article metadata and comments from NYTimes

Setup

pip install nytimes-scraper

CLI usage

The scraper automatically fetches metadata and comments for every article published on nytimes.com. Articles are processed month by month, starting with the current month. For each month, {year}-{month}-articles.pickle and {year}-{month}-comments.pickle files are written to the current directory. If the process is restarted, existing output files are not overwritten and the scraper continues with the month where it left off. To use it, run

python -m nytimes_scraper <API_KEY>
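
The generated pickle files can be read back with pandas for analysis. A minimal sketch, assuming the {year}-{month}-*.pickle naming scheme described above (whether the month is zero-padded is an assumption, so adjust the filenames to whatever the scraper actually wrote):

import pandas as pd

# load one month of scraper output; filenames follow the {year}-{month}-*.pickle
# scheme described above (zero-padded month assumed here)
article_df = pd.read_pickle('2020-02-articles.pickle')
comment_df = pd.read_pickle('2020-02-comments.pickle')

print(len(article_df), 'articles,', len(comment_df), 'comments')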

Programmatic usage

The scraper can also be started programmatically:

import datetime as dt
from nytimes_scraper import run_scraper, scrape_month

# scrape February 2020
article_df, comment_df = scrape_month('<your_api_key>', date=dt.date(2020, 2, 1))

# scrape all articles month by month
run_scraper('<your_api_key>')
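
The return values of scrape_month appear to be pandas DataFrames (the lower-level articles_to_df and comments_to_df helpers shown below produce DataFrames), so the usual pandas methods can be used to inspect and persist them. A small sketch continuing from the snippet above, with arbitrary output filenames:

# continuing from the snippet above; assumes article_df and comment_df
# are pandas DataFrames
print(article_df.columns.tolist())  # inspect the available columns
article_df.to_pickle('2020-02-articles.pickle')
comment_df.to_pickle('2020-02-comments.pickle')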

Alternatively, the nytimes_scraper.articles and nytimes_scraper.comments modules can be used for more fine-grained access:

import datetime as dt
from nytimes_scraper.nyt_api import NytApi
from nytimes_scraper.articles import fetch_articles_by_month, articles_to_df
from nytimes_scraper.comments import fetch_comments, fetch_comments_by_article, comments_to_df

api = NytApi('<your_api_key>')

# Fetch articles of a specific month
articles = fetch_articles_by_month(api, dt.date(2020, 2, 1))
article_df = articles_to_df(articles)

# Fetch comments from multiple articles
# a) using the results of a previous article query
article_ids_and_urls = list(article_df['web_url'].items())  # Series.items(); .iteritems() was removed in pandas 2.0
comments_a = fetch_comments(api, article_ids_and_urls)
comment_df = comments_to_df(comments_a)

# b) using a custom list of articles
comments_b = fetch_comments(api, article_ids_and_urls=[
    ('nyt://article/316ef65c-7021-5755-885c-a9e1ef2cfdf2', 'https://www.nytimes.com/2020/01/03/world/middleeast/trump-iran-suleimani.html'),
    ('nyt://article/b2d1b802-412e-51f7-8864-efc931e87bb3', 'https://www.nytimes.com/2020/01/04/opinion/impeachment-witnesses.html'),
])

# Fetch comments for one specific article by its URL
comments_c = fetch_comments_by_article(api, 'https://www.nytimes.com/2019/11/30/opinion/sunday/bernie-sanders.html')
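
As a possible follow-up, the single-article result can be converted to a DataFrame as well, assuming comments_to_df also accepts the output of fetch_comments_by_article (an assumption, not documented above):

# continuing from the snippet above
# assumption: comments_to_df also accepts the output of fetch_comments_by_article
comment_df_c = comments_to_df(comments_c)
comment_df_c.to_pickle('bernie-sanders-comments.pickle')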
