# spiegel-scraper

Scrape article metadata and comments from DER SPIEGEL.
## Setup

```shell
pip install spiegel-scraper
```
## Usage

```python
from datetime import date

import spiegel_scraper as spon

# list all articles from 2020-01-31
archive_entries = spon.archive.by_date(date(2020, 1, 31))

# or, for later replication, retrieve and scrape the HTML instead
archive_html = spon.archive.html_by_date(date(2020, 1, 31))
archive_entries_from_html = spon.archive.scrape_html(archive_html)

# fetch a single article by URL
article_url = archive_entries[0]['url']
article = spon.article.by_url(article_url)

# or, alternatively, using the HTML
article_html = spon.article.html_by_url(article_url)
article_from_html = spon.article.scrape_html(article_html)

# retrieve all comments on an article
comments = spon.comments.by_article_id(article['id'])
```
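Putting the calls together, a typical workflow collects the comments for every article of a given day. The sketch below is a hypothetical example, not part of the library: since the real calls need network access, it mocks the scraper's return values with sample data. Only the `url` and `id` fields follow the snippet above; every other name (the `collect` helper, the sample URLs, the fake fetchers) is an assumption.

```python
# Hypothetical sample of what spon.archive.by_date(...) returns;
# only the 'url' key is documented above, the values are made up.
archive_entries = [
    {"url": "https://www.spiegel.de/politik/example-a"},
    {"url": "https://www.spiegel.de/wirtschaft/example-b"},
]

def collect(entries, fetch_article, fetch_comments):
    """Fetch each article and its comments.

    The fetchers are passed in as parameters so that the real
    spon.article.by_url / spon.comments.by_article_id can be
    swapped for offline stand-ins in tests.
    """
    dataset = []
    for entry in entries:
        article = fetch_article(entry["url"])
        comments = fetch_comments(article["id"])
        dataset.append({"article": article, "comments": comments})
    return dataset

# Offline stand-ins for the real scraper calls (hypothetical):
fake_article = lambda url: {"id": url.rsplit("/", 1)[-1], "url": url}
fake_comments = lambda article_id: [f"comment on {article_id}"]

data = collect(archive_entries, fake_article, fake_comments)
```

In real use one would pass `spon.article.by_url` and `spon.comments.by_article_id` as the two fetchers; injecting them keeps the loop testable without hitting the site.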