# spiegel-scraper

Scrape articles and comments from DER SPIEGEL.
## Setup

```shell
pip install spiegel-scraper
```
## Usage
```python
from datetime import date

import spiegel_scraper as spon

# list all articles from 2020-01-31
archive_entries = spon.archive.by_date(date(2020, 1, 31))

# or, for later replication, retrieve and scrape the HTML instead
archive_html = spon.archive.html_by_date(date(2020, 1, 31))
archive_entries_from_html = spon.archive.scrape_html(archive_html)
```
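The archive step returns a list of entry dicts that each carry at least a `url` key, so plain list comprehensions are enough for post-filtering. As a sketch, `entries_in_section` below is a hypothetical helper (not part of the library) that keeps only entries whose URL contains a given section path:

```python
# Hypothetical helper, not part of spiegel-scraper: filter archive entries
# (dicts with at least a 'url' key) by a section path in the URL.
def entries_in_section(entries, section):
    """Keep only entries whose URL contains /<section>/."""
    return [e for e in entries if f"/{section}/" in e['url']]

# Illustrative sample data; real entries come from spon.archive.by_date(...)
sample_entries = [
    {'url': 'https://www.spiegel.de/politik/article-a-1.html'},
    {'url': 'https://www.spiegel.de/wissenschaft/article-b-2.html'},
]
politik_only = entries_in_section(sample_entries, 'politik')
```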
```python
# fetch one article by URL
article_url = archive_entries[0]['url']
article = spon.article.by_url(article_url)

# or, alternatively, using the HTML
article_html = spon.article.html_by_url(article_url)
article_from_html = spon.article.scrape_html(article_html)

# retrieve all comments for an article
comments = spon.comments.by_article_id(article['id'])
```
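Comment threads on news sites are typically nested (replies to replies). Whether `spon.comments.by_article_id` returns such a tree depends on the library; as a sketch under that assumption, with `replies` as a hypothetical field name for child comments, a recursive walk turns the thread into a flat list:

```python
def flatten_comments(comments):
    """Depth-first flatten of a nested comment thread.

    Assumes each comment is a dict whose (hypothetical) 'replies' key
    holds a list of child comments of the same shape.
    """
    flat = []
    for comment in comments:
        flat.append(comment)
        flat.extend(flatten_comments(comment.get('replies', [])))
    return flat

# Illustrative sample data; real comments come from spon.comments.by_article_id(...)
sample_thread = [
    {'id': 1, 'replies': [{'id': 2, 'replies': []}]},
    {'id': 3, 'replies': []},
]
all_comments = flatten_comments(sample_thread)  # yields comments 1, 2, 3 in order
```

The depth-first order keeps each reply immediately after its parent, which is convenient when writing comments out to a flat CSV.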