DIY Atom feeds in times of social media and paywalls
Once upon a time every website offered an RSS feed to keep readers updated about new articles or blog posts via their feed readers. These times are long gone. The once iconic orange RSS icon has been replaced by “social share” buttons.
Feeds aims to bring back those good old reading days. It creates Atom feeds for websites that don’t offer them (anymore). It allows you to read new articles from your favorite websites in your feed reader (e.g. TinyTinyRSS) even if this is not officially supported by the website.
Furthermore, it can enhance existing feeds by inlining the actual article content into the feed entries, so they can be read without leaving the feed reader.
Feeds is based on Scrapy, a framework for extracting data from websites, and it’s easy to add support for new websites. Just take a look at the existing spiders and feel free to open a pull request!
Feeds comes with extensive documentation. It is available at https://pyfeeds.readthedocs.io.
Feeds is currently able to create full text Atom feeds for various sites. The complete list of supported websites is available in the documentation.
Feeds is meant to be installed on your server and run periodically by a cron job or similar job scheduler. We recommend installing Feeds inside a virtual environment.
Feeds can be installed from PyPI using pip:
$ pip install PyFeeds
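For example, you could create a virtual environment first and install Feeds into it (the environment path is just an example):

```shell
# Create a virtual environment (the directory name is arbitrary)
python3 -m venv ~/.venvs/feeds

# Activate it for the current shell session
. ~/.venvs/feeds/bin/activate

# Install Feeds from PyPI into the virtual environment
pip install PyFeeds
```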
You may also install the current development version. The master branch is considered stable enough for daily use:
$ pip install https://github.com/pyfeeds/pyfeeds/archive/master.tar.gz
After installation, the feeds command is available in your virtual environment.
Feeds supports Python 3.7+.
List all available spiders:
$ feeds list
Feeds allows you to crawl one or more spiders without a configuration file, e.g.:
$ feeds crawl tvthek.orf.at
A configuration file is supported too. Simply copy the template configuration and adjust it. Enable the spiders you are interested in and adjust the output path where Feeds stores the scraped Atom feeds:
$ cp feeds.cfg.dist feeds.cfg
$ $EDITOR feeds.cfg
$ feeds --config feeds.cfg crawl
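A minimal configuration might look like the following sketch; the spider name and output path are examples, and the exact option names should be checked against the shipped template configuration:

```ini
[feeds]
# Spiders to enable, one per line
spiders =
    tvthek.orf.at

# Directory where the generated Atom feeds are written
output_path = output
```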
Point your feed reader to the generated Atom feeds and start reading. Feeds works best when run periodically in a cron job.
Run feeds --help or feeds <subcommand> --help for help and usage details.
Feeds caches HTTP responses by default to save bandwidth. Entries are cached for 90 days by default (this can be overridden in the config file). Outdated entries are purged from the cache automatically after a crawl. It’s also possible to explicitly purge outdated entries from the cache:
$ feeds --config feeds.cfg cleanup
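Putting the pieces together, a crontab sketch could run a crawl every hour and a cache cleanup once a day; the paths to the virtual environment and the config file are assumptions and depend on your setup:

```
# m h dom mon dow  command
0 * * * *  /home/feeds/.venvs/feeds/bin/feeds --config /home/feeds/feeds.cfg crawl
0 3 * * *  /home/feeds/.venvs/feeds/bin/feeds --config /home/feeds/feeds.cfg cleanup
```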
How to contribute
Search the existing issues in the issue tracker.
File a new issue in case the issue is undocumented.
Fork the project to your private repository.
Create a topic branch and make your desired changes.
Open a pull request. Make sure the GitHub CI checks are passing.
AGPL3, see https://pyfeeds.readthedocs.io/en/latest/license.html for details.