Archive all episodes from your favorite podcasts
The archiver takes the feed URLs of your favorite podcasts and downloads all available episodes for you. Even those files "hidden" in a paged feed will be tapped, so you'll have a complete backup of the series. The archiver also supports updating an existing archive, so it lends itself to being run as a cronjob.
In my experience, very few full-fledged podcast clients are able to access a paged feed (following IETF RFC 5005), so only the last few episodes of a podcast will be available to download. When you discover a podcast that has been around for quite a while, you'll have a hard time following the "gentle listener's duty" and listening to the whole archive. The script in this repository is meant to help you acquire every last episode of your new listening pleasure.
Before downloading any episode, the archiver first fetches all available pages of the feed and compiles a list of episodes. That way, you will never miss one.
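To illustrate how the paging works (a hand-rolled sketch, not the archiver's actual code): per RFC 5005, each page of a feed advertises the following page via a `<link rel="next">` element, which a client can follow until no further page is announced:

```python
import xml.etree.ElementTree as ET
from typing import Optional

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def next_page_url(feed_xml: str) -> Optional[str]:
    """Return the URL of the next feed page per RFC 5005, or None on the last page."""
    root = ET.fromstring(feed_xml)
    for link in root.findall(f"{ATOM_NS}link"):
        if link.get("rel") == "next":
            return link.get("href")
    return None

# A fictitious middle page of a paged feed:
page = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Podcast</title>
  <link rel="self" href="https://example.org/feed?page=2"/>
  <link rel="next" href="https://example.org/feed?page=3"/>
</feed>"""

print(next_page_url(page))  # -> https://example.org/feed?page=3
```

A client repeats this on every fetched page, collecting episode enclosures along the way, until `next_page_url` returns `None`.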
podcast-archiver is Python 3.9+ compatible.
```
# Latest tagged/published version on PyPI:
pip install podcast-archiver

# Latest master from GitHub:
pip install git+https://github.com/janw/podcast-archiver.git
```
podcast-archiver is also available as a Docker image:
```
# Latest tagged/published version, same as on PyPI:
docker run --rm ghcr.io/janw/podcast-archiver:latest

# Latest master from GitHub:
docker run --rm ghcr.io/janw/podcast-archiver:edge
```
See podcast-archiver --help for details on how to use it.
```
podcast-archiver -d ~/Music/Podcasts \
    --subdirs \
    --date-prefix \
    --progress \
    --verbose \
    -f http://logbuch-netzpolitik.de/feed/m4a \
    -f http://raumzeit-podcast.de/feed/m4a/ \
    -f https://feeds.lagedernation.org/feeds/ldn-mp3.xml
```
Process the feed list from a file
If you have a larger list of podcasts and/or want to update the archive on a cronjob basis, the -f argument can be outsourced into a text file containing one feed URL per line:
```
podcast-archiver -d ~/Music/Podcasts -s -u -f feedlist.txt
```
Here, feedlist.txt contains the URLs just as they would be entered on the command line:
```
http://logbuch-netzpolitik.de/feed/m4a
http://raumzeit-podcast.de/feed/m4a/
https://feeds.lagedernation.org/feeds/ldn-mp3.xml
```
This way, you can easily add feeds to or remove them from the list, and let the archiver fetch the newest episodes, for example by adding it to your crontab.
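As a sketch of such a cron setup (the schedule and paths below are just examples, not defaults of the tool), an entry like this would update the archive every night:

```shell
# Illustrative crontab entry: update the archive daily at 03:30.
# The archive directory and feed-list path are assumptions for this example.
30 3 * * * podcast-archiver -d ~/Music/Podcasts -s -u -f ~/feedlist.txt
```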
Excursion: Unicode Normalization in Slugify
The --slugify option removes all ambiguous characters from folders and filenames used in the archiving process. The removal includes Unicode normalization according to Compatibility Decomposition (NFKD). What? Yeah, me too. I figured this is best seen in an example, so here's a fictitious episode name and how it would be translated to a target filename by the archiver:
```
SPR001_Umlaute sind ausschließlich in schönen Sprachen/Dialekten zu finden.mp3
```
will be turned into its slugified, ASCII-only counterpart.
Note that "decorated" characters like ö are replaced with their basic counterparts (o), while somewhat ligature-ish ones like ß (amongst most unessential punctuation) are removed entirely.
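A minimal sketch of this NFKD-based folding step (not the archiver's actual implementation; the helper name ascii_fold is made up for illustration):

```python
import unicodedata

def ascii_fold(text: str) -> str:
    # NFKD splits "decorated" characters into base character + combining marks
    # (e.g. "ö" becomes "o" plus a combining diaeresis); the ASCII encode then
    # drops everything without an ASCII counterpart, including the marks.
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")

print(ascii_fold("schönen"))         # -> schonen
print(ascii_fold("ausschließlich"))  # -> ausschlielich (ß has no decomposition, so it vanishes)
```

This shows why ö survives as o while ß disappears: NFKD has a decomposition for the former but not for the latter, so the ASCII step discards it outright.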
- Add ability to define a preferred format on feeds that contain links for multiple audio codecs.
- Add ability to define a range of episodes or a time range, to download only episodes from that point on, or from there back to the beginning, etc.
- Add ability to prefix episodes with the episode number (rarely necessary, since most podcasts feature some kind of episode numbering in the filename)
- Add unit tests