Download posts and media from Fantia

Project description

FantiaDL

Download media and other data from Fantia fanclubs and posts. A session cookie must be provided with the -c/--cookie argument, either directly or as the path to a legacy Netscape-format cookies file. See the About Session Cookies section.

usage: fantiadl.py [options] url

positional arguments:
  url                   fanclub or post URL

options:
  -h, --help            show this help message and exit
  -c SESSION_COOKIE, --cookie SESSION_COOKIE
                        _session_id cookie or cookies.txt
  -q, --quiet           suppress output
  -v, --version         show program's version number and exit
  --db DB_PATH          database to track post download state (creates tables when first specified)
  --db-bypass-post-check
                        bypass checking a post for new content if it's marked as completed on the database

download options:
  -i, --ignore-errors   continue on download errors
  -l #, --limit #       limit the number of posts to process per fanclub (excludes -n)
  -o OUTPUT_PATH, --output-directory OUTPUT_PATH
                        directory to download to
  -s, --use-server-filenames
                        download using server defined filenames
  -r, --mark-incomplete-posts
                        add .incomplete file to post directories that are incomplete
  -m, --dump-metadata   store metadata to file (including fanclub icon, header, and background)
  -x, --parse-for-external-links
                        parse posts for external links
  -t, --download-thumbnail
                        download post thumbnails
  -f, --download-fanclubs
                        download posts from all followed fanclubs
  -p, --download-paid-fanclubs
                        download posts from all fanclubs backed on a paid plan
  -n #, --download-new-posts #
                        download a specified number of new posts from your fanclub timeline
  -d %Y-%m, --download-month %Y-%m
                        download posts only from a specific month, e.g. 2007-08 (excludes -n)
  --exclude EXCLUDE_FILE
                        file containing a list of filenames to exclude from downloading

To track post downloads, specify a database path using --db, e.g. --db ~/fantiadl.db. Post contents that have already been downloaded are skipped. Once every content under a parent post has been downloaded, the post is marked complete in the database. If a later request to download a post indicates, based on its timestamp, that the post was modified, it is re-checked for new contents; this behavior can be disabled by passing --db-bypass-post-check.
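The tracking behavior described above can be sketched with Python's built-in sqlite3 module. The table and column names below are hypothetical illustrations, not FantiaDL's actual schema; the point is the logic of marking a parent post complete only once all of its contents are downloaded.

```python
import sqlite3

def init_db(path):
    """Create tracking tables (hypothetical schema for illustration)."""
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS posts (
        post_id INTEGER PRIMARY KEY,
        updated_at TEXT,
        is_complete INTEGER DEFAULT 0)""")
    conn.execute("""CREATE TABLE IF NOT EXISTS post_contents (
        content_id INTEGER PRIMARY KEY,
        post_id INTEGER REFERENCES posts(post_id),
        filename TEXT,
        downloaded INTEGER DEFAULT 0)""")
    return conn

def mark_content_downloaded(conn, post_id, content_id):
    """Record one downloaded content; flag the post complete when none remain."""
    conn.execute("UPDATE post_contents SET downloaded = 1 WHERE content_id = ?",
                 (content_id,))
    remaining, = conn.execute(
        "SELECT COUNT(*) FROM post_contents WHERE post_id = ? AND downloaded = 0",
        (post_id,)).fetchone()
    if remaining == 0:
        conn.execute("UPDATE posts SET is_complete = 1 WHERE post_id = ?",
                     (post_id,))
    conn.commit()
```

With --db-bypass-post-check, a tool following this scheme would skip re-querying a post whose is_complete flag is set, even if its timestamp has changed.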

When parsing for external links using -x, a .crawljob file is created in your root directory (either the directory provided with -o or the directory the script is run from) that can be parsed by JDownloader. As posts are parsed, links are appended and assigned their appropriate post directories for download. You can import this file manually into JDownloader (File -> Load Linkcontainer) or set up the Folder Watch plugin to watch your root directory for .crawljob files.

About Session Cookies

Due to recent changes imposed by Fantia, providing an email and password to log in from the command line is no longer supported. To log in, you will need to provide the _session_id cookie for your Fantia login session using -c/--cookie. After logging in normally in your browser, this value can be extracted and used with FantiaDL. The value expires and may need to be updated with some regularity.

Mozilla Firefox

  1. On https://fantia.jp, press Ctrl + Shift + I to open Developer Tools.
  2. Select the Storage tab at the top. In the sidebar, select https://fantia.jp under the Cookies heading.
  3. Locate the _session_id cookie name. Click on the value to copy it.

Google Chrome

  1. On https://fantia.jp, press Ctrl + Shift + I to open DevTools.
  2. Select the Application tab at the top. In the sidebar, expand Cookies under the Storage heading and select https://fantia.jp.
  3. Locate the _session_id cookie name. Click on the value to copy it.
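If you want to sanity-check the copied value outside FantiaDL, you can attach it to a request yourself. A minimal stdlib sketch (the cookie value shown is a placeholder, not a real session):

```python
import urllib.request

SESSION_ID = "a1b2c3d4..."  # placeholder; paste the value copied from your browser

# Build an authenticated request by sending the cookie header manually.
req = urllib.request.Request("https://fantia.jp")
req.add_header("Cookie", f"_session_id={SESSION_ID}")
# urllib.request.urlopen(req) would then send the request with your session.
```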

Third-Party Extensions (cookies.txt)

You also have the option of passing the path to a legacy Netscape format cookies file with -c/--cookie, e.g. -c ~/cookies.txt. Using an extension like cookies.txt, create a text file matching the accepted format:

# Netscape HTTP Cookie File
# https://curl.haxx.se/rfc/cookie_spec.html
# This is a generated file! Do not edit.

fantia.jp	FALSE	/	FALSE	1595755239	_session_id	a1b2c3d4...

Only the _session_id cookie is required.

Download

Check the releases page for the latest binaries.

Build Requirements

  • Python 3.x
  • requests
  • beautifulsoup4

Roadmap

  • More robust logging

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distribution

fantiadl-2.0.4-py3-none-any.whl (15.6 kB)

Uploaded Python 3

File details

Details for the file fantiadl-2.0.4-py3-none-any.whl.

File metadata

  • Download URL: fantiadl-2.0.4-py3-none-any.whl
  • Upload date:
  • Size: 15.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.12

File hashes

Hashes for fantiadl-2.0.4-py3-none-any.whl:

  • SHA256: bf35ac52adf0378ec569b1323c225f4279978fe919e03d28b05e7add60204fc4
  • MD5: eeb726329671af8c3e315d85ccb551f8
  • BLAKE2b-256: b0ef61029f6627610147775a46a4e3e5512bb3f0e4413e9e1b177c3f779718ac

See more details on using hashes here.
