
Download snapshots from the Wayback Machine

Project description

python wayback machine downloader

License: MIT

Downloading archived web pages from the Wayback Machine.

The Internet Archive is a valuable source for OSINT information. This tool is a work in progress for querying and fetching archived web pages.

This tool allows you to download content from the Wayback Machine (archive.org). You can use it to download either the latest version or all versions of web page snapshots within a specified range.

Installation

Pip

  1. Install the package
    pip install pywaybackup
  2. Run the tool
    waybackup -h

Manual

  1. Clone the repository
    git clone https://github.com/bitdruid/python-wayback-machine-downloader.git
  2. Install
    pip install .
    • in a virtual environment, or use --break-system-packages

Usage infos

  • Linux recommended: On Windows, the maximum path length is limited; this can only be overcome by editing the registry. Files whose paths exceed the limit will not be downloaded.
  • If you query an explicit file (e.g. a query string like ?query=this or login.html), the --explicit argument is recommended, as a wildcard query may lead to an empty result.

Arguments

  • -h, --help: Show the help message and exit.
  • -a, --about: Show information about the tool and exit.

Required

  • -u, --url:
    The URL of the web page to download. This argument is required.

Mode Selection (Choose One)

  • -c, --current:
    Download the latest version of each file snapshot. You will get a rebuild of the current website with all available files (but not any single original state, because new and old file versions are mixed).
  • -f, --full:
    Download snapshots of all timestamps. You will get a folder per timestamp with the files available at that time.
  • -s, --save:
    Save a page to the Wayback Machine. (beta)

Optional query parameters

  • -l, --list:
    Only print the snapshots available within the specified range. Does not download the snapshots.

  • -e, --explicit:
    Only download the explicitly given URL: no wildcard subdomains or paths. Use this e.g. to get root-only snapshots. Recommended for explicit files like login.html or ?query=this.

  • -o, --output:
    The folder where downloaded files will be saved. Defaults to waybackup_snapshots in the current directory.

  • Range Selection:
    Specify the range either in years or with a specific start and/or end timestamp. If the range argument is given, the start and end arguments are ignored. Timestamp format: YYYYMMDDhhmmss. You can give just a year, or increase specificity from left to right through the timestamp.
    (year 2019, year+month 201901, year+month+day 20190101, year+month+day+hour 2019010112)

    • -r, --range:
      Specify the range in years for which to search and download snapshots.
    • --start:
      Timestamp to start searching.
    • --end:
      Timestamp to end searching.
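The left-anchored prefix rule above can be checked programmatically. The helper below is a hypothetical sketch (not part of pywaybackup) that validates a timestamp prefix before passing it to --start or --end:

```python
from datetime import datetime

# Valid prefix lengths for YYYYMMDDhhmmss, truncated from the right:
# year (4), +month (6), +day (8), +hour (10), +minute (12), +second (14)
_FORMATS = {4: "%Y", 6: "%Y%m", 8: "%Y%m%d",
            10: "%Y%m%d%H", 12: "%Y%m%d%H%M", 14: "%Y%m%d%H%M%S"}

def is_valid_wayback_timestamp(ts: str) -> bool:
    """Return True if ts is a valid left-anchored prefix of YYYYMMDDhhmmss."""
    fmt = _FORMATS.get(len(ts))
    if fmt is None:
        return False
    try:
        datetime.strptime(ts, fmt)
        return True
    except ValueError:
        return False

print(is_valid_wayback_timestamp("20190101"))    # year+month+day -> True
print(is_valid_wayback_timestamp("2019010112"))  # year+month+day+hour -> True
print(is_valid_wayback_timestamp("201913"))      # month 13 -> False
```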

Additional behavior manipulation

  • --csv <path>:
    Path defaults to the output directory. Saves a CSV file with the JSON response for successful downloads. If --list is set, the CSV contains the CDX list of snapshots. If --current or --full is set, the CSV contains the downloaded files. Named waybackup_<sanitized_url>.csv.

  • --skip <path>:
    Path defaults to the output directory. Checks an existing waybackup_<sanitized_url>.csv for URLs to skip downloading. Useful for resuming interrupted downloads. Files are matched by their root domain, ensuring consistency across queries: if you download http://example.com/subdir1/ and later http://example.com, the second query will skip the first path.

  • --no-redirect:
    Disables following redirects of snapshots. Useful for preventing timestamp-folder mismatches caused by Archive.org redirects.

  • --verbosity <level>:
    Sets verbosity level. Options are json (prints JSON response) or progress (shows progress bar).

  • --log <path>:
    Path defaults to output-dir. Saves a log file with the output of the tool. Named as waybackup_<sanitized_url>.log.

  • --workers <count>:
    Sets the number of simultaneous download workers. Default is 1, safe range is about 10. Be cautious as too many workers may lead to refused connections from the Wayback Machine.

  • --retry <attempts>:
    Specifies number of retry attempts for failed downloads.

  • --delay <seconds>:
    Specifies delay between download requests in seconds. Default is no delay (0).

CDX Query Handling:

  • --cdxbackup <path>:
    Path defaults to the output directory. Saves the result of the CDX query to a file. Useful for downloading snapshots later and for overcoming refused connections from the CDX server due to too many queries. Named waybackup_<sanitized_url>.cdx.

  • --cdxinject <filepath>:
    Injects a CDX query file to download snapshots. Ensure the query matches the previous --url for correct folder structure.

Auto:

  • --auto:
    If set, --csv, --skip and --cdxbackup/--cdxinject are handled automatically. Keep the generated files and folders as they are; otherwise they will not be recognized when a download is restarted.

Debug

  • --debug: If set, full traceback will be printed in case of an error. The full exception will be written into waybackup_error.log.

Examples

Download latest snapshot of all files:
waybackup -u http://example.com -c

Download latest snapshot of a specific file:
waybackup -u http://example.com/subdir/file.html -c

Download all snapshots sorted per timestamp with a specified range and do not follow redirects:
waybackup -u http://example.com -f -r 5 --no-redirect

Download all snapshots sorted per timestamp with a specified range and save to a specified folder with 3 workers:
waybackup -u http://example.com -f -r 5 -o /home/user/Downloads/snapshots --workers 3

Download all snapshots from 2020 to the 12th of December 2022 with 4 workers, save a CSV and show a progress bar:
waybackup -u http://example.com -f --start 2020 --end 20221212 --workers 4 --csv --verbosity progress

Download all snapshots and output a json response:
waybackup -u http://example.com -f --verbosity json

List available snapshots per timestamp without downloading and save a csv file to home folder:
waybackup -u http://example.com -f -l --csv /home/user/Downloads

Output path structure

The output path is currently structured as follows by an example for the query:
http://example.com/subdir1/subdir2/assets/:

For the current version (-c):

  • The output only includes the files/folders starting from the path in your query.
your/path/waybackup_snapshots/
└── the_root_of_your_query/ (example.com/)
    └── subdir1/
        └── subdir2/
            └── assets/
                ├── image.jpg
                ├── style.css
                ...

For all versions (-f):

  • Currently creates a folder named after the root of your query. Inside this folder you will find one folder per timestamp, each containing the path you requested.
your/path/waybackup_snapshots/
└── the_root_of_your_query/ (example.com/)
    ├── yyyymmddhhmmss/
    │   ├── subdir1/
    │   │   └── subdir2/
    │   │       └── assets/
    │   │           ├── image.jpg
    │   │           └── style.css
    ├── yyyymmddhhmmss/
    │   ├── subdir1/
    │   │   └── subdir2/
    │   │       └── assets/
    │   │           ├── image.jpg
    │   │           └── style.css
    ...

Json Response

For download queries:

[
   {
      "file": "/your/path/waybackup_snapshots/example.com/yyyymmddhhmmss/index.html",
      "id": 1,
      "redirect_timestamp": "yyyymmddhhmmss",
      "redirect_url": "http://web.archive.org/web/yyyymmddhhmmssid_/http://example.com/",
      "response": 200,
      "timestamp": "yyyymmddhhmmss",
      "url_archive": "http://web.archive.org/web/yyyymmddhhmmssid_/http://example.com/",
      "url_origin": "http://example.com/"
   },
    ...
]
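A download response in this shape can be post-processed with the standard library alone, e.g. to summarize status codes and spot failed files. The snippet below is an illustrative sketch; the sample entry is hypothetical but follows the fields shown above:

```python
import json
from collections import Counter

# Hypothetical sample in the shape of the download response above.
response = json.loads("""
[
   {"file": "/path/waybackup_snapshots/example.com/20230101000000/index.html",
    "id": 1,
    "redirect_timestamp": "20230101000000",
    "redirect_url": "http://web.archive.org/web/20230101000000id_/http://example.com/",
    "response": 200,
    "timestamp": "20230101000000",
    "url_archive": "http://web.archive.org/web/20230101000000id_/http://example.com/",
    "url_origin": "http://example.com/"}
]
""")

# Count HTTP status codes and collect origin URLs that did not return 200.
status_counts = Counter(entry["response"] for entry in response)
failed = [entry["url_origin"] for entry in response if entry["response"] != 200]
print(status_counts)  # Counter({200: 1})
print(failed)         # []
```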

For list queries:

[
   {
      "digest": "DIGESTOFSNAPSHOT",
      "id": 1,
      "mimetype": "text/html",
      "status": "200",
      "timestamp": "yyyymmddhhmmss",
      "url": "http://example.com/"
   },
   ...
]
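A list response lends itself to simple filtering, e.g. keeping only successfully archived HTML snapshots before deciding what to download. The sample data below is hypothetical but follows the shape shown above:

```python
import json

# Hypothetical sample in the shape of the list response above.
snapshots = json.loads("""
[
   {"digest": "AAA", "id": 1, "mimetype": "text/html", "status": "200",
    "timestamp": "20200101000000", "url": "http://example.com/"},
   {"digest": "BBB", "id": 2, "mimetype": "image/png", "status": "404",
    "timestamp": "20210101000000", "url": "http://example.com/logo.png"}
]
""")

# Keep only HTML snapshots that were archived with status 200.
html_ok = [s for s in snapshots
           if s["mimetype"] == "text/html" and s["status"] == "200"]
print([s["timestamp"] for s in html_ok])  # ['20200101000000']
```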

CSV Output

The CSV file contains the fields of the JSON response in table format.

Contributing

Feature requests that improve the usability of this tool are always welcome. Feel free to make suggestions and report issues; the project is still far from perfect.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

pywaybackup-1.4.1.tar.gz (272.9 kB)

Uploaded Source

Built Distribution

pywaybackup-1.4.1-py3-none-any.whl (21.6 kB)

Uploaded Python 3

File details

Details for the file pywaybackup-1.4.1.tar.gz.

File metadata

  • Download URL: pywaybackup-1.4.1.tar.gz
  • Upload date:
  • Size: 272.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.4

File hashes

Hashes for pywaybackup-1.4.1.tar.gz
Algorithm Hash digest
SHA256 10b44bf56a2b0376788dba190d9365be43917b238edb01a5583636c8f1d31440
MD5 bf8e470b8dee4dd7476ba4b7c2521fda
BLAKE2b-256 8cdf2c302f1f5902574b50ab4089cc97bc6651df9bde065c1e67e113190b5fa3

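If you fetch the sdist manually, you can verify it against the published SHA256 digest with the standard library. This is a generic sketch; the filename and digest in the comment are the ones listed above:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 16) -> str:
    """Stream a file through SHA256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the digest published on PyPI, e.g.:
# expected = "10b44bf56a2b0376788dba190d9365be43917b238edb01a5583636c8f1d31440"
# assert sha256_of("pywaybackup-1.4.1.tar.gz") == expected
```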

File details

Details for the file pywaybackup-1.4.1-py3-none-any.whl.

File metadata

  • Download URL: pywaybackup-1.4.1-py3-none-any.whl
  • Upload date:
  • Size: 21.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.4

File hashes

Hashes for pywaybackup-1.4.1-py3-none-any.whl
Algorithm Hash digest
SHA256 2fd1336dc16f589b720beec781cb2727402a04d2260a182a7624ddff86369ce8
MD5 812e6b1bcebb9736c9d25d9ec0aa7e78
BLAKE2b-256 c6c345a99a706c808e4bd244e21c6ddddeb9f5b3d9b5f6d7ec22de97e8b25c3e

