
waybackpy



Waybackpy is a Python library that interfaces with Internet Archive's Wayback Machine API. Archive webpages and retrieve archived webpages easily.


Installation

Using pip:

pip install waybackpy

or directly from this repository using git:

pip install git+https://github.com/akamhy/waybackpy.git

Usage

As a Python package

Capturing aka saving a URL using save()

import waybackpy

new_archive_url = waybackpy.Url(
    url="https://en.wikipedia.org/wiki/Multivariable_calculus",
    user_agent="Mozilla/5.0 (Windows NT 5.1; rv:40.0) Gecko/20100101 Firefox/40.0",
).save()

print(new_archive_url)
https://web.archive.org/web/20200504141153/https://github.com/akamhy/waybackpy

Try this out in your browser @ https://repl.it/@akamhy/WaybackPySaveExample

Retrieving the oldest archive for a URL using oldest()

import waybackpy

oldest_archive_url = waybackpy.Url(
    "https://www.google.com/",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:40.0) Gecko/20100101 Firefox/40.0",
).oldest()

print(oldest_archive_url)
http://web.archive.org/web/19981111184551/http://google.com:80/

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyOldestExample

Retrieving the newest archive for a URL using newest()

import waybackpy

newest_archive_url = waybackpy.Url(
    "https://www.facebook.com/",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:39.0) Gecko/20100101 Firefox/39.0",
).newest()

print(newest_archive_url)
https://web.archive.org/web/20200714013225/https://www.facebook.com/

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyNewestExample

Retrieving archive close to a specified year, month, day, hour, and minute using near()

from waybackpy import Url

user_agent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:38.0) Gecko/20100101 Firefox/38.0"
github_url = "https://github.com/"


github_wayback_obj = Url(github_url, user_agent)

# Do not zero-pad the year, month, day, hour, or minute arguments; e.g. for January, pass month=1, not month=01.
github_archive_near_2010 = github_wayback_obj.near(year=2010)
print(github_archive_near_2010)
https://web.archive.org/web/20100719134402/http://github.com/
github_archive_near_2011_may = github_wayback_obj.near(year=2011, month=5)
print(github_archive_near_2011_may)
https://web.archive.org/web/20110519185447/https://github.com/
github_archive_near_2015_january_26 = github_wayback_obj.near(
    year=2015, month=1, day=26
)
print(github_archive_near_2015_january_26)
https://web.archive.org/web/20150127031159/https://github.com
github_archive_near_2018_4_july_9_2_am = github_wayback_obj.near(
    year=2018, month=7, day=4, hour=9, minute=2
)
print(github_archive_near_2018_4_july_9_2_am)
https://web.archive.org/web/20180704090245/https://github.com/
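Every archive URL returned by save(), oldest(), newest(), and near() embeds a 14-digit YYYYMMDDhhmmss timestamp after /web/. Using only the standard library (which is all waybackpy itself depends on), the snapshot time can be recovered from such a URL. This helper is a sketch for illustration, not part of waybackpy's API:

```python
import re
from datetime import datetime

def archive_timestamp(archive_url):
    """Extract the 14-digit Wayback timestamp from an archive URL as a datetime."""
    match = re.search(r"/web/(\d{14})", archive_url)
    if match is None:
        raise ValueError("no Wayback timestamp in %r" % archive_url)
    return datetime.strptime(match.group(1), "%Y%m%d%H%M%S")

snapshot = archive_timestamp("https://web.archive.org/web/20180704090245/https://github.com/")
print(snapshot)  # 2018-07-04 09:02:45
```

This works on any of the sample outputs shown in this README, including the old-style http://web.archive.org/web/19981111184551/... form.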

The library doesn't support seconds yet. You are encouraged to create a PR. ;)

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyNearExample

Get the content of webpage using get()

import waybackpy

google_url = "https://www.google.com/"

user_agent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.85 Safari/537.36"

waybackpy_url_object = waybackpy.Url(google_url, user_agent)


# If no argument is passed in get(), it gets the source of the Url used to create the object.
current_google_url_source = waybackpy_url_object.get()
print(current_google_url_source)


# The following code forces a new archive of google.com and gets the source of the archived page.
# waybackpy_url_object.save() returns a str: the URL of the new archive.
google_newest_archive_source = waybackpy_url_object.get(
    waybackpy_url_object.save()
)
print(google_newest_archive_source)


# waybackpy_url_object.oldest() also returns a str: the oldest archive of google.com.
google_oldest_archive_source = waybackpy_url_object.get(
    waybackpy_url_object.oldest()
)
print(google_oldest_archive_source)

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyGetExample#main.py

Count total archives for a URL using total_archives()

import waybackpy

URL = "https://en.wikipedia.org/wiki/Python_(programming_language)"

UA = "Mozilla/5.0 (iPad; CPU OS 8_1_1 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12B435 Safari/600.1.4"

archive_count = waybackpy.Url(
    url=URL,
    user_agent=UA
).total_archives()

print(archive_count) # total_archives() returns an int
2440

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyTotalArchivesExample

List of URLs that the Wayback Machine knows and has archived for a domain name

  1. If alive=True is set, waybackpy checks every URL and returns only those that are still alive. Avoid this for large sites such as Google; it can take a very long time.
  2. To include URLs from subdomains, set subdomain=True.
import waybackpy

URL = "akamhy.github.io"
UA = "Mozilla/5.0 (iPad; CPU OS 8_1_1 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12B435 Safari/600.1.4"

known_urls = waybackpy.Url(url=URL, user_agent=UA).known_urls(alive=True, subdomain=False) # alive and subdomain are optional.

print(known_urls) # known_urls() returns a list of URLs
['http://akamhy.github.io',
'https://akamhy.github.io/waybackpy/',
'https://akamhy.github.io/waybackpy/assets/css/style.css?v=a418a4e4641a1dbaad8f3bfbf293fad21a75ff11',
'https://akamhy.github.io/waybackpy/assets/css/style.css?v=f881705d00bf47b5bf0c58808efe29eecba2226c']
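Because known_urls() returns a plain list of strings, the result composes with ordinary list operations. For example, keeping only stylesheet URLs from the sample output above (a sketch; the list is hard-coded here so the example stays offline):

```python
# Sample list as returned by known_urls() above, hard-coded for illustration.
known_urls = [
    "http://akamhy.github.io",
    "https://akamhy.github.io/waybackpy/",
    "https://akamhy.github.io/waybackpy/assets/css/style.css?v=a418a4e4641a1dbaad8f3bfbf293fad21a75ff11",
    "https://akamhy.github.io/waybackpy/assets/css/style.css?v=f881705d00bf47b5bf0c58808efe29eecba2226c",
]

# Keep only URLs whose path ends in .css (query strings stripped first).
css_urls = [u for u in known_urls if u.split("?")[0].endswith(".css")]
print(len(css_urls))  # 2
```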

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyKnownURLsToWayBackMachineExample#main.py

With the Command-line interface

Save

$ waybackpy --url "https://en.wikipedia.org/wiki/Social_media" --user_agent "my-unique-user-agent" --save
https://web.archive.org/web/20200719062108/https://en.wikipedia.org/wiki/Social_media

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyBashSave

Oldest archive

$ waybackpy --url "https://en.wikipedia.org/wiki/SpaceX" --user_agent "my-unique-user-agent" --oldest
https://web.archive.org/web/20040803000845/http://en.wikipedia.org:80/wiki/SpaceX

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyBashOldest

Newest archive

$ waybackpy --url "https://en.wikipedia.org/wiki/YouTube" --user_agent "my-unique-user-agent" --newest
https://web.archive.org/web/20200606044708/https://en.wikipedia.org/wiki/YouTube

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyBashNewest

Total number of archives

$ waybackpy --url "https://en.wikipedia.org/wiki/Linux_kernel" --user_agent "my-unique-user-agent" --total
853

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyBashTotal

Archive near time

$ waybackpy --url facebook.com --user_agent "my-unique-user-agent" --near --year 2012 --month 5 --day 12
https://web.archive.org/web/20120512142515/https://www.facebook.com/

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyBashNear

Get the source code

waybackpy --url google.com --user_agent "my-unique-user-agent" --get url # Prints the source code of the URL
waybackpy --url google.com --user_agent "my-unique-user-agent" --get oldest # Prints the source code of the oldest archive
waybackpy --url google.com --user_agent "my-unique-user-agent" --get newest # Prints the source code of the newest archive
waybackpy --url google.com --user_agent "my-unique-user-agent" --get save # Saves a new archive on the Wayback Machine, then prints its source code

Try this out in your browser @ https://repl.it/@akamhy/WaybackPyBashGet

Fetch all the URLs that the Wayback Machine knows for a domain

  1. Add the '--alive' flag to fetch only links that are still alive.
  2. Add the '--subdomain' flag to include subdomains.
  3. The '--alive' and '--subdomain' flags can be used simultaneously.
  4. All links are saved to a file created in the current working directory.

waybackpy --url akamhy.github.io --user_agent "my-user-agent" --known_urls 
# Prints all known URLs under akamhy.github.io


waybackpy --url akamhy.github.io --user_agent "my-user-agent" --known_urls --alive
# Prints all known URLs under akamhy.github.io that are still alive


waybackpy --url akamhy.github.io --user_agent "my-user-agent" --known_urls --subdomain
# Prints all known URLs under akamhy.github.io, including subdomains


waybackpy --url akamhy.github.io --user_agent "my-user-agent" --known_urls --subdomain --alive
# Prints all known URLs under akamhy.github.io, including subdomains, that are still alive

Try this out in your browser @ https://repl.it/@akamhy/WaybackpyKnownUrlsFromWaybackMachine#main.sh

Tests

Here

Dependency

None; only Python standard libraries (re, json, urllib, argparse, and datetime). Both Python 2 and 3 are supported. :)
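A quick way to confirm this in any environment is to import each module waybackpy relies on; all five ship with CPython, so no third-party installs are needed:

```python
import importlib

# Every runtime dependency of waybackpy is in the standard library.
for module_name in ("re", "json", "urllib", "argparse", "datetime"):
    importlib.import_module(module_name)

print("all standard-library dependencies available")
```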

Packaging

  1. Increment the version.

  2. Build the package: python setup.py sdist bdist_wheel.

  3. Sign and upload the package: twine upload -s dist/*.

License

Released under the MIT License. See license for details.
