
Project description

waybackpy



Waybackpy is a Python library that interfaces with the Internet Archive's Wayback Machine API. Archive pages and retrieve archived pages easily.

Table of contents

  • Installation
  • Usage
  • Tests
  • License

Installation

Using pip:

pip install waybackpy
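Waybackpy wraps the Wayback Machine's public web API, so it can help to see what a raw request looks like. As a minimal sketch using only the standard library (the availability endpoint below is the Internet Archive's documented public API, not necessarily what waybackpy calls internally, and the helper name is invented for this example):

```python
from urllib.parse import urlencode

# The Wayback Machine's public "availability" endpoint. This illustrates the
# kind of API waybackpy wraps; it is not a waybackpy internal.
AVAILABILITY_API = "https://archive.org/wayback/available"

def availability_query(url, timestamp=None):
    """Build a query URL asking whether a snapshot of `url` exists."""
    params = {"url": url}
    if timestamp:
        # Optional 14-digit YYYYMMDDhhmmss target timestamp.
        params["timestamp"] = timestamp
    return AVAILABILITY_API + "?" + urlencode(params)

print(availability_query("https://github.com/akamhy/waybackpy"))
```

Fetching such a URL returns JSON describing the closest available snapshot, which is the kind of response the library parses for you.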

Usage

Capturing (saving) a URL using save()

import waybackpy
# Capturing a new archive on the Wayback Machine.
target_url = waybackpy.Url("https://github.com/akamhy/waybackpy", user_agent="My-cool-user-agent")
archived_url = target_url.save()
print(archived_url)

This should print a URL similar to the following archived URL:

https://web.archive.org/web/20200504141153/https://github.com/akamhy/waybackpy
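Every archive URL of this form embeds a 14-digit UTC timestamp (YYYYMMDDhhmmss) right after /web/. As a small standard-library sketch of pulling that timestamp out (the regex and helper are written for this example, not part of waybackpy):

```python
import re
from datetime import datetime

def archive_datetime(archive_url):
    """Extract the 14-digit capture timestamp from a Wayback archive URL."""
    match = re.search(r"/web/(\d{14})/", archive_url)
    if match is None:
        raise ValueError("no Wayback timestamp found in URL")
    # The digits are year, month, day, hour, minute, second in UTC.
    return datetime.strptime(match.group(1), "%Y%m%d%H%M%S")

url = "https://web.archive.org/web/20200504141153/https://github.com/akamhy/waybackpy"
print(archive_datetime(url))  # 2020-05-04 14:11:53
```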

Receiving the oldest archive for a URL using oldest()

import waybackpy
# Retrieving the oldest archive on the Wayback Machine.
target_url = waybackpy.Url("https://www.google.com/", "My-cool-user-agent")
oldest_archive = target_url.oldest()
print(oldest_archive)

This should print the oldest available archive for https://www.google.com/:

http://web.archive.org/web/19981111184551/http://google.com:80/

Receiving the newest archive for a URL using newest()

import waybackpy
# Retrieving the newest/latest archive on the Wayback Machine.
target_url = waybackpy.Url(url="https://www.microsoft.com/en-us", user_agent="My-cool-user-agent")
newest_archive = target_url.newest()
print(newest_archive)

This should print the newest available archive for https://www.microsoft.com/en-us, similar to the following:

http://web.archive.org/web/20200429033402/https://www.microsoft.com/en-us/

Receiving an archive close to a specified year, month, day, hour, and minute using near()

import waybackpy
# Retrieving the archive closest to a specified year.
# Supported arguments: year, month, day, hour, and minute.
target_url = waybackpy.Url("https://www.facebook.com/", "Any-User-Agent")
archive_near_year = target_url.near(year=2010)
print(archive_near_year)

This returns: http://web.archive.org/web/20100504071154/http://www.facebook.com/

Please note that if you specify only the year, the current month and day are used as the default month and day arguments. Passing just the year therefore does not return the archive closest to January of that year, but the one closest to the month in which you run the package. To target January, specify month=1; use 2 for February, and so on.

Do not zero-pad the year, month, day, hour, or minute arguments. For example, for January use month = 1, not month = 01.
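The no-padding rule makes sense once you see how the integer arguments map onto the Wayback Machine's 14-digit timestamps: the string formatting supplies the zeros, not the caller (and a padded literal like 01 is a syntax error in Python 3 anyway). A hypothetical helper illustrating the conversion, not waybackpy's actual internals:

```python
def to_wayback_timestamp(year, month=1, day=1, hour=0, minute=0):
    """Format plain integers into a 14-digit YYYYMMDDhhmmss timestamp.

    Hypothetical helper for illustration; waybackpy's internals may differ.
    """
    # Zero-padding happens here, so callers pass month=1, not month=01.
    return f"{year:04d}{month:02d}{day:02d}{hour:02d}{minute:02d}00"

print(to_wayback_timestamp(2010, 1))  # 20100101000000
```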

Get the content of a webpage using get()

import waybackpy
# Retrieving the webpage source for any URL, including archived URLs. No other libraries needed :)
# Supported arguments: encoding and user_agent.
target = waybackpy.Url("google.com", "any-user-agent")
oldest_url = target.oldest()
webpage = target.get(oldest_url) # Get the source of the oldest archive of google.com.
print(webpage)

This should print the source code of the oldest archive of google.com. If no URL is passed to get(), it retrieves the source code of google.com itself, not an archive.
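The encoding argument matters because a fetched page declares its character set in the HTTP Content-Type header, and the body must be decoded with that charset before it can be printed as text. A standard-library sketch of that header parsing (the helper is written for this example, not part of waybackpy's API):

```python
def charset_from_content_type(content_type, default="UTF-8"):
    """Pull the charset out of a Content-Type header value,
    e.g. 'text/html; charset=ISO-8859-1'. Illustrative helper only.
    """
    for part in content_type.split(";"):
        key, _, value = part.strip().partition("=")
        if key.lower() == "charset" and value:
            return value.strip('"')
    # No charset declared: fall back to a sensible default.
    return default

print(charset_from_content_type("text/html; charset=ISO-8859-1"))  # ISO-8859-1
```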

Count total archives for a URL using total_archives()

from waybackpy import Url
# Counting the total number of archives for a URL on archive.org.
count = Url("https://en.wikipedia.org/wiki/Python_(programming_language)", "User-Agent").total_archives()
print(count)

This should print an integer (int): the total number of archives for the URL on archive.org.
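The same count is exposed by the Wayback Machine's public CDX server API, which lists one row per capture. A sketch of building such a query and counting rows in a response (the CDX endpoint is the Archive's documented API; the helper names and the truncated sample response are invented for this example, and whether waybackpy uses CDX internally is an assumption):

```python
from urllib.parse import urlencode

CDX_API = "https://web.archive.org/cdx/search/cdx"

def cdx_query(url):
    """Build a CDX request listing every capture of `url` as JSON rows."""
    return CDX_API + "?" + urlencode({"url": url, "output": "json"})

def count_captures(rows):
    """Count captures in a parsed CDX JSON response.

    The first row of the JSON output is a header of field names,
    so it is excluded from the count.
    """
    return max(len(rows) - 1, 0)

# A truncated, hypothetical response shape (real responses have more fields):
sample_rows = [
    ["urlkey", "timestamp", "original"],
    ["com,example)/", "20020120142510", "http://example.com/"],
    ["com,example)/", "20020305123000", "http://example.com/"],
]
print(count_captures(sample_rows))  # 2
```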


Dependencies

  • None; only the Python standard libraries (re, json, urllib, and datetime) are used. Both Python 2 and 3 are supported :)

License

MIT License



Download files


Source Distribution

waybackpy-2.0.2.tar.gz (7.7 kB)
