
A Python wrapper for the Internet Archive's Wayback Machine API. Archive pages and retrieve archived pages easily.

Project description

waybackpy



waybackpy is a Python wrapper for the Internet Archive's Wayback Machine.

Table of contents

  • Installation
  • Usage
  • Tests
  • Dependency
  • License

Installation

Using pip:

pip install waybackpy
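
To install the exact release described on this page (1.4, the source distribution listed under Download files), pip also accepts a pinned version:

pip install waybackpy==1.4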

Usage

Capturing aka saving a URL using save()

+ waybackpy.save(url, UA=user_agent)

url is mandatory. UA is not, but highly recommended.

import waybackpy
# Capture a new archive of the URL on the Wayback Machine.
# The default user agent (UA) is "waybackpy python package" if not specified in the call.
archived_url = waybackpy.save("https://github.com/akamhy/waybackpy", UA="Any-User-Agent")
print(archived_url)

This should print something similar to the following archived URL:

https://web.archive.org/web/20200504141153/https://github.com/akamhy/waybackpy

Retrieving the oldest archive for a URL using oldest()

+ waybackpy.oldest(url, UA=user_agent)

url is mandatory. UA is not, but highly recommended.

import waybackpy
# Retrieve the oldest archive of the URL from the Wayback Machine.
# The default user agent (UA) is "waybackpy python package" if not specified in the call.
oldest_archive = waybackpy.oldest("https://www.google.com/", UA="Any-User-Agent")
print(oldest_archive)

This returns the oldest available archive of https://www.google.com/:

http://web.archive.org/web/19981111184551/http://google.com:80/

Retrieving the newest archive for a URL using newest()

+ waybackpy.newest(url, UA=user_agent)

url is mandatory. UA is not, but highly recommended.

import waybackpy
# Retrieve the newest archive of the URL from the Wayback Machine.
# The default user agent (UA) is "waybackpy python package" if not specified in the call.
newest_archive = waybackpy.newest("https://www.microsoft.com/en-us", UA="Any-User-Agent")
print(newest_archive)

This returns the newest available archive for https://www.microsoft.com/en-us, similar to this:

http://web.archive.org/web/20200429033402/https://www.microsoft.com/en-us/

Retrieving an archive close to a specified year, month, day, hour, and minute using near()

+ waybackpy.near(url, year=2020, month=1, day=1, hour=1, minute=1, UA=user_agent)

url is mandatory. year, month, day, hour, and minute are optional arguments. UA is not mandatory, but highly recommended.

import waybackpy
# Retrieve the archive closest to a specified year.
# The default user agent (UA) is "waybackpy python package" if not specified in the call.
# Supported arguments are year, month, day, hour, and minute.
archive_near_year = waybackpy.near("https://www.facebook.com/", year=2010, UA="Any-User-Agent")
print(archive_near_year)

returns: http://web.archive.org/web/20100504071154/http://www.facebook.com/

waybackpy.near("https://www.facebook.com/", year=2010, month=1, UA ="Any-User-Agent") returns: http://web.archive.org/web/20101111173430/http://www.facebook.com//

waybackpy.near("https://www.oracle.com/index.html", year=2019, month=1, day=5, UA ="Any-User-Agent") returns: http://web.archive.org/web/20190105054437/https://www.oracle.com/index.html

Please note that if you only specify the year, the current month and day are used as the defaults for month and day. Passing only the year does not return the archive closest to January of that year; it returns the archive closest to the current month. For example, if you call waybackpy.near("https://www.facebook.com/", year=2011, UA="Any-User-Agent") in July 2018, you get the archive nearest to July 2011, not January 2011. To target January, pass month=1 explicitly.

Do not zero-pad the year, month, day, hour, or minute arguments. For example, for January use month=1, not month=01.
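
To make the default-month behavior concrete, here is a minimal sketch assuming the near() call shown above (the exact archive URLs returned will depend on when you run it):

import waybackpy

# Year only: month and day default to the current month and day,
# so the result drifts toward whatever month you run this in.
print(waybackpy.near("https://www.facebook.com/", year=2010, UA="Any-User-Agent"))

# Explicit month=1: asks for the archive closest to January 2010.
print(waybackpy.near("https://www.facebook.com/", year=2010, month=1, UA="Any-User-Agent"))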

Get the content of a webpage using get()

+ waybackpy.get(url, encoding="UTF-8", UA=user_agent)

url is mandatory. UA is not, but highly recommended. The encoding is detected automatically; don't specify it unless necessary.

from waybackpy import get
# Retrieve the webpage from any URL, including archived URLs. No need to import other libraries :)
# The default user agent (UA) is "waybackpy python package" if not specified in the call.
# Supported arguments are url, encoding, and UA.
webpage = get("https://example.com/", UA="User-Agent")
print(webpage)

This should print the source code for https://example.com/.
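
Because get() also accepts archived URLs, it can be combined with oldest() or newest(). A small sketch using the same top-level functions documented above:

import waybackpy

# Find the oldest snapshot of example.com, then fetch that snapshot's source.
oldest_snapshot = waybackpy.oldest("https://example.com/", UA="Any-User-Agent")
snapshot_source = waybackpy.get(oldest_snapshot, UA="Any-User-Agent")
print(snapshot_source)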

Tests

Dependency

  • None; just Python standard libraries (json, urllib and datetime). Both Python 2 and 3 are supported :)

License

MIT License

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

waybackpy-1.4.tar.gz (5.3 kB)

Uploaded Source

File details

Details for the file waybackpy-1.4.tar.gz.

File metadata

  • Download URL: waybackpy-1.4.tar.gz
  • Upload date:
  • Size: 5.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/39.0.1 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.6.7

File hashes

Hashes for waybackpy-1.4.tar.gz
Algorithm Hash digest
SHA256 68c7b0d783267eb7e6750104c0b734783dc3fc6ee7a706d3ff2305e1611e76d7
MD5 6e9ebcf1a552bd4dc17df2a00f417ae3
BLAKE2b-256 c6da05161ba454a777ad0576768bd6bd5d55f65e9c0e69093bad28bfdd9f2bf6

See more details on using hashes here.
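
To check a downloaded waybackpy-1.4.tar.gz against the SHA256 digest listed above, a quick verification with Python's hashlib (assuming the file is in the current directory) looks like this:

import hashlib

# Expected SHA256 digest from the hash table above.
expected = "68c7b0d783267eb7e6750104c0b734783dc3fc6ee7a706d3ff2305e1611e76d7"

with open("waybackpy-1.4.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == expected else "Hash mismatch!")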
