
Python library to clone/archive pages or sites from the Internet.

Reason this release was yanked:

AttributeError: WebPage.html_mime_types 'tuple' object attribute '__doc__' is read-only

Project description

    ____       _       __     __    ______
   / __ \__  _| |     / /__  / /_  / ____/___  ____  __  __
  / /_/ / / / / | /| / / _ \/ __ \/ /   / __ \/ __ \/ / / /
 / ____/ /_/ /| |/ |/ /  __/ /_/ / /___/ /_/ / /_/ / /_/ /
/_/    \__, / |__/|__/\___/_.___/\____/\____/ .___/\__, /
      /____/                               /_/    /____/

Created By : Raja Tomar License : Apache License 2.0 Email: rajatomar788@gmail.com

PyWebCopy is a free tool for copying full or partial websites locally onto your hard-disk for offline viewing.

PyWebCopy will scan the specified website and download its content onto your hard-disk. Links to resources such as style-sheets, images, and other pages in the website will automatically be remapped to match the local path. Using its extensive configuration you can define which parts of a website will be copied and how.
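Conceptually, the remapping step rewrites each remote resource URL to a path under the project folder. The simplified stdlib sketch below illustrates the idea only; it is not pywebcopy's actual implementation, and the function name remap_url is hypothetical.

```python
from urllib.parse import urlparse
import posixpath

def remap_url(url: str, project_folder: str) -> str:
    """Map a remote URL to a local file path under project_folder."""
    parts = urlparse(url)
    # A bare "/" path has no filename, so fall back to index.html.
    path = parts.path if parts.path and parts.path != "/" else "/index.html"
    return posixpath.join(project_folder, parts.netloc, path.lstrip("/"))

print(remap_url("https://httpbin.org/static/style.css", "savedpages"))
# savedpages/httpbin.org/static/style.css
```

Links in the saved HTML are rewritten to point at paths like these, so the copy browses correctly offline.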

What can PyWebCopy do?

PyWebCopy will examine the HTML mark-up of a website and attempt to discover all linked resources such as other pages, images, videos, and file downloads - anything and everything. It will download all of these resources and continue to search for more. In this manner, PyWebCopy can "crawl" an entire website and download everything it sees in an effort to create a reasonable facsimile of the source website.

What can PyWebCopy not do?

PyWebCopy does not include a virtual DOM or any form of JavaScript parsing. If a website makes heavy use of JavaScript, PyWebCopy is unlikely to make a true copy, since it cannot discover links that JavaScript generates dynamically.

PyWebCopy does not download the raw source code of a website; it can only download what the HTTP server returns. While it will do its best to create an offline copy of a website, advanced data-driven websites may not work as expected once they have been copied.

Installation

pywebcopy is available on PyPI and is easily installable using pip:

$ pip install pywebcopy

You are ready to go. Read the tutorials below to get started.

First steps

First, verify that the latest pywebcopy installed successfully:

>>> import pywebcopy
>>> pywebcopy.__version__
7.x.x

Your version may differ; you can now continue with the tutorial.

Basic Usages

To save any single page, run the following in a Python console:

from pywebcopy import save_webpage
save_webpage(
      url="https://httpbin.org/",
      project_folder="E://savedpages//",
      project_name="my_site",
      bypass_robots=True,
      debug=True,
      open_in_browser=True,
      delay=None,
      threaded=False,
)

To save a full website (this can put heavy load on the target server, so be careful):

from pywebcopy import save_website
save_website(
      url="https://httpbin.org/",
      project_folder="E://savedpages//",
      project_name="my_site",
      bypass_robots=True,
      debug=True,
      open_in_browser=True,
      delay=None,
      threaded=False,
)

Running Tests

Running tests is simple and doesn't require any external library. Just run this command from the root directory of the pywebcopy package.

$ python -m pywebcopy --tests

Command Line Interface

pywebcopy has an easy-to-use command-line interface that can help you perform tasks without having to worry about the longer programmatic route.

  • Getting the list of commands

    $ python -m pywebcopy --help
    
  • Using CLI

    Usage: pywebcopy [-p|--page|-s|--site|-t|--tests] [--url=URL [,--location=LOCATION [,--name=NAME [,--pop [,--bypass_robots [,--quite [,--delay=DELAY]]]]]]]
    
    Python library to clone/archive pages or sites from the Internet.
    
    Options:
      --version             show program's version number and exit
      -h, --help            show this help message and exit
      --url=URL             url of the entry point to be retrieved.
      --location=LOCATION   Location where files are to be stored.
      -n NAME, --name=NAME  Project name of this run.
      -d DELAY, --delay=DELAY
                            Delay between consecutive requests to the server.
      --bypass_robots       Bypass the robots.txt restrictions.
      --threaded            Use threads for faster downloading.
      -q, --quite           Suppress the logging from this library.
      --pop                 open the html page in default browser window after
                            finishing the task.
    
      CLI Actions List:
        Primary actions available through cli.
    
        -p, --page          Quickly saves a single page.
        -s, --site          Saves the complete site.
        -t, --tests         Runs tests for this library.
    
    
    
  • Running tests

      $ python -m pywebcopy run_tests
    

Authentication and Cookies

Most of the time, authentication is needed to access a certain page. It's easy to authenticate with pywebcopy because it uses a requests.Session object for the underlying HTTP activity, which can be accessed through the WebPage.session attribute. And as you know, there are tons of tutorials on setting up authentication with requests.Session.
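Because the underlying object is a plain requests.Session, the usual authentication techniques apply. A minimal sketch (the credential and cookie values below are placeholders); the same settings could be applied to a page's WebPage.session before downloading:

```python
import requests

# A Session carries auth, headers, and cookies across all requests it makes;
# pywebcopy exposes its own Session as the WebPage.session attribute.
session = requests.Session()
session.auth = ("user", "secret")                      # HTTP basic auth
session.headers.update({"User-Agent": "Mozilla/5.0"})  # custom headers
session.cookies.set("sessionid", "abc123")             # pre-set a cookie
```

Anything configured this way is sent with every request the session performs during the crawl.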

Here is an example of filling in a form:

from pywebcopy.configs import get_config

config = get_config('http://httpbin.org/')
wp = config.create_page()
wp.get(config['project_url'])
form = wp.get_forms()[0]
form.inputs['email'].value = 'bar' # etc
form.inputs['password'].value = 'baz' # etc
wp.submit_form(form)
wp.get_links()

You can read more in the docs folder of the GitHub repository.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

pywebcopy-7.0.1.tar.gz (42.7 kB)

Uploaded: Source

Built Distribution

pywebcopy-7.0.1-py2.py3-none-any.whl (46.0 kB)

Uploaded: Python 2, Python 3

File details

Details for the file pywebcopy-7.0.1.tar.gz.

File metadata

  • Download URL: pywebcopy-7.0.1.tar.gz
  • Upload date:
  • Size: 42.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.8.1 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.10.0

File hashes

Hashes for pywebcopy-7.0.1.tar.gz

  • SHA256: dcab530ec877229f29a57ec03f161636082e055f0565170cc03fff9590ed2f91
  • MD5: 9f30d19140a31071fa2871b38ae6c15a
  • BLAKE2b-256: 37e48970d4b9baee1085bc63bef618e8d866eefaf2548798a15c9549d245d6ab

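The published digests can be checked locally with Python's hashlib. A minimal sketch, assuming the archive has been downloaded to the current directory:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        # Read in fixed-size chunks so large files don't need to fit in memory.
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the result against the SHA256 value published above, e.g.:
# sha256_of("pywebcopy-7.0.1.tar.gz")
```

If the computed digest matches the value on this page, the download was not corrupted or tampered with.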

File details

Details for the file pywebcopy-7.0.1-py2.py3-none-any.whl.

File metadata

  • Download URL: pywebcopy-7.0.1-py2.py3-none-any.whl
  • Upload date:
  • Size: 46.0 kB
  • Tags: Python 2, Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.8.1 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.10.0

File hashes

Hashes for pywebcopy-7.0.1-py2.py3-none-any.whl

  • SHA256: 8ed3f4d91cca3edb27fbbe9c7bd8fa74a2879189a585e93ac49501c2d6865686
  • MD5: 13d1f2e7d9c38c3bb4acc3b1f5d1e7cf
  • BLAKE2b-256: be6f12371371fda116f86704f2e4b30c3da5bbb3665d77b28aca08e0905f7224

