A Python package to scrape and clone websites.

Project description

Scraply

Scraply is a Python package designed to scrape websites, extract all internal URLs, and clone pages by saving them as HTML files. You can use it through the command-line interface or import it as a library in your Python scripts.

Features

  • Scrape all internal URLs from a given website.
  • Clone and save HTML content from all URLs or a specific URL.
  • Simple command-line interface (CLI) for easy usage.

Installation

Using pip

To install Scraply, use the following command:

pip install scraply

Usage

Command-Line Usage

You can use Scraply via the command line for scraping URLs and cloning pages.

Scrape All URLs from a Website

To scrape all internal URLs from a given website (e.g., https://example.com), run the following command:

scraply https://example.com

This command will print all the internal URLs found on the website.
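Under the hood, this kind of crawl typically parses each page's anchor tags and keeps only links that point to the same host. The sketch below illustrates that idea using only the standard library; it is not scraply's actual implementation, just a minimal standalone example of the technique:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and value:
                    self.links.append(value)

def internal_urls(base_url, html):
    """Return absolute URLs found in `html` that share base_url's host."""
    parser = LinkExtractor()
    parser.feed(html)
    host = urlparse(base_url).netloc
    found = set()
    for link in parser.links:
        absolute = urljoin(base_url, link)  # resolve relative links
        if urlparse(absolute).netloc == host:
            found.add(absolute)
    return sorted(found)

page = '<a href="/about">About</a> <a href="https://other.com/x">Ext</a>'
print(internal_urls('https://example.com', page))
# ['https://example.com/about']
```

External links (here, `other.com`) are filtered out, which is what makes the crawl "internal".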

Save Scraped URLs to a File

To scrape all URLs and save them to a file (urls.txt), use the --output flag:

scraply https://example.com --output urls.txt

Clone All Pages from a Website

To clone (download) the HTML content of all internal pages on the website, use the --clone flag:

scraply https://example.com --clone

This will save each HTML page from the website locally.
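The exact on-disk naming scheme scraply uses is not documented here. As an illustration of how a cloner might derive a local filename from each URL, here is one common approach (an assumption for demonstration, not scraply's actual behavior):

```python
from urllib.parse import urlparse

def url_to_filename(url):
    """Map a URL to a flat, filesystem-safe HTML filename."""
    parsed = urlparse(url)
    path = parsed.path.strip('/') or 'index'
    # Replace path separators so every page lands in one directory
    safe = path.replace('/', '_')
    if not safe.endswith('.html'):
        safe += '.html'
    return safe

print(url_to_filename('https://example.com'))           # index.html
print(url_to_filename('https://example.com/privacy'))   # privacy.html
print(url_to_filename('https://example.com/docs/api'))  # docs_api.html
```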

Clone a Specific Page

To clone a specific page (e.g., https://example.com/privacy), use the --clone-single flag:

scraply https://example.com --clone-single https://example.com/privacy

Python Library Usage

You can also use Scraply as a Python library in your script. Here’s how:

Scraping All URLs from a Website

import time
from scraply import scrape_urls

# Base URL to scrape
base_url = 'https://example.com'

# Scrape all internal URLs from the website, timing the call
start_time = time.time()
urls = scrape_urls(base_url)
end_time = time.time()

# Print the scraped URLs
for url in urls:
    print(url)

print(f"Total scraping time: {end_time - start_time:.2f} seconds")

Cloning All Pages

from scraply import clone_page

# Clone each URL from a list
urls = ['https://example.com/privacy', 'https://example.com/about']

for url in urls:
    clone_page(url)

Cloning a Single Specific Page

from scraply import clone_page

# Clone a single page
clone_page('https://example.com/privacy')

Example Directory Structure

After cloning a site, the downloaded pages are saved locally as HTML files, one file per cloned URL.
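The exact filenames depend on the pages cloned; assuming one HTML file per page saved into the working directory, the layout might look like this (illustrative only):

```
index.html
about.html
privacy.html
```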

License

This project is licensed under the MIT License.

Contributing

Feel free to fork, contribute, or open issues on the GitHub repository.

Author

Developed by Fidal.
Email: mrfidal@proton.me
GitHub: mr-fidal

Download files

Download the file for your platform.

Source Distribution

scraply-1.0.1.tar.gz (3.3 kB)


Built Distribution


scraply-1.0.1-py3-none-any.whl (3.9 kB)


File details

Details for the file scraply-1.0.1.tar.gz.

File metadata

  • Download URL: scraply-1.0.1.tar.gz
  • Upload date:
  • Size: 3.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.11.2

File hashes

Hashes for scraply-1.0.1.tar.gz

  • SHA256: 5157f3f0cead0b13ea9d37e5ee921820f49fc9e05dc8b20b215ad9a22615976e
  • MD5: 43c80911fc67cc538e703766645afedc
  • BLAKE2b-256: 94067c14834b3b69072bf840139a38294dbc5dae4fad7316238334f01bcf954a


File details

Details for the file scraply-1.0.1-py3-none-any.whl.

File metadata

  • Download URL: scraply-1.0.1-py3-none-any.whl
  • Upload date:
  • Size: 3.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.11.2

File hashes

Hashes for scraply-1.0.1-py3-none-any.whl

  • SHA256: 67802013bd368c5c331a64d792b3064d116ac2806f2fdb09988493a51c9ed933
  • MD5: 1ca76ff297baaf15be96c3e2ea90af74
  • BLAKE2b-256: 39bd0efbac07cedde4dd26ab53135f8a5de38e0729e4147dd3637867b9939b18

