Tool to help scrape sitemaps and the links they contain.

Project description

Simple Scraper


Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Roadmap
  5. Contributing
  6. License
  7. Contact

About The Project

I wrote this program to scrape sitemaps, and the links they lead to, on multiple servers. To save time it was packaged for pip for easy repeated use.

(back to top)

Built With

(back to top)

Getting Started

Follow the installation instructions. The docstrings have detailed explanations for use.

Prerequisites

This program uses Python 3.8.

Installation

Install the pip package and use as needed.

Install pip package

pip install samssimplescraper==0.1.3

(back to top)

Usage

The package has two modules.

  1. sitemapscraper is used to scrape sitemaps and can also scrape further levels of sub-sitemaps. Its methods return lists of the scraped links, which can then be used to scrape the pages you want.
  2. scraper is used to scrape the list returned by sitemapscraper, or a user-made list of links. There is also a method that reports how many of the links have been scraped out of the total.

(back to top)

Roadmap

  1. Find the sitemap for the site you are looking to scrape. Some tips can be found here: https://writemaps.com/blog/how-to-find-your-sitemap/
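
One quick way to locate a sitemap is to check the site's robots.txt, which often lists it. A minimal sketch using only the standard library (the domain is a placeholder):

from urllib.request import urlopen

# fetch robots.txt from the site you want to scrape (placeholder domain)
with urlopen('https://www.example.com/robots.txt') as response:
    robots_txt = response.read().decode('utf-8', errors='replace')

# robots.txt conventionally lists sitemaps on lines starting with "Sitemap:"
sitemap_urls = [line.split(':', 1)[1].strip()
                for line in robots_txt.splitlines()
                if line.lower().startswith('sitemap:')]
print(sitemap_urls)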

  2. Scrape sitemap:
from samssimplescraper import LinksRetriever

# instantiate LinksRetriever with the sitemap you wish to scrape
links_retriever = LinksRetriever(url='https://www.example.com/sitemap_index.xml')
# get a list of the links using the .get_sitemap_links method; a tag filter can also be passed
mainpage_links = links_retriever.get_sitemap_links(tag='loc')
# if the website has further layers of sitemaps, get the links on those pages as well
final_links = links_retriever.get_next_links(links=mainpage_links, tag='loc')

Note: If you are not going to continue scraping in the same script, be sure to save your list using pickle:

import pickle

# the data folder is automatically created when LinksRetriever is instantiated
with open('./data/pickled_lists/sitemap_links_list.pkl', 'wb') as fp:
    pickle.dump(final_links, fp)
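
In a later script the pickled list can be loaded back before scraping:

import pickle

# reload the previously saved list of links
with open('./data/pickled_lists/sitemap_links_list.pkl', 'rb') as fp:
    final_links = pickle.load(fp)
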
  3. Now you can scrape the list of links that the LinksRetriever module has produced for you. The files will be saved in the data/scraped_html folder.
from samssimplescraper import Scraper

# pass the list of links and for naming purposes the root_url
Scraper.get_html(link_list=final_links, root_url='https://www.example.com/')
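
Once the scraped HTML is on disk it can be processed in a follow-up step. A minimal sketch that walks the output folder with the standard library (how the individual files are named is handled by the package):

from pathlib import Path

# iterate over the files that Scraper.get_html saved
for html_file in sorted(Path('./data/scraped_html').iterdir()):
    if not html_file.is_file():
        continue
    html = html_file.read_text(encoding='utf-8', errors='replace')
    # parse or process the raw HTML here, e.g. hand it to BeautifulSoup
    print(html_file.name, len(html))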

See the open issues for a full list of proposed features (and known issues).

(back to top)

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under the MIT License. See LICENSE.txt for more information.

(back to top)

Contact

Samuel Adams McGuire - samuelmcguire@engineer.com

PyPI Link: https://pypi.org/project/samssimplescraper/0.1.3/

Linkedin: LinkedIn

Project Link: https://github.com/SamuelAdamsMcGuire/simplescraper

(back to top)

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

samssimplescraper-0.1.3.tar.gz (6.8 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

samssimplescraper-0.1.3-py3-none-any.whl (7.7 kB)

Uploaded Python 3

File details

Details for the file samssimplescraper-0.1.3.tar.gz.

File metadata

  • Download URL: samssimplescraper-0.1.3.tar.gz
  • Upload date:
  • Size: 6.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.7.1 importlib_metadata/3.10.0 pkginfo/1.8.2 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.59.0 CPython/3.8.5

File hashes

Hashes for samssimplescraper-0.1.3.tar.gz
  • SHA256: 2b23f7d9c7b7a618665280de1ca12efb976dfb917809e72f92d3d1b9984288bb
  • MD5: e8ac784755481a3677a99e5c825c232f
  • BLAKE2b-256: 93bc66b9a2c0004ac14d66351d93c91ceca9b1c14efc27f38ae16e160347f526

See more details on using hashes here.

File details

Details for the file samssimplescraper-0.1.3-py3-none-any.whl.

File metadata

  • Download URL: samssimplescraper-0.1.3-py3-none-any.whl
  • Upload date:
  • Size: 7.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.7.1 importlib_metadata/3.10.0 pkginfo/1.8.2 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.59.0 CPython/3.8.5

File hashes

Hashes for samssimplescraper-0.1.3-py3-none-any.whl
  • SHA256: fa1896913e7286de1df3e7e59c78ac00472956cfd0aea8c7eee95088cf788ffd
  • MD5: 86163835eccd594f390f4d5be9e092f8
  • BLAKE2b-256: 43779e76392bdf202c7dd12358ce81c67176268c228194bb845d3e1979c8a218

See more details on using hashes here.
