Scrape a website's charts into a list, dict, or pandas DataFrame
Scrape Charts
Follow the steps shown in test_scraper.py (located in the tests folder, or reproduced below). The virtual environment does not need to be activated; pip will download all required packages. The import statements will look slightly different when the package is installed from pip, but the comments indicate the correct string to paste.
Main feature: scrape multiple charts from one or more websites and combine them into one large list.
Features include:
- Scraping a chart from a website or multiple websites
- Choosing which charts you want from a website, and combining multiple websites' charts into one large chart
- Processing that larger chart into a list
- Cleaning that list
- Cleaning the text with regex functions
- No errors in the Python package, i.e. it works out of the box
- Converting that cleaned list into a pandas DataFrame
- Converting that cleaned list into a dictionary
- Saving that cleaned list as a JSON file, among other formats
- An easy-to-use class that does everything for you
- Return values at every stage, so that if one part doesn't work you can see the data and process it yourself (a contingency; if you contacted me, I would fix the issue)
- A maintainer
- No known bugs or issues at the time of writing (partial unit test coverage)
- Regex functions that are explained below
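The example below (reproduced from test_scraper.py, as noted above) walks through these features in order: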
from src.chart_scraper.ChartScraper import Scraper
# When the package is installed from pip, the import works without the src prefix
# This is example code for educational purposes only, not for production use
chartScraper = Scraper("https://www.learnthat.org/pages/view/roots.html", chartNumber=[2])
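# chartNumber=[2] presumably selects which chart(s) on the page to scrape; it takes a list, since several charts can be combined into one (see the feature list above)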
# At this stage, chartScraper.combinedCharts holds this one mega list, so you can manually change one or two things; that isn't necessary, though, and the code below will work as if nothing happened
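# Purely optional inspection, as a hedged sketch (combinedCharts is the scraped list described above):
# print(len(chartScraper.combinedCharts))   # how many cells were scraped
# print(chartScraper.combinedCharts[:5])    # peek at the first few entries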
chartScraper.cleanList(whichToKeep="[a-zA-Z0-9 ]+", whereToSplit=r"\(|,", whereToCombine="/", whereToClean=[[" -", ":"], [";", ","], ["[^a-zA-Z ,:]+", ""], [" +", " "]])
chartScraper.listToDict(includePrintStatement=False)
chartScraper.getDictKeys(includePrintStatement=False)
# All lowercase
chartScraper.findWordComponents("philology")
chartScraper.createDataFrame()
chartScraper.saveFiles(fileType=2)
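# The cleanList call from above, repeated so that each parameter can be explained: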
chartScraper.cleanList(whichToKeep="[a-zA-Z0-9 ]+", whereToSplit=r"\(|,", whereToCombine="/", whereToClean=[[" -", ":"], [";", ","], ["[^a-zA-Z ,:]+", ""], [" +", " "]])
# whichToKeep="[a-zA-Z0-9 ]+" removes strings that don't contain any letters (a-z, A-Z) or numbers (0-9; e.g. the multi-digit number 123 works)
# whereToSplit=r"\(|," splits the string wherever there is an opening parenthesis or a comma; the parenthesis is escaped with a backslash because it is a regex metacharacter
# whereToCombine="/" combines "a/b/c" into ["a", "ab", "ac"]; my niche use case required this when I built the package
# whereToClean=[[" -", ":"], [";", ","], ["[^a-zA-Z ,:]+", ""], [" +", " "]] is a list of [pattern, replacement] substitutions:
# [" -", ":"] turns any " -" into ":"
# Likewise, [";", ","] converts ";" into ","
# ["[^a-zA-Z ,:]+", ""] removes characters that aren't letters (a-z, A-z), spaces (" "), commas (,), or colons (:)
# [" +", " "] rectifies the issue of multiple spaces into one space, eg " " into " "