A base class for building web scrapers for statistical data.
Project description
Statscraper is a base library for building web scrapers for statistical data, with a helper ontology for (primarily Swedish) statistical data. A set of ready-to-use scrapers is included.
For users
You can use Statscraper as a foundation for your next scraper, or try out any of the included scrapers. With Statscraper comes a unified interface for scraping, and some useful helper methods for scraper authors.
Full documentation: ReadTheDocs
For updates and discussion: Facebook
By Journalism++ Stockholm, and Robin Linderborg.
Installing
pip install statscraper
Using a scraper
Scrapers act like “cursors” that move around a hierarchy of datasets and collections of datasets. Collections and datasets are collectively referred to as “items”.
        ┏━ Collection ━━━ Collection ━┳━ Dataset
 ROOT ━╋━ Collection ━┳━ Dataset
        ┃             ┣━ Dataset
        ┃             ┗━ Collection ━┳━ Dataset
        ┃                            ┗━ Dataset
        ┗━ Dataset
        ╰─────────────────────────┬───────────────────────╯
                                items
Here’s a simple example, with a scraper that returns only a single dataset:
# encoding: utf-8
""" Get the number of cranes at Hornborgarsjön """
from statscraper.scrapers import Cranes

scraper = Cranes()
print(scraper.items)  # List available datasets
# [<Dataset: Number of cranes>]

dataset = scraper.items[0]
print(dataset.dimensions)
# [<Dimension: date (date)>, <Dimension: month (month)>, <Dimension: year (year)>]

print(dataset.data[0])  # Print the first row of data
# {'date': u'1', 'year': u'2010', 'value': u'', 'month': u'januari'}
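To illustrate the cursor idea with more than one level, here is a standalone toy model, not the real statscraper API: items form a tree, and a cursor descends into collections and climbs back up (the move_to/move_up names follow the changelog below).

```python
# Toy model of a scraper cursor moving through a tree of
# collections and datasets. A standalone sketch, not the real API.

class Item:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []  # empty list => a dataset

class Cursor:
    def __init__(self, root):
        self._path = [root]  # stack of visited items; top = current position

    @property
    def items(self):
        # Items available at the current cursor position
        return self._path[-1].children

    def move_to(self, name):
        # Descend into a child collection or dataset by name
        child = next(c for c in self.items if c.name == name)
        self._path.append(child)
        return self

    def move_up(self):
        # Climb back toward ROOT (no-op at the top)
        if len(self._path) > 1:
            self._path.pop()
        return self

# Hypothetical item names, for illustration only
root = Item("ROOT", [
    Item("Municipalities", [Item("Population"), Item("Area")]),
    Item("Regions", [Item("Population")]),
])
cursor = Cursor(root)
cursor.move_to("Municipalities")
print([i.name for i in cursor.items])  # ['Population', 'Area']
cursor.move_up()
print([i.name for i in cursor.items])  # ['Municipalities', 'Regions']
```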
Building a scraper
Scrapers are built by extending a base scraper, or a derivative of one. You need to provide a method for listing datasets or collections of datasets, and one for fetching data.
Statscraper is built for statistical data, meaning that it’s most useful when the data you are scraping/fetching can be organized with a numerical value in each row:
city | year | value
---|---|---
Voi | 2009 | 45483
Kabarnet | 2006 | 10191
Taveta | 2009 | 67505
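In code, each row of such a table typically becomes a mapping from dimension names to values, plus the numerical value itself. A minimal sketch using the rows from the table above:

```python
# Each row pairs dimension values (city, year) with one numerical value.
rows = [
    {"city": "Voi", "year": "2009", "value": 45483},
    {"city": "Kabarnet", "year": "2006", "value": 10191},
    {"city": "Taveta", "year": "2009", "value": 67505},
]

# With this shape, filtering and aggregation are straightforward:
total = sum(row["value"] for row in rows)
print(total)  # 123179
```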
A scraper can override these methods:
_fetch_itemslist(item) to yield collections or datasets at the current cursor position
_fetch_data(dataset) to yield rows from the currently selected dataset
_fetch_dimensions(dataset) to yield dimensions available for the currently selected dataset
_fetch_allowed_values(dimension) to yield allowed values for a dimension
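As a standalone illustration of this override pattern, here is a toy base class, not the real BaseScraper (the actual class also manages the cursor, hooks, and more): a concrete scraper only fills in the _fetch_* generators.

```python
# Toy base class illustrating the _fetch_* override pattern.
# The real statscraper.BaseScraper is richer (cursor, hooks, etc.).

class ToyBaseScraper:
    @property
    def items(self):
        # List collections/datasets at the current position
        return list(self._fetch_itemslist(None))

    def data(self, dataset):
        # Collect the rows yielded by the subclass
        return list(self._fetch_data(dataset))

    def _fetch_itemslist(self, item):
        raise NotImplementedError

    def _fetch_data(self, dataset):
        raise NotImplementedError

class CranesLikeScraper(ToyBaseScraper):
    """A scraper with a single dataset, like the Cranes example above."""

    def _fetch_itemslist(self, item):
        yield "Number of cranes"  # one dataset at the root

    def _fetch_data(self, dataset):
        # A real scraper would fetch and parse a web page here.
        yield {"date": "1", "month": "januari", "year": "2010", "value": ""}

scraper = CranesLikeScraper()
print(scraper.items)  # ['Number of cranes']
print(scraper.data(scraper.items[0])[0]["month"])  # januari
```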
A number of hooks are available for more advanced scrapers. These are attached by applying the @on decorator to a method:
@on("up")
def my_method(self):
# Do something when the user moves up one level
Available hooks are:
init: Called when initiating the BaseScraper
up: Called when trying to go up one level
select: Called when trying to move to a Collection or Dataset
top: Called when reaching the top level
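One way such a decorator can work, sketched standalone under the assumption that hooks are tagged by name and fired by the base class when the matching event occurs (the real implementation may differ):

```python
# Sketch of a hook registry: @on("up") tags a method with a hook
# name, and the base class fires all tagged methods for that event.

def on(hook_name):
    def decorator(method):
        method._hook = hook_name  # tag the method with its hook name
        return method
    return decorator

class HookedScraper:
    def _fire(self, hook_name):
        # Call every method tagged with this hook name
        for attr in dir(self):
            method = getattr(self, attr)
            if callable(method) and getattr(method, "_hook", None) == hook_name:
                method()

    def move_up(self):
        self._fire("up")  # trigger the "up" hooks

class MyScraper(HookedScraper):
    def __init__(self):
        self.events = []

    @on("up")
    def log_up(self):
        self.events.append("moved up")

scraper = MyScraper()
scraper.move_up()
print(scraper.events)  # ['moved up']
```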
For developers
These instructions are for developers working on the BaseScraper. See above for instructions for developing a scraper using the BaseScraper.
Downloading
git clone https://github.com/jplusplus/skrejperpark
python setup.py install
Tests
python setup.py test
Run python setup.py test from the root directory. This installs everything needed for testing, then runs the tests with nosetests.
Changelog
1.0.0.dev1
Semantic versioning starts here
Implement datatypes and dialects
0.0.2
Added some demo scrapers
The cursor is now moved when accessing datasets
Renamed methods for moving cursor: move_up(), move_to()
Added many more methods
Added tests
Added datatypes subtree
It should now be possible to write a basic scraper
0.0.1
First version
Project details
Download files
Download the file for your platform.
Source Distribution
Built Distribution
Hashes for statscraper-1.0.0.dev1-py2.py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | e85bce4195a810f0ab7441f10c2e40d099005bc34f11de662096f0b352fff7f9
MD5 | e23cd7cf419840f85307a82607dcc969
BLAKE2b-256 | 576753270c07f4911db745d8eaa180c7854401cecf27c4be7e12d9be2f7de184