A minimalistic, recursive web crawling library for Python.
The solitary and lucid spectator of a multiform, instantaneous and almost intolerably precise world.
—Funes the Memorious, Jorge Luis Borges
memorious is a lightweight web scraping toolkit. It supports scrapers that collect structured or unstructured data. This includes the following use cases:
Make crawlers modular and simple tasks re-usable
Provide utility functions to do common tasks such as data storage, HTTP session management
Integrate crawlers with the Aleph and FollowTheMoney ecosystem
Get out of your way as much as possible
Design
When writing a scraper, you often need to paginate through an index page, then download an HTML page for each result, and finally parse that page and insert or update a record in a database.
memorious handles this by managing a set of crawlers, each of which can be composed of multiple stages. Each stage is implemented using a Python function, which can be re-used across different crawlers.
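The stage-composition idea can be sketched in plain Python. The snippet below is a simplified, library-agnostic illustration, not memorious's actual API: each stage is a reusable function that receives a shared context and the record emitted by the previous stage, and yields records for the next one.

```python
# Illustrative only: a minimal pipeline of reusable stage functions.
# memorious's real stages have a different interface; this just shows
# the composition pattern described above.

def seed(context, data):
    # First stage: emit one record per configured start URL.
    for url in context["urls"]:
        yield {"url": url}

def fetch(context, data):
    # A real stage would perform an HTTP request here; we stub the body.
    yield {**data, "body": "<html>stub for %s</html>" % data["url"]}

def parse(context, data):
    # Final stage: derive a structured record from the fetched page.
    yield {"url": data["url"], "length": len(data["body"])}

def run_pipeline(stages, context):
    # Feed every record produced by one stage into the next stage.
    records = [None]
    for stage in stages:
        records = [out for rec in records for out in stage(context, rec)]
    return records

results = run_pipeline([seed, fetch, parse],
                       {"urls": ["http://example.org"]})
```

Because stages are ordinary functions, the same `fetch` or `parse` step can be reused across many crawlers, which is the design goal described above.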
The basic steps of writing a memorious crawler:
Make YAML crawler configuration file
Add different stages
Write code for stage operations (optional)
Test, rinse, repeat
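To give a feel for step one, here is a rough sketch of what a crawler YAML configuration might look like: a named crawler whose pipeline chains an init stage into a fetch stage and then a parse stage. The stage names, methods, and keys below are illustrative assumptions; consult the documentation for the exact schema.

```yaml
# Hypothetical crawler configuration: shape only, not a verified schema.
name: example_crawler
description: Fetch and parse an index of documents
pipeline:
  init:
    # Seed the pipeline with one or more start URLs.
    params:
      urls:
        - http://example.org/index.html
    handle:
      pass: fetch
  fetch:
    # Download each emitted URL.
    handle:
      pass: parse
  parse:
    # Custom stage code (the optional step three above) lives in a
    # Python function referenced from the configuration.
    method: example.crawler:parse
```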
Documentation
The documentation for Memorious is available at alephdata.github.io/memorious. Feel free to edit the source files in the docs folder and send pull requests for improvements.
To build the documentation, run `make html` inside the docs folder. You'll find the resulting HTML files in `docs/_build/html`.