A graph implementation that loads graph data (nodes and edges) from external sources and caches it in a database using SQLAlchemy or Flask-SQLAlchemy.
GraphScraper is a Python 3 library that contains a base graph implementation designed to be turned into a web scraper for graph data. It has two major features:
1) The graph automatically manages a database (using either SQLAlchemy or Flask-SQLAlchemy) where it stores all the nodes and edges the graph has seen.
2) The base graph implementation provides hook methods that, if implemented, turn the graph into a web scraper.
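The two features above can be sketched with a minimal, self-contained example of the pattern: a graph that serves cached data when available and falls back to an overridable hook on a cache miss. All names here (`CachedGraph`, `load_neighbors`, the SQLite schema) are illustrative assumptions, not graphscraper's actual API.

```python
# Sketch of the "scrape on cache miss" pattern described above.
# Uses sqlite3 instead of SQLAlchemy to stay dependency-free;
# class and method names are hypothetical, not graphscraper's API.
import sqlite3
from typing import Iterable, List


class CachedGraph:
    """Stores seen edges in a database; unknown nodes trigger a hook."""

    def __init__(self, db_path: str = ":memory:") -> None:
        self._db = sqlite3.connect(db_path)
        self._db.execute("CREATE TABLE IF NOT EXISTS edges (src TEXT, dst TEXT)")

    def load_neighbors(self, node: str) -> Iterable[str]:
        """Hook method: override this to scrape a remote data source."""
        raise NotImplementedError

    def neighbors(self, node: str) -> List[str]:
        # Serve from the database cache when the node has been seen before.
        rows = self._db.execute(
            "SELECT dst FROM edges WHERE src = ?", (node,)
        ).fetchall()
        if rows:
            return [r[0] for r in rows]
        # Cache miss: call the hook, then persist what it returned.
        fetched = list(self.load_neighbors(node))
        self._db.executemany(
            "INSERT INTO edges VALUES (?, ?)", [(node, n) for n in fetched]
        )
        self._db.commit()
        return fetched


class DemoGraph(CachedGraph):
    """Implements the hook against an in-memory dict instead of the web."""

    DATA = {"a": ["b", "c"], "b": ["a"]}

    def load_neighbors(self, node: str) -> Iterable[str]:
        return self.DATA.get(node, [])


g = DemoGraph()
print(g.neighbors("a"))  # first call goes through the hook and fills the cache
print(g.neighbors("a"))  # second call is served from the database
```

In a real scraper, `load_neighbors` would issue an HTTP request or API call, and the database layer would be SQLAlchemy models rather than raw SQL, but the control flow is the same.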
Why yet another graph implementation?
There are many excellent graph libraries available for different purposes. I started implementing this one because I could not find a graph library that is dynamic (the whole graph does not need to be in memory, or on disk, before you start working with it), that can be used as a web scraper (seamlessly loading nodes and edges from a remote data source when that data is needed), and that automatically keeps all loaded data up to date on disk. GraphScraper aims to satisfy these requirements.
Besides the base graph implementation, the library also includes working examples that show how to implement and use an actual graph scraper.
If you wish to use one of the included graph implementations, please read the corresponding module's description for additional requirements.
Any form of constructive contribution (feedback, features, bug fixes, tests, additional documentation, etc.) is welcome.