Spare Cores Crawler
SC Crawler is a Python package that pulls and standardizes data on cloud compute resources, with tooling to organize and update the collected data in databases.
Installation

Stable version from PyPI:

```shell
pip install sparecores-crawler
```

Most recent version from GitHub:

```shell
pip install "sparecores-crawler @ git+https://git@github.com/SpareCores/sc-crawler.git"
```
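For reproducible installs, pip can verify the download against a known digest. A minimal sketch of a `requirements.txt` entry pinning the 0.3.1 release to the SHA-256 published below for its wheel:

```
sparecores-crawler==0.3.1 \
    --hash=sha256:c3435bda5993fcbe66d877301061bbfe41e5e89d6eefca034e50de2c65c5fd99
```

Installing with `pip install --require-hashes -r requirements.txt` makes pip refuse any file whose digest does not match; note that in this mode every requirement, dependencies included, must carry a hash.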
Download files
Source Distribution

sparecores_crawler-0.3.1.tar.gz (100.3 kB)
Built Distribution

sparecores_crawler-0.3.1-py3-none-any.whl

Hashes for sparecores_crawler-0.3.1-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | c3435bda5993fcbe66d877301061bbfe41e5e89d6eefca034e50de2c65c5fd99
MD5 | 842005705cfacab083cf8863dc6ce9a8
BLAKE2b-256 | df8c747ddce79207674671f5b23496f0d97828b29c8bc354263a12c279c84235
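To check a downloaded file against the digests above without relying on pip, its SHA-256 can be computed locally with the standard library; a minimal sketch (the wheel filename/path is an assumption about where the file was saved):

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 8192) -> str:
    """Compute the SHA-256 hex digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Expected digest taken from the table above.
EXPECTED = "c3435bda5993fcbe66d877301061bbfe41e5e89d6eefca034e50de2c65c5fd99"

# Hypothetical path to the downloaded wheel; uncomment to verify:
# assert sha256_of("sparecores_crawler-0.3.1-py3-none-any.whl") == EXPECTED
```

Streaming in fixed-size chunks keeps memory use constant regardless of file size, which matters more for larger artifacts than this one.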