
Web scraper to extract data from the E-REDES website and load it into database storage.

Project description

E-REDES Scraper

Description

This is a web scraper that collects data from the E-REDES website and can upload it to a database. Since E-REDES exposes no programmatic interface to the data, this web scraper was developed as an approach to collect it. A high-level overview of the process is:

  1. The scraper collects the data from the E-REDES website.
  2. A file with the energy consumption readings is downloaded.
  3. [ Optional ] The file is parsed and the data is uploaded to the selected database.
  4. [ Optional ] A feature supporting only the insertion of "deltas" is available.
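
The "deltas" option in step 4 amounts to filtering out readings the database already holds, so only new rows are inserted. A minimal sketch of that idea (the function and data shapes are illustrative, not the package's actual API):

```python
from datetime import datetime

def delta_readings(readings, last_stored):
    """Keep only readings newer than the latest timestamp already
    stored in the database (the "delta")."""
    if last_stored is None:
        return list(readings)  # empty database: insert everything
    return [r for r in readings if r[0] > last_stored]

# hypothetical (timestamp, kWh) tuples parsed from the downloaded file
readings = [
    (datetime(2023, 6, 1, 0, 0), 0.12),
    (datetime(2023, 6, 1, 0, 15), 0.10),
    (datetime(2023, 6, 1, 0, 30), 0.11),
]

# only the reading taken after the last stored timestamp remains
print(delta_readings(readings, datetime(2023, 6, 1, 0, 15)))
```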

This package supports the E-REDES website as available at the time of writing (14/06/2023). The entrypoint for the scraper is the page https://balcaodigital.e-redes.pt/consumptions/history.

Installation

The package can be installed using pip:

pip install eredesscraper

Configuration

Usage is based on a YAML configuration file.
config.yml holds the credentials for the E-REDES website and the database connection. Currently, only InfluxDB is supported as a database sink.

Template config.yml:

eredes:
  # eredes credentials
  nif: <my-eredes-nif>
  pwd: <my-eredes-password>
  # CPE to monitor. e.g. PT00############04TW (where # is a digit). CPE can be found in your bill details
  cpe: PT00############04TW


influxdb:
  # url to InfluxDB.  e.g. http://localhost or https://influxdb.my-domain.com
  host: http://localhost
  # default port is 8086
  port: 8086
  bucket: <my-influx-bucket>
  org: <my-influx-org>
  # access token with write access
  token: <token>
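
Before running the scraper it is worth checking that every section of the config is filled in. A minimal sketch of such a check (the dict mirrors the template above; with PyYAML it would come from `yaml.safe_load`, which is omitted here to keep the sketch dependency-free):

```python
# Placeholder config mirroring the YAML template above
config = {
    "eredes": {
        "nif": "<my-eredes-nif>",
        "pwd": "<my-eredes-password>",
        "cpe": "PT00############04TW",
    },
    "influxdb": {
        "host": "http://localhost",
        "port": 8086,
        "bucket": "<my-influx-bucket>",
        "org": "<my-influx-org>",
        "token": "<token>",
    },
}

# keys each section must define for the scraper and the InfluxDB sink
REQUIRED = {
    "eredes": {"nif", "pwd", "cpe"},
    "influxdb": {"host", "port", "bucket", "org", "token"},
}

def missing_keys(cfg):
    """Map each incomplete section to the keys it is missing."""
    return {section: keys - cfg.get(section, {}).keys()
            for section, keys in REQUIRED.items()
            if keys - cfg.get(section, {}).keys()}

print(missing_keys(config))  # an empty dict means the template is complete
```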

Usage

CLI:

ers config load "/path/to/config.yml"

# get current month readings
ers run -d influxdb

# get only deltas from last month readings 
ers run -w previous -d influxdb --delta

# get readings from May 2023
ers run -w select -d influxdb -m 5 -y 2023

# start an API server
ers server -H "localhost" -p 8778 --reload -S <path/to/database>

API:

For more details, refer to the OpenAPI documentation or the UI endpoints available at http://<host>:<port>/docs and http://<host>:<port>/redoc.

# main methods:

# load an ers configuration 
# different options to load available:
# - directly in the request body,
# - download remote file,
# - upload local file
curl -X 'POST' \
  'http://localhost:8778/config/upload' \
  -H 'Content-Type: multipart/form-data' \
  -F 'file=@my-config.yml'


# run sync workflow
curl -X 'POST' \
  'http://localhost:8778/run' \
  -H 'Content-Type: application/json' \
  -d '{
  "workflow": "current"
}'

# run async workflow
curl -X 'POST' \
  'http://localhost:8778/run_async' \
  -H 'Content-Type: application/json' \
  -d '{
  "workflow": "select",
  "db": [
    "influxdb"
  ],
  "month": 5,
  "year": 2023,
  "delta": true,
  "download": true
}'

# get task status (`task_id` returned in /run_async response body)
curl -X 'GET' \
  'http://localhost:8778/status/<task_id>'

# download the file retrieved by the workflow
curl -X 'GET' \
  'http://localhost:8778/download/<task_id>'
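
The same interaction can be driven from Python by building the request body and endpoint URLs programmatically. A sketch of that plumbing (field names mirror the curl examples above; `task_id` stays a placeholder since it only exists in a live `/run_async` response):

```python
import json

BASE = "http://localhost:8778"

# JSON body for /run_async, mirroring the curl example above
payload = {
    "workflow": "select",
    "db": ["influxdb"],
    "month": 5,
    "year": 2023,
    "delta": True,
    "download": True,
}
body = json.dumps(payload)

# /run_async responds with a task_id; poll and fetch via these endpoints
task_id = "<task_id>"  # placeholder, returned in the /run_async response body
status_url = f"{BASE}/status/{task_id}"
download_url = f"{BASE}/download/{task_id}"
```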

Python:

from eredesscraper.workflows import switchboard
from pathlib import Path

# get deltas from current month readings
switchboard(config_path=Path("./config.yml"),
            name="current",
            db=["influxdb"],
            delta=True,
            keep=True)

# get readings from May 2023
switchboard(config_path=Path("./config.yml"),
            name="select",
            db=["influxdb"],
            month=5,
            year=2023)

Features

Available workflows:

  • current: Collects the current month's consumption data.
  • previous: Collects the previous month's consumption data.
  • select: Collects the consumption data from an arbitrary month specified by the user.
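
The three workflows differ only in which month they target. A minimal sketch of that mapping (the function is illustrative, not the package's actual switchboard, which also handles login, download, and upload):

```python
from datetime import date

def target_month(workflow, today, month=None, year=None):
    """Return the (month, year) pair a workflow targets."""
    if workflow == "current":
        return today.month, today.year
    if workflow == "previous":
        # January wraps back to December of the previous year
        if today.month == 1:
            return 12, today.year - 1
        return today.month - 1, today.year
    if workflow == "select":
        if month is None or year is None:
            raise ValueError("select requires explicit month and year")
        return month, year
    raise ValueError(f"unknown workflow: {workflow}")

print(target_month("previous", date(2024, 1, 10)))  # wraps to (12, 2023)
```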

Available databases:

  • InfluxDB

Roadmap

  • Add workflow for retrieving previous month data.
  • Add workflow for retrieving data from an arbitrary month.
  • Build CLI.
  • Build API.
  • Containerize app.
  • Documentation.
  • Add CI/CD.
  • Add logging.
  • Add tests (limited coverage).
  • Add runtime support for multiple CPEs.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

License

See LICENSE file.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

eredesscraper-1.0.1.tar.gz (55.4 kB)

Built Distribution

eredesscraper-1.0.1-py3-none-any.whl (37.5 kB)

File details

Details for the file eredesscraper-1.0.1.tar.gz.

File metadata

  • Download URL: eredesscraper-1.0.1.tar.gz
  • Upload date:
  • Size: 55.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: poetry/1.8.3 CPython/3.11.9 Linux/6.5.0-1021-azure

File hashes

Hashes for eredesscraper-1.0.1.tar.gz
  • SHA256: 925766b4b61b4209ede298f5d502ed01f4bea08d3972bdcc5e806f96df3a6b77
  • MD5: 452a5ea37f06f09200327b0b944dd49d
  • BLAKE2b-256: c7480f158adba4dad4f39ce566d1d4834288006dd956c342a47ff98988823acb

See more details on using hashes here.

File details

Details for the file eredesscraper-1.0.1-py3-none-any.whl.

File metadata

  • Download URL: eredesscraper-1.0.1-py3-none-any.whl
  • Upload date:
  • Size: 37.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: poetry/1.8.3 CPython/3.11.9 Linux/6.5.0-1021-azure

File hashes

Hashes for eredesscraper-1.0.1-py3-none-any.whl
  • SHA256: ebf1299da2f4018559923040ee4d31ef7cd4de8fb8a459a79d6a735a839b579c
  • MD5: 99ba1fd286c14941b437bfd517425275
  • BLAKE2b-256: 9de0378ed3ae0a50e0716e2314c71182a7d4afcae28e24b6532c69acd76f6983

