E-REDES Scraper

Description

This is a web scraper that collects data from the E-REDES website and can upload it to a database. Since E-REDES exposes no programmatic interface to the data, this scraper was developed as an approach to collecting it. A high-level overview of the process:

  1. The scraper collects the data from the E-REDES website.
  2. A file with the energy consumption readings is downloaded.
  3. [ Optional ] The file is parsed and the data is uploaded to the selected database.
  4. [ Optional ] A "delta" mode inserts only records not already present in the database.

This package supports the E-REDES website as available at the time of writing (14/06/2023). The entrypoint for the scraper is the page https://balcaodigital.e-redes.pt/consumptions/history.
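The "delta" insertion in step 4 can be illustrated with a minimal sketch (the package's actual implementation may differ): keep only the readings newer than the most recent timestamp already stored in the database.

```python
# Illustrative sketch of "delta" insertion: given the latest timestamp
# already stored, keep only readings strictly newer than it.
from datetime import datetime

def filter_deltas(readings, last_stored):
    """Return only readings newer than the most recent stored timestamp."""
    if last_stored is None:  # empty database: everything is new
        return list(readings)
    return [r for r in readings if r["timestamp"] > last_stored]

readings = [
    {"timestamp": datetime(2023, 6, 14, 10, 15), "kwh": 0.12},
    {"timestamp": datetime(2023, 6, 14, 10, 30), "kwh": 0.09},
]
new = filter_deltas(readings, last_stored=datetime(2023, 6, 14, 10, 15))
# only the 10:30 reading remains to be inserted
```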

Installation

The package can be installed using pip:

pip install eredesscraper

Configuration

Usage is based on a YAML configuration file.
config.yml holds the credentials for the E-REDES website and the database connection. Currently, only InfluxDB is supported as a database sink.

Template config.yml:

eredes:
  # eredes credentials
  nif: <my-eredes-nif>
  pwd: <my-eredes-password>
  # CPE to monitor. e.g. PT00############04TW (where # is a digit). CPE can be found in your bill details
  cpe: PT00############04TW


influxdb:
  # url to InfluxDB.  e.g. http://localhost or https://influxdb.my-domain.com
  host: http://localhost
  # default port is 8086
  port: 8086
  bucket: <my-influx-bucket>
  org: <my-influx-org>
  # access token with write access
  token: <token>
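After filling in the template, a quick sanity check of the parsed config can catch missing keys early. The snippet below is illustrative only (the package may validate differently); the section and key names come from the template above, and parsing the YAML itself would typically use PyYAML's `yaml.safe_load`.

```python
# Sanity-check a parsed config.yml against the keys in the template above.
# Validation logic is illustrative; the package may do this differently.
REQUIRED = {
    "eredes": {"nif", "pwd", "cpe"},
    "influxdb": {"host", "port", "bucket", "org", "token"},
}

def check_config(cfg: dict) -> dict:
    """Raise ValueError listing any required keys missing per section."""
    missing = {
        section: sorted(keys - set(cfg.get(section, {})))
        for section, keys in REQUIRED.items()
        if keys - set(cfg.get(section, {}))
    }
    if missing:
        raise ValueError(f"config.yml is missing keys: {missing}")
    return cfg

# e.g. cfg = yaml.safe_load(open("config.yml")) with PyYAML, then:
sample = {
    "eredes": {"nif": "123456789", "pwd": "secret", "cpe": "PT0000000000000000TW"},
    "influxdb": {"host": "http://localhost", "port": 8086,
                 "bucket": "energy", "org": "home", "token": "t0k3n"},
}
cfg = check_config(sample)
```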

Usage

CLI:

ers config load "/path/to/config.yml"

# get current month readings
ers run -d influxdb

# get only deltas from last month readings 
ers run -w previous -d influxdb --delta

# get readings from May 2023
ers run -w select -d influxdb -m 5 -y 2023

# start an API server
ers server -H "localhost" -p 8778 --reload -S <path/to/database>

API:

For more details, refer to the OpenAPI documentation or the UI endpoints available at http://<host>:<port>/docs and http://<host>:<port>/redoc.

# main methods:

# load an ers configuration 
# different options to load available:
# - directly in the request body,
# - download remote file,
# - upload local file
curl -X 'POST' \
  'http://localhost:8778/config/upload' \
  -H 'Content-Type: multipart/form-data' \
  -F 'file=@my-config.yml'


# run sync workflow
curl -X 'POST' \
  'http://localhost:8778/run' \
  -H 'Content-Type: application/json' \
  -d '{
  "workflow": "current"
}'

# run async workflow
curl -X 'POST' \
  'http://localhost:8778/run_async' \
  -H 'Content-Type: application/json' \
  -d '{
  "workflow": "select",
  "db": [
    "influxdb"
  ],
  "month": 5,
  "year": 2023,
  "delta": true,
  "download": true
}'

# get task status (`task_id` returned in /run_async response body)
curl -X 'GET' \
  'http://localhost:8778/status/<task_id>'

# download the file retrieved by the workflow
curl -X 'GET' \
  'http://localhost:8778/download/<task_id>'
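The same calls can be driven from Python with only the standard library. This sketch assumes a server started via `ers server -H localhost -p 8778`; the response field name `task_id` is an assumption based on the comment above, so check the actual `/run_async` response body.

```python
# Sketch of calling the API from Python (stdlib only). Assumes a server
# on localhost:8778; "task_id" as a response field is an assumption.
import json
from urllib.request import Request, urlopen

BASE = "http://localhost:8778"

payload = {
    "workflow": "select",
    "db": ["influxdb"],
    "month": 5,
    "year": 2023,
    "delta": True,
    "download": True,
}
run_req = Request(
    f"{BASE}/run_async",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)

def poll_status(task_id: str) -> dict:
    """GET /status/<task_id> and decode the JSON body."""
    with urlopen(f"{BASE}/status/{task_id}") as resp:
        return json.load(resp)

SERVER_RUNNING = False  # flip on with a live server
if SERVER_RUNNING:
    with urlopen(run_req) as resp:
        task_id = json.load(resp)["task_id"]  # assumed field name
    print(poll_status(task_id))
```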

Python:

from eredesscraper.workflows import switchboard
from pathlib import Path

# get deltas from current month readings
switchboard(config_path=Path("./config.yml"),
            name="current",
            db=["influxdb"],
            delta=True,
            keep=True)

# get readings from May 2023
switchboard(config_path=Path("./config.yml"),
            name="select",
            db=["influxdb"],
            month=5,
            year=2023)

Features

Available workflows:

  • current: Collects the current month consumption.
  • previous: Collects the previous month consumption data.
  • select: Collects the consumption data from an arbitrary month specified by the user.

Available databases:

  • InfluxDB (currently the only supported sink)
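As an illustration of what an InfluxDB sink stores, a single energy reading maps naturally onto InfluxDB line protocol. The measurement, tag, and field names below are hypothetical, not necessarily what the package writes:

```python
# Hypothetical mapping of one energy reading onto InfluxDB line protocol:
# "<measurement>,<tags> <fields> <nanosecond timestamp>".
from datetime import datetime, timezone

def to_line_protocol(cpe: str, kwh: float, ts: datetime) -> str:
    """Render a reading as one line-protocol record (names illustrative)."""
    ns = int(ts.timestamp() * 1e9)  # InfluxDB expects nanosecond precision
    return f"energy_consumption,cpe={cpe} kwh={kwh} {ns}"

line = to_line_protocol(
    "PT0000000000000000TW",  # placeholder CPE
    0.12,
    datetime(2023, 5, 1, 0, 15, tzinfo=timezone.utc),
)
```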
Roadmap

  • Add workflow for retrieving previous month data.
  • Add workflow for retrieving data from an arbitrary month.
  • Build CLI.
  • Build API.
  • Containerize app.
  • Documentation.
  • Add CI/CD.
  • Add logging.
  • Add tests (limited coverage).
  • Add runtime support for multiple CPEs.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

License

See LICENSE file.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

eredesscraper-1.0.0.tar.gz (55.2 kB)

Uploaded Source

Built Distribution

eredesscraper-1.0.0-py3-none-any.whl (37.3 kB)

Uploaded Python 3

File details

Details for the file eredesscraper-1.0.0.tar.gz.

File metadata

  • Download URL: eredesscraper-1.0.0.tar.gz
  • Upload date:
  • Size: 55.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: poetry/1.8.3 CPython/3.11.9 Linux/6.5.0-1021-azure

File hashes

Hashes for eredesscraper-1.0.0.tar.gz
  • SHA256: 45fd1a8fab884d64140eba9fb561aa5197343ef88dd0fc36024f0593760ec2d7
  • MD5: 82c1aa9b13a8ada15fc91f8c98924a22
  • BLAKE2b-256: d7c513e431c0ed2c155c3075359de0f191ab93e456b2c1d21bc8619808d37e8e

See more details on using hashes here.

File details

Details for the file eredesscraper-1.0.0-py3-none-any.whl.

File metadata

  • Download URL: eredesscraper-1.0.0-py3-none-any.whl
  • Upload date:
  • Size: 37.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: poetry/1.8.3 CPython/3.11.9 Linux/6.5.0-1021-azure

File hashes

Hashes for eredesscraper-1.0.0-py3-none-any.whl
  • SHA256: d2f5d52c3773711c7bedb41376debef07655bc99a4a2ec33e1f2f60024853c46
  • MD5: 3d4784f6f1f5d52381dabfc028188ac1
  • BLAKE2b-256: 863c79e7ce21e505f629dd871d1a97c262c0a8152fe761b08ce7e393d1b3c2e1

