Facilitates user interaction with the 24SEA API (https://api.24sea.eu) by providing pandas interfaces to the API endpoints.
Project description
API 24SEA
api_24sea is a Python package designed to simplify interaction with data from the 24SEA API.
Installation
The package supports Python 3.8 and above. To install it, run the following command in your terminal:
pip install api_24sea
DataSignals Usage
The following example shows the classical usage of the datasignals module.
- The first step is to import the package and the necessary libraries.
- Then, the environment variables are loaded from a `.env` file. This step is optional: if any of the following variable names is set in the environment, the package authenticates automatically.
  - `API_24SEA_USERNAME`, `24SEA_API_USERNAME`, `TWOFOURSEA_API_USERNAME`, or `API_TWOFOURSEA_USERNAME` for the username.
  - `API_24SEA_PASSWORD`, `24SEA_API_PASSWORD`, `TWOFOURSEA_API_PASSWORD`, or `API_TWOFOURSEA_PASSWORD` for the password.
- After that, an empty DataFrame is initialized.
- Finally, the user can get data from the API. The DataFrame authenticates lazily if the environment variables are loaded, or the user can authenticate manually before retrieving data.
Importing the package
# %%
# **Package Imports**
# - From the Python Standard Library
import logging
import os
import sys
# - From third party libraries
import pandas as pd
import dotenv # <-- Not necessary to api_24sea per se, but useful for
# loading environment variables. Install it with
# `pip install python-dotenv`
# - Local imports
from api_24sea.version import __version__, parse_version
import api_24sea
# %%
# **Package Versions**
print("Working Folder: ", os.getcwd())
print(f"Python Version: {sys.version}")
print(f"Pandas Version: {pd.__version__}")
print(f"Package {parse_version(__version__)}")
# **Notebook Configuration**
logging.basicConfig(level=logging.INFO)
Setting up the environment variables (optional)
This step assumes that you have a file structure similar to the following one:
.
├── env
│ └── .env
├── notebooks
│ └── example.ipynb
└── requirements.txt
The `.env` file should look like this:
API_24SEA_USERNAME=your_username
API_24SEA_PASSWORD=your_password
With this in mind, the following code snippet shows how to load the environment variables from the `.env` file:
# %%
# **Load Environment Variables from .env File**
_ = dotenv.load_dotenv("../env/.env")
if _:
    print("Environment Variables Loaded Successfully")
    print(os.getenv("API_24SEA_USERNAME"))
    # print(os.getenv("API_24SEA_PASSWORD"))
else:
    raise Exception("Environment Variables Not Loaded")
Initializing an empty dataframe
Initializing an empty DataFrame is necessary to use the API, because it is where the retrieved data will be stored.
# %%
# **DataFrame initialization**
# The empty DataFrame is created beforehand because it needs to authenticate
# with the API to fetch the data.
df = pd.DataFrame()
Authentication (optional)
If any of the following variable names is set in the environment, the package authenticates automatically:

- `API_24SEA_USERNAME`, `24SEA_API_USERNAME`, `TWOFOURSEA_API_USERNAME`, or `API_TWOFOURSEA_USERNAME` for the username.
- `API_24SEA_PASSWORD`, `24SEA_API_PASSWORD`, `TWOFOURSEA_API_PASSWORD`, or `API_TWOFOURSEA_PASSWORD` for the password.
The user can also authenticate manually by calling the authenticate method on the DataFrame.
# %%
# **Authentication**
df.datasignals.authenticate("some_other_username", "some_other_password")
Alternatively, the user can authenticate with the API on DataFrame instantiation:
# %%
# **DataFrame initialization with authentication**
df = pd.DataFrame().datasignals.authenticate("some_other_username",
"some_other_password")
Checking the available metrics
# %%
# **Metrics Overview**
# The metrics overview is a summary of the metrics available in the API and
# can be accessed from a hidden method in the DataSignals class.
df.datasignals._DataSignals__api.metrics_overview
# It will show all the available metrics with the corresponding units
# and the time window for which the user is allowed to get data
Getting sample data from the API
After loading the environment variables and authenticating with the API, the user can get data from 24SEA API endpoints.
The data is retrieved and stored in the DataFrame. All the metrics are stored in separate columns, and the timestamps are set as the index of the DataFrame.
The data retrieval is done by specifying the sites or the locations or both, the metrics, and timestamps.
- Sites: Case-insensitive; it can match either `site` or `site_id`. It is an optional parameter.
- Locations: Case-insensitive; it can match either `location` or `location_id`. It is an optional parameter. Prefer the `locations` parameter; the legacy singular `location` parameter is still accepted for backward compatibility, but it is deprecated.
- Metrics: Case-insensitive; it can be a partial match of the metric name. If the site and location are specified and metrics equals `all`, all the "allowed" metrics for the specified site and location will be retrieved.
- Timestamps: Timezone-aware datetimes, strings in ISO 8601 format, or shorthand strings compatible with the `shorthand_datetime` package.
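To illustrate the accepted timestamp forms, here is a small sketch; the shorthand string syntax is not shown, since it is defined by the `shorthand_datetime` package rather than here.

```python
import pandas as pd

# Two equivalent ways to express a timezone-aware start timestamp:
start_iso = "2020-03-01T00:00:00Z"               # ISO 8601 string, UTC
start_ts = pd.Timestamp("2020-03-01", tz="UTC")  # timezone-aware Timestamp

# A timezone-aware Timestamp carries its offset explicitly.
print(start_ts.isoformat())  # 2020-03-01T00:00:00+00:00
```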
# %%
# **Data Retrieval**
sites = ["wf"]
locations = ["a01", "a02"]
metrics = ["mean WinDSpEed", "mean pitch", "mean-Yaw", "mean Power"]
# Assigning metrics="all" will retrieve all the metrics available for the
# specified sites and locations.
start_timestamp = "2020-03-01T00:00:00Z"
end_timestamp = "2020-06-01T00:00:00Z"
df.datasignals.get_data(sites, locations, metrics,
start_timestamp, end_timestamp)
Checking the metrics selected and the data
# %%
df.datasignals.selected_metrics
df
Split the data by site and location
The as_dict method is used to split the data by site and location and
return a dictionary of dictionaries of DataFrames.
# %%
# Data is a dictionary of dictionaries of DataFrames in the shape of:
# {
# "site1": {
# "location1": DataFrame,
# "location2": DataFrame,
# ...
# },
# ...
# }
data = df.datasignals.as_dict()
# %%
# Retrieve the DataFrame for location WFA02 of the windfarm site only
data["windfarm"]["WFA02"]
If `df` was built from local data rather than from an API call, the user can still use the as_dict method to split the data by site and location by passing a metrics_map DataFrame (i.e., the metrics overview table) from the datasignals app. For example:
import pandas as pd
import api_24sea

df = pd.DataFrame({
    "timestamp": ["2021-01-01", "2021-01-02"],
    "mean_WF_A01_windspeed": [10.0, 11.0],
    "mean_WF_A02_windspeed": [12.0, 13.0]
})
metrics_map = pd.DataFrame({
    "site": ["wf", "wf"],
    "location": ["a01", "a02"],
    "metric": ["mean_WF_A01_windspeed", "mean_WF_A02_windspeed"]
})
df.datasignals.as_dict(metrics_map)
# output
# {
# "wf": {
# "a01": pd.DataFrame({
# "timestamp": ["2021-01-01", "2021-01-02"],
# "mean_WF_A01_windspeed": [10.0, 11.0]
# }),
# "a02": pd.DataFrame({
# "timestamp": ["2021-01-01", "2021-01-02"],
# "mean_WF_A02_windspeed": [12.0, 13.0]
# })
# }
# }
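The splitting logic can be approximated in plain pandas; the sketch below mimics the output shape above but is not the package's implementation:

```python
import pandas as pd

df = pd.DataFrame({
    "timestamp": ["2021-01-01", "2021-01-02"],
    "mean_WF_A01_windspeed": [10.0, 11.0],
    "mean_WF_A02_windspeed": [12.0, 13.0],
})
metrics_map = pd.DataFrame({
    "site": ["wf", "wf"],
    "location": ["a01", "a02"],
    "metric": ["mean_WF_A01_windspeed", "mean_WF_A02_windspeed"],
})

# Group metric columns by (site, location) and slice the frame so each
# location gets the timestamp column plus its own metric columns.
data = {}
for (site, loc), grp in metrics_map.groupby(["site", "location"]):
    cols = ["timestamp"] + grp["metric"].tolist()
    data.setdefault(site, {})[loc] = df[cols]
```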
Project Structure
.
├── .azure/
├── api_24sea/
│ ├── __init__.py
│ ├── datasignals/
│ │ ├── __init__.py
│ │ ├── fatigue.py
│ │ └── schemas.py
│ ├── core.py
│ ├── exceptions.py
│ ├── singleton.py
│ ├── utils.py
│ └── version.py
├── tests/
├── docs/
├── notebooks/
├── pyproject.toml
├── LICENSE
├── VERSION
└── README.md
License
The package is licensed under the GNU General Public License v3.0.
File details
Details for the file api_24sea-2.2.0.tar.gz.
File metadata
- Download URL: api_24sea-2.2.0.tar.gz
- Upload date:
- Size: 56.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.10.4
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `f251d0ac403e2d865f2ad0a0a3bc64cdfce23e80948a1db03a535c23dd54edf8` |
| MD5 | `cf7e15eabfc5516c64c642dbbce4c75f` |
| BLAKE2b-256 | `9052da5f8f2a75a056b5adfbbc50bfe74c7fcd8fe29cfe3b2d7c6f22c5fbf856` |
File details
Details for the file api_24sea-2.2.0-py3-none-any.whl.
File metadata
- Download URL: api_24sea-2.2.0-py3-none-any.whl
- Upload date:
- Size: 55.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.10.4
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `247736896fc1b38a34f467c6ceb2576c105ef889556d9f6e11ef84712d7a2aa1` |
| MD5 | `50362bc62366cce061d842c19def285e` |
| BLAKE2b-256 | `18389323959d03467b12c8f21a709673af46d1a36f1bada5ed663e4535c95dc1` |