Hub'eau client to collect data from the different APIs

cl-hubeau

Simple hub'eau client for python

This package is currently under active development. Every API on Hub'eau will be covered by this package in due time.

At this stage, the following APIs are covered by cl-hubeau:

  • piezometry
  • hydrometry
  • drinking water quality
  • superficial waterbodies quality
  • ground water quality
  • watercourses flow

For help on the available kwargs for each endpoint, please refer directly to the Hub'eau documentation (these are not covered by the present documentation).

Assume that each function from cl-hubeau is consistent with its Hub'eau counterpart, with the exception of the size and page (or cursor) arguments: those are set automatically by cl-hubeau to crawl along the results.
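
For instance, any Hub'eau filtering kwarg can be passed straight through a cl-hubeau function. A minimal sketch (the date_debut_mesure/date_fin_mesure names are taken from Hub'eau's piezometry chronicles endpoint; double-check them in the Hub'eau documentation):

from cl_hubeau import piezometry

# Hub'eau kwargs are forwarded as-is; only size/page/cursor are managed
# automatically by cl-hubeau.
df = piezometry.get_chronicles(
    ["07548X0009/F"],
    date_debut_mesure="2020-01-01",
    date_fin_mesure="2020-12-31",
)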

Parallelization

cl-hubeau already uses simple multithreading pools to perform requests. To avoid overloading the web servers and to share resources among users, a rate limiter is set to 10 queries per second. This limiter should work fine on any given machine, whatever the context (even with an additional parallelization overlay).

However, cl-hubeau should NOT be used in containers (or pods) with parallelization. There is currently no way of tracking the query rate across multiple machines: greedy queries may end up blacklisted by the team managing Hub'eau.
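
For reference, the throttling is conceptually similar to the sliding-window sketch below (illustrative only, NOT cl-hubeau's actual implementation): every request first acquires a slot, and at most 10 requests may start within any given second.

import threading
import time

# Illustrative sliding-window limiter: at most `max_calls` acquisitions
# per `period` seconds, shared across the threads of a single process.
class RateLimiter:
    def __init__(self, max_calls=10, period=1.0):
        self.max_calls = max_calls
        self.period = period
        self.calls = []
        self.lock = threading.Lock()

    def acquire(self):
        while True:
            with self.lock:
                now = time.monotonic()
                # keep only the timestamps still inside the window
                self.calls = [t for t in self.calls if now - t < self.period]
                if len(self.calls) < self.max_calls:
                    self.calls.append(now)
                    return
                wait = self.period - (now - self.calls[0])
            time.sleep(wait)

This also shows why the safeguard breaks down across containers: the window lives in one process's memory, so two pods would each enforce 10 queries per second and together send 20.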

Configuration

Starting with pynsee 0.2.0 (a cl-hubeau dependency), API keys are no longer needed.

Support

In case of bugs, please open an issue on the repo.

Contribution

Any help is welcome. Please refer to the CONTRIBUTING file.

Licence

GPL-3.0-or-later

Project Status

This package is currently under active development.

Basic examples

Clean cache

from cl_hubeau.utils import clean_all_cache
clean_all_cache()

Piezometry

3 high-level functions are available (and one class for low-level operations).

Get all piezometers (uses a 30-day cache):

from cl_hubeau import piezometry
gdf = piezometry.get_all_stations()

Get chronicles for the first 100 piezometers (uses a 30-day cache):

df = piezometry.get_chronicles(gdf["code_bss"].head(100).tolist())

Get realtime data for the first 100 piezometers:

A small cache is stored to allow for realtime consumption (the cache expires after only 15 minutes). Please adopt a responsible usage of this functionality!

df = piezometry.get_realtime_chronicles(gdf["code_bss"].head(100).tolist())

Low-level class to perform the same tasks:

Note that:

  • the API forbids results > 20k rows, so you may need inner loops (a sketch follows the example below)
  • the cache handling will be your responsibility, notably for realtime data

with piezometry.PiezometrySession() as session:
    df = session.get_chronicles(code_bss="07548X0009/F")
    df = session.get_stations(code_departement=['02', '59', '60', '62', '80'], format="geojson")
    df = session.get_chronicles_real_time(code_bss="07548X0009/F")
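
If a single filter combination would exceed the 20k threshold, a simple inner loop over chunks of identifiers works. A minimal sketch (the chunk size of 50 is an arbitrary assumption, and we assume code_bss accepts a list of codes, as the high-level helpers do):

import pandas as pd

codes = gdf["code_bss"].tolist()
chunk_size = 50  # arbitrary: pick a size that keeps each query under 20k rows

with piezometry.PiezometrySession() as session:
    chunks = [codes[i:i + chunk_size] for i in range(0, len(codes), chunk_size)]
    df = pd.concat(
        [session.get_chronicles(code_bss=chunk) for chunk in chunks],
        ignore_index=True,
    )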

Hydrometry

4 high-level functions are available (and one class for low-level operations).

Get all stations (uses a 30-day cache):

from cl_hubeau import hydrometry
gdf = hydrometry.get_all_stations()

Get all sites (uses a 30-day cache):

gdf = hydrometry.get_all_sites()

Get observations for the first 5 sites (uses a 30-day cache). Note that this will also work with stations (instead of sites):

df = hydrometry.get_observations(gdf["code_site"].head(5).tolist())

Get realtime data for the first 5 sites:

A small cache is stored to allow for realtime consumption (the cache expires after only 15 minutes). Please adopt a responsible usage of this functionality!

df = hydrometry.get_realtime_observations(gdf["code_site"].head(5).tolist())

Low-level class to perform the same tasks:

Note that:

  • the API forbids results > 20k rows, so you may need inner loops
  • the cache handling will be your responsibility, notably for realtime data

with hydrometry.HydrometrySession() as session:
    df = session.get_stations(code_station="K437311001")
    df = session.get_sites(code_departement=['02', '59', '60', '62', '80'], format="geojson")
    df = session.get_realtime_observations(code_entite="K437311001")
    df = session.get_observations(code_entite="K437311001")

Drinking water quality

2 high-level functions are available (and one class for low-level operations).

Get all water networks (UDI) (uses a 30-day cache):

from cl_hubeau import drinking_water_quality
df = drinking_water_quality.get_all_water_networks()

Get the sanitary control results for nitrates on all networks of Paris, Lyon & Marseille (uses a 30-day cache):

networks = drinking_water_quality.get_all_water_networks()
networks = networks[
    networks.nom_commune.isin(["PARIS", "MARSEILLE", "LYON"])
]["code_reseau"].unique().tolist()

df = drinking_water_quality.get_control_results(
    codes_reseaux=networks,
    code_parametre="1340"
)

Note that this query is heavy, even though it is already restricted to nitrates. In theory, you could also query the API without specifying the substance you're tracking, but you may hit the 20k threshold and trigger an exception.
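
If you do hit the threshold, one workaround (a hedged sketch, not a built-in cl-hubeau feature) is to loop over the networks yourself and concatenate the partial results:

import pandas as pd

# One query per network keeps each result set well under the 20k cap.
dfs = [
    drinking_water_quality.get_control_results(
        codes_reseaux=[code],
        code_parametre="1340",
    )
    for code in networks
]
df = pd.concat(dfs, ignore_index=True)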

You can also call the same function, using official city codes directly:

df = drinking_water_quality.get_control_results(
    codes_communes=['59350'],
    code_parametre="1340"
)

Low-level class to perform the same tasks:

Note that:

  • the API forbids results > 20k rows, so you may need inner loops
  • the cache handling will be your responsibility

with drinking_water_quality.DrinkingWaterQualitySession() as session:
    df = session.get_cities_networks(nom_commune="LILLE")
    df = session.get_control_results(code_departement='02', code_parametre="1340")

Superficial waterbodies quality

4 high-level functions are available (and one class for low-level operations).

Get all stations (uses a 30-day cache):

from cl_hubeau import superficial_waterbodies_quality
df = superficial_waterbodies_quality.get_all_stations()

Get all operations (uses a 30-day cache):

from cl_hubeau import superficial_waterbodies_quality
df = superficial_waterbodies_quality.get_all_operations()

Note that this query is heavy; users should restrict it to a given territory. For instance, you could use:

df = superficial_waterbodies_quality.get_all_operations(code_region="11")

Get all environmental conditions:

from cl_hubeau import superficial_waterbodies_quality
df = superficial_waterbodies_quality.get_all_environmental_conditions()

Note that this query is heavy; users should restrict it to a given territory. For instance, you could use:

df = superficial_waterbodies_quality.get_all_environmental_conditions(code_region="11")

Get all physicochemical analyses:

from cl_hubeau import superficial_waterbodies_quality
df = superficial_waterbodies_quality.get_all_analyses()

Note that this query is heavy; users should restrict it to a given territory and given parameters. For instance, you could use:

df = superficial_waterbodies_quality.get_all_analyses(
    code_departement="59",
    code_parametre="1313",
)

Low-level class to perform the same tasks:

Note that:

  • the API forbids results > 20k rows, so you may need inner loops
  • the cache handling will be your responsibility

with superficial_waterbodies_quality.SuperficialWaterbodiesQualitySession() as session:
    df = session.get_stations(code_commune="59183")
    df = session.get_operations(code_commune="59183")
    df = session.get_environmental_conditions(code_commune="59183")
    df = session.get_analyses(code_commune='59183', code_parametre="1340")

Ground water quality

2 high-level functions are available (and one class for low-level operations).

Get all stations (uses a 30-day cache):

from cl_hubeau import ground_water_quality
df = ground_water_quality.get_all_stations()

Get the test results for nitrates:

df = ground_water_quality.get_all_analyses(code_param="1340")

Note that this query is heavy, even though it is already restricted to nitrates, and it may fail. In theory, you could even query the API without specifying the substance you're tracking, but you would hit the 20k threshold and trigger an exception.

In practice, you should call the same function with a territorial restriction or with specific bss_ids. For instance, you could use official city codes directly:

df = ground_water_quality.get_all_analyses(
    num_departement=["59"],
    code_param="1340"
)

Note: a bit of caution is needed here, as the arguments are NOT the same on the two endpoints. Please have a look at the Hub'eau documentation. For instance, the city code is called "code_insee_actuel" on the analyses endpoint and "code_commune" on the stations endpoint.
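
For instance, a minimal sketch of the same city-level query against both endpoints (parameter names as stated above; kwargs are forwarded to Hub'eau, so double-check them against the Hub'eau documentation):

with ground_water_quality.GroundWaterQualitySession() as session:
    # stations endpoint: the city code is named "code_commune"
    stations = session.get_stations(code_commune="59350")
    # analyses endpoint: the same city code is named "code_insee_actuel"
    analyses = session.get_analyses(code_insee_actuel="59350", code_param="1340")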

Low-level class to perform the same tasks:

Note that:

  • the API forbids results > 20k rows, so you may need inner loops
  • the cache handling will be your responsibility

with ground_water_quality.GroundWaterQualitySession() as session:
    df = session.get_stations(bss_id="01832B0600")
    df = session.get_analyses(
        bss_ids=["BSS000BMMA"],
        code_param="1461",
    )

Watercourses flow

3 high-level functions are available (and one class for low-level operations).

Get all stations (uses a 30-day cache):

from cl_hubeau import watercourses_flow
df = watercourses_flow.get_all_stations()

Get all observations (uses a 30-day cache):

from cl_hubeau import watercourses_flow
df = watercourses_flow.get_all_observations()

Note that this query is heavy; users should restrict it to a given territory when possible. For instance, you could use:

df = watercourses_flow.get_all_observations(code_region="11")

Get all campaigns:

from cl_hubeau import watercourses_flow
df = watercourses_flow.get_all_campaigns()

Low-level class to perform the same tasks:

Note that:

  • the API forbids results > 20k rows, so you may need inner loops
  • the cache handling will be your responsibility

with watercourses_flow.WatercoursesFlowSession() as session:
    df = session.get_stations(code_departement="59")
    df = session.get_campaigns(code_campagne=[12])
    df = session.get_observations(code_station="F6640008")
