
Hubeau client to collect data from the different APIs

Project description

cl-hubeau

Simple hub'eau client for python

This package is currently under active development. Every API on Hub'eau will be covered by this package in due time.

At this stage, the following APIs are covered by cl-hubeau:

For help on the kwargs available for each endpoint, please refer directly to the documentation on Hub'eau (they are not covered by the present documentation).

You can assume that each function from cl-hubeau is consistent with its Hub'eau counterpart, with the exception of the size and page (or cursor) arguments: those are set automatically by cl-hubeau to crawl along the results.

Parallelization

cl-hubeau already uses simple multithreading pools to perform requests. In order not to endanger the web servers and to share resources among users, a rate limiter is set to 10 queries per second. This limiter should work fine on any given machine, whatever the context (even with an additional parallelization overlay).

However, cl-hubeau should NOT be used in containers or pods with parallelization. There is currently no way of tracking the query rate across multiple machines, and greedy querying may end up blacklisted by the team managing Hub'eau.

Configuration

First of all, you will need API keys from INSEE to use some high-level operations, which may loop over cities' official codes. Please refer to pynsee's API subscription tutorial for help.

Basic examples

Clean cache

from cl_hubeau.utils import clean_all_cache
clean_all_cache()

Piezometry

3 high level functions are available (and one class for low level operations).

Get all piezometers (uses a 30-day cache):

from cl_hubeau import piezometry
gdf = piezometry.get_all_stations()

Get chronicles for the first 100 piezometers (uses a 30-day cache):

df = piezometry.get_chronicles(gdf["code_bss"].head(100).tolist())
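The chronicles come back as a pandas DataFrame, which you can post-process as usual. Below is a minimal sketch of such post-processing on a synthetic stand-in frame; the column names (code_bss, niveau_nappe_eau) are assumptions based on Hub'eau's piezometry vocabulary, so check them against the actual output:

```python
import pandas as pd

# Synthetic stand-in for the DataFrame returned by get_chronicles();
# real column names should be checked against the Hub'eau documentation.
df = pd.DataFrame(
    {
        "code_bss": ["07548X0009/F", "07548X0009/F", "07549X0011/F"],
        "date_mesure": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-01"]),
        "niveau_nappe_eau": [10.2, 10.4, 7.8],
    }
)

# Mean water-table level per piezometer
mean_levels = df.groupby("code_bss")["niveau_nappe_eau"].mean()
print(mean_levels)
```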

Get realtime data for the first 100 piezometers:

A small cache is stored to allow for realtime consumption (this cache expires after only 15 minutes). Please adopt a responsible usage of this functionality!

df = piezometry.get_realtime_chronicles(gdf["code_bss"].head(100).tolist())

Low level class to perform the same tasks:

Note that:

  • the API forbids results of more than 20,000 rows, so you may need inner loops;
  • the cache handling will be your responsibility, notably for realtime data.
with piezometry.PiezometrySession() as session:
    df = session.get_chronicles(code_bss="07548X0009/F")
    df = session.get_stations(code_departement=['02', '59', '60', '62', '80'], format="geojson")
    df = session.get_chronicles_real_time(code_bss="07548X0009/F")
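One way to implement the inner loops mentioned above is to split the requested period into calendar-year chunks, so that each request stays well under the 20,000-row cap. The helper below is a sketch; the date kwargs shown in the commented usage are hypothetical, so check the Hub'eau documentation for the exact parameter names:

```python
from datetime import date


def year_chunks(start, end):
    """Split [start, end] into calendar-year-sized (start, end) pairs."""
    chunks = []
    chunk_start = start
    while chunk_start <= end:
        chunk_end = min(date(chunk_start.year, 12, 31), end)
        chunks.append((chunk_start, chunk_end))
        chunk_start = date(chunk_start.year + 1, 1, 1)
    return chunks


# Each chunk could then be passed to the session, e.g. (hypothetical
# kwargs, to be checked against the Hub'eau documentation):
# with piezometry.PiezometrySession() as session:
#     for start, end in year_chunks(date(2015, 1, 1), date(2017, 6, 30)):
#         df = session.get_chronicles(
#             code_bss="07548X0009/F",
#             date_debut_mesure=start.isoformat(),
#             date_fin_mesure=end.isoformat(),
#         )

chunks = year_chunks(date(2015, 6, 1), date(2017, 3, 15))
print(chunks)
```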

Hydrometry

4 high level functions are available (and one class for low level operations).

Get all stations (uses a 30-day cache):

from cl_hubeau import hydrometry 
gdf = hydrometry.get_all_stations()

Get all sites (uses a 30-day cache):

gdf = hydrometry.get_all_sites()

Get observations for the first 5 sites (uses a 30-day cache). Note that this will also work with stations (instead of sites):

df = hydrometry.get_observations(gdf["code_site"].head(5).tolist())

Get realtime data for the first 5 sites:

A small cache is stored to allow for realtime consumption (this cache expires after only 15 minutes). Please adopt a responsible usage of this functionality!

df = hydrometry.get_realtime_observations(gdf["code_site"].head(5).tolist())
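With realtime data you will often want only the most recent observation per site. A sketch of that filtering, on a synthetic stand-in frame; the column names (code_site, date_obs, resultat_obs) are assumptions based on Hub'eau's hydrometry vocabulary:

```python
import pandas as pd

# Synthetic stand-in for the realtime observations DataFrame;
# real column names should be checked against the Hub'eau documentation.
df = pd.DataFrame(
    {
        "code_site": ["A", "A", "B"],
        "date_obs": pd.to_datetime(
            ["2024-05-01 10:00", "2024-05-01 10:30", "2024-05-01 10:15"]
        ),
        "resultat_obs": [12.0, 12.5, 3.1],
    }
)

# Keep only the most recent observation per site
latest = df.sort_values("date_obs").groupby("code_site").tail(1)
print(latest)
```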

Low level class to perform the same tasks:

Note that:

  • the API forbids results of more than 20,000 rows, so you may need inner loops;
  • the cache handling will be your responsibility, notably for realtime data.
with hydrometry.HydrometrySession() as session:
    df = session.get_stations(code_station="K437311001")
    df = session.get_sites(code_departement=['02', '59', '60', '62', '80'], format="geojson")
    df = session.get_realtime_observations(code_entite="K437311001")
    df = session.get_observations(code_entite="K437311001")
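When a query spans several departments, another way to keep each request small is to loop over the departments one by one and concatenate the results. The session calls in the comment are only a sketch; the concatenation itself is shown on synthetic stand-in frames:

```python
import pandas as pd

# Hypothetical usage, one request per department:
# dfs = []
# with hydrometry.HydrometrySession() as session:
#     for dep in ['02', '59', '60', '62', '80']:
#         dfs.append(session.get_stations(code_departement=dep))
# stations = pd.concat(dfs, ignore_index=True)

# The concatenation step, on synthetic stand-in frames:
dfs = [
    pd.DataFrame({"code_station": ["K437311001"], "code_departement": ["59"]}),
    pd.DataFrame({"code_station": ["E642201001"], "code_departement": ["62"]}),
]
stations = pd.concat(dfs, ignore_index=True)
print(len(stations))
```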

Drinking water quality

2 high level functions are available (and one class for low level operations).

Get all water networks (UDI) (uses a 30-day cache):

from cl_hubeau import drinking_water_quality 
df = drinking_water_quality.get_all_water_networks()

Get the sanitary control results for nitrates on all networks of Paris, Lyon & Marseille (uses a 30-day cache):

networks = drinking_water_quality.get_all_water_networks()
networks = networks[
    networks.nom_commune.isin(["PARIS", "MARSEILLE", "LYON"])
]["code_reseau"].unique().tolist()

df = drinking_water_quality.get_control_results(
    codes_reseaux=networks,
    code_parametre="1340"
)

Note that this query is heavy, even though it was already restricted to nitrates. In theory, you could also query the API without specifying the substance you're tracking, but you may hit the 20,000-row threshold and trigger an exception.

You can also call the same function, using official city codes directly:

df = drinking_water_quality.get_control_results(
    codes_communes=['59350'],
    code_parametre="1340"
)

Low level class to perform the same tasks:

Note that:

  • the API forbids results of more than 20,000 rows, so you may need inner loops;
  • the cache handling will be your responsibility.
with drinking_water_quality.DrinkingWaterQualitySession() as session:
    df = session.get_cities_networks(nom_commune="LILLE")
    df = session.get_control_results(code_departement='02', code_parametre="1340")

Superficial waterbodies quality

4 high level functions are available (and one class for low level operations).

Get all stations (uses a 30-day cache):

from cl_hubeau import superficial_waterbodies_quality 
df = superficial_waterbodies_quality.get_all_stations()

Get all operations (uses a 30-day cache):

from cl_hubeau import superficial_waterbodies_quality
df = superficial_waterbodies_quality.get_all_operations()

Note that this query is heavy; users should restrict it to a given territory. For instance, you could use:

df = superficial_waterbodies_quality.get_all_operations(code_region="11")

Get all environmental conditions:

from cl_hubeau import superficial_waterbodies_quality
df = superficial_waterbodies_quality.get_all_environmental_conditions()

Note that this query is heavy; users should restrict it to a given territory. For instance, you could use:

df = superficial_waterbodies_quality.get_all_environmental_conditions(code_region="11")

Get all physicochemical analyses:

from cl_hubeau import superficial_waterbodies_quality
df = superficial_waterbodies_quality.get_all_analysis()

Note that this query is heavy; users should restrict it to a given territory and given parameters. For instance, you could use:

df = superficial_waterbodies_quality.get_all_analysis(
    code_departement="59", 
    code_parametre="1313"
    )
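The analyses can then be summarised per station with ordinary pandas operations. A sketch on a synthetic stand-in frame; the column names (code_station, resultat) are assumptions, so check them against the actual output:

```python
import pandas as pd

# Synthetic stand-in for the analyses DataFrame;
# real column names should be checked against the Hub'eau documentation.
df = pd.DataFrame(
    {
        "code_station": ["S1", "S1", "S2"],
        "resultat": [0.8, 1.2, 0.5],
    }
)

# Mean and maximum measured value per station
summary = df.groupby("code_station")["resultat"].agg(["mean", "max"])
print(summary)
```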

Low level class to perform the same tasks:

Note that:

  • the API forbids results of more than 20,000 rows, so you may need inner loops;
  • the cache handling will be your responsibility.
with superficial_waterbodies_quality.SuperficialWaterbodiesQualitySession() as session:
    df = session.get_stations(code_commune="59183")
    df = session.get_operations(code_commune="59183")
    df = session.get_environmental_conditions(code_commune="59183")
    df = session.get_analysis(code_commune='59183', code_parametre="1340")

Project details


Download files

Download the file for your platform.

Source Distribution

cl_hubeau-0.5.0.tar.gz (22.7 kB)

Uploaded Source

Built Distribution

cl_hubeau-0.5.0-py3-none-any.whl (31.9 kB)

Uploaded Python 3

File details

Details for the file cl_hubeau-0.5.0.tar.gz.

File metadata

  • Download URL: cl_hubeau-0.5.0.tar.gz
  • Upload date:
  • Size: 22.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.4 CPython/3.9.20 Linux/6.5.0-1025-azure

File hashes

Hashes for cl_hubeau-0.5.0.tar.gz
Algorithm Hash digest
SHA256 514e8d35ccbf3c005d93970caa52a50f1922d656447059aca0823fa208091a4c
MD5 06ca17557ca520602431a798e2bbb91a
BLAKE2b-256 3cf4dc5423226134c16ce20a369ba84b7db2bc2661cc608e60f87af73805deed


File details

Details for the file cl_hubeau-0.5.0-py3-none-any.whl.

File metadata

  • Download URL: cl_hubeau-0.5.0-py3-none-any.whl
  • Upload date:
  • Size: 31.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.4 CPython/3.9.20 Linux/6.5.0-1025-azure

File hashes

Hashes for cl_hubeau-0.5.0-py3-none-any.whl
Algorithm Hash digest
SHA256 3fa647cc97cd7c07d9b3138537f2800bf6f4df4ad2fd02a88e06e7df96c57434
MD5 99c26c72037887e56048f5d40b46ff6d
BLAKE2b-256 d67f41b07027cb692785f7b83fb4c0ea0abae8d462f85aeeb5ef8167ef64a0d8

