
User-friendly PySpark helpers for Microsoft Fabric Lakehouses and Warehouses

Project description

fabrictools

A Python library that simplifies data work in Microsoft Fabric.
You use short functions to read, clean, merge, and publish your tables, without managing complex technical paths.




Why use fabrictools

  • You pass the Lakehouse/Warehouse name, not a long URL (see the comparison sketch after this list).
  • Common operations are ready to use (read, write, merge, clean).
  • You can run a preparation pipeline in a few clear steps.
  • Generic DataFrame helpers are included (filter by a list of values, join with prefixed columns).
  • Orchestration functions (single table or bulk) save you time.
  • Your notebook code stays readable for the whole team.
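
A minimal comparison sketch of what "name, not URL" means in practice (the abfss path, workspace name, and table below are illustrative placeholders, not something fabrictools requires):

from pyspark.sql import SparkSession
import fabrictools as ft

spark = SparkSession.builder.getOrCreate()

# Without fabrictools: spell out the full OneLake path yourself (placeholder path)
df_raw = spark.read.format("delta").load(
    "abfss://MyWorkspace@onelake.dfs.fabric.microsoft.com/BronzeLakehouse.Lakehouse/Tables/dbo/orders"
)

# With fabrictools: pass the Lakehouse name and a relative path
df = ft.read_lakehouse("BronzeLakehouse", "dbo/orders")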

Prerequisites

  • Python >= 3.9
  • A Microsoft Fabric environment (recommended)
  • A notebook attached to a Lakehouse for Lakehouse operations

Good to know:

  • In Fabric, pyspark and delta-spark are already available.
  • Outside Fabric, some path-resolution functions may fail (e.g., notebookutils is missing); a guard sketch follows this list.
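
A minimal guard sketch (plain Python, not a fabrictools API) to check whether the Fabric runtime utilities are present before calling Lakehouse path resolution:

import importlib.util

# notebookutils is only available inside Microsoft Fabric notebooks
IN_FABRIC = importlib.util.find_spec("notebookutils") is not None

if IN_FABRIC:
    import fabrictools as ft
    df = ft.read_lakehouse("BronzeLakehouse", "dbo/orders")
else:
    print("notebookutils not found: run this inside a Microsoft Fabric notebook")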

Installation

# Standard case (Fabric notebook)
pip install fabrictools

# Local case with Spark + Delta
pip install "fabrictools[spark]"

# Visualization extra (charts for the quality scan)
pip install "fabrictools[visualization]"

Getting started (5 minutes)

import fabrictools as ft

# Read a table/file from a Lakehouse
df = ft.read_lakehouse("BronzeLakehouse", "dbo/orders")
df.show(5)

Next, you can (see the sketch after this list):

  1. Clean the data (clean_data)
  2. Add metadata (add_silver_metadata)
  3. Write to a target Lakehouse (write_lakehouse)
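
A minimal end-to-end sketch of that flow (the Lakehouse names and paths are illustrative; the calls mirror the tutorial steps below):

import fabrictools as ft

# 1. Read from the source Lakehouse
df = ft.read_lakehouse("BronzeLakehouse", "dbo/orders")

# 2. Clean the data
df_clean = ft.clean_data(df)

# 3. Add Silver metadata describing where the data came from
df_silver = ft.add_silver_metadata(
    df_clean,
    source_lakehouse_name="BronzeLakehouse",
    source_relative_path="dbo/orders",
)

# 4. Write to the target Lakehouse
ft.write_lakehouse(df_silver, "SilverLakehouse", "dbo/orders", mode="overwrite")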

Interactive tutorial: the fictional NovaRetail project

Goal: start from raw sales data and end with tables ready for reporting.

Overview (diagram)

flowchart LR
    sourceLakehouse["BronzeLakehouse (raw)"] --> cleanStep["Cleaning"]
    cleanStep --> silverStep["Silver enrichment"]
    silverStep --> curatedLakehouse["SilverLakehouse (curated)"]
    curatedLakehouse --> preparedStep["Semantic preparation"]
    preparedStep --> preparedLakehouse["PreparedLakehouse"]
    preparedLakehouse --> warehouseStep["Warehouse + BI"]

Step 1 - Read the raw sales

import fabrictools as ft

orders_raw = ft.read_lakehouse("BronzeLakehouse", "dbo/orders_raw")
orders_raw.show(5)

Step 2 - Clean the data

orders_clean = ft.clean_data(orders_raw)

Step 3 - Enrich with Silver metadata

orders_silver = ft.add_silver_metadata(
    orders_clean,
    source_lakehouse_name="BronzeLakehouse",
    source_relative_path="dbo/orders_raw",
    source_layer="bronze",
)

Step 4 - Write to Silver

ft.write_lakehouse(
    orders_silver,
    lakehouse_name="SilverLakehouse",
    relative_path="dbo/orders",
    mode="overwrite",
    partition_by=["year", "month", "day"],
)

Step 5 - Scan data quality

quality = ft.scan_data_errors(orders_silver, include_samples=True, display_results=True)
quality["summary_df"].show(truncate=False)

Step 6 - Incremental merge (upsert)

orders_updates = ft.read_lakehouse("BronzeLakehouse", "dbo/orders_updates")

ft.merge_lakehouse(
    source_df=orders_updates,
    lakehouse_name="SilverLakehouse",
    relative_path="dbo/orders",
    merge_condition="src.order_id = tgt.order_id",
)

Step 7 - Write to a Warehouse

ft.write_warehouse(
    df=orders_silver,
    warehouse_name="RetailWarehouse",
    table="dbo.orders",
    mode="overwrite",
)

Step 8 - Prepared pipeline (single table)

prepared_df = ft.prepare_and_write_data(
    source_lakehouse_name="SilverLakehouse",
    source_relative_path="Tables/dbo/orders",
    target_lakehouse_name="PreparedLakehouse",
    target_relative_path="Tables/dbo/orders_prepared",
    mode="overwrite",
)

Step 9 - Prepared pipeline (bulk)

bulk_result = ft.prepare_and_write_all_tables(
    source_lakehouse_name="SilverLakehouse",
    target_lakehouse_name="PreparedLakehouse",
    include_schemas=["dbo"],
    continue_on_error=True,
)
print(bulk_result["successful_tables"], bulk_result["failed_tables"])

Step 10 - Dimensions for reporting

dims = ft.generate_dimensions(
    lakehouse_name="PreparedLakehouse",
    warehouse_name="RetailWarehouse",
    include_date=True,
    include_country=True,
    include_city=True,
)

Quick index: all public functions

Each function below is exported directly from import fabrictools as ft.

Lakehouse

read_lakehouse

df = ft.read_lakehouse("BronzeLakehouse", "dbo/customers")

write_lakehouse

ft.write_lakehouse(df, "SilverLakehouse", "dbo/customers", mode="overwrite")

merge_lakehouse

ft.merge_lakehouse(
    source_df=df_updates,
    lakehouse_name="SilverLakehouse",
    relative_path="dbo/customers",
    merge_condition="src.customer_id = tgt.customer_id",
)

delete_all_lakehouse_tables

ft.delete_all_lakehouse_tables(
    lakehouse_name="SandboxLakehouse",
    include_schemas=["dbo"],
    dry_run=True,
)

clean_data

df_clean = ft.clean_data(df)

add_silver_metadata

df_silver = ft.add_silver_metadata(df_clean, "BronzeLakehouse", "dbo/customers_raw")

scan_data_errors

scan = ft.scan_data_errors(df_silver, include_samples=True, display_results=False)
scan["summary_df"].show()

clean_and_write_data

df_out = ft.clean_and_write_data(
    source_lakehouse_name="BronzeLakehouse",
    source_relative_path="dbo/customers_raw",
    target_lakehouse_name="SilverLakehouse",
    target_relative_path="dbo/customers",
    mode="overwrite",
)

clean_and_write_all_tables

result = ft.clean_and_write_all_tables(
    source_lakehouse_name="BronzeLakehouse",
    target_lakehouse_name="SilverLakehouse",
    include_schemas=["dbo"],
    continue_on_error=True,
)

Warehouse

read_warehouse

df_wh = ft.read_warehouse("RetailWarehouse", "SELECT TOP 100 * FROM dbo.orders")

write_warehouse

ft.write_warehouse(df_wh, warehouse_name="RetailWarehouse", table="dbo.orders_snapshot", mode="append")

Dimensions

build_dimension_date

dim_date = ft.build_dimension_date(start_date="2020-01-01", end_date="2030-12-31")

build_dimension_country

dim_country = ft.build_dimension_country(countries_limit=100)

build_dimension_city

dim_city = ft.build_dimension_city(
    regions=["Europe"],
    countries=["FR", "DEU", "Belgium"],
)

generate_dimensions

all_dims = ft.generate_dimensions(
    lakehouse_name="PreparedLakehouse",
    warehouse_name="RetailWarehouse",
    include_date=True,
    include_country=True,
    include_city=True,
)

Source -> Prepared

snapshot_source_schema

schema_hash = ft.snapshot_source_schema("SilverLakehouse", "Tables/dbo/orders")

resolve_columns

mappings = ft.resolve_columns(
    df=orders_silver,
    source_lakehouse_name="SilverLakehouse",
    schema_hash=schema_hash,
)

transform_to_prepared

prepared_df = ft.transform_to_prepared(
    df=orders_silver,
    resolved_mappings=mappings,
    source_lakehouse_name="SilverLakehouse",
)

write_prepared_table

ft.write_prepared_table(
    df=prepared_df,
    resolved_mappings=mappings,
    target_lakehouse_name="PreparedLakehouse",
    target_relative_path="Tables/dbo/orders_prepared",
    mode="overwrite",
)

generate_prepared_aggregations

agg_tables = ft.generate_prepared_aggregations(
    source_lakehouse_name="SilverLakehouse",
    target_lakehouse_name="PreparedLakehouse",
    target_relative_path="Tables/dbo/orders_prepared",
    resolved_mappings=mappings,
)

publish_semantic_model

publish_result = ft.publish_semantic_model(
    target_lakehouse_name="PreparedLakehouse",
    agg_tables=agg_tables,
    resolved_mappings=mappings,
    semantic_workspace="<workspace-id-or-name>",
    semantic_model_name="novaretail_dataset",
)

prepare_and_write_data

one_table = ft.prepare_and_write_data(
    source_lakehouse_name="SilverLakehouse",
    source_relative_path="Tables/dbo/orders",
    target_lakehouse_name="PreparedLakehouse",
    target_relative_path="Tables/dbo/orders_prepared",
)

prepare_and_write_all_tables

all_tables = ft.prepare_and_write_all_tables(
    source_lakehouse_name="SilverLakehouse",
    target_lakehouse_name="PreparedLakehouse",
    include_schemas=["dbo"],
    continue_on_error=True,
)

Transform (DataFrame)

Reusable DataFrame → DataFrame helpers (notebooks, Bronze/Silver/Gold). For merge_dataframes, the prefix of the added columns is derived from the name of the join_df variable at the call site (or from join_prefix=...); suffixes are normalized (snake_case, like clean_data). No Spark .alias() is needed on the right-hand DataFrame.

filter_by_value_list

Filters a column against a list of values: no cast is applied; values are trimmed only if the column is a string type; str entries in the list are strip()-ed. With exclude=True (the default), rows whose value is in the list are excluded.

df2 = ft.filter_by_value_list(df, "Compte", ("70830000", "70840000"), exclude=True)

merge_dataframes

Joins main to join_df on one or more key pairs (main_column, right_column); brings in the columns listed in join_columns, renamed to {prefix_snake}_{unique_snake_column} (prefix = the join_df variable name, projets in the example below, or an explicit join_prefix="...", shown in the second sketch).

out = ft.merge_dataframes(
    main=detail,
    join_df=projets,
    join_columns=["Client", "Type projet", "Nom client"],
    keys=[("Code projet", "ID projet")],
    how="left",
)
# Example columns: projets_client, projets_type_projet, projets_nom_client
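
If you prefer an explicit prefix over the variable-name deduction, pass join_prefix (a sketch reusing the same assumed DataFrames as above):

out = ft.merge_dataframes(
    main=detail,
    join_df=projets,
    join_columns=["Client", "Type projet"],
    keys=[("Code projet", "ID projet")],
    how="left",
    join_prefix="proj",
)
# Example columns: proj_client, proj_type_projet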

FAQ

1) Can I use fabrictools without Microsoft Fabric?

Partially, yes. The purely Spark functions can work locally with fabrictools[spark], but the Lakehouse path-resolution functions depend on notebookutils (available in Fabric). A local session sketch follows.
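
A minimal local session sketch using the standard delta-spark bootstrap (not a fabrictools API), assuming fabrictools[spark] is installed:

from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip

# Standard delta-spark setup for a local Spark session
builder = (
    SparkSession.builder.appName("fabrictools-local")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# DataFrame-only helpers (clean_data, filter_by_value_list, merge_dataframes)
# can then run on locally created DataFrames.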

2) Is there a CLI command (fabrictools ...)?

No. Usage is from Python, via import fabrictools as ft.

3) Is Plotly required?

No. It is useful for the scan_data_errors charts. Without Plotly, you still get the tabular output.

4) How do I choose between clean_and_write_data and clean_and_write_all_tables?

  • clean_and_write_data: a single target table
  • clean_and_write_all_tables: several tables in bulk

5) Is delete_all_lakehouse_tables dangerous?

Yes, it is a destructive action. Start with dry_run=True to review the list before anything is deleted.

6) I'm just starting out: what minimal path do you recommend?

read_lakehouse -> clean_data -> add_silver_metadata -> write_lakehouse.


Support

To get help quickly, share:

  • the function used
  • an example of the parameters
  • the full error message

Maintainer resources

PyPI publishing guide: docs/PYPI_PUBLISH.md


License

MIT

Project details



Download files


Source Distribution

fabrictools-0.5.18.tar.gz (46.4 kB)

Uploaded Source

Built Distribution


fabrictools-0.5.18-py3-none-any.whl (55.0 kB)

Uploaded Python 3

File details

Details for the file fabrictools-0.5.18.tar.gz.

File metadata

  • Download URL: fabrictools-0.5.18.tar.gz
  • Upload date:
  • Size: 46.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for fabrictools-0.5.18.tar.gz
Algorithm Hash digest
SHA256 e773ab7d40b40fb6b480af4fe1c54d9bc0666586861eea7b243554fa7ece80bc
MD5 d7aa2c51e3309a4a6d6fbf06cc5bb640
BLAKE2b-256 5c7726edf9d527fc4a09969bf14a0189e66ee829af6d39d434503a2ee53cb897


Provenance

The following attestation bundles were made for fabrictools-0.5.18.tar.gz:

Publisher: publish.yml on willykinfoussia/FabricPackage

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file fabrictools-0.5.18-py3-none-any.whl.

File metadata

  • Download URL: fabrictools-0.5.18-py3-none-any.whl
  • Upload date:
  • Size: 55.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for fabrictools-0.5.18-py3-none-any.whl
Algorithm Hash digest
SHA256 915c786518ac802e31c744051191cacc943c27b55faaa1abcf03b0d223f9c9c8
MD5 d3f52c5bc33df4bbb4a46c2465010a88
BLAKE2b-256 3e66b55a9f1bef3bcfef9e44c98cd51c37540a0070836f489d3fd04cb70d4c2a


Provenance

The following attestation bundles were made for fabrictools-0.5.18-py3-none-any.whl:

Publisher: publish.yml on willykinfoussia/FabricPackage

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
