
bifabrik

Microsoft Fabric ETL toolbox

What is the point?

  • make BI development in Microsoft Fabric easier by providing a fluent API for common ETL tasks
  • reduce repetitive code by setting preferences in config files

See the project page for info on all the features and check the changelog to see what's new.

If you find a problem or have a feature request, please submit it here: https://github.com/rjankovic/bifabrik/issues. Thanks!

Quickstart

First, let's install the library. You can either add the bifabrik library to an environment in Fabric and attach that environment to your notebook, or add %pip install bifabrik at the beginning of the notebook.
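
For the notebook-scoped install, the cell is just the magic command mentioned above:

%pip install bifabrik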

Import the library

import bifabrik as bif

Also, make sure that your notebook is connected to a lakehouse. This is the lakehouse to which bifabrik will save data by default.

(screenshot: default lakehouse attached to the notebook)

You can also configure it to target different lakehouses.
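
As a rough sketch of what that could look like - the property name below is illustrative, see the Configuration section for the actual setting:

# assumption: illustrative configuration property; check the Configuration docs for the real name
bif.config.destinationStorage.destinationLakehouse = 'Lakehouse2'

# subsequent loads would then write to Lakehouse2 instead of the attached lakehouse
bif.fromCsv('Files/CsvFiles/annual-enterprise-survey-2021.csv').toTable('Survey2021').run()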

Load CSV files (JSON is similar)

Simple tasks should be easy.

import bifabrik as bif
bif.fromCsv('Files/CsvFiles/annual-enterprise-survey-2021.csv').toTable('Survey2021').run()

...and the table is in place

display(spark.sql('SELECT * FROM Survey2021'))

Or you can make use of pattern matching

# take all files matching the pattern and concat them
bif.fromCsv('Files/*/annual-enterprise-survey-*.csv').toTable('SurveyAll').run()

These are full loads, overwriting the target table if it exists.

Configure load preferences

Is your CSV a bit... special? No problem, we'll tend to it.

Let's say you have a European CSV with commas instead of decimal points and semicolons instead of commas as separators.

bif.fromCsv("Files/CsvFiles/dimBranch.csv").delimiter(';').decimal(',').toTable('DimBranch').run()

The backend uses pandas, so you can take advantage of many other options - see help(bif.fromCsv())
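
To see the full list of options, the help is right there:

# show all CSV source options (largely mirroring pandas.read_csv parameters)
help(bif.fromCsv())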

Keep the configuration

What, you have more files like that? Well then, you probably don't want to repeat the setup each time. Good news is, the bifabrik object can keep all your preferences:

import bifabrik as bif

# set the configuration
bif.config.csv.delimiter = ';'
bif.config.csv.decimal = ','

# the configuration will be applied to all these loads
bif.fromCsv("Files/CsvFiles/dimBranch.csv").toTable('DimBranch').run()
bif.fromCsv("Files/CsvFiles/dimDepartment.csv").toTable('DimDepartment').run()
bif.fromCsv("Files/CsvFiles/dimDivision.csv").toTable('DimDivision').run()

# (You can still apply configuration in the individual loads, as seen above, to override the general configuration.)

If you want to persist your configuration beyond the PySpark session, you can save it to a JSON file - see Configuration
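
For illustration, a minimal sketch - the method names here are an assumption, check the Configuration docs for the actual API:

# assumption: illustrative method names for persisting / restoring preferences
bif.config.saveToFile('Files/bifabrik_config.json')

# ...and in a later session:
bif.config.loadFromFile('Files/bifabrik_config.json')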

Consistent configuration is one of the core values of the project.

We like our lakehouses to be uniform in terms of loading patterns, table structures, tracking, etc. At the same time, we want to keep it DRY.

bifabrik configuration aims to cover many aspects of the lakehouse so that you can define your conventions once, use them repeatedly, and override when necessary.

See the GitHub page for more details on this.

Spark SQL transformations

Enough with the files! Let's make a simple Spark SQL transformation, writing data to another SQL table - a straightforward full load:

bif.fromSql('''

SELECT Industry_name_NZSIOC AS Industry_Name 
,AVG(`Value`) AS AvgValue
FROM LakeHouse1.Survey2021
WHERE Variable_Code = 'H35'
GROUP BY Industry_name_NZSIOC

''').toTable('SurveySummarized').run()

# The resulting table will be saved to the lakehouse attached to your notebook.
# You can refer to a different source warehouse in the query, though.

More options

bifabrik can help with incremental loads, identity columns (auto-increment), dataframe transformations, and more.

For example:

import bifabrik as bif
from pyspark.sql.functions import col, upper

(
bif
  .fromCsv('CsvFiles/fact_append_*.csv')
  .transformSparkDf(lambda df: df.withColumn('CodeUppercase', upper(col('Code'))))
  .toTable('SnapshotTable1')
  .increment('snapshot')
  .snapshotKeyColumns(['Date', 'Code'])
  .identityColumnPattern('{tablename}ID')
  .run()
)

For more details, see the project page

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

bifabrik-0.7.0.tar.gz (40.4 kB)

Built Distribution

bifabrik-0.7.0-py3-none-any.whl (59.3 kB)

File details

Details for the file bifabrik-0.7.0.tar.gz.

File metadata

  • Download URL: bifabrik-0.7.0.tar.gz
  • Upload date:
  • Size: 40.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.9.19

File hashes

Hashes for bifabrik-0.7.0.tar.gz

  • SHA256: f56f7146f01fdd917116681129cd2b4557763d31d7a3744aa9e337ba83932690
  • MD5: 4e5e948d3ca7bb946529b66efaf9300e
  • BLAKE2b-256: 1926edfaf1eb2be1afd1cf3e6361ff8daebac81b71f69bb3079a48ba4e09987e

See more details on using hashes here.

File details

Details for the file bifabrik-0.7.0-py3-none-any.whl.

File metadata

  • Download URL: bifabrik-0.7.0-py3-none-any.whl
  • Upload date:
  • Size: 59.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.9.19

File hashes

Hashes for bifabrik-0.7.0-py3-none-any.whl

  • SHA256: 37ec2e683d18dabf7d6494e74d6b66e79ea39dd989cc7ebfdeed20575e3cc33b
  • MD5: 0e2b524da693cb0071b7e9f31756a091
  • BLAKE2b-256: 50de452391c08e0b688de11b86825166c9427b8746639c824267841898f40f6c

See more details on using hashes here.
