Project description

The BMS Lake API

A FastAPI plugin that allows you to expose your Data Lake as an API, supporting multiple output formats such as Parquet, CSV, JSON, Excel, ...

The lake API also contains a minimal security layer for convenience (Basic Auth), but you can also bring your own.

In contrast to roapi, we intentionally do not expose most SQL by default; instead, we limit the possible queries using a config. This makes it easy for you to control what happens to your data. If you want the SQL endpoint, you can enable it.

To run the app with the default config, just do:

from fastapi import FastAPI
import bmsdna.lakeapi
app = FastAPI()
bmsdna.lakeapi.init_lakeapi(app)

To adjust the config, you can do something like this:

import dataclasses
from fastapi import FastAPI
import bmsdna.lakeapi

app = FastAPI()
def_cfg = bmsdna.lakeapi.get_default_config()  # get the default startup config
cfg = dataclasses.replace(def_cfg, enable_sql_endpoint=True, data_path="tests/data")  # use dataclasses.replace to set the properties you want
sti = bmsdna.lakeapi.init_lakeapi(app, cfg, "config_test.yml")  # the first parameter is the FastAPI instance, the second the basic config, and the third the table config

Installation

The PyPI package bmsdna-lakeapi can be installed like any Python package: pip install bmsdna-lakeapi

OpenAPI

Of course, everything works with OpenAPI and FastAPI, meaning you can add other FastAPI routes and use the /docs and /redoc endpoints.
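Because the Lake API just plugs into a regular FastAPI app, you can register your own routes right next to the generated ones. A minimal sketch, assuming only the package itself is installed (the /health route is made up for illustration):

from fastapi import FastAPI
import bmsdna.lakeapi

app = FastAPI()
bmsdna.lakeapi.init_lakeapi(app)  # mounts the endpoints generated from config.yml

@app.get("/health")  # a plain FastAPI route living next to the lake endpoints
def health() -> dict:
    return {"status": "ok"}

# Both your own routes and the generated ones show up in /docs and /redoc.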

Default Security

By default, Basic Authentication is enabled. To add a user, simply run add_lakeapi_user YOURUSERNAME --yaml-file config.yml. This will add the user to your config YAML (with an argon2-hashed password) and print the generated password. If you do not want this logic, you can overwrite the username_retriver of the default config.
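If you bring your own authentication, the hook is the username_retriver field of the default config. The sketch below is only a guess at how that might look, assuming username_retriver is a callable that resolves the calling user from the request; the exact signature is an assumption, so check get_default_config for the real contract:

import dataclasses
from fastapi import FastAPI, Request
import bmsdna.lakeapi

async def my_username_retriever(request: Request) -> str:
    # Hypothetical: trust a header set by a reverse proxy instead of Basic Auth.
    return request.headers.get("x-forwarded-user", "anonymous")

app = FastAPI()
def_cfg = bmsdna.lakeapi.get_default_config()
cfg = dataclasses.replace(def_cfg, username_retriver=my_username_retriever)  # field name as spelled above
bmsdna.lakeapi.init_lakeapi(app, cfg, "config.yml")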

Standalone Mode

If you just want to run this thing, you can run it with a webserver:

Uvicorn: uvicorn bmsdna.lakeapi.standalone:app --host 0.0.0.0 --port 8080

Gunicorn: gunicorn bmsdna.lakeapi.standalone:app --workers 4 --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:80

Of course, you need to adjust the HTTP options as needed. Also, you need to pip install uvicorn or gunicorn first.

You can still use environment variables for configuration.

Environment Variables

  • CONFIG_PATH: The path of the config file, defaults to config.yml. If you want to split the config, you can specify a folder, too
  • DATA_PATH: The path of the data files, defaults to data. Paths in config.yml are relative to DATA_PATH
  • ENABLE_SQL_ENDPOINT: Set this to 1 to enable the SQL Endpoint
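As an alternative to setting these on the command line, here is a minimal sketch that configures the standalone app from Python and serves it with uvicorn. It assumes uvicorn is installed; whether the variables are read at startup is an assumption, so they are set before the server starts:

import os

# Set the configuration before the standalone app starts up.
os.environ["CONFIG_PATH"] = "config.yml"
os.environ["DATA_PATH"] = "data"
os.environ["ENABLE_SQL_ENDPOINT"] = "0"

import uvicorn

if __name__ == "__main__":
    # Equivalent to: uvicorn bmsdna.lakeapi.standalone:app --host 0.0.0.0 --port 8080
    uvicorn.run("bmsdna.lakeapi.standalone:app", host="0.0.0.0", port=8080)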

Config File

By default, the application relies on a config file called config.yml being present at the root of your project.

The config file looks something like this (see also our test YAML):

tables:
  - name: fruits
    tag: test
    version: 1
    api_method:
      - get
      - post
    params:
      - name: cars
        operators:
          - "="
          - in
      - name: fruits
        operators:
          - "="
          - in
    dataframe:
      uri: delta/fruits
      file_type: delta

  - name: fruits_partition
    tag: test
    version: 1
    api_method:
      - get
      - post
    params:
      - name: cars
        operators:
          - "="
          - in
      - name: fruits
        operators:
          - "="
          - in
      - name: pk
        combi:
          - fruits
          - cars
      - name: combi
        combi:
          - fruits
          - cars
    dataframe:
      uri: delta/fruits_partition
      file_type: delta
      select:
        - name: A
        - name: fruits
        - name: B
        - name: cars

  - name: fake_delta
    tag: test
    version: 1
    allow_get_all_pages: true
    api_method:
      - get
      - post
    params:
      - name: name
        operators:
          - "="
      - name: name1
        operators:
          - "="
    dataframe:
      uri: delta/fake
      file_type: delta

  - name: fake_delta_partition
    tag: test
    version: 1
    allow_get_all_pages: true
    api_method:
      - get
      - post
    params:
      - name: name
        operators:
          - "="
      - name: name1
        operators:
          - "="
    dataframe:
      uri: delta/fake
      file_type: delta

  - name: fruits_csv
    tag: test
    version: 1
    allow_get_all_pages: true
    api_method:
      - get
      - post
    params:
      - name: fruits
        operators:
          - "="
      - name: cars
        operators:
          - "="
    dataframe:
      uri: csv/fruits.csv
      file_type: csv
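Each table entry above is exposed as its own endpoint, with the configured params becoming query parameters. A hedged sketch of querying the fruits table with httpx; the URL pattern /api/v1/{tag}/{name}, the format query parameter, and the credentials are assumptions for illustration only, so check the generated /docs for the real paths:

import httpx

# Hypothetical URL derived from the "fruits" entry (version 1, tag "test", name "fruits").
resp = httpx.get(
    "http://localhost:8080/api/v1/test/fruits",
    params={"cars": "audi", "format": "json"},  # "cars" is a configured param; "format" is assumed
    auth=("YOURUSERNAME", "YOURPASSWORD"),      # Basic Auth from the default security layer
)
resp.raise_for_status()
print(resp.json())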

Partitioning for awesome performance

In order to use partitions, you can either:

  • partition by a column you filter on (the obvious option)
  • partition on a special column called columnname_md5_prefix_2, which means you partition by the first two characters of the hex-encoded md5 hash of that column. If you then filter by columnname, this greatly reduces the number of files that have to be searched. The number of characters used is up to you; we found two to be meaningful
  • partition on a special column called columnname_md5_mod_NRPARTITIONS, where your partition value is str(int(hashlib.md5(COLUMNNAME).hexdigest(), 16) % NRPARTITIONS). That might look a bit complicated, but it's not that hard :) you're just doing a modulo on your md5 hash, which lets you set the exact number of partitions. Filtering still happens correctly on columnname (see the sketch below)

You must use deltalake to use partitions, and for now only str partition columns are supported.
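To make the two hash-based schemes above concrete, here is a small sketch that computes the partition values when writing data. The column value and partition count are made up; only the hashing logic mirrors the description above:

import hashlib

def md5_prefix(value: str, n_chars: int = 2) -> str:
    # columnname_md5_prefix_2: the first n hex characters of the md5 hash
    return hashlib.md5(value.encode("utf-8")).hexdigest()[:n_chars]

def md5_mod(value: str, n_partitions: int) -> str:
    # columnname_md5_mod_NRPARTITIONS: md5 hash as an integer, modulo the partition count
    return str(int(hashlib.md5(value.encode("utf-8")).hexdigest(), 16) % n_partitions)

# Hypothetical column value partitioned both ways:
print(md5_prefix("banana"))    # value for a partition column like fruits_md5_prefix_2
print(md5_mod("banana", 32))   # value for a partition column like fruits_md5_mod_32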


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

bmsdna_lakeapi-0.3.1.tar.gz (29.8 kB, Source)

Built Distribution

bmsdna_lakeapi-0.3.1-py3-none-any.whl (37.4 kB, Python 3)

File details

Details for the file bmsdna_lakeapi-0.3.1.tar.gz.

File metadata

  • Download URL: bmsdna_lakeapi-0.3.1.tar.gz
  • Upload date:
  • Size: 29.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.1 CPython/3.11.3

File hashes

Hashes for bmsdna_lakeapi-0.3.1.tar.gz

  • SHA256: 7d1f530a631ac0e76ec765ac639d8e2f8945bb10eee2c57b3e53c3ce551890e8
  • MD5: c5f7d98c85651feb39f49e9beb6fe397
  • BLAKE2b-256: 818f32dae262975a632ff80b376faf0b9de00fbad4d880e7d0abd7bda9100d98


File details

Details for the file bmsdna_lakeapi-0.3.1-py3-none-any.whl.

File metadata

File hashes

Hashes for bmsdna_lakeapi-0.3.1-py3-none-any.whl

  • SHA256: 3536ac6e599a7035f5b57216239c4331e580bf8d4a2cd6d4d1e70193c9d55caa
  • MD5: a71cdeb28b6396efa4660de071b6b98c
  • BLAKE2b-256: 83789ada69e3df044e162e8c06c05c854763a4f2e6b085776f427ee5c5209242

