
DLT is an open-source, Python-native, scalable data loading framework that requires no DevOps effort to run.

Project description

Quickstart Guide: Data Load Tool (DLT)

TL;DR: This guide shows you how to load a JSON document into Google BigQuery using DLT.

Please open a pull request against the example repository if there is something you can improve about this quickstart.

Grab the demo

Clone the example repository:

git clone https://github.com/scale-vector/dlt-quickstart-example.git

Enter the directory:

cd dlt-quickstart-example

Open the files in your favorite IDE / text editor:

  • data.json (the JSON document you will load)
  • credentials.json (the credentials for our demo Google BigQuery warehouse)
  • quickstart.py (the script that uses DLT)

Set up a virtual environment

Ensure you are using either Python 3.8 or 3.9:

python3 --version

Create a new virtual environment:

python3 -m venv ./env

Activate the virtual environment:

source ./env/bin/activate

Install DLT and support for the target data warehouse

Install DLT using pip:

pip3 install -U python-dlt

Install support for Google BigQuery:

pip3 install -U "python-dlt[gcp]"

Understanding the code

  1. Configure DLT

  2. Create a DLT pipeline

  3. Load the data from the JSON document

  4. Pass the data to the DLT pipeline

  5. Use DLT to load the data (see the sketch after this list)
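
Putting these five steps together, here is a minimal sketch of what the quickstart script does. It uses the generic dlt pipeline API with an illustrative dataset name; the actual quickstart.py in the example repository additionally wires up the demo BigQuery credentials from credentials.json and may differ in detail:

import json

import dlt

# 1. + 2. Configure DLT and create a pipeline that targets BigQuery
#    (the dataset name is illustrative; the demo uses a prefixed "example" schema).
pipeline = dlt.pipeline(
    pipeline_name="quickstart",
    destination="bigquery",
    dataset_name="example",
)

# 3. Load the data from the JSON document.
with open("data.json", "r", encoding="utf-8") as f:
    data = json.load(f)

# 4. + 5. Pass the data to the pipeline and let DLT normalize and load it.
#    Nested lists (e.g. "children") become child tables such as json_doc__children.
load_info = pipeline.run(data, table_name="json_doc")
print(load_info)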

Running the code

Run the quickstart script:

python3 quickstart.py

Inspect the schema.yml printed by the script, or open the generated file:

vim schema.yml

See the results of querying the Google BigQuery tables:

json_doc table

SELECT * FROM `{schema_prefix}_example.json_doc`
{  "name": "Ana",  "age": "30",  "id": "456",  "_dlt_load_id": "1654787700.406905",  "_dlt_id": "5b018c1ba3364279a0ca1a231fbd8d90"}
{  "name": "Bob",  "age": "30",  "id": "455",  "_dlt_load_id": "1654787700.406905",  "_dlt_id": "afc8506472a14a529bf3e6ebba3e0a9e"}

json_doc__children table

SELECT * FROM `{schema_prefix}_example.json_doc__children` LIMIT 1000
    # {"name": "Bill", "id": "625", "_dlt_parent_id": "5b018c1ba3364279a0ca1a231fbd8d90", "_dlt_list_idx": "0", "_dlt_root_id": "5b018c1ba3364279a0ca1a231fbd8d90",
    #   "_dlt_id": "7993452627a98814cc7091f2c51faf5c"}
    # {"name": "Bill", "id": "625", "_dlt_parent_id": "afc8506472a14a529bf3e6ebba3e0a9e", "_dlt_list_idx": "0", "_dlt_root_id": "afc8506472a14a529bf3e6ebba3e0a9e",
    #   "_dlt_id": "9a2fd144227e70e3aa09467e2358f934"}
    # {"name": "Dave", "id": "621", "_dlt_parent_id": "afc8506472a14a529bf3e6ebba3e0a9e", "_dlt_list_idx": "1", "_dlt_root_id": "afc8506472a14a529bf3e6ebba3e0a9e",
    #   "_dlt_id": "28002ed6792470ea8caf2d6b6393b4f9"}
    # {"name": "Elli", "id": "591", "_dlt_parent_id": "5b018c1ba3364279a0ca1a231fbd8d90", "_dlt_list_idx": "1", "_dlt_root_id": "5b018c1ba3364279a0ca1a231fbd8d90",
    #   "_dlt_id": "d18172353fba1a492c739a7789a786cf"}

Joining the two tables above on the autogenerated keys (i.e. p._dlt_id = c._dlt_parent_id):

SELECT p.name, p.age, p.id as parent_id,
       c.name as child_name, c.id as child_id, c._dlt_list_idx as child_order_in_list
FROM `{schema_prefix}_example.json_doc` as p
LEFT JOIN `{schema_prefix}_example.json_doc__children` as c
    ON p._dlt_id = c._dlt_parent_id
{  "name": "Ana",  "age": "30",  "parent_id": "456",  "child_name": "Bill",  "child_id": "625",  "child_order_in_list": "0"}
{  "name": "Ana",  "age": "30",  "parent_id": "456",  "child_name": "Elli",  "child_id": "591",  "child_order_in_list": "1"}
{  "name": "Bob",  "age": "30",  "parent_id": "455",  "child_name": "Bill",  "child_id": "625",  "child_order_in_list": "0"}
{  "name": "Bob",  "age": "30",  "parent_id": "455",  "child_name": "Dave",  "child_id": "621",  "child_order_in_list": "1"}

Next steps

  1. Replace data.json with data you want to explore

  2. Check that the inferred types are correct in schema.yml (see the sketch after this list)

  3. Set up your own Google BigQuery warehouse (and replace the credentials)

  4. Use this new clean staging layer as the starting point for a semantic layer / analytical model (e.g. using dbt)
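
For step 2, one way to check the inferred types without opening the file is to print the schema from the pipeline object after a run. This continues the quickstart sketch above and assumes the Schema.to_pretty_yaml method from the current dlt API, which may differ in older releases:

# Continuing from the sketch above: after pipeline.run(...) has finished,
# print the inferred schema as YAML and review the detected column types.
print(pipeline.default_schema.to_pretty_yaml())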


Download files

Download the file for your platform.

Source Distribution

python_dlt-0.2.0a7.tar.gz (481.6 kB)


Built Distribution

python_dlt-0.2.0a7-py3-none-any.whl (566.5 kB)


File details

Details for the file python_dlt-0.2.0a7.tar.gz.

File metadata

  • Download URL: python_dlt-0.2.0a7.tar.gz
  • Upload date:
  • Size: 481.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.2.2 CPython/3.8.11 Linux/4.19.128-microsoft-standard

File hashes

Hashes for python_dlt-0.2.0a7.tar.gz:

  • SHA256: f2105d6fee58ea3e1f017403499d3479c2ac21e05900e539128bf8d01c0887f5
  • MD5: 7b3a9e4f0b6b7c7c913ea5ba37462b5a
  • BLAKE2b-256: b722362da27dc8fae706d779ab173e8d9bd0a8b91133fb2cdac27efd415d483d


File details

Details for the file python_dlt-0.2.0a7-py3-none-any.whl.

File metadata

  • Download URL: python_dlt-0.2.0a7-py3-none-any.whl
  • Upload date:
  • Size: 566.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.2.2 CPython/3.8.11 Linux/4.19.128-microsoft-standard

File hashes

Hashes for python_dlt-0.2.0a7-py3-none-any.whl:

  • SHA256: 9f575c04dad65af96a3f97b2f898313e5028782ca04a21c4c6507a8b94805918
  • MD5: 7ac5354b63e6739936057bda6b4123d8
  • BLAKE2b-256: e4995d93702b2cc78d42f25dd84901c01088940bbb79f5a72d3b946308fb31e9

