
Viadot


Documentation: https://dyvenia.github.io/viadot/

Source Code: https://github.com/dyvenia/viadot


A simple data ingestion library to guide data flows from some places to other places.

Getting Data from a Source

Viadot supports several API and RDBMS sources, private and public. Currently, we support the UK Carbon Intensity public API and base the examples on it.

from viadot.sources.uk_carbon_intensity import UKCarbonIntensity

# Query the /intensity endpoint and load the response into a pandas DataFrame.
ukci = UKCarbonIntensity()
ukci.query("/intensity")
df = ukci.to_df()
df

Output:

                from                 to  forecast  actual     index
0  2021-08-10T11:00Z  2021-08-10T11:30Z       211     216  moderate

The above df is a pandas DataFrame object containing the data that viadot downloaded from the UK Carbon Intensity API.

Loading Data to a Source

Depending on the source, viadot provides different methods of uploading data. For instance, for SQL sources, this would be bulk inserts. For data lake sources, it would be a file upload. We also provide ready-made pipelines including data validation steps using Great Expectations.

An example of loading data into SQLite from a pandas DataFrame using the SQLiteInsert Prefect task:

from viadot.tasks import SQLiteInsert

insert_task = SQLiteInsert()
# table_name, dtypes (a column-name-to-SQL-type mapping), db_path, and df are user-provided.
insert_task.run(table_name=TABLE_NAME, dtypes=dtypes, db_path=database_path, df=df, if_exists="replace")
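
The paragraph above also mentions data validation with Great Expectations. As an independent illustration (not viadot's own pipeline API), here is a minimal sketch of validating the earlier df, assuming the classic from_pandas interface of Great Expectations:

import great_expectations as ge

# Wrap the pandas DataFrame in a Great Expectations dataset (classic API,
# available in Great Expectations versions prior to 0.16 -- an assumption here).
gdf = ge.from_pandas(df)

# Example check: the forecast column should contain no nulls.
result = gdf.expect_column_values_to_not_be_null("forecast")
assert result.success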

Set up

Note: If you're running on Unix, after cloning the repo, you may need to grant executable privileges to the update.sh and run.sh scripts:

sudo chmod +x viadot/docker/update.sh && \
sudo chmod +x viadot/docker/run.sh

a) user

Clone the main branch, enter the docker folder, and set up the environment:

git clone https://github.com/dyvenia/viadot.git && \
cd viadot/docker && \
./update.sh

Run the environment:

./run.sh

b) developer

Clone the dev branch, enter the docker folder, and set up the environment:

git clone -b dev https://github.com/dyvenia/viadot.git && \
cd viadot/docker && \
./update.sh -t dev

Run the environment:

./run.sh -t dev

Install the library in development mode (repeat for the viadot_jupyter_lab container if needed):

docker exec -it viadot_testing pip install -e . --user

Running tests

To run tests, log into the container and run pytest:

docker exec -it viadot_testing bash
pytest

Running flows locally

You can run the example flows from the terminal:

docker exec -it viadot_testing bash
FLOW_NAME=hello_world; python -m viadot.examples.$FLOW_NAME

However, when developing, the easiest way is to use the provided Jupyter Lab container available in the browser at http://localhost:9000/.

Executing Spark jobs

Setting up

To begin using Spark, first set the following environment variables; they are read with os.getenv:

import os

DATABRICKS_HOST = os.getenv("DATABRICKS_HOST")
DATABRICKS_API_TOKEN = os.getenv("DATABRICKS_API_TOKEN")
DATABRICKS_ORG_ID = os.getenv("DATABRICKS_ORG_ID")
DATABRICKS_PORT = os.getenv("DATABRICKS_PORT")
DATABRICKS_CLUSTER_ID = os.getenv("DATABRICKS_CLUSTER_ID")
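
If you prefer to set these from Python for a single session instead of exporting them in your shell, a minimal sketch with placeholder values (substitute your own workspace details):

import os

# Placeholder values only -- replace with your own Databricks details.
os.environ["DATABRICKS_HOST"] = "https://<your-workspace>.azuredatabricks.net"
os.environ["DATABRICKS_API_TOKEN"] = "<your-api-token>"
os.environ["DATABRICKS_ORG_ID"] = "<your-org-id>"
os.environ["DATABRICKS_PORT"] = "15001"  # 15001 is the default Databricks Connect port
os.environ["DATABRICKS_CLUSTER_ID"] = "<your-cluster-id>"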

Alternatively, you can create a file called .databricks-connect in the root directory of viadot and add the required variables there, in the following format:

{
  "host": "",
  "token": "",
  "cluster_id": "",
  "org_id": "",
  "port": ""
}

To retrieve the values, follow step 2 in this link.
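
As a quick illustration of reading that file (a sketch only, not viadot's internal loader; the file name and keys come from the format above):

import json
from pathlib import Path

# Load the Databricks Connect settings from the repo root.
config = json.loads(Path(".databricks-connect").read_text())
print(config["host"], config["cluster_id"])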

Executing Spark functions

To begin using Spark, you must first create a Spark session: spark = SparkSession.builder.appName('session_name').getOrCreate(). The spark object is then used to access all Spark methods. Below is a list of commonly used Spark methods (WIP), followed by a short example:

  • spark.createDataFrame(df): create a Spark DataFrame from a pandas DataFrame
  • sparkdf.write.saveAsTable("schema.table"): take a Spark DataFrame and save it as a table in Databricks. Make sure to use the correct schema, as it should be created and specified by the administrator.
  • table = spark.sql("select * from schema.table"): an example of a simple query run through Python
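
Putting these together, a minimal sketch (the schema and table names below are placeholders, and a configured Databricks Connect session is assumed):

import pandas as pd
from pyspark.sql import SparkSession

# Create (or reuse) a Spark session.
spark = SparkSession.builder.appName("example_session").getOrCreate()

# Convert a pandas DataFrame to a Spark DataFrame.
pdf = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})
sdf = spark.createDataFrame(pdf)

# Save it as a Databricks table (placeholder schema/table names).
sdf.write.saveAsTable("my_schema.my_table")

# Run a simple SQL query against the new table.
spark.sql("select * from my_schema.my_table").show()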

How to contribute

  1. Fork the repository if you do not have write access
  2. Set up locally
  3. Test your changes with pytest
  4. Submit a PR. The PR should contain the following:
    • new/changed functionality
    • tests for the changes
    • changes added to CHANGELOG.md
    • any other relevant resources updated (esp. viadot/docs)

The general flow of working with this repository when forking:

  1. Pull before making any changes
  2. Create a new branch with
git checkout -b <name>
  3. Do your work on the repository
  4. Stage the changes with
git add <files>
  5. Commit the changes with
git commit -m <message>

Note: See our Style Guidelines for more information about commit messages and PR names.

  6. Fetch and pull any changes that may have been made while you were working, with
git fetch <remote> <branch>
git checkout <remote>/<branch>
  7. Push your changes to the repository with
git push origin <name>
  8. Merge your branch to finish:
git checkout <where_merging_to>
git merge <branch_to_merge>

Please follow the standards and best practices used within the library (e.g., when adding tasks, see how other tasks are constructed). For any questions, please reach out to us here on GitHub.

Style guidelines

  • the code should be formatted with Black using default settings (the easiest way is to use the VSCode extension)
  • commit messages should:
    • begin with an emoji
    • start with one of the following verbs, capitalized, immediately after the emoji: "Added", "Updated", "Removed", "Fixed", "Renamed", and, sporadically, others such as "Upgraded" or "Downgraded", or whatever fits your particular situation
    • contain a useful description of what the commit does

Set up Black for development in VSCode

Contributed code should be formatted with Black. To set it up in Visual Studio Code, follow the instructions below.

  1. Install Black in your environment by running in the terminal:
pip install black
  2. Open the settings (the gear icon in the bottom left corner, then Settings, or press Ctrl+,).
  3. Find the Format On Save setting and check the box.
  4. Find the Python Formatting Provider setting and select "black" from the drop-down list.
  5. Your code should now auto-format on save.
