
Datapunt generic ETL command line scripts and functions for shell scripting in Docker.


Data-processing

Badges: Python 3.6, MPL 2.0 license.

At the City of Amsterdam we deal with many different types of structured and unstructured data. Much of this data is not of high quality and lacks the proper semantics needed for analytics.

This repository combines generic command line functions for creating extract, transform and load steps, which we then use to produce reproducible data for analytics and for use in dashboards and maps.

For more information about how we use these functions in our workflow, read the data-pipeline guide.

How to use

All functions are searchable and described in full here: amsterdam.github.io/data-processing

To use a function in Python, import it directly:

from extract import download_from_data_amsterdam

or

from helpers.connections import objectstore_connection
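
For example, a connection helper can be combined with the config.ini file described below. The snippet is only an illustrative sketch: the section name and arguments are assumptions, so check the function reference at amsterdam.github.io/data-processing for the real signatures.

# Illustrative sketch; argument names are assumptions, see the documentation
# for the actual signature of objectstore_connection.
from helpers.connections import objectstore_connection

# Connect to the objectstore with credentials from config.ini
# (the section name 'objectstore' is a hypothetical example).
connection = objectstore_connection('config.ini', 'objectstore')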

To use the functions directly from the command line, first install the package:

pip install .

You can then call the functions directly in your virtual environment or Docker shell, for example:

download_from_data_amsterdam -h

For the list of command line functions, see the modules below or the console_scripts section in setup.py.

Getting Started

To get the functions up and running, including running them from the command line, follow these 4 steps:

  1. Clone the repository:

git clone https://github.com/Amsterdam/data-processing.git
cd data-processing
  2. Create and activate a virtual environment. On Windows:

# Create and activate a virtual environment, for example with:
python -m venv --copies --prompt data-processing .venv
.venv\Scripts\activate
    On OSX:

virtualenv --python=$(which python3) venv
source venv/bin/activate
  3. Install the data-processing modules in editable mode:

pip install -e .

  4. A database is required for the transform and load functions. Set up your Postgres database credentials in the config.ini file so the functions can use them.

If you want to use Docker, you can start a database server for your project in a new terminal. The name, port and login of the database can be changed in docker-compose.yml; also change them in the config.ini file, which the functions use to connect to that database.

docker-compose up -d database
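
As a quick check that the credentials in config.ini match the running database, you can open a connection yourself. The snippet below is only a sketch: it assumes a [postgres] section with host, port, dbname, user and password keys (your config.ini may use different names) and that psycopg2 is installed.

# Sketch: verify the database credentials in config.ini.
# Section and key names are assumptions; adjust them to your own config.ini.
import configparser
import psycopg2

config = configparser.ConfigParser()
config.read('config.ini')
db = config['postgres']

connection = psycopg2.connect(
    host=db['host'],
    port=db['port'],
    dbname=db['dbname'],
    user=db['user'],
    password=db['password'],
)
print(connection.get_dsn_parameters())  # show the settings actually used
connection.close()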

Notebooks

Some of the examples are in the form of runnable Jupyter notebooks. Copies of these with all the images and output included are hosted at Anaconda Cloud. To run these notebooks on your own system, start up a Jupyter notebook server:

To install Jupyter:

pip install -e .[dev]

Then start the server:

jupyter notebook --NotebookApp.iopub_data_rate_limit=100000000

How to Contribute

If you want to contribute, please follow the contribution guidelines.

Prerequisites

Fork this repository to your own GitHub account.

To add new documentation and test new functions, install the docs, test and dev extras using this command:

pip install -e .[docs,test,dev]

or, when using zsh (zsh interprets the square brackets, so quote them):

pip install -e ".[docs,test,dev]"

Steps to add code

This package is built with setuptools so it can later be deployed on PyPI with version control. It follows some of these guidelines for setting up a Python package.

  1. Convert your function into a python-package command line script, using boilerplate_function.py as the starting point.

Side note: not all functions are suitable for the command line. Machine learning preprocessing steps or general API calls, for instance, often require parameters in the form of dicts or lists as input; these are better kept as stand-alone scripts.
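
A minimal sketch of such a command line wrapper is shown below. It only illustrates the general argparse pattern; the function name, arguments and help texts are hypothetical, and boilerplate_function.py in the repository remains the leading example.

# Hypothetical example of a function wrapped as a command line script;
# follow boilerplate_function.py for the conventions used in this repo.
import argparse


def clean_table(input_file, output_file):
    """Hypothetical transform step: strip whitespace from every line."""
    with open(input_file) as source, open(output_file, 'w') as target:
        for line in source:
            target.write(line.strip() + '\n')


def parser():
    desc = 'Clean a csv file and write the result to a new file.'
    parser = argparse.ArgumentParser(description=desc)
    parser.add_argument('input_file', type=str, help='path to the input csv')
    parser.add_argument('output_file', type=str, help='path for the cleaned csv')
    return parser


def main():
    # Entry point referenced from console_scripts in setup.py.
    args = parser().parse_args()
    clean_table(args.input_file, args.output_file)


if __name__ == '__main__':
    main()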

  2. Add tests to the test folder and run:

python setup.py test

to check that no other functions break. Correct any issues if needed.
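
A sketch of such a test is shown below, assuming pytest is used as the test runner; the imported module and function are the hypothetical ones from the wrapper sketch above and should be replaced by your own.

# Hypothetical test module for the test folder; adjust the import to the
# real location of your function.
from clean_table import clean_table


def test_clean_table_writes_output(tmp_path):
    input_file = tmp_path / 'input.csv'
    output_file = tmp_path / 'output.csv'
    input_file.write_text('id;value\n1; foo \n')

    clean_table(str(input_file), str(output_file))

    # The cleaned file should exist and keep the same number of rows.
    assert output_file.exists()
    assert len(output_file.read_text().splitlines()) == 2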

  3. Add your command line name and entry point location to the console_scripts in setup.py, for example as in the fragment below.
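
The console_scripts entry follows the standard setuptools 'command = package.module:function' format. The fragment below uses the hypothetical clean_table wrapper from above; replace the command name and module path with your own.

# Fragment of the setup() call in setup.py; names are hypothetical.
entry_points={
    'console_scripts': [
        'clean_table = datapunt_processing.transform.clean_table:main',
    ],
},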

  4. Add an awesome_module.rst file with Sphinx Argparse extension fields to generate the description and argument fields, reusing an existing rst file. Helper documentation is generated automatically, so you can skip this step if it is only a helper function.

  5. Add the rst file to modules.rst so it can be found on the main page.

  6. Regenerate the documentation to test the docs output using:

sphinx/make docs
  7. Make a PR to add your awesome function to our processing code so it can be reused by many other developers and data analysts.

