Singer.io target for loading data to PostgreSQL - PipelineWise compatible

pipelinewise-target-postgres

Singer target that loads data into PostgreSQL following the Singer spec.

This is a PipelineWise compatible target connector.

How to use it

The recommended method of running this target is to use it from PipelineWise. When running it from PipelineWise you don't need to configure this target with JSON files, and most things are automated. Please check the related documentation at Target Postgres.

If you want to run this Singer Target independently please read further.

Install

First, make sure Python 3 is installed on your system or follow these installation instructions for Mac or Ubuntu.

It's recommended to use a virtualenv:

  python3 -m venv venv
  . venv/bin/activate
  pip install pipelinewise-target-postgres

or, to install from source:

  python3 -m venv venv
  . venv/bin/activate
  pip install --upgrade pip
  pip install .

To run

Like any other target that follows the Singer specification:

some-singer-tap | target-postgres --config [config.json]

It reads incoming Singer messages from STDIN and uses the properties in config.json to upload data into PostgreSQL.
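
For a quick local smoke test you can also replay previously recorded Singer messages from a file instead of a live tap. The file name and its contents below are illustrative, not part of the package:

  # Illustrative Singer messages, one JSON object per line in singer_messages.json:
  # {"type": "SCHEMA", "stream": "users", "schema": {"properties": {"id": {"type": "integer"}, "name": {"type": "string"}}}, "key_properties": ["id"]}
  # {"type": "RECORD", "stream": "users", "record": {"id": 1, "name": "Alice"}}
  cat singer_messages.json | target-postgres --config config.json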

Note: To avoid version conflicts run the tap and the target in separate virtual environments.
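
One way to do that is to install each into its own virtualenv and pipe the two executables together. The tap name and the paths below are illustrative:

  # Illustrative paths; tap-mysql stands in for any Singer tap
  ~/.virtualenvs/tap-mysql/bin/tap-mysql --config tap_config.json \
    | ~/.virtualenvs/target-postgres/bin/target-postgres --config target_config.json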

Configuration settings

Running the target connector requires a config.json file. An example with the minimal settings:

{
  "host": "localhost",
  "port": 5432,
  "user": "my_user",
  "password": "secret",
  "dbname": "my_db_name",
  "default_target_schema": "my_target_schema"
}

Full list of options in config.json:

| Property | Type | Required? | Description |
|----------|------|-----------|-------------|
| host | String | Yes | PostgreSQL host |
| port | Integer | Yes | PostgreSQL port |
| user | String | Yes | PostgreSQL user |
| password | String | Yes | PostgreSQL password |
| dbname | String | Yes | PostgreSQL database name |
| batch_size | Integer | No | (Default: 100000) Maximum number of rows in each batch. At the end of each batch, the rows in the batch are loaded into PostgreSQL. |
| default_target_schema | String | No | Name of the schema where the tables will be created. If schema_mapping is not defined then every stream sent by the tap is loaded into this schema. |
| default_target_schema_select_permission | String | No | Grant USAGE privilege on newly created schemas and grant SELECT privilege on newly created tables. |
| schema_mapping | Object | No | Useful if you want to load multiple streams from one tap to multiple PostgreSQL schemas. If the tap sends the stream_id in <schema_name>-<table_name> format then this option overwrites the default_target_schema value. Note that using schema_mapping you can overwrite the default_target_schema_select_permission value to grant SELECT permissions to different groups per schema, or optionally create indices automatically for the replicated tables. This is an experimental feature; it is recommended to use it via PipelineWise YAML files, which generate the object mapping in the right JSON format. For further info check a PipelineWise YAML Example. |
| add_metadata_columns | Boolean | No | (Default: False) Metadata columns add extra row-level information about data ingestion (e.g. when the row was read from the source, when it was inserted or deleted in PostgreSQL, etc.). Metadata columns are created automatically by adding extra columns to the tables, prefixed with _SDC_. The column names follow the Stitch naming conventions documented at https://www.stitchdata.com/docs/data-structure/integration-schemas#sdc-columns. Enabling metadata columns will flag deleted rows by setting the _SDC_DELETED_AT metadata column. Without the add_metadata_columns option, rows deleted by singer taps will not be recognisable in PostgreSQL. |
| hard_delete | Boolean | No | (Default: False) When the hard_delete option is true, DELETE SQL commands are performed in PostgreSQL to remove rows from tables. This is achieved by continuously checking the _SDC_DELETED_AT metadata column sent by the singer tap. Because deleting rows requires metadata columns, the hard_delete option automatically enables add_metadata_columns as well. |
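
A fuller config.json combining several of these options might look like the sketch below. The connection values, role name, and batch size are illustrative; schema_mapping is omitted because, as noted above, it is best generated from PipelineWise YAML files:

{
  "host": "localhost",
  "port": 5432,
  "user": "my_user",
  "password": "secret",
  "dbname": "my_db_name",
  "default_target_schema": "my_target_schema",
  "default_target_schema_select_permission": "my_reporting_role",
  "batch_size": 50000,
  "add_metadata_columns": true,
  "hard_delete": true
}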

To run tests:

  1. Define the environment variables required to run the tests:
  export TARGET_POSTGRES_HOST=<postgres-host>
  export TARGET_POSTGRES_PORT=<postgres-port>
  export TARGET_POSTGRES_USER=<postgres-user>
  export TARGET_POSTGRES_PASSWORD=<postgres-password>
  export TARGET_POSTGRES_DBNAME=<postgres-dbname>
  export TARGET_POSTGRES_SCHEMA=<postgres-schema>
  2. Install python dependencies in a virtual env and run nose unit and integration tests:
  python3 -m venv venv
  . venv/bin/activate
  pip install --upgrade pip
  pip install .
  pip install nose
  3. To run unit tests:
  nosetests --where=tests/unit
  4. To run integration tests:
  nosetests --where=tests/integration

To run pylint:

  1. Install python dependencies and run the python linter:
  python3 -m venv venv
  . venv/bin/activate
  pip install --upgrade pip
  pip install .
  pip install pylint
  pylint target_postgres -d C,W,unexpected-keyword-arg,duplicate-code

License

Apache License Version 2.0

See LICENSE for the full text.
