
Snowflake loader for mkpipe.

Project description

mkpipe-loader-snowflake

Snowflake loader plugin for MkPipe. Writes Spark DataFrames into Snowflake tables using the native Snowflake Spark connector (spark-snowflake), which stages data through cloud storage (S3/Azure/GCS) before loading, making it significantly faster than JDBC for large datasets.
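
Installation

Install the plugin from PyPI:

pip install mkpipe-loader-snowflake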

Documentation

For more detailed documentation, please visit the GitHub repository.

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.


Connection Configuration

connections:
  snowflake_target:
    variant: snowflake
    host: myaccount.snowflakecomputing.com
    port: 443
    database: MY_DATABASE
    schema: MY_SCHEMA
    user: myuser
    password: mypassword
    warehouse: MY_WAREHOUSE

With RSA key pair authentication:

connections:
  snowflake_target:
    variant: snowflake
    host: myaccount.snowflakecomputing.com
    port: 443
    database: MY_DATABASE
    schema: MY_SCHEMA
    user: myuser
    warehouse: MY_WAREHOUSE
    private_key_file: /path/to/rsa_key.p8
    private_key_file_pwd: mypassphrase
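
These fields correspond to the Snowflake Spark connector's standard options. A minimal sketch of the mapping, for orientation only (the loader builds this internally):

# Illustrative mapping of the connection fields above to spark-snowflake options.
sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",  # host (HTTPS, port 443)
    "sfDatabase": "MY_DATABASE",
    "sfSchema": "MY_SCHEMA",
    "sfUser": "myuser",
    "sfPassword": "mypassword",  # or pem_private_key for RSA key pair auth
    "sfWarehouse": "MY_WAREHOUSE",
}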

Table Configuration

pipelines:
  - name: pg_to_snowflake
    source: pg_source
    destination: snowflake_target
    tables:
      - name: public.events
        target_name: STG_EVENTS
        replication_method: full
        batchsize: 50000

Write Strategy

Control how data is written to Snowflake:

      - name: public.events
        target_name: STG_EVENTS
        write_strategy: upsert       # append | replace | upsert | merge
        write_key: [id]              # required for upsert/merge
Strategy   Snowflake Behavior
append     Insert via Spark connector (default for incremental)
replace    Overwrite table via Spark connector (default for full)
upsert     Write to a temp table, then MERGE INTO target USING temp ON ... WHEN MATCHED THEN UPDATE ... WHEN NOT MATCHED THEN INSERT ...
merge      Same as upsert for Snowflake

Note: upsert/merge requires write_key. The loader creates a temp table, writes data there, executes a MERGE statement, then drops the temp table.
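
For illustration, the generated statement has roughly the following shape. This is a sketch: build_merge_sql is a hypothetical helper, and the real SQL is produced internally by mkpipe from write_key and the DataFrame schema.

# Hypothetical sketch of the MERGE issued for upsert/merge.
def build_merge_sql(target, temp, keys, columns):
    on = " AND ".join(f"t.{k} = s.{k}" for k in keys)
    updates = ", ".join(f"{c} = s.{c}" for c in columns if c not in keys)
    cols = ", ".join(columns)
    values = ", ".join(f"s.{c}" for c in columns)
    return (
        f"MERGE INTO {target} t USING {temp} s ON {on} "
        f"WHEN MATCHED THEN UPDATE SET {updates} "
        f"WHEN NOT MATCHED THEN INSERT ({cols}) VALUES ({values})"
    )

# e.g. build_merge_sql("STG_EVENTS", "STG_EVENTS_TMP", ["id"], ["id", "payload", "ts"])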


Write Parallelism & Throughput

The Snowflake loader uses the native Spark connector. Two parameters control write performance:

      - name: public.events
        target_name: STG_EVENTS
        replication_method: full
        batchsize: 50000        # rows per batch insert (default: 10000)
        write_partitions: 4     # coalesce DataFrame to N partitions before writing

How they work

  • batchsize: number of rows buffered before sending to Snowflake. Larger batches reduce round-trips and staging overhead.
  • write_partitions: calls coalesce(N) on the DataFrame before writing, controlling the number of concurrent write operations to Snowflake (see the sketch below).
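
A minimal sketch of the write path, assuming df is the DataFrame being loaded and sf_options the connection options illustrated earlier; mkpipe performs this internally, and "net.snowflake.spark.snowflake" is the connector's source name:

# Assumed behavior, not mkpipe's literal code.
(df.coalesce(4)                               # write_partitions: 4
   .write
   .format("net.snowflake.spark.snowflake")
   .options(**sf_options)
   .option("dbtable", "STG_EVENTS")
   .mode("overwrite")                         # replace strategy; "append" for append
   .save())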

Performance Notes

  • Snowflake warehouse size is the primary write performance lever: a larger warehouse processes inserts faster regardless of partition count.
  • The Spark connector stages data internally before committing. A large batchsize (50,000+) reduces staging overhead.
  • For very large loads, consider Snowflake's native COPY INTO via an external stage (S3/GCS) instead; it is significantly faster but requires additional infrastructure.
  • write_partitions: 4–8 is a good default, balancing throughput against connection count.

All Table Parameters

Parameter            Type                 Default     Description
name                 string               (required)  Source table name
target_name          string               (required)  Snowflake destination table name
replication_method   full / incremental   full        Replication strategy
batchsize            int                  10000       Rows per batch insert
write_partitions     int                  -           Coalesce DataFrame to N partitions before writing
write_strategy       string               -           One of append, replace, upsert, merge; defaults to append for incremental and replace for full loads
write_key            list                 -           Key columns for upsert/merge (required for those strategies)
dedup_columns        list                 -           Columns used for mkpipe_id hash deduplication
tags                 list                 []          Tags for selective pipeline execution
pass_on_error        bool                 false       Skip table on error instead of failing
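
Putting these together, a fully specified table entry might look like the following (column names and tag values are illustrative):

pipelines:
  - name: pg_to_snowflake
    source: pg_source
    destination: snowflake_target
    tables:
      - name: public.events
        target_name: STG_EVENTS
        replication_method: incremental
        write_strategy: upsert
        write_key: [id]
        batchsize: 50000
        write_partitions: 4
        dedup_columns: [id]      # illustrative
        tags: [hourly]           # illustrative
        pass_on_error: false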



Download files

Download the file for your platform.

Source Distribution

mkpipe_loader_snowflake-0.6.0.tar.gz (9.3 kB)

Built Distribution

mkpipe_loader_snowflake-0.6.0-py3-none-any.whl (10.0 kB)

File details

Details for the file mkpipe_loader_snowflake-0.6.0.tar.gz.

File metadata

  • Download URL: mkpipe_loader_snowflake-0.6.0.tar.gz
  • Size: 9.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.15

File hashes

Hashes for mkpipe_loader_snowflake-0.6.0.tar.gz
Algorithm     Hash digest
SHA256        a28847199064b6297dcfa1f8d787437696a88fe8f2f816c65b151cc6299d8db7
MD5           5496f4a44aad94470449b2db32f1a393
BLAKE2b-256   5ccf3e2a3b9979b80e201334cae375d366b355ad8759b79d1ab6228a8b3c54d6


File details

Details for the file mkpipe_loader_snowflake-0.6.0-py3-none-any.whl.

File hashes

Hashes for mkpipe_loader_snowflake-0.6.0-py3-none-any.whl
Algorithm     Hash digest
SHA256        bfb2a441caad67cb284437d449ac3b0ea059090503bacc6875b306cda9e489a1
MD5           fef40143c4bc544615414bf638211f1d
BLAKE2b-256   cf354c51158cea92f585e8f104968fe191c9f7b75a73d9a0a709ee945c5c6650

