
ClickHouse loader for MkPipe.

Project description

mkpipe-loader-clickhouse

ClickHouse loader plugin for MkPipe. Writes Spark DataFrames into ClickHouse tables using the native clickhouse-spark connector, which uses ClickHouse's binary HTTP protocol for efficient columnar inserts.

Documentation

For more detailed documentation, please visit the GitHub repository.

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.


Connection Configuration

connections:
  clickhouse_target:
    variant: clickhouse
    host: localhost
    port: 8123
    database: target_db
    user: default
    password: mypassword
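Under the hood, these fields resolve to a ClickHouse HTTP endpoint (port 8123 is ClickHouse's HTTP interface). A minimal sketch of how they might be assembled into the URL the connector targets; the helper name `clickhouse_endpoint` is hypothetical and not part of MkPipe:

```python
def clickhouse_endpoint(conn: dict) -> str:
    # Hypothetical helper: builds the HTTP endpoint that the Spark
    # connector would target from a mkpipe connection block.
    return f"http://{conn['host']}:{conn['port']}/{conn['database']}"

conn = {
    "variant": "clickhouse",
    "host": "localhost",
    "port": 8123,
    "database": "target_db",
}
print(clickhouse_endpoint(conn))  # http://localhost:8123/target_db
```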

Table Configuration

pipelines:
  - name: pg_to_clickhouse
    source: pg_source
    destination: clickhouse_target
    tables:
      - name: public.events
        target_name: stg_events
        replication_method: full
        batchsize: 50000

Write Strategy

Control how data is written to ClickHouse:

      - name: public.events
        target_name: stg_events
        write_strategy: upsert       # append | replace | upsert
        write_key: [id]              # required for upsert
Strategy | ClickHouse Behavior
append   | Insert via the ClickHouse Spark connector (default for incremental)
replace  | Drop and recreate the table, then insert (default for full)
upsert   | Create the table with the ReplacingMergeTree engine, using write_key as ORDER BY; ClickHouse deduplicates rows with the same key during background merges

Note: ClickHouse does not support SQL MERGE statements. Upsert semantics are achieved via ReplacingMergeTree, which deduplicates asynchronously during compaction. Use FINAL in queries to get deduplicated results.
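The pattern described above can be sketched as follows. The exact DDL that MkPipe generates is an assumption; the function and column types here are illustrative only:

```python
def replacing_merge_tree_ddl(table: str, columns: dict, write_key: list) -> str:
    # Illustrates the upsert pattern: write_key becomes the ORDER BY of a
    # ReplacingMergeTree table, so background merges keep one row per key.
    cols = ", ".join(f"{name} {ctype}" for name, ctype in columns.items())
    keys = ", ".join(write_key)
    return (
        f"CREATE TABLE {table} ({cols}) "
        f"ENGINE = ReplacingMergeTree ORDER BY ({keys})"
    )

ddl = replacing_merge_tree_ddl(
    "stg_events", {"id": "UInt64", "payload": "String"}, ["id"]
)
print(ddl)
# Until background merges run, force deduplicated reads with FINAL:
#   SELECT * FROM stg_events FINAL
```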


Write Parallelism & Throughput

The ClickHouse loader inherits from JdbcLoader. Two parameters control write performance:

      - name: public.events
        target_name: stg_events
        replication_method: full
        batchsize: 50000        # rows per JDBC batch insert (default: 10000)
        write_partitions: 4     # coalesce DataFrame to N partitions before writing

How they work

  • batchsize: number of rows buffered before sending a single INSERT to ClickHouse. ClickHouse benefits greatly from large batches — use 50,000–500,000 for best throughput.
  • write_partitions: calls coalesce(N) on the DataFrame before writing, reducing the number of concurrent JDBC connections. Useful when you have many Spark partitions and want to limit load on ClickHouse.
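As back-of-the-envelope arithmetic, the two settings combine like this (assuming rows are evenly distributed across partitions; the helper is illustrative, not MkPipe code):

```python
import math

def insert_round_trips(total_rows: int, write_partitions: int, batchsize: int) -> int:
    # Each partition writes its share of rows in chunks of `batchsize`,
    # so ClickHouse receives roughly this many INSERT round-trips.
    rows_per_partition = math.ceil(total_rows / write_partitions)
    return write_partitions * math.ceil(rows_per_partition / batchsize)

# 1M rows, 4 partitions, 50k-row batches -> 20 large inserts
print(insert_round_trips(1_000_000, 4, 50_000))  # 20
# The same rows with the 10k default -> 100 smaller inserts (more parts)
print(insert_round_trips(1_000_000, 4, 10_000))  # 100
```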

Performance Notes

  • ClickHouse is optimized for large bulk inserts. batchsize is the most impactful parameter — increase it as much as your driver memory allows.
  • Avoid setting write_partitions too low (e.g. 1), as that limits write parallelism. A value of 4–8 balances load on ClickHouse against throughput.
  • ClickHouse's MergeTree engine merges parts in the background; very frequent small inserts create many parts and degrade query performance. Prefer fewer large batches.

All Table Parameters

Parameter          | Type               | Default  | Description
name               | string             | required | Source table name
target_name        | string             | required | ClickHouse destination table name
replication_method | full / incremental | full     | Replication strategy
batchsize          | int                | 10000    | Rows per JDBC batch insert
write_partitions   | int                | -        | Coalesce the DataFrame to N partitions before writing
write_strategy     | string             | append / replace | Write strategy: append, replace, or upsert (defaults to append for incremental, replace for full)
write_key          | list               | -        | Key columns for upsert (used as the ReplacingMergeTree ORDER BY)
dedup_columns      | list               | -        | Columns used for mkpipe_id hash deduplication
tags               | list               | []       | Tags for selective pipeline execution
pass_on_error      | bool               | false    | Skip the table on error instead of failing
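Putting these together, a fuller table entry might look like the following (values are illustrative, not recommendations):

      - name: public.events
        target_name: stg_events
        replication_method: incremental
        write_strategy: upsert
        write_key: [id]
        batchsize: 100000
        write_partitions: 4
        tags: [events]
        pass_on_error: false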



Download files


Source Distribution

mkpipe_loader_clickhouse-0.9.0.tar.gz (9.8 kB)


Built Distribution


mkpipe_loader_clickhouse-0.9.0-py3-none-any.whl (10.9 kB)


File details

Details for the file mkpipe_loader_clickhouse-0.9.0.tar.gz.

File metadata

  • Download URL: mkpipe_loader_clickhouse-0.9.0.tar.gz
  • Size: 9.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.15

File hashes

Hashes for mkpipe_loader_clickhouse-0.9.0.tar.gz
Algorithm   | Hash digest
SHA256      | 80b0594e94071c110072944fa1f8919e8f4a30614e0db36704f8e3e799994f15
MD5         | 39d20711506398d93a09e834d6c9554c
BLAKE2b-256 | 6d1452e8dd1be18e640731c9a2d1b70d7f103d821c5a119bfd4fc693fafde5da


File details

Details for the file mkpipe_loader_clickhouse-0.9.0-py3-none-any.whl.


File hashes

Hashes for mkpipe_loader_clickhouse-0.9.0-py3-none-any.whl
Algorithm   | Hash digest
SHA256      | f1e62cfe50e870023eca6f729f2be454bdeb38226a456119b8f90f6dc34ef4db
MD5         | aeabc5982a231a0cb698922650a8af72
BLAKE2b-256 | 09fc3cbe01c57b936e9019fe85be62f50b4a5ce895e12e61a483222ed2f6a913

