ClickHouse loader for MkPipe.


mkpipe-loader-clickhouse

ClickHouse loader plugin for MkPipe. It writes Spark DataFrames into ClickHouse tables through the native clickhouse-spark connector, which communicates over ClickHouse's binary HTTP protocol for efficient columnar inserts.
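
Installation

The package is published on PyPI and installs alongside MkPipe:

    pip install mkpipe-loader-clickhouse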

Documentation

For more detailed documentation, please visit the GitHub repository.

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.


Connection Configuration

connections:
  clickhouse_target:
    variant: clickhouse
    host: localhost
    port: 8123
    database: target_db
    user: default
    password: mypassword
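
Port 8123 is ClickHouse's default HTTP interface port. For illustration, a JDBC-style URL assembled from the fields above would look like the line below; the exact template is built by MkPipe's JdbcLoader, so treat this as an assumption rather than guaranteed output:

    jdbc:clickhouse://localhost:8123/target_db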

Table Configuration

pipelines:
  - name: pg_to_clickhouse
    source: pg_source
    destination: clickhouse_target
    tables:
      - name: public.events
        target_name: stg_events
        replication_method: full
        batchsize: 50000

Write Parallelism & Throughput

The ClickHouse loader inherits from JdbcLoader, and two parameters control write performance:

      - name: public.events
        target_name: stg_events
        replication_method: full
        batchsize: 50000        # rows per JDBC batch insert (default: 10000)
        write_partitions: 4     # coalesce DataFrame to N partitions before writing

How they work

  • batchsize: the number of rows buffered before a single INSERT is sent to ClickHouse. ClickHouse benefits greatly from large batches; use 50,000–500,000 for best throughput.
  • write_partitions: calls coalesce(N) on the DataFrame before writing, reducing the number of concurrent JDBC connections. Useful when you have many Spark partitions and want to limit load on ClickHouse. See the sketch after this list for how the two settings combine.
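
A minimal PySpark sketch of what these two settings translate to at write time. The coalesce call and the JDBC batchsize option are standard Spark APIs; the variable names and connection values are illustrative and taken from the example configs above, not MkPipe's actual source:

    # Illustrative sketch only: roughly the Spark write the loader performs.
    # Assumes the ClickHouse JDBC driver is on the Spark classpath.
    df = df.coalesce(4)  # write_partitions: 4 -> at most 4 concurrent connections
    (df.write
        .format("jdbc")
        .option("url", "jdbc:clickhouse://localhost:8123/target_db")
        .option("dbtable", "stg_events")
        .option("user", "default")
        .option("password", "mypassword")
        .option("batchsize", 50000)  # rows buffered per INSERT
        .mode("append")
        .save())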

Performance Notes

  • ClickHouse is optimized for large bulk inserts. batchsize is the most impactful parameter — increase it as much as your driver memory allows.
  • Avoid setting write_partitions too low (e.g. 1), as it reduces parallelism. A value of 4–8 balances load and throughput.
  • ClickHouse's MergeTree engine merges parts in the background; very frequent small inserts create many parts and degrade query performance. Prefer fewer large batches.
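
As a rough worked example with illustrative numbers: loading 10 million rows with write_partitions: 4 and batchsize: 500000 means 2.5 million rows per partition, so each partition issues 5 INSERTs and the whole load issues 20. At the 10,000 default, the same load would issue 1,000 INSERTs, creating many more MergeTree parts for the background merger to compact.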

All Table Parameters

Parameter            Type                 Default   Description
name                 string               required  Source table name
target_name          string               required  ClickHouse destination table name
replication_method   full / incremental   full      Replication strategy
batchsize            int                  10000     Rows per JDBC batch insert
write_partitions     int                            Coalesce DataFrame to N partitions before writing
dedup_columns        list                           Columns used for mkpipe_id hash deduplication
tags                 list                 []        Tags for selective pipeline execution
pass_on_error        bool                 false     Skip table on error instead of failing
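
Putting it together, a table entry exercising most of these options might look like the following (the dedup column names and tag value are illustrative):

    tables:
      - name: public.events
        target_name: stg_events
        replication_method: full
        batchsize: 100000
        write_partitions: 4
        dedup_columns:
          - event_id
          - event_time
        tags:
          - nightly
        pass_on_error: true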
