ClickHouse extractor for MkPipe.
# mkpipe-extractor-clickhouse
ClickHouse extractor plugin for MkPipe. Reads ClickHouse tables using the native clickhouse-spark connector, which uses ClickHouse's binary HTTP protocol for columnar data transfer — faster than JDBC, especially for analytical queries.
## Documentation
For more detailed documentation, please visit the GitHub repository.
## License
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
## Connection Configuration

```yaml
connections:
  clickhouse_source:
    variant: clickhouse
    host: localhost
    port: 8123
    database: source_db
    user: default
    password: mypassword
```
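As an illustration of how these fields fit together, here is a minimal sketch that assembles them into a JDBC URL. The function name is hypothetical, and while `jdbc:clickhouse://` is the scheme used by the official ClickHouse JDBC driver, MkPipe's actual internals may build the connection differently.

```python
# Hypothetical sketch: turning a connection block like the one above into a
# JDBC URL plus connection properties. Not MkPipe's actual code.
def build_jdbc_config(conn: dict) -> tuple[str, dict]:
    url = f"jdbc:clickhouse://{conn['host']}:{conn['port']}/{conn['database']}"
    props = {"user": conn["user"], "password": conn["password"]}
    return url, props

url, props = build_jdbc_config({
    "host": "localhost", "port": 8123, "database": "source_db",
    "user": "default", "password": "mypassword",
})
print(url)  # jdbc:clickhouse://localhost:8123/source_db
```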
## Table Configuration

```yaml
pipelines:
  - name: clickhouse_to_pg
    source: clickhouse_source
    destination: pg_target
    tables:
      - name: events
        target_name: stg_events
        replication_method: full
        fetchsize: 100000
```
## Incremental Replication

```yaml
- name: events
  target_name: stg_events
  replication_method: incremental
  iterate_column: updated_at
  iterate_column_type: datetime
  partitions_column: id
  partitions_count: 8
  fetchsize: 50000
```
## Custom SQL

```yaml
- name: events
  target_name: stg_events
  replication_method: full
  custom_query: "SELECT id, user_id, event_type, created_at FROM events WHERE {query_filter}"
```

Use `{query_filter}` as a placeholder; it is replaced with the incremental WHERE clause on incremental runs, or with `WHERE 1=1` on full runs.
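To make the placeholder behavior concrete, here is a minimal sketch of the substitution. The function name and the exact shape of the watermark predicate are illustrative assumptions, not the plugin's actual implementation.

```python
# Illustrative sketch of {query_filter} substitution: on incremental runs a
# watermark predicate is injected; on full runs it degrades to WHERE 1=1.
def render_query(custom_query, iterate_column=None, last_value=None):
    if iterate_column and last_value is not None:
        query_filter = f"WHERE {iterate_column} > '{last_value}'"
    else:
        query_filter = "WHERE 1=1"
    return custom_query.replace("{query_filter}", query_filter)

sql = "SELECT id, user_id FROM events {query_filter}"
print(render_query(sql))
# SELECT id, user_id FROM events WHERE 1=1
print(render_query(sql, "updated_at", "2024-01-01 00:00:00"))
# SELECT id, user_id FROM events WHERE updated_at > '2024-01-01 00:00:00'
```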
## Read Parallelism

The ClickHouse extractor uses JDBC with Spark's native partition support. For large tables, set `partitions_column` and `partitions_count` to read in parallel:

```yaml
- name: events
  target_name: stg_events
  replication_method: incremental
  iterate_column: created_at
  iterate_column_type: datetime
  partitions_column: id    # numeric column to split on
  partitions_count: 8      # number of parallel JDBC partitions
  fetchsize: 50000
```
### How it works

- Spark reads the min/max of `partitions_column` and divides the range into `partitions_count` equal slices
- Each slice is fetched by a separate Spark task via a separate JDBC connection
- `fetchsize` controls how many rows each connection fetches per round-trip
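The slicing step above can be sketched with simple arithmetic. This mirrors the idea behind Spark's JDBC stride computation but is not its exact algorithm; each `(lo, hi)` pair would become one partition's WHERE clause.

```python
# Illustrative sketch: cut the [min, max] range of partitions_column into
# partitions_count slices, one per JDBC partition. Spark's real stride logic
# is similar in spirit but not identical.
def partition_bounds(min_val, max_val, n):
    step = max((max_val - min_val) // n, 1)
    bounds, lo = [], min_val
    for i in range(n):
        hi = max_val if i == n - 1 else lo + step
        bounds.append((lo, hi))
        lo = hi
    return bounds

print(partition_bounds(0, 800, 8))
# [(0, 100), (100, 200), ..., (700, 800)]
```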
## Performance Notes

- Full replication: partitioning is not applied (it only works with `incremental`).
- `partitions_column` should be a numeric column with good distribution (e.g. a primary key).
- `fetchsize`: ClickHouse is a columnar store, so a large `fetchsize` (50,000–200,000) works well.
- ClickHouse handles large scans efficiently; for distributed tables, parallelism is less critical than for row-based databases.
## All Table Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `name` | string | required | ClickHouse table name |
| `target_name` | string | required | Destination table name |
| `replication_method` | `full` / `incremental` | `full` | Replication strategy |
| `iterate_column` | string | — | Column used for incremental watermark |
| `iterate_column_type` | `int` / `datetime` | — | Type of `iterate_column` |
| `partitions_column` | string | same as `iterate_column` | Column to split JDBC reads on |
| `partitions_count` | int | `10` | Number of parallel JDBC partitions |
| `fetchsize` | int | `100000` | Rows per JDBC fetch |
| `custom_query` | string | — | Override SQL with `{query_filter}` placeholder |
| `custom_query_file` | string | — | Path to SQL file (relative to `sql/` dir) |
| `write_partitions` | int | — | Coalesce to N partitions before writing |
| `tags` | list | `[]` | Tags for selective pipeline execution |
| `pass_on_error` | bool | `false` | Skip table on error instead of failing |
## Download files
## File details

Details for the file `mkpipe_extractor_clickhouse-0.5.0.tar.gz`.

### File metadata

- Download URL: mkpipe_extractor_clickhouse-0.5.0.tar.gz
- Upload date:
- Size: 8.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.14

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `9584da64699f310a8dceb51c4d6a5c8a48412281b2b09b5565c4c02a81b5564d` |
| MD5 | `2a60ebba8f0780dffb132e14ac4698db` |
| BLAKE2b-256 | `949c8daa3b142758d0b4c9a41eed5adf931648d3a435db739bcee3a59d461741` |
## File details

Details for the file `mkpipe_extractor_clickhouse-0.5.0-py3-none-any.whl`.

### File metadata

- Download URL: mkpipe_extractor_clickhouse-0.5.0-py3-none-any.whl
- Upload date:
- Size: 9.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.14

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `2c123b0ea5dd6e4f965909894073b01fed733a376f263d112e13e0c670956213` |
| MD5 | `d98d83d5a0e289c9303fd698b6ee3bf6` |
| BLAKE2b-256 | `de8bb880e4833c7f5a61aeeafc5fcd7604ba8872a76da4bc2859f4d3b508b92c` |