# clickhouse-s3-etl-tools

clickhouse-s3-etl-tools is a utility for seamless data transfers between ClickHouse clusters, using Amazon S3 or any S3-compatible storage as an intermediate staging area.

## Introduction

Managing and orchestrating data movement between different ClickHouse clusters can be challenging. clickhouse-s3-etl-tools simplifies the process by acting as a bridge, enabling efficient transfers via S3 storage. The toolset is especially useful when you need to synchronize or back up data between ClickHouse databases or clusters.

## Installation

To get started, install the package from PyPI:

```bash
pip install clickhouse-s3-etl-tools
```
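
After installation, two command-line entry points become available. As a quick sanity check (a minimal sketch; the `--help` flag is assumed here, as is standard for Python CLIs):

```bash
# Confirm the package is installed and its CLI entry points resolve.
pip show clickhouse-s3-etl-tools
s3_exporter --help
s3_to_clickhouse_transfer --help
```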

## Utilities

The ClickHouse S3 ETL Tools consist of two main utilities:

- `s3_exporter`: processes data from a ClickHouse instance and exports it to an S3 bucket.

  ```bash
  s3_exporter --ch-url-source='clickhouse+native://user:password@localhost:9000/database' \
              --s3-access-key='your_s3_access_key' \
              --s3-secret-key='your_s3_secret_key' \
              --s3-path='s3://your_bucket/path/to/data' \
              --table-name='your_table' \
              --database='your_database'
  ```

- `s3_to_clickhouse_transfer`: retrieves data from an S3 bucket and transfers it to a ClickHouse instance.

  ```bash
  s3_to_clickhouse_transfer --ch-url-destination='clickhouse+native://user:password@localhost:9000/destination_database' \
                            --s3-access-key='your_s3_access_key' \
                            --s3-secret-key='your_s3_secret_key' \
                            --s3-path='s3://your_bucket/path/to/data' \
                            --table-name='your_table' \
                            --database='your_database' \
                            --database-destination='your_destination_database' \
                            --drop-destination-table-if-exists \
                            --use-s3-cluster \
                            --on-cluster-directive='your_directive'
  ```
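
Run in sequence, the two utilities perform a complete cluster-to-cluster copy: export the table to S3, then load it into the destination. Below is a minimal sketch combining the commands above; hostnames, credentials, and database names are placeholders:

```bash
#!/usr/bin/env bash
set -euo pipefail  # abort if either step fails

# Step 1: export the source table to the S3 staging area.
s3_exporter --ch-url-source='clickhouse+native://user:password@source-host:9000/your_database' \
            --s3-access-key='your_s3_access_key' \
            --s3-secret-key='your_s3_secret_key' \
            --s3-path='s3://your_bucket/path/to/data' \
            --table-name='your_table' \
            --database='your_database'

# Step 2: load the staged data into the destination cluster.
s3_to_clickhouse_transfer --ch-url-destination='clickhouse+native://user:password@dest-host:9000/destination_database' \
                          --s3-access-key='your_s3_access_key' \
                          --s3-secret-key='your_s3_secret_key' \
                          --s3-path='s3://your_bucket/path/to/data' \
                          --table-name='your_table' \
                          --database='your_database' \
                          --database-destination='your_destination_database'
```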

### Parameters

Unless marked otherwise, a parameter applies to both `s3_exporter` and `s3_to_clickhouse_transfer`:

- `--ch-url-source` (s3_exporter) / `--ch-url-destination` (s3_to_clickhouse_transfer): ClickHouse URL for the source or destination, depending on the utility. Example: `--ch-url-source='clickhouse+native://user:password@localhost:9000/database'`
- `--s3-access-key`: Access key for the S3 bucket. Example: `--s3-access-key='your_s3_access_key'`
- `--s3-secret-key`: Secret key for the S3 bucket. Example: `--s3-secret-key='your_s3_secret_key'`
- `--s3-path`: Path to the data in the S3 bucket. Example: `--s3-path='s3://your_bucket/path/to/data'`
- `--table-name`: Name of the table in ClickHouse. Example: `--table-name='your_table'`
- `--database` (s3_exporter) / `--database-destination` (s3_to_clickhouse_transfer): Database in ClickHouse. Example: `--database='your_database'`
- `--batch-size`: Batch size for data transfer (optional; default: `DEFAULT_VALUE_BATCH_SIZE`). Example: `--batch-size=100`
- `--log-level`: Log level for the utility (optional; default: `DEFAULT_VALUE_LOG_LEVEL`). Example: `--log-level='DEBUG'`
- `--drop-destination-table-if-exists` (s3_to_clickhouse_transfer only): Drop the destination table if it already exists.
- `--use-s3-cluster` (s3_to_clickhouse_transfer only): Use the S3 cluster for the data transfer.
- `--on-cluster-directive` (s3_to_clickhouse_transfer only): Directive for cluster configuration (default: `""`). Example: `--on-cluster-directive='your_directive'`

### Environment Variables

- `NUM_PARTITIONS_DROP_IN_QUERY`: Number of partitions to drop in each query.
- `MAX_TABLE_SIZE_TO_DROP_TABLE_MB`: Maximum table size (in MB) to trigger table dropping.
- `NUMB_RECONNECT_ATTEMPTS_CH`: Number of attempts to reconnect to ClickHouse.
- `MAX_PERCENTAGE_DIFF_EXTRACT`: Maximum percentage difference allowed for extraction.
- `MAX_PERCENTAGE_DIFF_TRANSFORM`: Maximum percentage difference allowed for transformation.
- `MAX_PARTITIONS_PER_INSERT_BLOCK`: Maximum number of partitions per insert block.
- `DELAY_BETWEEN_DROP_PARTITIONS_SEC`: Delay between partition drops (in seconds).

### Example Usage with Environment Variables

```bash
NUM_PARTITIONS_DROP_IN_QUERY=100 \
MAX_TABLE_SIZE_TO_DROP_TABLE_MB=1000 \
NUMB_RECONNECT_ATTEMPTS_CH=3 \
MAX_PERCENTAGE_DIFF_EXTRACT=1 \
MAX_PERCENTAGE_DIFF_TRANSFORM=1 \
MAX_PARTITIONS_PER_INSERT_BLOCK=500 \
DELAY_BETWEEN_DROP_PARTITIONS_SEC=10 \
s3_exporter --ch-url-source='clickhouse+native://user:password@localhost:9000/database' \
            --s3-access-key='your_s3_access_key' \
            --s3-secret-key='your_s3_secret_key' \
            --s3-path='s3://your_bucket/path/to/data' \
            --table-name='your_table' \
            --database='your_database'
```



## File Naming Conventions

### Metadata Files

Metadata files store information about the tables and their properties. These files follow the format:

- **File Name Format:** `__metadata__[TABLE_NAME].parquet`
- **Example:** `__metadata__my_table.parquet`

### Full Table Export

Full table exports result in Parquet files containing all the data from the table. These files follow the format:

- **File Name Format:** `[TABLE_NAME]_all.parquet`
- **Example:** `my_table_all.parquet`

### Partitioned Exports

For partitioned exports, where data is saved based on specific partition keys, files are named using the following format:

- **File Name Format:** `[TABLE_NAME]{_partition_id}.parquet`
- **Example:** `my_table{123}.parquet`

Here, `{_partition_id}` is replaced with the actual partition ID.
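
Putting the conventions together: after exporting a partitioned table named `my_table`, the staging prefix might contain objects like the following (an illustrative listing via the AWS CLI, assumed to be installed and configured; the partition IDs shown are hypothetical):

```bash
aws s3 ls s3://your_bucket/path/to/data/
# Expected output (hypothetical partition IDs):
#   __metadata__my_table.parquet    <- table metadata
#   my_table{202401}.parquet        <- one file per partition
#   my_table{202402}.parquet
# A non-partitioned export would instead produce my_table_all.parquet.
```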

## Exported File Contents

### Metadata File Contents

Metadata files (`__metadata__[TABLE_NAME].parquet`) typically contain the following information:

- Table schema details
- Engine type
- Partitioning key
- Total number of rows
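
To inspect a metadata file without loading it anywhere, any Parquet reader will do. A minimal sketch using `clickhouse-local` and the AWS CLI (both assumed to be installed; `SELECT *` is used because the exact column layout is defined by the exporter):

```bash
# Copy the metadata file out of the bucket, then query it locally.
aws s3 cp s3://your_bucket/path/to/data/__metadata__my_table.parquet .
clickhouse-local --query \
  "SELECT * FROM file('__metadata__my_table.parquet', Parquet) FORMAT Vertical"
```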

### Full Table Export Contents

Files generated for full table exports (`[TABLE_NAME]_all.parquet`) contain the entire dataset for the specified table.

### Partitioned Export Contents

Files generated for partitioned exports (`[TABLE_NAME]{_partition_id}.parquet`) contain data specific to the identified partition.
