mysql_ch_replicator

mysql_ch_replicator is a powerful and efficient tool designed for real-time replication of MySQL databases to ClickHouse.

With a focus on high performance, it makes heavy use of batching and a C++ extension for faster execution. It provides seamless data integration, with support for migrations, schema changes, and correct handling of data removal.

Features

  • Real-Time Replication: Keeps your ClickHouse database in sync with MySQL in real-time.
  • High Performance: Utilizes batching and ports slow parts to C++ (e.g., MySQL internal JSON parsing) for optimal performance.
  • Supports Migrations/Schema Changes: Handles adding, altering, and removing tables without breaking the replication process.
  • Recovery without Downtime: Allows for preserving old data while performing initial replication, ensuring continuous operation.
  • Correct Data Removal: Unlike MaterializedMySQL, mysql_ch_replicator ensures physical removal of deleted rows (see the sketch after this list).
  • Comprehensive Data Type Support: Accurately replicates most data types, including JSON, booleans, and more. Easily extensible for additional data types.
  • Multi-Database Handling: Replicates the binary log once for all databases, optimizing the process compared to MaterializedMySQL, which replicates the log separately for each database.
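
A minimal way to observe the data-removal behavior, assuming a replicated database and table named my_db.events (both hypothetical):

# delete a row on the MySQL side
mysql -u root -p -e "DELETE FROM my_db.events WHERE id = 42;"
# once the change is replicated, the row should no longer be visible in ClickHouse
clickhouse-client -q "SELECT count() FROM my_db.events WHERE id = 42"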

Installation

To install mysql_ch_replicator, use the following command:

pip install mysql_ch_replicator

You may also need to compile the C++ components if they are not pre-built for your platform.
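
To sanity-check the installation (assuming the console entry point exposes a standard help flag):

# confirm the CLI is on PATH and the module imports cleanly
mysql_ch_replicator --help
python3 -c "import mysql_ch_replicator"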

Usage

Basic Usage

For real-time data sync from MySQL to ClickHouse:

  1. Prepare a config file. Use example_config.yaml as an example.
  2. Configure the MySQL and ClickHouse servers (a sketch for creating a dedicated replication user appears after these steps):
  • The MySQL server configuration file my.cnf should include the following settings (required to write the binary log in row format and to enable GTID mode):
[mysqld]
# ... other settings ...
gtid_mode = on
enforce_gtid_consistency = 1
binlog_expire_logs_seconds = 864000
max_binlog_size            = 500M
binlog_format              = ROW
  • For MariaDB, use the following settings:
[mysqld]
# ... other settings ...
gtid_strict_mode = ON
gtid_domain_id = 0
server_id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_expire_logs_seconds = 864000
max_binlog_size = 500M
binlog_format = ROW

For AWS RDS, set the following parameters in your Parameter group:

binlog_format                       ROW
binlog_expire_logs_seconds          86400
  • The ClickHouse server config override.xml should include the following settings (this makes ClickHouse apply the FINAL keyword automatically, which is required to handle updates correctly):
<clickhouse>
    <!-- ... other settings ... -->
    <profiles>
        <default>
            <!-- ... other settings ... -->
            <final>1</final>
        </default>
    </profiles>
</clickhouse>
  3. Start the replication:
mysql_ch_replicator --config config.yaml run_all

This keeps the data in ClickHouse in sync as you update data in MySQL.
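
The exact MySQL privileges the replicator requires are not listed above; as a hedged sketch, a dedicated user for a binlog-reading replicator typically needs replication privileges plus SELECT for the initial copy (the user name and password here are hypothetical):

mysql -u root -p <<'SQL'
-- hypothetical credentials; adjust host/password to your setup
CREATE USER 'replicator'@'%' IDENTIFIED BY 'strong_password';
-- binlog access for streaming changes, SELECT for the initial data copy
GRANT REPLICATION SLAVE, REPLICATION CLIENT, SELECT ON *.* TO 'replicator'@'%';
FLUSH PRIVILEGES;
SQL

You can also confirm the server picked up the my.cnf settings:

mysql -u root -p -e "SHOW VARIABLES WHERE Variable_name IN ('binlog_format', 'gtid_mode');"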

One Time Data Copy

If you just need to copy the data once and don't need continuous synchronization, do the following:

  1. Prepare a config file. Use example_config.yaml as an example.
  2. Run one-time data copy:
mysql_ch_replicator --config config.yaml db_replicator --database mysql_db_name --initial_only=True

Where mysql_db_name is the name of the database you want to copy.

Don't be afraid to interrupt the process in the middle: it saves its state and will continue copying after a restart.
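
A quick way to sanity-check the copy once it finishes is to compare row counts on both sides (the table name here is hypothetical):

mysql -u root -p -e "SELECT COUNT(*) FROM mysql_db_name.some_table;"
clickhouse-client -q "SELECT count() FROM mysql_db_name.some_table"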

Configuration

mysql_ch_replicator can be configured through a configuration file. Here is an example config:

mysql:
  host: 'localhost'
  port: 8306
  user: 'root'
  password: 'root'

clickhouse:
  host: 'localhost'
  port: 8323
  user: 'default'
  password: 'default'
  connection_timeout: 30        # optional
  send_receive_timeout: 300     # optional

binlog_replicator:
  data_dir: '/home/user/binlog/'
  records_per_file: 100000

databases: 'database_name_pattern_*'
tables: '*'

exclude_databases: ['database_10', 'database_*_42']   # optional
exclude_tables: ['meta_table_*']                      # optional

log_level: 'info'   # optional             

Required settings

  • mysql MySQL connection settings
  • clickhouse ClickHouse connection settings
  • binlog_replicator.data_dir Create a new empty directory; the script will use it to store its state
  • databases Database name pattern to replicate, e.g. db_* will match db_1, db_2, and db_test; a list is also supported

Optional settings

  • tables - tables to include; a list is also supported
  • exclude_databases - databases to exclude, string or list, e.g. 'db1*' or ['db2', 'db3*']. If the same database matches both databases and exclude_databases, exclude takes priority.
  • exclude_tables - tables to exclude, string or list. If the same table matches both tables and exclude_tables, exclude takes priority.
  • log_level - log level; default is info. Set it to debug for maximum detail (allowed values: debug, info, warning, error, critical)

A few more databases / tables examples:

databases: ['my_database_1', 'my_database_2']
tables: ['table_1', 'table_2*']
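
The db_* example above suggests shell-style wildcard semantics; if you want to check whether a given name would match a pattern, a quick fnmatch test (an assumption about the exact matching rules) looks like:

python3 -c "import fnmatch; print(fnmatch.fnmatch('db_test', 'db_*'))"   # True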

Advanced Features

Migrations & Schema Changes

mysql_ch_replicator supports the following:

  • Adding Tables: Automatically starts replicating data from newly added tables.
  • Altering Tables: Adjusts the replication strategy based on schema changes (see the example after this list).
  • Removing Tables: Handles removal of tables without disrupting the replication process.
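
For example, a DDL change applied on the MySQL side flows through the binlog like any other event (database, table, and column names here are hypothetical):

# add a column on the MySQL side; the replicator picks the change up
# from the binlog without a restart
mysql -u root -p -e "ALTER TABLE my_db.users ADD COLUMN age INT;"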

Recovery Without Downtime

In case of a failure, or during the initial replication, mysql_ch_replicator preserves old data and continues syncing new data seamlessly. If needed, you can also remove the state and restart replication from scratch (see the sketch below).
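
A minimal sketch of restarting from scratch, assuming the replicator's state lives entirely under binlog_replicator.data_dir (the example path from the config above):

# stop the replicator first, then clear its state directory
rm -rf /home/user/binlog/*
# the next run re-creates the state and performs the initial replication again
mysql_ch_replicator --config config.yaml run_all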

Development

To contribute to mysql_ch_replicator, clone the repository and install the required dependencies:

git clone https://github.com/your-repo/mysql_ch_replicator.git
cd mysql_ch_replicator
pip install -r requirements.txt

Running Tests

  1. Use Docker Compose to bring up all requirements:
sudo docker compose -f docker-compose-tests.yaml up
  2. Run the tests with:
sudo docker exec -w /app/ -it mysql_ch_replicator-replicator-1 python3 -m pytest -v -s test_mysql_ch_replicator.py

Contribution

Contributions are welcome! Please open an issue or submit a pull request for any bugs or features you would like to add.

License

mysql_ch_replicator is licensed under the MIT License. See the LICENSE file for more details.

Acknowledgements

Thank you to all the contributors who have helped build and improve this tool.
