L3-level CDK constructs for DMS
DMS Patterns - a library to facilitate migrations
This library aims to simplify setting up a migration project with AWS DMS and the AWS CDK by providing L2 constructs that take care of creating the necessary roles and resources, and high-level L3 constructs that represent complete migration patterns from a database (self-managed in the cloud or in Amazon RDS) to Amazon S3. See the examples below for more details.
This library is written using the wonderful projen framework.
Note: this library is just the result of some personal experimentation. It is not an official AWS library and is not supported by AWS!
Installation
The library is available on npmjs.com and can be installed using:
npm i dms-patterns
And on PyPI:
pip install dms-patterns
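Once installed, the constructs used in the examples below can be imported from the package root (shown in TypeScript; that the helper types such as EndpointType and TableMappings are exported from the same entry point is an assumption here):

import { DmsVpcRoleStack, MySqlEndpoint, S3TargetEndpoint, MySql2S3 } from 'dms-patterns';
import * as dms from 'aws-cdk-lib/aws-dms'; // for the plain Cfn* resources used below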
Usage Examples
Deploying the dms-vpc-role
If you use the AWS CLI or the AWS DMS API for your database migration, you must add three IAM roles to your AWS account before you can use the features of AWS DMS (see the AWS DMS documentation).
The dms-patterns library includes a stack that creates these roles for you. Here is an example of how to use it:
import * as cdk from 'aws-cdk-lib';
import { DmsVpcRoleStack } from 'dms-patterns';
const app = new cdk.App();
new DmsVpcRoleStack(app, 'DmsVpcRoleStack');
Adding an explicit dependency might be required to make sure that the roles are deployed before the migration stack.
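For example (a minimal sketch, where MigrationStack is a hypothetical stack containing your DMS resources):

const dmsVpcRoleStack = new DmsVpcRoleStack(app, 'DmsVpcRoleStack');
const migrationStack = new MigrationStack(app, 'MigrationStack');
// make sure the DMS roles exist before any DMS resource is created
migrationStack.addDependency(dmsVpcRoleStack);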
Migrating data from MySQL to S3
This section demonstrates creating a stack that migrates data from a MySQL database to S3. The stack is written in TypeScript, but the same constructs are available in Python. We start by adding a source endpoint to our stack:
this.source = new MySqlEndpoint(this, 'SourceEndpoint', {
  endpointType: EndpointType.SOURCE,
  databaseName: *******,
  endpointIdentifier: 'mysqlEndpoint',
  mySqlEndpointSettings: {
    secretsManagerSecretId: 'arn:aws:secretsmanager:**********',
  },
});
A MySqlEndpoint is just a dms.CfnEndpoint, but the construct also takes care of initializing the secretsManagerAccessRole that DMS uses to read the specified secret, which contains the necessary connection details:
{
  "password": "*****",
  "username": "*****",
  "port": 3306,
  "host": "****"
}
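If the secret does not exist yet, it can also be created with the CDK. A minimal sketch with placeholder values (a real setup should prefer generated secrets over hard-coded credentials):

import * as cdk from 'aws-cdk-lib';
import * as secretsmanager from 'aws-cdk-lib/aws-secretsmanager';

const dbSecret = new secretsmanager.Secret(this, 'DbSecret', {
  // the JSON shape must match what DMS expects, as shown above
  secretStringValue: cdk.SecretValue.unsafePlainText(JSON.stringify({
    password: 'change-me',
    username: 'admin',
    port: 3306,
    host: 'mydb.example.com',
  })),
});
// dbSecret.secretArn is the value to pass as secretsManagerSecretId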
Our target is an S3 endpoint:
this.target = new S3TargetEndpoint(this, 'S3Target', {
  bucketArn: props.bucketArn,
});
Again, the construct takes care of creating the role needed to write objects into the given bucket.
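For example, a target bucket defined in the same stack can be wired in through its ARN (a short sketch):

import * as s3 from 'aws-cdk-lib/aws-s3';

// a private bucket with default settings; its ARN feeds the endpoint above
const targetBucket = new s3.Bucket(this, 'TargetBucket');
// e.g. new S3TargetEndpoint(this, 'S3Target', { bucketArn: targetBucket.bucketArn });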
We can now define the replication itself. If no default VPC exists in your account, you may want to create a replication subnet group that describes where your replication resources will be deployed:
const replicationSubnetGroup = new dms.CfnReplicationSubnetGroup(this, 'ReplicationSubnetGroup', {
  replicationSubnetGroupDescription: 'ReplicationSubnetGroup',
  replicationSubnetGroupIdentifier: 'mysqlreplicationsubnetID',
  subnetIds: [
    'subnet-******',
    'subnet-******',
    'subnet-******',
  ],
});
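The computeConfig below also references a replicationSecurityGroup, which is not created above; a minimal sketch of defining one (assuming a vpc variable pointing at the VPC that contains the subnets listed above):

import * as ec2 from 'aws-cdk-lib/aws-ec2';

const replicationSecurityGroup = new ec2.SecurityGroup(this, 'ReplicationSecurityGroup', {
  vpc, // the VPC containing the replication subnets
  description: 'Security group for the DMS replication',
});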
and, finally, a computeConfig:
const computeConfig: dms.CfnReplicationConfig.ComputeConfigProperty = {
  minCapacityUnits: CapacityUnits._1,
  maxCapacityUnits: CapacityUnits._2,
  multiAz: false,
  replicationSubnetGroupId: replicationSubnetGroup.replicationSubnetGroupIdentifier,
  vpcSecurityGroupIds: [
    replicationSecurityGroup.securityGroupId,
  ],
};
and some table-mapping rules, e.g. for selecting the right tables:
const tableMappings = new TableMappings([
  new SelectionRule({
    objectLocator: {
      schemaName: 'schemaname',
      tableName: 'tableName',
    },
    ruleAction: SelectionAction.INCLUDE,
  }),
]);
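For reference, a selection rule like this corresponds to the standard AWS DMS table-mapping JSON shown below (how exactly the library serializes it is an assumption here):

{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "1",
      "object-locator": {
        "schema-name": "schemaname",
        "table-name": "tableName"
      },
      "rule-action": "include"
    }
  ]
}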
The tableMappings object takes care of assigning each rule a name and id if none is specified, and comes with a format method that serializes the rules, used below. Finally, we can define the replication itself:
new dms.CfnReplicationConfig(this, 'ReplicationConfig', {
  computeConfig: computeConfig,
  replicationConfigIdentifier: 'replicationConfigIdentifier',
  replicationType: ReplicationTypes.FULL_LOAD,
  sourceEndpointArn: this.source.ref,
  tableMappings: tableMappings.format(),
  targetEndpointArn: this.target.ref,
});
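Note that the config refers to the subnet group only through its identifier string, so CloudFormation cannot always infer the deployment order. If you run into ordering issues, an explicit dependency helps (a sketch, assuming the config is assigned to a variable):

const replicationConfig = new dms.CfnReplicationConfig(this, 'ReplicationConfig', {
  // ...same properties as above...
});
// deploy the subnet group before the replication config
replicationConfig.node.addDependency(replicationSubnetGroup);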
In the example above, the selected table of the specified database is migrated to S3; after the migration has run, a file named 'LOAD00000001' is created under the corresponding folder in the S3 bucket.
So far we have demonstrated a couple of L2 constructs for the endpoints, but an L3 construct is also available that takes care of creating both endpoints and the replication itself:
const mySql2S3 = new MySql2S3(this, 'mysql2S3', {
  databaseName: ******,
  mySqlEndpointSettings: {
    secretsManagerSecretId: 'arn:aws:secretsmanager:*******',
  },
  bucketArn: bucketArn,
  tableMappings: tableMappings,
  computeConfig: computeConfig,
  replicationConfigIdentifier: '*****',
});
which makes migrations easier still. A similar pattern exists for PostgreSQL, and more could be added in future versions.
Contributors
Matteo Giani, Bruno Baido