
A CDK Python app for deploying foundational infrastructure for an Insurance Lake in AWS

InsuranceLake Infrastructure with CDK Pipeline

The Insurance Lake solution comprises two codebases: Infrastructure and ETL. This codebase and the documentation that follows are specific to the Infrastructure. For more comprehensive documentation, including several ways to get started quickly, refer to the InsuranceLake ETL with CDK Pipeline README.

This solution helps you deploy ETL processes and data storage resources to create an Insurance Lake. It uses Amazon S3 buckets for storage, AWS Glue for data transformation, and AWS CDK Pipelines. The solution is originally based on the AWS blog Deploy data lake ETL jobs using CDK Pipelines.

CDK stands for Cloud Development Kit, an open source software development framework for defining your cloud application resources using familiar programming languages. CDK Pipelines is a construct library module for painless continuous delivery of CDK applications.
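To make this concrete, here is a minimal, illustrative CDK app in Python. It is not code from this repository, and the stack and construct names are placeholders; it only shows the basic shape of a CDK application: resources are declared as Python objects inside a stack, and the CDK CLI (cdk synth, cdk deploy) turns them into CloudFormation.

```python
# Minimal illustrative CDK app; stack and construct names are placeholders,
# not code from this repository.
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3
from constructs import Construct


class ExampleBucketStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Declare the bucket as a Python object; CDK synthesizes the
        # corresponding CloudFormation resource.
        s3.Bucket(
            self,
            "ExampleBucket",
            encryption=s3.BucketEncryption.S3_MANAGED,
            versioned=True,
        )


app = cdk.App()
ExampleBucketStack(app, "ExampleBucketStack")
app.synth()
```

Running cdk deploy against an app like this creates the bucket; CDK Pipelines builds on the same model to continuously deploy entire applications.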

Specifically, this solution helps you to:

  1. Deploy a 3 Cs (Collect, Cleanse, Consume) Insurance Lake
  2. Deploy ETL jobs needed to make common insurance industry data sources available in a data lake
  3. Use pySpark Glue jobs and supporting resources to perform data transforms in a modular approach
  4. Build and replicate the application in multiple environments quickly
  5. Deploy ETL jobs from a central deployment account to multiple AWS environments such as Dev, Test, and Prod
  6. Leverage the self-mutating feature of CDK Pipelines; specifically, the pipeline itself is infrastructure as code and can be changed as part of the deployment (see the pipeline sketch after this list)
  7. Increase the speed of prototyping, testing, and deployment of new ETL jobs
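As a rough illustration of items 4 through 6, the sketch below shows a self-mutating CDK Pipeline that deploys the same application stage to several environments. The account IDs, region, repository, and CodeStar connection ARN are placeholders, not this project's configuration; the actual pipeline is defined in pipeline_stack.py and pipeline_deploy_stage.py.

```python
# Illustrative self-mutating CDK Pipeline deploying one stage to several
# environments; accounts, region, repository, and connection ARN are placeholders.
import aws_cdk as cdk
from aws_cdk import pipelines
from constructs import Construct


class DataLakeStage(cdk.Stage):
    """Deployable unit; the real stage wraps the VPC and bucket stacks."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Placeholder stack so the sketch synthesizes; pipeline_deploy_stage.py
        # instantiates the real infrastructure stacks here.
        cdk.Stack(self, "PlaceholderStack")


class PipelineStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        pipeline = pipelines.CodePipeline(
            self,
            "Pipeline",
            synth=pipelines.ShellStep(
                "Synth",
                input=pipelines.CodePipelineSource.connection(
                    "example-org/example-repo",
                    "main",
                    connection_arn="arn:aws:codestar-connections:us-east-2:111111111111:connection/example",
                ),
                commands=["pip install -r requirements.txt", "npx cdk synth"],
            ),
        )
        # Because the pipeline is itself defined in code, adding or changing a
        # stage here changes the pipeline on its next run (self-mutation).
        for name, account in (("Dev", "222222222222"),
                              ("Test", "333333333333"),
                              ("Prod", "444444444444")):
            pipeline.add_stage(DataLakeStage(
                self, name,
                env=cdk.Environment(account=account, region="us-east-2")))


app = cdk.App()
PipelineStack(app, "PipelineStack",
              env=cdk.Environment(account="111111111111", region="us-east-2"))
app.synth()
```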

Contents

  1. Architecture
  2. Codebase
  3. Authors and Reviewers
  4. License Summary

Architecture

This section describes the overall Insurance Lake architecture and the infrastructure component.

Insurance Lake

As shown in the figure below, we use Amazon S3 for storage. We use three S3 buckets (a simplified CDK sketch follows this list):

  1. Collect bucket to store raw data in its original format
  2. Cleanse/Curate bucket to store the data that meets the quality and consistency requirements of the lake
  3. Consume bucket for data that is used by analysts and data consumers of the lake (e.g. Amazon QuickSight, Amazon SageMaker)
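The sketch below shows, in simplified form, how the three storage zones could be declared with CDK. The construct names and settings are illustrative; the actual s3_bucket_zones_stack.py also provisions a server access logging bucket and wires the buckets into the rest of the solution.

```python
# Simplified sketch of the three storage zones; names and settings are
# illustrative, not taken from s3_bucket_zones_stack.py.
import aws_cdk as cdk
from aws_cdk import aws_kms as kms
from aws_cdk import aws_s3 as s3
from constructs import Construct


class BucketZonesStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # One customer-managed key used to encrypt every zone bucket.
        key = kms.Key(self, "DataLakeKey", enable_key_rotation=True)
        for zone in ("Collect", "Cleanse", "Consume"):
            s3.Bucket(
                self,
                f"{zone}Bucket",
                encryption=s3.BucketEncryption.KMS,
                encryption_key=key,
                versioned=True,
                block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            )
```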

The Insurance Lake is designed to support a number of source systems with different file formats and data partitions. To demonstrate, we have provided a CSV parser and sample data files for a source system with two data tables, which are uploaded to the Collect bucket.

We use AWS Lambda and AWS Step Functions for orchestration and scheduling of ETL workloads. We then use AWS Glue with pySpark for ETL and data cataloging, Amazon DynamoDB for transformation persistence, and Amazon Athena for interactive queries and analysis. We use various AWS services for logging, monitoring, security, authentication, authorization, notification, build, and deployment.
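As a hedged illustration of the orchestration pattern (not code from either codebase), the sketch below defines a Step Functions state machine that runs a pre-existing Glue job and waits for it to finish; the job name is a placeholder. The solution's actual orchestration also involves Lambda for scheduling, DynamoDB for transformation persistence, and notification states, which are omitted here.

```python
# Illustrative orchestration sketch; the Glue job name is a placeholder and
# the real state machine has additional states (scheduling via Lambda,
# error handling, notifications, DynamoDB bookkeeping).
import aws_cdk as cdk
from aws_cdk import aws_stepfunctions as sfn
from aws_cdk import aws_stepfunctions_tasks as tasks
from constructs import Construct


class EtlOrchestrationStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Start an existing Glue job and wait for it to complete
        # (RUN_JOB uses the synchronous service integration).
        run_cleanse_job = tasks.GlueStartJobRun(
            self,
            "RunCleanseJob",
            glue_job_name="example-cleanse-job",
            integration_pattern=sfn.IntegrationPattern.RUN_JOB,
        )
        sfn.StateMachine(
            self,
            "EtlStateMachine",
            definition_body=sfn.DefinitionBody.from_chainable(run_cleanse_job),
        )
```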

Note: AWS Lake Formation is a service that makes it easy to set up a secure data lake in days. Amazon QuickSight is a scalable, serverless, embeddable, machine learning-powered business intelligence (BI) service built for the cloud. These two services are not used in this solution but can be added.

Conceptual Data Lake


Infrastructure

The figure below represents the infrastructure resources we provision for the data lake; a simplified CDK sketch of the networking resources follows the figure.

  1. Amazon Virtual Private Cloud (VPC)
  2. Subnets
  3. Security Groups
  4. Route Table(s)
  5. VPC Endpoints
  6. Amazon S3 buckets for:
    1. Collect data
    2. Cleanse/Curate data
    3. Consume data

Data Lake Infrastructure Architecture
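Below is a simplified sketch of the kind of networking that vpc_stack.py provisions. The CIDR range, subnet layout, and endpoint selection are placeholders rather than the stack's actual values.

```python
# Illustrative networking sketch; CIDR, subnet layout, and endpoints are
# placeholders, not the values used by vpc_stack.py.
import aws_cdk as cdk
from aws_cdk import aws_ec2 as ec2
from constructs import Construct


class VpcStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        vpc = ec2.Vpc(
            self,
            "DataLakeVpc",
            ip_addresses=ec2.IpAddresses.cidr("10.0.0.0/22"),
            max_azs=2,
            subnet_configuration=[
                ec2.SubnetConfiguration(
                    name="Private",
                    subnet_type=ec2.SubnetType.PRIVATE_ISOLATED,
                    cidr_mask=24,
                )
            ],
        )
        # Gateway endpoint so jobs in the private subnets can reach S3
        # without traversing the public internet.
        vpc.add_gateway_endpoint(
            "S3Endpoint", service=ec2.GatewayVpcEndpointAwsService.S3)
        # Interface endpoints for other services called from the VPC.
        vpc.add_interface_endpoint(
            "GlueEndpoint", service=ec2.InterfaceVpcEndpointAwsService.GLUE)
        vpc.add_interface_endpoint(
            "KmsEndpoint", service=ec2.InterfaceVpcEndpointAwsService.KMS)
```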


Codebase

Source Code Structure

The table below explains how this source code is structured:

  app.py: Application entry point.
  code_commit_stack.py: Optional stack to deploy an empty CodeCommit repository for mirroring.
  pipeline_stack.py: Pipeline stack entry point.
  pipeline_deploy_stage.py: Pipeline deploy stage entry point.
  s3_bucket_zones_stack.py: Stack that creates the S3 buckets (raw, conformed, and purpose-built). It also creates an S3 bucket for server access logging and an AWS KMS key to enable server-side encryption for all buckets.
  tagging.py: Program to tag all provisioned resources.
  vpc_stack.py: Contains all resources related to the VPC used by the Data Lake infrastructure and services, including the VPC, Security Groups, and VPC Endpoints (both Gateway and Interface types).
  test: Folder containing pytest unit tests (a sample test sketch follows this table).
  resources: Folder containing static resources such as architecture diagrams.
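Since the test folder uses pytest, here is a hedged sketch of the style of unit test it could contain, built on the aws_cdk.assertions module. The stack under test is a trimmed stand-in defined inline, not the repository's actual stack, and the assertion values are illustrative.

```python
# Sketch of a pytest unit test against a synthesized CDK template; the stack
# below is a stand-in for s3_bucket_zones_stack.py, trimmed to one bucket.
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3
from aws_cdk.assertions import Template
from constructs import Construct


class MinimalBucketStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self,
            "CollectBucket",
            encryption=s3.BucketEncryption.KMS_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
        )


def test_bucket_is_encrypted():
    app = cdk.App()
    template = Template.from_stack(MinimalBucketStack(app, "TestStack"))
    # Exactly one bucket, and it must declare KMS server-side encryption.
    template.resource_count_is("AWS::S3::Bucket", 1)
    template.has_resource_properties(
        "AWS::S3::Bucket",
        {"BucketEncryption": {"ServerSideEncryptionConfiguration": [
            {"ServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
        ]}},
    )
```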

Automation scripts

This repository includes the following automation scripts to complete steps before deployment:

  1. bootstrap_deployment_account.sh: Used to bootstrap the deployment account.
  2. bootstrap_target_account.sh: Used to bootstrap target environments, for example dev, test, and production.
  3. configure_account_secrets.py: Used to configure an account secret for the GitHub access token (a sketch follows this list).
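For illustration, a minimal sketch of what configure_account_secrets.py might look like is shown below, assuming it stores a GitHub personal access token in AWS Secrets Manager with boto3. The secret name and prompt are placeholders, not the script's actual values.

```python
# Hedged sketch: store a GitHub personal access token in AWS Secrets Manager
# for the pipeline's source stage. Secret name and prompt are illustrative.
import getpass

import boto3


def store_github_token(secret_name: str = "example/github-token") -> None:
    token = getpass.getpass("GitHub personal access token: ")
    client = boto3.client("secretsmanager")
    client.create_secret(Name=secret_name, SecretString=token)


if __name__ == "__main__":
    store_github_token()
```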

Authors and Reviewers

The following people are involved in the design, architecture, development, testing, and review of this solution:

  1. Cory Visi, Senior Solutions Architect, Amazon Web Services
  2. Ratnadeep Bardhan Roy, Senior Solutions Architect, Amazon Web Services
  3. Isaiah Grant, Cloud Consultant, 2nd Watch, Inc.
  4. Muhammad Zahid Ali, Data Architect, Amazon Web Services
  5. Ravi Itha, Senior Data Architect, Amazon Web Services
  6. Justiono Putro, Cloud Infrastructure Architect, Amazon Web Services
  7. Mike Apted, Principal Solutions Architect, Amazon Web Services
  8. Nikunj Vaidya, Senior DevOps Specialist, Amazon Web Services

License Summary

This sample code is made available under the MIT-0 license. See the LICENSE file.

Copyright Amazon.com and its affiliates; all rights reserved. This file is Amazon Web Services Content and may not be duplicated or distributed without permission.
