
Cloud2SQL 🤩

Read infrastructure data from your cloud ☁️ and export it to a SQL database 📋.


Installation

Install via homebrew

This is the easiest way to install Cloud2SQL. Please note that the installation process will take a couple of minutes.

brew install someengineering/tap/cloud2sql

Install via Python pip

Alternatively, you can install Cloud2SQL as a Python package. Python 3.9 or higher is required.

If you only need support for a specific database, you can install one of cloud2sql[snowflake], cloud2sql[parquet], cloud2sql[postgresql], or cloud2sql[mysql] instead of cloud2sql[all].

pip3 install --user "cloud2sql[all]"

This will install the executable into the user install directory of your platform. Please make sure this directory is included in your PATH.
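
For example, if you only plan to export to PostgreSQL, installing just that extra keeps the dependency footprint small (PostgreSQL here is only an illustration; pick whichever extra matches your database):

pip3 install --user "cloud2sql[postgresql]"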

Usage

The sources and destinations for cloud2sql are configured via a configuration file. Create your own configuration by adjusting the config template file.

You can safely delete the sections that are not relevant to you (e.g. if you do not use AWS, you can delete the aws section). Each section corresponds to a cloud provider, and a provider is only collected if its configuration section is present.

In the next section you will create a YAML configuration file. Once you have created your configuration file, you can run cloud2sql with the following command:

cloud2sql --config myconfig.yaml

Configuration

Cloud2SQL uses a YAML configuration file to define the sources and destinations.

Sources

AWS

sources:
  aws:
    # AWS Access Key ID (null to load from env - recommended)
    access_key_id: null
    # AWS Secret Access Key (null to load from env - recommended)
    secret_access_key: null
    # IAM role name to assume
    role: null
    # List of AWS profiles to collect
    profiles: null
    # List of AWS Regions to collect (null for all)
    region: null
    # Scrape the entire AWS organization
    scrape_org: false
    # Assume given role in current account
    assume_current: false
    # Do not scrape current account
    do_not_scrape_current: false

Google Cloud

sources:
  gcp:
    # GCP service account file(s)
    service_account: []
    # GCP project(s)
    project: []

Kubernetes

sources:
  k8s:
    # Configure access via kubeconfig files.
    # Structure:
    #   - path: "/path/to/kubeconfig"
    #     all_contexts: false
    #     contexts: ["context1", "context2"]
    config_files: []
    # Alternative: configure access to k8s clusters directly in the config.
    # Structure:
    #   - name: 'k8s-cluster-name'
    #     certificate_authority_data: 'CERT'
    #     server: 'https://k8s-cluster-server.example.com'
    #     token: 'TOKEN'
    configs: []

DigitalOcean

sources:
  digitalocean:
    # DigitalOcean API tokens for the teams to be collected
    api_tokens: []
    # DigitalOcean Spaces access keys for the teams to be collected, separated by colons
    spaces_access_keys: []
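
For illustration, a filled-in DigitalOcean section might look like the following; the token and key values are placeholders, and each Spaces entry joins the access key and secret with a colon as described above:

sources:
  digitalocean:
    api_tokens: ['dop_v1_EXAMPLE_TOKEN']
    spaces_access_keys: ['SPACES_KEY:SPACES_SECRET']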

Destinations

SQLite

destinations:
  sqlite:
    database: /path/to/database.db

PostgreSQL

destinations:
  postgresql:
    host: 127.0.0.1
    port: 5432
    user: cloud2sql
    password: changeme
    database: cloud2sql
    args:
      key: value

MySQL

destinations:
  mysql:
    host: 127.0.0.1
    port: 3306
    user: cloud2sql
    password: changeme
    database: cloud2sql
    args:
      key: value

MariaDB

destinations:
  mariadb:
    host: 127.0.0.1
    port: 3306
    user: cloud2sql
    password: changeme
    database: cloud2sql
    args:
      key: value

Snowflake

destinations:
  snowflake:
    host: myorg-myaccount
    user: cloud2sql
    password: changeme
    database: cloud2sql/public
    args:
      warehouse: compute_wh
      role: accountadmin

Apache Parquet

destinations:
  file:
    path: /where/to/write/parquet/files/
    format: parquet
    batch_size: 100_000

CSV

destinations:
  file:
    path: /where/to/write/to/csv/files/
    format: csv
    batch_size: 100_000

Upload to S3

destinations:
  s3:
    uri: s3://bucket_name/
    region: eu-central-1
    format: csv
    batch_size: 100_000

Upload to Google Cloud Storage

destinations:
  gcs:
    uri: gs://bucket_name/
    format: parquet
    batch_size: 100_000

My database is not listed here

Cloud2SQL uses SQLAlchemy to connect to the database. If your database is not listed here, you can check if it is supported in SQLAlchemy Dialects. Install the relevant driver and use the connection string from the documentation.
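
As a sketch, suppose you want to target Microsoft SQL Server (a hypothetical choice for illustration): you would install a SQLAlchemy-supported driver and look up the corresponding connection URL format in the SQLAlchemy documentation before wiring it into your destination configuration.

# Illustrative only: install a driver that SQLAlchemy supports for your database,
# e.g. pyodbc for Microsoft SQL Server.
pip3 install --user pyodbc
# The SQLAlchemy documentation then describes the connection URL format, e.g.
#   mssql+pyodbc://user:password@mydsn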

Example

We use a minimal configuration example and export the data to a SQLite database. The example uses our AWS default credentials and the default Kubernetes config.
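
A minimal config-example.yaml along these lines might look as follows. This is a sketch assembled from the source and destination snippets above: with access_key_id and secret_access_key left as null, the AWS credentials are read from the environment, the kubeconfig path is a placeholder for your own, and the result is written to a local SQLite file.

sources:
  aws:
    access_key_id: null
    secret_access_key: null
  k8s:
    config_files:
      - path: "~/.kube/config"
        all_contexts: true
destinations:
  sqlite:
    database: cloud2sql.db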

cloud2sql --config config-example.yaml
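
Once the run finishes, the resulting SQLite file can be inspected with the standard sqlite3 shell. The table name in the second query is only an example; the actual tables depend on which resources were collected:

sqlite3 cloud2sql.db ".tables"
sqlite3 cloud2sql.db "SELECT count(*) FROM aws_ec2_instance;"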

For a more in-depth example, check out our blog post.

Local Development

Create a local development environment with the following command:

make setup
source venv/bin/activate

