
GCP Provider Layer Plugin for the Virtual Test Development System (vTDS) suite


vtds-provider-gcp

The GCP provider layer implementation for vTDS allowing a vTDS cluster to be built as a GCP project.

Description

This repo provides the code and a base configuration to deploy a vTDS cluster in a Google Cloud Platform (GCP) project within an existing Google organization. It is intended as the GCP Provider Layer for vTDS, a provider- and product-neutral framework for building virtual clusters used to test and develop software. The Provider Layer defines the configuration structure and software implementation required to establish the lowest-level resources needed for a vTDS cluster on a given host provider, in this case GCP.

Each Provider Layer implementation contains provider-specific code and a fully defined base configuration capable of deploying the provider resources of the cluster. The base configuration of the GCP Provider Layer implementation defines the default settings for the resources needed to construct a vTDS platform consisting of Ubuntu-based Linux GCP instances (Virtual Blades) connected by a GCP-provided network (Blade Interconnect) within a single VPC in a single GCP region. The Blade Interconnect and Virtual Blade configurations are provided as templates, or base classes, on which other configurations can be built. Each GCP instance (Virtual Blade) is configured to permit nested virtualization and has enough CPU and memory to host at least one nested virtual machine. The assignment of virtual machines (Virtual Nodes) and Virtual Networks to these blade and interconnect resources, as well as the configuration of Virtual Blades at the OS level, is handled in higher layers of the vTDS stack.

NOTE: While the base configuration contains examples of every configuration setting and its default value in a given context, it is not sufficient to deploy a Provider Layer for an actual vTDS system. Three things are needed to complete a working configuration:

  • The GCP Organization configuration of the system
  • A Blade Interconnect configuration that is not a 'pure_base_class'
  • A Virtual Blade configuration with at least one instance specified that is not a 'pure_base_class'
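For illustration, an overlay satisfying the last two requirements might look something like the sketch below. Only the 'pure_base_class' key comes from this README; the section names, blade and interconnect names, and the 'count' setting are assumptions that should be checked against the annotated base configuration:

```yaml
# Hypothetical overlay sketch -- key names other than 'pure_base_class'
# are assumptions, not taken from the actual base configuration.
provider:
  blade_interconnects:
    example-interconnect:
      pure_base_class: false   # a concrete, deployable interconnect
  virtual_blades:
    example-blade:
      pure_base_class: false   # a concrete, deployable blade class
      count: 1                 # assumed setting: deploy one instance
```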

The GCP Organization overlay provides information specific to your GCP Organization. There is more information on this in the Getting Started Guide section of this README.

Canned configuration overlays for all layers of vTDS, appropriate for various applications, can be found in the vtds-configs GitHub repository. Canned configuration overlays that offer GCP Provider Layer specific configuration of Blade Interconnects and Virtual Blades (among other things) are available in the layers/provider/gcp sub-directory of that repository.

An overview of vTDS is available in the vTDS Core Repository.

Getting Started with the GCP Provider Implementation

GCP Resources, Roles and Tools

As its name suggests, the GCP Provider Layer Implementation uses Google Cloud Platform (GCP) to implement a vTDS Provider Layer. To use GCP, a user must have access to the resources of a GCP Organization, must be assigned a set of roles related to those resources, and must have the necessary GCP-related tools installed on their local system. Much of this is administrative preparation outside the user's control; it is described here so that it can be arranged.

GCP Organization and Administrative Setup

The GCP Provider Layer requires you to have access to GCP through a GCP organization. You will need to arrange to create one, which will also involve setting up Google Cloud Identity or Google Workspace if you don't already have one. As part of that setup, a billing account will be created and associated with your organization. The billing account can be named anything, but for this guide we will name it gcp-billing.

The administrator of your organization must also create a folder for vTDS projects within your organization. They may name the folder anything they like, but for the sake of this guide, we will use the name vtds-systems.

Within the vtds-systems folder, your administrator must create a 'seed project' for vTDS deployments. The seed project is a GCP project that has no compute instances and serves as a persistent, well-known place to store vTDS system state using Google Cloud Storage. This project may also be named anything, but for this guide we will use vtds-seed.

Finally, your administrator should set up a Google Group within your organization. Membership in this group will grant the permissions needed to create, destroy and use vTDS systems. The group can be named anything, but for this guide we will use vtds-users, which, fully qualified, is vtds-users@myorganization.net if your organization's domain name is myorganization.net. This group needs the following access roles:

  • On the gcp-billing billing account, the vtds-users group needs to be a principal with the Billing User role.

  • At the GCP Organization level, the vtds-users group needs the Viewer role.

  • On the vtds-systems folder the vtds-users group needs the following roles:

    • Project Creator

    • Project Deleter

    • Project IAM Admin

    • Project Billing Manager

  • On the vtds-seed project, the vtds-users group needs the Storage Admin role.
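Using the example names from this guide, an administrator could grant these bindings with gcloud commands along the following lines. This is a sketch: the script only prints the commands so they can be reviewed before running, the IDs are placeholders you must replace, and the predefined role IDs shown are a best-effort mapping of the role names above that should be verified against the GCP IAM roles reference:

```shell
#!/bin/sh
# Sketch: print the gcloud IAM bindings needed by the vtds-users group.
# All IDs below are placeholders -- substitute your real organization,
# folder and billing account IDs, then review and run the output.
GROUP="group:vtds-users@myorganization.net"
ORG_ID="123456789012"                   # placeholder organization ID
FOLDER_ID="345678901234"                # placeholder vtds-systems folder ID
BILLING_ACCOUNT="XXXXXX-XXXXXX-XXXXXX"  # placeholder billing account ID

echo "gcloud billing accounts add-iam-policy-binding $BILLING_ACCOUNT --member=$GROUP --role=roles/billing.user"
echo "gcloud organizations add-iam-policy-binding $ORG_ID --member=$GROUP --role=roles/viewer"
for role in roles/resourcemanager.projectCreator \
            roles/resourcemanager.projectDeleter \
            roles/resourcemanager.projectIamAdmin \
            roles/billing.projectManager; do
    echo "gcloud resource-manager folders add-iam-policy-binding $FOLDER_ID --member=$GROUP --role=$role"
done
echo "gcloud projects add-iam-policy-binding vtds-seed --member=$GROUP --role=roles/storage.admin"
```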

GCP User Requirements and SDK Installation

As a vTDS user, you will need an account within your organization that is a member of the vtds-users group.

As a vTDS user, you need to have the Google Cloud SDK installed on your local system.

As a vTDS user, you will need to be logged into your GCP account both as an SDK user and as an application user (portions of the vTDS code must use the gcloud command instead of GCP client libraries, which forces vTDS to require both). To do this, run the following two commands on your local system:

gcloud auth login

and

gcloud auth application-default login

These will (typically) pop up a browser and let you log into your account and authorize access. The first authorizes SDK (gcloud command) access. The second authorizes application client library (in this case, primarily Terraform) access.

Terraform and Terragrunt Preparation

The vTDS GCP Provider implementation uses Terragrunt and Terraform to construct the GCP project that will be used for a vTDS cluster. The layer code manages the versions of Terraform and Terragrunt using the Terraform Version Manager (tfenv) and the Terragrunt Version Manager (tgenv). You will need to install both of these before using the GCP Provider Implementation.

Installation of the Terraform Version Manager is explained here.

Installation of Terragrunt Version Manager is explained here.
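As a quick orientation, one common installation method for both managers is a git clone into your home directory followed by a PATH change. The sketch below only prints the commands; the repository URLs and paths reflect each project's README but should be verified there before running:

```shell
#!/bin/sh
# Sketch: print one common way to install tfenv and tgenv (git clone
# into the home directory, then extend PATH). Verify these URLs and
# paths against each project's own installation instructions.
TFENV_REPO="https://github.com/tfutils/tfenv.git"
TGENV_REPO="https://github.com/cunymatthieu/tgenv.git"

echo "git clone --depth=1 $TFENV_REPO ~/.tfenv"
echo "git clone --depth=1 $TGENV_REPO ~/.tgenv"
echo 'export PATH="$HOME/.tfenv/bin:$HOME/.tgenv/bin:$PATH"'
```

Once both managers are on your PATH, the GCP Provider layer code itself selects and installs the Terraform and Terragrunt versions it needs through them.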

Using the GCP Provider Layer Implementation

To use the GCP Provider Layer Implementation in your vTDS stack, edit the core configuration you are using to deploy your vTDS system and configure the Provider Layer to pull in vtds-provider-gcp. The GCP Provider Layer Implementation is available as a stream of stable releases from PyPI or in source form from GitHub. When pulling from PyPI the version can be null, in which case the latest version will be used, or it can specify any of the published stable versions. When pulling from GitHub the version can be null, in which case the main branch will be used, or set to a tag, branch or digest indicating a git version.

Pulling from PyPI

Here is the form of the configuration for pulling the GCP Provider Layer Implementation from PyPI:

    provider:
      package: vtds-provider-gcp
      module: vtds_provider_gcp
      source_type: pypi
      metadata:
        version: null

Pulling from GitHub

Here is the form of the configuration for pulling the GCP Provider Layer Implementation from GitHub:

    provider:
      package: vtds-provider-gcp
      module: vtds_provider_gcp
      source_type: git
      metadata:
        url: "git@github.com:Cray-HPE/vtds-provider-gcp.git"
        version: null

Generally speaking, there will be a canned core configuration for your vTDS application available in the core configurations provided by vtds-configs that will already be set up to pull in the GCP Provider Layer Implementation, so you should be able to simply copy and modify that. Instructions for setting up to deploy your vTDS system can be found in the vTDS Core Getting Started guide.

Using an Organization Config Overlay

The canned core configurations generally split the Provider Layer configuration into two separate overlays: one that provides the application-specific configuration of the layer, and another that provides information about the organization hosting the vTDS system. By decoupling organization information, these two overlays allow multiple core configurations to share the same organization config for different applications, and multiple organizations to share the same application-specific configuration overlay, without conflict. This approach also allows an organization to host its organization configuration separately from the canned configurations. You will need to create an organization configuration overlay and make it available somewhere. You can:

  • add the necessary content directly to your core configuration,
  • keep it in a separate local file and supply it through command line options to the vtds commands,
  • host the file at a simple URL of your choosing, or
  • host the file in a GitHub or private remote Git repository.

In any case, your organization configuration should be based on this annotated example Organization configuration overlay.

Once you have the Organization configuration overlay prepared and hosted, assuming you are not putting it in the core configuration or in a local file, modify your core configuration file to pull in the Organization configuration overlay.
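To make the shape of such an overlay concrete, a sketch is shown below using the example names from this guide. Every key name here is an assumption for illustration only; the real structure must be taken from the annotated example Organization configuration overlay:

```yaml
# Hypothetical organization overlay -- all key names are assumptions;
# consult the annotated example Organization configuration overlay
# for the real structure.
provider:
  organization:
    billing_account: gcp-billing   # billing account from this guide
    parent_folder: vtds-systems    # folder holding vTDS projects
    seed_project: vtds-seed        # project holding vTDS system state
    domain: myorganization.net     # your organization's domain
```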

