Model driven resource provisioning and deployment framework using StackQL.
stackql-deploy is a multi-cloud Infrastructure as Code (IaC) framework built on StackQL and inspired by dbt (data build tool). dbt manages data transformation workflows in analytics engineering by treating SQL scripts as models that can be built, tested, and materialized incrementally; stackql-deploy applies the same idea to infrastructure provisioning, treating IaC queries as models that can be deployed, managed, and interconnected.
This ELT/model-based approach to IaC lets you provision, test, update, and tear down multi-cloud stacks much as dbt manages data transformation projects, with the benefits of version control, peer review, and automation. It enables complex, interdependent infrastructure components to be deployed in a reliable and repeatable manner.
StackQL simplifies interaction with cloud resources by exposing them through SQL-like syntax, making it easier to define and execute complex cloud management operations. Resources are provisioned with INSERT statements, and tests are structured around SELECT statements.
Features include:
- Dynamic state determination (eliminating the need for state files)
- Simple flow control with rollback capabilities
- Single code base for multiple target environments
- SQL-based definitions for resources and tests
How stackql-deploy Works
stackql-deploy orchestrates cloud resource provisioning by parsing SQL-like definitions. It determines whether resources need to be created or updated using preflight checks, and confirms that resources exist in the desired configuration using post-deployment verifications.
Installing from PyPI
To install stackql-deploy directly from PyPI, run the following command:
pip install stackql-deploy
This will install the latest version of stackql-deploy and its dependencies from the Python Package Index.
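Optionally, install it into a Python virtual environment to keep its dependencies isolated (standard Python tooling, not specific to stackql-deploy):

python3 -m venv venv
source venv/bin/activate
pip install stackql-deploy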
Running stackql-deploy
Once installed, use the build, test, or teardown commands as shown here:
stackql-deploy build prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000 --dry-run
stackql-deploy build prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000
stackql-deploy test prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000
stackql-deploy teardown prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000
Additional options include:
- --dry-run: perform a dry run of the stack operations.
- --on-failure=rollback: action to take on failure: rollback, ignore, or error.
- --env-file=.env: specify an environment variable file.
- -e KEY=value: pass additional environment variables.
- --log-level: set the logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL); defaults to INFO.
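For example, several of these options can be combined in a single invocation (a sketch reusing the example stack and subscription id placeholder shown above; flag spellings follow the list above):

stackql-deploy build prd example_stack \
  -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000 \
  --env-file=.env \
  --on-failure=rollback \
  --log-level DEBUG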
Use stackql-deploy info to show information about the package and environment, for example:
$ stackql-deploy info
stackql-deploy version: 1.0.0
pystackql version : 3.5.4
stackql version : v0.5.612
stackql binary path : /mnt/c/LocalGitRepos/stackql/stackql-deploy/stackql
platform : Linux x86_64 (Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35), Python 3.10.12
Use the --help option to see more information about the commands and options available:
stackql-deploy --help
Project Structure
stackql-deploy uses a modular structure where each component of the infrastructure is defined in separate files, allowing for clear separation of concerns and easy management. This example is based on a stack named example_stack, with a resource named monitor_resource_group.
├── example_stack
│   ├── stackql_manifest.yml
│   └── resources
│       └── monitor_resource_group.iql
Manifest File
The stackql_manifest.yml file defines your stack and manages dependencies between infrastructure components: it specifies which resources need to be provisioned before others and parameterizes resources based on environment variables or other configuration.
Providers: List the cloud service providers that your stack will interact with. Each provider specified in the list will be initialized and made ready for use with the stack.
providers:
  - azure
  - github
Globals: Defines a set of global variables that can be used across the entire stack configuration. These variables can hold values related to environment settings, default configurations, or any commonly used data.
globals:
  - name: subscription_id
    description: azure subscription id
    value: "{{ vars.AZURE_SUBSCRIPTION_ID }}"
  - name: location
    value: eastus
  ... (additional globals)
Resources: Describes all the infrastructure components, such as networks, compute instances, databases, etc., that make up your stack. Here you can define the resources, their properties, and any dependencies between them.
resources:
  - name: resource_group
    description: azure resource group for activity monitor app
  - name: storage_account
    description: azure storage account for activity monitor app
    ... (additional properties and exports) ...
Each resource can have the following attributes:
- Name: A unique identifier for the resource within the stack.
- Description: A brief explanation of the resource's purpose and functionality.
- Type: (Optional) Specifies the kind of resource (e.g., 'resource', 'query', 'script').
- Props: (Optional) Lists the properties of the resource that define its configuration.
- Exports: (Optional) Variables that are exported by this resource and can be used by other resources.
- Protected: (Optional) A list of sensitive values that should not be logged or exposed outside secure contexts.
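Putting these attributes together, a resource entry in the manifest might look like the following sketch (the prop values are placeholder assumptions; the names storage_account_name, storage_kind, and storage_account_key correspond to the templated variables used in the query examples later in this document):

- name: storage_account
  description: azure storage account for activity monitor app
  props:
    - name: storage_account_name   # illustrative prop; actual props vary by stack
      value: activitymonitorsa
    - name: storage_kind
      value: StorageV2
  exports:
    - storage_account_name
    - storage_account_key
  protected:
    - storage_account_key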
Scripts: If your stack involves the execution of scripts for setup, data manipulation, or deployment actions, they are defined under the resources with a type of ‘script’.
- name: install_dependencies
  type: script
  run: |
    pip install pynacl
    ...
The script’s execution output can be captured and used within the stack or for further processing.
Integration with External Systems: For stacks that interact with external services like GitHub, special resource types like ‘query’ can be used to fetch data from these services and use it within your deployment.
- name: get_github_public_key
  type: query
  ... (additional properties and exports)
This can be useful for dynamic configurations based on external state or metadata.
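As a sketch (the export names below are assumptions for illustration), such a query resource exports values that later resources or scripts can reference through the same templating mechanism used for globals:

- name: get_github_public_key
  type: query
  exports:
    - github_public_key       # illustrative export names
    - github_public_key_id

# later resources or scripts can then reference the exported values in their
# templates, for example '{{ github_public_key }}'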
Resource and Test SQL Files
These files define the SQL-like commands for creating, updating, and testing the deployment of resources.
Resource SQL (resources/monitor_resource_group.iql):
/*+ create */
INSERT INTO azure.resources.resource_groups(
resourceGroupName,
subscriptionId,
data__location
)
SELECT
'{{ resource_group_name }}',
'{{ subscription_id }}',
'{{ location }}'
/*+ update */
UPDATE azure.resources.resource_groups
SET data__location = '{{ location }}'
WHERE resourceGroupName = '{{ resource_group_name }}'
AND subscriptionId = '{{ subscription_id }}'
/*+ delete */
DELETE FROM azure.resources.resource_groups
WHERE resourceGroupName = '{{ resource_group_name }}' AND subscriptionId = '{{ subscription_id }}'
Test SQL (resources/monitor_resource_group.iql):
/*+ preflight */
SELECT COUNT(*) as count FROM azure.storage.accounts
WHERE SPLIT_PART(SPLIT_PART(JSON_EXTRACT(properties, '$.primaryEndpoints.blob'), '//', 2), '.', 1) = '{{ storage_account_name }}'
AND subscriptionId = '{{ subscription_id }}'
AND resourceGroupName = '{{ resource_group_name }}'
/*+ postdeploy, retries=5, retry_delay=5 */
SELECT
COUNT(*) as count
FROM azure.storage.accounts
WHERE SPLIT_PART(SPLIT_PART(JSON_EXTRACT(properties, '$.primaryEndpoints.blob'), '//', 2), '.', 1) = '{{ storage_account_name }}'
AND subscriptionId = '{{ subscription_id }}'
AND resourceGroupName = '{{ resource_group_name }}'
AND kind = '{{ storage_kind }}'
AND JSON_EXTRACT(sku, '$.name') = 'Standard_LRS'
AND JSON_EXTRACT(sku, '$.tier') = 'Standard'
/*+ exports, retries=5, retry_delay=5 */
SELECT JSON_EXTRACT(keys, '$[0].value') as storage_account_key
FROM azure.storage.accounts_keys
WHERE resourceGroupName = '{{ resource_group_name }}'
AND subscriptionId = '{{ subscription_id }}'
AND accountName = '{{ storage_account_name }}'
Resource SQL Anchors
Resource SQL files use special anchor comments as directives for the stackql-deploy tool to indicate the intended operations:
/*+ create */ This anchor precedes SQL INSERT statements for creating new resources.
/*+ create */
INSERT INTO azure.resources.resource_groups(
resourceGroupName,
subscriptionId,
data__location
)
SELECT
'{{ resource_group_name }}',
'{{ subscription_id }}',
'{{ location }}'
/*+ createorupdate */ Specifies an operation to either create a new resource or update an existing one.
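No createorupdate example appears in this stack; as a sketch, it would typically precede the same style of INSERT shown for create, relying on the provider operation being idempotent (for example, Azure resource PUT semantics) so that one statement covers both cases:

/*+ createorupdate */
INSERT INTO azure.resources.resource_groups(
resourceGroupName,
subscriptionId,
data__location
)
SELECT
'{{ resource_group_name }}',
'{{ subscription_id }}',
'{{ location }}'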
/*+ update */ Marks SQL UPDATE statements intended to modify existing resources.
/*+ delete */ Tags SQL DELETE statements for removing resources from the environment.
Query SQL Anchors
Query SQL files contain SQL statements for testing and validation with the following anchors:
/*+ preflight */ Used to perform initial checks before a deployment.
/*+ preflight */
SELECT COUNT(*) as count FROM azure.resources.resource_groups
WHERE subscriptionId = '{{ subscription_id }}'
AND resourceGroupName = '{{ resource_group_name }}'
/*+ postdeploy, retries=5, retry_delay=5 */ Post-deployment checks to confirm the success of the operation, with optional retries and retry_delay parameters.
/*+ postdeploy, retries=5, retry_delay=5 */
SELECT COUNT(*) as count FROM azure.resources.resource_groups
WHERE subscriptionId = '{{ subscription_id }}'
AND resourceGroupName = '{{ resource_group_name }}'
AND location = '{{ location }}'
AND JSON_EXTRACT(properties, '$.provisioningState') = 'Succeeded'
/*+ exports, retries=5, retry_delay=5 */ Extracts and exports information after a deployment. Similar to post-deploy checks but specifically for exporting data.
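For example, the exports query from the storage account tests shown earlier selects the values to be exported as variables:

/*+ exports, retries=5, retry_delay=5 */
SELECT JSON_EXTRACT(keys, '$[0].value') as storage_account_key
FROM azure.storage.accounts_keys
WHERE resourceGroupName = '{{ resource_group_name }}'
AND subscriptionId = '{{ subscription_id }}'
AND accountName = '{{ storage_account_name }}'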
stackql-deploy simplifies cloud resource management by treating infrastructure as flexible, dynamically assessed code.