Model driven resource provisioning and deployment framework using StackQL.
Project description
stackql-deploy is a multi-cloud Infrastructure as Code (IaC) framework built on StackQL and inspired by dbt (data build tool). dbt manages data transformation workflows in analytics engineering by treating SQL scripts as models that can be built, tested, and materialized incrementally; stackql-deploy applies the same idea to infrastructure provisioning, treating IaC queries as models that can be deployed, managed, and interconnected.
This ELT-inspired, model-based approach to IaC lets you provision, test, update, and tear down multi-cloud stacks much as dbt manages data transformation projects, with the benefits of version control, peer review, and automation. It enables you to deploy complex, interdependent infrastructure components in a reliable and repeatable manner.
StackQL simplifies interaction with cloud resources through SQL-like syntax, making it easier to define and execute complex cloud management operations. Resources are provisioned with INSERT statements and tests are structured around SELECT statements.
Features include:
- Dynamic state determination (eliminating the need for state files)
- Simple flow control with rollback capabilities
- Single code base for multiple target environments
- SQL-based definitions for resources and tests
How stackql-deploy Works
stackql-deploy orchestrates cloud resource provisioning by parsing SQL-like resource definitions. It uses exists checks to determine whether a resource needs to be created or updated, and post-deployment verifications to confirm that each resource reaches its desired configuration.
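Conceptually, the per-resource lifecycle looks like the sketch below. This is an illustration of the logic only, not the actual implementation; run_query and the resource object are hypothetical stand-ins for the tool's internals.

def deploy_resource(resource, run_query):
    # exists: check whether the resource is already provisioned
    exists = run_query(resource.queries["exists"])
    if int(exists[0]["count"]) == 0:
        # create: provision the resource with its INSERT statement
        run_query(resource.queries["create"])
    elif "update" in resource.queries:
        # update: converge an existing resource towards the desired configuration
        run_query(resource.queries["update"])
    # statecheck: post-deployment verification (typically retried)
    state = run_query(resource.queries["statecheck"])
    if int(state[0]["count"]) != 1:
        raise RuntimeError(f"{resource.name} failed post-deployment verification")
    # exports: surface values for use by downstream resources
    if "exports" in resource.queries:
        return run_query(resource.queries["exports"])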
Installing from PyPI
To install stackql-deploy directly from PyPI, run the following command:
pip install stackql-deploy
This will install the latest version of stackql-deploy and its dependencies from the Python Package Index.
Running stackql-deploy
Once installed, use the build, test, or teardown commands as shown here:
stackql-deploy build prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000 --dry-run
stackql-deploy build prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000
stackql-deploy test prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000
stackql-deploy teardown prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000
Additional options include (a combined example appears after this list):
--dry-run: perform a dry run of the stack operations.
--on-failure=rollback: action on failure: rollback, ignore or error.
--env-file=.env: specify an environment variable file.
-e KEY=value: pass additional environment variables.
--log-level: logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL); defaults to INFO.
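For example, a build that reads variables from an environment file, rolls back on failure, and emits debug logging could be invoked as follows (the subscription ID is a placeholder):

stackql-deploy build prd example_stack \
  --env-file=.env \
  -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000 \
  --on-failure=rollback \
  --log-level DEBUG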
Use stackql-deploy info to show information about the package and environment, for example:
$ stackql-deploy info
stackql-deploy version: 1.0.0
pystackql version : 3.5.4
stackql version : v0.5.612
stackql binary path : /mnt/c/LocalGitRepos/stackql/stackql-deploy/stackql
platform : Linux x86_64 (Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35), Python 3.10.12
Use the --help option to see more information about the commands and options available:
stackql-deploy --help
Project Structure
stackql-deploy uses a modular structure where each component of the infrastructure is defined in separate files, allowing for clear separation of concerns and easy management. This example is based on a stack named example_stack, with a resource named monitor_resource_group.
├── example_stack
│   ├── stackql_manifest.yml
│   └── resources
│       └── monitor_resource_group.iql
Manifest File
The stackql_manifest.yml file defines your stack and manages dependencies between infrastructure components. It specifies which resources need to be provisioned before others and parameterizes resources based on environment variables or other configuration.
Providers: List the cloud service providers that your stack will interact with. Each provider specified in the list will be initialized and made ready for use with the stack.
providers:
  - azure
  - github
Globals: Defines a set of global variables that can be used across the entire stack configuration. These variables can hold values related to environment settings, default configurations, or any commonly used data.
globals:
  - name: subscription_id
    description: azure subscription id
    value: "{{ vars.AZURE_SUBSCRIPTION_ID }}"
  - name: location
    value: eastus
  ... (additional globals)
Resources: Describes all the infrastructure components, such as networks, compute instances, databases, etc., that make up your stack. Here you can define the resources, their properties, and any dependencies between them.
resources:
  - name: resource_group
    description: azure resource group for activity monitor app
  - name: storage_account
    description: azure storage account for activity monitor app
    ... (additional properties and exports) ...
Each resource can have the following attributes (a combined example appears after this list):
Name: A unique identifier for the resource within the stack.
Description: A brief explanation of the resource’s purpose and functionality.
Type: (Optional) Specifies the kind of resource (e.g., ‘resource’, ‘query’, ‘script’).
Props: (Optional) Lists the properties of the resource that define its configuration.
Exports: (Optional) Variables that are exported by this resource which can be used by other resources.
Protected: (Optional) A list of sensitive information that should not be logged or exposed outside secure contexts.
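Putting these attributes together, a resource entry might look like the following sketch (the property values shown are illustrative, not taken from a real stack; the variable names mirror those used in the SQL examples later in this document):

- name: storage_account
  description: azure storage account for activity monitor app
  props:
    - name: storage_account_name
      value: activitymonitorstorage
    - name: storage_kind
      value: StorageV2
  exports:
    - storage_account_name
    - storage_account_key
  protected:
    - storage_account_key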
Scripts: If your stack involves executing scripts for setup, data manipulation, or deployment actions, they are defined under resources with a type of 'script'.
- name: install_dependencies
  type: script
  run: |
    pip install pynacl
    ...
The script’s execution output can be captured and used within the stack or for further processing.
Integration with External Systems: For stacks that interact with external services like GitHub, special resource types like ‘query’ can be used to fetch data from these services and use it within your deployment.
- name: get_github_public_key
  type: query
  ... (additional properties and exports)
This can be useful for dynamic configurations based on external state or metadata.
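For example, a query resource that fetches a repository public key from GitHub might be declared as shown below (the export names here are illustrative), with the corresponding SELECT statement kept in its .iql file under resources:

- name: get_github_public_key
  type: query
  exports:
    - github_public_key
    - github_public_key_id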
Resource and Test SQL Files
These files define the SQL-like commands for creating, updating, and testing the deployment of resources.
Resource SQL (resources/monitor_resource_group.iql):
/*+ create */
INSERT INTO azure.resources.resource_groups(
resourceGroupName,
subscriptionId,
data__location
)
SELECT
'{{ resource_group_name }}',
'{{ subscription_id }}',
'{{ location }}'
/*+ update */
UPDATE azure.resources.resource_groups
SET data__location = '{{ location }}'
WHERE resourceGroupName = '{{ resource_group_name }}'
AND subscriptionId = '{{ subscription_id }}'
/*+ delete */
DELETE FROM azure.resources.resource_groups
WHERE resourceGroupName = '{{ resource_group_name }}' AND subscriptionId = '{{ subscription_id }}'
Test SQL (resources/monitor_resource_group.iql):
/*+ exists */
SELECT COUNT(*) as count FROM azure.storage.accounts
WHERE SPLIT_PART(SPLIT_PART(JSON_EXTRACT(properties, '$.primaryEndpoints.blob'), '//', 2), '.', 1) = '{{ storage_account_name }}'
AND subscriptionId = '{{ subscription_id }}'
AND resourceGroupName = '{{ resource_group_name }}'
/*+ statecheck, retries=5, retry_delay=5 */
SELECT
COUNT(*) as count
FROM azure.storage.accounts
WHERE SPLIT_PART(SPLIT_PART(JSON_EXTRACT(properties, '$.primaryEndpoints.blob'), '//', 2), '.', 1) = '{{ storage_account_name }}'
AND subscriptionId = '{{ subscription_id }}'
AND resourceGroupName = '{{ resource_group_name }}'
AND kind = '{{ storage_kind }}'
AND JSON_EXTRACT(sku, '$.name') = 'Standard_LRS'
AND JSON_EXTRACT(sku, '$.tier') = 'Standard'
/*+ exports, retries=5, retry_delay=5 */
select json_extract(keys, '$[0].value') as storage_account_key
from azure.storage.accounts_keys
WHERE resourceGroupName = '{{ resource_group_name }}'
AND subscriptionId = '{{ subscription_id }}'
AND accountName = '{{ storage_account_name }}'
Resource SQL Anchors
Resource SQL files use special anchor comments as directives for the stackql-deploy tool to indicate the intended operations:
/*+ create */ This anchor precedes SQL INSERT statements for creating new resources.
/*+ create */
INSERT INTO azure.resources.resource_groups(
  resourceGroupName,
  subscriptionId,
  data__location
)
SELECT
  '{{ resource_group_name }}',
  '{{ subscription_id }}',
  '{{ location }}'
/*+ createorupdate */ Specifies an operation to either create a new resource or update an existing one.
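For example, where the underlying provider operation is idempotent (such as an Azure PUT), a createorupdate anchor can wrap the same INSERT used for creation; the following is a sketch based on the resource group example above:

/*+ createorupdate */
INSERT INTO azure.resources.resource_groups(
  resourceGroupName,
  subscriptionId,
  data__location
)
SELECT
  '{{ resource_group_name }}',
  '{{ subscription_id }}',
  '{{ location }}'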
/*+ update */ Marks SQL UPDATE statements intended to modify existing resources.
/*+ delete */ Tags SQL DELETE statements for removing resources from the environment.
Query SQL Anchors
Query SQL files contain SQL statements for testing and validation with the following anchors:
/*+ exists */ Used to perform initial checks before a deployment.
/*+ exists */
SELECT COUNT(*) as count
FROM azure.resources.resource_groups
WHERE subscriptionId = '{{ subscription_id }}'
AND resourceGroupName = '{{ resource_group_name }}'
/*+ statecheck, retries=5, retry_delay=5 */ Post-deployment checks to confirm the success of the operation, with optional retries and retry_delay parameters.
/*+ statecheck, retries=5, retry_delay=5 */
SELECT COUNT(*) as count
FROM azure.resources.resource_groups
WHERE subscriptionId = '{{ subscription_id }}'
AND resourceGroupName = '{{ resource_group_name }}'
AND location = '{{ location }}'
AND JSON_EXTRACT(properties, '$.provisioningState') = 'Succeeded'
/*+ exports, retries=5, retry_delay=5 */ Extracts and exports information after a deployment. Similar to post-deploy checks but specifically for exporting data.
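For example, an exports query for the resource group above could capture its location for use by downstream resources (a sketch; the exported variable name is illustrative):

/*+ exports */
SELECT location as resource_group_location
FROM azure.resources.resource_groups
WHERE subscriptionId = '{{ subscription_id }}'
AND resourceGroupName = '{{ resource_group_name }}'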
stackql-deploy simplifies cloud resource management by treating infrastructure as flexible, dynamically assessed code.
File details
Details for the file stackql_deploy-1.8.0.tar.gz.
File metadata
- Download URL: stackql_deploy-1.8.0.tar.gz
- Upload date:
- Size: 33.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.10.12
File hashes
Algorithm | Hash digest
---|---
SHA256 | 4f6251ebe0c1231f905ddd2140cf7eb09cc4e38001f5ae39fd555ce367e6536f
MD5 | f53472bc49f9ee8f7d52716ed786150e
BLAKE2b-256 | 119fa212fee843d8cb59fc9ba4d2178fc1f393d6e0811a51646d4ce3c364d034
File details
Details for the file stackql_deploy-1.8.0-py3-none-any.whl.
File metadata
- Download URL: stackql_deploy-1.8.0-py3-none-any.whl
- Upload date:
- Size: 36.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.10.12
File hashes
Algorithm | Hash digest
---|---
SHA256 | 21339a8a3514fbb2fda2d0a72bb0822162398efb39dde0e0b5501de17b0629fc
MD5 | a718d0fa39d0466969d7aab1ec48f775
BLAKE2b-256 | db3ab71e8bba05bfe80700fac65cab4b5c86a1afa2edf071e8f6b6efbe71daaf