A tool for analyzing HCL files.

🚀 HCL Processor with AWS Bedrock

MIT License · Python 3.10+ · PRs Welcome

⭐ Like this? Star the repo and help spread the word!

⚡ From Terraform to beautiful docs — powered by AI

AI (AWS Bedrock / Claude) automatically generates design and resource summaries by simply reading Terraform HCL files!

  • ✅ Just write the code and get the documentation.
  • ✅ Reduce man-hours for training and reviewing new staff.
  • ✅ Instant understanding of complex IaC structures.

Why this matters

  • Writing Terraform is fast, but documenting it slows teams down.
  • Missing docs = harder onboarding, harder reviews, more production risks.
  • With HCL Processor, AI handles the grunt work — so you can focus on building.

Project Summary

This Python tool reads Terraform HCL files, sends them to the AWS Bedrock API (e.g. Claude), and generates JSON output and Markdown reports. It supports automated analysis and documentation of infrastructure configurations.
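At a high level, the flow can be pictured like this. The sketch below is purely illustrative: `summarize_hcl` and `fake_model` are hypothetical names, and the real tool calls the Bedrock API rather than a stub:

```python
import json

def summarize_hcl(hcl_text, invoke_model):
    """Send HCL text to a model callable and parse its JSON reply.

    `invoke_model` stands in for the AWS Bedrock call: any callable
    that takes a prompt string and returns a JSON string.
    """
    prompt = "Summarize the following Terraform HCL as JSON:\n" + hcl_text
    return json.loads(invoke_model(prompt))

# Stub "model" so the sketch runs without AWS credentials.
def fake_model(prompt):
    return json.dumps([{"resource": "datadog_monitor.monitors"}])

result = summarize_hcl('resource "datadog_monitor" "monitors" {}', fake_model)
```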

Features and Functions at a Glance

[!warning] The default region is us-east-1. To change it, set bedrock.aws_region in config.yaml.

  • ✅ Validate YAML configuration files with JSON Schema
  • ✅ Reading and parsing HCL files and module files
  • ✅ Bedrock API call and response parsing
  • ✅ JSON output and Markdown report generation
  • ✅ Per-file/per-folder batch processing support
  • ✅ Fallback ("failback") design: chunked processing with retries
  • ✅ Log level adjustment (INFO/DEBUG)
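As an illustration of the JSON output and Markdown report generation step, a minimal `{title}`/`{table}` renderer might look like this (a sketch only; the tool's internal implementation may differ):

```python
def render_markdown(template, title, rows, columns):
    """Fill a {title}/{table} template with a pipe table built from rows."""
    header = "| " + " | ".join(columns) + " |"
    separator = "| " + " | ".join("---" for _ in columns) + " |"
    body = ["| " + " | ".join(str(row.get(col, "")) for col in columns) + " |"
            for row in rows]
    table = "\n".join([header, separator, *body])
    return template.format(title=title, table=table)

md = render_markdown("##### {title}\n\n{table}", "apigateway",
                     [{"Monitor Name": "429 Errors", "Type": "log alert"}],
                     ["Monitor Name", "Type"])
```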

🌟 Who will find this tool useful

  • 🛠 Terraform engineers tired of writing docs by hand every day.
  • 👷‍♂️ SREs and infrastructure engineers who build and maintain Terraform HCL.
  • 👩‍💼 Managers and leads who need to explain designs to new or non-technical staff.
  • 👨‍👩‍👧‍👦 Development teams looking to enrich PRs and review discussions with design intent.

Install

> pip install hcl_processor

Usage Example

config.yaml

bedrock:
  # Note: The accuracy of the generated markdown can be improved by providing details of the content to be generated in system_prompt.
  system_prompt: |
    <insert prompt>
  output_json:
    {
      "$schema": "http://json-schema.org/draft-07/schema#",
      "type": "array",
      "items": { "type": "object" }
    }
  # Note: The following is the configuration for the Bedrock API.
  payload:
    anthropic_version: "bedrock-2023-05-31"  # Anthropic (e.g. Claude) API version to use for the Bedrock integration.
    max_tokens: 4096                         # Maximum number of tokens the model may generate; limits the response length.
    temperature: 0                           # Controls output randomness (0 is fully deterministic, i.e. the most predictable response).
    top_p: 1                                 # Nucleus sampling parameter; 1 samples from the full distribution (no limit).
    top_k: 0                                 # Sample only from the top k tokens (0 disables the limit, so all tokens are eligible).

  read_timeout: 120
  connect_timeout: 30

  retries:
    max_attempts: 5
    mode: standard

input:
  # Note: List the Terraform HCL files you want to generate documentation for.
  resource_data:
    files:
      - input/example.tf
  modules:
    # Note: Specify the module (the code where the resource is defined) for the resources you want documented.
    path: input/modules.tf
    enabled: true  # Toggle module processing on or off.
  # Note: locals files, mapped as <env> (key): <directory>/file (value).
  local_files:
    - dev: input/dev.tf

output:
  # Note: Path of the intermediate JSON file used to generate the Markdown.
  json_path: output/result.json
  # Note: Path of the generated Markdown file.
  markdown_path: output/result.md
  # Note: Optional Markdown template; supports the {title} and {table} placeholders.
  markdown_template: |
    ##### {title}

    {table}
  # Note: Settings for fallback ("failback") processing: retry in chunks when a request fails.
  failback:
    enabled: true  # Toggle fallback processing on or off.
    # Note: For type, select modules or resource.
    type: modules
    # Note: options can only be set for the modules type.
    options:
      target: monitors # variable name
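The payload section above corresponds roughly to the body of an Anthropic-style Bedrock invocation. The sketch below only assembles the request body (no API call is made; `build_body` is a hypothetical helper, not the tool's API):

```python
import json

# Mirrors the `bedrock.payload` block above.
payload_config = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 4096,
    "temperature": 0,
    "top_p": 1,
    "top_k": 0,
}

def build_body(system_prompt, hcl_text, payload):
    """Merge the configured model parameters with the prompt into a JSON body."""
    return json.dumps({
        **payload,
        "system": system_prompt,
        "messages": [{"role": "user", "content": hcl_text}],
    })

body = build_body("Summarize this HCL.", 'resource "x" "y" {}', payload_config)
```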

Use Case

Create a monitor list from a Terraform module and the HCL that calls it

[!warning] Failback is not supported when multiple module calls are written in one file. Support is planned for the next version, but implementing it will increase the generation cost.

Suppose we have the following HCL code:

modules

locals {
  platform_name    = var.platform_name
  platform_product = var.platform_product
}
resource "datadog_monitor" "monitors" {

  for_each = { for idx, mon in var.monitors : idx => mon }

  name    = each.value.name
  type    = each.value.type
  message = each.value.message
  query   = each.value.query

  tags = toset(concat([
    "platform_name:${local.platform_name}",
    "platform_product:${local.platform_product}"
  ], coalesce(each.value.tags, [])))

  monitor_thresholds {
    critical = try(each.value.thresholds.critical, null)
    warning  = try(each.value.thresholds.warning, null)
  }
}

resource

locals {
  product_name = "apigateway"
}
module "apigateway_monitor" {
  source           = "../../modules/monitor/"
  platform_name    = local.platform_name
  platform_product = local.product_name
  monitors = [
    {
      name    = "[${local.env}] API Gateway Many Responses Rate"
      type    = "query alert"
      message = <<-EOT
      {{#is_alert}}
      ${local.slack_channel_mention}
      ${local.env == "prd" ? local.sre_pagerduty_mention : ""}
      The number of API Gateway responses has exceeded the critical threshold: ${local.apigateway_many_responses_rate_critical} req/sec.
      There is a possibility that requests will continue to increase.
      Please check the details, and if the API Gateway response count is rapidly increasing, consider submitting a request to AWS via the Service Quotas console to raise the rate limits as needed.
      {{/is_alert}}
      {{#is_alert_recovery}}
      ${local.slack_channel_mention}
      The number of API Gateway responses has fallen below the critical threshold: ${local.apigateway_many_responses_rate_critical} req/sec.
      {{/is_alert_recovery}}
      {{#is_warning}}
      ${local.slack_channel_mention}
      The number of API Gateway responses has exceeded the warning threshold: ${local.apigateway_many_responses_rate_warning} req/sec.
      There is a possibility that requests will continue to increase.
      Please be aware that an alert may be triggered if this continues.
      {{/is_warning}}
      {{#is_warning_recovery}}
      ${local.slack_channel_mention}
      The number of API Gateway responses has fallen below the warning threshold: ${local.apigateway_many_responses_rate_warning} req/sec.
      {{/is_warning_recovery}}
      EOT
      query   = "sum(last_5m):default_zero(sum:aws.apigateway.count{env:${local.env}}.as_rate()) > ${local.apigateway_many_responses_rate_critical}"
      tags    = ["env:${local.env}", "team:${local.team_name}", "service:${local.project_prefix}"]
      thresholds = {
      critical = local.apigateway_many_responses_rate_critical
      warning  = local.apigateway_many_responses_rate_warning
      }
      description = "Monitors the number of API Gateway responses. If the response count increases rapidly, an alert will be triggered, providing an indicator to understand the service load status. Please consider appropriate actions if critical or warning thresholds are exceeded."
    },
    {
      name    = "[${local.env}] API Gateway 429 Error Count (Rate Limit)"
      type    = "log alert"
      message = <<-EOT
        {{#is_alert}}
        ${local.slack_channel_mention}
        ${local.env == "prd" ? local.sre_pagerduty_mention : ""}
        429 errors have been detected in API Gateway.
        Please check the details, and if the errors are due to hitting quotas, consider submitting a request to AWS via the Service Quotas console to raise the rate limits as needed.
        {{/is_alert}}
      EOT
      query   = "logs(\"env:${local.env} service:${local.product_name} @http.status_code:429\").index(\"*\").rollup(\"count\").last(\"5m\") > ${local.apigateway_429_error_rate_critical}"
      tags    = ["env:${local.env}", "team:${local.team_name}", "service:${local.project_prefix}"]
      thresholds = {
        critical = local.apigateway_429_error_rate_critical
      }
      description = "Monitors API Gateway 429 errors, which indicate that the request rate has reached the rate limit. If the critical threshold is exceeded, an alert will be triggered, providing an indicator to understand service usage and, if necessary, request an increase in AWS rate limits."
    }
  ]
}

locals

locals {
  env            = "dev"
  platform_name  = "aws"
  project_prefix = "aaa"
  team_name      = "xxx"
  role           = "sre"
  slack_channel         = join("_", [format("#%s", local.project_prefix), "alert", local.env])
  sre_pagerduty_mention = join("-", [format("@%s", local.project_prefix), local.role, "service", local.env])
  slack_channel_mention = join("_", [format("@slack-%s", local.project_prefix), "alert", local.env])
  # API Gateway Monitor Parameter
  apigateway_quartorly_limit              = 10000
  apigateway_many_responses_rate_critical = local.apigateway_quartorly_limit * 0.9
  apigateway_many_responses_rate_warning  = local.apigateway_quartorly_limit * 0.8
  apigateway_429_error_rate_critical = 60 # (Error Response Rate 1req/sec)
}

config/config.yaml

bedrock:
  system_prompt: |
    The following Terraform code defines several calls to the Datadog Monitor module.
    Datadog Alert Design List (Title) Example

    ・ Please output an **Alert Design Document (json table format)** for each alert in the following format as described above.
    ・ Interpret the terraform code with reference to locals.tf.
    ・ For the Alert Message column, extract the values from the terraform code as-is while expanding local variables (e.g., the message value).
        sample)
          ```
          message = <<-EOT

          EOT
          ```
          Create the value by expanding local.tf for the contents between the EOT markers.
    ・ Insert the local variable values as-is for the notification destination.
    ・ Insert the local variable values as-is for the Query.
    ・ Insert the local variable values as-is for the tags.
    ・ Insert the local variable values as-is for the alert message.
    ・ For threshold values, if the local variable is a numerical value, insert the value as is. If the local variable is a calculation formula, insert the calculation result and also include the formula itself in the format (calculation formula). Calculation formulas should be defined as local variables.
    ・ Monitor name prefix: [${local.env}] must never be included in the title.
    ・ Create columns for local dev, stg, and prd environments, respectively.
  output_json: |
    {
      "$schema": "http://json-schema.org/draft-07/schema#",
      "type": "array",
      "title": "Datadog Alert Design Schema",
      "description": "Schema for Datadog alert design document",
      "items": {
        "type": "object",
        "properties": {
          "Monitor Name": {"type": "string", "description": "The name of the monitor"},
          "Type": {"type": "string", "description": "The type of the monitor (e.g., query alert)"},
          "Query": {"type": "string", "description": "The query for the Datadog monitor"},
          "Evaluation Period": {"type": "string", "description": "The evaluation period (e.g., 5 minutes)"},
          "Notification Destination": {"type": "string", "description": "The notification destination (e.g., Slack channel)"},
          "Tags": {"type": "string", "description": "Tags associated with the monitor"},
          "Alert Message": {"type": "string", "description": "The message displayed when an alert is triggered"},
          "Remarks": {"type": "string", "description": "Remarks about the monitor"},
          "dev Threshold Condition": {"type": "string", "description": "Threshold condition for the development environment"}
        },
        "required": [
          "Monitor Name", "Type", "Query", "Evaluation Period", "Notification Destination", "Tags",
          "Alert Message", "Remarks", "dev Threshold Condition", "stg Threshold Condition", "prd Threshold Condition"
        ]
      }
    }
  payload:
    anthropic_version: "bedrock-2023-05-31"
    max_tokens: 4096
    temperature: 0
    top_p: 1
    top_k: 0
  read_timeout: 120
  connect_timeout: 30
  retries:
    max_attempts: 5
    mode: standard
schema_columns:
  - Monitor Name
  - Type
  - Query
  - Evaluation Period
  - Notification Destination
  - Tags
  - Alert Message
  - Remarks
  - dev Threshold Condition

output:
  json_path: output/output.json
  markdown_path: output/output.md
  markdown_template: |
    ###### {title}

    {table}
input:
  resource_data:
    files:
      - sample/test_data/monitor1/apigateway.tf
  local_files:
    - dev: sample/test_data/monitor1/dev/locals.tf
  modules:
    path: sample/test_data/modules/monitor/main.tf
    enabled: true
  failback:
    enabled: true
    type: modules
    options:
      target: monitors

Command

❯ poetry run hcl-processor --config_file config/config.yaml
2025-07-26 12:38:22 [INFO] hcl_processor.main: Processing files...
2025-07-26 12:38:22 [INFO] hcl_processor.main: 1 files found to process.
2025-07-26 12:39:15 [INFO] hcl_processor.main: All files processed successfully.
Generated by
apigateway
Monitor Name Type Query Evaluation Period Notification Destination Tags Alert Message Remarks dev Threshold Condition
API Gateway Many Responses Rate query alert sum(last_5m):default_zero(sum:aws.apigateway.count{env:${local.env}}.as_rate()) > ${local.apigateway_many_responses_rate_critical} 5 minutes #aaa_alert_dev env:dev, team:xxx, service:aaa {{#is_alert}}
@slack-aaa_alert_dev

The number of API Gateway responses has exceeded the critical threshold: 9000 req/sec.
There is a possibility that requests will continue to increase.
Please check the details, and if the API Gateway response count is rapidly increasing, consider submitting a request to AWS via the Service Quotas console to raise the rate limits as needed.
{{/is_alert}}
{{#is_alert_recovery}}
@slack-aaa_alert_dev
The number of API Gateway responses has fallen below the critical threshold: 9000 req/sec.
{{/is_alert_recovery}}
{{#is_warning}}
@slack-aaa_alert_dev
The number of API Gateway responses has exceeded the warning threshold: 8000 req/sec.
There is a possibility that requests will continue to increase.
Please be aware that an alert may be triggered if this continues.
{{/is_warning}}
{{#is_warning_recovery}}
@slack-aaa_alert_dev
The number of API Gateway responses has fallen below the warning threshold: 8000 req/sec.
{{/is_warning_recovery}}
Monitors the number of API Gateway responses. If the response count increases rapidly, an alert will be triggered, providing an indicator to understand the service load status. Please consider appropriate actions if critical or warning thresholds are exceeded. Critical: 9000 (10000 * 0.9), Warning: 8000 (10000 * 0.8)
API Gateway 429 Error Count (Rate Limit) log alert logs("env:${local.env} service:${local.product_name} @http.status_code:429").index("*").rollup("count").last("5m") > ${local.apigateway_429_error_rate_critical} 5 minutes #aaa_alert_dev env:dev, team:xxx, service:aaa {{#is_alert}}
@slack-aaa_alert_dev

429 errors have been detected in API Gateway.
Please check the details, and if the errors are due to hitting quotas, consider submitting a request to AWS via the Service Quotas console to raise the rate limits as needed.
{{/is_alert}}
Monitors API Gateway 429 errors, which indicate that the request rate has reached the rate limit. If the critical threshold is exceeded, an alert will be triggered, providing an indicator to understand service usage and, if necessary, request an increase in AWS rate limits. Critical: 60

Create a monitor list from Terraform resources and the calling HCL

Suppose we have the following HCL code:

resource

locals {
  product_name    = "apigateway"
  platform_name   = var.platform_name
  platform_product = var.platform_product
}

resource "datadog_monitor" "apigateway_many_responses_rate" {
  name    = "[${local.env}] API Gateway Many Responses Rate"
  type    = "query alert"
  message = <<-EOT
    {{#is_alert}}
    ${local.slack_channel_mention}
    ${local.env == "prd" ? local.sre_pagerduty_mention : ""}
    The number of API Gateway responses has exceeded the critical threshold: ${local.apigateway_many_responses_rate_critical} req/sec.
    There is a possibility that requests will continue to increase.
    Please check the details, and if the API Gateway response count is rapidly increasing, consider submitting a request to AWS via the Service Quotas console to raise the rate limits as needed.
    {{/is_alert}}
    {{#is_alert_recovery}}
    ${local.slack_channel_mention}
    The number of API Gateway responses has fallen below the critical threshold: ${local.apigateway_many_responses_rate_critical} req/sec.
    {{/is_alert_recovery}}
    {{#is_warning}}
    ${local.slack_channel_mention}
    The number of API Gateway responses has exceeded the warning threshold: ${local.apigateway_many_responses_rate_warning} req/sec.
    There is a possibility that requests will continue to increase.
    Please be aware that an alert may be triggered if this continues.
    {{/is_warning}}
    {{#is_warning_recovery}}
    ${local.slack_channel_mention}
    The number of API Gateway responses has fallen below the warning threshold: ${local.apigateway_many_responses_rate_warning} req/sec.
    {{/is_warning_recovery}}
  EOT

  query = "sum(last_5m):default_zero(sum:aws.apigateway.count{env:${local.env}}.as_rate()) > ${local.apigateway_many_responses_rate_critical}"

  tags = toset([
    "platform_name:${local.platform_name}",
    "platform_product:${local.platform_product}",
    "env:${local.env}",
    "team:${local.team_name}",
    "service:${local.project_prefix}"
  ])

  monitor_thresholds {
    critical = local.apigateway_many_responses_rate_critical
    warning  = local.apigateway_many_responses_rate_warning
  }
}

resource "datadog_monitor" "apigateway_429_error_count" {
  name    = "[${local.env}] API Gateway 429 Error Count (Rate Limit)"
  type    = "log alert"
  message = <<-EOT
    {{#is_alert}}
    ${local.slack_channel_mention}
    ${local.env == "prd" ? local.sre_pagerduty_mention : ""}
    429 errors have been detected in API Gateway.
    Please check the details, and if the errors are due to hitting quotas, consider submitting a request to AWS via the Service Quotas console to raise the rate limits as needed.
    {{/is_alert}}
  EOT

  query = "logs(\"env:${local.env} service:${local.product_name} @http.status_code:429\").index(\"*\").rollup(\"count\").last(\"5m\") > ${local.apigateway_429_error_rate_critical}"

  tags = toset([
    "platform_name:${local.platform_name}",
    "platform_product:${local.platform_product}",
    "env:${local.env}",
    "team:${local.team_name}",
    "service:${local.project_prefix}"
  ])

  monitor_thresholds {
    critical = local.apigateway_429_error_rate_critical
  }
}

locals

locals {
  env            = "dev"
  platform_name  = "aws"
  project_prefix = "aaa"
  team_name      = "xxx"
  role           = "sre"
  slack_channel         = join("_", [format("#%s", local.project_prefix), "alert", local.env])
  sre_pagerduty_mention = join("-", [format("@%s", local.project_prefix), local.role, "service", local.env])
  slack_channel_mention = join("_", [format("@slack-%s", local.project_prefix), "alert", local.env])
  # API Gateway Monitor Parameter
  apigateway_quartorly_limit              = 10000
  apigateway_many_responses_rate_critical = local.apigateway_quartorly_limit * 0.9
  apigateway_many_responses_rate_warning  = local.apigateway_quartorly_limit * 0.8
  apigateway_429_error_rate_critical = 60 # (Error Response Rate 1req/sec)
}

config/config.yaml

bedrock:
  system_prompt: |
    The following Terraform code defines several calls to the Datadog Monitor module.
    Datadog Alert Design List (Title) Example

    ・ Please output an **Alert Design Document (json table format)** for each alert in the following format as described above.
    ・ Interpret the terraform code with reference to locals.tf.
    ・ For the Alert Message column, extract the values from the terraform code as-is while expanding local variables (e.g., the message value).
        sample)
          ```
          message = <<-EOT

          EOT
          ```
          Create the value by expanding local.tf for the contents between the EOT markers.
    ・ Insert the local variable values as-is for the notification destination.
    ・ Insert the local variable values as-is for the Query.
    ・ Insert the local variable values as-is for the tags.
    ・ Insert the local variable values as-is for the alert message.
    ・ For threshold values, if the local variable is a numerical value, insert the value as is. If the local variable is a calculation formula, insert the calculation result and also include the formula itself in the format (calculation formula). Calculation formulas should be defined as local variables.
    ・ Monitor name prefix: [${local.env}] must never be included in the title.
    ・ Create columns for local dev, stg, and prd environments, respectively.
  output_json: |
    {
      "$schema": "http://json-schema.org/draft-07/schema#",
      "type": "array",
      "title": "Datadog Alert Design Schema",
      "description": "Schema for Datadog alert design document",
      "items": {
        "type": "object",
        "properties": {
          "Monitor Name": {"type": "string", "description": "The name of the monitor"},
          "Type": {"type": "string", "description": "The type of the monitor (e.g., query alert)"},
          "Query": {"type": "string", "description": "The query for the Datadog monitor"},
          "Evaluation Period": {"type": "string", "description": "The evaluation period (e.g., 5 minutes)"},
          "Notification Destination": {"type": "string", "description": "The notification destination (e.g., Slack channel)"},
          "Tags": {"type": "string", "description": "Tags associated with the monitor"},
          "Alert Message": {"type": "string", "description": "The message displayed when an alert is triggered"},
          "Remarks": {"type": "string", "description": "Remarks about the monitor"},
          "dev Threshold Condition": {"type": "string", "description": "Threshold condition for the development environment"}
        },
        "required": [
          "Monitor Name", "Type", "Query", "Evaluation Period", "Notification Destination", "Tags",
          "Alert Message", "Remarks", "dev Threshold Condition", "stg Threshold Condition", "prd Threshold Condition"
        ]
      }
    }
  payload:
    anthropic_version: "bedrock-2023-05-31"
    max_tokens: 4096
    temperature: 0
    top_p: 1
    top_k: 0
  read_timeout: 120
  connect_timeout: 30
  retries:
    max_attempts: 5
    mode: standard
schema_columns:
  - Monitor Name
  - Type
  - Query
  - Evaluation Period
  - Notification Destination
  - Tags
  - Alert Message
  - Remarks
  - dev Threshold Condition

output:
  json_path: output/output.json
  markdown_path: output/output.md
  markdown_template: |
    ###### {title}

    {table}
input:
  resource_data:
    files:
      - sample/test_data/monitor1/apigateway.tf
  local_files:
    - dev: sample/test_data/monitor1/dev/locals.tf
  modules:
    enabled: false
  failback:
    enabled: true
    type: resource

Command

> hcl_processor --config_file config/config.yaml
2025-05-07 18:30:42 INFO: Processing files...
2025-05-07 18:30:42 INFO: 1 files found to process.
2025-05-07 18:30:42 INFO: Processing sample/test_data/monitor1/apigateway.tf
2025-05-07 18:30:42 INFO: No AWS profile specified, using environment variables for credentials.
2025-05-07 18:30:42 INFO: Using AWS region: ap-northeast-1
2025-05-07 18:30:57 INFO: Successfully processed file: sample/test_data/monitor1/apigateway.tf
<<mask>>: FutureWarning: DataFrame.applymap has been deprecated. Use DataFrame.map instead.
  df = df.applymap(clean_cell)
2025-05-07 18:30:57 INFO: Saved to Markdown file: output/output.md
2025-05-07 18:30:57 INFO: Deleting JSON file: output/output.json
2025-05-07 18:30:57 INFO: All files processed successfully.
apigateway
Monitor Name Type Query Evaluation Period Notification Destination Tags Alert Message Remarks dev Threshold Condition
API Gateway Many Responses Rate query alert sum(last_5m):default_zero(sum:aws.apigateway.count{env:dev}.as_rate()) > 9000 5 minutes #aaa_alert_dev platform_name:aws, platform_product:${local.platform_product}, env:dev, team:xxx, service:aaa {{#is_alert}}
@slack-aaa_alert_dev

The number of API Gateway responses has exceeded the critical threshold: 9000 req/sec.
There is a possibility that requests will continue to increase.
Please check the details, and if the API Gateway response count is rapidly increasing, consider submitting a request to AWS via the Service Quotas console to raise the rate limits as needed.
{{/is_alert}}
{{#is_alert_recovery}}
@slack-aaa_alert_dev
The number of API Gateway responses has fallen below the critical threshold: 9000 req/sec.
{{/is_alert_recovery}}
{{#is_warning}}
@slack-aaa_alert_dev
The number of API Gateway responses has exceeded the warning threshold: 8000 req/sec.
There is a possibility that requests will continue to increase.
Please be aware that an alert may be triggered if this continues.
{{/is_warning}}
{{#is_warning_recovery}}
@slack-aaa_alert_dev
The number of API Gateway responses has fallen below the warning threshold: 8000 req/sec.
{{/is_warning_recovery}}
Monitors the rate of API Gateway responses Critical: > 9000 (10000 * 0.9), Warning: > 8000 (10000 * 0.8)
API Gateway 429 Error Count (Rate Limit) log alert logs("env:dev service:apigateway @http.status_code:429").index("*").rollup("count").last("5m") > 60 5 minutes #aaa_alert_dev platform_name:aws, platform_product:${local.platform_product}, env:dev, team:xxx, service:aaa {{#is_alert}}
@slack-aaa_alert_dev

429 errors have been detected in API Gateway.
Please check the details, and if the errors are due to hitting quotas, consider submitting a request to AWS via the Service Quotas console to raise the rate limits as needed.
{{/is_alert}}
Monitors the count of 429 errors in API Gateway Critical: > 60

Config Schema

The configuration file (config.yaml) must follow the structure below.

Top-level keys (required)

  • bedrock: AWS Bedrock API configuration.
  • input: Information about the Terraform files or folders to process.
  • output: Paths for saving the JSON and Markdown outputs.
  • schema_columns (optional but recommended): Array of column names for the Markdown table.
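As a rough illustration of the top-level rule, a config check might look like this (the actual tool validates the full structure against a JSON Schema; this stdlib sketch only covers the required top-level keys):

```python
REQUIRED_TOP_LEVEL = ("bedrock", "input", "output")

def missing_top_level(config):
    """Return the required top-level keys that are absent from the config."""
    return [key for key in REQUIRED_TOP_LEVEL if key not in config]

missing = missing_top_level({"bedrock": {}, "input": {}})  # ["output"]
```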

bedrock (object, required)

  • aws_profile (string): AWS profile to use for authentication. If unset, environment credentials are used.
  • aws_region (string): AWS region where Bedrock is deployed. Defaults to us-east-1.
  • system_prompt (string): System-level prompt prepended to the Bedrock request.
  • payload (object): API parameters for the Bedrock model.
    • anthropic_version (string): Anthropic API version.
    • max_tokens (integer): Maximum tokens in the response.
    • temperature (number): Sampling temperature for randomness.
    • top_p (number): Top-p sampling parameter.
    • top_k (number): Top-k sampling parameter.
  • read_timeout (integer): Optional timeout for read operations.
  • connect_timeout (integer): Optional timeout for connecting.
  • retries (object): Retry configuration (e.g., max_attempts, mode).
  • output_json (object): JSON schema describing the expected API response format.

input (object, required)

  • resource_data (object): Which files or folder to process.
    • files (array, conditional): List of Terraform files to process (mutually exclusive with folder).
    • folder (string, conditional): Folder containing Terraform files (mutually exclusive with files).
  • modules (object): Module file settings.
    • path (string): Path to the module file.
    • enabled (boolean): Whether module processing is enabled (default: true).
  • local_files (array): List of local files keyed by environment.
  • failback (object): Settings for fallback (chunk) processing when large inputs fail.
    • enabled (boolean): Whether fallback is enabled.
    • type (string, enum: resource, modules): How the input is split for fallback.
    • options (object, conditional): Additional options (e.g., target); required when type is modules.

schema_columns (array of strings, optional)

List of column names to include in the Markdown table.
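In effect, schema_columns selects and orders which keys from each JSON object end up as table columns, roughly like this hypothetical sketch:

```python
def select_columns(rows, columns):
    """Keep only the configured columns, in order, for each result row."""
    return [{col: row.get(col, "") for col in columns} for row in rows]

rows = [{"Monitor Name": "429 Errors", "Type": "log alert", "extra": "dropped"}]
picked = select_columns(rows, ["Monitor Name", "Type"])
# picked == [{"Monitor Name": "429 Errors", "Type": "log alert"}]
```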


output (object, required)

  • json_path (string): File path where the JSON result is saved.
  • markdown_path (string): File path where the Markdown report is saved.
  • markdown_template (string): Optional custom Markdown template (supports {title} and {table}).

Notes

  • Required fields are enforced; omitting them causes validation errors.
  • You must provide either files or folder under resource_data, but not both.
  • failback is useful if Bedrock requests fail due to input size; it retries per resource or module chunk.
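The failback behavior can be pictured as: attempt the whole input first, and on failure retry chunk by chunk. This is a simplified sketch with a stand-in model call (`process_with_failback` and `flaky` are hypothetical names; the real chunking splits by resource or module):

```python
def process_with_failback(chunks, process):
    """Try all chunks as a single request; on failure, retry each chunk alone."""
    try:
        return [process("\n".join(chunks))]
    except Exception:
        # Fallback path: one request per chunk (per resource or module).
        return [process(chunk) for chunk in chunks]

def flaky(text):
    """Stand-in for the model call: rejects inputs that are 'too large'."""
    if len(text) > 20:
        raise ValueError("input too large")
    return f"ok:{len(text)}"

# The combined input (31 chars) fails, so each chunk is retried separately.
results = process_with_failback(["resource_a" * 2, "resource_b"], flaky)  # ["ok:20", "ok:10"]
```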

🚨 Error Code List

  • 0 (exit_success): Normal exit; all processing completed successfully.
  • 1 (exit_system_config_error): system_config.yaml failed to load (file missing or parse failure).
  • 2 (exit_config_error): Main configuration (config.yaml) failed to load or failed validation.
  • 3 (exit_file_read_error): Reading an input file failed.
  • 4 (exit_validation_error): Output JSON failed schema validation or other data validation checks.
  • 5 (exit_bedrock_error): AWS Bedrock API error (connection failure, timeout, client error, etc.).
  • 99 (exit_unknown_error): Any other undefined or unexpected exception.
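A wrapper script could map these return codes back to their meanings, for example (the mapping below simply mirrors the table; the constant names are taken from it):

```python
EXIT_CODES = {
    0: "exit_success",
    1: "exit_system_config_error",
    2: "exit_config_error",
    3: "exit_file_read_error",
    4: "exit_validation_error",
    5: "exit_bedrock_error",
    99: "exit_unknown_error",
}

def describe_exit(code):
    """Map a process return code to its constant name from the table above."""
    return EXIT_CODES.get(code, "exit_unknown_error")
```

For example, `describe_exit(2)` returns `exit_config_error`.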

Third-Party Licenses

This project uses third-party libraries under the following licenses:

link

For Developer

This section is intended for developers who want to understand, extend, or modify this project.

Development Notes

  • Python version: >=3.10 (confirm compatible version).
  • Logging levels: You can adjust logging to INFO or DEBUG as needed.
  • External services: Requires AWS Bedrock credentials; make sure your AWS profile is configured.

Testing

Please test your changes before submitting.

Contribution

Fork this repository and send a Pull Request (PR) if you want to contribute. Thanks!

