Linting, security scanning, and reporting on infrastructure code and Kubernetes config

Project description

AI_Sec

AI_Sec is a powerful command-line tool for linting, security scanning, and reporting on infrastructure-as-code (IaC) such as Terraform and CloudFormation. It supports a variety of linters and security checkers, making it an essential tool for maintaining high-quality infrastructure code, with a focus on best practices and security.

Motivation

Managing infrastructure code in a secure and scalable way is essential, especially with the rise of cloud-native technologies. AI_Sec was developed to automate the process of ensuring that your infrastructure code adheres to best practices by utilizing various linters and security scanners, generating detailed reports to highlight issues.

By default, AI_Sec uses Checkov to verify that your infrastructure is secure and follows established guidelines, and it also supports other popular linters such as TFLint and TFSec. The tool is designed to work with IaC frameworks like Terraform and CloudFormation, giving you comprehensive coverage.

Python Versions

This project supports Python versions specified in the pyproject.toml file:

[tool.poetry.dependencies]
python = ">=3.10,<4.0"

Features

  • Lint Terraform and CloudFormation Code: Support for Checkov by default, with optional support for TFLint (v0.53.0) and TFSec (v1.28.0).
  • Security Scanning: Detect vulnerabilities in your infrastructure code using popular security tools.
  • Customizable Reports: Generate detailed reports in JSON or HTML format.
  • Dashboard for Issue Navigation: Navigate and explore identified issues through an interactive dashboard. The dashboard categorizes and presents issues by severity, linter type, and more, providing an easy way to investigate and resolve problems.
  • Configurable Color Scheme: Customize the color scheme for different severity levels (CRITICAL, HIGH, MEDIUM, LOW, INFO).
  • AI-Generated Insights: Automatically infer severity and context for high-severity issues using OpenAI.
  • Caching for AI Responses: To reduce repeated calls to OpenAI, AI_Sec caches AI-generated insights for faster subsequent runs.
  • Modular Linter Support: Easily enable or disable linters through the configuration file.

Installation

Ensure you are using Python 3.10 or above.

Option 1: Using a Virtual Environment

  1. Ensure Python Version

    • Verify you have Python 3.10 or later:
      python --version
      
  2. Create and Activate Virtual Environment

    • Create:

      python -m venv myenv
      
    • Activate:

      • Windows:
        myenv\Scripts\activate
        
      • macOS/Linux:
        source myenv/bin/activate
        
  3. Install AI_Sec

    pip install ai_sec
    

Option 2: Installing Directly to System Python

  1. Ensure Python Version

    • Verify you have Python 3.10 or later:
      python --version
      
  2. Install AI_Sec

    python -m pip install ai_sec
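
After installing with either option, you can check that the CLI is available on your PATH (this assumes the installed executable is named ai_sec and supports a standard --help flag, which is an assumption rather than documented behavior):

  ai_sec --help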
    

Setting Up

To configure AI_Sec, follow these steps:

  1. You can export the default config by running ai_sec export-config.

  2. The default configuration file will be exported to ~/.ai_sec/config.yaml.

  3. By default, Checkov is the main linter used, but you can enable TFLint and TFSec as needed if you have them installed.

  4. Edit the config.yaml file to enable/disable linters and set the report output format.

Sample Configuration

Before running AI_Sec, you need to set up the default configuration file. You can automatically export the default configuration to ~/.ai_sec/config.yaml by running the following command:

ai_sec export-config

Here’s the default config.yaml:
linters:
  tflint:
    enabled: false
  tfsec:
    enabled: false
  checkov:
    enabled: true
    framework: terraform # Default framework; can also be CloudFormation
output:
  format: json
  save_to: ./reports/report.json
color_scheme:
  CRITICAL: "#FF6F61"
  HIGH: "#FFA07A"
  MEDIUM: "#FFD700"
  LOW: "#90EE90"
  INFO: "#B0C4DE"
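
For example, if you have TFLint and TFSec installed and want an HTML report instead of JSON, you might adjust the relevant keys as follows (the HTML save_to path below is only an illustration, not a documented default):

linters:
  tflint:
    enabled: true
  tfsec:
    enabled: true
  checkov:
    enabled: true
    framework: terraform
output:
  format: html
  save_to: ./reports/report.html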

OpenAI Insights

AI_Sec integrates with OpenAI to provide enhanced insights on infrastructure issues. This includes determining the severity of issues and providing additional context and resolution suggestions for critical and high-severity issues. These insights can be particularly useful in understanding the nature of the problems and how to resolve them.

How to Enable OpenAI Insights

To enable OpenAI insights, you will need an API key from OpenAI:

  1. Set the OpenAI API Key: You must set the OPENAI_API_KEY environment variable to your OpenAI API key. You can export it in your terminal before running the tool (see the combined example after this list):

       export OPENAI_API_KEY="your-openai-api-key"
  2. Enable OpenAI Insights in the Configuration: Ensure that the OpenAI integration is enabled in the configuration file. By default, if the API key is set, the insights will automatically be enabled when issues are found.
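
Putting both steps together, a typical session might look like this (./terraform is just an example path to your IaC code):

  export OPENAI_API_KEY="your-openai-api-key"
  ai_sec run ./terraform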

How OpenAI Insights Work

When a linter detects an issue, AI_Sec sends a request to OpenAI to analyze the issue and provide:

  • Severity: The issue’s severity level (CRITICAL, HIGH, MEDIUM, or LOW).
  • Context and Resolution: For critical and high-severity issues, additional context and resolution suggestions are provided.

These insights are added to the linting report and can be viewed in the AI_Sec Dashboard.
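
The exact report schema is not shown here, but conceptually an enriched finding might look something like the JSON sketch below (all field names are illustrative assumptions, not the tool's actual output format):

{
  "linter": "checkov",
  "rule_id": "CKV_AWS_20",
  "description": "S3 bucket allows public READ access",
  "severity": "HIGH",
  "ai_context": "Publicly readable buckets can expose sensitive data to anyone on the internet.",
  "ai_resolution": "Set the bucket ACL to private and restrict access with a bucket policy."
}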

Caching of OpenAI Responses

To avoid repeated API calls and improve performance, OpenAI responses are cached locally. The cache is created in the user’s home directory under ~/.ai_sec/openai_cache.json. This means if the same issue is analyzed multiple times, the tool will retrieve the result from the cache instead of querying OpenAI again.

Note: The cache key is generated based on the issue description and the framework used, so identical issues will have the same result retrieved from the cache.
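
A minimal sketch of how such a cache lookup could work is shown below. This is not the actual implementation; the hashing scheme and code structure are assumptions based on the description above (only the cache location, ~/.ai_sec/openai_cache.json, and the key inputs come from the docs):

import hashlib
import json
from pathlib import Path

# Cache location documented above; everything else here is illustrative.
CACHE_PATH = Path.home() / ".ai_sec" / "openai_cache.json"

def cache_key(description: str, framework: str) -> str:
    # Identical issue descriptions for the same framework map to the same key.
    return hashlib.sha256(f"{framework}:{description}".encode()).hexdigest()

def get_cached_insight(description: str, framework: str):
    # Return a previously stored AI insight, or None if OpenAI must be queried.
    if not CACHE_PATH.exists():
        return None
    cache = json.loads(CACHE_PATH.read_text())
    return cache.get(cache_key(description, framework))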

Important Considerations

  • API Limits: Depending on your OpenAI subscription, you may have limits on the number of requests. Using the cache can help minimize the number of API calls.
  • Performance: Querying OpenAI can add some additional time to the analysis, especially for large codebases or complex issues. The caching system helps mitigate this for repeated runs.
  • Error Handling: If an error occurs while querying OpenAI (e.g., invalid API key, connection issues), the tool will log the error and continue running without OpenAI insights.

Commands

Here are some useful commands to interact with AI_Sec:

  • ai_sec run <path>: Run the linters on the specified path and generate a report.
  • ai_sec export-config: Export the default configuration to ~/.ai_sec/config.yaml.
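
For example (the project path is only an illustration):

  # Export the default configuration (written to ~/.ai_sec/config.yaml)
  ai_sec export-config

  # Lint a Terraform project; the report is written to the path set in output.save_to
  ai_sec run ./infrastructure/terraform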

Changelog

For detailed information about changes in each version, see the Changelog.

Contact

If you encounter any issues or have any suggestions, please feel free to send them to dev@darrenrabbitt.com. Thank you for your support!

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

ai_sec-0.0.1.tar.gz (93.5 kB)

Uploaded Source

Built Distribution

ai_sec-0.0.1-py3-none-any.whl (104.4 kB)

Uploaded Python 3

File details

Details for the file ai_sec-0.0.1.tar.gz.

File metadata

  • Download URL: ai_sec-0.0.1.tar.gz
  • Upload date:
  • Size: 93.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.4 CPython/3.10.14 Linux/6.5.0-1025-azure

File hashes

Hashes for ai_sec-0.0.1.tar.gz:

  • SHA256: cdcc3507b3019370efb6108be40ebe556372008a7f43c754a8dc4b9de7f34cad
  • MD5: 80146447400b93f86158e6e58f68225f
  • BLAKE2b-256: 857582744ff07260f87c50fd856b6b58def004856bd56b1da9645d4a8a175422

See more details on using hashes here.

File details

Details for the file ai_sec-0.0.1-py3-none-any.whl.

File metadata

  • Download URL: ai_sec-0.0.1-py3-none-any.whl
  • Upload date:
  • Size: 104.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.4 CPython/3.10.14 Linux/6.5.0-1025-azure

File hashes

Hashes for ai_sec-0.0.1-py3-none-any.whl:

  • SHA256: 6b5c0dd3cdeedf994e25cedcffe2aead927b8c60346068eabd766b9bab40eb53
  • MD5: 3fbd3ef1317b701f4b21b1b5941e9a14
  • BLAKE2b-256: aa66d6890d73d209a2bf625aa7f02587640dcd548bb001ffa21401adc5481f0b

See more details on using hashes here.
