
terraform-ingest

A Terraform multi-repo module AI RAG ingestion engine. It accepts a YAML file of Terraform Git repository sources, clones them locally using your existing credentials, and generates JSON summaries of each module's purpose, inputs, outputs, and providers for the branches or tagged releases you specify, ready for ingestion into a vector database via a RAG pipeline. Includes an easy-to-use CLI, REST API, and MCP server.

Features

  • 📥 Multi-Repository Ingestion: Process multiple Terraform repositories from a single YAML configuration
  • 🔍 Comprehensive Analysis: Extracts variables, outputs, providers, modules, and descriptions
  • 🏷️ Branch & Tag Support: Analyzes both branches and git tags
  • 🔌 Dual Interface: Use as a CLI tool (Click) or as a REST API service (FastAPI)
  • 🤖 MCP Integration: FastMCP service for AI agent access to ingested modules
  • 📊 JSON Output: Generates structured JSON summaries ready for RAG ingestion
  • 🔐 Credential Support: Uses existing git credentials for private repositories
  • 🧠 Vector Database Embeddings: Semantic search with ChromaDB, OpenAI, Claude, or sentence-transformers

Installation

uv sync
source .venv/bin/activate

Optional: Install with Vector Database Support

For semantic search with ChromaDB embeddings:

# Using pip
pip install chromadb sentence-transformers

# Or for all embedding options (OpenAI, Claude/Voyage)
pip install chromadb sentence-transformers openai voyageai

See Vector Database Embeddings for detailed setup instructions.

Usage

CLI Interface

Initialize a Configuration File

terraform-ingest init config.yaml

Ingest Repositories from Configuration

terraform-ingest ingest config.yaml

With custom output and clone directories:

terraform-ingest ingest config.yaml -o ./my-output -c ./my-repos

With cleanup after ingestion:

terraform-ingest ingest config.yaml --cleanup

Analyze a Single Repository

terraform-ingest analyze https://github.com/terraform-aws-modules/terraform-aws-vpc

With branch specification and tags:

terraform-ingest analyze https://github.com/user/terraform-module -b develop --include-tags --max-tags 5

Save output to file:

terraform-ingest analyze https://github.com/user/terraform-module -o output.json

Search with Vector Database

Search for modules using semantic search (requires embeddings to be enabled):

# Basic semantic search
terraform-ingest search "vpc module for aws"

# Filter by provider
terraform-ingest search "kubernetes cluster" --provider aws

# Filter by repository and limit results
terraform-ingest search "security group" --repository https://github.com/terraform-aws-modules/terraform-aws-vpc --limit 5

See Vector Database Embeddings for configuration and advanced usage.

MCP Service for AI Agents

The FastMCP service exposes ingested Terraform modules to AI agents through the Model Context Protocol (MCP). This allows AI assistants to query and discover Terraform modules from your ingested repositories.

Start the MCP Server

terraform-ingest-mcp

The server starts and exposes the following tools:

  1. list_repositories: Lists all accessible Git repositories containing Terraform modules
  2. search_modules: Searches for Terraform modules by name, provider, or keywords
  3. search_modules_vector: Performs semantic search over ingested modules using vector embeddings

MCP Tools

list_repositories

# Lists all repositories with metadata
list_repositories(
    filter="aws",           # Optional: filter by keyword
    limit=50,               # Optional: max results (default: 50)
    output_dir="./output"   # Optional: path to JSON summaries
)

Returns repository information including:

  • URL and name
  • Description
  • Branches/tags analyzed
  • Module count
  • Providers used

search_modules

# Search for modules
search_modules(
    query="vpc",                    # Required: search term
    repo_urls=["https://..."],      # Optional: specific repos
    provider="aws",                 # Optional: filter by provider
    output_dir="./output"           # Optional: path to JSON summaries
)

Returns detailed module information including:

  • Repository and ref (branch/tag)
  • Variables and outputs
  • Providers and sub-modules
  • README content

search_modules_vector (New)

# Semantic search with vector embeddings
search_modules_vector(
    query="module for creating VPCs in AWS",  # Natural language query
    provider="aws",                           # Optional: filter by provider
    repository="https://...",                 # Optional: filter by repo
    limit=10,                                 # Optional: max results
    config_file="config.yaml"                 # Config with embedding settings
)

Returns semantically similar modules with relevance scores. Requires vector database to be enabled in configuration.

Example MCP Usage

Once the MCP server is running, AI agents can use it to:

  1. Discover Available Modules:

    • "What Terraform modules are available for AWS?"
    • "Show me all modules that use the azurerm provider"
  2. Search for Specific Functionality:

    • "Find modules that create VPCs"
    • "Search for modules with security group configurations"
  3. Analyze Module Details:

    • "What are the inputs for the AWS VPC module?"
    • "Show me the outputs from the network module"

Configuring Output Directory

The MCP service reads from the directory where ingested JSON summaries are stored. By default, this is ./output. You can specify a different directory:

# Set via environment variable
export TERRAFORM_INGEST_OUTPUT_DIR=/path/to/output

# Or pass directly to tools in MCP calls
list_repositories(output_dir="/custom/path")

API Service

Start the API Server

uvicorn terraform_ingest.api:app --host 0.0.0.0 --port 8000

Or run directly:

python -m terraform_ingest.api

API Endpoints

  • GET / - API information and available endpoints
  • GET /health - Health check
  • POST /ingest - Ingest multiple repositories
  • POST /analyze - Analyze a single repository
  • POST /ingest-from-yaml - Ingest from YAML configuration string
  • POST /search/vector - Search modules using vector embeddings (New)

Example API Requests

Analyze a single repository:

curl -X POST http://localhost:8000/analyze \
  -H "Content-Type: application/json" \
  -d '{
    "repository_url": "https://github.com/terraform-aws-modules/terraform-aws-vpc",
    "branches": ["main"],
    "include_tags": true,
    "max_tags": 3
  }'

Ingest multiple repositories:

curl -X POST http://localhost:8000/ingest \
  -H "Content-Type: application/json" \
  -d '{
    "repositories": [
      {
        "url": "https://github.com/terraform-aws-modules/terraform-aws-vpc",
        "branches": ["main"],
        "include_tags": true,
        "max_tags": 5
      }
    ]
  }'

Search with vector embeddings (New):

curl -X POST http://localhost:8000/search/vector \
  -H "Content-Type: application/json" \
  -d '{
    "query": "vpc module with public and private subnets",
    "provider": "aws",
    "limit": 5,
    "config_file": "config.yaml"
  }'

Configuration File Format

The YAML configuration file defines repositories to process:

repositories:
  - url: https://github.com/terraform-aws-modules/terraform-aws-vpc
    name: terraform-aws-vpc
    branches:
      - main
      - master
    include_tags: true
    max_tags: 5
    path: .

  - url: https://github.com/user/another-module
    name: another-module
    branches:
      - main
    include_tags: false
    path: modules/submodule

output_dir: ./output
clone_dir: ./repos

# Optional: Vector database configuration for semantic search
embedding:
  enabled: true
  strategy: chromadb-default  # or: openai, claude, sentence-transformers
  chromadb_path: ./chromadb
  collection_name: terraform_modules

Configuration Options

Repository Options:

  • url (required): Git repository URL
  • name (optional): Custom name for the repository
  • branches (optional): List of branches to analyze (default: ["main"])
  • include_tags (optional): Whether to include git tags (default: true)
  • max_tags (optional): Maximum number of tags to process (default: 10)
  • path (optional): Path within the repository to the Terraform module (default: ".")

Global Options:

  • output_dir (optional): Directory for JSON output files (default: "./output")
  • clone_dir (optional): Directory for cloning repositories (default: "./repos")

Embedding Options (New):

  • embedding.enabled (optional): Enable vector database embeddings (default: false)
  • embedding.strategy (optional): Embedding strategy - "chromadb-default", "openai", "claude", or "sentence-transformers" (default: "chromadb-default")
  • embedding.chromadb_path (optional): Path to ChromaDB storage (default: "./chromadb")
  • embedding.collection_name (optional): ChromaDB collection name (default: "terraform_modules")

See Vector Database Embeddings for complete embedding configuration options.

Output Format

Each processed module version generates a JSON file with the following structure:

{
  "repository": "https://github.com/user/terraform-module",
  "ref": "main",
  "path": ".",
  "description": "Module description from README or comments",
  "variables": [
    {
      "name": "vpc_cidr",
      "type": "string",
      "description": "CIDR block for VPC",
      "default": "10.0.0.0/16",
      "required": false
    }
  ],
  "outputs": [
    {
      "name": "vpc_id",
      "description": "ID of the VPC",
      "value": null,
      "sensitive": false
    }
  ],
  "providers": [
    {
      "name": "aws",
      "source": "hashicorp/aws",
      "version": ">= 4.0"
    }
  ],
  "modules": [
    {
      "name": "subnets",
      "source": "./modules/subnets",
      "version": null
    }
  ],
  "readme_content": "# Terraform Module\n..."
}
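
Because the summary is plain JSON, it can be post-processed directly. For example, a sketch that separates required from optional inputs and flattens a trimmed summary (matching the structure above) into text suitable for embedding:

```python
import json

# A trimmed summary matching the structure shown above.
summary = json.loads("""
{
  "repository": "https://github.com/user/terraform-module",
  "ref": "main",
  "variables": [
    {"name": "vpc_cidr", "type": "string", "default": "10.0.0.0/16", "required": false},
    {"name": "name", "type": "string", "default": null, "required": true}
  ],
  "outputs": [{"name": "vpc_id", "description": "ID of the VPC"}],
  "providers": [{"name": "aws", "source": "hashicorp/aws", "version": ">= 4.0"}]
}
""")

required = [v["name"] for v in summary["variables"] if v["required"]]
optional = [v["name"] for v in summary["variables"] if not v["required"]]

# One way to flatten the summary into a text document for embedding.
doc = (
    f"Module at {summary['repository']}@{summary['ref']}. "
    f"Required inputs: {', '.join(required) or 'none'}. "
    f"Optional inputs: {', '.join(optional) or 'none'}. "
    f"Providers: {', '.join(p['name'] for p in summary['providers'])}."
)
```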

Use Cases

RAG Pipeline Integration

The JSON output is structured for easy ingestion into vector databases:

  1. Semantic Search: Find relevant Terraform modules based on natural language queries
  2. Module Discovery: Discover modules with specific inputs, outputs, or providers
  3. Version Analysis: Compare module versions across branches and tags
  4. Documentation Enhancement: Augment module documentation with AI-generated insights

Example RAG Workflow

from terraform_ingest import TerraformIngest

# Ingest modules
ingester = TerraformIngest.from_yaml('config.yaml')
summaries = ingester.ingest()

# Process for vector database (automatic if embeddings enabled in config)
# Or manually process JSON summaries for your own vector database
for summary in summaries:
    # Create embeddings from description, variables, outputs
    text = f"{summary.description} Inputs: {summary.variables} Outputs: {summary.outputs}"
    
    # Store in vector database with metadata
    metadata = {
        "repository": summary.repository,
        "ref": summary.ref,
        "providers": [p.name for p in summary.providers]
    }
    # vector_db.store(text, metadata)

# With built-in embeddings enabled, search semantically
if ingester.vector_db:
    results = ingester.search_vector_db(
        "vpc module with private subnets",
        filters={"provider": "aws"},
        n_results=5
    )
    for result in results:
        print(f"Found: {result['metadata']['repository']}")

Vector Database Embeddings

Terraform-ingest supports semantic search through vector database embeddings with ChromaDB. This enables natural language queries and AI-powered module discovery.

Quick Start

  1. Install ChromaDB:

    pip install chromadb
    
  2. Enable in config:

    embedding:
      enabled: true
      strategy: chromadb-default
      chromadb_path: ./chromadb
      collection_name: terraform_modules
    
  3. Ingest and search:

    terraform-ingest ingest config.yaml
    terraform-ingest search "vpc module for aws"
    

Features

  • 🎯 Semantic Search: Natural language queries like "vpc with private subnets"
  • 🔌 Multiple Strategies: ChromaDB default, OpenAI, Claude, or sentence-transformers
  • 🏷️ Metadata Filtering: Filter by provider, repository, tags
  • 🔄 Incremental Updates: Automatically update embeddings on re-ingestion
  • 🎛️ Hybrid Search: Combine vector similarity with keyword matching
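
The hybrid-search idea can be sketched as a weighted blend of a vector-similarity score with simple keyword overlap. Both the function and the weighting below are illustrative assumptions, not the package's actual ranking:

```python
def hybrid_score(query, doc, vector_sim, alpha=0.7):
    """Blend a vector-similarity score with naive keyword overlap.

    alpha weights the vector score; (1 - alpha) weights the keyword
    score. Illustrative sketch only.
    """
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    keyword = len(q_terms & d_terms) / len(q_terms) if q_terms else 0.0
    return alpha * vector_sim + (1 - alpha) * keyword
```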

Embedding Strategies

  • chromadb-default: Built-in ChromaDB embeddings (pip install chromadb)
  • sentence-transformers: Local models, no API required (pip install sentence-transformers)
  • openai: Best quality, requires an API key (pip install openai)
  • claude: Voyage AI embeddings (pip install voyageai)

Example Queries

# Natural language
terraform-ingest search "module for creating kubernetes clusters with autoscaling"

# With filters
terraform-ingest search "database with replication" --provider aws --limit 3

# Via API
curl -X POST http://localhost:8000/search/vector \
  -H "Content-Type: application/json" \
  -d '{"query": "vpc with vpn support", "provider": "aws"}'

Development

Running Tests

pytest tests/

Code Quality

black src/
flake8 src/
mypy src/

License

MIT License

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
