
A tool to generate infrastructure files (Dockerfile, etc.) for codebases using local LLMs.


local-AI-infra-generation

local-AI-infra-generation leverages local Large Language Models (LLMs) to analyze code repositories and automatically generate infrastructure files such as Dockerfiles and docker-compose.yml. All processing is performed locally, ensuring privacy and control over your codebase.


Features

  • Codebase Embedding: Index and embed your codebase for semantic search and retrieval.
  • Natural Language Q&A: Ask questions about your codebase and receive context-aware answers.
  • Automated Infrastructure Generation: Generate Dockerfiles and docker-compose.yml files tailored to your project.
  • Multi-language Support: Works with Python, JavaScript, TypeScript, and Go projects.
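
Conceptually, embedding-based retrieval works like the minimal sketch below: code chunks are mapped to vectors, and a query retrieves the chunk with the most similar vector. This is a toy illustration with hand-made vectors, not the tool's internals (the real tool uses an embedding model and ChromaDB):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy "index": code chunk -> pretend embedding vector
index = {
    "def login(user): ...": [0.9, 0.1, 0.0],
    "def render_page(): ...": [0.1, 0.8, 0.3],
}

def retrieve(query_vec: list[float]) -> str:
    """Return the chunk whose embedding is most similar to the query vector."""
    return max(index, key=lambda chunk: cosine_similarity(index[chunk], query_vec))

print(retrieve([1.0, 0.0, 0.0]))  # closest to the login chunk
```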

Getting Started

1. Prerequisites

  • Python 3.11+
    Ensure you have Python 3.11 or higher installed.
    Check with:

    python --version
    
  • Ollama
    Download and install Ollama for local LLM inference.

  • C/C++ Build Tools
    Required for building tree-sitter-languages.

2. Installation

Create and activate a virtual environment:

python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

Install the package from this repository:

pip install .

This will install the infra-gen command-line tool and all necessary dependencies.


Usage

1. Start Ollama

Make sure the Ollama server is running in the background:

ollama serve &

2. Run the CLI

Once installed, you can use the infra-gen command.

infra-gen --help

Common Commands

  • Embed a Project:

    infra-gen embed /path/to/your/project
    
  • Ask a Question:

    infra-gen ask "How does authentication work?" --project your_project_name
    
  • List Embedded Projects:

    infra-gen list
    
  • Generate Full Infrastructure (Dockerfile, Compose, etc.):

    infra-gen generate-infra /path/to/your/project --output ./infra
    
  • Generate Only a Dockerfile:

    infra-gen generate-docker --project your_project_name
    
  • Generate Only a docker-compose.yml:

    infra-gen generate-compose --project your_project_name
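
As an illustration, generate-docker might emit something along these lines for a simple Python service. This example is an assumption for orientation only, not actual tool output; the real result depends on your codebase and configured models:

```dockerfile
# Hypothetical output for a simple Python web service
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "main.py"]
```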
    

Configuration

The tool uses a config.yaml file for settings. The configuration is loaded in the following order of priority:

  1. Via --config flag: Provide a direct path to a .yaml file.
    infra-gen --config /path/to/my-config.yaml embed /path/to/project
    
  2. User-level config: Place a file at ~/.config/infra-generator/config.yaml.
  3. Default package config: If no other config is found, a default version bundled with the package is used.

You can customize model names, ChromaDB storage directories, Ollama URLs, and more in your custom config file.
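
A custom config file might look something like this. The key names below are illustrative assumptions, not the tool's actual schema; consult the bundled default config for the real keys:

```yaml
# Hypothetical config.yaml -- key names are illustrative; check the
# bundled default config for the actual schema.
ollama:
  base_url: http://localhost:11434
  model: llama3
embedding:
  model: nomic-embed-text
chromadb:
  persist_directory: ~/.local/share/infra-generator/chroma
```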


Development

If you want to contribute to this tool, install it in editable mode.

  1. Clone the repository:
    git clone https://github.com/yourusername/local-AI-infra-generation.git
    cd local-AI-infra-generation
    
  2. Create and activate a virtual environment:
    python -m venv .venv
    source .venv/bin/activate
    
  3. Install in editable mode:
    pip install -e .
    

This allows you to make changes to the source code and have them reflected immediately when you run the infra-gen command.

Running Tests

TODO: Add unit tests and instructions for running them.


Troubleshooting

  • Ollama not found:
    Ensure Ollama is installed and on your PATH, and that the server is running (ollama serve).

  • tree-sitter language .so files missing:
    If you encounter errors about missing .so files, ensure tree-sitter-languages is installed and built correctly. Building it requires the C/C++ build tools listed in the prerequisites.

  • Model download issues:
    The first run downloads the required models through Ollama. Ensure you have a stable internet connection for the initial download.


TODO

  • Add comprehensive unit and integration tests.
  • Improve error handling and user feedback.
  • Add support for more programming languages (e.g., Java, Rust).
  • Enhance prompt templates for better infrastructure generation.
  • Add web or GUI interface.
  • Document API for programmatic usage.
  • Support for private model registries and custom LLMs.
  • Optimize embedding and retrieval for large codebases.
  • Add CI/CD pipeline for automated testing and deployment.

License

This project is licensed under the Apache 2.0 License.
